In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed. When determining the numerical relationship between two variables of interest, using their correlation coefficient will give misleading results if there is another confounding variable that is numerically related to both variables of interest. This misleading information can be avoided by controlling for the confounding variable, which is done by computing the partial correlation coefficient. This is precisely the motivation for including other right-side variables in a multiple regression; but while multiple regression gives unbiased results for the effect size, it does not give a numerical value of a measure of the strength of the relationship between the two variables of interest.
For example, given economic data on the consumption, income, and wealth of various individuals, consider the relationship between consumption and income. Failing to control for wealth when computing a correlation coefficient between consumption and income would give a misleading result, since income might be numerically related to wealth which in turn might be numerically related to consumption; a measured correlation between consumption and income might actually be contaminated by these other correlations. The use of a partial correlation avoids this problem.
Like the correlation coefficient, the partial correlation coefficient takes on a value in the range from –1 to 1. The value –1 conveys a perfect negative correlation controlling for some variables (that is, an exact linear relationship in which higher values of one variable are associated with lower values of the other); the value 1 conveys a perfect positive linear relationship, and the value 0 conveys that there is no linear relationship.
The partial correlation coincides with the conditional correlation if the random variables are jointly distributed as the multivariate normal, other elliptical, multivariate hypergeometric, multivariate negative hypergeometric, multinomial, or Dirichlet distribution, but not in general otherwise.[1]
Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, ..., Zn}, written ρXY·Z, is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively. The first-order partial correlation (i.e., when n = 1) is the difference between a correlation and the product of the removable correlations divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation, are available in Guilford (1973, pp. 344–345).[2]
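In the single-controlling-variable case this verbal description corresponds to the familiar first-order formula, the coefficients of alienation being the square-root factors in the denominator:

$$\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{ZY}}{\sqrt{1 - \rho_{XZ}^{2}}\,\sqrt{1 - \rho_{ZY}^{2}}}.$$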
A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems and calculate the correlation between the residuals. Let X and Y be random variables taking real values, and let Z be the n-dimensional vector-valued random variable. Let $x_i$, $y_i$ and $\mathbf{z}_i$ denote the $i$th of $N$ i.i.d. observations from some joint probability distribution over real random variables X, Y, and Z, with $\mathbf{z}_i$ having been augmented with a 1 to allow for a constant term in the regression. Solving the linear regression problem amounts to finding (n+1)-dimensional regression coefficient vectors $\mathbf{w}_X^*$ and $\mathbf{w}_Y^*$ such that

$$\mathbf{w}_X^* = \arg\min_{\mathbf{w}} \sum_{i=1}^N \left(x_i - \langle \mathbf{w}, \mathbf{z}_i \rangle\right)^2, \qquad \mathbf{w}_Y^* = \arg\min_{\mathbf{w}} \sum_{i=1}^N \left(y_i - \langle \mathbf{w}, \mathbf{z}_i \rangle\right)^2,$$

where $N$ is the number of observations, and $\langle \mathbf{w}, \mathbf{z}_i \rangle$ is the scalar product between the vectors $\mathbf{w}$ and $\mathbf{z}_i$.

The residuals are then

$$e_{X,i} = x_i - \langle \mathbf{w}_X^*, \mathbf{z}_i \rangle, \qquad e_{Y,i} = y_i - \langle \mathbf{w}_Y^*, \mathbf{z}_i \rangle,$$

and the sample partial correlation is then given by the usual formula for sample correlation, but between these new derived values:

$$\hat{\rho}_{XY\cdot\mathbf{Z}} = \frac{N \sum_{i=1}^N e_{X,i}\, e_{Y,i} - \sum_{i=1}^N e_{X,i} \sum_{i=1}^N e_{Y,i}}{\sqrt{N \sum_{i=1}^N e_{X,i}^2 - \left(\sum_{i=1}^N e_{X,i}\right)^{2}}\; \sqrt{N \sum_{i=1}^N e_{Y,i}^2 - \left(\sum_{i=1}^N e_{Y,i}\right)^{2}}} = \frac{\sum_{i=1}^N e_{X,i}\, e_{Y,i}}{\sqrt{\sum_{i=1}^N e_{X,i}^2\; \sum_{i=1}^N e_{Y,i}^2}}.$$

In the first expression the three terms after minus signs all equal 0 since each contains the sum of residuals from an ordinary least squares regression.
Consider the following data on three variables, X, Y, and Z:
| X | Y | Z |
|---|---|---|
| 2 | 1 | 0 |
| 4 | 2 | 0 |
| 15 | 3 | 1 |
| 20 | 4 | 1 |
Computing the Pearson correlation coefficient between variables X and Y results in approximately 0.970, while computing the partial correlation between X and Y, using the formula given above, gives a partial correlation of 0.919. The computations were done using R with the following code.
```r
> x <- c(2, 4, 15, 20)
> y <- c(1, 2, 3, 4)
> z <- c(0, 0, 1, 1)
# regress x onto z and compute residuals
> res_x <- lm(x ~ z)$residuals
# regress y onto z and compute residuals
> res_y <- lm(y ~ z)$residuals
# compute correlation of residuals
> cor(res_x, res_y)
# [1] 0.919145
# show this is distinct from the correlation between x and y
> cor(x, y)
# [1] 0.9695016
# compute generalized partial correlations
> generalCorr::parcorMany(cbind(x, y, z))
#      nami namj partij   partji rijMrji
# [1,] "x"  "y"  "0.8844" "1"    "-0.1156"
# [2,] "x"  "z"  "0.1581" "1"    "-0.8419"
```
The lower part of the above code reports the generalized nonlinear partial correlation coefficient between X and Y, after removing the nonlinear effect of Z, to be 0.8844, and the generalized nonlinear partial correlation coefficient between X and Z, after removing the nonlinear effect of Y, to be 0.1581. See the R package `generalCorr` and its vignettes for details. Simulation and other details are in Vinod (2017), "Generalized correlation and kernel causality with applications in development economics," Communications in Statistics - Simulation and Computation, vol. 46, pp. 4513–4534, available online 29 Dec 2015, https://doi.org/10.1080/03610918.2015.1122048.
It can be computationally expensive to solve the linear regression problems. In fact, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n − 1)th-order partial correlations. The zeroth-order partial correlation ρXY·Ø is defined to be the regular correlation coefficient ρXY.
It holds, for any $Z_0 \in \mathbf{Z}$, that[3]

$$\rho_{XY\cdot\mathbf{Z}} = \frac{\rho_{XY\cdot\mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}\,\rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}}{\sqrt{1 - \rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^{2}}\,\sqrt{1 - \rho_{YZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^{2}}}.$$

Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping subproblems property, such that using dynamic programming or simply caching the results of the recursive calls yields a complexity of $\mathcal{O}(n^3)$.

Note in the case where Z is a single variable, this reduces to:[citation needed]

$$\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{ZY}}{\sqrt{1 - \rho_{XZ}^{2}}\,\sqrt{1 - \rho_{ZY}^{2}}}.$$
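As an illustration, the following is a minimal R sketch (not part of the original article) of the recursive computation with cached subproblems; `pcor_rec` is a hypothetical helper name, and `r` is assumed to be the ordinary correlation matrix of all variables.

```r
# Minimal sketch of the recursive formula with cached subproblems ("dynamic programming").
# `r` is an ordinary correlation matrix; `i`, `j` are variable indices; `k` is a vector
# of indices of controlling variables.
pcor_rec <- local({
  cache <- new.env()
  function(r, i, j, k = integer(0)) {
    key <- paste(c(sort(c(i, j)), "|", sort(k)), collapse = ",")
    if (exists(key, envir = cache, inherits = FALSE)) return(get(key, envir = cache))
    out <- if (length(k) == 0) {
      r[i, j]                           # zeroth order: the ordinary correlation
    } else {
      z0 <- k[1]; rest <- k[-1]         # peel off one controlling variable Z0
      rij <- pcor_rec(r, i, j, rest)
      riz <- pcor_rec(r, i, z0, rest)
      rjz <- pcor_rec(r, j, z0, rest)
      (rij - riz * rjz) / sqrt((1 - riz^2) * (1 - rjz^2))
    }
    assign(key, out, envir = cache)
    out
  }
})

# Example with the data used earlier: partial correlation of x and y given z.
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
pcor_rec(cor(cbind(x, y, z)), i = 1, j = 2, k = 3)   # ~0.919, matching the residual approach
```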
The partial correlation can also be written in terms of the joint precision matrix. Consider a set of random variables $\mathbf{V} = \{V_1, \dots, V_n\}$ of cardinality n. We want the partial correlation between two variables $V_i$ and $V_j$ given all others, i.e., $\mathbf{V} \setminus \{V_i, V_j\}$. Suppose the (joint/full) covariance matrix $\Sigma = (\sigma_{ij})$ is positive definite and therefore invertible. If the precision matrix is defined as $\Omega = (p_{ij}) = \Sigma^{-1}$, then

$$\rho_{V_i V_j \cdot \mathbf{V} \setminus \{V_i, V_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}\, p_{jj}}}. \qquad (1)$$

Computing this requires $\Sigma^{-1}$, the inverse of the covariance matrix, which runs in $\mathcal{O}(n^3)$ time (using the sample covariance matrix to obtain a sample partial correlation). Note that only a single matrix inversion is required to give all the partial correlations between pairs of variables in $\mathbf{V}$.
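As a sketch of this route on the small data set used above (names chosen here for illustration), all pairwise sample partial correlations follow from one inversion of the sample covariance matrix:

```r
# Sample partial correlations via the precision matrix, using the example data.
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
S <- cov(cbind(x, y, z))                    # sample covariance matrix
P <- solve(S)                               # precision matrix (one O(n^3) inversion)
partials <- -P / sqrt(diag(P) %o% diag(P))  # rho_{ij.rest} = -p_ij / sqrt(p_ii * p_jj)
diag(partials) <- 1
partials[1, 2]                              # ~0.919: partial correlation of x and y given z
```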
To prove Equation (1), return to the previous notation (i.e., $X$, $Y$, $\mathbf{Z}$ standing for $V_i$, $V_j$, $\mathbf{V} \setminus \{V_i, V_j\}$) and start with the definition of partial correlation: $\rho_{XY\cdot\mathbf{Z}}$ is the correlation between the residuals $e_X$ and $e_Y$ resulting from the linear regression of X with Z and of Y with Z, respectively.
First, suppose $\beta$ and $\gamma$ are the coefficients for the linear regression fits; that is,

$$\beta = \operatorname*{arg\,min}_{\beta}\; \mathbb{E}\!\left[\left(X - \beta^{\mathsf T} \mathbf{Z}\right)^2\right], \qquad \gamma = \operatorname*{arg\,min}_{\gamma}\; \mathbb{E}\!\left[\left(Y - \gamma^{\mathsf T} \mathbf{Z}\right)^2\right].$$

Write the joint covariance matrix for the vector $(X, Y, \mathbf{Z}^{\mathsf T})^{\mathsf T}$ as

$$\Sigma = \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} & \Sigma_{X\mathbf{Z}} \\ \Sigma_{YX} & \Sigma_{YY} & \Sigma_{Y\mathbf{Z}} \\ \Sigma_{\mathbf{Z}X} & \Sigma_{\mathbf{Z}Y} & \Sigma_{\mathbf{Z}\mathbf{Z}} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix},$$

where

$$C_{11} = \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{bmatrix}, \qquad C_{12} = \begin{bmatrix} \Sigma_{X\mathbf{Z}} \\ \Sigma_{Y\mathbf{Z}} \end{bmatrix}, \qquad C_{21} = C_{12}^{\mathsf T}, \qquad C_{22} = \Sigma_{\mathbf{Z}\mathbf{Z}}.$$

Then the standard formula for linear regression gives

$$\beta = \Sigma_{\mathbf{Z}\mathbf{Z}}^{-1}\, \Sigma_{\mathbf{Z}X}, \qquad \gamma = \Sigma_{\mathbf{Z}\mathbf{Z}}^{-1}\, \Sigma_{\mathbf{Z}Y}.$$

Hence, the residuals can be written as

$$e_X = X - \beta^{\mathsf T} \mathbf{Z}, \qquad e_Y = Y - \gamma^{\mathsf T} \mathbf{Z}.$$

Note that $e_X$ and $e_Y$ have expectation zero because of the inclusion of an intercept term in $\mathbf{Z}$. Computing the covariance now gives

$$\operatorname{Cov}(e_X, e_Y) = \Sigma_{XY} - \Sigma_{X\mathbf{Z}}\, \Sigma_{\mathbf{Z}\mathbf{Z}}^{-1}\, \Sigma_{\mathbf{Z}Y}, \quad \operatorname{Var}(e_X) = \Sigma_{XX} - \Sigma_{X\mathbf{Z}}\, \Sigma_{\mathbf{Z}\mathbf{Z}}^{-1}\, \Sigma_{\mathbf{Z}X}, \quad \operatorname{Var}(e_Y) = \Sigma_{YY} - \Sigma_{Y\mathbf{Z}}\, \Sigma_{\mathbf{Z}\mathbf{Z}}^{-1}\, \Sigma_{\mathbf{Z}Y}. \qquad (2)$$
Next, write the precision matrix $\Omega = \Sigma^{-1}$ in a similar block form:

$$\Omega = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}, \qquad P_{11} = \begin{bmatrix} p_{XX} & p_{XY} \\ p_{YX} & p_{YY} \end{bmatrix}.$$

Then, by Schur's formula for block-matrix inversion,

$$P_{11}^{-1} = C_{11} - C_{12}\, C_{22}^{-1}\, C_{21}.$$

The entries of the right-hand-side matrix are precisely the covariances previously computed in (2), giving

$$P_{11}^{-1} = \begin{bmatrix} \operatorname{Var}(e_X) & \operatorname{Cov}(e_X, e_Y) \\ \operatorname{Cov}(e_X, e_Y) & \operatorname{Var}(e_Y) \end{bmatrix}.$$

Using the formula for the inverse of a 2×2 matrix gives

$$P_{11} = \frac{1}{\operatorname{Var}(e_X)\operatorname{Var}(e_Y) - \operatorname{Cov}(e_X, e_Y)^2} \begin{bmatrix} \operatorname{Var}(e_Y) & -\operatorname{Cov}(e_X, e_Y) \\ -\operatorname{Cov}(e_X, e_Y) & \operatorname{Var}(e_X) \end{bmatrix}.$$

So indeed, the partial correlation is

$$\rho_{XY\cdot\mathbf{Z}} = \frac{\operatorname{Cov}(e_X, e_Y)}{\sqrt{\operatorname{Var}(e_X)\,\operatorname{Var}(e_Y)}} = -\frac{p_{XY}}{\sqrt{p_{XX}\, p_{YY}}},$$
as claimed in (1).

Let three variables X, Y, Z (where Z is the "control" or "extra variable") be chosen from a joint probability distribution over n variables V. Further, let vi, 1 ≤ i ≤ N, be N n-dimensional i.i.d. observations taken from the joint probability distribution over V. The geometrical interpretation comes from considering the N-dimensional vectors x (formed by the successive values of X over the observations), y (formed by the values of Y), and z (formed by the values of Z).
It can be shown that the residuals eX,i coming from the linear regression of X on Z, if also considered as an N-dimensional vector eX (denoted rX in the accompanying graph), have a zero scalar product with the vector z generated by Z. This means that the residuals vector lies on an (N–1)-dimensional hyperplane Sz that is perpendicular to z.
The same also applies to the residuals eY,i generating a vector eY. The desired partial correlation is then the cosine of the angle φ between the projections eX and eY of x and y, respectively, onto the hyperplane perpendicular to z.[4]: ch. 7
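This interpretation can be checked numerically; the following short R sketch (not from the original article) reuses the example data from above:

```r
# The partial correlation equals the cosine of the angle between the residual vectors.
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
e_x <- lm(x ~ z)$residuals                      # component of x orthogonal to z (and the intercept)
e_y <- lm(y ~ z)$residuals
sum(e_x * e_y) / sqrt(sum(e_x^2) * sum(e_y^2))  # cosine of the angle; equals cor(e_x, e_y)
                                                # because the residuals have mean zero
```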
With the assumption that all involved variables are multivariate Gaussian, the partial correlation ρXY·Z is zero if and only if X is conditionally independent from Y given Z.[1] This property does not hold in the general case.
To test if a sample partial correlation $\hat{\rho}_{XY\cdot\mathbf{Z}}$ implies that the true population partial correlation differs from 0, Fisher's z-transform of the partial correlation can be used:

$$z(\hat{\rho}_{XY\cdot\mathbf{Z}}) = \frac{1}{2} \ln\!\left(\frac{1 + \hat{\rho}_{XY\cdot\mathbf{Z}}}{1 - \hat{\rho}_{XY\cdot\mathbf{Z}}}\right).$$

The null hypothesis is $H_0\colon \rho_{XY\cdot\mathbf{Z}} = 0$, to be tested against the two-tail alternative $H_A\colon \rho_{XY\cdot\mathbf{Z}} \neq 0$. $H_0$ can be rejected if

$$\sqrt{N - |\mathbf{Z}| - 3}\;\bigl|z(\hat{\rho}_{XY\cdot\mathbf{Z}})\bigr| > \Phi^{-1}(1 - \alpha/2),$$

where $\Phi$ is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, $\alpha$ is the significance level of $H_0$, and $N$ is the sample size. This z-transform is approximate, and the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient, and the partial variances is available.[5]
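For illustration, a minimal R sketch of this test (the helper name `pcor_z_test` is chosen here and is not a standard function):

```r
# Fisher z-test for a sample partial correlation; N = sample size, k = |Z|.
pcor_z_test <- function(r_partial, N, k, alpha = 0.05) {
  z    <- atanh(r_partial)        # Fisher z-transform: 0.5 * log((1 + r) / (1 - r))
  stat <- sqrt(N - k - 3) * abs(z)
  crit <- qnorm(1 - alpha / 2)    # Phi^{-1}(1 - alpha / 2)
  list(statistic = stat, critical = crit, reject_H0 = stat > crit)
}

# With the tiny example above (N = 4, k = 1), sqrt(N - k - 3) = 0, so H0 can never be rejected.
pcor_z_test(0.919, N = 4, k = 1)
```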
The distribution of the sample partial correlation was described by Fisher.[6]
The semipartial (or part) correlation statistic is similar to the partial correlation statistic; both compare variations of two variables after certain factors are controlled for. However, to calculate the semipartial correlation, one holds the third variable constant for either X or Y but not both; whereas for the partial correlation, one holds the third variable constant for both.[7] The semipartial correlation compares the unique variation of one variable (having removed variation associated with the Z variable(s)) with the unfiltered variation of the other, while the partial correlation compares the unique variation of one variable to the unique variation of the other.
The semipartial correlation can be viewed as more practically relevant "because it is scaled to (i.e., relative to) the total variability in the dependent (response) variable."[8] Conversely, it is less theoretically useful because it is less precise about the role of the unique contribution of the independent variable.
The absolute value of the semipartial correlation of X with Y is always less than or equal to that of the partial correlation of X with Y. The reason is this: Suppose the correlation of X with Z has been removed from X, giving the residual vector eX. In computing the semipartial correlation, Y still contains both unique variance and variance due to its association with Z. But eX, being uncorrelated with Z, can only explain some of the unique part of the variance of Y and not the part related to Z. In contrast, with the partial correlation, only eY (the part of the variance of Y that is unrelated to Z) is to be explained, so there is less variance of the type that eX cannot explain.
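The contrast can be seen directly on the example data from above (a sketch; the residual vectors are recomputed here):

```r
# Partial vs. semipartial correlation of x and y, controlling for z.
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
e_x <- lm(x ~ z)$residuals          # x with the linear effect of z removed
e_y <- lm(y ~ z)$residuals          # y with the linear effect of z removed
partial     <- cor(e_x, e_y)        # unique part of x vs. unique part of y
semipartial <- cor(e_x, y)          # unique part of x vs. all of y (z removed from x only)
abs(semipartial) <= abs(partial)    # TRUE, as stated above
```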
In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag $h$, as[citation needed]

$$\varphi(h) = \rho_{X_{t+h}\,X_{t}\,\cdot\,\{X_{t+1},\,\dots,\,X_{t+h-1}\}}.$$

This function is used to determine the appropriate lag length for an autoregression.
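In R the partial autocorrelation function is available directly; a brief sketch with simulated data (the AR(2) example here is illustrative, not from the article):

```r
# Partial autocorrelation function of a simulated AR(2) series.
set.seed(1)
ts_sim <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 200)
pacf(ts_sim)   # spikes should be negligible beyond lag 2 for an AR(2) process
```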
When the sample size is smaller than the number of variables (the high-dimensional setting), estimating partial correlations can be challenging: the sample covariance matrix is ill-conditioned, and computing its inverse becomes problematic.
Shrinkage estimation methods improve the estimate of the covariance matrix or its inverse and produce more reliable partial correlation estimates. One example is the Ledoit-Wolf shrinkage estimator,[9]

$$\hat{\Sigma}_{\text{shrink}} = (1 - \lambda)\,\hat{\Sigma} + \lambda\, T,$$

where $\hat{\Sigma}$ is the sample covariance matrix, $T$ is a target matrix (e.g., a diagonal matrix), and $\lambda \in [0, 1]$ is the shrinkage intensity.

The partial correlation under the Ledoit-Wolf shrinkage[10] is then

$$\hat{\rho}_{ij\cdot\mathbf{V}\setminus\{V_i, V_j\}} = -\frac{q_{ij}}{\sqrt{q_{ii}\, q_{jj}}},$$

where $Q = (q_{ij})$ is the inverse of $\hat{\Sigma}_{\text{shrink}}$. This method is used in a variety of fields including finance and genomics.[11]
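A minimal sketch of the idea in R (here with a fixed shrinkage intensity chosen purely for illustration; the Ledoit-Wolf estimator selects λ from the data, and `shrink_pcor` is a hypothetical helper name):

```r
# Partial correlations from a shrunken covariance matrix in a high-dimensional setting.
shrink_pcor <- function(data, lambda = 0.2) {
  S        <- cov(data)                            # sample covariance (singular if p > n - 1)
  Target   <- diag(diag(S))                        # diagonal target matrix
  S_shrunk <- (1 - lambda) * S + lambda * Target   # shrunken, well-conditioned covariance
  Q        <- solve(S_shrunk)                      # its inverse
  pc       <- -Q / sqrt(diag(Q) %o% diag(Q))       # -q_ij / sqrt(q_ii * q_jj)
  diag(pc) <- 1
  pc
}

# Example: 10 variables but only 8 observations, where cov(data) itself cannot be inverted.
set.seed(2)
data <- matrix(rnorm(8 * 10), nrow = 8)
shrink_pcor(data)[1:3, 1:3]
```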