The sign of the covariance, therefore, shows the tendency in the linear relationship between the variables. If greater values of one variable mainly correspond with greater values of the other variable, and the same holds for lesser values (that is, the variables tend to show similar behavior), the covariance is positive.[2] In the opposite case, when greater values of one variable mainly correspond to lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. One feature of covariance is that it carries units of measurement, and its magnitude depends on those units: changing the units (e.g., from meters to millimeters) changes the covariance value proportionally, making it difficult to assess the strength of the relationship from the covariance alone. In some situations, it is desirable to compare the strength of the joint association between different pairs of random variables that do not necessarily have the same units.[3] In those situations, we use the correlation coefficient, which normalizes the covariance by dividing by the geometric mean of the variances (i.e., the product of the standard deviations) of the two random variables, yielding a result between −1 and 1 and making the units irrelevant.[4]
A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the sample, also serves as an estimated value of the population parameter.
where $\operatorname{E}[X]$ is the expected value of $X$, also known as the mean of $X$. The covariance is also sometimes denoted $\sigma_{XY}$ or $\sigma(X,Y)$, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:
$$\operatorname{cov}(X,Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].$$
This identity is useful for mathematical derivations. From the viewpoint of numerical computation, however, it is susceptible to catastrophic cancellation (see the section on numerical computation below).
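For reference, the identity follows by expanding the defining expression $\operatorname{E}[(X-\operatorname{E}[X])(Y-\operatorname{E}[Y])]$ and applying linearity of expectation:
$$\begin{align*}
\operatorname{cov}(X,Y)
  &= \operatorname{E}\!\left[XY - X\operatorname{E}[Y] - \operatorname{E}[X]Y + \operatorname{E}[X]\operatorname{E}[Y]\right] \\
  &= \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] - \operatorname{E}[X]\operatorname{E}[Y] + \operatorname{E}[X]\operatorname{E}[Y] \\
  &= \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].
\end{align*}$$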
The units of measurement of the covariance $\operatorname{cov}(X,Y)$ are those of $X$ times those of $Y$. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
If the (real) random variable pair $(X,Y)$ can take on the values $(x_i,y_i)$ for $i=1,\ldots,n$, with equal probabilities $p_i=1/n$, then the covariance can be equivalently written in terms of the means $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ as
$$\operatorname{cov}(X,Y) = \frac{1}{n}\sum_{i=1}^{n}\left(x_i-\operatorname{E}[X]\right)\left(y_i-\operatorname{E}[Y]\right).$$
It can also be equivalently expressed, without directly referring to the means, as[7]
$$\operatorname{cov}(X,Y) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j>i}\left(x_i-x_j\right)\left(y_i-y_j\right) = \frac{1}{2n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\left(x_i-x_j\right)\left(y_i-y_j\right).$$
More generally, if there are $n$ possible realizations of $(X,Y)$, namely $(x_i,y_i)$ but with possibly unequal probabilities $p_i$ for $i=1,\ldots,n$, then the covariance is
$$\operatorname{cov}(X,Y) = \sum_{i=1}^{n} p_i\left(x_i-\operatorname{E}[X]\right)\left(y_i-\operatorname{E}[Y]\right).$$
In the case where two discrete random variables $X$ and $Y$ have a joint probability distribution, represented by elements $p_{i,j}$ corresponding to the joint probabilities of $P(X=x_i,\,Y=y_j)$, the covariance is calculated using a double summation over the indices of the matrix:
$$\operatorname{cov}(X,Y) = \sum_{i}\sum_{j} p_{i,j}\left(x_i-\operatorname{E}[X]\right)\left(y_j-\operatorname{E}[Y]\right).$$
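A minimal sketch of these discrete formulas in Python (the function name and example values are illustrative, not from the source):

```python
def covariance(pairs, probs=None):
    """Covariance of discrete realizations (x_i, y_i) with probabilities p_i.

    If probs is None, equal probabilities 1/n are assumed.
    """
    n = len(pairs)
    if probs is None:
        probs = [1.0 / n] * n
    # Means E[X] and E[Y] under the given probabilities
    mean_x = sum(p * x for p, (x, _) in zip(probs, pairs))
    mean_y = sum(p * y for p, (_, y) in zip(probs, pairs))
    # cov(X, Y) = sum_i p_i (x_i - E[X]) (y_i - E[Y])
    return sum(p * (x - mean_x) * (y - mean_y) for p, (x, y) in zip(probs, pairs))

# Equal probabilities: perfectly linearly related values give a positive covariance
print(covariance([(1, 2), (2, 4), (3, 6)]))
# Unequal probabilities: opposing values give a negative covariance
print(covariance([(0, 1), (1, 0)], probs=[0.25, 0.75]))
```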
Consider three independent random variables $A, B, C$ and two constants $q, r$, and define $X = qA + B$ and $Y = rA + C$. Then
$$\operatorname{cov}(X,Y) = \operatorname{cov}(qA+B,\,rA+C) = qr\operatorname{var}(A).$$
In the special case $q = r = 1$ and $B = C$, we have $X = Y$, the covariance between $X$ and $Y$ is just the variance of $X$, and the name covariance is entirely appropriate.
Figure: Geometric interpretation of the covariance example. Each cuboid is the axis-aligned bounding box of its point (x, y, f(x, y)) and the X and Y means (magenta point). The covariance is the sum of the volumes of the cuboids in the 1st and 3rd quadrants (red) and in the 2nd and 4th (blue).
Suppose that $X$ and $Y$ have the following joint probability mass function,[8] in which the six central cells give the discrete joint probabilities $f(x,y)$ of the six hypothetical realizations $(x,y)$:
| f(x, y)  | x = 5 | x = 6 | x = 7 | f_Y(y) |
|----------|-------|-------|-------|--------|
| y = 8    | 0     | 0.4   | 0.1   | 0.5    |
| y = 9    | 0.3   | 0     | 0.2   | 0.5    |
| f_X(x)   | 0.3   | 0.4   | 0.3   | 1      |
$X$ can take on three values (5, 6 and 7) while $Y$ can take on two (8 and 9). Their means are $\mu_X = 5(0.3) + 6(0.4) + 7(0.3) = 6$ and $\mu_Y = 8(0.5) + 9(0.5) = 8.5$. Then,
$$\operatorname{cov}(X,Y) = \sum_{(x,y)} f(x,y)\,(x-6)(y-8.5) = (0)(-1)(-0.5) + (0.4)(0)(-0.5) + (0.1)(1)(-0.5) + (0.3)(-1)(0.5) + (0)(0)(0.5) + (0.2)(1)(0.5) = -0.1.$$
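A short Python check of this worked example, summing directly over the cells of the table above (the dictionary layout is just one convenient encoding of the pmf):

```python
# Joint pmf from the table: keys are (x, y), values are f(x, y)
pmf = {
    (5, 8): 0.0, (6, 8): 0.4, (7, 8): 0.1,
    (5, 9): 0.3, (6, 9): 0.0, (7, 9): 0.2,
}

# Marginal means mu_X and mu_Y
mu_x = sum(p * x for (x, y), p in pmf.items())
mu_y = sum(p * y for (x, y), p in pmf.items())

# Double summation over the cells of the table
cov_xy = sum(p * (x - mu_x) * (y - mu_y) for (x, y), p in pmf.items())
print(mu_x, mu_y, cov_xy)  # 6.0 8.5 -0.1 (up to floating-point rounding)
```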
A useful identity to compute the covariance between two random variables $X, Y$ is Hoeffding's covariance identity:[9]
$$\operatorname{cov}(X,Y) = \int_{\mathbb{R}}\int_{\mathbb{R}} \left(F_{(X,Y)}(x,y) - F_X(x)F_Y(y)\right)\,dx\,dy,$$
where $F_{(X,Y)}(x,y)$ is the joint cumulative distribution function of the random vector $(X,Y)$ and $F_X(x)$, $F_Y(y)$ are the marginals.
Random variables whose covariance is zero are called uncorrelated.[6]: 121  Similarly, the components of random vectors whose covariance matrix is zero in every entry outside the main diagonal are also called uncorrelated.
The converse, however, is not generally true. For example, let $X$ be uniformly distributed in $[-1,1]$ and let $Y = X^2$. Clearly, $X$ and $Y$ are not independent, but
$$\operatorname{cov}(X,Y) = \operatorname{cov}(X, X^2) = \operatorname{E}[X\cdot X^2] - \operatorname{E}[X]\operatorname{E}[X^2] = \operatorname{E}[X^3] - \operatorname{E}[X]\operatorname{E}[X^2] = 0 - 0\cdot\operatorname{E}[X^2] = 0.$$
In this case, the relationship between $Y$ and $X$ is non-linear, while correlation and covariance are measures of linear dependence between two random variables. This example shows that if two random variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness does imply independence.[11]
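A quick numerical illustration of this example, as a Monte Carlo sketch (the sample size and seed are arbitrary choices, not from the source):

```python
import random

random.seed(0)
n = 100_000
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [x * x for x in xs]  # Y = X^2 is completely determined by X

mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Sample covariance is close to 0 even though Y depends on X
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
print(cov_xy)  # approximately 0, up to Monte Carlo noise
```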
$X$ and $Y$ whose covariance is positive are called positively correlated, which implies that if $X > \operatorname{E}[X]$ then likely $Y > \operatorname{E}[Y]$. Conversely, $X$ and $Y$ with negative covariance are negatively correlated, and if $X > \operatorname{E}[X]$ then likely $Y < \operatorname{E}[Y]$.
In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L2 inner product of real-valued functions on the sample space.
As a result, for random variables with finite variance, the inequality
$$\left|\operatorname{cov}(X,Y)\right| \le \sqrt{\operatorname{var}(X)\operatorname{var}(Y)}$$
holds via the Cauchy–Schwarz inequality.
Proof: If $\operatorname{var}(Y) = 0$, then it holds trivially. Otherwise, let the random variable
$$Z = X - \frac{\operatorname{cov}(X,Y)}{\operatorname{var}(Y)}\,Y.$$
Then
$$0 \le \operatorname{var}(Z) = \operatorname{var}(X) - \frac{\operatorname{cov}(X,Y)^2}{\operatorname{var}(Y)},$$
which rearranges to the claimed inequality.
The sample covariances among $K$ variables based on $N$ observations of each, drawn from an otherwise unobserved population, are given by the $K \times K$ matrix $\overline{\mathbf{q}} = \left[q_{jk}\right]$ with the entries
$$q_{jk} = \frac{1}{N-1}\sum_{i=1}^{N}\left(X_{ij}-\bar{X}_j\right)\left(X_{ik}-\bar{X}_k\right),$$
which is an estimate of the covariance between variable $j$ and variable $k$.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\mathbf{X}$, a vector whose jth element $(j = 1,\ldots,K)$ is one of the random variables. The reason the sample covariance matrix has $N-1$ in the denominator rather than $N$ is essentially that the population mean $\operatorname{E}(\mathbf{X})$ is not known and is replaced by the sample mean $\bar{\mathbf{X}}$. If the population mean $\operatorname{E}(\mathbf{X})$ is known, the analogous unbiased estimate is given by
$$q_{jk} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{ij}-\operatorname{E}(X_j)\right)\left(X_{ik}-\operatorname{E}(X_k)\right).$$
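A small sketch of the $N-1$ sample estimator using NumPy (the data here are made up for illustration; `np.cov` with `rowvar=False` applies the same formula and is used only as a cross-check):

```python
import numpy as np

# N observations (rows) of K variables (columns); made-up example data
X = np.array([
    [2.1,  8.0, 1.0],
    [2.5, 10.0, 0.5],
    [3.6, 13.0, 0.2],
    [4.0, 15.0, 0.1],
])
N, K = X.shape

# Sample covariance matrix with N - 1 in the denominator (sample mean subtracted)
X_centered = X - X.mean(axis=0)
Q = X_centered.T @ X_centered / (N - 1)

# Cross-check against NumPy's built-in estimator
assert np.allclose(Q, np.cov(X, rowvar=False))
print(Q)
```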
For a vector $\mathbf{X} = \begin{pmatrix} X_1 & \cdots & X_m \end{pmatrix}^\mathsf{T}$ of $m$ jointly distributed random variables with finite second moments, its auto-covariance matrix (also known as the variance–covariance matrix or simply the covariance matrix) $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ (also denoted by $\Sigma(\mathbf{X})$ or $\operatorname{cov}(\mathbf{X},\mathbf{X})$) is defined as[12]: 335
$$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{cov}(\mathbf{X},\mathbf{X}) = \operatorname{E}\!\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^\mathsf{T}\right] = \operatorname{E}\!\left[\mathbf{X}\mathbf{X}^\mathsf{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^\mathsf{T}.$$
Let $\mathbf{X}$ be a random vector with covariance matrix $\Sigma$, and let $A$ be a matrix that can act on $\mathbf{X}$ on the left. The covariance matrix of the matrix-vector product $A\mathbf{X}$ is:
$$\operatorname{cov}(A\mathbf{X}, A\mathbf{X}) = A\,\Sigma\,A^\mathsf{T}.$$
This is a direct result of the linearity of expectation.
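A numerical illustration of this transformation rule with NumPy (the matrix $A$, the covariance $\Sigma$, and the sample size are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary covariance matrix Sigma (symmetric positive definite) and matrix A
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
A = np.array([[1.0, -2.0],
              [0.5,  3.0]])

# Draw samples of X with covariance Sigma, then transform to A @ X
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=200_000)
AX = X @ A.T

# Empirical covariance of A X should approximate A Sigma A^T
print(np.cov(AX, rowvar=False))
print(A @ Sigma @ A.T)
```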
The $(i,j)$-th element of the cross-covariance matrix $\operatorname{cov}(\mathbf{X},\mathbf{Y}) = \operatorname{E}\!\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^\mathsf{T}\right]$ is equal to the covariance $\operatorname{cov}(X_i, Y_j)$ between the i-th scalar component of $\mathbf{X}$ and the j-th scalar component of $\mathbf{Y}$. In particular, $\operatorname{cov}(\mathbf{Y},\mathbf{X})$ is the transpose of $\operatorname{cov}(\mathbf{X},\mathbf{Y})$.
Cross-covariance sesquilinear form of random vectors in a real or complex Hilbert space
More generally, let $(H_1, \langle\,\cdot\,,\,\cdot\,\rangle_1)$ and $(H_2, \langle\,\cdot\,,\,\cdot\,\rangle_2)$ be Hilbert spaces over $\mathbb{R}$ or $\mathbb{C}$ with the inner product antilinear in the first variable, and let $\mathbf{X}$, $\mathbf{Y}$ be $H_1$-valued resp. $H_2$-valued random variables. Then the covariance of $\mathbf{X}$ and $\mathbf{Y}$ is the sesquilinear form on $H_1 \times H_2$ (antilinear in the first variable) given by
$$\operatorname{cov}(\mathbf{X},\mathbf{Y})(h_1,h_2) = \operatorname{E}\!\left[\langle h_1,\, \mathbf{X}-\operatorname{E}[\mathbf{X}]\rangle_1\,\langle \mathbf{Y}-\operatorname{E}[\mathbf{Y}],\, h_2\rangle_2\right].$$
When $\operatorname{E}[XY] \approx \operatorname{E}[X]\operatorname{E}[Y]$, the equation $\operatorname{cov}(X,Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]$ is prone to catastrophic cancellation if $\operatorname{E}[XY]$ and $\operatorname{E}[X]\operatorname{E}[Y]$ are not computed exactly, and thus should be avoided in computer programs when the data has not been centered before.[13] Numerically stable algorithms should be preferred in this case.[14]
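As one example of such an approach, here is a minimal sketch of a one-pass, Welford-style update of the co-moment; this particular implementation is illustrative and is not claimed to be the specific algorithm of the cited references:

```python
def stable_covariance(xs, ys):
    """One-pass covariance estimate that avoids the E[XY] - E[X]E[Y] cancellation.

    Updates running means and the co-moment sum((x - mean_x)(y - mean_y))
    incrementally, Welford-style.
    """
    mean_x = mean_y = comoment = 0.0
    n = 0
    for x, y in zip(xs, ys):
        n += 1
        dx = x - mean_x
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        # Uses the old deviation dx together with the updated mean_y
        comoment += dx * (y - mean_y)
    return comoment / n  # population covariance; use n - 1 for the sample estimate

# Large offsets make the naive E[XY] - E[X]E[Y] formula lose precision
xs = [1e9 + d for d in (0.1, 0.2, 0.3, 0.4)]
ys = [1e9 + d for d in (0.4, 0.3, 0.2, 0.1)]
print(stable_covariance(xs, ys))  # close to -0.0125
```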
The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
Covariance is an important measure in biology. Certain sequences of DNA are conserved more than others among species, and thus to study secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found, or no changes at all are found in noncoding RNA (such as microRNA), sequences are found to be necessary for common structural motifs, such as an RNA loop. In genetics, covariance serves as a basis for computation of the Genetic Relationship Matrix (GRM) (also known as a kinship matrix), enabling inference on population structure from a sample with no known close relatives, as well as inference on the estimation of heritability of complex traits.
In the theory of evolution and natural selection, the Price equation describes how a genetic trait changes in frequency over time. The equation uses a covariance between a trait and fitness to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population.[15][16]
The covariance matrix is important in estimating the initial conditions required for running weather forecast models, a procedure known as data assimilation. The "forecast error covariance matrix" is typically constructed between perturbations around a mean state (either a climatological or ensemble mean). The "observation error covariance matrix" is constructed to represent the magnitude of combined observational errors (on the diagonal) and the correlated errors between measurements (off the diagonal). This is an example of its widespread application to Kalman filtering and more general state estimation for time-varying systems.
The eddy covariance technique is a key atmospheric measurement technique in which the covariance between the instantaneous deviation in vertical wind speed from its mean value and the instantaneous deviation in gas concentration is the basis for calculating vertical turbulent fluxes.
The denominator can also be written as $\sqrt{\operatorname{var}(X)\operatorname{var}(Y)}$, which is the geometric mean of the variances.
Thus we see that the correlation coefficient is a normalized version of the covariance. It is always a number between $-1$ and $1$, and is unitless (unlike the covariance).
The correlation coefficient is often denoted with an $r$, and is frequently reported in scientific studies.
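A minimal sketch of this normalization in Python (the sample data are made up for illustration):

```python
import math

def pearson_r(xs, ys):
    """Correlation coefficient: covariance divided by the product of standard deviations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
    std_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs) / n)
    std_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / n)
    return cov / (std_x * std_y)

# Rescaling a variable (e.g. meters -> millimeters) changes the covariance
# proportionally but leaves the correlation coefficient unchanged.
heights_m = [1.60, 1.70, 1.75, 1.80]
weights_kg = [55.0, 68.0, 72.0, 80.0]
print(pearson_r(heights_m, weights_kg))
print(pearson_r([h * 1000 for h in heights_m], weights_kg))  # same value
```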
The covariance matrix is used in principal component analysis to reduce feature dimensionality in data preprocessing. The principal components are the dimensions that explain the most variance in the data. A well-known application is to intelligence, producing the g factor. Another is to personality, with models like the five factor model being derived from principal component analysis.
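A brief sketch of covariance-based PCA with NumPy (the synthetic data, seed, and number of retained components are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: 200 samples of 3 features, with two features correlated
X = rng.normal(size=(200, 3))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]

# PCA from the covariance matrix: its eigenvectors are the principal components
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # sort by explained variance, descending
components = eigvecs[:, order]
explained = eigvals[order] / eigvals.sum()

# Project centered data onto the top two components to reduce dimensionality
X_reduced = (X - X.mean(axis=0)) @ components[:, :2]
print(explained)        # fraction of variance explained by each component
print(X_reduced.shape)  # (200, 2)
```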
Oxford Dictionary of Statistics. Oxford University Press, 2002, p. 104.
Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 9783319680743.
Yuli Zhang; Huaiyu Wu; Lei Cheng (June 2012). "Some new deformation formulas about variance and covariance". Proceedings of 4th International Conference on Modelling, Identification and Control (ICMIC2012). pp. 987–992.
Dekking, Michel, ed. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. Springer Texts in Statistics. London: Springer. ISBN 978-1-85233-896-1.
Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.