In probability and statistics, a multivariate random variable or random vector is a list or vector of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system; often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of an unspecified person from within a group would be a random vector. Normally each element of a random vector is a real number.
Every random vector gives rise to a probability measure on $\mathbb{R}^n$ with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector.
Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.
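As an illustrative sketch (the sample sizes and distributions below are arbitrary choices, not from the source), these operations act sample by sample on realizations of random vectors in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 10_000 joint samples of two 3-dimensional random vectors.
X = rng.normal(size=(10_000, 3))
Y = rng.uniform(size=(10_000, 3))

S = X + Y                   # addition
D = X - Y                   # subtraction
Z = 2.5 * X                 # multiplication by a scalar
p = np.sum(X * Y, axis=1)   # inner product: one random scalar per sample
```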
More generally, we can study invertible mappings of random vectors.[2]: pp. 284–285
Let $g$ be a one-to-one mapping from an open subset $\mathcal{D}$ of $\mathbb{R}^n$ onto a subset $\mathcal{R}$ of $\mathbb{R}^n$, let $g$ have continuous partial derivatives in $\mathcal{D}$ and let the Jacobian determinant of $g$ be zero at no point of $\mathcal{D}$. Assume that the real random vector $\mathbf{X}$ has a probability density function $f_{\mathbf{X}}(\mathbf{x})$ and satisfies $P(\mathbf{X} \in \mathcal{D}) = 1$. Then the random vector $\mathbf{Y} = g(\mathbf{X})$ has probability density

$$f_{\mathbf{Y}}(\mathbf{y}) = \left. \frac{f_{\mathbf{X}}(\mathbf{x})}{\left| \det \dfrac{\partial g(\mathbf{x})}{\partial \mathbf{x}} \right|} \right|_{\mathbf{x} = g^{-1}(\mathbf{y})}, \qquad \mathbf{y} \in \mathcal{R}.$$
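As a minimal sketch of this formula, assuming the special case of an invertible linear map $g(\mathbf{x}) = A\mathbf{x}$ and a standard Gaussian $\mathbf{X}$ (both choices are illustrative, not from the cited source), the transformed density can be cross-checked against the known Gaussian density of $\mathbf{Y}$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Special case g(x) = A x with A invertible; the Jacobian
# determinant of g is det(A) at every point.
A = np.array([[2.0, 0.5],
              [0.3, 1.0]])
f_X = multivariate_normal(mean=np.zeros(2), cov=np.eye(2)).pdf  # density of X

def f_Y(y):
    """Density of Y = g(X) via the change-of-variables formula."""
    x = np.linalg.solve(A, y)                  # x = g^{-1}(y)
    return f_X(x) / abs(np.linalg.det(A))

# Cross-check: for Gaussian X, Y = A X is Gaussian with covariance A A^T.
y = np.array([0.7, -0.2])
direct = multivariate_normal(mean=np.zeros(2), cov=A @ A.T).pdf(y)
print(f_Y(y), direct)                          # the two values agree
```

For a linear map the Jacobian determinant is constant, which is what makes this case easy to verify directly.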
The covariance matrix (also called second central moment or variance-covariance matrix) of an $n \times 1$ random vector $\mathbf{X}$ is an $n \times n$ matrix whose $(i,j)$th element is the covariance between the $i$th and the $j$th random variables. The covariance matrix is the expected value, element by element, of the $n \times n$ matrix computed as $[\mathbf{X} - \operatorname{E}[\mathbf{X}]][\mathbf{X} - \operatorname{E}[\mathbf{X}]]^T$, where the superscript T refers to the transpose of the indicated vector:[2]: p. 464 [3]: p. 335

$$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{Var}[\mathbf{X}] = \operatorname{E}\!\left[ (\mathbf{X} - \operatorname{E}[\mathbf{X}]) (\mathbf{X} - \operatorname{E}[\mathbf{X}])^T \right] \qquad \text{(Eq.3)}$$
By extension, the cross-covariance matrix between two random vectors $\mathbf{X}$ and $\mathbf{Y}$ ($\mathbf{X}$ having $n$ elements and $\mathbf{Y}$ having $p$ elements) is the $n \times p$ matrix[3]: p. 336

$$\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{Cov}[\mathbf{X}, \mathbf{Y}] = \operatorname{E}\!\left[ (\mathbf{X} - \operatorname{E}[\mathbf{X}]) (\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^T \right] \qquad \text{(Eq.4)}$$
where again the matrix expectation is taken element-by-element in the matrix. Here the $(i,j)$th element is the covariance between the $i$th element of $\mathbf{X}$ and the $j$th element of $\mathbf{Y}$.
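As an illustrative sketch (the dimensions and distributions below are arbitrary assumptions), both the covariance matrix of Eq.3 and the cross-covariance matrix of Eq.4 can be estimated from simulated samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

# X has n = 3 elements; Y has p = 2 elements and is correlated with X.
X = rng.normal(size=(n_samples, 3))
Y = X[:, :2] + 0.5 * rng.normal(size=(n_samples, 2))

# K_XX = E[(X - E[X])(X - E[X])^T], estimated element by element (Eq.3).
Xc = X - X.mean(axis=0)
K_XX = Xc.T @ Xc / n_samples        # 3 x 3 covariance matrix

# K_XY = E[(X - E[X])(Y - E[Y])^T], the n x p cross-covariance (Eq.4).
Yc = Y - Y.mean(axis=0)
K_XY = Xc.T @ Yc / n_samples        # 3 x 2

# np.cov agrees (bias=True selects the 1/N normalization used above).
print(np.allclose(K_XX, np.cov(X.T, bias=True)))
```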
The correlation matrix (also called second moment) of an $n \times 1$ random vector $\mathbf{X}$ is an $n \times n$ matrix whose $(i,j)$th element is the correlation between the $i$th and the $j$th random variables. The correlation matrix is the expected value, element by element, of the $n \times n$ matrix computed as $\mathbf{X}\mathbf{X}^T$, where the superscript T refers to the transpose of the indicated vector:[4]: p. 190 [3]: p. 334

$$\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X}\mathbf{X}^T] \qquad \text{(Eq.5)}$$
By extension, the cross-correlation matrix between two random vectors $\mathbf{X}$ and $\mathbf{Y}$ ($\mathbf{X}$ having $n$ elements and $\mathbf{Y}$ having $p$ elements) is the $n \times p$ matrix

$$\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[\mathbf{X}\mathbf{Y}^T].$$
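A hedged numerical sketch of these second-moment matrices (the distributions below are arbitrary choices), together with a check of the identity $\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{R}_{\mathbf{X}\mathbf{X}} - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T$ relating the correlation and covariance matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 100_000

X = rng.normal(loc=1.0, size=(n_samples, 3))   # nonzero mean on purpose
Y = rng.normal(size=(n_samples, 2))

R_XX = X.T @ X / n_samples    # correlation matrix E[X X^T] (Eq.5)
R_XY = X.T @ Y / n_samples    # cross-correlation matrix E[X Y^T]

# Consistency check: K_XX = R_XX - E[X] E[X]^T relates the two moments.
m = X.mean(axis=0)
print(np.allclose(R_XX - np.outer(m, m), np.cov(X.T, bias=True)))
```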
Two random vectors $\mathbf{X} = (X_1, \ldots, X_m)^T$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ are called independent if for all $\mathbf{x}$ and $\mathbf{y}$

$$F_{\mathbf{X},\mathbf{Y}}(\mathbf{x}, \mathbf{y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y})$$

where $F_{\mathbf{X}}$ and $F_{\mathbf{Y}}$ denote the cumulative distribution functions of $\mathbf{X}$ and $\mathbf{Y}$ and $F_{\mathbf{X},\mathbf{Y}}$ denotes their joint cumulative distribution function. Independence of $\mathbf{X}$ and $\mathbf{Y}$ is often denoted by $\mathbf{X} \perp\!\!\!\perp \mathbf{Y}$. Written component-wise, $\mathbf{X}$ and $\mathbf{Y}$ are called independent if for all $x_1, \ldots, x_m, y_1, \ldots, y_n$

$$F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1, \ldots, x_m, y_1, \ldots, y_n) = F_{X_1,\ldots,X_m}(x_1, \ldots, x_m) \cdot F_{Y_1,\ldots,Y_n}(y_1, \ldots, y_n).$$
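As an empirical illustration (the sample size, distributions and evaluation points are arbitrary assumptions), the factorization of the joint cumulative distribution function can be checked on simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

X = rng.normal(size=(N, 2))     # X and Y are independent by construction
Y = rng.uniform(size=(N, 2))

x0 = np.array([0.3, -0.5])      # an arbitrary evaluation point for F_X
y0 = np.array([0.6, 0.2])       # an arbitrary evaluation point for F_Y

# Empirical CDFs: P(X <= x0), P(Y <= y0), and the joint probability.
in_x = np.all(X <= x0, axis=1)
in_y = np.all(Y <= y0, axis=1)
F_X = in_x.mean()
F_Y = in_y.mean()
F_XY = (in_x & in_y).mean()

print(F_XY, F_X * F_Y)          # approximately equal under independence
```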
One can take the expectation of a quadratic form in the random vector $\mathbf{X}$ as follows:[5]: pp. 170–171

$$\operatorname{E}[\mathbf{X}^T A \mathbf{X}] = \operatorname{E}[\mathbf{X}]^T A \operatorname{E}[\mathbf{X}] + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}})$$

where $K_{\mathbf{X}\mathbf{X}}$ is the covariance matrix of $\mathbf{X}$ and $\operatorname{tr}$ refers to the trace of a matrix, that is, to the sum of the elements on its main diagonal (from upper left to lower right). Since the quadratic form is a scalar, so is its expectation.
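A Monte Carlo sketch of this identity (the matrix $A$, mean and covariance below are randomly generated assumptions; Gaussian sampling is a convenience, since the identity holds for any distribution with these first two moments):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
mu = rng.normal(size=n)                     # E[X]
L = rng.normal(size=(n, n))
K = L @ L.T                                 # a valid covariance matrix
A = rng.normal(size=(n, n))                 # arbitrary non-stochastic matrix

# Monte Carlo estimate of E[X^T A X] from Gaussian samples.
X = rng.multivariate_normal(mu, K, size=500_000)
mc = np.einsum('ij,jk,ik->i', X, A, X).mean()

# Closed form: E[X]^T A E[X] + tr(A K_XX).
exact = mu @ A @ mu + np.trace(A @ K)
print(mc, exact)                            # close, up to Monte Carlo error
```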
Proof: Let $\mathbf{z}$ be an $n \times 1$ random vector with $\operatorname{E}[\mathbf{z}] = \mu$ and $\operatorname{Cov}[\mathbf{z}] = V$ and let $A$ be an $n \times n$ non-stochastic matrix.

Then based on the formula for the covariance, if we denote $\mathbf{z}^T = X$ and $\mathbf{z}^T A^T = Y$, we see that

$$\operatorname{E}[\mathbf{z}^T A \mathbf{z}] = \operatorname{E}[X Y^T] = \operatorname{Cov}[X, Y] + \operatorname{E}[X] \operatorname{E}[Y]^T = \operatorname{Cov}[\mathbf{z}^T, \mathbf{z}^T A^T] + \mu^T A \mu.$$

Since a scalar equals its own trace and the trace is invariant under cyclic permutations, $\operatorname{Cov}[\mathbf{z}^T, \mathbf{z}^T A^T] = \operatorname{E}\!\left[ (\mathbf{z} - \mu)^T A (\mathbf{z} - \mu) \right] = \operatorname{E}\!\left[ \operatorname{tr}\!\big( A (\mathbf{z} - \mu)(\mathbf{z} - \mu)^T \big) \right] = \operatorname{tr}(A V)$, so

$$\operatorname{E}[\mathbf{z}^T A \mathbf{z}] = \operatorname{tr}(A V) + \mu^T A \mu,$$

which is the stated result.
One can take the expectation of the product of two different quadratic forms in a zero-mean Gaussian random vector $\mathbf{X}$ as follows (for symmetric matrices $A$ and $B$):[5]: pp. 162–176

$$\operatorname{E}\!\left[ (\mathbf{X}^T A \mathbf{X})(\mathbf{X}^T B \mathbf{X}) \right] = 2 \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}} B K_{\mathbf{X}\mathbf{X}}) + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}}) \operatorname{tr}(B K_{\mathbf{X}\mathbf{X}})$$

where again $K_{\mathbf{X}\mathbf{X}}$ is the covariance matrix of $\mathbf{X}$. Since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.
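A Monte Carlo sketch of this identity (here $A$ and $B$ are symmetrized, matching the symmetric case stated above; all numbers are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
L = rng.normal(size=(n, n))
K = L @ L.T                                 # covariance matrix of X

A = rng.normal(size=(n, n)); A = A + A.T    # symmetric, as assumed above
B = rng.normal(size=(n, n)); B = B + B.T    # symmetric, as assumed above

X = rng.multivariate_normal(np.zeros(n), K, size=1_000_000)
qA = np.einsum('ij,jk,ik->i', X, A, X)      # X^T A X per sample
qB = np.einsum('ij,jk,ik->i', X, B, X)      # X^T B X per sample

exact = 2 * np.trace(A @ K @ B @ K) + np.trace(A @ K) * np.trace(B @ K)
print((qA * qB).mean(), exact)              # close, up to Monte Carlo error
```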
In portfolio theory in finance, an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector $\mathbf{X}$ is the vector of random returns on the individual assets, and the portfolio return $p$ (a random scalar) is the inner product of the vector of random returns with a vector $\mathbf{w}$ of portfolio weights, the fractions of the portfolio placed in the respective assets. Since $p = \mathbf{w}^T \mathbf{X}$, the expected value of the portfolio return is $\mathbf{w}^T \operatorname{E}[\mathbf{X}]$ and the variance of the portfolio return can be shown to be $\mathbf{w}^T C \mathbf{w}$, where $C$ is the covariance matrix of $\mathbf{X}$.
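As a hedged illustration with made-up numbers for three assets:

```python
import numpy as np

# Hypothetical inputs (made-up numbers): expected returns and the
# covariance matrix C for three risky assets.
mu = np.array([0.08, 0.05, 0.03])           # E[X]
C = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.06, 0.00],
              [0.01, 0.00, 0.04]])
w = np.array([0.5, 0.3, 0.2])               # portfolio weights, summing to 1

expected_return = w @ mu                    # w^T E[X]
variance = w @ C @ w                        # w^T C w
print(expected_return, variance)
```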
In linear regression theory, we have data on $n$ observations on a dependent variable $y$ and $n$ observations on each of $k$ independent variables $x_j$. The observations on the dependent variable are stacked into a column vector $\mathbf{y}$; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a design matrix $X$ (not denoting a random vector in this context) of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:

$$\mathbf{y} = X \beta + \mathbf{e}$$
where $\beta$ is a postulated fixed but unknown vector of $k$ response coefficients, and $\mathbf{e}$ is an unknown random vector reflecting random influences on the dependent variable. By some chosen technique such as ordinary least squares, a vector $\hat{\beta}$ is chosen as an estimate of $\beta$, and the estimate of the vector $\mathbf{e}$, denoted $\hat{\mathbf{e}}$, is computed as

$$\hat{\mathbf{e}} = \mathbf{y} - X \hat{\beta}.$$
Then the statistician must analyze the properties of $\hat{\beta}$ and $\hat{\mathbf{e}}$, which are viewed as random vectors since a randomly different selection of $n$ cases to observe would have resulted in different values for them.
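A minimal simulated sketch of the estimation step described above (the design matrix, true coefficients and noise scale are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 200, 3                               # observations and regressors

X = rng.normal(size=(n, k))                 # design matrix
beta_true = np.array([1.5, -2.0, 0.7])      # unknown in practice
e = rng.normal(scale=0.5, size=n)           # random influences
y = X @ beta_true + e                       # y = X beta + e

# Ordinary least squares estimate and the residual vector.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e_hat = y - X @ beta_hat                    # estimate of e
```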
The evolution of a $k \times 1$ random vector $\mathbf{X}$ through time can be modelled as a vector autoregression (VAR) as follows:

$$\mathbf{X}_t = c + A_1 \mathbf{X}_{t-1} + A_2 \mathbf{X}_{t-2} + \cdots + A_p \mathbf{X}_{t-p} + \mathbf{e}_t$$

where the $i$-periods-back vector observation $\mathbf{X}_{t-i}$ is called the $i$-th lag of $\mathbf{X}$, $c$ is a $k \times 1$ vector of constants (intercepts), $A_i$ is a time-invariant $k \times k$ matrix and $\mathbf{e}_t$ is a $k \times 1$ random vector of error terms.
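A minimal sketch simulating the special case $p = 1$, a VAR(1), with arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
k, T = 2, 500                               # vector dimension and length

# VAR(1): X_t = c + A1 X_{t-1} + e_t, with A1 chosen stable
# (all eigenvalues inside the unit circle).
c = np.array([0.1, -0.2])
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])

X = np.zeros((T, k))
for t in range(1, T):
    e_t = rng.normal(scale=0.1, size=k)     # error-term random vector
    X[t] = c + A1 @ X[t - 1] + e_t
```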
Stark, Henry; Woods, John W. (2012). "Random Vectors". Probability, Statistics, and Random Processes for Engineers (4th ed.). Pearson. pp. 295–339. ISBN 978-0-13-231123-6.