Multivariate random variable

From Wikipedia, the free encyclopedia
Random variable with multiple component dimensions
For broader coverage of this topic, see Multivariate statistics.

In probability and statistics, a multivariate random variable or random vector is a list or vector of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system — often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of an unspecified person from within a group would be a random vector. Normally each element of a random vector is a real number.

Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, stochastic process, etc.

Formally, a multivariate random variable is a column vector $\mathbf{X} = (X_1, \dots, X_n)^{\mathsf{T}}$ (or its transpose, which is a row vector) whose components are random variables on the probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ is the sample space, $\mathcal{F}$ is the sigma-algebra (the collection of all events), and $P$ is the probability measure (a function returning each event's probability).

Probability distribution

Main article: Multivariate probability distribution

Every random vector gives rise to a probability measure on $\mathbb{R}^n$ with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector.

The distributions of each of the component random variables $X_i$ are called marginal distributions. The conditional probability distribution of $X_i$ given $X_j$ is the probability distribution of $X_i$ when $X_j$ is known to be a particular value.

The cumulative distribution function $F_{\mathbf{X}} : \mathbb{R}^n \mapsto [0,1]$ of a random vector $\mathbf{X} = (X_1, \dots, X_n)^{\mathsf{T}}$ is defined as[1]: p.15

$$F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \leq x_1, \ldots, X_n \leq x_n) \qquad \text{(Eq. 1)}$$

where $\mathbf{x} = (x_1, \dots, x_n)^{\mathsf{T}}$.
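
The joint CDF of a sampled random vector can be estimated empirically by counting how often all components fall at or below a given point. A minimal Python sketch (the bivariate normal parameters here are illustrative assumptions, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: a bivariate normal random vector X = (X1, X2)^T
mean = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])
samples = rng.multivariate_normal(mean, cov, size=100_000)  # shape (N, 2)

def empirical_cdf(samples, x):
    """Monte Carlo estimate of F_X(x) = P(X1 <= x1, ..., Xn <= xn)."""
    return np.mean(np.all(samples <= x, axis=1))

print(empirical_cdf(samples, np.array([0.0, 1.0])))  # approx P(X1 <= 0, X2 <= 1)
```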

Operations on random vectors


Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.

Affine transformations


Similarly, a new random vector $\mathbf{Y}$ can be defined by applying an affine transformation $g\colon \mathbb{R}^n \to \mathbb{R}^n$ to a random vector $\mathbf{X}$:

$\mathbf{Y} = \mathbf{A}\mathbf{X} + b$, where $\mathbf{A}$ is an $n \times n$ matrix and $b$ is an $n \times 1$ column vector.

If $\mathbf{A}$ is an invertible matrix and $\mathbf{X}$ has a probability density function $f_{\mathbf{X}}$, then the probability density of $\mathbf{Y}$ is

$$f_{\mathbf{Y}}(y) = \frac{f_{\mathbf{X}}(\mathbf{A}^{-1}(y - b))}{|\det \mathbf{A}|}.$$
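
For a Gaussian $\mathbf{X}$ this change-of-variables formula can be checked numerically, since $\mathbf{Y} = \mathbf{A}\mathbf{X} + b$ is then also Gaussian with mean $\mathbf{A}\operatorname{E}[\mathbf{X}] + b$ and covariance $\mathbf{A}K_{\mathbf{X}\mathbf{X}}\mathbf{A}^T$. A sketch with assumed values for $\mathbf{A}$, $b$, and the distribution of $\mathbf{X}$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Assumed example: X ~ N(mu, K), Y = A X + b
mu = np.array([0.0, 0.0])
K = np.array([[1.0, 0.3],
              [0.3, 1.0]])
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])     # invertible
b = np.array([1.0, -1.0])

f_X = multivariate_normal(mu, K).pdf
y = np.array([0.5, 0.2])

# Density of Y via the change-of-variables formula
f_Y_formula = f_X(np.linalg.solve(A, y - b)) / abs(np.linalg.det(A))

# Density of Y using the fact that Y ~ N(A mu + b, A K A^T)
f_Y_direct = multivariate_normal(A @ mu + b, A @ K @ A.T).pdf(y)

print(f_Y_formula, f_Y_direct)   # the two values agree
```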

Invertible mappings


More generally we can study invertible mappings of random vectors.[2]: p.284–285 

Let $g$ be a one-to-one mapping from an open subset $\mathcal{D}$ of $\mathbb{R}^n$ onto a subset $\mathcal{R}$ of $\mathbb{R}^n$, let $g$ have continuous partial derivatives in $\mathcal{D}$ and let the Jacobian determinant $\det\left(\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right)$ of $g$ be zero at no point of $\mathcal{D}$. Assume that the real random vector $\mathbf{X}$ has a probability density function $f_{\mathbf{X}}(\mathbf{x})$ and satisfies $P(\mathbf{X} \in \mathcal{D}) = 1$. Then the random vector $\mathbf{Y} = g(\mathbf{X})$ has probability density

$$f_{\mathbf{Y}}(\mathbf{y}) = \left.\frac{f_{\mathbf{X}}(\mathbf{x})}{\left|\det\left(\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right)\right|}\right|_{\mathbf{x} = g^{-1}(\mathbf{y})} \mathbf{1}(\mathbf{y} \in R_{\mathbf{Y}})$$

where $\mathbf{1}$ denotes the indicator function and the set $R_{\mathbf{Y}} = \{\mathbf{y} = g(\mathbf{x}) : f_{\mathbf{X}}(\mathbf{x}) > 0\} \subseteq \mathcal{R}$ denotes the support of $\mathbf{Y}$.
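
As an illustration, consider the assumed componentwise map $g(\mathbf{x}) = (e^{x_1}, e^{x_2})$ applied to a bivariate normal vector; its Jacobian determinant is $e^{x_1}e^{x_2} = y_1 y_2$. The sketch below compares the density given by the formula with a crude Monte Carlo box estimate (all parameters assumed):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Assumed example: X ~ N(mu, K) on D = R^2, Y = g(X) with g(x) = exp(x) componentwise
mu = np.array([0.0, 0.0])
K = np.array([[1.0, 0.2],
              [0.2, 0.5]])
f_X = multivariate_normal(mu, K).pdf

def f_Y(y):
    """Density of Y = exp(X) via the change-of-variables formula."""
    x = np.log(y)                 # g^{-1}(y)
    jac_det = np.prod(y)          # det(dy/dx) = exp(x1) * exp(x2) = y1 * y2
    return f_X(x) / abs(jac_det)

# Rough check against a Monte Carlo estimate of the density near a point
rng = np.random.default_rng(1)
samples = np.exp(rng.multivariate_normal(mu, K, size=500_000))
y0, h = np.array([1.0, 1.5]), 0.05
in_box = np.all(np.abs(samples - y0) <= h / 2, axis=1)
print(f_Y(y0), in_box.mean() / h**2)   # the two estimates roughly agree
```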

Expected value


The expected value or mean of a random vector $\mathbf{X}$ is a fixed vector $\operatorname{E}[\mathbf{X}]$ whose elements are the expected values of the respective random variables.[3]: p.333

$$\operatorname{E}[\mathbf{X}] = (\operatorname{E}[X_1], \ldots, \operatorname{E}[X_n])^{\mathsf{T}} \qquad \text{(Eq. 2)}$$
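
In practice the mean vector is estimated by averaging each component over samples, as in this minimal sketch (the distribution below is an assumed example):

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed example: 10,000 draws of a 3-dimensional random vector, one row per draw
samples = rng.multivariate_normal(mean=[1.0, -2.0, 0.5],
                                  cov=np.diag([1.0, 4.0, 0.25]),
                                  size=10_000)

mean_estimate = samples.mean(axis=0)   # componentwise sample mean, approximates E[X]
print(mean_estimate)                   # close to [1.0, -2.0, 0.5]
```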

Covariance and cross-covariance


Definitions


The covariance matrix (also called second central moment or variance-covariance matrix) of an $n \times 1$ random vector is an $n \times n$ matrix whose (i,j)th element is the covariance between the i th and the j th random variables. The covariance matrix is the expected value, element by element, of the $n \times n$ matrix computed as $[\mathbf{X} - \operatorname{E}[\mathbf{X}]][\mathbf{X} - \operatorname{E}[\mathbf{X}]]^T$, where the superscript T refers to the transpose of the indicated vector:[2]: p. 464 [3]: p. 335

$$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{Var}[\mathbf{X}] = \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^T] = \operatorname{E}[\mathbf{X}\mathbf{X}^T] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T \qquad \text{(Eq. 3)}$$

By extension, the cross-covariance matrix between two random vectors $\mathbf{X}$ and $\mathbf{Y}$ ($\mathbf{X}$ having $n$ elements and $\mathbf{Y}$ having $p$ elements) is the $n \times p$ matrix[3]: p. 336

$$\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^T] = \operatorname{E}[\mathbf{X}\mathbf{Y}^T] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T \qquad \text{(Eq. 4)}$$

where again the matrix expectation is taken element-by-element in the matrix. Here the (i,j)th element is the covariance between the i th element of $\mathbf{X}$ and the j th element of $\mathbf{Y}$.
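
Both matrices are commonly estimated from paired samples by averaging outer products of the centered vectors; a minimal numpy sketch on assumed data:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000
X = rng.normal(size=(N, 3))                    # assumed 3-dimensional random vector
Y = X[:, :2] @ np.array([[1.0, 0.0],
                         [2.0, 1.0]]) + rng.normal(size=(N, 2))  # assumed 2-dim vector

Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)

K_XX = Xc.T @ Xc / N          # sample covariance matrix (Eq. 3), shape (3, 3)
K_XY = Xc.T @ Yc / N          # sample cross-covariance matrix (Eq. 4), shape (3, 2)

print(K_XX)
print(K_XY)
# np.cov(X, rowvar=False) gives the corresponding (N-1 normalized) estimate of K_XX
```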

Properties


The covariance matrix is a symmetric matrix, i.e.[2]: p. 466

$$\operatorname{K}_{\mathbf{X}\mathbf{X}}^T = \operatorname{K}_{\mathbf{X}\mathbf{X}}.$$

The covariance matrix is a positive semidefinite matrix, i.e.[2]: p. 465

$$\mathbf{a}^T \operatorname{K}_{\mathbf{X}\mathbf{X}} \mathbf{a} \geq 0 \quad \text{for all } \mathbf{a} \in \mathbb{R}^n.$$

The cross-covariance matrix $\operatorname{Cov}[\mathbf{Y},\mathbf{X}]$ is simply the transpose of the matrix $\operatorname{Cov}[\mathbf{X},\mathbf{Y}]$, i.e.

$$\operatorname{K}_{\mathbf{Y}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}}^T.$$
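
These three properties are easy to confirm numerically on sample estimates (a sketch on assumed data; the equalities hold up to floating-point error):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(20_000, 4))
K_XX = np.cov(X, rowvar=False)

# Symmetry
print(np.allclose(K_XX, K_XX.T))                   # True

# Positive semidefiniteness: all eigenvalues are >= 0 (up to numerical error)
print(np.all(np.linalg.eigvalsh(K_XX) >= -1e-12))  # True

# Transpose relation between the two cross-covariance matrices
Y = rng.normal(size=(20_000, 2))
joint = np.cov(np.hstack([X, Y]), rowvar=False)
K_XY, K_YX = joint[:4, 4:], joint[4:, :4]
print(np.allclose(K_YX, K_XY.T))                   # True
```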

Uncorrelatedness


Two random vectors $\mathbf{X} = (X_1, \ldots, X_m)^T$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ are called uncorrelated if

$$\operatorname{E}[\mathbf{X}\mathbf{Y}^T] = \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T.$$

They are uncorrelated if and only if their cross-covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{Y}}$ is zero.[3]: p. 337

Correlation and cross-correlation


Definitions


The correlation matrix (also called second moment) of an $n \times 1$ random vector is an $n \times n$ matrix whose (i,j)th element is the correlation between the i th and the j th random variables. The correlation matrix is the expected value, element by element, of the $n \times n$ matrix computed as $\mathbf{X}\mathbf{X}^T$, where the superscript T refers to the transpose of the indicated vector:[4]: p. 190 [3]: p. 334

$$\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X}\mathbf{X}^T] \qquad \text{(Eq. 5)}$$

By extension, the cross-correlation matrix between two random vectors $\mathbf{X}$ and $\mathbf{Y}$ ($\mathbf{X}$ having $n$ elements and $\mathbf{Y}$ having $p$ elements) is the $n \times p$ matrix

$$\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[\mathbf{X}\mathbf{Y}^T] \qquad \text{(Eq. 6)}$$

Properties


The correlation matrix is related to the covariance matrix by

$$\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T.$$

Similarly for the cross-correlation matrix and the cross-covariance matrix:

$$\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T$$
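
Both identities hold exactly for sample moments when the same normalization (division by N) is used throughout, as this sketch on assumed data illustrates:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000
X = rng.multivariate_normal([1.0, -1.0], [[2.0, 0.5], [0.5, 1.0]], size=N)
Y = X @ np.array([[1.0], [0.5]]) + rng.normal(size=(N, 1))   # assumed 1-dim Y

R_XX = X.T @ X / N                          # second moment (Eq. 5)
K_XX = np.cov(X, rowvar=False, bias=True)   # covariance (Eq. 3), divided by N
mX, mY = X.mean(axis=0), Y.mean(axis=0)

print(np.allclose(R_XX, K_XX + np.outer(mX, mX)))   # True

R_XY = X.T @ Y / N                                   # cross second moment (Eq. 6)
K_XY = (X - mX).T @ (Y - mY) / N                     # cross-covariance (Eq. 4)
print(np.allclose(R_XY, K_XY + np.outer(mX, mY)))    # True
```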

Orthogonality


Two random vectors of the same size $\mathbf{X} = (X_1, \ldots, X_n)^T$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^T$ are called orthogonal if

$$\operatorname{E}[\mathbf{X}^T\mathbf{Y}] = 0.$$

Independence

Main article: Independence (probability theory)

Two random vectors $\mathbf{X}$ and $\mathbf{Y}$ are called independent if for all $\mathbf{x}$ and $\mathbf{y}$

$$F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y})$$

where $F_{\mathbf{X}}(\mathbf{x})$ and $F_{\mathbf{Y}}(\mathbf{y})$ denote the cumulative distribution functions of $\mathbf{X}$ and $\mathbf{Y}$ and $F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y})$ denotes their joint cumulative distribution function. Independence of $\mathbf{X}$ and $\mathbf{Y}$ is often denoted by $\mathbf{X} \perp\!\!\!\perp \mathbf{Y}$. Written component-wise, $\mathbf{X}$ and $\mathbf{Y}$ are called independent if for all $x_1, \ldots, x_m, y_1, \ldots, y_n$

$$F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1,\ldots,x_m,y_1,\ldots,y_n) = F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \cdot F_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n).$$
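
For sampled data, this factorization can be checked approximately at individual points by comparing the empirical joint CDF with the product of the empirical marginal CDFs. A Monte Carlo sketch with assumed, independently generated $\mathbf{X}$ and $\mathbf{Y}$:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200_000
X = rng.normal(size=(N, 2))          # assumed 2-dim random vector
Y = rng.exponential(size=(N, 1))     # assumed 1-dim random vector, independent of X

x0 = np.array([0.3, -0.4])
y0 = np.array([1.0])

F_joint = np.mean(np.all(X <= x0, axis=1) & np.all(Y <= y0, axis=1))
F_X = np.mean(np.all(X <= x0, axis=1))
F_Y = np.mean(np.all(Y <= y0, axis=1))

print(F_joint, F_X * F_Y)   # approximately equal when X and Y are independent
```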

Characteristic function


The characteristic function of a random vector $\mathbf{X}$ with $n$ components is a function $\mathbb{R}^n \to \mathbb{C}$ that maps every vector $\boldsymbol{\omega} = (\omega_1, \ldots, \omega_n)^T$ to a complex number. It is defined by[2]: p. 468

$$\varphi_{\mathbf{X}}(\boldsymbol{\omega}) = \operatorname{E}\left[e^{i(\boldsymbol{\omega}^T\mathbf{X})}\right] = \operatorname{E}\left[e^{i(\omega_1 X_1 + \ldots + \omega_n X_n)}\right].$$
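
The expectation can be approximated by averaging $e^{i\boldsymbol{\omega}^T \mathbf{X}}$ over samples; for a Gaussian vector the result can be compared with the known closed form $\exp\!\big(i\boldsymbol{\omega}^T\mu - \tfrac{1}{2}\boldsymbol{\omega}^T K \boldsymbol{\omega}\big)$. A sketch with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([0.5, -1.0])
K = np.array([[1.0, 0.4],
              [0.4, 2.0]])
X = rng.multivariate_normal(mu, K, size=300_000)

omega = np.array([0.7, -0.2])

phi_mc = np.mean(np.exp(1j * X @ omega))                       # Monte Carlo estimate
phi_exact = np.exp(1j * omega @ mu - 0.5 * omega @ K @ omega)  # Gaussian closed form

print(phi_mc, phi_exact)   # the two complex numbers are close
```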

Further properties


Expectation of a quadratic form


One can take the expectation of a quadratic form in the random vector $\mathbf{X}$ as follows:[5]: pp. 170–171

$$\operatorname{E}[\mathbf{X}^T A \mathbf{X}] = \operatorname{E}[\mathbf{X}]^T A \operatorname{E}[\mathbf{X}] + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}}),$$

where $K_{\mathbf{X}\mathbf{X}}$ is the covariance matrix of $\mathbf{X}$ and $\operatorname{tr}$ refers to the trace of a matrix — that is, to the sum of the elements on its main diagonal (from upper left to lower right). Since the quadratic form is a scalar, so is its expectation.
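
This identity can be verified by simulation; the matrix $A$ and the distribution of $\mathbf{X}$ below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
mu = np.array([1.0, 2.0, -1.0])
K = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

X = rng.multivariate_normal(mu, K, size=500_000)

mc = np.mean(np.einsum('ni,ij,nj->n', X, A, X))   # Monte Carlo E[X^T A X]
exact = mu @ A @ mu + np.trace(A @ K)             # E[X]^T A E[X] + tr(A K_XX)

print(mc, exact)   # the two values are close
```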

Proof: Let $\mathbf{z}$ be an $m \times 1$ random vector with $\operatorname{E}[\mathbf{z}] = \mu$ and $\operatorname{Cov}[\mathbf{z}] = V$ and let $A$ be an $m \times m$ non-stochastic matrix.

Then based on the formula for the covariance, if we denote $\mathbf{z}^T = \mathbf{X}$ and $\mathbf{z}^T A^T = \mathbf{Y}$, we see that:

$$\operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}[\mathbf{X}\mathbf{Y}^T] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T$$

Hence

$$\begin{aligned}
\operatorname{E}[XY^T] &= \operatorname{Cov}[X,Y] + \operatorname{E}[X]\operatorname{E}[Y]^T \\
\operatorname{E}[z^T A z] &= \operatorname{Cov}[z^T, z^T A^T] + \operatorname{E}[z^T]\operatorname{E}[z^T A^T]^T \\
&= \operatorname{Cov}[z^T, z^T A^T] + \mu^T (\mu^T A^T)^T \\
&= \operatorname{Cov}[z^T, z^T A^T] + \mu^T A \mu,
\end{aligned}$$

which leaves us to show that

$$\operatorname{Cov}[z^T, z^T A^T] = \operatorname{tr}(AV).$$

This is true based on the fact that one can cyclically permute matrices when taking a trace without changing the end result (e.g. $\operatorname{tr}(AB) = \operatorname{tr}(BA)$).

We see that

$$\begin{aligned}
\operatorname{Cov}[z^T, z^T A^T] &= \operatorname{E}\left[\left(z^T - \operatorname{E}(z^T)\right)\left(z^T A^T - \operatorname{E}\left(z^T A^T\right)\right)^T\right] \\
&= \operatorname{E}\left[(z^T - \mu^T)(z^T A^T - \mu^T A^T)^T\right] \\
&= \operatorname{E}\left[(z - \mu)^T (Az - A\mu)\right].
\end{aligned}$$

And since

$$(z - \mu)^T (Az - A\mu)$$

is a scalar, then

$$(z - \mu)^T (Az - A\mu) = \operatorname{tr}\left((z - \mu)^T (Az - A\mu)\right) = \operatorname{tr}\left((z - \mu)^T A (z - \mu)\right)$$

trivially. Using the permutation we get:

$$\operatorname{tr}\left((z - \mu)^T A (z - \mu)\right) = \operatorname{tr}\left(A (z - \mu)(z - \mu)^T\right),$$

and by plugging this into the original formula we get:

$$\begin{aligned}
\operatorname{Cov}\left[z^T, z^T A^T\right] &= \operatorname{E}\left[(z - \mu)^T (Az - A\mu)\right] \\
&= \operatorname{E}\left[\operatorname{tr}\left(A(z - \mu)(z - \mu)^T\right)\right] \\
&= \operatorname{tr}\left(A \cdot \operatorname{E}\left((z - \mu)(z - \mu)^T\right)\right) \\
&= \operatorname{tr}(AV).
\end{aligned}$$

Expectation of the product of two different quadratic forms


One can take the expectation of the product of two different quadratic forms in a zero-mean Gaussian random vector $\mathbf{X}$ as follows:[5]: pp. 162–176

$$\operatorname{E}\left[(\mathbf{X}^T A \mathbf{X})(\mathbf{X}^T B \mathbf{X})\right] = 2\operatorname{tr}(A K_{\mathbf{X}\mathbf{X}} B K_{\mathbf{X}\mathbf{X}}) + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}})\operatorname{tr}(B K_{\mathbf{X}\mathbf{X}})$$

where again $K_{\mathbf{X}\mathbf{X}}$ is the covariance matrix of $\mathbf{X}$. Again, since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.
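
A Monte Carlo check of this formula on an assumed example: a zero-mean Gaussian vector with an assumed covariance, and symmetric matrices $A$ and $B$ (symmetry costs no generality here, since $\mathbf{X}^T A \mathbf{X} = \mathbf{X}^T \tfrac{A + A^T}{2} \mathbf{X}$):

```python
import numpy as np

rng = np.random.default_rng(9)
K = np.array([[1.0, 0.3],
              [0.3, 2.0]])
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])    # symmetric
B = np.array([[1.0, -0.2],
              [-0.2, 3.0]])   # symmetric

X = rng.multivariate_normal(np.zeros(2), K, size=1_000_000)

qA = np.einsum('ni,ij,nj->n', X, A, X)   # X^T A X for each sample
qB = np.einsum('ni,ij,nj->n', X, B, X)   # X^T B X for each sample

mc = np.mean(qA * qB)
exact = 2 * np.trace(A @ K @ B @ K) + np.trace(A @ K) * np.trace(B @ K)

print(mc, exact)   # the two values are close
```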

Applications


Portfolio theory


In portfolio theory in finance, an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector $\mathbf{r}$ of random returns on the individual assets, and the portfolio return p (a random scalar) is the inner product of the vector of random returns with a vector w of portfolio weights — the fractions of the portfolio placed in the respective assets. Since $p = \mathbf{w}^T\mathbf{r}$, the expected value of the portfolio return is $\mathbf{w}^T\operatorname{E}(\mathbf{r})$ and the variance of the portfolio return can be shown to be $\mathbf{w}^T C \mathbf{w}$, where $C$ is the covariance matrix of $\mathbf{r}$.
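
A minimal sketch of these two portfolio formulas, with assumed expected returns, covariance matrix, and weights:

```python
import numpy as np

# Assumed example: three assets
mu = np.array([0.08, 0.05, 0.12])      # E[r], expected asset returns
C = np.array([[0.040, 0.006, 0.010],
              [0.006, 0.010, 0.004],
              [0.010, 0.004, 0.090]])  # covariance matrix of r
w = np.array([0.5, 0.3, 0.2])          # portfolio weights (sum to 1)

expected_return = w @ mu               # w^T E(r)
variance = w @ C @ w                   # w^T C w

print(expected_return, variance, np.sqrt(variance))
```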

Regression theory


In linear regression theory, we have data on n observations on a dependent variable y and n observations on each of k independent variables $x_j$. The observations on the dependent variable are stacked into a column vector y; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a design matrix X (not denoting a random vector in this context) of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:

$$y = X\beta + e,$$

where β is a postulated fixed but unknown vector of k response coefficients, and e is an unknown random vector reflecting random influences on the dependent variable. By some chosen technique such as ordinary least squares, a vector $\hat{\beta}$ is chosen as an estimate of β, and the estimate of the vector e, denoted $\hat{e}$, is computed as

$$\hat{e} = y - X\hat{\beta}.$$

Then the statistician must analyze the properties of $\hat{\beta}$ and $\hat{e}$, which are viewed as random vectors since a randomly different selection of n cases to observe would have resulted in different values for them.
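
A compact sketch of this setup on simulated data, with ordinary least squares computed via numpy (all numbers assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(10)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # design matrix with intercept
beta_true = np.array([1.0, 2.0, -0.5])                          # assumed true coefficients
e = rng.normal(scale=0.3, size=n)                               # random disturbances
y = X @ beta_true + e

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate of beta
e_hat = y - X @ beta_hat                          # residual vector, estimate of e

print(beta_hat)     # close to beta_true
print(e_hat[:5])
```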

Vector time series


The evolution of a k × 1 random vector $\mathbf{X}$ through time can be modelled as a vector autoregression (VAR) as follows:

$$\mathbf{X}_t = c + A_1 \mathbf{X}_{t-1} + A_2 \mathbf{X}_{t-2} + \cdots + A_p \mathbf{X}_{t-p} + \mathbf{e}_t,$$

where the i-periods-back vector observation $\mathbf{X}_{t-i}$ is called the i-th lag of $\mathbf{X}$, c is a k × 1 vector of constants (intercepts), $A_i$ is a time-invariant k × k matrix and $\mathbf{e}_t$ is a k × 1 random vector of error terms.
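
A short simulation sketch of a VAR(1) special case of this model, with assumed intercept, coefficient matrix, and error distribution:

```python
import numpy as np

rng = np.random.default_rng(11)
k, T = 2, 500
c = np.array([0.1, -0.2])            # assumed intercept vector
A1 = np.array([[0.5, 0.1],
               [0.0, 0.3]])          # assumed lag-1 coefficient matrix (stable)

X = np.zeros((T, k))
for t in range(1, T):
    e_t = rng.normal(scale=0.1, size=k)   # error term e_t
    X[t] = c + A1 @ X[t - 1] + e_t        # VAR(1) recursion

print(X[-5:])   # last few observations of the simulated vector time series
```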

References

  1. Gallager, Robert G. (2013). Stochastic Processes: Theory for Applications. Cambridge University Press. ISBN 978-1-107-03975-9.
  2. Taboga, Marco (2017). Lectures on Probability Theory and Mathematical Statistics. CreateSpace Independent Publishing Platform. ISBN 978-1981369195.
  3. Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.
  4. Papoulis, Athanasios (1991). Probability, Random Variables and Stochastic Processes (Third ed.). McGraw-Hill. ISBN 0-07-048477-5.
  5. Kendrick, David (1981). Stochastic Control for Economic Models. McGraw-Hill. ISBN 0-07-033962-7.

Further reading

  • Stark, Henry; Woods, John W. (2012). "Random Vectors". Probability, Statistics, and Random Processes for Engineers (Fourth ed.). Pearson. pp. 295–339. ISBN 978-0-13-231123-6.