Covariance and contravariance of vectors

From Wikipedia, the free encyclopedia
Vector behavior under coordinate changes
For use of "covariance" in the context of special relativity, see Lorentz covariance. For other uses of "covariant" or "contravariant", see Covariance and contravariance (disambiguation).

Figure: A vector v represented in terms of the tangent basis e_1, e_2, e_3 to the coordinate curves (left), and the dual basis, covector basis, or reciprocal basis e^1, e^2, e^3 to the coordinate surfaces (right), in 3-d general curvilinear coordinates (q^1, q^2, q^3), a tuple of numbers defining a point in a position space. Note that the basis and cobasis coincide only when the basis is orthonormal.[1]

In physics, especially in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis.[2] Briefly, a contravariant vector is a list of numbers that transforms oppositely to a change of basis, and a covariant vector is a list of numbers that transforms in the same way. Contravariant vectors are often just called vectors and covariant vectors are called covectors or dual vectors. The terms covariant and contravariant were introduced by James Joseph Sylvester in 1851.[3][4]

Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems. Associated with any coordinate system is a natural choice of coordinate basis for vectors based at each point of the space, and covariance and contravariance are particularly important for understanding how the coordinate description of a vector changes on passing from one coordinate system to another. Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance.

Introduction


In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list (or tuple) of numbers such as

\[ (v_1, v_2, v_3). \]

The numbers in the list depend on the choice of coordinate system. For instance, if the vector represents position with respect to an observer (position vector), then the coordinate system may be obtained from a system of rigid rods, or reference axes, along which the components $v_1$, $v_2$, and $v_3$ are measured. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vector will transform in a certain way in passing from one coordinate system to another.

A simple illustrative case is that of a Euclidean vector. Once a set of basis vectors has been defined, the components of a vector always vary oppositely to the basis vectors, and the vector is therefore called a contravariant tensor. Take a standard position vector for example. By changing the scale of the reference axes from meters to centimeters (that is, dividing the scale of the reference axes by 100, so that the basis vectors are now 0.01 meters long), the components of the measured position vector are multiplied by 100. Because a vector's components change scale inversely to changes in the scale of the reference axes, a vector is called a contravariant tensor.

A vector, which is an example of a contravariant tensor, has components that transform inversely to the transformation of the reference axes (with example transformations including rotation and dilation). The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would shrink in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an $n \times n$ invertible matrix $M$, so that the basis vectors transform according to

\[ \begin{bmatrix}\mathbf{e}'_1 \ \mathbf{e}'_2 \ \cdots \ \mathbf{e}'_n\end{bmatrix} = \begin{bmatrix}\mathbf{e}_1 \ \mathbf{e}_2 \ \cdots \ \mathbf{e}_n\end{bmatrix} M, \]

then the components of a vector v in the original basis ($v^i$) must be correspondingly transformed via

\[ \begin{bmatrix} v'^1 \\ v'^2 \\ \vdots \\ v'^n \end{bmatrix} = M^{-1} \begin{bmatrix} v^1 \\ v^2 \\ \vdots \\ v^n \end{bmatrix}. \]

The components of a vector are often represented arranged in a column.

By contrast, a covector has components that transform like the reference axes. It lives in the dual vector space, and represents a linear map from vectors to scalars. The dot-product operator involving a fixed vector is a good example of a covector. To illustrate, assume we have a covector defined as $\mathbf{v} \cdot {}$, where $\mathbf{v}$ is a vector. The components of this covector in some arbitrary basis are

\[ \begin{bmatrix}\mathbf{v}\cdot\mathbf{e}_1 & \mathbf{v}\cdot\mathbf{e}_2 & \cdots & \mathbf{v}\cdot\mathbf{e}_n\end{bmatrix}, \]

with $[\mathbf{e}_1 \ \mathbf{e}_2 \ \cdots \ \mathbf{e}_n]$ being the basis vectors of the corresponding vector space. (This can be derived by requiring the correct answer for the dot-product operation when applying the covector to an arbitrary vector $\mathbf{w}$ with components $[w^1 \ w^2 \ \cdots \ w^n]^{\mathrm{T}}$.) The covariance of these covector components is then seen by noting that if a transformation described by an $n \times n$ invertible matrix $M$ were applied to the basis vectors of the corresponding vector space,

\[ \begin{bmatrix}\mathbf{e}'_1 \ \mathbf{e}'_2 \ \cdots \ \mathbf{e}'_n\end{bmatrix} = \begin{bmatrix}\mathbf{e}_1 \ \mathbf{e}_2 \ \cdots \ \mathbf{e}_n\end{bmatrix} M, \]

then the components of the covector $\mathbf{v} \cdot {}$ will transform with the same matrix $M$, namely,

\[ \begin{bmatrix}\mathbf{v}\cdot\mathbf{e}'_1 & \mathbf{v}\cdot\mathbf{e}'_2 & \cdots & \mathbf{v}\cdot\mathbf{e}'_n\end{bmatrix} = \begin{bmatrix}\mathbf{v}\cdot\mathbf{e}_1 & \mathbf{v}\cdot\mathbf{e}_2 & \cdots & \mathbf{v}\cdot\mathbf{e}_n\end{bmatrix} M. \]

The components of a covector are often represented arranged in a row.

A third concept related to covariance and contravariance is invariance. A scalar (also called a type-0 or rank-0 tensor) is an object that does not vary with a change of basis. An example of a physical observable that is a scalar is the mass of a particle. The single, scalar value of mass is independent of changes in the basis vectors and is consequently called invariant. The magnitude of a vector (such as distance) is another example of an invariant, because it remains fixed even as the geometrical vector components vary. (For example, for a position vector of length 3 meters, if all Cartesian basis vectors are changed from 1 meter to 0.01 meters in length, the length of the position vector remains unchanged at 3 meters, although the vector components will all increase by a factor of 100.) The scalar product of a vector and a covector is invariant, because one has components that vary with the base change, and the other has components that vary oppositely, and the two effects cancel out. One thus says that covectors are dual to vectors.
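These opposite transformation behaviors, and the resulting invariance of the scalar product, are easy to check numerically. The following NumPy sketch (the basis, the scaling matrix M, and the sample components are illustrative assumptions, not quantities from the text) doubles the basis vectors and verifies that the vector components halve, the covector components double, and the pairing is unchanged:

```python
import numpy as np

# Original basis vectors as columns of E; a change of basis gives E' = E M.
E = np.eye(2)
M = np.array([[2.0, 0.0],
              [0.0, 2.0]])        # basis vectors doubled in length
E_new = E @ M

v = np.array([3.0, 4.0])          # contravariant components of v in basis E

# Contravariant rule: components transform with the inverse of M ...
v_new = np.linalg.inv(M) @ v      # each component is halved

# ... so the geometric vector itself is unchanged:
assert np.allclose(E @ v, E_new @ v_new)

# A covector, e.g. w.( ), has components w.e_i that transform with M itself:
w = np.array([1.0, -2.0])
w_row = w @ E                     # covector components in basis E
w_row_new = w_row @ M             # covariant rule: same matrix M

# The scalar pairing is invariant under the change of basis:
assert np.isclose(w_row @ v, w_row_new @ v_new)
```

The two assertions express exactly the cancellation described above: the factor of 2 picked up by the covector components is cancelled by the factor of 1/2 in the vector components.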

Thus, to summarize:

  • A vector, or tangent vector, has components that contra-vary with a change of basis to compensate. That is, the matrix that transforms the vector components must be the inverse of the matrix that transforms the basis vectors. The components of vectors (as opposed to those of covectors) are said to be contravariant. In Einstein notation (implicit summation over a repeated index), contravariant components are denoted with upper indices, as in
    \[ \mathbf{v} = v^i \mathbf{e}_i. \]
  • A covector, or cotangent vector, has components that co-vary with a change of basis in the corresponding (initial) vector space. That is, the components must be transformed by the same matrix as the change-of-basis matrix in the corresponding (initial) vector space. The components of covectors (as opposed to those of vectors) are said to be covariant. In Einstein notation, covariant components are denoted with lower indices, as in
    \[ \mathbf{w} = w_i \mathbf{e}^i. \]
  • The scalar product of a vector and a covector is the scalar $v^i w_i$, which is invariant. It is the duality pairing of vectors and covectors.

Definition


The general formulation of covariance and contravariance refers to how the components of a coordinate vector transform under a change of basis (passive transformation).[5] Thus let V be a vector space of dimension n over a field of scalars S, and let each of f = (X_1, ..., X_n) and f′ = (Y_1, ..., Y_n) be a basis of V.[note 1] Also, let the change of basis from f to f′ be given by

\[ \mathbf{f} \mapsto \mathbf{f}' = \Bigl(\sum_i a^i_1 X_i, \dots, \sum_i a^i_n X_i\Bigr) = \mathbf{f}A \tag{1} \]

for some invertible n×n matrix A with entries $a^i_j$. Here, each vector Y_j of the f′ basis is a linear combination of the vectors X_i of the f basis, so that

\[ Y_j = \sum_i a^i_j X_i, \]

which are the columns of the matrix product $\mathbf{f}A$.

Contravariant transformation

Main article: Contravariant transformation

A vector v in V is expressed uniquely as a linear combination of the elements X_i of the f basis as

\[ v = \sum_i v^i[\mathbf{f}]\, X_i, \tag{2} \]

where the v^i[f] are elements of the field S known as the components of v in the f basis. Denote the column vector of components of v by v[f]:

\[ \mathbf{v}[\mathbf{f}] = \begin{bmatrix} v^1[\mathbf{f}] \\ v^2[\mathbf{f}] \\ \vdots \\ v^n[\mathbf{f}] \end{bmatrix} \]

so that (2) can be rewritten as a matrix product

\[ v = \mathbf{f}\,\mathbf{v}[\mathbf{f}]. \]

The vector v may also be expressed in terms of the f′ basis, so that

\[ v = \mathbf{f}'\,\mathbf{v}[\mathbf{f}']. \]

However, since the vector v itself is invariant under the choice of basis,

\[ \mathbf{f}\,\mathbf{v}[\mathbf{f}] = v = \mathbf{f}'\,\mathbf{v}[\mathbf{f}']. \]

The invariance of v combined with the relationship (1) between f and f′ implies that

\[ \mathbf{f}\,\mathbf{v}[\mathbf{f}] = \mathbf{f}A\,\mathbf{v}[\mathbf{f}A], \]

giving the transformation rule

\[ \mathbf{v}[\mathbf{f}'] = \mathbf{v}[\mathbf{f}A] = A^{-1}\mathbf{v}[\mathbf{f}]. \]

In terms of components,

\[ v^i[\mathbf{f}A] = \sum_j \tilde{a}^i_j\, v^j[\mathbf{f}] \]

where the coefficients $\tilde{a}^i_j$ are the entries of the inverse matrix of A.

Because the components of the vector v transform with the inverse of the matrix A, these components are said to transform contravariantly under a change of basis.

The way A relates the two pairs is depicted in the following informal diagram using an arrow. The reversal of the arrow indicates a contravariant change:

\[ \begin{aligned} \mathbf{f} &\longrightarrow \mathbf{f}' \\ v[\mathbf{f}] &\longleftarrow v[\mathbf{f}'] \end{aligned} \]
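The rule v[fA] = A⁻¹ v[f] can be checked numerically. In the sketch below (the particular basis f, matrix A, and components are arbitrary illustrative choices), the components in the new basis are computed both from the transformation rule and by re-expanding the invariant vector v in the new basis:

```python
import numpy as np

# Columns of f are the basis vectors X_1, X_2, X_3 (in ambient coordinates).
f = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
# An invertible change-of-basis matrix A, so that f' = f A as in eq. (1).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
f_new = f @ A

v_comp = np.array([1.0, 2.0, 3.0])   # components v[f]
v = f @ v_comp                       # the geometric vector itself

# Transformation rule: v[fA] = A^{-1} v[f]
v_comp_new = np.linalg.solve(A, v_comp)

# Independent check: expand the (unchanged) vector v in the new basis f'.
assert np.allclose(np.linalg.solve(f_new, v), v_comp_new)
```

Both routes give the same components, illustrating that the contravariant rule is exactly what keeps the vector v itself invariant.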

Covariant transformation

Main article: Covariant transformation

A linear functional α on V is expressed uniquely in terms of its components (elements in S) in the f basis as

\[ \alpha(X_i) = \alpha_i[\mathbf{f}], \quad i = 1, 2, \dots, n. \]

These components are the action of α on the basis vectors X_i of the f basis.

Under the change of basis from f to f′ (via (1)), the components transform so that

\[ \alpha_i[\mathbf{f}A] = \alpha(Y_i) = \alpha\Bigl(\sum_j a^j_i X_j\Bigr) = \sum_j a^j_i\, \alpha(X_j) = \sum_j a^j_i\, \alpha_j[\mathbf{f}]. \tag{3} \]

Denote the row vector of components of α by α[f]:

\[ \alpha[\mathbf{f}] = \begin{bmatrix} \alpha_1[\mathbf{f}], \alpha_2[\mathbf{f}], \dots, \alpha_n[\mathbf{f}] \end{bmatrix} \]

so that (3) can be rewritten as the matrix product

\[ \alpha[\mathbf{f}A] = \alpha[\mathbf{f}]A. \]

Because the components of the linear functional α transform with the matrix A, these components are said to transform covariantly under a change of basis.

The way A relates the two pairs is depicted in the following informal diagram using an arrow. A covariant relationship is indicated since the arrows travel in the same direction:

\[ \begin{aligned} \mathbf{f} &\longrightarrow \mathbf{f}' \\ \alpha[\mathbf{f}] &\longrightarrow \alpha[\mathbf{f}'] \end{aligned} \]

Had a column vector representation been used instead, the transformation law would be the transpose

\[ \alpha^{\mathrm{T}}[\mathbf{f}A] = A^{\mathrm{T}}\alpha^{\mathrm{T}}[\mathbf{f}]. \]

Coordinates


The choice of basis f on the vector space V uniquely defines a set of coordinate functions on V, by means of

\[ x^i[\mathbf{f}](v) = v^i[\mathbf{f}]. \]

The coordinates on V are therefore contravariant in the sense that

\[ x^i[\mathbf{f}A] = \sum_{k=1}^n \tilde{a}^i_k\, x^k[\mathbf{f}]. \]

Conversely, a system of n quantities v^i that transform like the coordinates x^i on V defines a contravariant vector (or simply vector). A system of n quantities that transform oppositely to the coordinates is then a covariant vector (or covector).

This formulation of contravariance and covariance is often more natural in applications in which there is a coordinate space (a manifold) on which vectors live as tangent vectors or cotangent vectors. Given a local coordinate system x^i on the manifold, the reference axes for the coordinate system are the vector fields

\[ X_1 = \frac{\partial}{\partial x^1}, \dots, X_n = \frac{\partial}{\partial x^n}. \]

This gives rise to the frame f = (X_1, ..., X_n) at every point of the coordinate patch.

If y^i is a different coordinate system and

\[ Y_1 = \frac{\partial}{\partial y^1}, \dots, Y_n = \frac{\partial}{\partial y^n}, \]

then the frame f′ is related to the frame f by the inverse of the Jacobian matrix of the coordinate transition:

\[ \mathbf{f}' = \mathbf{f}J^{-1}, \quad J = \left(\frac{\partial y^i}{\partial x^j}\right)_{i,j=1}^n. \]

Or, in indices,

\[ \frac{\partial}{\partial y^i} = \sum_{j=1}^n \frac{\partial x^j}{\partial y^i}\, \frac{\partial}{\partial x^j}. \]

A tangent vector is by definition a vector that is a linear combination of the coordinate partials $\partial/\partial x^i$. Thus a tangent vector is defined by

\[ v = \sum_{i=1}^n v^i[\mathbf{f}]\, X_i = \mathbf{f}\,\mathbf{v}[\mathbf{f}]. \]

Such a vector is contravariant with respect to a change of frame. Under changes in the coordinate system, one has

\[ \mathbf{v}\bigl[\mathbf{f}'\bigr] = \mathbf{v}\bigl[\mathbf{f}J^{-1}\bigr] = J\,\mathbf{v}[\mathbf{f}]. \]

Therefore, the components of a tangent vector transform via

\[ v^i\bigl[\mathbf{f}'\bigr] = \sum_{j=1}^n \frac{\partial y^i}{\partial x^j}\, v^j[\mathbf{f}]. \]
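As a concrete check of this transformation law, the following sketch (the Cartesian-to-polar change of coordinates and the sample point and velocity are illustrative choices) transforms a tangent (velocity) vector with the Jacobian and compares the result against a finite-difference approximation:

```python
import numpy as np

# Point and a tangent (velocity) vector in Cartesian coordinates x = (x, y).
x, y = 3.0, 4.0
v_cart = np.array([1.0, 2.0])             # components v^j in the x-frame

# New coordinates y^i = (r, theta); Jacobian J^i_j = dy^i/dx^j at the point.
r = np.hypot(x, y)                        # r = 5
J = np.array([[x / r,      y / r],        # row 1: dr/dx, dr/dy
              [-y / r**2,  x / r**2]])    # row 2: dtheta/dx, dtheta/dy

# Contravariant rule: v^i[f'] = sum_j (dy^i/dx^j) v^j[f]
v_polar = J @ v_cart

# Sanity check: finite-difference change of (r, theta) along the velocity.
eps = 1e-6
def coords(p):
    return np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
fd = (coords(np.array([x, y]) + eps * v_cart) - coords(np.array([x, y]))) / eps
assert np.allclose(v_polar, fd, atol=1e-4)
```

The agreement shows that the components of the velocity in the new coordinates are exactly the directional rates of change of the new coordinate functions, which is the content of the transformation law above.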

Accordingly, a system of n quantities v^i depending on the coordinates that transform in this way on passing from one coordinate system to another is called a contravariant vector.

Covariant and contravariant components of a vector with a metric

Figure: Covariant and contravariant components of a vector when the basis is not orthogonal.

In a finite-dimensional vector space V over a field K with a non-degenerate symmetric bilinear form g : V × V → K (which may be referred to as the metric tensor), there is little distinction between covariant and contravariant vectors, because the bilinear form allows covectors to be identified with vectors. That is, a vector v uniquely determines a covector α via

\[ \alpha(w) = g(v, w) \]

for all vectors w. Conversely, each covector α determines a unique vector v by this equation. Because of this identification of vectors with covectors, one may speak of the covariant components or contravariant components of a vector; they are just representations of the same vector using the reciprocal basis.

Given a basis f = (X_1, ..., X_n) of V, there is a unique reciprocal basis f♯ = (Y^1, ..., Y^n) of V determined by requiring that

\[ g(Y^i, X_j) = \delta^i_j, \]

the Kronecker delta. In terms of these bases, any vector v can be written in two ways:

\[ \begin{aligned} v &= \sum_i v^i[\mathbf{f}]\, X_i = \mathbf{f}\,\mathbf{v}[\mathbf{f}] \\ &= \sum_i v_i[\mathbf{f}^\sharp]\, Y^i = \mathbf{f}^\sharp\,\mathbf{v}^\sharp[\mathbf{f}]. \end{aligned} \]

The components v^i[f] are the contravariant components of the vector v in the basis f, and the components v_i[f] are the covariant components of v in the basis f. The terminology is justified because under a change of basis,

\[ \mathbf{v}[\mathbf{f}A] = A^{-1}\mathbf{v}[\mathbf{f}], \quad \mathbf{v}^\sharp[\mathbf{f}A] = A^{\mathrm{T}}\mathbf{v}^\sharp[\mathbf{f}] \]

where A is an invertible n×n matrix and the matrix transpose has its usual meaning.
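These two transformation laws can be verified together with a small numeric example. In the sketch below (the basis, the vector, and the change-of-basis matrix A are illustrative assumptions), the metric is realized as the Gram matrix of the basis under the Euclidean dot product:

```python
import numpy as np

f = np.array([[2.0, 1.0],
              [0.0, 1.0]])               # columns X_1, X_2 (not orthogonal)
G = f.T @ f                              # metric g_ij = g(X_i, X_j) = X_i.X_j

v_contra = np.array([1.5, 2.0])          # contravariant components v^i
v = f @ v_contra                         # the vector itself

# Covariant components: v_i = g_ij v^j, equivalently v.X_i.
v_cov = G @ v_contra
assert np.allclose(v_cov, f.T @ v)

# Under a change of basis f -> fA, the two kinds transform oppositely:
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v_contra_new = np.linalg.solve(A, v_contra)   # A^{-1} v[f]
v_cov_new = A.T @ v_cov                       # A^T  v#[f]

# Consistency: lowering with the new metric reproduces the A^T rule.
G_new = (f @ A).T @ (f @ A)
assert np.allclose(G_new @ v_contra_new, v_cov_new)
```

The final assertion ties the two rules together: lowering the transformed contravariant components with the transformed metric gives exactly the A-transpose transform of the covariant components.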

Euclidean plane


In the Euclidean plane, the dot product allows vectors to be identified with covectors. If $\mathbf{e}_1, \mathbf{e}_2$ is a basis, then the dual basis $\mathbf{e}^1, \mathbf{e}^2$ satisfies

\[ \begin{aligned} \mathbf{e}^1\cdot\mathbf{e}_1 = 1, &\quad \mathbf{e}^1\cdot\mathbf{e}_2 = 0 \\ \mathbf{e}^2\cdot\mathbf{e}_1 = 0, &\quad \mathbf{e}^2\cdot\mathbf{e}_2 = 1. \end{aligned} \]

Thus, $\mathbf{e}^1$ and $\mathbf{e}_2$ are perpendicular to each other, as are $\mathbf{e}^2$ and $\mathbf{e}_1$, and the lengths of $\mathbf{e}^1$ and $\mathbf{e}^2$ are normalized against $\mathbf{e}_1$ and $\mathbf{e}_2$, respectively.

Example


For example,[6] suppose that we are given a basis $\mathbf{e}_1, \mathbf{e}_2$ consisting of a pair of vectors making a 45° angle with one another, such that $\mathbf{e}_1$ has length 2 and $\mathbf{e}_2$ has length 1. Then the dual basis vectors are given as follows:

  • $\mathbf{e}^2$ is the result of rotating $\mathbf{e}_1$ through an angle of 90° (where the sense is measured by assuming the pair $\mathbf{e}_1, \mathbf{e}_2$ to be positively oriented), and then rescaling so that $\mathbf{e}^2 \cdot \mathbf{e}_2 = 1$ holds.
  • $\mathbf{e}^1$ is the result of rotating $\mathbf{e}_2$ through an angle of 90°, and then rescaling so that $\mathbf{e}^1 \cdot \mathbf{e}_1 = 1$ holds.

Applying these rules, we find

\[ \mathbf{e}^1 = \frac{1}{2}\mathbf{e}_1 - \frac{1}{\sqrt{2}}\mathbf{e}_2 \]

and

\[ \mathbf{e}^2 = -\frac{1}{\sqrt{2}}\mathbf{e}_1 + 2\mathbf{e}_2. \]

Thus the change-of-basis matrix in going from the original basis to the reciprocal basis is

\[ R = \begin{bmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & 2 \end{bmatrix}, \]

since

\[ [\mathbf{e}^1 \ \mathbf{e}^2] = [\mathbf{e}_1 \ \mathbf{e}_2] \begin{bmatrix} \frac{1}{2} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & 2 \end{bmatrix}. \]

For instance, the vector

\[ v = \frac{3}{2}\mathbf{e}_1 + 2\mathbf{e}_2 \]

is a vector with contravariant components

\[ v^1 = \frac{3}{2}, \quad v^2 = 2. \]

The covariant components are obtained by equating the two expressions for the vector v:

\[ v = v_1\mathbf{e}^1 + v_2\mathbf{e}^2 = v^1\mathbf{e}_1 + v^2\mathbf{e}_2 \]

so

\[ \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = R^{-1}\begin{bmatrix} v^1 \\ v^2 \end{bmatrix} = \begin{bmatrix} 4 & \sqrt{2} \\ \sqrt{2} & 1 \end{bmatrix} \begin{bmatrix} v^1 \\ v^2 \end{bmatrix} = \begin{bmatrix} 6 + 2\sqrt{2} \\ 2 + \frac{3}{\sqrt{2}} \end{bmatrix}. \]
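This worked example can be verified numerically. The sketch below realizes the basis concretely as e_1 = (2, 0) and e_2 = (√2/2, √2/2), one particular choice (an assumption) consistent with the stated lengths and 45° angle:

```python
import numpy as np

s = np.sqrt(2.0)
e1 = np.array([2.0, 0.0])                 # length 2
e2 = np.array([s / 2, s / 2])             # length 1, at 45 degrees to e1

# Dual basis claimed in the text:
d1 = 0.5 * e1 - (1 / s) * e2              # e^1
d2 = -(1 / s) * e1 + 2.0 * e2             # e^2

# Duality conditions e^i . e_j = delta^i_j:
assert np.allclose([d1 @ e1, d1 @ e2, d2 @ e1, d2 @ e2], [1, 0, 0, 1])

# The vector v = (3/2) e1 + 2 e2 and its covariant components v_i = v . e_i:
v = 1.5 * e1 + 2.0 * e2
assert np.isclose(v @ e1, 6 + 2 * s)      # v_1 = 6 + 2*sqrt(2)
assert np.isclose(v @ e2, 2 + 3 / s)      # v_2 = 2 + 3/sqrt(2)
```

The assertions confirm both that the stated dual basis satisfies the duality conditions and that the covariant components computed via R⁻¹ agree with the direct dot products v·e_i.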

Three-dimensional Euclidean space


In three-dimensional Euclidean space, one can also determine explicitly the dual basis to a given set of basis vectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ of E³ that are not necessarily assumed to be orthogonal or of unit norm. The dual basis vectors are:

\[ \mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}; \qquad \mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{\mathbf{e}_2 \cdot (\mathbf{e}_3 \times \mathbf{e}_1)}; \qquad \mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{\mathbf{e}_3 \cdot (\mathbf{e}_1 \times \mathbf{e}_2)}. \]

Even when the $\mathbf{e}_i$ and $\mathbf{e}^i$ are not orthonormal, they are still mutually reciprocal:

\[ \mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j. \]

Then the contravariant components of any vector v can be obtained by the dot product of v with the dual basis vectors:

\[ q^1 = \mathbf{v}\cdot\mathbf{e}^1; \qquad q^2 = \mathbf{v}\cdot\mathbf{e}^2; \qquad q^3 = \mathbf{v}\cdot\mathbf{e}^3. \]

Likewise, the covariant components of v can be obtained from the dot product of v with the basis vectors, viz.

\[ q_1 = \mathbf{v}\cdot\mathbf{e}_1; \qquad q_2 = \mathbf{v}\cdot\mathbf{e}_2; \qquad q_3 = \mathbf{v}\cdot\mathbf{e}_3. \]

Then v can be expressed in two (reciprocal) ways, viz.

\[ \mathbf{v} = q^i\mathbf{e}_i = q^1\mathbf{e}_1 + q^2\mathbf{e}_2 + q^3\mathbf{e}_3 \]

or

\[ \mathbf{v} = q_i\mathbf{e}^i = q_1\mathbf{e}^1 + q_2\mathbf{e}^2 + q_3\mathbf{e}^3. \]

Combining the above relations, we have

\[ \mathbf{v} = (\mathbf{v}\cdot\mathbf{e}^i)\mathbf{e}_i = (\mathbf{v}\cdot\mathbf{e}_i)\mathbf{e}^i \]

and we can convert between the basis and dual basis with

\[ q_i = \mathbf{v}\cdot\mathbf{e}_i = (q^j\mathbf{e}_j)\cdot\mathbf{e}_i = (\mathbf{e}_j\cdot\mathbf{e}_i)\,q^j \]

and

\[ q^i = \mathbf{v}\cdot\mathbf{e}^i = (q_j\mathbf{e}^j)\cdot\mathbf{e}^i = (\mathbf{e}^j\cdot\mathbf{e}^i)\,q_j. \]

If the basis vectors are orthonormal, then they are the same as the dual basis vectors.
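The cross-product construction of the dual basis, and the two reciprocal expansions of v, can be checked with a short sketch (the particular non-orthogonal basis and vector are illustrative choices):

```python
import numpy as np

# A non-orthogonal, non-unit basis of 3-space:
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([1.0, 1.0, 1.0])

# Dual basis via cross products; the triple product is the same denominator
# (up to cyclic order) in all three formulas.
vol = e1 @ np.cross(e2, e3)
d1 = np.cross(e2, e3) / vol               # e^1
d2 = np.cross(e3, e1) / vol               # e^2
d3 = np.cross(e1, e2) / vol               # e^3

E = np.array([e1, e2, e3])                # rows e_i
D = np.array([d1, d2, d3])                # rows e^i

# Mutual reciprocity e^i . e_j = delta^i_j:
assert np.allclose(D @ E.T, np.eye(3))

# Components of an arbitrary v, and the two reciprocal expansions:
v = np.array([2.0, -1.0, 3.0])
q_up = D @ v                              # contravariant: q^i = v . e^i
q_dn = E @ v                              # covariant:     q_i = v . e_i
assert np.allclose(q_up @ E, v)           # v = q^i e_i
assert np.allclose(q_dn @ D, v)           # v = q_i e^i
```

Replacing the basis with an orthonormal one (e.g. the standard basis) makes D equal to E, illustrating the closing remark that the basis and dual basis then coincide.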

Vector spaces of any dimension


The following applies to any vector space of dimension n equipped with a non-degenerate, commutative, and distributive dot product, and thus also to Euclidean spaces of any dimension.

All indices in the formulas run from 1 to n. Einstein notation, with implicit summation over a repeated upper (contravariant) and lower (covariant) index, is used throughout.

The historical and geometrical meaning of the terms contravariant and covariant is explained at the end of this section.

Definitions

  1. Covariant basis of a vector space of dimension n: $\mathbf{e}_j \triangleq$ any linearly independent basis, for which in general $\mathbf{e}_i \cdot \mathbf{e}_j \neq \delta_{ij}$, i.e. not necessarily orthonormal (D.1).
  2. Contravariant components of a vector $\mathbf{v}$: $v^i \triangleq \{v^i \mid \mathbf{v} = v^i \mathbf{e}_i\}$ (D.2).
  3. Dual (contravariant) basis of a vector space of dimension n: $\mathbf{e}^i \triangleq \{\mathbf{e}^i : \mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j\}$ (D.3).
  4. Covariant components of a vector $\mathbf{v}$: $v_i \triangleq \{v_i \mid \mathbf{v} = v_i \mathbf{e}^i\}$ (D.4).
  5. Components of the covariant metric tensor: $g_{ij} \triangleq \mathbf{e}_i \cdot \mathbf{e}_j$; the metric tensor can be regarded as a square matrix $G \triangleq \{g_{ij}\}$, since it has only two covariant indices; by the commutativity of the dot product, the $g_{ij}$ are symmetric (D.5).
  6. Components of the contravariant metric tensor: $g^{ij} \triangleq$ the entries of the inverse matrix $G^{-1}$; by the properties of the inverse of a symmetric matrix, they are also symmetric (D.6).

Corollaries


Historical and geometrical meaning

Figure: Aid for explaining the geometrical meaning of covariant and contravariant vector components.

Considering this figure for the case of a Euclidean space with n = 2: since $\mathbf{v} = \mathbf{OA} + \mathbf{OB}$, if we want to express $\mathbf{v}$ in terms of the covariant basis, we have to multiply the basis vectors by the coefficients

\[ v^1 = \frac{|\mathbf{OA}|}{|\mathbf{e}_1|}, \quad v^2 = \frac{|\mathbf{OB}|}{|\mathbf{e}_2|}. \]

With $\mathbf{v}$, and thus $\mathbf{OA}$ and $\mathbf{OB}$, fixed, if the modulus of $\mathbf{e}_i$ increases, the value of the $v^i$ component decreases; that is why these components are called contra-variant (with respect to variation in the modulus of the basis vectors).

Symmetrically, corollary (7) states that the $v_i$ components equal the dot product $\mathbf{v} \cdot \mathbf{e}_i$ between the vector and the covariant basis vectors, and since this is directly proportional to the modulus of the basis vectors, these components are called co-variant.

If we consider the dual (contravariant) basis, the situation is exactly mirrored: the covariant components are contra-variant with respect to the modulus of the dual basis vectors, while the contravariant components are co-variant.

So in the end it all boils down to a matter of convention: historically, the first non-orthonormal basis of the vector space of choice was called "covariant", its dual basis "contravariant", and the corresponding components were named in the mirrored way.

If the covariant basis becomes orthonormal, the dual contravariant basis aligns with it and the covariant components collapse into the contravariant ones; this is the most familiar situation, encountered when dealing with geometrical Euclidean vectors. G and G⁻¹ become the identity matrix I, and:

\[ g_{ij} = \delta_{ij}, \quad g^{ij} = \delta^{ij}, \quad \mathbf{u}\cdot\mathbf{v} = \delta_{ij}u^i v^j = \sum_i u^i v^i = \delta^{ij}u_i v_j = \sum_i u_i v_i. \]

If the metric is non-Euclidean, for instance Minkowskian as in the special and general relativity theories, the bases are never orthonormal, even in the case of special relativity, where G and G⁻¹ become, for n = 4, $\eta \triangleq \mathrm{diag}(1, -1, -1, -1)$. In this scenario, the covariant and contravariant components always differ.

Use in tensor analysis


The distinction between covariance and contravariance is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and covector components. The valence of a tensor is the number of covariant and contravariant terms, and in Einstein notation, covariant components have lower indices, while contravariant components have upper indices. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although modern differential geometry uses more sophisticated index-free methods to represent tensors.

In tensor analysis, a covariant vector varies more or less reciprocally to a corresponding contravariant vector. Expressions for lengths, areas, and volumes of objects in the vector space can then be given in terms of tensors with covariant and contravariant indices. Under simple expansions and contractions of the coordinates, the reciprocity is exact; under affine transformations, the components of a vector intermingle on going between covariant and contravariant expression.

On a manifold, a tensor field will typically have multiple upper and lower indices, where Einstein notation is widely used. When the manifold is equipped with a metric, covariant and contravariant indices become very closely related to one another. Contravariant indices can be turned into covariant indices by contracting with the metric tensor; the reverse is possible by contracting with the (matrix) inverse of the metric tensor. Note that, in general, no such relation exists in spaces not endowed with a metric tensor. Furthermore, from a more abstract standpoint, a tensor is simply "there" and its components of either kind are only calculational artifacts whose values depend on the chosen coordinates.

The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in the tangent bundle as well as the cotangent bundle.

A contravariant vector is one which transforms like $\frac{dx^\mu}{d\tau}$, where $x^\mu$ are the coordinates of a particle at its proper time $\tau$. A covariant vector is one which transforms like $\frac{\partial \varphi}{\partial x^\mu}$, where $\varphi$ is a scalar field.

Algebra and geometry


In category theory, there are covariant functors and contravariant functors. The assignment of the dual space to a vector space is a standard example of a contravariant functor. Contravariant (resp. covariant) vectors are contravariant (resp. covariant) functors from a GL(n)-torsor to the fundamental representation of GL(n). Similarly, tensors of higher degree are functors with values in other representations of GL(n). However, some constructions of multilinear algebra are of "mixed" variance, which prevents them from being functors.

In differential geometry, the components of a vector relative to a basis of the tangent bundle are covariant if they change with the same linear transformation as a change of basis, and contravariant if they change by the inverse transformation. This is sometimes a source of confusion, for two distinct but related reasons. The first is that vectors whose components are covariant (called covectors or 1-forms) actually pull back under smooth functions, meaning that the operation assigning the space of covectors to a smooth manifold is actually a contravariant functor. Likewise, vectors whose components are contravariant push forward under smooth mappings, so the operation assigning the space of (contravariant) vectors to a smooth manifold is a covariant functor. Secondly, in the classical approach to differential geometry, it is not bases of the tangent bundle that are the most primitive object, but rather changes in the coordinate system. Vectors with contravariant components transform in the same way as changes in the coordinates (because these actually change oppositely to the induced change of basis), while vectors with covariant components transform in the opposite way to changes in the coordinates.

Notes

  1. ^ A basis f may here profitably be viewed as a linear isomorphism from Rⁿ to V. Regarding f as a row vector whose entries are the elements of the basis, the associated linear isomorphism is then $\mathbf{x} \mapsto \mathbf{f}\mathbf{x}$.

Citations

  1. ^ Misner, C.; Thorne, K. S.; Wheeler, J. A. (1973). Gravitation. W. H. Freeman. ISBN 0-7167-0344-0.
  2. ^ Frankel, Theodore (2012). The Geometry of Physics: An Introduction. Cambridge: Cambridge University Press. p. 42. ISBN 978-1-107-60260-1. OCLC 739094283.
  3. ^ Sylvester, J. J. (1851). "On the general theory of associated algebraical forms". Cambridge and Dublin Mathematical Journal. Vol. 6. pp. 289–293.
  4. ^ Sylvester, J. J. (2012). The Collected Mathematical Papers of James Joseph Sylvester. Vol. 3, 1870–1883. Cambridge University Press. ISBN 978-1107661431. OCLC 758983870.
  5. ^ Schouten, J. A. (1954). Ricci Calculus (2nd ed.). Springer. p. 6.
  6. ^ Bowen, Ray; Wang, C.-C. (2008) [1976]. "§3.14 Reciprocal Basis and Change of Basis". Introduction to Vectors and Tensors. Dover. pp. 78, 79, 81. ISBN 9780486469140.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Covariance_and_contravariance_of_vectors&oldid=1293551062"