
Ricci calculus

From Wikipedia, the free encyclopedia
Tensor index notation for tensor-based calculations
"Tensor index notation" redirects here. For a summary of tensors in general, seeGlossary of tensor theory.

Inmathematics,Ricci calculus constitutes the rules of index notation and manipulation fortensors andtensor fields on adifferentiable manifold, with or without ametric tensor orconnection.[a][1][2][3] It is also the modern name for what used to be called theabsolute differential calculus (the foundation of tensor calculus),tensor calculus ortensor analysis developed byGregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupilTullio Levi-Civita in 1900.[4]Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications togeneral relativity anddifferential geometry in the early twentieth century.[5] The basis of modern tensor analysis was developed byBernhard Riemann in a paper from 1861.[6]

A component of a tensor is areal number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to adifferential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularlymultidimensional arrays.

A tensor may be expressed as a linear sum of thetensor product ofvector andcovector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value perdimension of the underlyingvector space. The number of indices equals the degree (or order) of the tensor.

For compactness and convenience, the Ricci calculus incorporatesEinstein notation, which implies summation over indices repeated within a term anduniversal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.

Applications


Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning.

Working with a main proponent of the exterior calculus, Élie Cartan, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus:[7]

In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.

Notation for indices

See also: Index notation

Basis-related distinctions


Space and time coordinates


Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:[8]

  • The lowercase Latin alphabet a, b, c, ... is used to indicate restriction to 3-dimensional Euclidean space; these indices take values 1, 2, 3 for the spatial components, and the time-like element, indicated by 0, is shown separately.
  • The lowercase Greek alphabet α, β, γ, ... is used for 4-dimensional spacetime; these indices typically take the value 0 for the time component and 1, 2, 3 for the spatial components.

Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.

Coordinate and index notation


The author(s) will usually make it clear whether a subscript is intended as an index or as a label.

For example, in 3-D Euclidean space and using Cartesian coordinates, the coordinate vector A = (A1, A2, A3) = (Ax, Ay, Az) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression Ai, i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t.

Reference to basis


Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in:

{\displaystyle X_{\hat {\phi }}\,,Y_{\bar {\lambda }}\,,Z_{\tilde {\eta }}\,,T_{\mu '}}

to denote a possibly different basis for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in:

{\displaystyle v^{\mu '}=v^{\nu }L_{\nu }{}^{\mu '}.}

This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor.
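A numeric sketch of this primed-frame transformation, assuming (purely for illustration, not taken from the article) a Lorentz boost along x with speed β in units with c = 1; numpy.einsum performs the contraction over the repeated unprimed index ν, and the array values are arbitrary.

```python
import numpy as np

# Illustrative sketch: v^{mu'} = L_nu^{mu'} v^nu for a boost along x
# with speed beta (units with c = 1).
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# L[mu_prime, nu]: rows carry the primed index, columns the unprimed one.
L = np.array([
    [ gamma,        -gamma * beta, 0.0, 0.0],
    [-gamma * beta,  gamma,        0.0, 0.0],
    [ 0.0,           0.0,          1.0, 0.0],
    [ 0.0,           0.0,          0.0, 1.0],
])

v = np.array([2.0, 1.0, 0.0, 0.0])     # components v^nu in the unprimed frame
v_primed = np.einsum('mn,n->m', L, v)  # sum over the repeated index nu
print(v_primed)
```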

Upper and lower indices


Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look like exponents to a reader familiar only with other parts of mathematics.

In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and all indices may then be written in the lower position. Coordinate formulae in linear algebra such as {\displaystyle a_{ij}b_{jk}} for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained.
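A minimal numeric sketch of this all-lower-index special case, assuming a Euclidean (identity) metric and arbitrary illustrative arrays: the repeated index j is summed, and a_{ij} b_{jk} is just an ordinary matrix product.

```python
import numpy as np

# With an identity metric, a_{ij} b_{jk} (summed over the repeated index j)
# is the ordinary matrix product.
a = np.arange(9.0).reshape(3, 3)
b = 2.0 * np.eye(3)

c = np.einsum('ij,jk->ik', a, b)   # sum over j
assert np.allclose(c, a @ b)
```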

Covariant tensor components


A lower index (subscript) indicates covariance of the components with respect to that index:

{\displaystyle A_{\alpha \beta \gamma \cdots }}

Contravariant tensor components


An upper index (superscript) indicates contravariance of the components with respect to that index:

{\displaystyle A^{\alpha \beta \gamma \cdots }}

Mixed-variance tensor components


A tensor may have both upper and lower indices:

{\displaystyle A_{\alpha }{}^{\beta }{}_{\gamma }{}^{\delta \cdots }.}

Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta).

Tensor type and degree


The numbers of upper and lower indices of a tensor give its type: a tensor with p upper and q lower indices is said to be of type (p, q), or to be a type-(p, q) tensor.

The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type (p, q) has degree p + q.

Summation convention


The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over:

{\displaystyle A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\quad {\text{or}}\quad A^{\alpha }B_{\alpha }\equiv \sum _{\alpha }A^{\alpha }B_{\alpha }\,.}

The operation implied by such a summation is called tensor contraction:

{\displaystyle A_{\alpha }B^{\beta }\rightarrow A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\,.}

This summation may occur more than once within a term with a distinct symbol per pair of indices, for example:

{\displaystyle A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\equiv \sum _{\alpha }\sum _{\gamma }A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\,.}

Other combinations of repeated indices within a term are considered to be ill-formed, such as

{\displaystyle A_{\alpha \alpha }{}^{\gamma }} (both occurrences of α are lower; {\displaystyle A_{\alpha }{}^{\alpha \gamma }} would be fine)
{\displaystyle A_{\alpha \gamma }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }} (γ occurs twice as a lower index; {\displaystyle A_{\alpha \gamma }{}^{\gamma }B^{\alpha }} or {\displaystyle A_{\alpha \delta }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }} would be fine).

The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.
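The Einstein summation convention maps directly onto numpy.einsum, where a repeated index letter within a term is summed over and the remaining letters are free indices. A minimal sketch with arbitrary illustrative components (the dimension and array values are assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal(4)        # components A_alpha
B = rng.standard_normal(4)        # components B^alpha

# A_alpha B^alpha: the repeated index alpha is summed over.
s = np.einsum('a,a->', A, B)
assert np.isclose(s, sum(A[i] * B[i] for i in range(4)))

# Two independent contractions in one term: A_alpha^gamma B^alpha C_gamma^beta.
A2 = rng.standard_normal((4, 4))  # A_alpha^gamma
C  = rng.standard_normal((4, 4))  # C_gamma^beta
T = np.einsum('ag,a,gb->b', A2, B, C)   # only the free index beta survives
print(T.shape)                          # (4,)
```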

Multi-index notation


If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list:[9]

{\displaystyle A_{i_{1}\cdots i_{n}}B^{i_{1}\cdots i_{n}j_{1}\cdots j_{m}}C_{j_{1}\cdots j_{m}}\equiv A_{I}B^{IJ}C_{J},}

where I = i1 i2 ⋅⋅⋅ in and J = j1 j2 ⋅⋅⋅ jm.
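A small sketch of the same shorthand, with I and J each standing for a block of two indices; the block sizes and array contents are illustrative assumptions.

```python
import numpy as np

# Multi-index contraction A_I B^{IJ} C_J with I = i1 i2 and J = j1 j2.
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))          # A_{i1 i2}
B = rng.standard_normal((n, n, n, n))    # B^{i1 i2 j1 j2}
C = rng.standard_normal((n, n))          # C_{j1 j2}

scalar = np.einsum('ab,abcd,cd->', A, B, C)   # contract both blocks
print(scalar)
```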

Sequential summation


A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices:[10]

{\displaystyle A_{|\alpha \beta \gamma |\cdots }B^{\alpha \beta \gamma \cdots }=A_{\alpha \beta \gamma \cdots }B^{|\alpha \beta \gamma |\cdots }=\sum _{\alpha <\beta <\gamma }A_{\alpha \beta \gamma \cdots }B^{\alpha \beta \gamma \cdots }}

means a restricted sum over index values, where each index is constrained to be strictly less than the next. More than one group can be summed in this way, for example:

{\displaystyle {\begin{aligned}&A_{|\alpha \beta \gamma |}{}^{|\delta \epsilon \cdots \lambda |}B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda |\mu \nu \cdots \zeta |}C^{\mu \nu \cdots \zeta }\\[3pt]={}&\sum _{\alpha <\beta <\gamma }~\sum _{\delta <\epsilon <\cdots <\lambda }~\sum _{\mu <\nu <\cdots <\zeta }A_{\alpha \beta \gamma }{}^{\delta \epsilon \cdots \lambda }B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda \mu \nu \cdots \zeta }C^{\mu \nu \cdots \zeta }\end{aligned}}}

When using multi-index notation, an underarrow is placed underneath the block of indices:[11]

{\displaystyle A_{\underset {\rightharpoondown }{P}}{}^{\underset {\rightharpoondown }{Q}}B^{P}{}_{Q{\underset {\rightharpoondown }{R}}}C^{R}=\sum _{\underset {\rightharpoondown }{P}}\sum _{\underset {\rightharpoondown }{Q}}\sum _{\underset {\rightharpoondown }{R}}A_{P}{}^{Q}B^{P}{}_{QR}C^{R}}

where

{\displaystyle {\underset {\rightharpoondown }{P}}=|\alpha \beta \gamma |\,,\quad {\underset {\rightharpoondown }{Q}}=|\delta \epsilon \cdots \lambda |\,,\quad {\underset {\rightharpoondown }{R}}=|\mu \nu \cdots \zeta |}
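A sketch of the restricted ("sequential") sum, using illustrative completely antisymmetric arrays with three indices in four dimensions; for such arrays the terms with repeated index values vanish, so the unrestricted contraction equals 3! times the ordered sum. The arrays and dimension are assumptions for illustration.

```python
import numpy as np
from itertools import combinations, permutations

def perm_sign(perm):
    # Signature of a permutation via inversion counting.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1.0 if inv % 2 else 1.0

def antisymmetrize3(T):
    # Signed average over all permutations of the three index slots.
    return sum(perm_sign(p) * np.transpose(T, p)
               for p in permutations(range(3))) / 6.0

rng = np.random.default_rng(2)
n = 4
A = antisymmetrize3(rng.standard_normal((n, n, n)))
B = antisymmetrize3(rng.standard_normal((n, n, n)))

ordered = sum(A[a, b, c] * B[a, b, c] for a, b, c in combinations(range(n), 3))
full = np.einsum('abc,abc->', A, B)
assert np.isclose(full, 6.0 * ordered)   # full sum = 3! times the ordered sum
```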

Raising and lowering indices


By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:

{\displaystyle B^{\gamma }{}_{\beta \cdots }=g^{\gamma \alpha }A_{\alpha \beta \cdots }\quad {\text{and}}\quad A_{\alpha \beta \cdots }=g_{\alpha \gamma }B^{\gamma }{}_{\beta \cdots }}

The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
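A sketch of raising and then re-lowering an index with a metric; the Minkowski metric diag(−1, 1, 1, 1) and the array contents are assumptions chosen for illustration.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])        # g_{alpha beta} (a chosen convention)
g_inv = np.linalg.inv(g)                  # g^{alpha beta}

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))           # A_{alpha beta}

B = np.einsum('ga,ab->gb', g_inv, A)      # B^gamma_beta = g^{gamma alpha} A_{alpha beta}
A_back = np.einsum('ag,gb->ab', g, B)     # lower again: g_{alpha gamma} B^gamma_beta
assert np.allclose(A_back, A)
```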

Correlations between index positions and invariance


This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation.[12]

The Kronecker delta is used; see also below.

Covector, covariant vector, 1-form:
  Basis transformation: {\displaystyle \omega ^{\bar {\alpha }}=L_{\beta }{}^{\bar {\alpha }}\omega ^{\beta }}
  Component transformation: {\displaystyle a_{\bar {\alpha }}=a_{\gamma }L^{\gamma }{}_{\bar {\alpha }}}
  Invariance: {\displaystyle a_{\bar {\alpha }}\omega ^{\bar {\alpha }}=a_{\gamma }L^{\gamma }{}_{\bar {\alpha }}L_{\beta }{}^{\bar {\alpha }}\omega ^{\beta }=a_{\gamma }\delta ^{\gamma }{}_{\beta }\omega ^{\beta }=a_{\beta }\omega ^{\beta }}

Vector, contravariant vector:
  Basis transformation: {\displaystyle e_{\bar {\alpha }}=e_{\gamma }L_{\bar {\alpha }}{}^{\gamma }}
  Component transformation: {\displaystyle u^{\bar {\alpha }}=L^{\bar {\alpha }}{}_{\beta }u^{\beta }}
  Invariance: {\displaystyle e_{\bar {\alpha }}u^{\bar {\alpha }}=e_{\gamma }L_{\bar {\alpha }}{}^{\gamma }L^{\bar {\alpha }}{}_{\beta }u^{\beta }=e_{\gamma }\delta ^{\gamma }{}_{\beta }u^{\beta }=e_{\gamma }u^{\gamma }}
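A numeric sketch of the invariance column: an arbitrary invertible matrix L and its inverse stand in for the two transformation matrices, and the contraction a_α ω^α is unchanged. The matrix and components are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
L = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # well-conditioned, invertible
L_inv = np.linalg.inv(L)

a = rng.standard_normal(4)      # covariant components a_beta
w = rng.standard_normal(4)      # contravariant components w^beta

a_bar = a @ L_inv               # a transforms with the inverse matrix
w_bar = L @ w                   # w transforms with L itself
assert np.isclose(a_bar @ w_bar, a @ w)   # the scalar a_alpha w^alpha is unchanged
```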

General outlines for index notation and operations


Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if

{\displaystyle A^{\alpha }{}_{\beta \gamma }=B^{\alpha }{}_{\beta \gamma }}

for all α, β, γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis).

Free and dummy indices


Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices.

A tensor equation represents many ordinary (real-valued) equations


The components of tensors (like A^α, B_β^γ, etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents m^n equations: each index takes on every value of a specific set of values.

For instance, if

{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }=T^{\alpha }{}_{\beta }{}_{\delta }}

is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α, β, δ), there are 4^3 = 64 equations. Three of these are:

{\displaystyle {\begin{aligned}A^{0}B_{1}{}^{0}C_{00}+A^{0}B_{1}{}^{1}C_{10}+A^{0}B_{1}{}^{2}C_{20}+A^{0}B_{1}{}^{3}C_{30}+D^{0}{}_{1}{}E_{0}&=T^{0}{}_{1}{}_{0}\\A^{1}B_{0}{}^{0}C_{00}+A^{1}B_{0}{}^{1}C_{10}+A^{1}B_{0}{}^{2}C_{20}+A^{1}B_{0}{}^{3}C_{30}+D^{1}{}_{0}{}E_{0}&=T^{1}{}_{0}{}_{0}\\A^{1}B_{2}{}^{0}C_{02}+A^{1}B_{2}{}^{1}C_{12}+A^{1}B_{2}{}^{2}C_{22}+A^{1}B_{2}{}^{3}C_{32}+D^{1}{}_{2}{}E_{2}&=T^{1}{}_{2}{}_{2}.\end{aligned}}}

This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
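A sketch that unpacks the tensor equation above into its component equations: arbitrary illustrative arrays in four dimensions, the left-hand side built with einsum, and every one of the 64 components re-checked with explicit loops over the three free indices.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal(n)          # A^alpha
B = rng.standard_normal((n, n))     # B_beta^gamma
C = rng.standard_normal((n, n))     # C_{gamma delta}
D = rng.standard_normal((n, n))     # D^alpha_beta
E = rng.standard_normal(n)          # E_delta

# T^alpha_{beta delta} = A^alpha B_beta^gamma C_{gamma delta} + D^alpha_beta E_delta
T = np.einsum('a,bg,gd->abd', A, B, C) + np.einsum('ab,d->abd', D, E)

for alpha in range(n):
    for beta in range(n):
        for delta in range(n):
            rhs = sum(A[alpha] * B[beta, g] * C[g, delta] for g in range(n))
            rhs += D[alpha, beta] * E[delta]
            assert np.isclose(T[alpha, beta, delta], rhs)
```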

Indices are replaceable labels


Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:

{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\rightarrow A^{\lambda }B_{\beta }{}^{\mu }C_{\mu \delta }+D^{\lambda }{}_{\beta }{}E_{\delta }\,,}

whereas an erroneous change is:

{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\nrightarrow A^{\lambda }B_{\beta }{}^{\gamma }C_{\mu \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\,.}

In the first replacement, λ replaced α and μ replaced γ everywhere, so the expression still has the same meaning. In the second, λ did not fully replace α, and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next.

Indices are the same in every term


The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which imply a summation over that index) need not be the same, for example:

{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\delta }E_{\beta }=T^{\alpha }{}_{\beta }{}_{\delta }}

in contrast with an erroneous expression:

{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D_{\alpha }{}_{\beta }{}^{\gamma }E^{\delta }.}

In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, α, β, δ line up throughout and γ occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (contraction) and once in another term, which is inconsistent.

Brackets and punctuation used once where implied


When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.

If the brackets enclose covariant indices, the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed between the brackets.

Similarly, if brackets enclose contravariant indices, the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices.

Symmetric and antisymmetric parts


Symmetric part of tensor


Parentheses, ( ), around multiple indices denote the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices ασ(i) for i = 1, 2, 3, ..., p, and then divides by the number of permutations:

{\displaystyle A_{(\alpha _{1}\alpha _{2}\cdots \alpha _{p})\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {1}{p!}}\sum _{\sigma }A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\,.}

For example, two symmetrizing indices mean there are two indices to permute and sum over:

{\displaystyle A_{(\alpha \beta )\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }+A_{\beta \alpha \gamma \cdots }\right)}

while for three symmetrizing indices, there are three indices to sum over and permute:

{\displaystyle A_{(\alpha \beta \gamma )\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }+A_{\alpha \gamma \beta \delta \cdots }+A_{\gamma \beta \alpha \delta \cdots }+A_{\beta \alpha \gamma \delta \cdots }\right)}

The symmetrization is distributive over addition:

{\displaystyle A_{(\alpha }\left(B_{\beta )\gamma \cdots }+C_{\beta )\gamma \cdots }\right)=A_{(\alpha }B_{\beta )\gamma \cdots }+A_{(\alpha }C_{\beta )\gamma \cdots }}

Indices are not part of the symmetrization when they are:

  • not on the same level, for example {\displaystyle A_{(\alpha }{}^{\beta }{}_{\gamma )\cdots }};
  • within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example, {\displaystyle A_{(\alpha |\beta |\gamma )\cdots }}.

Here the α and γ indices are symmetrized, β is not.
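A sketch of symmetrization over the first p index slots as an average over the p! permutations of those slots; the array, its dimensions and the helper name are illustrative assumptions.

```python
import math
import numpy as np
from itertools import permutations

def symmetrize(T, p):
    # Average over all p! permutations of the first p index slots;
    # the remaining slots are left untouched.
    rest = tuple(range(p, T.ndim))
    out = np.zeros_like(T)
    for perm in permutations(range(p)):
        out += np.transpose(T, perm + rest)
    return out / math.factorial(p)

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3, 3))        # A_{alpha beta gamma}

S = symmetrize(A, 2)                      # A_{(alpha beta) gamma}
assert np.allclose(S, 0.5 * (A + A.transpose(1, 0, 2)))
```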

Antisymmetric or alternating part of tensor


Square brackets, [ ], around multiple indices denote the antisymmetrized part of the tensor. For p antisymmetrizing indices, the sum over the permutations of those indices ασ(i) multiplied by the signature of the permutation sgn(σ) is taken, then divided by the number of permutations:

{\displaystyle {\begin{aligned}&A_{[\alpha _{1}\cdots \alpha _{p}]\alpha _{p+1}\cdots \alpha _{q}}\\[3pt]={}&{\dfrac {1}{p!}}\sum _{\sigma }\operatorname {sgn}(\sigma )A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\\={}&\delta _{\alpha _{1}\cdots \alpha _{p}}^{\beta _{1}\dots \beta _{p}}A_{\beta _{1}\cdots \beta _{p}\alpha _{p+1}\cdots \alpha _{q}}\\\end{aligned}}}

where {\displaystyle \delta _{\alpha _{1}\cdots \alpha _{p}}^{\beta _{1}\dots \beta _{p}}} is the generalized Kronecker delta of degree 2p, with scaling as defined below.

For example, two antisymmetrizing indices imply:

{\displaystyle A_{[\alpha \beta ]\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }-A_{\beta \alpha \gamma \cdots }\right)}

while three antisymmetrizing indices imply:

{\displaystyle A_{[\alpha \beta \gamma ]\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }-A_{\alpha \gamma \beta \delta \cdots }-A_{\gamma \beta \alpha \delta \cdots }-A_{\beta \alpha \gamma \delta \cdots }\right)}

As a more specific example, if F represents the electromagnetic tensor, then the equation

{\displaystyle 0=F_{[\alpha \beta ,\gamma ]}={\dfrac {1}{3!}}\left(F_{\alpha \beta ,\gamma }+F_{\gamma \alpha ,\beta }+F_{\beta \gamma ,\alpha }-F_{\beta \alpha ,\gamma }-F_{\alpha \gamma ,\beta }-F_{\gamma \beta ,\alpha }\right)\,}

represents Gauss's law for magnetism and Faraday's law of induction.

As before, the antisymmetrization is distributive over addition:

{\displaystyle A_{[\alpha }\left(B_{\beta ]\gamma \cdots }+C_{\beta ]\gamma \cdots }\right)=A_{[\alpha }B_{\beta ]\gamma \cdots }+A_{[\alpha }C_{\beta ]\gamma \cdots }}

As with symmetrization, indices are not antisymmetrized when they are:

  • not on the same level, for example {\displaystyle A_{[\alpha }{}^{\beta }{}_{\gamma ]\cdots }};
  • within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example, {\displaystyle A_{[\alpha |\beta |\gamma ]\cdots }}.

Here the α and γ indices are antisymmetrized, β is not.
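A sketch of antisymmetrization over the first p index slots, weighting each permutation by its signature; the arrays, dimensions and helper names are illustrative assumptions.

```python
import math
import numpy as np
from itertools import permutations

def perm_sign(perm):
    # Signature of a permutation via inversion counting.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1.0 if inv % 2 else 1.0

def antisymmetrize(T, p):
    rest = tuple(range(p, T.ndim))
    out = np.zeros_like(T)
    for perm in permutations(range(p)):
        out += perm_sign(perm) * np.transpose(T, perm + rest)
    return out / math.factorial(p)

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3, 3))             # A_{alpha beta gamma}
AS = antisymmetrize(A, 2)                      # A_{[alpha beta] gamma}
assert np.allclose(AS, 0.5 * (A - A.transpose(1, 0, 2)))
```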

Sum of symmetric and antisymmetric parts


Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:

{\displaystyle A_{\alpha \beta \gamma \cdots }=A_{(\alpha \beta )\gamma \cdots }+A_{[\alpha \beta ]\gamma \cdots }}

as can be seen by adding the above expressions for A_(αβ)γ⋅⋅⋅ and A_[αβ]γ⋅⋅⋅. This decomposition does not hold for more than two indices.
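A quick numeric check of this decomposition on the first two indices, with an arbitrary illustrative array:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 3, 3))

sym  = 0.5 * (A + A.transpose(1, 0, 2))   # A_{(alpha beta) gamma}
asym = 0.5 * (A - A.transpose(1, 0, 2))   # A_{[alpha beta] gamma}
assert np.allclose(A, sym + asym)
```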

Differentiation

See also: Four-gradient, d'Alembertian, and Intrinsic derivative

For compactness, derivatives may be indicated by adding indices after a comma or semicolon.[13][14]

Partial derivative


While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by x^μ, but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δx^μ, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below.

To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable x^γ, a comma is placed before an appended lower index of the coordinate variable.

{\displaystyle A_{\alpha \beta \cdots ,\gamma }={\dfrac {\partial }{\partial x^{\gamma }}}A_{\alpha \beta \cdots }}

This may be repeated (without adding further commas):

{\displaystyle A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}\,,\,\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {\partial }{\partial x^{\alpha _{q}}}}\cdots {\dfrac {\partial }{\partial x^{\alpha _{p+2}}}}{\dfrac {\partial }{\partial x^{\alpha _{p+1}}}}A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}}.}

These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates

{\displaystyle x^{\alpha }{}_{,\gamma }=\delta _{\gamma }^{\alpha },}

where δ is the Kronecker delta.

Covariant derivative


The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include a forward slash ( / )[15] or, in three-dimensional curved space, a single vertical bar ( | ).[16]

The covariant derivatives of a scalar function, a contravariant vector and a covariant vector are:

{\displaystyle f_{;\beta }=f_{,\beta }}
{\displaystyle A^{\alpha }{}_{;\beta }=A^{\alpha }{}_{,\beta }+\Gamma ^{\alpha }{}_{\gamma \beta }A^{\gamma }}
{\displaystyle A_{\alpha ;\beta }=A_{\alpha ,\beta }-\Gamma ^{\gamma }{}_{\alpha \beta }A_{\gamma }\,,}

where {\displaystyle \Gamma ^{\alpha }{}_{\gamma \beta }} are the connection coefficients.

For an arbitrary tensor:[17]

{\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma }&\\=T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&+\,\Gamma ^{\alpha _{1}}{}_{\delta \gamma }T^{\delta \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\cdots +\Gamma ^{\alpha _{r}}{}_{\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\delta }{}_{\beta _{1}\cdots \beta _{s}}\\&-\,\Gamma ^{\delta }{}_{\beta _{1}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\delta \beta _{2}\cdots \beta _{s}}-\cdots -\Gamma ^{\delta }{}_{\beta _{s}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\delta }\,.\end{aligned}}}

An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol {\displaystyle \nabla _{\beta }}. For the case of a vector field A^α:[18]

{\displaystyle \nabla _{\beta }A^{\alpha }=A^{\alpha }{}_{;\beta }\,.}

The covariant formulation of the directional derivative of any tensor field along a vector v^γ may be expressed as its contraction with the covariant derivative, e.g.:

{\displaystyle v^{\gamma }A_{\alpha ;\gamma }\,.}

The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly.

This derivative is characterized by the product rule:

{\displaystyle (A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots })_{;\epsilon }=A^{\alpha }{}_{\beta \cdots ;\epsilon }B^{\gamma }{}_{\delta \cdots }+A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots ;\epsilon }\,.}
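The following sympy sketch is an assumed worked example, not taken from the article: it builds the Christoffel symbols of the Levi-Civita connection for plane polar coordinates (r, θ) with metric diag(1, r²), then forms A^α_{;β} = A^α_{,β} + Γ^α_{γβ} A^γ for a hypothetical vector field A^α.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.diag(1, r**2)      # assumed metric of the flat plane in polar coordinates
g_inv = g.inv()
n = 2

# Gamma[a][b][c] = Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gamma = [[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)]
for a in range(n):
    for b in range(n):
        for c in range(n):
            Gamma[a][b][c] = sp.simplify(sum(
                sp.Rational(1, 2) * g_inv[a, d] *
                (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
                for d in range(n)))

A = [r * sp.cos(th), sp.sin(th) / r]      # hypothetical components A^alpha(r, theta)

# cov[a][b] = A^a_{;b} = A^a_{,b} + Gamma^a_{cb} A^c
cov = [[sp.simplify(sp.diff(A[a], x[b]) +
                    sum(Gamma[a][c][b] * A[c] for c in range(n)))
        for b in range(n)] for a in range(n)]
print(sp.Matrix(cov))
```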

Connection types


A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection.

A connection is a metric connection when the covariant derivative of the metric tensor vanishes:

{\displaystyle g_{\mu \nu ;\xi }=0\,.}

An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: T^α_{βγ} = 0) is a Levi-Civita connection.

The {\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }} for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind.

Exterior derivative


The exterior derivative of a totally antisymmetric type (0, s) tensor field with components A_{α1⋅⋅⋅αs} (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:[19]: 232–233 

{\displaystyle (\mathrm {d} A)_{\gamma \alpha _{1}\cdots \alpha _{s}}={\frac {\partial }{\partial x^{[\gamma }}}A_{\alpha _{1}\cdots \alpha _{s}]}=A_{[\alpha _{1}\cdots \alpha _{s},\gamma ]}.}

This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule.

Lie derivative


The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type (r, s) tensor field T along (the flow of) a contravariant vector field X^ρ may be expressed using a coordinate basis as[20]

{\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}&\\=X^{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&-\,X^{\alpha _{1}}{}_{,\gamma }T^{\gamma \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -X^{\alpha _{r}}{}_{,\gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\gamma }{}_{\beta _{1}\cdots \beta _{s}}\\&+\,X^{\gamma }{}_{,\beta _{1}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\gamma \beta _{2}\cdots \beta _{s}}+\cdots +X^{\gamma }{}_{,\beta _{s}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\gamma }\,.\end{aligned}}}

This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero:

{\displaystyle ({\mathcal {L}}_{X}X)^{\alpha }=X^{\gamma }X^{\alpha }{}_{,\gamma }-X^{\alpha }{}_{,\gamma }X^{\gamma }=0\,.}

Notable tensors


Kronecker delta


The Kronecker delta is like the identity matrix when multiplied and contracted:

{\displaystyle {\begin{aligned}\delta _{\beta }^{\alpha }\,A^{\beta }&=A^{\alpha }\\\delta _{\nu }^{\mu }\,B_{\mu }&=B_{\nu }.\end{aligned}}}

The components {\displaystyle \delta _{\beta }^{\alpha }} are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant.[21] Its trace is the dimensionality of the space; for example, in four-dimensional spacetime,

{\displaystyle \delta _{\rho }^{\rho }=\delta _{0}^{0}+\delta _{1}^{1}+\delta _{2}^{2}+\delta _{3}^{3}=4.}

The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p! on the right):

{\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}=\delta _{\beta _{1}}^{[\alpha _{1}}\cdots \delta _{\beta _{p}}^{\alpha _{p}]},}

and acts as an antisymmetrizer on p indices:

{\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}\,A^{\beta _{1}\cdots \beta _{p}}=A^{[\alpha _{1}\cdots \alpha _{p}]}.}
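A sketch of the generalized Kronecker delta for p = 2 with the normalization used here (no extra p! factor), built from ordinary Kronecker deltas by antisymmetrizing the upper indices and checked as an antisymmetrizer on an arbitrary illustrative array.

```python
import math
import numpy as np
from itertools import permutations

def perm_sign(perm):
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1.0 if inv % 2 else 1.0

n, p = 4, 2
delta = np.eye(n)

# term[a1, a2, b1, b2] = delta^{a1}_{b1} delta^{a2}_{b2}
term = np.einsum('ab,cd->acbd', delta, delta)

# gen_delta^{a1 a2}_{b1 b2} = delta^{[a1}_{b1} delta^{a2]}_{b2}
gen_delta = np.zeros((n, n, n, n))
for perm in permutations(range(p)):
    gen_delta += perm_sign(perm) * np.transpose(term, perm + (2, 3))
gen_delta /= math.factorial(p)

A = np.random.default_rng(9).standard_normal((n, n))    # A^{b1 b2}
lhs = np.einsum('acbd,bd->ac', gen_delta, A)            # contraction over b1, b2
rhs = 0.5 * (A - A.T)                                   # A^{[a1 a2]}
assert np.allclose(lhs, rhs)
```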

Torsion tensor


An affine connection has a torsion tensor T^α_{βγ}:

{\displaystyle T^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\beta \gamma }-\Gamma ^{\alpha }{}_{\gamma \beta }-\gamma ^{\alpha }{}_{\beta \gamma },}

where {\displaystyle \gamma ^{\alpha }{}_{\beta \gamma }} are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis.

For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations

{\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\gamma \beta }.}

Riemann curvature tensor


If this tensor is defined as

{\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\Gamma ^{\rho }{}_{\nu \sigma ,\mu }-\Gamma ^{\rho }{}_{\mu \sigma ,\nu }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }\,,}

then it is the commutator of the covariant derivative with itself:[22][23]

{\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }\,,}

since the connection is torsionless, which means that the torsion tensor vanishes.

This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows:

{\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma \delta }&-T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\delta \gamma }\\&\!\!\!\!\!\!\!\!\!\!=-R^{\alpha _{1}}{}_{\rho \gamma \delta }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -R^{\alpha _{r}}{}_{\rho \gamma \delta }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}\\&+R^{\sigma }{}_{\beta _{1}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}+\cdots +R^{\sigma }{}_{\beta _{s}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\,\end{aligned}}}

which are often referred to as the Ricci identities.[24]
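As an assumed worked example (not from the article), the following sympy sketch evaluates the formula above for the unit 2-sphere, with metric diag(1, sin²θ) in coordinates (θ, φ); the component R^θ_{φθφ} comes out as sin²θ.

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.diag(1, sp.sin(th)**2)   # assumed metric of the unit 2-sphere
g_inv = g.inv()
n = 2

# Christoffel symbols of the Levi-Civita connection: Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)]
for a in range(n):
    for b in range(n):
        for c in range(n):
            Gamma[a][b][c] = sp.simplify(sum(
                sp.Rational(1, 2) * g_inv[a, d] *
                (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
                for d in range(n)))

def riemann(r, s, m, nu):
    # R^r_{s m nu} = Gamma^r_{nu s,m} - Gamma^r_{m s,nu}
    #              + Gamma^r_{m l} Gamma^l_{nu s} - Gamma^r_{nu l} Gamma^l_{m s}
    expr = (sp.diff(Gamma[r][nu][s], x[m]) - sp.diff(Gamma[r][m][s], x[nu])
            + sum(Gamma[r][m][l] * Gamma[l][nu][s] for l in range(n))
            - sum(Gamma[r][nu][l] * Gamma[l][m][s] for l in range(n)))
    return sp.simplify(expr)

print(riemann(0, 1, 0, 1))   # R^theta_{phi theta phi}, expected sin(theta)**2
```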

Metric tensor


The metric tensor g_{αβ} is used for lowering indices and gives the length of any space-like curve

{\displaystyle {\text{length}}=\int _{y_{1}}^{y_{2}}{\sqrt {g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,}

where γ is any smooth, strictly monotone parameterization of the path. It also gives the duration of any time-like curve

{\displaystyle {\text{duration}}=\int _{t_{1}}^{t_{2}}{\sqrt {{\frac {-1}{c^{2}}}g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,}

where γ is any smooth, strictly monotone parameterization of the trajectory. See also Line element.

The inverse matrix g^{αβ} of the metric tensor is another important tensor, used for raising indices:

{\displaystyle g^{\alpha \beta }g_{\beta \gamma }=\delta _{\gamma }^{\alpha }\,.}
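A small numeric sketch of this last relation: an arbitrary symmetric, non-degenerate matrix stands in for g_{αβ}, its matrix inverse for g^{αβ}, and the contraction reproduces the Kronecker delta. The specific matrix is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(10)
M = rng.standard_normal((4, 4))
g = M + M.T + 8.0 * np.eye(4)        # symmetric and comfortably non-degenerate
g_inv = np.linalg.inv(g)

# g^{alpha beta} g_{beta gamma} = delta^alpha_gamma
assert np.allclose(np.einsum('ab,bc->ac', g_inv, g), np.eye(4))
```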


Notes

  1. ^ While the raising and lowering of indices is dependent on a metric tensor, the covariant derivative is only dependent on the connection, while the exterior derivative and the Lie derivative are dependent on neither.

References

  1. ^ Synge J.L.; Schild A. (1949). Tensor Calculus. First Dover Publications 1978 edition. pp. 6–108.
  2. ^ J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 85–86, §3.5. ISBN 0-7167-0344-0.
  3. ^ R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4.
  4. ^ Ricci, Gregorio; Levi-Civita, Tullio (March 1900). "Méthodes de calcul différentiel absolu et leurs applications" [Methods of the absolute differential calculus and their applications]. Mathematische Annalen (in French). 54 (1–2). Springer: 125–201. doi:10.1007/BF01454201. S2CID 120009332. Retrieved 19 October 2019.
  5. ^ Schouten, Jan A. (1924). R. Courant (ed.). Der Ricci-Kalkül – Eine Einführung in die neueren Methoden und Probleme der mehrdimensionalen Differentialgeometrie (Ricci Calculus – An introduction to the latest methods and problems in multi-dimensional differential geometry). Grundlehren der mathematischen Wissenschaften (in German). Vol. 10. Berlin: Springer Verlag.
  6. ^ Jahnke, Hans Niels (2003). A History of Analysis. Providence, RI: American Mathematical Society. p. 244. ISBN 0-8218-2623-9. OCLC 51607350.
  7. ^ "Interview with Shiing Shen Chern" (PDF). Notices of the AMS. 45 (7): 860–5. August 1998.
  8. ^ C. Møller (1952), The Theory of Relativity, p. 234, is an example of a variation: 'Greek indices run from 1 to 3, Latin indices from 1 to 4'.
  9. ^ T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 67, ISBN 978-1107-602601
  10. ^ J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 91. ISBN 0-7167-0344-0.
  11. ^ T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 67, ISBN 978-1107-602601
  12. ^ J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 61, 202–203, 232. ISBN 0-7167-0344-0.
  13. ^ G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2.
  14. ^ Covariant derivative – Mathworld, Wolfram
  15. ^ T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 298, ISBN 978-1107-602601
  16. ^ J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 510, §21.5. ISBN 0-7167-0344-0.
  17. ^ T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 299, ISBN 978-1107-602601
  18. ^ D. McMahon (2006). Relativity Demystified. McGraw Hill. p. 67. ISBN 0-07-145545-0.
  19. ^ R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4.
  20. ^ Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds, p. 130
  21. ^ Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds, p. 85
  22. ^ Synge J.L.; Schild A. (1949). Tensor Calculus. First Dover Publications 1978 edition. pp. 83, 107.
  23. ^ P. A. M. Dirac. General Theory of Relativity. pp. 20–21.
  24. ^ Lovelock, David; Hanno Rund (1989). Tensors, Differential Forms, and Variational Principles. p. 84.
