
Vector calculus identities

From Wikipedia, the free encyclopedia

The following are important identities involving derivatives and integrals in vector calculus.

Operator notation

Gradient

Main article: Gradient

For a function $f(x,y,z)$ in three-dimensional Cartesian coordinate variables, the gradient is the vector field:

$$\operatorname{grad}(f) = \nabla f = \left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right) f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j} + \frac{\partial f}{\partial z}\mathbf{k}$$

where $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ are the standard unit vectors for the x-, y-, and z-axes. More generally, for a function of n variables $\psi(x_1,\ldots,x_n)$, also called a scalar field, the gradient is the vector field:

$$\nabla\psi = \left(\frac{\partial}{\partial x_1},\ \ldots,\ \frac{\partial}{\partial x_n}\right)\psi = \frac{\partial\psi}{\partial x_1}\mathbf{e}_1 + \dots + \frac{\partial\psi}{\partial x_n}\mathbf{e}_n$$

where $\mathbf{e}_i\ (i=1,2,\ldots,n)$ are mutually orthogonal unit vectors.

As the name implies, the gradient is proportional to, and points in the direction of, the function's most rapid (positive) change.
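As an illustrative check (not part of the article itself), the Cartesian gradient formula can be verified symbolically with SymPy's `sympy.vector` module; the scalar field `f` below is an arbitrary example chosen for the demonstration:

```python
from sympy import sin, diff, simplify
from sympy.vector import CoordSys3D, gradient

# Cartesian frame: N.i, N.j, N.k are the standard unit vectors
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x**2 * y + sin(z)   # example scalar field (arbitrary choice)
g = gradient(f)

# grad f = (df/dx) i + (df/dy) j + (df/dz) k, checked componentwise
for e, v in ((N.i, x), (N.j, y), (N.k, z)):
    assert simplify(g.dot(e) - diff(f, v)) == 0
```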

For a vector field $\mathbf{A} = (A_1,\ldots,A_n)$, also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix:[1]

$$\mathbf{J}_{\mathbf{A}} = d\mathbf{A} = (\nabla\mathbf{A})^{\mathsf{T}} = \left(\frac{\partial A_i}{\partial x_j}\right)_{ij}.$$

For a tensor field $\mathbf{T}$ of any order k, the gradient $\operatorname{grad}(\mathbf{T}) = d\mathbf{T} = (\nabla\mathbf{T})^{\mathsf{T}}$ is a tensor field of order k + 1.

For a tensor field $\mathbf{T}$ of order k > 0, the tensor field $\nabla\mathbf{T}$ of order k + 1 is defined by the recursive relation $(\nabla\mathbf{T})\cdot\mathbf{C} = \nabla(\mathbf{T}\cdot\mathbf{C})$ where $\mathbf{C}$ is an arbitrary constant vector.

Divergence

Main article: Divergence

In Cartesian coordinates, the divergence of a continuously differentiable vector field $\mathbf{F} = F_x\mathbf{i} + F_y\mathbf{j} + F_z\mathbf{k}$ is the scalar-valued function:

$$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right)\cdot\left(F_x,\ F_y,\ F_z\right) = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}.$$

As the name implies, the divergence is a (local) measure of the degree to which vectors in the field diverge.
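The component formula above can be checked symbolically; this sketch (an illustration added for this text, with an arbitrary example field) again uses SymPy's `sympy.vector` module:

```python
from sympy import diff, simplify
from sympy.vector import CoordSys3D, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# example vector field F = Fx i + Fy j + Fz k (arbitrary choice)
Fx, Fy, Fz = x*y, y*z, z*x
F = Fx*N.i + Fy*N.j + Fz*N.k

d = divergence(F)
# div F = dFx/dx + dFy/dy + dFz/dz
assert simplify(d - (diff(Fx, x) + diff(Fy, y) + diff(Fz, z))) == 0
```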

The divergence of a tensor field $\mathbf{T}$ of non-zero order k is written as $\operatorname{div}(\mathbf{T}) = \nabla\cdot\mathbf{T}$, a contraction to a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher-order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity

$$\nabla\cdot\left(\mathbf{A}\otimes\mathbf{T}\right) = \mathbf{T}(\nabla\cdot\mathbf{A}) + (\mathbf{A}\cdot\nabla)\mathbf{T}$$

where $\mathbf{A}\cdot\nabla$ is the directional derivative in the direction of $\mathbf{A}$ multiplied by its magnitude. Specifically, for the outer product of two vectors,[2]

$$\nabla\cdot\left(\mathbf{A}\mathbf{B}^{\mathsf{T}}\right) = \mathbf{B}(\nabla\cdot\mathbf{A}) + (\mathbf{A}\cdot\nabla)\mathbf{B}.$$

For a tensor field $\mathbf{T}$ of order k > 1, the tensor field $\nabla\cdot\mathbf{T}$ of order k − 1 is defined by the recursive relation $(\nabla\cdot\mathbf{T})\cdot\mathbf{C} = \nabla\cdot(\mathbf{T}\cdot\mathbf{C})$ where $\mathbf{C}$ is an arbitrary constant vector.

Curl

Main article: Curl (mathematics)

In Cartesian coordinates, for $\mathbf{F} = F_x\mathbf{i} + F_y\mathbf{j} + F_z\mathbf{k}$ the curl is the vector field:

$$\begin{aligned}\operatorname{curl}\mathbf{F} = \nabla\times\mathbf{F} &= \left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right)\times\left(F_x,\ F_y,\ F_z\right)\\ &= \begin{vmatrix}\mathbf{i}&\mathbf{j}&\mathbf{k}\\ \frac{\partial}{\partial x}&\frac{\partial}{\partial y}&\frac{\partial}{\partial z}\\ F_x&F_y&F_z\end{vmatrix}\\ &= \left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z}\right)\mathbf{i} + \left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)\mathbf{j} + \left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)\mathbf{k}\end{aligned}$$

where $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ are the unit vectors for the x-, y-, and z-axes, respectively.

As the name implies, the curl is a measure of how much nearby vectors tend in a circular direction.

In Einstein notation, the vector field $\mathbf{F} = (F_1,\ F_2,\ F_3)$ has curl given by:

$$\nabla\times\mathbf{F} = \varepsilon^{ijk}\mathbf{e}_i\frac{\partial F_k}{\partial x_j}$$

where $\varepsilon$ = ±1 or 0 is the Levi-Civita parity symbol.
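The determinant expansion of the curl can likewise be verified componentwise with SymPy (an illustrative sketch with an arbitrarily chosen field, not part of the original article):

```python
from sympy import diff, simplify
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

Fx, Fy, Fz = x*y, y*z, z*x          # arbitrary example components
F = Fx*N.i + Fy*N.j + Fz*N.k

c = curl(F)
# components of the determinant expansion
assert simplify(c.dot(N.i) - (diff(Fz, y) - diff(Fy, z))) == 0
assert simplify(c.dot(N.j) - (diff(Fx, z) - diff(Fz, x))) == 0
assert simplify(c.dot(N.k) - (diff(Fy, x) - diff(Fx, y))) == 0
```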

For a tensor field $\mathbf{T}$ of order k > 1, the tensor field $\nabla\times\mathbf{T}$ of order k is defined by the recursive relation $(\nabla\times\mathbf{T})\cdot\mathbf{C} = \nabla\times(\mathbf{T}\cdot\mathbf{C})$ where $\mathbf{C}$ is an arbitrary constant vector.

A tensor field of order greater than one may be decomposed into a sum of outer products, and then the following identity may be used:

$$\nabla\times\left(\mathbf{A}\otimes\mathbf{T}\right) = (\nabla\times\mathbf{A})\otimes\mathbf{T} - \mathbf{A}\times(\nabla\mathbf{T}).$$

Specifically, for the outer product of two vectors,[3]

$$\nabla\times\left(\mathbf{A}\mathbf{B}^{\mathsf{T}}\right) = (\nabla\times\mathbf{A})\mathbf{B}^{\mathsf{T}} - \mathbf{A}\times(\nabla\mathbf{B}).$$

Laplacian

Main article: Laplace operator

In Cartesian coordinates, the Laplacian of a function $f(x,y,z)$ is

$$\Delta f = \nabla^2 f = (\nabla\cdot\nabla)f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.$$

The Laplacian measures how much the value of the function at a point deviates from its average over a small sphere centered at that point.

When the Laplacian is equal to 0, the function is called a harmonic function; that is, $\Delta f = 0$.
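Both the Cartesian formula and the harmonic-function condition can be checked symbolically; this SymPy sketch (arbitrary example fields, added for illustration) computes the Laplacian as the divergence of the gradient:

```python
from sympy import sin, diff, simplify
from sympy.vector import CoordSys3D, gradient, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x**2 * y + sin(z)                 # arbitrary example field
lap = divergence(gradient(f))         # Laplacian as div(grad f)
assert simplify(lap - (diff(f, x, 2) + diff(f, y, 2) + diff(f, z, 2))) == 0

# f = x**2 - y**2 is harmonic: its Laplacian vanishes identically
assert divergence(gradient(x**2 - y**2)) == 0
```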

For a tensor field $\mathbf{T}$, the Laplacian is generally written as $\Delta\mathbf{T} = \nabla^2\mathbf{T} = (\nabla\cdot\nabla)\mathbf{T}$ and is a tensor field of the same order.

For a tensor field $\mathbf{T}$ of order k > 0, the tensor field $\nabla^2\mathbf{T}$ of order k is defined by the recursive relation $\left(\nabla^2\mathbf{T}\right)\cdot\mathbf{C} = \nabla^2(\mathbf{T}\cdot\mathbf{C})$ where $\mathbf{C}$ is an arbitrary constant vector.

Special notations

In Feynman subscript notation,

$$\nabla_{\mathbf{B}}\left(\mathbf{A}\cdot\mathbf{B}\right) = \mathbf{A}\times(\nabla\times\mathbf{B}) + (\mathbf{A}\cdot\nabla)\mathbf{B}$$

where the notation $\nabla_{\mathbf{B}}$ means the subscripted gradient operates only on the factor B.[4][5][6]

More general but similar is the Hestenes overdot notation in geometric algebra.[7][8] The above identity is then expressed as:

$$\dot{\nabla}\left(\mathbf{A}\cdot\dot{\mathbf{B}}\right) = \mathbf{A}\times(\nabla\times\mathbf{B}) + (\mathbf{A}\cdot\nabla)\mathbf{B}$$

where overdots define the scope of the vector derivative. The dotted vector, in this case B, is differentiated, while the (undotted) A is held constant.

The utility of the Feynman subscript notation lies in its use in the derivation of vector and tensor derivative identities, as in the following example, which uses the algebraic identity C⋅(A×B) = (C×A)⋅B:

$$\begin{aligned}\nabla\cdot(\mathbf{A}\times\mathbf{B}) &= \nabla_{\mathbf{A}}\cdot(\mathbf{A}\times\mathbf{B}) + \nabla_{\mathbf{B}}\cdot(\mathbf{A}\times\mathbf{B})\\ &= (\nabla_{\mathbf{A}}\times\mathbf{A})\cdot\mathbf{B} + (\nabla_{\mathbf{B}}\times\mathbf{A})\cdot\mathbf{B}\\ &= (\nabla_{\mathbf{A}}\times\mathbf{A})\cdot\mathbf{B} - (\mathbf{A}\times\nabla_{\mathbf{B}})\cdot\mathbf{B}\\ &= (\nabla_{\mathbf{A}}\times\mathbf{A})\cdot\mathbf{B} - \mathbf{A}\cdot(\nabla_{\mathbf{B}}\times\mathbf{B})\\ &= (\nabla\times\mathbf{A})\cdot\mathbf{B} - \mathbf{A}\cdot(\nabla\times\mathbf{B})\end{aligned}$$

An alternative method is to use the Cartesian components of the del operator as follows (with implicit summation over the index i):

$$\begin{aligned}\nabla\cdot(\mathbf{A}\times\mathbf{B}) &= \mathbf{e}_i\partial_i\cdot(\mathbf{A}\times\mathbf{B})\\ &= \mathbf{e}_i\cdot\partial_i(\mathbf{A}\times\mathbf{B})\\ &= \mathbf{e}_i\cdot\left(\partial_i\mathbf{A}\times\mathbf{B} + \mathbf{A}\times\partial_i\mathbf{B}\right)\\ &= \mathbf{e}_i\cdot(\partial_i\mathbf{A}\times\mathbf{B}) + \mathbf{e}_i\cdot(\mathbf{A}\times\partial_i\mathbf{B})\\ &= (\mathbf{e}_i\times\partial_i\mathbf{A})\cdot\mathbf{B} + (\mathbf{e}_i\times\mathbf{A})\cdot\partial_i\mathbf{B}\\ &= (\mathbf{e}_i\times\partial_i\mathbf{A})\cdot\mathbf{B} - (\mathbf{A}\times\mathbf{e}_i)\cdot\partial_i\mathbf{B}\\ &= (\mathbf{e}_i\times\partial_i\mathbf{A})\cdot\mathbf{B} - \mathbf{A}\cdot(\mathbf{e}_i\times\partial_i\mathbf{B})\\ &= (\mathbf{e}_i\partial_i\times\mathbf{A})\cdot\mathbf{B} - \mathbf{A}\cdot(\mathbf{e}_i\partial_i\times\mathbf{B})\\ &= (\nabla\times\mathbf{A})\cdot\mathbf{B} - \mathbf{A}\cdot(\nabla\times\mathbf{B})\end{aligned}$$
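The end result of both derivations, ∇⋅(A×B) = (∇×A)⋅B − A⋅(∇×B), can be confirmed symbolically; the sketch below (an illustration with arbitrarily chosen fields) uses SymPy:

```python
from sympy import simplify
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# arbitrary example fields
A = x*y*N.i + y*z*N.j + z*x*N.k
B = z**2*N.i + x**2*N.j + y**2*N.k

lhs = divergence(A.cross(B))
rhs = curl(A).dot(B) - A.dot(curl(B))
assert simplify(lhs - rhs) == 0
```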

Another method of deriving vector and tensor derivative identities is to replace all occurrences of a vector in an algebraic identity by the del operator, provided that no variable occurs both inside and outside the scope of an operator, or both inside the scope of one operator in a term and outside the scope of another operator in the same term (i.e., the operators must be nested). The validity of this rule follows from the validity of the Feynman method, for one may always substitute a subscripted del and then immediately drop the subscript under the condition of the rule.

For example, from the identity A⋅(B×C) = (A×B)⋅C we may derive A⋅(∇×C) = (A×∇)⋅C but not ∇⋅(B×C) = (∇×B)⋅C, nor from A⋅(B×A) = 0 may we derive A⋅(∇×A) = 0. On the other hand, a subscripted del operates on all occurrences of the subscript in the term, so that A⋅(∇_A×A) = ∇_A⋅(A×A) = ∇⋅(A×A) = 0. Also, from A×(A×C) = A(A⋅C) − (A⋅A)C we may derive ∇×(∇×C) = ∇(∇⋅C) − ∇²C, but from (Aψ)⋅(Aφ) = (A⋅A)(ψφ) we may not derive (∇ψ)⋅(∇φ) = ∇²(ψφ).

A subscript c on a quantity indicates that it is temporarily considered to be a constant. Since a constant is not a variable, it may, unlike a variable, be moved into or out of the scope of a del operator when the substitution rule (see the preceding paragraph) is used, as in the following example:[9]

$$\begin{aligned}\nabla\cdot(\mathbf{A}\times\mathbf{B}) &= \nabla\cdot(\mathbf{A}\times\mathbf{B}_{\mathrm{c}}) + \nabla\cdot(\mathbf{A}_{\mathrm{c}}\times\mathbf{B})\\ &= \nabla\cdot(\mathbf{A}\times\mathbf{B}_{\mathrm{c}}) - \nabla\cdot(\mathbf{B}\times\mathbf{A}_{\mathrm{c}})\\ &= (\nabla\times\mathbf{A})\cdot\mathbf{B}_{\mathrm{c}} - (\nabla\times\mathbf{B})\cdot\mathbf{A}_{\mathrm{c}}\\ &= (\nabla\times\mathbf{A})\cdot\mathbf{B} - (\nabla\times\mathbf{B})\cdot\mathbf{A}\end{aligned}$$

Another way to indicate that a quantity is a constant is to affix it as a subscript to the scope of a del operator, as follows:[10]

$$\nabla\left(\mathbf{A}\cdot\mathbf{B}\right)_{\mathbf{A}} = \mathbf{A}\times(\nabla\times\mathbf{B}) + (\mathbf{A}\cdot\nabla)\mathbf{B}$$

For the remainder of this article, Feynman subscript notation will be used where appropriate.

First derivative identities

For scalar fields $\psi$, $\phi$ and vector fields $\mathbf{A}$, $\mathbf{B}$, we have the following derivative identities.

Distributive properties

First derivative associative properties

Product rule for multiplication by a scalar

We have the following generalizations of the product rule in single-variable calculus.

Quotient rule for division by a scalar

Chain rule

Let $f(x)$ be a one-variable function from scalars to scalars, $\mathbf{r}(t) = (x_1(t),\ldots,x_n(t))$ a parametrized curve, $\phi:\mathbb{R}^n\to\mathbb{R}$ a function from vectors to scalars, and $\mathbf{A}:\mathbb{R}^n\to\mathbb{R}^n$ a vector field. We have the following special cases of the multi-variable chain rule.

For a vector transformation $\mathbf{x}:\mathbb{R}^n\to\mathbb{R}^n$ we have:

$$\nabla\cdot(\mathbf{A}\circ\mathbf{x}) = \mathrm{tr}\left((\nabla\mathbf{x})\cdot(\nabla\mathbf{A}\circ\mathbf{x})\right)$$

Here we take the trace of the dot product of two second-order tensors, which corresponds to the product of their matrices.
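This trace formula can be checked in matrix form with SymPy; the sketch below (an added illustration, with an arbitrarily chosen two-dimensional transformation and field) uses the convention $(\nabla\mathbf{x})_{ij} = \partial x_j/\partial u_i$ from the Jacobian definition above:

```python
from sympy import symbols, sin, simplify, Matrix

u1, u2 = symbols('u1 u2')     # coordinates of the argument of x
s1, s2 = symbols('s1 s2')     # coordinates of the argument of A

# example transformation x: R^2 -> R^2 and vector field A: R^2 -> R^2
xmap = Matrix([u1**2 + u2, sin(u1*u2)])
A = Matrix([s1*s2, s1 + s2**2])

# left side: divergence of the composition A o x, differentiated directly
comp = A.subs({s1: xmap[0], s2: xmap[1]})
lhs = comp[0].diff(u1) + comp[1].diff(u2)

# right side: tr((grad x) . (grad A o x)), with (grad x)_ij = d x_j / d u_i
grad_x = xmap.jacobian([u1, u2]).T
grad_A = A.jacobian([s1, s2]).T
rhs = (grad_x * grad_A.subs({s1: xmap[0], s2: xmap[1]})).trace()

assert simplify(lhs - rhs) == 0
```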

Dot product rule

$$\begin{aligned}\nabla(\mathbf{A}\cdot\mathbf{B}) &= (\mathbf{A}\cdot\nabla)\mathbf{B} + (\mathbf{B}\cdot\nabla)\mathbf{A} + \mathbf{A}\times(\nabla\times\mathbf{B}) + \mathbf{B}\times(\nabla\times\mathbf{A})\\ &= \mathbf{A}\cdot\mathbf{J}_{\mathbf{B}} + \mathbf{B}\cdot\mathbf{J}_{\mathbf{A}} = (\nabla\mathbf{B})\cdot\mathbf{A} + (\nabla\mathbf{A})\cdot\mathbf{B}\end{aligned}$$

where $\mathbf{J}_{\mathbf{A}} = (\nabla\mathbf{A})^{\mathsf{T}} = (\partial A_i/\partial x_j)_{ij}$ denotes the Jacobian matrix of the vector field $\mathbf{A} = (A_1,\ldots,A_n)$.

Alternatively, using Feynman subscript notation,

$$\nabla(\mathbf{A}\cdot\mathbf{B}) = \nabla_{\mathbf{A}}(\mathbf{A}\cdot\mathbf{B}) + \nabla_{\mathbf{B}}(\mathbf{A}\cdot\mathbf{B}).$$

See these notes.[11]

As a special case, whenA =B,

$$\tfrac{1}{2}\nabla\left(\mathbf{A}\cdot\mathbf{A}\right) = \mathbf{A}\cdot\mathbf{J}_{\mathbf{A}} = (\nabla\mathbf{A})\cdot\mathbf{A} = (\mathbf{A}\cdot\nabla)\mathbf{A} + \mathbf{A}\times(\nabla\times\mathbf{A}) = A\nabla A.$$
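The four-term dot product rule can be verified componentwise with SymPy; in this sketch (an added illustration with arbitrary example fields), the directional derivative (A⋅∇)B is computed component by component:

```python
from sympy import simplify
from sympy.vector import CoordSys3D, Vector, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
basis = (N.i, N.j, N.k)

def directional(A, B):
    """(A.del)B, computed componentwise: sum over e of (A . grad(B.e)) e."""
    return sum((A.dot(gradient(B.dot(e))) * e for e in basis), Vector.zero)

# arbitrary example fields
A = x*y*N.i + y*z*N.j + z*x*N.k
B = z**2*N.i + x**2*N.j + y**2*N.k

lhs = gradient(A.dot(B))
rhs = (directional(A, B) + directional(B, A)
       + A.cross(curl(B)) + B.cross(curl(A)))
for e in basis:
    assert simplify((lhs - rhs).dot(e)) == 0
```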

The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form.

Cross product rule

Note that the matrix $\mathbf{J}_{\mathbf{B}} - \mathbf{J}_{\mathbf{B}}^{\mathsf{T}}$ is antisymmetric.

Second derivative identities

Divergence of curl is zero

The divergence of the curl of any continuously twice-differentiable vector field A is always zero:

$$\nabla\cdot(\nabla\times\mathbf{A}) = 0$$

This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex.
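A quick symbolic check of this identity (an added illustration; the field is an arbitrary smooth example):

```python
from sympy import sin, exp, simplify
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# any twice continuously differentiable field will do
A = x**2*y*N.i + sin(y*z)*N.j + x*exp(z)*N.k

assert simplify(divergence(curl(A))) == 0
```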

Divergence of gradient is Laplacian

The Laplacian of a scalar field is the divergence of its gradient: $\Delta\psi = \nabla^2\psi = \nabla\cdot(\nabla\psi)$. The result is a scalar quantity.

Divergence of divergence is not defined

The divergence of a vector field A is a scalar, and the divergence of a scalar quantity is undefined. Therefore, $\nabla\cdot(\nabla\cdot\mathbf{A})$ is undefined.

Curl of gradient is zero

The curl of the gradient of any continuously twice-differentiable scalar field $\varphi$ (i.e., differentiability class $C^2$) is always the zero vector: $\nabla\times(\nabla\varphi) = \mathbf{0}.$

It can easily be proved by expressing $\nabla\times(\nabla\varphi)$ in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). This result is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex.
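The Cartesian-coordinate proof amounts to mixed partials cancelling in pairs, which a symbolic check makes concrete (an added illustration with an arbitrary C² scalar field):

```python
from sympy import sin, exp, simplify
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

phi = x**2 * sin(y) + x * exp(z)     # arbitrary example scalar field
c = curl(gradient(phi))

# each component is a difference of equal mixed partials
for e in (N.i, N.j, N.k):
    assert simplify(c.dot(e)) == 0
```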

Curl of curl

$$\nabla\times\left(\nabla\times\mathbf{A}\right) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A}$$

Here ∇² is the vector Laplacian operating on the vector field A.
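In Cartesian coordinates the vector Laplacian acts componentwise, which lets the curl-of-curl identity be checked symbolically (an added illustration; the field is an arbitrary example):

```python
from sympy import simplify
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

A = x**2*y*N.i + y**2*z*N.j + z**2*x*N.k   # arbitrary example field

lhs = curl(curl(A))

# vector Laplacian, componentwise in Cartesian coordinates
lap = sum((divergence(gradient(A.dot(e))) * e for e in (N.i, N.j, N.k)),
          Vector.zero)
rhs = gradient(divergence(A)) - lap

for e in (N.i, N.j, N.k):
    assert simplify((lhs - rhs).dot(e)) == 0
```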

Curl of divergence is not defined

The divergence of a vector field A is a scalar, and the curl of a scalar quantity is undefined. Therefore, $\nabla\times(\nabla\cdot\mathbf{A})$ is undefined.

Second derivative associative properties

DCG chart: Some rules for second derivatives.

A mnemonic

The DCG chart is a mnemonic for some of these identities. The abbreviations used are:

  • D: divergence,
  • C: curl,
  • G: gradient,
  • L: Laplacian,
  • CC: curl of curl.

Each arrow is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist.

Summary of important identities

Differentiation

Gradient

Divergence

Curl

Vector-dot-Del Operator

Second derivatives

Third derivatives

Integration

Below, the curly symbol ∂ means "boundary of" a surface or solid.

Surface–volume integrals

In the following surface–volume integral theorems, V denotes a three-dimensional volume with a corresponding two-dimensional boundary S = ∂V (a closed surface):
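One such theorem, the divergence theorem, can be verified symbolically on a simple region; the sketch below (an added illustration with an arbitrary field, using the unit cube [0, 1]³) compares the volume integral of ∇⋅F with the net outward flux:

```python
from sympy import symbols, integrate

x, y, z = symbols('x y z')

F = [x*y, y*z, z*x]                       # arbitrary example field
divF = F[0].diff(x) + F[1].diff(y) + F[2].diff(z)

# volume side over the unit cube [0,1]^3
vol = integrate(divF, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# surface side: net outward flux through the six faces
flux  = integrate(F[0].subs(x, 1) - F[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
flux += integrate(F[1].subs(y, 1) - F[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
flux += integrate(F[2].subs(z, 1) - F[2].subs(z, 0), (x, 0, 1), (y, 0, 1))

assert vol == flux
```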

Curve–surface integrals

In the following curve–surface integral theorems, S denotes a 2d open surface with a corresponding 1d boundary C = ∂S (a closed curve):

Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral):

$$\oint_{\partial S}\mathbf{A}\cdot d{\boldsymbol{\ell}}\ \text{(clockwise)} = -\oint_{\partial S}\mathbf{A}\cdot d{\boldsymbol{\ell}}\ \text{(counterclockwise)}.$$

Endpoint-curve integrals

In the following endpoint–curve integral theorems, P denotes a 1d open path with signed 0d boundary points $\mathbf{q} - \mathbf{p} = \partial P$, and integration along P is from $\mathbf{p}$ to $\mathbf{q}$:

Tensor integrals

A tensor form of a vector integral theorem may be obtained by replacing the vector (or one of them) by a tensor, provided that the vector is first made to appear only as the right-most vector of each integrand. For example, Stokes' theorem becomes[18]

$$\oint_{\partial S}d{\boldsymbol{\ell}}\cdot\mathbf{T} = \iint_{S}d\mathbf{S}\cdot\left(\nabla\times\mathbf{T}\right).$$

A scalar field may also be treated as a vector and replaced by a vector or tensor. For example, Green's first identity becomes

$$\oiint_{\partial V}\psi\,d\mathbf{S}\cdot\nabla\mathbf{A} = \iiint_{V}\left(\psi\nabla^{2}\mathbf{A} + \nabla\psi\cdot\nabla\mathbf{A}\right)dV.$$

Similar rules apply to algebraic and differentiation formulas. For algebraic formulas one may alternatively use the left-most vector position.


References
  1. Wilson, p. 404.
  2. Wilson, p. 407.
  3. Wilson, p. 407.
  4. Coffin, Joseph George (1911). Vector Analysis. New York: John Wiley & Sons, Inc. pp. 105–106, 120–123.
  5. Feynman, R. P.; Leighton, R. B.; Sands, M. (1964). The Feynman Lectures on Physics. Addison-Wesley. Vol. II, pp. 27–4, 5. ISBN 0-8053-9049-9.
  6. Kholmetskii, A. L.; Missevitch, O. V. (2005). "The Faraday induction law in relativity theory". p. 4. arXiv:physics/0504223.
  7. Coffin, pp. 227–228.
  8. Doran, C.; Lasenby, A. (2003). Geometric Algebra for Physicists. Cambridge University Press. p. 169. ISBN 978-0-521-71595-9.
  9. Borisenko, A. I.; Tarapov, I. E. (1968). Vector and Tensor Analysis. New York: Dover Publications, Inc. pp. 170, 180.
  10. Wilson, Edwin Bidwell (1901). Vector Analysis. New York: Charles Scribner's Sons. pp. 159, 161–162.
  11. Kelly, P. (2013). "Chapter 1.14 Tensor Calculus 1: Tensor Fields". Mechanics Lecture Notes Part III: Foundations of Continuum Mechanics. University of Auckland. Archived from the original (PDF) on 3 December 2017. Retrieved 7 December 2017.
  12. "lecture15.pdf" (PDF).
  13. Kuo, Kenneth K.; Acharya, Ragini (2012). Applications of Turbulent and Multi-phase Combustion. Hoboken, N.J.: Wiley. p. 520. doi:10.1002/9781118127575.app1. ISBN 9781118127575. Archived from the original on 19 April 2021. Retrieved 19 April 2020.
  14. Page and Adams, pp. 65–66.
  15. Wangsness, Roald K.; Cloud, Michael J. (1986). Electromagnetic Fields (2nd ed.). Wiley. ISBN 978-0-471-81186-2.
  16. Page, Leigh; Adams, Norman Ilsley, Jr. (1940). Electrodynamics. New York: D. Van Nostrand Company, Inc. pp. 44–45, Eq. (18-3).
  17. Pérez-Garrido, Antonio (2024). "Recovering seldom-used theorems of vector calculus and their application to problems of electromagnetism". American Journal of Physics. 92 (5): 354–359. arXiv:2312.17268. Bibcode:2024AmJPh..92e.354P. doi:10.1119/5.0182191.
  18. Wilson, p. 409.
