Elasticity/Tensors

From Wikiversity

Tensors in Solid Mechanics


A sound understanding of tensors and tensor operations is essential if you want to read and understand modern papers on solid mechanics and finite element modeling of complex material behavior. This brief introduction gives you an overview of tensors and tensor notation. For more details you can read A Brief on Tensor Analysis by J. G. Simmonds, the appendix on vector and tensor notation from Dynamics of Polymeric Liquids - Volume 1 by R. B. Bird, R. C. Armstrong, and O. Hassager, and the monograph by R. M. Brannon. An introduction to tensors in continuum mechanics can be found in An Introduction to Continuum Mechanics by M. E. Gurtin. Most of the material in this page is based on these sources.

Notation


The following notation is usually used in the literature:

\begin{aligned}
s &= \text{scalar (lightface italic small)}\\
\mathbf{v} &= \text{vector (boldface roman small)}\\
\boldsymbol{\sigma} &= \text{second-order tensor (boldface Greek)}\\
\boldsymbol{A} &= \text{third-order tensor (boldface italic capital)}\\
\boldsymbol{\mathsf{A}} &= \text{fourth-order tensor (sans-serif capital)}
\end{aligned}

Motivation


A force f has a magnitude and a direction; it can be added to another force, multiplied by a scalar, and so on. These properties make the force f a vector.

Similarly, the displacement u is a vector because it can be added to other displacements and satisfies the other properties of a vector.

However, a force cannot be added to a displacement to yield a physically meaningful quantity. So the physical spaces in which these two quantities live must be different.

Recall that a constant force f moving through a displacement u does f · u units of work. How do we compute this product when the spaces of f and u are different? If you try to compute the product on a graph, you will have to convert both quantities to a single basis and then compute the scalar product.

An alternative way of thinking about the operation f · u is to regard f as a linear operator that acts on u to produce a scalar quantity (work). In the notation of sets we can write

\mathbf{f} \bullet \mathbf{u} \quad \equiv \quad \mathbf{f} : \mathbf{u} \rightarrow \mathbb{R}~.

A first order tensor is a linear operator that sends vectors to scalars.

Next, assume that the force f acts at a point x. The moment of the force about the origin is given by x × f, which is a vector. The vector product can be thought of as a linear operation too. In this case the effect of the operator is to convert a vector into another vector.

A second order tensor is a linear operator that sends vectors to vectors.

According to Simmonds, "the name tensor comes from elasticity theory where in a loaded elastic body the stress tensor acting on a unit vector normal to a plane through a point delivers the tension (i.e., the force per unit area) acting across the plane at that point."

Examples of second order tensors are the stress tensor, the deformation gradient tensor, the velocity gradient tensor, and so on.

Another type of tensor that we encounter frequently in mechanics is the fourth order tensor that takes strains to stresses. In elasticity, this is the stiffness tensor.

A fourth order tensor is a linear operator that sends second order tensors to second order tensors.

Tensor algebra


A tensor A is a linear transformation from a vector space V to V. Thus, we can write

\boldsymbol{A} : \mathbf{u} \in \mathcal{V} \rightarrow \mathbf{v} \in \mathcal{V}~.

More often, we use the following notation:

\mathbf{v} = \boldsymbol{A}\,\mathbf{u} \equiv \boldsymbol{A}(\mathbf{u}) \equiv \boldsymbol{A} \bullet \mathbf{u}~.

I have used the "dot" notation in this handout. None of the above notations is obviously superior to the others and each is used widely.

Addition of tensors


Let A and B be two tensors. Then the sum (A + B) is another tensor C defined by

\boldsymbol{C} = \boldsymbol{A} + \boldsymbol{B} \implies \boldsymbol{C} \bullet \mathbf{v} = (\boldsymbol{A} + \boldsymbol{B}) \bullet \mathbf{v} = \boldsymbol{A} \bullet \mathbf{v} + \boldsymbol{B} \bullet \mathbf{v}~.

Multiplication of a tensor by a scalar


Let A be a tensor and let λ be a scalar. Then the product C = λA is a tensor defined by

\boldsymbol{C} = \lambda \boldsymbol{A} \implies \boldsymbol{C} \bullet \mathbf{v} = (\lambda \boldsymbol{A}) \bullet \mathbf{v} = \lambda\, (\boldsymbol{A} \bullet \mathbf{v})~.

Zero tensor


The zero tensor 0 is the tensor which maps every vector v into the zero vector:

\boldsymbol{\mathit{0}} \bullet \mathbf{v} = \mathbf{0}~.

Identity tensor


The identity tensor I takes every vector v into itself:

\boldsymbol{\mathit{I}} \bullet \mathbf{v} = \mathbf{v}~.

The identity tensor is also often written as 1.

Product of two tensors


Let A and B be two tensors. Then the product C = A · B is the tensor defined by

\boldsymbol{C} = \boldsymbol{A} \bullet \boldsymbol{B} \implies \boldsymbol{C} \bullet \mathbf{v} = (\boldsymbol{A} \bullet \boldsymbol{B}) \bullet \mathbf{v} = \boldsymbol{A} \bullet (\boldsymbol{B} \bullet \mathbf{v})~.

In general, A · B ≠ B · A.

Transpose of a tensor


The transpose of a tensor A is the unique tensor A^T defined by

(\boldsymbol{A} \bullet \mathbf{u}) \bullet \mathbf{v} = \mathbf{u} \bullet (\boldsymbol{A}^T \bullet \mathbf{v})~.

The following identities follow from the above definition:

\begin{aligned}
(\boldsymbol{A} + \boldsymbol{B})^T &= \boldsymbol{A}^T + \boldsymbol{B}^T~,\\
(\boldsymbol{A} \bullet \boldsymbol{B})^T &= \boldsymbol{B}^T \bullet \boldsymbol{A}^T~,\\
(\boldsymbol{A}^T)^T &= \boldsymbol{A}~.
\end{aligned}

Symmetric and skew tensors


A tensor A is symmetric if

\boldsymbol{A} = \boldsymbol{A}^T~.

A tensor A is skew if

\boldsymbol{A} = -\boldsymbol{A}^T~.

Every tensor A can be expressed uniquely as the sum of a symmetric tensor E (the symmetric part of A) and a skew tensor W (the skew part of A):

\boldsymbol{A} = \boldsymbol{E} + \boldsymbol{W}~; \quad \boldsymbol{E} = \cfrac{\boldsymbol{A} + \boldsymbol{A}^T}{2}~; \quad \boldsymbol{W} = \cfrac{\boldsymbol{A} - \boldsymbol{A}^T}{2}~.

Tensor product of two vectors


The tensor (or dyadic) product ab (also written a ⊗ b) of two vectors a and b is the tensor that assigns to each vector v the vector (b · v) a:

(\mathbf{a}\mathbf{b}) \bullet \mathbf{v} = (\mathbf{a} \otimes \mathbf{b}) \bullet \mathbf{v} = (\mathbf{b} \bullet \mathbf{v})\,\mathbf{a}~.

Notice that all the above operations on tensors are remarkably similar to matrix operations.
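Because of this analogy, the definitions above can be checked numerically in a Cartesian basis, where every second-order tensor is represented by a 3×3 matrix. The sketch below is an illustration added here (it assumes NumPy, which the original sources do not use):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
v = rng.standard_normal(3)

# Addition and scalar multiplication: (A + B).v = A.v + B.v, (2A).v = 2(A.v)
assert np.allclose((A + B) @ v, A @ v + B @ v)
assert np.allclose((2.0 * A) @ v, 2.0 * (A @ v))

# Product of tensors is composition: (A.B).v = A.(B.v)
assert np.allclose((A @ B) @ v, A @ (B @ v))

# Transpose: (A.u).v = u.(A^T.v), and (A.B)^T = B^T.A^T
assert np.isclose((A @ u) @ v, u @ (A.T @ v))
assert np.allclose((A @ B).T, B.T @ A.T)

# Unique symmetric/skew split: A = E + W
E = 0.5 * (A + A.T)
W = 0.5 * (A - A.T)
assert np.allclose(E, E.T) and np.allclose(W, -W.T) and np.allclose(E + W, A)

# Dyadic product: (a (x) b).v = (b.v) a
a, b = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(np.outer(a, b) @ v, (b @ v) * a)
```

Every assertion mirrors one of the coordinate-free definitions in this section.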

Spectral theorem


The spectral theorem for tensors is widely used in mechanics. We will start by defining eigenvalues and eigenvectors.

Eigenvalues and eigenvectors


Let S be a second order tensor. Let λ be a scalar and n a vector such that

\boldsymbol{S} \cdot \mathbf{n} = \lambda~\mathbf{n}~.

Then λ is called an eigenvalue of S and n is an eigenvector.

A second order tensor has three eigenvalues and three eigenvectors, since the space is three-dimensional. Some of the eigenvalues might be repeated. The number of times an eigenvalue is repeated is called its multiplicity.

In mechanics, many second order tensors are symmetric and positive definite. Note the following important properties of such tensors:

  1. If S is positive definite, then λ > 0.
  2. If S is symmetric, the eigenvectors n are mutually orthogonal.

For more on eigenvalues and eigenvectors see Applied linear operators and spectral methods.

Spectral theorem


Let S be a symmetric second-order tensor. Then

  1. the normalized eigenvectors n_1, n_2, n_3 form an orthonormal basis;
  2. if λ_1, λ_2, λ_3 are the corresponding eigenvalues, then

\boldsymbol{S} = \sum_{i=1}^{3} \lambda_i\, \mathbf{n}_i \otimes \mathbf{n}_i~.

This relation is called the spectral decomposition of S.
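The spectral decomposition can be verified numerically. The following sketch (added for illustration; assumes NumPy) rebuilds a symmetric tensor from its eigenvalues and eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
S = 0.5 * (X + X.T)           # a symmetric second-order tensor

# eigh returns eigenvalues and orthonormal eigenvectors (as columns)
lam, n = np.linalg.eigh(S)

# The normalized eigenvectors form an orthonormal basis ...
assert np.allclose(n.T @ n, np.eye(3))

# ... and S = sum_i lam_i n_i (x) n_i recovers the tensor
S_rebuilt = sum(lam[i] * np.outer(n[:, i], n[:, i]) for i in range(3))
assert np.allclose(S_rebuilt, S)
```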

Polar decomposition theorem


Let F be a second order tensor with det F > 0. Then

  1. there exist positive definite, symmetric tensors U, V and a rotation (orthogonal) tensor R such that F = R · U = V · R;
  2. each of these decompositions is unique.
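One way to compute the decomposition, sketched below for illustration (assumes NumPy), uses the spectral decomposition of the previous section: U is the symmetric square root of F^T · F, and R = F · U^{-1}.

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:      # the theorem requires det F > 0
    F[:, 0] = -F[:, 0]

# Right stretch U = sqrt(F^T.F) via the spectral decomposition of C = F^T.F
C = F.T @ F
lam, n = np.linalg.eigh(C)
U = sum(np.sqrt(lam[i]) * np.outer(n[:, i], n[:, i]) for i in range(3))

R = F @ np.linalg.inv(U)      # rotation
V = F @ R.T                   # left stretch, from F = R.U = V.R

assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(U, U.T) and np.all(np.linalg.eigvalsh(U) > 0)
assert np.allclose(R @ U, F) and np.allclose(V @ R, F)
```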

Principal invariants of a tensor


Let S be a second order tensor. Then the determinant of S − λ I can be expressed as

\det(\boldsymbol{S} - \lambda~\boldsymbol{\mathit{I}}) = -\lambda^3 + I_1(\boldsymbol{S})~\lambda^2 - I_2(\boldsymbol{S})~\lambda + I_3(\boldsymbol{S})~.

The quantities I_1, I_2, I_3 are called the principal invariants of S. They are given by

I_1(\boldsymbol{S}) = \text{Tr}(\boldsymbol{S})~; \quad I_2(\boldsymbol{S}) = \tfrac{1}{2}\left[(\text{Tr}\,\boldsymbol{S})^2 - \text{Tr}(\boldsymbol{S}^2)\right]~; \quad I_3(\boldsymbol{S}) = \det(\boldsymbol{S})~.

Note that λ is an eigenvalue of S if and only if

\det(\boldsymbol{S} - \lambda~\boldsymbol{\mathit{I}}) = 0~.

The resulting equation is called the characteristic equation and is usually written in expanded form as

\lambda^3 - I_1(\boldsymbol{S})~\lambda^2 + I_2(\boldsymbol{S})~\lambda - I_3(\boldsymbol{S}) = 0~.

Cayley-Hamilton theorem


The Cayley-Hamilton theorem is a very useful result in continuum mechanics. It states that a second order tensor S satisfies its own characteristic equation, i.e.,

\boldsymbol{S}^3 - I_1(\boldsymbol{S})~\boldsymbol{S}^2 + I_2(\boldsymbol{S})~\boldsymbol{S} - I_3(\boldsymbol{S})~\boldsymbol{\mathit{I}} = \boldsymbol{\mathit{0}}~.
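The Cayley-Hamilton theorem says that a tensor satisfies its own characteristic equation, S³ − I₁S² + I₂S − I₃I = 0. A quick numerical check (an illustration added here; assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 3))
I = np.eye(3)

# Principal invariants of S
I1 = np.trace(S)
I2 = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# S satisfies its own characteristic equation:
# S^3 - I1 S^2 + I2 S - I3 I = 0
residual = S @ S @ S - I1 * (S @ S) + I2 * S - I3 * I
assert np.allclose(residual, np.zeros((3, 3)))
```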

Index notation


All the equations so far have made no mention of the coordinate system. When we use vectors and tensors in computations we have to express them in some coordinate system (basis) and use the components of the object in that basis for our computations.

Commonly used bases are the Cartesian coordinate frame, the cylindrical coordinate frame, and the spherical coordinate frame.

A Cartesian coordinate frame consists of an orthonormal basis (e_1, e_2, e_3) together with a point o called the origin. Since these vectors are mutually perpendicular, we have the following relations:

(1) \begin{aligned}
\mathbf{e}_1 \bullet \mathbf{e}_1 &= 1~; ~~ \mathbf{e}_1 \bullet \mathbf{e}_2 = 0~; ~~ \mathbf{e}_1 \bullet \mathbf{e}_3 = 0~;\\
\mathbf{e}_2 \bullet \mathbf{e}_1 &= 0~; ~~ \mathbf{e}_2 \bullet \mathbf{e}_2 = 1~; ~~ \mathbf{e}_2 \bullet \mathbf{e}_3 = 0~;\\
\mathbf{e}_3 \bullet \mathbf{e}_1 &= 0~; ~~ \mathbf{e}_3 \bullet \mathbf{e}_2 = 0~; ~~ \mathbf{e}_3 \bullet \mathbf{e}_3 = 1~.
\end{aligned}

Kronecker delta


To make the above relations more compact, we introduce the Kronecker delta symbol

\delta_{ij} = \begin{cases} 1 & \text{if}~i = j~, \\ 0 & \text{if}~i \neq j~. \end{cases}

Then, instead of the nine equations in (1) we can write (in index notation)

\mathbf{e}_i \bullet \mathbf{e}_j = \delta_{ij}~.

Einstein summation convention


Recall that the vector u can be written as

(2) \mathbf{u} = u_1 \mathbf{e}_1 + u_2 \mathbf{e}_2 + u_3 \mathbf{e}_3 = \sum_{i=1}^{3} u_i \mathbf{e}_i~.

In index notation, equation (2) can be written as

\mathbf{u} = u_i \mathbf{e}_i~.

This convention is called the Einstein summation convention: a repeated index implies a sum over that index.
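NumPy's `einsum` implements exactly this convention, which makes it convenient for experimenting with index expressions. A small sketch (added for illustration; assumes NumPy):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
e = np.eye(3)                 # Cartesian basis vectors e_1, e_2, e_3 as rows

# u = u_i e_i : the repeated index i is summed over
u_rebuilt = np.einsum('i,ij->j', u, e)
assert np.allclose(u_rebuilt, u)

# Kronecker delta: e_i . e_j = delta_ij
assert np.allclose(np.einsum('ik,jk->ij', e, e), np.eye(3))
```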

Components of a vector


We can write the Cartesian components of a vector u in the basis (e_1, e_2, e_3) as

u_i = \mathbf{e}_i \bullet \mathbf{u}~, \quad i = 1, 2, 3~.

Components of a tensor


Similarly, the components A_ij of a tensor A are defined by

A_{ij} = \mathbf{e}_i \bullet (\boldsymbol{A} \bullet \mathbf{e}_j)~.

Using the definition of the tensor product, we can also write

\boldsymbol{A} = \sum_{i,j=1}^{3} A_{ij}\, \mathbf{e}_i \mathbf{e}_j \equiv \sum_{i,j=1}^{3} A_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j~.

Using the summation convention,

\boldsymbol{A} = A_{ij}\, \mathbf{e}_i \mathbf{e}_j \equiv A_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j~.

In this case, the basis tensors are {e_i ⊗ e_j} and the components are A_ij.

Operation of a tensor on a vector


From the definition of the components of the tensor A, we can also see that (using the summation convention)

\mathbf{v} = \boldsymbol{A} \bullet \mathbf{u} \quad \equiv \quad v_i = A_{ij}\, u_j~.

Dyadic product


Similarly, the dyadic product can be expressed as

(\mathbf{a}\mathbf{b})_{ij} \equiv (\mathbf{a} \otimes \mathbf{b})_{ij} = a_i\, b_j~.
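The component formulas of the last few subsections can be checked directly. The sketch below (added for illustration; assumes NumPy) verifies A_ij = e_i · (A · e_j), v_i = A_ij u_j, and (a ⊗ b)_ij = a_i b_j:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
e = np.eye(3)                 # basis vectors as rows

# Components: A_ij = e_i . (A . e_j)
A_ij = np.array([[e[i] @ (A @ e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(A_ij, A)

# Operation on a vector in index form: v_i = A_ij u_j
v = np.einsum('ij,j->i', A, u)
assert np.allclose(v, A @ u)

# Dyadic product: (a (x) b)_ij = a_i b_j
a, b = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(np.einsum('i,j->ij', a, b), np.outer(a, b))
```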

Matrix notation


We can also write a tensor A in matrix notation as

\boldsymbol{A} = A_{ij}\, \mathbf{e}_i \mathbf{e}_j = A_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j \implies \mathbf{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}~.

Note that the Kronecker delta represents the components of the identity tensor in a Cartesian basis. Therefore, we can write

\boldsymbol{I} = \delta_{ij}\, \mathbf{e}_i \mathbf{e}_j = \delta_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j \implies \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}~.

Tensor inner product


The inner product A : B of two tensors A and B is an operation that generates a scalar. We define (summation implied)

\boldsymbol{A} : \boldsymbol{B} = A_{ij}\, B_{ij}~.

The inner product can also be expressed using the trace:

\boldsymbol{A} : \boldsymbol{B} = \text{Tr}(\boldsymbol{A}^T \bullet \boldsymbol{B})~.

Proof, using the definition of the trace given below:

\begin{aligned}
\text{Tr}(\boldsymbol{A}^T \bullet \boldsymbol{B}) &= \boldsymbol{I} : (\boldsymbol{A}^T \bullet \boldsymbol{B}) = \delta_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j : (A_{lk}\, \mathbf{e}_k \otimes \mathbf{e}_l \bullet B_{mn}\, \mathbf{e}_m \otimes \mathbf{e}_n) = \delta_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j : (A_{mk} B_{mn}\, \mathbf{e}_k \otimes \mathbf{e}_n) \\
&= A_{mk} B_{mn}\, \delta_{ij}\, \delta_{in}\, \delta_{jk} = A_{mk} B_{mi}\, \delta_{ij}\, \delta_{jk} = A_{mk} B_{mj}\, \delta_{jk} = A_{mj} B_{mj} = \boldsymbol{A} : \boldsymbol{B}~.
\end{aligned}

Trace of a tensor


The trace of a tensor is the scalar given by

\text{Tr}(\boldsymbol{A}) = \boldsymbol{I} : \boldsymbol{A} = \delta_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j : A_{mn}\, \mathbf{e}_m \otimes \mathbf{e}_n = \delta_{ij}\, \delta_{im}\, \delta_{jn}\, A_{mn} = A_{ii}~.

The trace of an N × N matrix is the sum of the components on its main diagonal.

Magnitude of a tensor


The magnitude of a tensor A is defined by

\Vert \boldsymbol{A} \Vert = \sqrt{\boldsymbol{A} : \boldsymbol{A}} \equiv \sqrt{A_{ij}\, A_{ij}}~.
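In matrix terms, A : B is the Frobenius inner product and ||A|| the Frobenius norm, so these definitions can be checked against NumPy's built-ins. A sketch added for illustration (assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

inner = np.einsum('ij,ij->', A, B)            # A : B = A_ij B_ij
assert np.isclose(inner, np.trace(A.T @ B))   # A : B = Tr(A^T . B)

trace_A = np.einsum('ii->', A)                # Tr(A) = A_ii
assert np.isclose(trace_A, np.trace(A))

norm_A = np.sqrt(np.einsum('ij,ij->', A, A))  # ||A|| = sqrt(A : A)
assert np.isclose(norm_A, np.linalg.norm(A))  # Frobenius norm
```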

Tensor product of a tensor with a vector


Another tensor operation that is often seen is the cross product of a tensor with a vector. Let A be a tensor and let v be a vector. Then the tensor cross product gives a tensor C defined by

\boldsymbol{C} = \boldsymbol{A} \times \mathbf{v} \implies C_{ij} = e_{klj}\, A_{ik}\, v_l~.

Permutation symbol


The permutation symbol e_ijk is defined as

e_{ijk} = \begin{cases} 1 & \text{if}~ijk = 123, 231,~\text{or}~312~, \\ -1 & \text{if}~ijk = 321, 132,~\text{or}~213~, \\ 0 & \text{if any two indices are alike}~. \end{cases}
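The permutation symbol and the tensor cross product defined above can be built and checked numerically. A sketch added for illustration (assumes NumPy; the sanity check uses the fact that row i of C = A × v is the cross product of row i of A with v):

```python
import numpy as np
from itertools import permutations

# Levi-Civita (permutation) symbol e_ijk; entries not set below stay 0
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    # sign of the permutation (i, j, k) of (0, 1, 2)
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

assert eps[0, 1, 2] == 1 and eps[2, 1, 0] == -1 and eps[0, 0, 1] == 0

# Tensor cross product C = A x v with C_ij = e_klj A_ik v_l
rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
v = rng.standard_normal(3)
C = np.einsum('klj,ik,l->ij', eps, A, v)

# Sanity check: each row of C is the cross product of that row of A with v
assert np.allclose(C, np.cross(A, v))
```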

Identities in tensor algebra


Let A, B and C be three second order tensors. Then

\boldsymbol{A} : (\boldsymbol{B} \cdot \boldsymbol{C}) = (\boldsymbol{C} \cdot \boldsymbol{A}^T) : \boldsymbol{B}^T = (\boldsymbol{B}^T \cdot \boldsymbol{A}) : \boldsymbol{C}~.

Proof:

It is easiest to show these relations by using index notation with respect to an orthonormal basis. Then we can write

\boldsymbol{A} : (\boldsymbol{B} \cdot \boldsymbol{C}) \equiv A_{ij}\, (B_{ik}~C_{kj}) = C_{kj}~A^T_{ji}~B^T_{ki} \equiv (\boldsymbol{C} \cdot \boldsymbol{A}^T) : \boldsymbol{B}^T~.

Similarly,

\boldsymbol{A} : (\boldsymbol{B} \cdot \boldsymbol{C}) \equiv A_{ij}\, (B_{ik}~C_{kj}) = B^T_{ki}~A_{ij}~C_{kj} \equiv (\boldsymbol{B}^T \cdot \boldsymbol{A}) : \boldsymbol{C}~.

Tensor calculus


Recall that the vector differential operator (with respect to a Cartesian basis) is defined as

\boldsymbol{\nabla} = \cfrac{\partial}{\partial x_1}\, \mathbf{e}_1 + \cfrac{\partial}{\partial x_2}\, \mathbf{e}_2 + \cfrac{\partial}{\partial x_3}\, \mathbf{e}_3 \equiv \cfrac{\partial}{\partial x_i}\, \mathbf{e}_i~.

In this section we summarize some operations of ∇ on vectors and tensors.

The gradient of a vector field


The dyadic product ∇v (or ∇ ⊗ v) is called the gradient of the vector field v. The quantity ∇v is a tensor given by

\boldsymbol{\nabla}\mathbf{v} = \sum_i \sum_j \cfrac{\partial v_j}{\partial x_i}\, \mathbf{e}_i \mathbf{e}_j \equiv v_{j,i}\, \mathbf{e}_i \mathbf{e}_j~.

In the alternative dyadic notation,

\boldsymbol{\nabla}\mathbf{v} \equiv \boldsymbol{\nabla} \otimes \mathbf{v} = \sum_i \sum_j \cfrac{\partial v_j}{\partial x_i}\, \mathbf{e}_i \otimes \mathbf{e}_j \equiv v_{j,i}\, \mathbf{e}_i \otimes \mathbf{e}_j~.

Warning: Some authors define the ij component of ∇v as ∂v_i/∂x_j = v_{i,j}.

The divergence of a tensor field


Let A be a tensor field. Then the divergence of the tensor field is a vector ∇ · A given by

\boldsymbol{\nabla} \bullet \boldsymbol{A} = \sum_j \left[\sum_i \cfrac{\partial A_{ij}}{\partial x_i}\right] \mathbf{e}_j \equiv \cfrac{\partial A_{ij}}{\partial x_i}\, \mathbf{e}_j = A_{ij,i}\, \mathbf{e}_j~.

To fix the definition of the divergence of a general tensor field (possibly of order higher than 2), we use the relation

(\boldsymbol{\nabla} \bullet \boldsymbol{A}) \bullet \mathbf{c} = \boldsymbol{\nabla} \bullet (\boldsymbol{A} \bullet \mathbf{c})

where c is an arbitrary constant vector.

The Laplacian of a vector field


The Laplacian of a vector field is given by

\nabla^2 \mathbf{v} = \boldsymbol{\nabla} \bullet \boldsymbol{\nabla}\mathbf{v} = \sum_j \left[\sum_i \cfrac{\partial^2 v_j}{\partial x_i^2}\right] \mathbf{e}_j \equiv v_{j,ii}\, \mathbf{e}_j~.
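The gradient and Laplacian definitions can be checked with finite differences. The sketch below uses a hypothetical polynomial vector field (not from the original page) and the convention (∇v)_ij = ∂v_j/∂x_i adopted above; it assumes NumPy:

```python
import numpy as np

def v(x):
    # a hypothetical smooth vector field, used only for illustration
    return np.array([x[0]**2 * x[1], x[1]**2 * x[2], x[2]**2 * x[0]])

def grad(f, x, h=1e-6):
    # (grad v)_ij = dv_j / dx_i, by central differences
    g = np.zeros((3, 3))
    for i in range(3):
        dx = np.zeros(3); dx[i] = h
        g[i] = (f(x + dx) - f(x - dx)) / (2 * h)
    return g

def laplacian(f, x, h=1e-4):
    # (div grad v)_j = v_j,ii, by second central differences
    lap = np.zeros(3)
    for i in range(3):
        dx = np.zeros(3); dx[i] = h
        lap += (f(x + dx) - 2 * f(x) + f(x - dx)) / h**2
    return lap

x = np.array([1.0, 2.0, 3.0])
G = grad(v, x)

# analytic gradient at x: row i holds (dv_1/dx_i, dv_2/dx_i, dv_3/dx_i)
G_exact = np.array([[2*x[0]*x[1], 0.0,         x[2]**2],
                    [x[0]**2,     2*x[1]*x[2], 0.0],
                    [0.0,         x[1]**2,     2*x[2]*x[0]]])
assert np.allclose(G, G_exact, atol=1e-4)

# analytic Laplacian: (2 x_2, 2 x_3, 2 x_1)
assert np.allclose(laplacian(v, x), [2*x[1], 2*x[2], 2*x[0]], atol=1e-3)
```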

Tensor Identities


Some important identities involving tensors are:

  1. \boldsymbol{\nabla} \bullet \boldsymbol{\nabla}\mathbf{v} = \boldsymbol{\nabla}(\boldsymbol{\nabla} \bullet \mathbf{v}) - \boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{v})
  2. \mathbf{v} \bullet \boldsymbol{\nabla}\mathbf{v} = \tfrac{1}{2}\, \boldsymbol{\nabla}(\mathbf{v} \bullet \mathbf{v}) - \mathbf{v} \times (\boldsymbol{\nabla} \times \mathbf{v})
  3. \boldsymbol{\nabla} \bullet (\mathbf{v} \otimes \mathbf{w}) = \mathbf{v} \bullet \boldsymbol{\nabla}\mathbf{w} + \mathbf{w}\, (\boldsymbol{\nabla} \bullet \mathbf{v})
  4. \boldsymbol{\nabla} \bullet (\varphi \boldsymbol{A}) = \boldsymbol{\nabla}\varphi \bullet \boldsymbol{A} + \varphi\, \boldsymbol{\nabla} \bullet \boldsymbol{A}
  5. \boldsymbol{\nabla}(\mathbf{v} \bullet \mathbf{w}) = (\boldsymbol{\nabla}\mathbf{v}) \bullet \mathbf{w} + (\boldsymbol{\nabla}\mathbf{w}) \bullet \mathbf{v}
  6. \boldsymbol{\nabla} \bullet (\boldsymbol{A} \bullet \mathbf{w}) = (\boldsymbol{\nabla} \bullet \boldsymbol{A}) \bullet \mathbf{w} + \boldsymbol{A}^T : (\boldsymbol{\nabla}\mathbf{w})

Integral theorems


The following integral theorems are useful in continuum mechanics and finite elements.

The Gauss divergence theorem


If Ω is a region in space enclosed by a surface Γ and A is a tensor field, then

\int_\Omega \boldsymbol{\nabla} \bullet \boldsymbol{A}~dV = \int_\Gamma \mathbf{n} \bullet \boldsymbol{A}~dA

where n is the unit outward normal to the surface.
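The theorem can be verified by quadrature for a simple field. The sketch below (an illustration added here, with the hypothetical field A_ij(x) = x_i x_j on the unit cube; assumes NumPy) compares both sides, which each equal (2, 2, 2):

```python
import numpy as np

# Hypothetical field A_ij(x) = x_i x_j on [0,1]^3, so (div A)_j = 4 x_j
N = 40
t = (np.arange(N) + 0.5) / N           # midpoint-rule nodes on [0, 1]
w = 1.0 / N                            # weight per node

# Volume integral of (div A)_j = 4 x_j
X = np.meshgrid(t, t, t, indexing='ij')
vol = np.array([np.sum(4.0 * X[j]) * w**3 for j in range(3)])

# Surface integral of (n . A)_j = n_i A_ij; only the faces x_k = 1
# contribute, since A_kj = x_k x_j vanishes on the faces x_k = 0
U, V = np.meshgrid(t, t, indexing='ij')
surf = np.zeros(3)
for k in range(3):
    x = [None, None, None]
    x[k] = np.ones_like(U)             # on this face x_k = 1
    others = [i for i in range(3) if i != k]
    x[others[0]], x[others[1]] = U, V
    for j in range(3):
        surf[j] += np.sum(x[k] * x[j]) * w**2

assert np.allclose(vol, surf)
assert np.allclose(vol, [2.0, 2.0, 2.0])
```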

The Stokes curl theorem


If Γ is a surface bounded by a closed curve C, then

\int_\Gamma \mathbf{n} \bullet (\boldsymbol{\nabla} \times \boldsymbol{A})~dA = \oint_{\mathcal{C}} \mathbf{t} \bullet \boldsymbol{A}~ds

where A is a tensor field, n is the unit normal vector to Γ in the direction of a right-handed screw motion along C, and t is a unit tangential vector in the direction of integration along C.

The Leibniz formula


Let Ω be a closed moving region of space enclosed by a surface Γ, and let the velocity of any surface element be v. Then if A(x, t) is a tensor function of position and time,

\cfrac{\partial}{\partial t} \int_\Omega \boldsymbol{A}~dV = \int_\Omega \cfrac{\partial \boldsymbol{A}}{\partial t}~dV + \int_\Gamma \boldsymbol{A}\, (\mathbf{v} \bullet \mathbf{n})~dA

where n is the outward unit normal to the surface Γ.

Directional derivatives


We often have to find the derivatives of vectors with respect to vectors, and of tensors with respect to vectors and tensors. The directional derivative provides a systematic way of finding these derivatives.

The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.

Derivatives of scalar valued functions of vectors


Let f(v) be a real valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) in the direction u is the vector defined as

\frac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = Df(\mathbf{v})[\mathbf{u}] = \left[\frac{\partial}{\partial \alpha}~f(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}

for all vectors u.

Properties:

1) If f(v) = f_1(v) + f_2(v) then
\frac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = \left(\frac{\partial f_1}{\partial \mathbf{v}} + \frac{\partial f_2}{\partial \mathbf{v}}\right) \cdot \mathbf{u}

2) If f(v) = f_1(v)~f_2(v) then
\frac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = \left(\frac{\partial f_1}{\partial \mathbf{v}} \cdot \mathbf{u}\right) f_2(\mathbf{v}) + f_1(\mathbf{v}) \left(\frac{\partial f_2}{\partial \mathbf{v}} \cdot \mathbf{u}\right)

3) If f(v) = f_1(f_2(v)) then
\frac{\partial f}{\partial \mathbf{v}} \cdot \mathbf{u} = \frac{\partial f_1}{\partial f_2}~\frac{\partial f_2}{\partial \mathbf{v}} \cdot \mathbf{u}
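The defining formula is easy to evaluate numerically: replace the derivative in α by a difference quotient. A sketch added for illustration (assumes NumPy), using the hypothetical function f(v) = v · v, whose derivative is 2v:

```python
import numpy as np

def f(v):
    return v @ v                 # f(v) = v . v, so df/dv = 2v

def directional(f, v, u, h=1e-6):
    # Df(v)[u] = d/da f(v + a u) at a = 0, by central difference
    return (f(v + h * u) - f(v - h * u)) / (2 * h)

rng = np.random.default_rng(7)
v = rng.standard_normal(3)
u = rng.standard_normal(3)

# Directional derivative matches (df/dv) . u = 2v . u
assert np.isclose(directional(f, v, u), (2 * v) @ u)
```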

Derivatives of vector valued functions of vectors


Let f(v) be a vector valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) in the direction u is the second order tensor defined as

\frac{\partial \mathbf{f}}{\partial \mathbf{v}} \cdot \mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] = \left[\frac{\partial}{\partial \alpha}~\mathbf{f}(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}

for all vectors u.

Properties:

1) If f(v) = f_1(v) + f_2(v) then
\frac{\partial \mathbf{f}}{\partial \mathbf{v}} \cdot \mathbf{u} = \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}} + \frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\right) \cdot \mathbf{u}

2) If f(v) = f_1(v) × f_2(v) then
\frac{\partial \mathbf{f}}{\partial \mathbf{v}} \cdot \mathbf{u} = \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}} \cdot \mathbf{u}\right) \times \mathbf{f}_2(\mathbf{v}) + \mathbf{f}_1(\mathbf{v}) \times \left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}} \cdot \mathbf{u}\right)

3) If f(v) = f_1(f_2(v)) then
\frac{\partial \mathbf{f}}{\partial \mathbf{v}} \cdot \mathbf{u} = \frac{\partial \mathbf{f}_1}{\partial \mathbf{f}_2} \cdot \left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}} \cdot \mathbf{u}\right)

Derivatives of scalar valued functions of tensors

[edit |edit source]

Let f(S){\displaystyle f({\boldsymbol {S}})} be a real valued function of the second order tensor S{\displaystyle {\boldsymbol {S}}}. Then the derivative of f(S){\displaystyle f({\boldsymbol {S}})} with respect to S{\displaystyle {\boldsymbol {S}}} (or at S{\displaystyle {\boldsymbol {S}}}) in the direction T{\displaystyle {\boldsymbol {T}}} is the second order tensor defined as

fS:T=Df(S)[T]=[α f(S+α T)]α=0{\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=Df({\boldsymbol {S}})[{\boldsymbol {T}}]=\left[{\frac {\partial }{\partial \alpha }}~f({\boldsymbol {S}}+\alpha ~{\boldsymbol {T}})\right]_{\alpha =0}}

for all second order tensors T{\displaystyle {\boldsymbol {T}}}.

Properties:

1) If f(S)=f1(S)+f2(S){\displaystyle f({\boldsymbol {S}})=f_{1}({\boldsymbol {S}})+f_{2}({\boldsymbol {S}})} then fS:T=(f1S+f2S):T{\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial f_{1}}{\partial {\boldsymbol {S}}}}+{\frac {\partial f_{2}}{\partial {\boldsymbol {S}}}}\right):{\boldsymbol {T}}}

2) If f(S)=f1(S) f2(S){\displaystyle f({\boldsymbol {S}})=f_{1}({\boldsymbol {S}})~f_{2}({\boldsymbol {S}})} then fS:T=(f1S:T) f2(S)+f1(S) (f2S:T){\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial f_{1}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)~f_{2}({\boldsymbol {S}})+f_{1}({\boldsymbol {S}})~\left({\frac {\partial f_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}

3) If f(S)=f1(f2(S)){\displaystyle f({\boldsymbol {S}})=f_{1}(f_{2}({\boldsymbol {S}}))} then fS:T=f1f2 (f2S:T){\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}={\frac {\partial f_{1}}{\partial f_{2}}}~\left({\frac {\partial f_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}
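As a small numerical illustration (not from the original text; NumPy assumed), take the hypothetical scalar function f(S) = tr(S·S). Its derivative is ∂f/∂S = 2 Sᵀ, so the directional derivative ∂f/∂S : T equals 2 tr(S·T), which we can compare against the α-derivative from the definition:

```python
import numpy as np

# Hypothetical check: for f(S) = tr(S . S), df/dS = 2 S^T,
# so df/dS : T = 2 tr(S . T).
rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

def f(S):
    return np.trace(S @ S)

h = 1e-6
numeric = (f(S + h * T) - f(S - h * T)) / (2.0 * h)  # [d/d(alpha) f(S + alpha T)]_0
analytic = np.tensordot(2.0 * S.T, T)                # double contraction (2 S^T) : T

print(abs(numeric - analytic) < 1e-6)  # True
```

Since f is quadratic in S, the central difference here is exact up to floating-point roundoff.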

Derivatives of tensor valued functions of tensors

[edit |edit source]

Let F(S){\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})} be a second order tensor valued function of the second order tensor S{\displaystyle {\boldsymbol {S}}}. Then the derivative of F(S){\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})} with respect to S{\displaystyle {\boldsymbol {S}}} (or at S{\displaystyle {\boldsymbol {S}}}) in the direction T{\displaystyle {\boldsymbol {T}}} is the fourth order tensor defined as

FS:T=DF(S)[T]=[α F(S+α T)]α=0{\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=D{\boldsymbol {F}}({\boldsymbol {S}})[{\boldsymbol {T}}]=\left[{\frac {\partial }{\partial \alpha }}~{\boldsymbol {F}}({\boldsymbol {S}}+\alpha ~{\boldsymbol {T}})\right]_{\alpha =0}}

for all second order tensors T{\displaystyle {\boldsymbol {T}}}.

Properties:

1) If F(S)=F1(S)+F2(S){\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})={\boldsymbol {F}}_{1}({\boldsymbol {S}})+{\boldsymbol {F}}_{2}({\boldsymbol {S}})} then FS:T=(F1S+F2S):T{\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {S}}}}+{\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}\right):{\boldsymbol {T}}}

2) If F(S)=F1(S)F2(S){\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})={\boldsymbol {F}}_{1}({\boldsymbol {S}})\cdot {\boldsymbol {F}}_{2}({\boldsymbol {S}})} then FS:T=(F1S:T)F2(S)+F1(S)(F2S:T){\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {F}}_{2}({\boldsymbol {S}})+{\boldsymbol {F}}_{1}({\boldsymbol {S}})\cdot \left({\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}

3) If F(S)=F1(F2(S)){\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})={\boldsymbol {F}}_{1}({\boldsymbol {F}}_{2}({\boldsymbol {S}}))} then FS:T=F1F2:(F2S:T){\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}={\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {F}}_{2}}}:\left({\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}

4) If f(S)=f1(F2(S)){\displaystyle f({\boldsymbol {S}})=f_{1}({\boldsymbol {F}}_{2}({\boldsymbol {S}}))} then fS:T=f1F2:(F2S:T){\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}={\frac {\partial f_{1}}{\partial {\boldsymbol {F}}_{2}}}:\left({\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}

Derivative of the determinant of a tensor

[edit |edit source]


The derivative of the determinant of a second order tensor A{\displaystyle {\boldsymbol {A}}} is given by

Adet(A)=det(A) [A1]T .{\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\det({\boldsymbol {A}})=\det({\boldsymbol {A}})~[{\boldsymbol {A}}^{-1}]^{T}~.}

In an orthonormal basis the components of A{\displaystyle {\boldsymbol {A}}} can be written as a matrix A{\displaystyle \mathbf {A} }. In that case, the right hand side corresponds to the matrix of cofactors.

Proof:

Let A{\displaystyle {\boldsymbol {A}}} be a second order tensor and let f(A)=det(A){\displaystyle f({\boldsymbol {A}})=\det({\boldsymbol {A}})}. Then, from the definition of the derivative of a scalar valued function of a tensor, we have

fA:T=ddαdet(A+α T)|α=0=ddαdet[α A(1α 1+A1T)]|α=0=ddα[α3 det(A) det(1α 1+A1T)]|α=0 .{\displaystyle {\begin{aligned}{\frac {\partial f}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}&=\left.{\cfrac {d}{d\alpha }}\det({\boldsymbol {A}}+\alpha ~{\boldsymbol {T}})\right|_{\alpha =0}\\&=\left.{\cfrac {d}{d\alpha }}\det \left[\alpha ~{\boldsymbol {A}}\left({\cfrac {1}{\alpha }}~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)\right]\right|_{\alpha =0}\\&=\left.{\cfrac {d}{d\alpha }}\left[\alpha ^{3}~\det({\boldsymbol {A}})~\det \left({\cfrac {1}{\alpha }}~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)\right]\right|_{\alpha =0}~.\end{aligned}}}

Recall that we can expand the determinant of a tensor in the form of a characteristic equation in terms of the invariants I1,I2,I3{\displaystyle I_{1},I_{2},I_{3}} using (note the sign of λ{\displaystyle \lambda })

det(λ 1+A)=λ3+I1(A) λ2+I2(A) λ+I3(A) .{\displaystyle \det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})=\lambda ^{3}+I_{1}({\boldsymbol {A}})~\lambda ^{2}+I_{2}({\boldsymbol {A}})~\lambda +I_{3}({\boldsymbol {A}})~.}

Using this expansion we can write

fA:T=ddα[α3 det(A) (1α3+I1(A1T) 1α2+I2(A1T) 1α+I3(A1T))]|α=0=det(A) ddα[1+I1(A1T) α+I2(A1T) α2+I3(A1T) α3]|α=0=det(A) [I1(A1T)+2 I2(A1T) α+3 I3(A1T) α2]|α=0=det(A) I1(A1T) .{\displaystyle {\begin{aligned}{\frac {\partial f}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}&=\left.{\cfrac {d}{d\alpha }}\left[\alpha ^{3}~\det({\boldsymbol {A}})~\left({\cfrac {1}{\alpha ^{3}}}+I_{1}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~{\cfrac {1}{\alpha ^{2}}}+I_{2}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~{\cfrac {1}{\alpha }}+I_{3}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})\right)\right]\right|_{\alpha =0}\\&=\left.\det({\boldsymbol {A}})~{\cfrac {d}{d\alpha }}\left[1+I_{1}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~\alpha +I_{2}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~\alpha ^{2}+I_{3}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~\alpha ^{3}\right]\right|_{\alpha =0}\\&=\left.\det({\boldsymbol {A}})~\left[I_{1}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})+2~I_{2}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~\alpha +3~I_{3}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~\alpha ^{2}\right]\right|_{\alpha =0}\\&=\det({\boldsymbol {A}})~I_{1}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})~.\end{aligned}}}

Recall that the invariant I1{\displaystyle I_{1}} is given by

I1(A)=trA .{\displaystyle I_{1}({\boldsymbol {A}})={\text{tr}}{\boldsymbol {A}}~.}

Hence,

fA:T=det(A) tr(A1T)=det(A) [A1]T:T .{\displaystyle {\frac {\partial f}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}=\det({\boldsymbol {A}})~{\text{tr}}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})=\det({\boldsymbol {A}})~[{\boldsymbol {A}}^{-1}]^{T}:{\boldsymbol {T}}~.}

Invoking the arbitrariness of T{\displaystyle {\boldsymbol {T}}} we then have

fA=det(A) [A1]T .{\displaystyle {\frac {\partial f}{\partial {\boldsymbol {A}}}}=\det({\boldsymbol {A}})~[{\boldsymbol {A}}^{-1}]^{T}~.}
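The closed form can be verified numerically (a sanity check added here, not part of the proof; NumPy assumed): perturb each component of A in turn and compare the finite-difference derivative of the determinant with det(A) [A⁻¹]ᵀ.

```python
import numpy as np

# Componentwise finite-difference check of d(det A)/dA = det(A) [A^{-1}]^T.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # keep A well away from singular

analytic = np.linalg.det(A) * np.linalg.inv(A).T

h = 1e-6
numeric = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        dA = np.zeros((3, 3))
        dA[i, j] = h
        numeric[i, j] = (np.linalg.det(A + dA) - np.linalg.det(A - dA)) / (2.0 * h)

print(np.allclose(numeric, analytic, atol=1e-6))  # True
```

Each entry of the numerical derivative is the (i, j) cofactor of A, which is exactly what the formula predicts.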

Derivatives of the invariants of a tensor

[edit |edit source]


The principal invariants of a second order tensor are

I1(A)=trAI2(A)=12[(trA)2trA2]I3(A)=det(A){\displaystyle {\begin{aligned}I_{1}({\boldsymbol {A}})&={\text{tr}}{\boldsymbol {A}}\\I_{2}({\boldsymbol {A}})&={\frac {1}{2}}\left[({\text{tr}}{\boldsymbol {A}})^{2}-{\text{tr}}{{\boldsymbol {A}}^{2}}\right]\\I_{3}({\boldsymbol {A}})&=\det({\boldsymbol {A}})\end{aligned}}}

The derivatives of these three invariants with respect to A{\displaystyle {\boldsymbol {A}}} are

I1A=1I2A=I1 1ATI3A=det(A) [A1]T=I2 1AT (I1 1AT)=(A2I1 A+I2 1)T{\displaystyle {\begin{aligned}{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}&={\boldsymbol {\mathit {1}}}\\{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}&=I_{1}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\\{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}&=\det({\boldsymbol {A}})~[{\boldsymbol {A}}^{-1}]^{T}=I_{2}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}~(I_{1}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T})=({\boldsymbol {A}}^{2}-I_{1}~{\boldsymbol {A}}+I_{2}~{\boldsymbol {\mathit {1}}})^{T}\end{aligned}}}

Proof:

From the derivative of the determinant we know that

I3A=det(A) [A1]T .{\displaystyle {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}=\det({\boldsymbol {A}})~[{\boldsymbol {A}}^{-1}]^{T}~.}

For the derivatives of the other two invariants, let us go back to the characteristic equation

det(λ 1+A)=λ3+I1(A) λ2+I2(A) λ+I3(A) .{\displaystyle \det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})=\lambda ^{3}+I_{1}({\boldsymbol {A}})~\lambda ^{2}+I_{2}({\boldsymbol {A}})~\lambda +I_{3}({\boldsymbol {A}})~.}

Using the same approach as for the determinant of a tensor, we can show that

Adet(λ 1+A)=det(λ 1+A) [(λ 1+A)1]T .{\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})=\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})~[(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})^{-1}]^{T}~.}

Now the left hand side can be expanded as

Adet(λ 1+A)=A[λ3+I1(A) λ2+I2(A) λ+I3(A)]=I1A λ2+I2A λ+I3A .{\displaystyle {\begin{aligned}{\frac {\partial }{\partial {\boldsymbol {A}}}}\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})&={\frac {\partial }{\partial {\boldsymbol {A}}}}\left[\lambda ^{3}+I_{1}({\boldsymbol {A}})~\lambda ^{2}+I_{2}({\boldsymbol {A}})~\lambda +I_{3}({\boldsymbol {A}})\right]\\&={\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~.\end{aligned}}}

Hence

I1A λ2+I2A λ+I3A=det(λ 1+A) [(λ 1+A)1]T{\displaystyle {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}=\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})~[(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})^{-1}]^{T}}

or,

(λ 1+A)T[I1A λ2+I2A λ+I3A]=det(λ 1+A) 1 .{\displaystyle (\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})^{T}\cdot \left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\right]=\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})~{\boldsymbol {\mathit {1}}}~.}

Expanding the right hand side and separating terms on the left hand side gives

(λ 1+AT)[I1A λ2+I2A λ+I3A]=[λ3+I1 λ2+I2 λ+I3]1{\displaystyle (\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{T})\cdot \left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\right]=\left[\lambda ^{3}+I_{1}~\lambda ^{2}+I_{2}~\lambda +I_{3}\right]{\boldsymbol {\mathit {1}}}}

or,

[I1A λ3+I2A λ2+I3A λ]1+ATI1A λ2+ATI2A λ+ATI3A=[λ3+I1 λ2+I2 λ+I3]1 .{\displaystyle {\begin{aligned}\left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{3}\right.&\left.+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~\lambda \right]{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\\&=\left[\lambda ^{3}+I_{1}~\lambda ^{2}+I_{2}~\lambda +I_{3}\right]{\boldsymbol {\mathit {1}}}~.\end{aligned}}}

If we define I0:=1{\displaystyle I_{0}:=1} and I4:=0{\displaystyle I_{4}:=0}, we can write the above as

[I1A λ3+I2A λ2+I3A λ+I4A]1+ATI0A λ3+ATI1A λ2+ATI2A λ+ATI3A=[I0 λ3+I1 λ2+I2 λ+I3]1 .{\displaystyle {\begin{aligned}\left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{3}\right.&\left.+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{4}}{\partial {\boldsymbol {A}}}}\right]{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{0}}{\partial {\boldsymbol {A}}}}~\lambda ^{3}+{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\\&=\left[I_{0}~\lambda ^{3}+I_{1}~\lambda ^{2}+I_{2}~\lambda +I_{3}\right]{\boldsymbol {\mathit {1}}}~.\end{aligned}}}

Collecting terms containing various powers of λ{\displaystyle \lambda }, we get

λ3(I0 1I1A 1ATI0A)+λ2(I1 1I2A 1ATI1A)+λ(I2 1I3A 1ATI2A)+(I3 1I4A 1ATI3A)=0 .{\displaystyle {\begin{aligned}\lambda ^{3}&\left(I_{0}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{0}}{\partial {\boldsymbol {A}}}}\right)+\lambda ^{2}\left(I_{1}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}\right)+\\&\qquad \qquad \lambda \left(I_{2}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}\right)+\left(I_{3}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{4}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\right)=0~.\end{aligned}}}

Then, invoking the arbitrariness of λ{\displaystyle \lambda }, we have

{\displaystyle {\begin{aligned}I_{0}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{0}}{\partial {\boldsymbol {A}}}}&=0\\I_{1}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}&=0\\I_{2}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}&=0\\I_{3}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{4}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}&=0~.\end{aligned}}}

This implies that

I1A=1I2A=I1 1ATI3A=I2 1AT (I1 1AT)=(A2I1 A+I2 1)T{\displaystyle {\begin{aligned}{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}&={\boldsymbol {\mathit {1}}}\\{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}&=I_{1}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}\\{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}&=I_{2}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T}~(I_{1}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{T})=({\boldsymbol {A}}^{2}-I_{1}~{\boldsymbol {A}}+I_{2}~{\boldsymbol {\mathit {1}}})^{T}\end{aligned}}}
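These formulas are easy to spot-check numerically (an added sketch, not part of the original derivation; NumPy assumed). For example, with I₂(A) = ½[(tr A)² − tr A²], a componentwise finite difference should reproduce I₁ 1 − Aᵀ:

```python
import numpy as np

# Finite-difference check of dI2/dA = I1 1 - A^T,
# where I2(A) = ((tr A)^2 - tr(A^2)) / 2.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

def I2(A):
    return 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))

analytic = np.trace(A) * np.eye(3) - A.T

h = 1e-6
numeric = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        dA = np.zeros((3, 3))
        dA[i, j] = h
        numeric[i, j] = (I2(A + dA) - I2(A - dA)) / (2.0 * h)

print(np.allclose(numeric, analytic, atol=1e-6))  # True
```

The same loop with I₁ = tr A returns the identity, and with I₃ = det A it returns the cofactor matrix, matching the other two formulas.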

Derivative of the identity tensor

[edit |edit source]

Let 1{\displaystyle {\boldsymbol {\mathit {1}}}} be the second order identity tensor. Then the derivative of this tensor with respect to a second order tensor A{\displaystyle {\boldsymbol {A}}} is given by

1A:T=0:T=0{\displaystyle {\frac {\partial {\boldsymbol {\mathit {1}}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}={\boldsymbol {\mathsf {0}}}:{\boldsymbol {T}}={\boldsymbol {\mathit {0}}}}

This is because 1{\displaystyle {\boldsymbol {\mathit {1}}}} is independent of A{\displaystyle {\boldsymbol {A}}}.

Derivative of a tensor with respect to itself

[edit |edit source]

Let A{\displaystyle {\boldsymbol {A}}} be a second order tensor. Then

AA:T=[α(A+α T)]α=0=T=I:T{\displaystyle {\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}=\left[{\frac {\partial }{\partial \alpha }}({\boldsymbol {A}}+\alpha ~{\boldsymbol {T}})\right]_{\alpha =0}={\boldsymbol {T}}={\boldsymbol {\mathsf {I}}}:{\boldsymbol {T}}}

Therefore,

AA=I{\displaystyle {\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}={\boldsymbol {\mathsf {I}}}}

Here I{\displaystyle {\boldsymbol {\mathsf {I}}}} is the fourth order identity tensor. In index notation with respect to an orthonormal basis

I=δik δjl eiejekel{\displaystyle {\boldsymbol {\mathsf {I}}}=\delta _{ik}~\delta _{jl}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{l}}

This result implies that

ATA:T=IT:T=TT{\displaystyle {\frac {\partial {\boldsymbol {A}}^{T}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}={\boldsymbol {\mathsf {I}}}^{T}:{\boldsymbol {T}}={\boldsymbol {T}}^{T}}

where

IT=δjk δil eiejekel{\displaystyle {\boldsymbol {\mathsf {I}}}^{T}=\delta _{jk}~\delta _{il}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{l}}

Therefore, if the tensor A{\displaystyle {\boldsymbol {A}}} is symmetric, then the derivative is also symmetric and we get

AA=12(A+AT)A=12 (I+IT)=I(s){\displaystyle {\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}={\frac {\partial {\frac {1}{2}}({\boldsymbol {A}}+{\boldsymbol {A}}^{T})}{\partial {\boldsymbol {A}}}}={\frac {1}{2}}~({\boldsymbol {\mathsf {I}}}+{\boldsymbol {\mathsf {I}}}^{T})={\boldsymbol {\mathsf {I}}}^{(s)}}

where the symmetric fourth order identity tensor is

I(s)=12 (δik δjl+δil δjk) eiejekel{\displaystyle {\boldsymbol {\mathsf {I}}}^{(s)}={\frac {1}{2}}~(\delta _{ik}~\delta _{jl}+\delta _{il}~\delta _{jk})~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{l}}
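The three fourth-order identity tensors can be assembled directly from their index formulas, which makes their action on an arbitrary second order tensor concrete. This sketch (added here for illustration; NumPy assumed) builds them with einsum from Kronecker deltas:

```python
import numpy as np

d = np.eye(3)                          # Kronecker delta d_ij

I = np.einsum('ik,jl->ijkl', d, d)     # I_ijkl     = d_ik d_jl
IT = np.einsum('il,jk->ijkl', d, d)    # (I^T)_ijkl = d_il d_jk
Isym = 0.5 * (I + IT)                  # symmetric identity I^(s)

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))

# Double contraction X : T = X_ijkl T_kl
print(np.allclose(np.einsum('ijkl,kl->ij', I, T), T))                 # I : T    = T
print(np.allclose(np.einsum('ijkl,kl->ij', IT, T), T.T))              # I^T : T  = T^T
print(np.allclose(np.einsum('ijkl,kl->ij', Isym, T), 0.5 * (T + T.T)))  # sym(T)
```

As the text states, I returns T itself, Iᵀ returns its transpose, and I⁽ˢ⁾ projects onto the symmetric part of T.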

Derivative of the inverse of a tensor

[edit |edit source]


Let A{\displaystyle {\boldsymbol {A}}} be an invertible second order tensor and let T{\displaystyle {\boldsymbol {T}}} be an arbitrary second order tensor. Then

A(A1):T=A1TA1{\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\left({\boldsymbol {A}}^{-1}\right):{\boldsymbol {T}}=-{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\cdot {\boldsymbol {A}}^{-1}}

In index notation with respect to an orthonormal basis

Aij1Akl Tkl=Aik1 Tkl Alj1Aij1Akl=Aik1 Alj1{\displaystyle {\frac {\partial A_{ij}^{-1}}{\partial A_{kl}}}~T_{kl}=-A_{ik}^{-1}~T_{kl}~A_{lj}^{-1}\implies {\frac {\partial A_{ij}^{-1}}{\partial A_{kl}}}=-A_{ik}^{-1}~A_{lj}^{-1}}

We also have

A(AT):T=ATTAT{\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\left({\boldsymbol {A}}^{-T}\right):{\boldsymbol {T}}=-{\boldsymbol {A}}^{-T}\cdot {\boldsymbol {T}}\cdot {\boldsymbol {A}}^{-T}}

In index notation

Aji1Akl Tkl=Ajk1 Tkl Ali1Aji1Akl=Ali1 Ajk1{\displaystyle {\frac {\partial A_{ji}^{-1}}{\partial A_{kl}}}~T_{kl}=-A_{jk}^{-1}~T_{kl}~A_{li}^{-1}\implies {\frac {\partial A_{ji}^{-1}}{\partial A_{kl}}}=-A_{li}^{-1}~A_{jk}^{-1}}

If the tensor A{\displaystyle {\boldsymbol {A}}} is symmetric then

Aij1Akl=12(Aik1 Ajl1+Ail1 Ajk1){\displaystyle {\frac {\partial A_{ij}^{-1}}{\partial A_{kl}}}=-{\cfrac {1}{2}}\left(A_{ik}^{-1}~A_{jl}^{-1}+A_{il}^{-1}~A_{jk}^{-1}\right)}

Proof:

Recall that

1A:T=0{\displaystyle {\frac {\partial {\boldsymbol {\mathit {1}}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}={\boldsymbol {\mathit {0}}}}

Since A1A=1{\displaystyle {\boldsymbol {A}}^{-1}\cdot {\boldsymbol {A}}={\boldsymbol {\mathit {1}}}}, we can write

A(A1A):T=0{\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {A}}):{\boldsymbol {T}}={\boldsymbol {\mathit {0}}}}

Using the product rule for second order tensors

S[F1(S)F2(S)]:T=(F1S:T)F2+F1(F2S:T){\displaystyle {\frac {\partial }{\partial {\boldsymbol {S}}}}[{\boldsymbol {F}}_{1}({\boldsymbol {S}})\cdot {\boldsymbol {F}}_{2}({\boldsymbol {S}})]:{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {F}}_{2}+{\boldsymbol {F}}_{1}\cdot \left({\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}

we get

A(A1A):T=(A1A:T)A+A1(AA:T)=0{\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {A}}):{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {A}}^{-1}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {A}}+{\boldsymbol {A}}^{-1}\cdot \left({\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}\right)={\boldsymbol {\mathit {0}}}}

or,

(A1A:T)A=A1T{\displaystyle \left({\frac {\partial {\boldsymbol {A}}^{-1}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {A}}=-{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}}

Therefore,

A(A1):T=A1TA1{\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\left({\boldsymbol {A}}^{-1}\right):{\boldsymbol {T}}=-{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\cdot {\boldsymbol {A}}^{-1}}
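This result is also straightforward to confirm numerically (an added sanity check, not part of the proof; NumPy assumed): differentiate the matrix inverse along a direction T with a central difference and compare with −A⁻¹·T·A⁻¹.

```python
import numpy as np

# Finite-difference check of d(A^{-1})/dA : T = -A^{-1} . T . A^{-1}.
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # well-conditioned, invertible
T = rng.standard_normal((3, 3))

inv = np.linalg.inv
h = 1e-6
numeric = (inv(A + h * T) - inv(A - h * T)) / (2.0 * h)
analytic = -inv(A) @ T @ inv(A)

print(np.allclose(numeric, analytic, atol=1e-6))  # True
```
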

Remarks

[edit |edit source]

The boldface notation used on this page is called Gibbs notation. The index notation used here is also called Cartesian tensor notation.
