Linear algebra


Linear algebra is the branch of mathematics concerning linear equations such as

a_{1}x_{1}+\cdots +a_{n}x_{n}=b,

linear maps such as

(x_{1},\ldots ,x_{n})\mapsto a_{1}x_{1}+\cdots +a_{n}x_{n},

and their representations in vector spaces and through matrices.[1][2][3]

In three-dimensional Euclidean space, three planes represent solutions to linear equations, and their intersection represents the set of common solutions: in this case, a unique point. The blue line is the common solution to two of these equations.

Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to function spaces.

Linear algebra is also used in most sciences and fields of engineering because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point.
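As a concrete illustration of that last point, here is a minimal numerical sketch (the function and the point are made up for this example): the differential of a nonlinear map at a point is its Jacobian matrix, and applying that matrix to a small displacement reproduces the map up to a much smaller error.

```python
import numpy as np

# Illustrative nonlinear function R^2 -> R^2 (chosen only for this sketch).
def f(v):
    x, y = v
    return np.array([x * y + np.sin(x), x**2 + y])

# Its differential at a point: the Jacobian matrix, written out by hand.
def jacobian(v):
    x, y = v
    return np.array([[y + np.cos(x), x],
                     [2 * x,         1.0]])

p = np.array([1.0, 2.0])
h = np.array([1e-3, -2e-3])              # a small displacement

exact  = f(p + h)
linear = f(p) + jacobian(p) @ h          # first-order (linear) approximation
print(np.linalg.norm(exact - linear))    # error is of order |h|^2, i.e. tiny
```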

History

See also: Determinant § History, and Gaussian elimination § History

The procedure (using counting rods) for solving simultaneous linear equations now called Gaussian elimination appears in the ancient Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations.[4]

Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations.

The first systematic methods for solving linear systems used determinants and were first considered by Leibniz in 1693. In 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further described the method of elimination, which was initially listed as an advancement in geodesy.[5]

In 1844 Hermann Grassmann published his "Theory of Extension" which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb.

Linear algebra grew with ideas noted in the complex plane. For instance, two numbers w and z in ℂ have a difference w − z, and the line segments wz and 0(w − z) are of the same length and direction. The segments are equipollent. The four-dimensional system ℍ of quaternions was discovered by W. R. Hamilton in 1843.[6] The term vector was introduced as v = xi + yj + zk representing a point in space. The quaternion difference p − q also produces a segment equipollent to pq. Other hypercomplex number systems also used the idea of a linear space with a basis.

Arthur Cayley introduced matrix multiplication and the inverse matrix in 1856, making possible the general linear group. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".[5]

Benjamin Peirce published his Linear Associative Algebra (1872), and his son Charles Sanders Peirce extended the work later.[7]

The telegraph required an explanatory system, and the 1873 publication by James Clerk Maxwell of A Treatise on Electricity and Magnetism instituted a field theory of forces and required differential geometry for expression. Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. Electromagnetic symmetries of spacetime are expressed by the Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations.

The first modern and more precise definition of a vector space was introduced by Peano in 1888;[5] by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century when many ideas and methods of previous centuries were generalized as abstract algebra. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modeling and simulations.[5]

Vector spaces

Main article: Vector space

Until the 19th century, linear algebra was introduced through systems of linear equations and matrices. In modern mathematics, the presentation through vector spaces is generally preferred, since it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, although more abstract.

A vector space over a field F (often the field of the real numbers or of the complex numbers) is a set V equipped with two binary operations. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The axioms that addition and scalar multiplication must satisfy are the following. (In the list below, u, v and w are arbitrary elements of V, and a and b are arbitrary scalars in the field F.)[8]

  • Associativity of addition: u + (v + w) = (u + v) + w
  • Commutativity of addition: u + v = v + u
  • Identity element of addition: there exists an element 0 in V, called the zero vector (or simply zero), such that v + 0 = v for all v in V.
  • Inverse elements of addition: for every v in V, there exists an element −v in V, called the additive inverse of v, such that v + (−v) = 0.
  • Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
  • Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
  • Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v[a]
  • Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity of F.

The first four axioms mean that V is an abelian group under addition.

The elements of a specific vector space may have various natures; for example, they could be tuples, sequences, functions, polynomials, or matrices. Linear algebra is concerned with the properties of such objects that are common to all vector spaces.

Linear maps

Main article: Linear map

Linear maps are mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear map (also called, in some contexts, linear transformation or linear mapping) is a map

T : V \to W

that is compatible with addition and scalar multiplication, that is

T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}), \quad T(a\mathbf{v}) = aT(\mathbf{v})

for any vectors u, v in V and scalar a in F.

An equivalent condition is that for any vectors u, v in V and scalars a, b in F, one has

T(a\mathbf{u} + b\mathbf{v}) = aT(\mathbf{u}) + bT(\mathbf{v}).

When V = W are the same vector space, a linear map T : V → V is also known as a linear operator on V.

A bijective linear map between two vector spaces (that is, every vector from the second space is associated with exactly one in the first) is an isomorphism. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view, in the sense that they cannot be distinguished by using vector space properties. An essential question in linear algebra is testing whether a linear map is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm.
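As a hedged numerical illustration (the matrix below is made up, and numerical libraries typically use orthogonal factorizations such as the SVD rather than literal Gaussian elimination), one can compute the dimension of the image and a basis of the kernel, and thereby see that this particular map is not an isomorphism:

```python
import numpy as np

# A linear map R^3 -> R^3 given by a deliberately rank-deficient matrix.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

rank = np.linalg.matrix_rank(A)      # dimension of the image (range)
nullity = A.shape[1] - rank          # dimension of the kernel (rank-nullity)

# A basis of the kernel from the SVD: right singular vectors associated
# with (numerically) zero singular values.
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[rank:].T

print(rank, nullity)                     # 2 1 -> not an isomorphism
print(np.allclose(A @ kernel_basis, 0))  # True: these vectors map to zero
```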

Subspaces, span, and basis

Main articles: Linear subspace, Linear span, and Basis (linear algebra)

The study of those subsets of vector spaces that are in themselves vector spaces under the induced operations is fundamental, similarly as for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space V over a field F is a subset W of V such that u + v and au are in W, for every u, v in W, and every a in F. (These conditions suffice for implying that W is a vector space.)

For example, given a linear map T : V → W, the image T(V) of V, and the inverse image T^{-1}(0) of 0 (called kernel or null space), are linear subspaces of W and V, respectively.

Another important way of forming a subspace is to consider linear combinations of a set S of vectors: the set of all sums

a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k,

where v1, v2, ..., vk are in S, and a1, a2, ..., ak are in F, form a linear subspace called the span of S. The span of S is also the intersection of all linear subspaces containing S. In other words, it is the smallest (for the inclusion relation) linear subspace containing S.

A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set S of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of S is to take zero for every coefficient ai.

A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set S is linearly dependent (that is, not linearly independent), then some element w of S is in the span of the other elements of S, and the span would remain the same if one were to remove w from S. One may continue to remove elements of S until getting a linearly independent spanning set. Such a linearly independent set that spans a vector space V is called a basis of V. The importance of bases lies in the fact that they are simultaneously minimal generating sets and maximal independent sets. More precisely, if S is a linearly independent set, and T is a spanning set such that S ⊆ T, then there is a basis B such that S ⊆ B ⊆ T.
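The removal argument above can be mimicked numerically. The sketch below (vectors chosen arbitrarily for illustration) keeps a vector only when it increases the rank, which is one simple way to extract a basis from a spanning set:

```python
import numpy as np

def extract_basis(vectors):
    """Keep each vector only if it is not in the span of those already kept
    (illustrative sketch; rank tests are done in floating point)."""
    basis = []
    for v in vectors:
        candidate = np.array(basis + [v]).T
        if np.linalg.matrix_rank(candidate) > len(basis):
            basis.append(v)
    return basis

S = [np.array([1., 0., 0.]),
     np.array([2., 0., 0.]),      # in the span of the first vector
     np.array([0., 1., 0.]),
     np.array([1., 1., 0.]),      # in the span of the previous two
     np.array([0., 0., 5.])]

B = extract_basis(S)
print(len(B))   # 3: a linearly independent spanning set, i.e. a basis of R^3
```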

Any two bases of a vector space V have the same cardinality, which is called the dimension of V; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field F are isomorphic if and only if they have the same dimension.[9]

If any basis of V (and therefore every basis) has a finite number of elements, V is a finite-dimensional vector space. If U is a subspace of V, then dim U ≤ dim V. In the case where V is finite-dimensional, the equality of the dimensions implies U = V.

If U1 and U2 are subspaces of V, then

\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2),

where U1 + U2 denotes the span of U1 ∪ U2.[10]
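A small numerical check of this formula, with two planes through the origin in R^3 (the bases are chosen for illustration; each subspace is stored as matrix columns and dimensions are computed as ranks):

```python
import numpy as np

U1 = np.array([[1., 0.],
               [0., 1.],
               [0., 0.]])     # the xy-plane, basis as columns
U2 = np.array([[1., 0.],
               [0., 0.],
               [0., 1.]])     # the xz-plane, basis as columns

dim_U1  = np.linalg.matrix_rank(U1)                   # 2
dim_U2  = np.linalg.matrix_rank(U2)                   # 2
dim_sum = np.linalg.matrix_rank(np.hstack([U1, U2]))  # dim(U1 + U2) = 3

# The formula then gives dim(U1 ∩ U2) = 2 + 2 - 3 = 1, matching the
# geometric fact that the two planes meet in the x-axis.
print(dim_U1 + dim_U2 - dim_sum)   # 1
```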

Matrices

Main article: Matrix (mathematics)

Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is thus an essential part of linear algebra.

Let V be a finite-dimensional vector space over a field F, and (v1, v2, ..., vm) be a basis of V (thus m is the dimension of V). By definition of a basis, the map

F^m \to V, \quad (a_1, \ldots, a_m) \mapsto a_1\mathbf{v}_1 + \cdots + a_m\mathbf{v}_m

is a bijection from F^m, the set of the sequences of m elements of F, onto V. This is an isomorphism of vector spaces, if F^m is equipped with its standard structure of vector space, where vector addition and scalar multiplication are done component by component.

This isomorphism allows representing a vector by its inverse image under this isomorphism, that is by the coordinate vector (a1, ..., am) or by the column matrix

\begin{bmatrix} a_1 \\ \vdots \\ a_m \end{bmatrix}.

If W is another finite-dimensional vector space (possibly the same), with a basis (w1, ..., wn), a linear map f from W to V is well defined by its values on the basis elements, that is (f(w1), ..., f(wn)). Thus, f is well represented by the list of the corresponding column matrices. That is, if

f(w_j) = a_{1,j}v_1 + \cdots + a_{m,j}v_m,

for j = 1, ..., n, then f is represented by the matrix

\begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix},

with m rows and n columns.

Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing the same concepts.
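A quick numerical check of the first statement (random matrices, chosen only for illustration): applying the two maps one after the other gives the same result as applying the single map whose matrix is the product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # matrix of a map g : R^3 -> R^2
B = rng.standard_normal((3, 4))   # matrix of a map f : R^4 -> R^3
x = rng.standard_normal(4)

# g(f(x)) computed in two steps equals the composed map applied once.
print(np.allclose(A @ (B @ x), (A @ B) @ x))   # True
```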

Two matrices that encode the same linear transformation in different bases are called similar. It can be proved that two matrices are similar if and only if one can transform one into the other by elementary row and column operations. For a matrix representing a linear map from W to V, the row operations correspond to change of bases in V and the column operations correspond to change of bases in W. Every matrix is similar to an identity matrix possibly bordered by zero rows and zero columns. In terms of vector spaces, this means that, for any linear map from W to V, there are bases such that a part of the basis of W is mapped bijectively on a part of the basis of V, and that the remaining basis elements of W, if any, are mapped to zero. Gaussian elimination is the basic algorithm for finding these elementary operations, and proving these results.

Linear systems

Main article: System of linear equations

A finite set of linear equations in a finite set of variables, for example, x1, x2, ..., xn, or x, y, ..., z, is called a system of linear equations or a linear system.[11][12][13][14][15]

Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory have been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems.

For example, let

\begin{aligned} 2x + y - z &= 8 \\ -3x - y + 2z &= -11 \\ -2x + y + 2z &= -3 \end{aligned} \qquad (S)

be a linear system.

To such a system, one may associate its matrix

M = \begin{bmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{bmatrix}

and its right-hand side vector

\mathbf{v} = \begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix}.

Let T be the linear transformation associated with the matrix M. A solution of the system (S) is a vector

\mathbf{X} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}

such that

T(\mathbf{X}) = \mathbf{v},

that is, an element of the preimage of v by T.

Let (S′) be the associated homogeneous system, where the right-hand sides of the equations are set to zero:

\begin{aligned} 2x + y - z &= 0 \\ -3x - y + 2z &= 0 \\ -2x + y + 2z &= 0 \end{aligned} \qquad (S′)

The solutions of (S′) are exactly the elements of the kernel of T or, equivalently, M.

Gaussian elimination consists of performing elementary row operations on the augmented matrix

[M \mid \mathbf{v}] = \left[\begin{array}{rrr|r} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array}\right]

in order to put it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. In the example, the reduced echelon form is

\left[\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{array}\right],

showing that the system (S) has the unique solution

x = 2, \quad y = 3, \quad z = -1.
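For readers who want to reproduce this example numerically, here is a minimal sketch using NumPy (np.linalg.solve uses an LU factorization, i.e. a form of Gaussian elimination, rather than building the inverse explicitly):

```python
import numpy as np

M = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
v = np.array([8., -11., -3.])

x = np.linalg.solve(M, v)   # solves M X = v
print(x)                    # [ 2.  3. -1.], the unique solution found above
```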

More generally, a system of m linear equations in n variables can be written as

A\mathbf{x} = \mathbf{b},

where

A = (a_{ij})_{m \times n}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}.

If m = n and the matrix A is invertible, then the system has the unique solution \mathbf{x} = A^{-1}\mathbf{b}.

It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, including the computation of ranks, kernels, and matrix inverses.

Endomorphisms and square matrices

Main article: Square matrix

A linear endomorphism is a linear map that maps a vector space V to itself. If V has a basis of n elements, such an endomorphism is represented by a square matrix of size n.

Compared with general linear maps, linear endomorphisms and square matrices have some specific properties that make their study an important part of linear algebra, which is used in many parts of mathematics, including geometric transformations, coordinate changes, and quadratic forms.

Determinant

Main article: Determinant

The determinant of a square matrix A is defined to be[16]

\sum_{\sigma \in S_n} (-1)^{\sigma} a_{1\sigma(1)} \cdots a_{n\sigma(n)},

where S_n is the group of all permutations of n elements, σ is a permutation, and (−1)^σ the parity of the permutation. A matrix is invertible if and only if the determinant is invertible (i.e., nonzero if the scalars belong to a field).
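This Leibniz formula can be turned directly into (very inefficient) code, which is one way to see why elimination-based methods are preferred in practice. A minimal sketch (the test matrix is the one from the linear-system example above):

```python
import numpy as np
from itertools import permutations

def parity(sigma):
    """(-1)^sigma, computed by counting inversions of the permutation."""
    inversions = sum(1 for i in range(len(sigma))
                     for j in range(i + 1, len(sigma))
                     if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    """Sum over all n! permutations; only sensible for very small n."""
    n = A.shape[0]
    return sum(parity(s) * np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

A = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
print(leibniz_det(A), np.linalg.det(A))   # both give -1 (up to rounding)
```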

Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but, except for n = 2 or 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm.
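As a sketch of how Cramer's rule reads in code (illustrative only; as noted, elimination is preferred beyond very small systems), each unknown is a ratio of two determinants:

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with its
    i-th column replaced by b. Practical only for very small systems."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])
b = np.array([8., -11., -3.])
print(cramer_solve(A, b))       # [ 2.  3. -1.]
print(np.linalg.solve(A, b))    # same result via elimination
```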

The determinant of an endomorphism is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense since this determinant is independent of the choice of the basis.

Eigenvalues and eigenvectors

Main article: Eigenvalues and eigenvectors

If f is a linear endomorphism of a vector space V over a field F, an eigenvector of f is a nonzero vector v of V such that f(v) = av for some scalar a in F. This scalar a is an eigenvalue of f.

If the dimension of V is finite, and a basis has been chosen, f and v may be represented, respectively, by a square matrix M and a column matrix z; the equation defining eigenvectors and eigenvalues becomes

Mz = az.

Using the identity matrix I, whose entries are all zero, except those of the main diagonal, which are equal to one, this may be rewritten

(M - aI)z = 0.

As z is supposed to be nonzero, this means that M − aI is a singular matrix, and thus that its determinant det(M − aI) equals zero. The eigenvalues are thus the roots of the polynomial

\det(xI - M).

If V is of dimension n, this is a monic polynomial of degree n, called the characteristic polynomial of the matrix (or of the endomorphism), and there are, at most, n eigenvalues.

If a basis exists that consists only of eigenvectors, the matrix of f on this basis has a very simple structure: it is a diagonal matrix such that the entries on the main diagonal are eigenvalues, and the other entries are zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said to be diagonalizable if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable.
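A small numerical illustration (the symmetric matrix below is chosen for the example; the eigenvalue order returned by the library is not guaranteed): in the eigenvector basis the endomorphism becomes diagonal.

```python
import numpy as np

M = np.array([[2., 1.],
              [1., 2.]])            # a real symmetric matrix, hence diagonalizable

eigenvalues, P = np.linalg.eig(M)   # columns of P are eigenvectors
D = np.diag(eigenvalues)

print(eigenvalues)                                         # 3 and 1 (in some order)
print(np.allclose(M @ P[:, 0], eigenvalues[0] * P[:, 0]))  # M v = a v
print(np.allclose(M, P @ D @ np.linalg.inv(P)))            # M = P D P^{-1}
```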

A real symmetric matrix is always diagonalizable. There are non-diagonalizable matrices, the simplest being

\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}

(it cannot be diagonalizable since its square is the zero matrix, and the square of a nonzero diagonal matrix is never zero).

When an endomorphism is not diagonalizable, there are bases in which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not need to extend the field of scalars and makes the characteristic polynomial immediately readable on the matrix. The Jordan normal form requires extending the field of scalars so that it contains all eigenvalues, and differs from the diagonal form only by some entries that are just above the main diagonal and are equal to 1.

Duality

Main article: Dual space

A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted V*[17] or V′.[18][19]

If v1, ..., vn is a basis of V (this implies that V is finite-dimensional), then one can define, for i = 1, ..., n, a linear map vi* such that vi*(vi) = 1 and vi*(vj) = 0 if j ≠ i. These linear maps form a basis of V*, called the dual basis of v1, ..., vn. (If V is not finite-dimensional, the vi* may be defined similarly; they are linearly independent, but do not form a basis.)
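In coordinates this is easy to make explicit. If the basis vectors are stored as the columns of an invertible matrix B, then the coordinate rows of the dual basis are the rows of B^{-1}, since B^{-1}B = I encodes exactly vi*(vj) = 1 if i = j and 0 otherwise. A minimal sketch (basis chosen arbitrarily):

```python
import numpy as np

# A basis of R^3, stored as the columns of B (chosen only for illustration).
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])

dual = np.linalg.inv(B)   # row i holds the coordinates of the dual form v_i*

# Applying row i to column j gives 1 if i = j and 0 otherwise.
print(np.allclose(dual @ B, np.eye(3)))   # True
```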

For v in V, the map

f \mapsto f(\mathbf{v})

is a linear form on V*. This defines the canonical linear map from V into (V*)*, the dual of V*, called the double dual or bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows identifying V with its bidual. (In the infinite-dimensional case, the canonical map is injective, but not surjective.)

There is thus a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra–ket notation

\langle f, \mathbf{x} \rangle

for denoting f(x).

Dual map

Main article: Transpose of a linear map

Let

f : V \to W

be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map

f^* : W^* \to V^*

between the dual spaces, which is called the dual or the transpose of f.

If V and W are finite-dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of f* over the dual bases is the transpose M^T of M, obtained by exchanging rows and columns.
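A quick numerical check of this statement (random data, for illustration only): evaluating a linear form h on Mv gives the same number as evaluating the pulled-back form M^T h on v.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 2))   # matrix of f : V -> W, dim V = 2, dim W = 3
v = rng.standard_normal(2)        # a vector of V
h = rng.standard_normal(3)        # coordinates of a linear form on W

# h(f(v)) computed either as h^T (M v) or as (M^T h)^T v.
print(np.allclose(h @ (M @ v), (M.T @ h) @ v))   # True
```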

If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by

\langle h^{\mathsf{T}}, M\mathbf{v} \rangle = \langle h^{\mathsf{T}}M, \mathbf{v} \rangle.

To highlight this symmetry, the two sides of this equality are sometimes written

\langle h^{\mathsf{T}} \mid M \mid \mathbf{v} \rangle.

Inner-product spaces

Main article: Inner product space

Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map

\langle \cdot, \cdot \rangle : V \times V \to F

that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:[20][21]

  • Conjugate symmetry: ⟨u, v⟩ equals the complex conjugate of ⟨v, u⟩; in ℝ, it is symmetric.
  • Linearity in the first argument: ⟨au, v⟩ = a⟨u, v⟩ and ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩.
  • Positive-definiteness: ⟨v, v⟩ ≥ 0, with equality only for v = 0.

We can define the length of a vector v in V by

\|\mathbf{v}\|^2 = \langle \mathbf{v}, \mathbf{v} \rangle,

and we can prove the Cauchy–Schwarz inequality:

|\langle \mathbf{u}, \mathbf{v} \rangle| \leq \|\mathbf{u}\| \cdot \|\mathbf{v}\|.

In particular, the quantity

\frac{|\langle \mathbf{u}, \mathbf{v} \rangle|}{\|\mathbf{u}\| \cdot \|\mathbf{v}\|} \leq 1,

and so we can call this quantity the cosine of the angle between the two vectors.

Two vectors are orthogonal if ⟨u, v⟩ = 0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if v = a1v1 + ⋯ + anvn, then

a_i = \langle \mathbf{v}, \mathbf{v}_i \rangle.
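A minimal sketch of both points (classical Gram–Schmidt on three arbitrarily chosen vectors; a numerically robust implementation would use modified Gram–Schmidt or a QR factorization):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthogonalize against the vectors kept so far,
    then normalize (illustrative sketch, not numerically robust)."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, q) * q for q in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

vectors = [np.array([1., 1., 0.]),
           np.array([1., 0., 1.]),
           np.array([0., 1., 1.])]
q1, q2, q3 = gram_schmidt(vectors)

# With an orthonormal basis, coordinates are inner products: a_i = <v, v_i>.
v = 2.0 * q1 - 1.0 * q2 + 0.5 * q3
print([round(float(np.dot(v, q)), 10) for q in (q1, q2, q3)])   # [2.0, -1.0, 0.5]
```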

The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying

\langle T\mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{u}, T^{*}\mathbf{v} \rangle.

If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.

Relationship with geometry


There is a strong relationship between linear algebra and geometry, which started with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at that time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of the usual three-dimensional space). The basic objects of geometry, which are lines and planes, are represented by linear equations. Thus, computing intersections of lines and planes amounts to solving systems of linear equations. This was one of the main motivations for developing linear algebra.

Most geometric transformations, such as translations, rotations, reflections, rigid motions, isometries, and projections, transform lines into lines. It follows that they can be defined, specified, and studied in terms of linear maps. This is also the case of homographies and Möbius transformations when considered as transformations of a projective space.

Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines, and planes (synthetic geometry). Around this date, it appeared that one may also define geometric spaces by constructions involving vector spaces (see, for example, Projective space and Affine space). It has been shown that the two approaches are essentially equivalent.[22] In classical geometry, the involved vector spaces are vector spaces over the reals, but the constructions may be extended to vector spaces over any field, allowing considering geometry over arbitrary fields, including finite fields.

Presently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at the elementary level, as a subfield of linear algebra.

Usage and applications


Linear algebra is used in almost all areas of mathematics, thus making it relevant in almost all scientific domains that use mathematics. These applications may be divided into several wide categories.

Functional analysis


Functional analysis studies function spaces. These are vector spaces with additional structure, such as Hilbert spaces. Linear algebra is thus a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave functions) and Fourier analysis (orthogonal basis).

Scientific computation


Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best-known implementations. For improving efficiency, some of them configure the algorithms automatically, at run time, to adapt them to the specificities of the computer (cache size, number of available cores, ...).

Since the 1960s there have been processors with specialized instructions[23] for optimizing the operations of linear algebra, optional array processors[24] under the control of a conventional processor, supercomputers[25][26][27] designed for array processing, and conventional processors augmented[28] with vector registers.

Some contemporary processors, typically graphics processing units (GPUs), are designed with a matrix structure for optimizing the operations of linear algebra.[29]

Geometry of ambient space


The modeling of ambient space is based on geometry. Sciences concerned with this space use geometry widely. This is the case with mechanics and robotics, for describing rigid body dynamics; geodesy, for describing the shape of the Earth; perspectivity, computer vision, and computer graphics, for describing the relationship between a scene and its plane representation; and many other scientific domains.

In all these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one must compute with coordinates. This requires the heavy use of linear algebra.

Study of complex systems

See also: Complex system

Most physical phenomena are modeled by partial differential equations. To solve them, one usually decomposes the space in which the solutions are sought into small, mutually interacting cells. For linear systems this interaction involves linear functions. For nonlinear systems, this interaction is often approximated by linear functions.[b] This is called a linear model or first-order approximation. Linear models are frequently used for complex nonlinear real-world systems because they make parametrization more manageable.[30] In both cases, very large matrices are generally involved. Weather forecasting (or more specifically, parametrization for atmospheric modeling) is a typical example of a real-world application, where the whole Earth atmosphere is divided into cells of, say, 100 km of width and 100 km of height.

Fluid mechanics, fluid dynamics, and thermal energy systems

Linear algebra, a branch of mathematics dealing with vector spaces and linear mappings between these spaces, plays a critical role in various engineering disciplines, including fluid mechanics, fluid dynamics, and thermal energy systems. Its application in these fields is multifaceted and indispensable for solving complex problems.[31][32][33]

In fluid mechanics, linear algebra is integral to understanding and solving problems related to the behavior of fluids. It assists in the modeling and simulation of fluid flow, providing essential tools for the analysis of fluid dynamics problems. For instance, linear algebraic techniques are used to solve systems of differential equations that describe fluid motion. These equations, often complex and non-linear, can be linearized using linear algebra methods, allowing for simpler solutions and analyses.

In the field of fluid dynamics, linear algebra finds its application in computational fluid dynamics (CFD), a branch that uses numerical analysis and data structures to solve and analyze problems involving fluid flows. CFD relies heavily on linear algebra for the computation of fluid flow and heat transfer in various applications. For example, the Navier–Stokes equations, fundamental in fluid dynamics, are often solved using techniques derived from linear algebra. This includes the use of matrices and vectors to represent and manipulate fluid flow fields.

Furthermore, linear algebra plays a crucial role in thermal energy systems, particularly in power systems analysis. It is used to model and optimize the generation, transmission, and distribution of electric power. Linear algebraic concepts such as matrix operations and eigenvalue problems are employed to enhance the efficiency, reliability, and economic performance of power systems. The application of linear algebra in this context is vital for the design and operation of modern power systems, including renewable energy sources and smart grids.

Overall, the application of linear algebra in fluid mechanics, fluid dynamics, and thermal energy systems is an example of the profound interconnection between mathematics and engineering. It provides engineers with the necessary tools to model, analyze, and solve complex problems in these domains, leading to advancements in technology and industry.

Extensions and generalizations


This section presents several related topics that do not appear generally in elementary textbooks on linear algebra but are commonly considered, in advanced mathematics, as parts of linear algebra.

Module theory

Main article: Module (mathematics)

The existence of multiplicative inverses in fields is not involved in the axioms defining a vector space. One may thus replace the field of scalars by a ring R, and this gives the structure called a module over R, or R-module.

The concepts of linear independence, span, basis, and linear maps (also called module homomorphisms) are defined for modules exactly as for vector spaces, with the essential difference that, if R is not a field, there are modules that do not have any basis. The modules that have a basis are the free modules, and those that are spanned by a finite set are the finitely generated modules. Module homomorphisms between finitely generated free modules may be represented by matrices. The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring.

Vector spaces are completely characterized by their dimension (up to an isomorphism). In general, there is no such complete classification for modules, even if one restricts oneself to finitely generated modules. However, every module is a cokernel of a homomorphism of free modules.

Modules over the integers can be identified with abelian groups, since multiplication by an integer may be identified with repeated addition. Most of the theory of abelian groups may be extended to modules over a principal ideal domain. In particular, over a principal ideal domain, every submodule of a free module is free, and the fundamental theorem of finitely generated abelian groups may be extended straightforwardly to finitely generated modules over a principal ring.

There are many rings for which there are algorithms for solving linear equations and systems of linear equations. However, these algorithms generally have a computational complexity that is much higher than similar algorithms over a field. For more details, see Linear equation over a ring.

Multilinear algebra and tensors


In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of several different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V* consisting of linear maps f : V → F where F is the field of scalars. Multilinear maps T : V^n → F can be described via tensor products of elements of V*.

If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V × V → V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).

Topological vector spaces

Main articles: Topological vector space, Normed vector space, and Hilbert space

Vector spaces that are not finite-dimensional often require additional structure to be tractable. A normed vector space is a vector space along with a function called a norm, which measures the "size" of elements. The norm induces a metric, which measures the distance between elements, and induces a topology, which allows for a definition of continuous maps. The metric also allows for a definition of limits and completeness – a normed vector space that is complete is known as a Banach space. A complete metric space along with the additional structure of an inner product (a conjugate symmetric sesquilinear form) is known as a Hilbert space, which is in some sense a particularly well-behaved Banach space. Functional analysis applies the methods of linear algebra alongside those of mathematical analysis to study various function spaces; the central objects of study in functional analysis are Lp spaces, which are Banach spaces, and especially the L2 space of square-integrable functions, which is the only Hilbert space among them. Functional analysis is of particular importance to quantum mechanics, the theory of partial differential equations, digital signal processing, and electrical engineering. It also provides the foundation and theoretical framework that underlies the Fourier transform and related methods.

Explanatory notes

  1. ^ This axiom is not asserting the associativity of an operation, since there are two operations in question: scalar multiplication, bv; and field multiplication, ab.
  2. ^ This may have the consequence that some physically interesting solutions are omitted.

Citations

  1. ^ Banerjee, Sudipto; Roy, Anindya (2014). Linear Algebra and Matrix Analysis for Statistics. Texts in Statistical Science (1st ed.). Chapman and Hall/CRC. ISBN 978-1420095388.
  2. ^ Strang, Gilbert (July 19, 2005). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 978-0-03-010567-8.
  3. ^ Weisstein, Eric. "Linear Algebra". MathWorld. Wolfram. Retrieved 16 April 2012.
  4. ^ Hart, Roger (2010). The Chinese Roots of Linear Algebra. JHU Press. ISBN 9780801899584.
  5. ^ a b c d Vitulli, Marie. "A Brief History of Linear Algebra and Matrix Theory". Department of Mathematics. University of Oregon. Archived from the original on 2012-09-10. Retrieved 2014-07-08.
  6. ^ Koecher, M., Remmert, R. (1991). Hamilton's Quaternions. In: Numbers. Graduate Texts in Mathematics, vol 123. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1005-4_10
  7. ^ Benjamin Peirce (1872) Linear Associative Algebra, lithograph, new edition with corrections, notes, and an added 1875 paper by Peirce, plus notes by his son Charles Sanders Peirce, published in the American Journal of Mathematics v. 4, 1881, Johns Hopkins University, pp. 221–226, Google Eprint and as an extract, D. Van Nostrand, 1882, Google Eprint.
  8. ^ Roman (2005, ch. 1, p. 27)
  9. ^ Axler (2015) p. 82, §3.59
  10. ^ Axler (2015) p. 23, §1.45
  11. ^ Anton (1987, p. 2)
  12. ^ Beauregard & Fraleigh (1973, p. 65)
  13. ^ Burden & Faires (1993, p. 324)
  14. ^ Golub & Van Loan (1996, p. 87)
  15. ^ Harper (1976, p. 57)
  16. ^ Katznelson & Katznelson (2008) pp. 76–77, § 4.4.1–4.4.6
  17. ^ Katznelson & Katznelson (2008) p. 37, § 2.1.3
  18. ^ Halmos (1974) p. 20, § 13
  19. ^ Axler (2015) p. 101, § 3.94
  20. ^ P. K. Jain, Khalil Ahmad (1995). "5.1 Definitions and basic properties of inner product spaces and Hilbert spaces". Functional analysis (2nd ed.). New Age International. p. 203. ISBN 81-224-0801-X.
  21. ^ Eduard Prugovec̆ki (1981). "Definition 2.1". Quantum mechanics in Hilbert space (2nd ed.). Academic Press. pp. 18ff. ISBN 0-12-566060-X.
  22. ^ Emil Artin (1957) Geometric Algebra, Interscience Publishers
  23. ^ IBM System/360 Model 40 - Sum of Products Instruction - RPQ W12561 - Special Systems Feature. IBM. L22-6902.
  24. ^ IBM System/360 Custom Feature Description: 2938 Array Processor Model 1 - RPQ W24563; Model 2, RPQ 815188. IBM. A24-3519.
  25. ^ Barnes, George; Brown, Richard; Kato, Maso; Kuck, David; Slotnick, Daniel; Stokes, Richard (August 1968). "The ILLIAC IV Computer" (PDF). IEEE Transactions on Computers. C-17 (8): 746–757. doi:10.1109/tc.1968.229158. ISSN 0018-9340. S2CID 206617237. Retrieved October 31, 2024.
  26. ^ Star-100 - Hardware Reference Manual (PDF). Revision 9. Control Data Corporation. December 15, 1975. 60256000. Retrieved October 31, 2024.
  27. ^ Cray-1 - Computer System - Hardware Reference Manual (PDF). Rev. C. Cray Research, Inc. November 4, 1977. 2240004. Retrieved October 31, 2024.
  28. ^ IBM Enterprise Systems Architecture/370 and System/370 Vector Operations (PDF) (Fourth ed.). IBM. August 1988. SA22-7125-3. Retrieved October 31, 2024.
  29. ^ "GPU Performance Background User's Guide". NVIDIA Docs. Retrieved 2024-10-29.
  30. ^ Savov, Ivan (2017). No Bullshit Guide to Linear Algebra. Minireference Co. pp. 150–155. ISBN 9780992001025.
  31. ^ "Special Topics in Mathematics with Applications: Linear Algebra and the Calculus of Variations | Mechanical Engineering". MIT OpenCourseWare.
  32. ^ "Energy and power systems". engineering.ucdenver.edu.
  33. ^ "ME Undergraduate Curriculum | FAMU-FSU". eng.famu.fsu.edu.

Further reading


History

  • Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra", American Mathematical Monthly 86 (1979), pp. 809–817.
  • Grassmann, Hermann (1844),Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert, Leipzig: O. Wigand

Study guides and outlines

  • Leduc, Steven A. (May 1, 1996), Linear Algebra (Cliffs Quick Review), Cliffs Notes, ISBN 978-0-8220-5331-6
  • Lipschutz, Seymour; Lipson, Marc (December 6, 2000), Schaum's Outline of Linear Algebra (3rd ed.), McGraw-Hill, ISBN 978-0-07-136200-9
  • Lipschutz, Seymour (January 1, 1989), 3,000 Solved Problems in Linear Algebra, McGraw–Hill, ISBN 978-0-07-038023-3
  • McMahon, David (October 28, 2005), Linear Algebra Demystified, McGraw–Hill Professional, ISBN 978-0-07-146579-3
  • Zhang, Fuzhen (April 7, 2009), Linear Algebra: Challenging Problems for Students, The Johns Hopkins University Press, ISBN 978-0-8018-9125-0
