Vector space

From Wikipedia, the free encyclopedia
Algebraic structure in linear algebra
Not to be confused with Vector field.
"Linear space" redirects here. For a structure in incidence geometry, see Linear space (geometry).
Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2w.

In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, can be added together and multiplied ("scaled") by numbers called scalars. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field.

Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities (such as forces and velocity) that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations.

Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that, for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same (technically the vector spaces are isomorphic). A vector space is finite-dimensional if its dimension is a natural number. Otherwise, it is infinite-dimensional, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas. Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension.

Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces.


Definition and basic properties


In this article, vectors are represented in boldface to distinguish them from scalars.[nb 1][1]

A vector space over a field F is a non-empty set V together with a binary operation and a binary function that satisfy the eight axioms listed below. In this context, the elements of V are commonly called vectors, and the elements of F are called scalars.[2]

  • The binary operation, called vector addition or simply addition, assigns to any two vectors v and w in V a third vector in V, which is commonly written as v + w and called the sum of these two vectors.
  • The binary function, called scalar multiplication, assigns to any scalar a in F and any vector v in V another vector in V, which is denoted av.[nb 2]

To have a vector space, the following eight axioms must be satisfied for every u, v, and w in V, and a and b in F.[3]


Associativity of vector addition: u + (v + w) = (u + v) + w
Commutativity of vector addition: u + v = v + u
Identity element of vector addition: There exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
Inverse elements of vector addition: For every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v[nb 3]
Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv

When the scalar field is the real numbers, the vector space is called a real vector space, and when the scalar field is the complex numbers, the vector space is called a complex vector space.[4] These two cases are the most common ones, but vector spaces with scalars in an arbitrary field F are also commonly considered. Such a vector space is called an F-vector space or a vector space over F.[5]

An equivalent definition of a vector space can be given, which is much more concise but less elementary: the first four axioms (related to vector addition) say that a vector space is an abelian group under addition, and the four remaining axioms (related to the scalar multiplication) say that this operation defines a ring homomorphism from the field F into the endomorphism ring of this group.[6]

Subtraction of two vectors can be defined as $\mathbf{v} - \mathbf{w} = \mathbf{v} + (-\mathbf{w})$.

Direct consequences of the axioms include that, for every $s \in F$ and $\mathbf{v} \in V$, one has $0\mathbf{v} = \mathbf{0}$, $s\mathbf{0} = \mathbf{0}$, $(-1)\mathbf{v} = -\mathbf{v}$, and that $s\mathbf{v} = \mathbf{0}$ implies $s = 0$ or $\mathbf{v} = \mathbf{0}$.

Even more concisely, a vector space is a module over a field.[7]

Bases, vector coordinates, and subspaces

A vector v in R2 (blue) expressed in terms of different bases: using the standard basis of R2: v = xe1 + ye2 (black), and using a different, non-orthogonal basis: v = f1 + f2 (red).
Linear combination
Given a set G of elements of an F-vector space V, a linear combination of elements of G is an element of V of the form $a_1\mathbf{g}_1 + a_2\mathbf{g}_2 + \cdots + a_k\mathbf{g}_k$, where $a_1, \ldots, a_k \in F$ and $\mathbf{g}_1, \ldots, \mathbf{g}_k \in G$. The scalars $a_1, \ldots, a_k$ are called the coefficients of the linear combination.[8]
Linear independence
The elements of a subset G of an F-vector space V are said to be linearly independent if no element of G can be written as a linear combination of the other elements of G. Equivalently, they are linearly independent if two linear combinations of elements of G define the same element of V if and only if they have the same coefficients. Also equivalently, they are linearly independent if a linear combination results in the zero vector if and only if all its coefficients are zero.[9]
Linear subspace
A linear subspace or vector subspace W of a vector space V is a non-empty subset of V that is closed under vector addition and scalar multiplication; that is, the sum of two elements of W and the product of an element of W by a scalar belong to W.[10] This implies that every linear combination of elements of W belongs to W. A linear subspace is a vector space for the induced addition and scalar multiplication; this means that the closure property implies that the axioms of a vector space are satisfied.[11]
The closure property also implies that every intersection of linear subspaces is a linear subspace.[11]
Linear span
Given a subset G of a vector space V, the linear span or simply the span of G is the smallest linear subspace of V that contains G, in the sense that it is the intersection of all linear subspaces that contain G. The span of G is also the set of all linear combinations of elements of G.
If W is the span of G, one says that G spans or generates W, and that G is a spanning set or a generating set of W.[12]
Basis and dimension
A subset of a vector space is a basis if its elements are linearly independent and span the vector space.[13] Every vector space has at least one basis, and in general many (see Basis (linear algebra) § Proof that every vector space has a basis).[14] Moreover, all bases of a vector space have the same cardinality, which is called the dimension of the vector space (see Dimension theorem for vector spaces).[15] This is a fundamental property of vector spaces, which is detailed in the remainder of the section.

Bases are a fundamental tool for the study of vector spaces, especially when the dimension is finite. In the infinite-dimensional case, the existence of infinite bases, often called Hamel bases, depends on the axiom of choice. It follows that, in general, no basis can be explicitly described.[16] For example, the real numbers form an infinite-dimensional vector space over the rational numbers, for which no specific basis is known.

Consider a basis $(\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n)$ of a vector space V of dimension n over a field F. The definition of a basis implies that every $\mathbf{v} \in V$ may be written $\mathbf{v} = a_1\mathbf{b}_1 + \cdots + a_n\mathbf{b}_n$, with $a_1, \ldots, a_n$ in F, and that this decomposition is unique. The scalars $a_1, \ldots, a_n$ are called the coordinates of v on the basis. They are also said to be the coefficients of the decomposition of v on the basis. One also says that the n-tuple of the coordinates is the coordinate vector of v on the basis, since the set $F^n$ of the n-tuples of elements of F is a vector space for componentwise addition and scalar multiplication, whose dimension is n.

The one-to-one correspondence between vectors and their coordinate vectors maps vector addition to vector addition and scalar multiplication to scalar multiplication. It is thus a vector space isomorphism, which allows translating reasoning and computations on vectors into reasoning and computations on their coordinates.[17]
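As an illustration of coordinates on a basis, here is a minimal sketch (using numpy; the particular basis vectors and the vector v are choices made for this example, not taken from the article) that computes a coordinate vector by solving a linear system:

```python
# Sketch: the coordinate vector of v on a basis (b1, b2) of R^2, obtained by
# solving B a = v, where the columns of B are the basis vectors.
import numpy as np

b1, b2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
B = np.column_stack([b1, b2])
assert np.linalg.det(B) != 0    # nonzero determinant: b1, b2 are linearly independent

v = np.array([3.0, 1.0])
a = np.linalg.solve(B, v)       # coordinates of v on the basis (b1, b2)
assert np.allclose(a[0] * b1 + a[1] * b2, v)  # the decomposition recovers v
print(a)                        # [2. 1.], since v = 2*b1 + 1*b2
```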

History


Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, French mathematicians René Descartes and Pierre de Fermat founded analytic geometry by identifying solutions to an equation of two variables with points on a plane curve.[18] To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines, and planes, which are predecessors of vectors.[19] Möbius (1827) introduced the notion of barycentric coordinates.[20] Bellavitis (1833) introduced an equivalence relation on directed line segments that share the same length and direction, which he called equipollence.[21] A Euclidean vector is then an equivalence class of that relation.[22]

Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions by the latter.[23] They are elements in R2 and R4; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.

In 1857, Cayley introduced the matrix notation, which allows for harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations.[24] In his work, the concepts of linear independence and dimension, as well as scalar products, are present. Grassmann's 1844 work also exceeds the framework of vector spaces, since his consideration of multiplication led him to what are today called algebras. Italian mathematician Peano was the first to give the modern definition of vector spaces and linear maps in 1888,[25] although he called them "linear systems".[26] Peano's axiomatization allowed for vector spaces with infinite dimension, but Peano did not develop that theory further. In 1897, Salvatore Pincherle adopted Peano's axioms and made initial inroads into the theory of infinite-dimensional vector spaces.[27]

An important development of vector spaces is due to the construction of function spaces by Henri Lebesgue. This was later formalized by Banach and Hilbert, around 1920.[28] At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.[29]

Examples

Main article: Examples of vector spaces

Arrows in the plane

Vector addition: the sum v + w (black) of the vectors v (blue) and w (red) is shown.
Scalar multiplication: the multiples −v and 2w are shown.

The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities.[30] Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows, and is denoted v + w. In the special case of two arrows on the same line, their sum is the arrow on this line whose length is the sum or the difference of the lengths, depending on whether the arrows have the same direction. Another operation that can be done with arrows is scaling: given any positive real number a, the arrow that has the same direction as v, but is dilated or shrunk by multiplying its length by a, is called multiplication of v by a. It is denoted av. When a is negative, av is defined as the arrow pointing in the opposite direction instead.[31]

The following shows a few examples: if a = 2, the resulting vector aw has the same direction as w, but is stretched to the double length of w (the second image). Equivalently, 2w is the sum w + w. Moreover, (−1)v = −v has the opposite direction and the same length as v (blue vector pointing down in the second image).

Ordered pairs of numbers


A second key example of a vector space is provided by pairs of real numbers x and y. The order of the components x and y is significant, so such a pair is also called an ordered pair. Such a pair is written as (x, y). The sum of two such pairs and the multiplication of a pair with a number is defined as follows:[32]
$$(x_1, y_1) + (x_2, y_2) = (x_1 + x_2,\; y_1 + y_2), \qquad a(x, y) = (ax, ay).$$

The first example above reduces to this example if an arrow is represented by a pair of Cartesian coordinates of its endpoint.
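To make the operations on ordered pairs concrete, here is a minimal sketch in Python (the class Vec2 and its methods are illustrative choices, not from the article) modeling the vector space of pairs of real numbers:

```python
# Sketch: the ordered-pair vector space R^2, with componentwise addition and
# scalar multiplication as defined above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec2:
    x: float
    y: float

    def __add__(self, other: "Vec2") -> "Vec2":
        # vector addition: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)
        return Vec2(self.x + other.x, self.y + other.y)

    def __rmul__(self, a: float) -> "Vec2":
        # scalar multiplication: a(x, y) = (ax, ay)
        return Vec2(a * self.x, a * self.y)

v, w = Vec2(1.0, 2.0), Vec2(3.0, -1.0)
assert v + w == w + v                      # commutativity of vector addition
assert 2.0 * (v + w) == 2.0 * v + 2.0 * w  # distributivity over vector addition
```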

Coordinate space


The simplest example of a vector space over a field F is the field F itself with its addition viewed as vector addition and its multiplication viewed as scalar multiplication. More generally, all n-tuples (sequences of length n) $(a_1, a_2, \ldots, a_n)$ of elements $a_i$ of F form a vector space that is usually denoted $F^n$ and called a coordinate space.[33] The case n = 1 is the above-mentioned simplest example, in which the field F is also regarded as a vector space over itself. The case F = R and n = 2 (so R2) reduces to the previous example.

Complex numbers and other field extensions


The set of complex numbers C, that is, numbers that can be written in the form x + iy for real numbers x and y where i is the imaginary unit, forms a vector space over the reals with the usual addition and multiplication: (x + iy) + (a + ib) = (x + a) + i(y + b) and c ⋅ (x + iy) = (cx) + i(cy) for real numbers x, y, a, b and c. The various axioms of a vector space follow from the fact that the same rules hold for complex number arithmetic. The example of complex numbers is essentially the same as (that is, it is isomorphic to) the vector space of ordered pairs of real numbers mentioned above: if we think of the complex number x + iy as representing the ordered pair (x, y) in the complex plane then we see that the rules for addition and scalar multiplication correspond exactly to those in the earlier example.

More generally, field extensions provide another class of examples of vector spaces, particularly in algebra and algebraic number theory: a field F containing a smaller field E is an E-vector space, by the given multiplication and addition operations of F.[34] For example, the complex numbers are a vector space over R, and the field extension $\mathbf{Q}(i\sqrt{5})$ is a vector space over Q.

Function spaces

Main article: Function space
Addition of functions: the sum of the sine and the exponential function is $\sin + \exp : \mathbb{R} \to \mathbb{R}$ with $(\sin + \exp)(x) = \sin(x) + \exp(x)$.

Functions from any fixed set Ω to a field F also form vector spaces, by performing addition and scalar multiplication pointwise. That is, the sum of two functions f and g is the function $(f+g)$ given by $(f+g)(w) = f(w) + g(w)$, and similarly for multiplication. Such function spaces occur in many geometric situations, when Ω is the real line or an interval, or other subsets of R. Many notions in topology and analysis, such as continuity, integrability or differentiability, are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property.[35] Therefore, the sets of such functions are vector spaces, whose study belongs to functional analysis.

Linear equations

Main articles: Linear equation, Linear differential equation, and Systems of linear equations

Systems of homogeneous linear equations are closely tied to vector spaces.[36] For example, the solutions of
$$a + 3b + c = 0, \qquad 4a + 2b + 2c = 0$$
are given by triples with arbitrary $a$, $b = a/2$, and $c = -5a/2$. They form a vector space: sums and scalar multiples of such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to condense multiple linear equations as above into one vector equation, namely

$$A\mathbf{x} = \mathbf{0},$$

where $A = \begin{bmatrix} 1 & 3 & 1 \\ 4 & 2 & 2 \end{bmatrix}$ is the matrix containing the coefficients of the given equations, $\mathbf{x}$ is the vector $(a, b, c)$, $A\mathbf{x}$ denotes the matrix product, and $\mathbf{0} = (0, 0)$ is the zero vector. In a similar vein, the solutions of homogeneous linear differential equations form vector spaces. For example,

$$f''(x) + 2f'(x) + f(x) = 0$$

yields $f(x) = ae^{-x} + bxe^{-x}$, where $a$ and $b$ are arbitrary constants, and $e^x$ is the natural exponential function.
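A quick numerical check of the linear system above: the following sketch (assuming numpy and scipy are available, which the article does not require) computes the solution space of $A\mathbf{x} = \mathbf{0}$ and confirms it is the line of triples $(a, a/2, -5a/2)$:

```python
# Sketch: the solution set of the homogeneous system is one-dimensional and
# satisfies the stated ratios between a, b, and c.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])
N = null_space(A)              # orthonormal basis of {x : Ax = 0}
assert N.shape == (3, 1)       # a one-dimensional space of solutions

x = N[:, 0]
a = x[0]
assert np.allclose(x, [a, a / 2, -5 * a / 2])  # b = a/2 and c = -5a/2
assert np.allclose(A @ x, 0)
```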

Linear maps and matrices

Main article: Linear map

The relation of two vector spaces can be expressed by a linear map or linear transformation. They are functions that reflect the vector space structure, that is, they preserve sums and scalar multiplication:
$$f(\mathbf{v} + \mathbf{w}) = f(\mathbf{v}) + f(\mathbf{w}), \qquad f(a \cdot \mathbf{v}) = a \cdot f(\mathbf{v})$$
for all $\mathbf{v}$ and $\mathbf{w}$ in $V$, and all $a$ in $F$.[37]

An isomorphism is a linear map f : V → W such that there exists an inverse map g : W → V, which is a map such that the two possible compositions f ∘ g : W → W and g ∘ f : V → V are identity maps. Equivalently, f is both one-to-one (injective) and onto (surjective).[38] If there exists an isomorphism between V and W, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in V are, via f, transported to similar ones in W, and vice versa via g.

Describing an arrow vector v by its coordinates x and y yields an isomorphism of vector spaces.

For example, the arrows in the plane and the ordered pairs of numbers vector spaces in the introduction above (see § Examples) are isomorphic: a planar arrow v departing at the origin of some (fixed) coordinate system can be expressed as an ordered pair by considering the x- and y-component of the arrow, as shown in the image at the right. Conversely, given a pair (x, y), the arrow going by x to the right (or to the left, if x is negative), and y up (down, if y is negative) recovers the arrow v.[39]

Linear maps V → W between two vector spaces form a vector space HomF(V, W), also denoted L(V, W) or 𝓛(V, W).[40] The space of linear maps from V to F is called the dual vector space, denoted V∗.[41] Via the injective natural map V → V∗∗, any vector space can be embedded into its bidual; the map is an isomorphism if and only if the space is finite-dimensional.[42]

Once a basis of V is chosen, linear maps f : V → W are completely determined by specifying the images of the basis vectors, because any element of V is expressed uniquely as a linear combination of them.[43] If dim V = dim W, a one-to-one correspondence between fixed bases of V and W gives rise to a linear map that maps any basis element of V to the corresponding basis element of W. It is an isomorphism, by its very definition.[44] Therefore, two vector spaces over a given field are isomorphic if their dimensions agree and vice versa. Another way to express this is that any vector space over a given field is completely classified (up to isomorphism) by its dimension, a single number. In particular, any n-dimensional F-vector space V is isomorphic to Fn. However, there is no "canonical" or preferred isomorphism; an isomorphism φ : Fn → V is equivalent to the choice of a basis of V, by mapping the standard basis of Fn to V via φ.

Matrices

Main articles: Matrix and Determinant
A typical matrix

Matrices are a useful notion to encode linear maps.[45] They are written as a rectangular array of scalars as in the image at the right. Any m-by-n matrix $A$ gives rise to a linear map from $F^n$ to $F^m$ by
$$\mathbf{x} = (x_1, x_2, \ldots, x_n) \mapsto \left(\sum_{j=1}^{n} a_{1j}x_j,\; \sum_{j=1}^{n} a_{2j}x_j,\; \ldots,\; \sum_{j=1}^{n} a_{mj}x_j\right),$$
where $\sum$ denotes summation, or by using the matrix multiplication of the matrix $A$ with the coordinate vector $\mathbf{x}$:

$$\mathbf{x} \mapsto A\mathbf{x}.$$

Moreover, after choosing bases of V and W, any linear map f : V → W is uniquely represented by a matrix via this assignment.[46]
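The following sketch (Python with numpy; the particular matrix and vectors are arbitrary choices made for the example) spells out the agreement between the sum formula above and the matrix product, and checks that the resulting map is linear:

```python
# Sketch: the linear map F^3 -> F^2 attached to a 2-by-3 matrix, written both
# as explicit sums and as a matrix product.
import numpy as np

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])
x = np.array([1.0, -1.0, 2.0])

by_sums = np.array([sum(A[i, j] * x[j] for j in range(A.shape[1]))
                    for i in range(A.shape[0])])
assert np.allclose(by_sums, A @ x)       # both descriptions agree

# linearity: f(v + w) = f(v) + f(w) and f(a v) = a f(v)
v, w = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])
assert np.allclose(A @ (v + w), A @ v + A @ w)
assert np.allclose(A @ (2.5 * v), 2.5 * (A @ v))
```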

The volume of this parallelepiped is the absolute value of the determinant of the 3-by-3 matrix formed by the vectors r1, r2, and r3.

The determinant det(A) of a square matrix A is a scalar that tells whether the associated map is an isomorphism or not: to be so it is sufficient and necessary that the determinant is nonzero.[47] The linear transformation of Rn corresponding to a real n-by-n matrix is orientation preserving if and only if its determinant is positive.

Eigenvalues and eigenvectors

Main article: Eigenvalues and eigenvectors

Endomorphisms, linear maps f : V → V, are particularly important since in this case vectors v can be compared with their image under f, f(v). Any nonzero vector v satisfying λv = f(v), where λ is a scalar, is called an eigenvector of f with eigenvalue λ.[48] Equivalently, v is an element of the kernel of the difference f − λ · Id (where Id is the identity map V → V). If V is finite-dimensional, this can be rephrased using determinants: f having eigenvalue λ is equivalent to
$$\det(f - \lambda \cdot \operatorname{Id}) = 0.$$
By spelling out the definition of the determinant, the expression on the left hand side can be seen to be a polynomial function in λ, called the characteristic polynomial of f.[49] If the field F is large enough to contain a zero of this polynomial (which automatically happens for F algebraically closed, such as F = C) any linear map has at least one eigenvector. The vector space V may or may not possess an eigenbasis, a basis consisting of eigenvectors. This phenomenon is governed by the Jordan canonical form of the map.[50] The set of all eigenvectors corresponding to a particular eigenvalue of f forms a vector space known as the eigenspace corresponding to the eigenvalue (and f) in question.
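A numerical illustration (numpy; the 2-by-2 matrix is an arbitrary example chosen here, not from the article): the eigenvalues returned by a solver are roots of $\det(A - \lambda \cdot \operatorname{Id})$, and each eigenvector satisfies $A\mathbf{v} = \lambda\mathbf{v}$:

```python
# Sketch: eigenvalues as roots of the characteristic polynomial.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)   # eigenvalues 3 and 1 for this matrix

for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)                           # A v = λ v
    assert np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0)   # det(A - λ Id) = 0
```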

Basic constructions


In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones.

Subspaces and quotient spaces

Main articles: Linear subspace and Quotient vector space
A line passing through the origin (blue, thick) in R3 is a linear subspace. It is the intersection of two planes (green and yellow).

A nonempty subset $W$ of a vector space $V$ that is closed under addition and scalar multiplication (and therefore contains the $\mathbf{0}$-vector of $V$) is called a linear subspace of $V$, or simply a subspace of $V$, when the ambient space is unambiguously a vector space.[51][nb 4] Subspaces of $V$ are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set $S$ of vectors is called its span, and it is the smallest subspace of $V$ containing the set $S$. Expressed in terms of elements, the span is the subspace consisting of all the linear combinations of elements of $S$.[52]

Linear subspaces of dimension 1 and 2 are referred to as a line (also vector line) and a plane, respectively. If W is an n-dimensional vector space, any subspace of dimension one less, that is, of dimension $n-1$, is called a hyperplane.[53]

The counterpart to subspaces are quotient vector spaces.[54] Given any subspace $W \subseteq V$, the quotient space $V/W$ ("$V$ modulo $W$") is defined as follows: as a set, it consists of $\mathbf{v} + W = \{\mathbf{v} + \mathbf{w} : \mathbf{w} \in W\}$, where $\mathbf{v}$ is an arbitrary vector in $V$. The sum of two such elements $\mathbf{v}_1 + W$ and $\mathbf{v}_2 + W$ is $(\mathbf{v}_1 + \mathbf{v}_2) + W$, and scalar multiplication is given by $a \cdot (\mathbf{v} + W) = (a \cdot \mathbf{v}) + W$. The key point in this definition is that $\mathbf{v}_1 + W = \mathbf{v}_2 + W$ if and only if the difference of $\mathbf{v}_1$ and $\mathbf{v}_2$ lies in $W$.[nb 5] This way, the quotient space "forgets" information that is contained in the subspace $W$.

The kernel $\ker(f)$ of a linear map $f : V \to W$ consists of vectors $\mathbf{v}$ that are mapped to $\mathbf{0}$ in $W$.[55] The kernel and the image $\operatorname{im}(f) = \{f(\mathbf{v}) : \mathbf{v} \in V\}$ are subspaces of $V$ and $W$, respectively.[56]

An important example is the kernel of a linear map $\mathbf{x} \mapsto A\mathbf{x}$ for some fixed matrix $A$. The kernel of this map is the subspace of vectors $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{0}$, which is precisely the set of solutions to the system of homogeneous linear equations belonging to $A$. This concept also extends to linear differential equations
$$a_0 f + a_1 \frac{df}{dx} + a_2 \frac{d^2 f}{dx^2} + \cdots + a_n \frac{d^n f}{dx^n} = 0,$$
where the coefficients $a_i$ are functions in $x$, too. In the corresponding map
$$f \mapsto D(f) = \sum_{i=0}^{n} a_i \frac{d^i f}{dx^i},$$
the derivatives of the function $f$ appear linearly (as opposed to $f''(x)^2$, for example). Since differentiation is a linear procedure (that is, $(f+g)' = f' + g'$ and $(c \cdot f)' = c \cdot f'$ for a constant $c$) this assignment is linear, called a linear differential operator. In particular, the solutions to the differential equation $D(f) = 0$ form a vector space (over R or C).[57]
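For the operator $D(f) = f'' + 2f' + f$ from the earlier example, a computer algebra system can exhibit this kernel directly. The sketch below assumes sympy, which the article itself does not mention:

```python
# Sketch: the kernel of the linear differential operator f'' + 2f' + f is the
# two-dimensional space spanned by e^{-x} and x e^{-x}.
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")
sol = sp.dsolve(f(x).diff(x, 2) + 2 * f(x).diff(x) + f(x), f(x))
print(sol)   # Eq(f(x), (C1 + C2*x)*exp(-x)); C1, C2 are the arbitrary constants a, b
```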

The existence of kernels and images is part of the statement that the category of vector spaces (over a fixed field $F$) is an abelian category, that is, a corpus of mathematical objects and structure-preserving maps between them (a category) that behaves much like the category of abelian groups.[58] Because of this, many statements such as the first isomorphism theorem (also called rank–nullity theorem in matrix-related terms)
$$V/\ker(f) \;\equiv\; \operatorname{im}(f)$$
and the second and third isomorphism theorem can be formulated and proven in a way very similar to the corresponding statements for groups.

Direct product and direct sum

Main articles: Direct product and Direct sum of modules

The direct product of vector spaces and the direct sum of vector spaces are two ways of combining an indexed family of vector spaces into a new vector space.

The direct product $\prod_{i \in I} V_i$ of a family of vector spaces $V_i$ consists of the set of all tuples $(\mathbf{v}_i)_{i \in I}$, which specify for each index $i$ in some index set $I$ an element $\mathbf{v}_i$ of $V_i$.[59] Addition and scalar multiplication are performed componentwise. A variant of this construction is the direct sum $\bigoplus_{i \in I} V_i$ (also called coproduct and denoted $\coprod_{i \in I} V_i$), where only tuples with finitely many nonzero vectors are allowed. If the index set $I$ is finite, the two constructions agree, but in general they are different.

Tensor product

Main article: Tensor product of vector spaces

The tensor product $V \otimes_F W$, or simply $V \otimes W$, of two vector spaces $V$ and $W$ is one of the central notions of multilinear algebra, which deals with extending notions such as linear maps to several variables. A map $g : V \times W \to X$ from the Cartesian product $V \times W$ is called bilinear if $g$ is linear in both variables $\mathbf{v}$ and $\mathbf{w}$. That is to say, for fixed $\mathbf{w}$ the map $\mathbf{v} \mapsto g(\mathbf{v}, \mathbf{w})$ is linear in the sense above, and likewise for fixed $\mathbf{v}$.

Commutative diagram depicting the universal property of the tensor product

The tensor product is a particular vector space that is a universal recipient of bilinear maps $g$, as follows. It is defined as the vector space consisting of finite (formal) sums of symbols called tensors
$$\mathbf{v}_1 \otimes \mathbf{w}_1 + \mathbf{v}_2 \otimes \mathbf{w}_2 + \cdots + \mathbf{v}_n \otimes \mathbf{w}_n,$$
subject to the rules[60]
$$a \cdot (\mathbf{v} \otimes \mathbf{w}) = (a \cdot \mathbf{v}) \otimes \mathbf{w} = \mathbf{v} \otimes (a \cdot \mathbf{w}), \quad \text{where } a \text{ is a scalar},$$
$$(\mathbf{v}_1 + \mathbf{v}_2) \otimes \mathbf{w} = \mathbf{v}_1 \otimes \mathbf{w} + \mathbf{v}_2 \otimes \mathbf{w},$$
$$\mathbf{v} \otimes (\mathbf{w}_1 + \mathbf{w}_2) = \mathbf{v} \otimes \mathbf{w}_1 + \mathbf{v} \otimes \mathbf{w}_2.$$
These rules ensure that the map $f$ from $V \times W$ to $V \otimes W$ that maps a tuple $(\mathbf{v}, \mathbf{w})$ to $\mathbf{v} \otimes \mathbf{w}$ is bilinear. The universality states that given any vector space $X$ and any bilinear map $g : V \times W \to X$, there exists a unique map $u$, shown in the diagram with a dotted arrow, whose composition with $f$ equals $g$: $u(\mathbf{v} \otimes \mathbf{w}) = g(\mathbf{v}, \mathbf{w})$.[61] This is called the universal property of the tensor product, an instance of the method, much used in advanced abstract algebra, of indirectly defining objects by specifying maps from or to this object.
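For coordinate spaces the construction can be made concrete: identifying $\mathbf{v} \otimes \mathbf{w}$ with the outer product of coordinate vectors realizes $F^m \otimes F^n$ as $F^{mn}$. A sketch (numpy; the identification and the sample vectors are choices for illustration) checking the defining rules:

```python
# Sketch: outer products satisfy the bilinearity rules of the tensor product.
import numpy as np

v1, v2 = np.array([1.0, 2.0]), np.array([0.0, 1.0])
w = np.array([3.0, -1.0, 2.0])

# (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w
assert np.allclose(np.outer(v1 + v2, w), np.outer(v1, w) + np.outer(v2, w))
# a(v ⊗ w) = (a v) ⊗ w = v ⊗ (a w): scalars move through either factor
a = 2.5
assert np.allclose(a * np.outer(v1, w), np.outer(a * v1, w))
assert np.allclose(a * np.outer(v1, w), np.outer(v1, a * w))
```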

Vector spaces with additional structure


From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space over a given field is characterized, up to isomorphism, by its dimension. However, vector spaces per se do not offer a framework to deal with the question, crucial to analysis, of whether a sequence of functions converges to another function. Likewise, linear algebra is not adapted to deal with infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of functional analysis require considering additional structures.[62]

A vector space may be given a partial order $\leq$, under which some vectors can be compared.[63] For example, $n$-dimensional real space $\mathbf{R}^n$ can be ordered by comparing its vectors componentwise. Ordered vector spaces, for example Riesz spaces, are fundamental to Lebesgue integration, which relies on the ability to express a function as a difference of two positive functions
$$f = f^+ - f^-,$$
where $f^+$ denotes the positive part of $f$ and $f^-$ the negative part.[64]

Normed vector spaces and inner product spaces

Main articles: Normed vector space and Inner product space

"Measuring" vectors is done by specifying anorm, a datum which measures lengths of vectors, or by aninner product, which measures angles between vectors. Norms and inner products are denoted|v|{\displaystyle |\mathbf {v} |} andv,w,{\displaystyle \langle \mathbf {v} ,\mathbf {w} \rangle ,} respectively. The datum of an inner product entails that lengths of vectors can be defined too, by defining the associated norm|v|:=v,v.{\textstyle |\mathbf {v} |:={\sqrt {\langle \mathbf {v} ,\mathbf {v} \rangle }}.} Vector spaces endowed with such data are known asnormed vector spaces andinner product spaces, respectively.[65]

Coordinate space $F^n$ can be equipped with the standard dot product:
$$\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x} \cdot \mathbf{y} = x_1 y_1 + \cdots + x_n y_n.$$
In $\mathbf{R}^2$, this reflects the common notion of the angle between two vectors $\mathbf{x}$ and $\mathbf{y}$, by the law of cosines:
$$\mathbf{x} \cdot \mathbf{y} = \cos\left(\angle(\mathbf{x}, \mathbf{y})\right) \cdot |\mathbf{x}| \cdot |\mathbf{y}|.$$
Because of this, two vectors satisfying $\langle \mathbf{x}, \mathbf{y} \rangle = 0$ are called orthogonal. An important variant of the standard dot product is used in Minkowski space: $\mathbf{R}^4$ endowed with the Lorentz product[66]
$$\langle \mathbf{x} | \mathbf{y} \rangle = x_1 y_1 + x_2 y_2 + x_3 y_3 - x_4 y_4.$$
In contrast to the standard dot product, it is not positive definite: $\langle \mathbf{x} | \mathbf{x} \rangle$ also takes negative values, for example, for $\mathbf{x} = (0, 0, 0, 1)$. Singling out the fourth coordinate, corresponding to time as opposed to three space-dimensions, makes it useful for the mathematical treatment of special relativity. Note that in other conventions time is often written as the first, or "zeroth" component so that the Lorentz product is written
$$\langle \mathbf{x} | \mathbf{y} \rangle = -x_0 y_0 + x_1 y_1 + x_2 y_2 + x_3 y_3.$$
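A small sketch (numpy; the sample vectors are arbitrary choices) of both products: the angle recovered from the dot product via the law of cosines, and the failure of positive definiteness for the Lorentz product:

```python
# Sketch: the dot product recovers angles; the Lorentz product can be negative.
import numpy as np

x, y = np.array([1.0, 0.0]), np.array([1.0, 1.0])
cos_angle = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.degrees(np.arccos(cos_angle)))   # ~45.0: the angle between x and y

def lorentz(u, v):
    # <u|v> = u1 v1 + u2 v2 + u3 v3 - u4 v4, time as the fourth coordinate
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2] - u[3] * v[3]

t = np.array([0.0, 0.0, 0.0, 1.0])
print(lorentz(t, t))   # -1.0: negative, so the Lorentz product is not positive definite
```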

Topological vector spaces

Main article: Topological vector space

Convergence questions are treated by considering vector spaces $V$ carrying a compatible topology, a structure that allows one to talk about elements being close to each other.[67] Compatible here means that addition and scalar multiplication have to be continuous maps. Roughly, if $\mathbf{x}$ and $\mathbf{y}$ in $V$, and $a$ in $F$ vary by a bounded amount, then so do $\mathbf{x} + \mathbf{y}$ and $a\mathbf{x}$.[nb 6] To make sense of specifying the amount a scalar changes, the field $F$ also has to carry a topology in this context; a common choice is the reals or the complex numbers.

In such topological vector spaces one can consider series of vectors. The infinite sum
$$\sum_{i=1}^{\infty} f_i = \lim_{n \to \infty} f_1 + \cdots + f_n$$
denotes the limit of the corresponding finite partial sums of the sequence $f_1, f_2, \ldots$ of elements of $V$. For example, the $f_i$ could be (real or complex) functions belonging to some function space $V$, in which case the series is a function series. The mode of convergence of the series depends on the topology imposed on the function space. In such cases, pointwise convergence and uniform convergence are two prominent examples.[68]

Unit "spheres" inR2{\displaystyle \mathbf {R} ^{2}} consist of plane vectors of norm 1. Depicted are the unit spheres in differentp{\displaystyle p}-norms, forp=1,2,{\displaystyle p=1,2,} and.{\displaystyle \infty .} The bigger diamond depicts points of 1-norm equal to 2.

A way to ensure the existence of limits of certain infinite series is to restrict attention to spaces where any Cauchy sequence has a limit; such a vector space is called complete. Roughly, a vector space is complete provided that it contains all necessary limits. For example, the vector space of polynomials on the unit interval $[0, 1]$, equipped with the topology of uniform convergence, is not complete because any continuous function on $[0, 1]$ can be uniformly approximated by a sequence of polynomials, by the Weierstrass approximation theorem.[69] In contrast, the space of all continuous functions on $[0, 1]$ with the same topology is complete.[70] A norm gives rise to a topology by defining that a sequence of vectors $\mathbf{v}_n$ converges to $\mathbf{v}$ if and only if
$$\lim_{n \to \infty} |\mathbf{v}_n - \mathbf{v}| = 0.$$
Banach and Hilbert spaces are complete topological vector spaces whose topologies are given, respectively, by a norm and an inner product. Their study, a key piece of functional analysis, focuses on infinite-dimensional vector spaces, since all norms on finite-dimensional topological vector spaces give rise to the same notion of convergence.[71] The image at the right shows the equivalence of the $1$-norm and $\infty$-norm on $\mathbf{R}^2$: as the unit "balls" enclose each other, a sequence converges to zero in one norm if and only if it so does in the other norm. In the infinite-dimensional case, however, there will generally be inequivalent topologies, which makes the study of topological vector spaces richer than that of vector spaces without additional data.

From a conceptual point of view, all notions related to topological vector spaces should match the topology. For example, instead of considering all linear maps $V \to W$ (also called functionals when $W$ is the scalar field), maps between topological vector spaces are required to be continuous.[72] In particular, the (topological) dual space $V^*$ consists of continuous functionals $V \to \mathbf{R}$ (or to $\mathbf{C}$). The fundamental Hahn–Banach theorem is concerned with separating subspaces of appropriate topological vector spaces by continuous functionals.[73]

Banach spaces

Main article: Banach space

Banach spaces, introduced by Stefan Banach, are complete normed vector spaces.[74]

A first example is the vector space $\ell^p$, consisting of infinite vectors with real entries $\mathbf{x} = (x_1, x_2, \ldots, x_n, \ldots)$ whose $p$-norm $(1 \leq p \leq \infty)$, given by
$$\|\mathbf{x}\|_\infty := \sup_i |x_i| \quad \text{for } p = \infty, \qquad \|\mathbf{x}\|_p := \left(\sum_i |x_i|^p\right)^{\frac{1}{p}} \quad \text{for } p < \infty,$$
is finite.

The topologies on the infinite-dimensional space $\ell^p$ are inequivalent for different $p$. For example, the sequence of vectors $\mathbf{x}_n = (2^{-n}, 2^{-n}, \ldots, 2^{-n}, 0, 0, \ldots)$, in which the first $2^n$ components are $2^{-n}$ and the following ones are $0$, converges to the zero vector for $p = \infty$, but does not for $p = 1$:
$$\|\mathbf{x}_n\|_\infty = \sup(2^{-n}, 0) = 2^{-n} \to 0, \quad \text{but} \quad \|\mathbf{x}_n\|_1 = \sum_{i=1}^{2^n} 2^{-n} = 2^n \cdot 2^{-n} = 1.$$
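The computation behind this example is easy to reproduce; the following sketch (numpy, with a few small values of n chosen for illustration) prints both norms of $\mathbf{x}_n$:

```python
# Sketch: the inf-norm of x_n tends to 0 while its 1-norm stays equal to 1.
import numpy as np

for n in (1, 4, 8):
    x_n = np.full(2**n, 2.0**-n)   # the first 2^n entries; the tail of zeros is omitted
    print(n, np.max(np.abs(x_n)),  # inf-norm: 2^{-n}, tending to 0
             np.sum(np.abs(x_n)))  # 1-norm: 2^n * 2^{-n} = 1 for every n
```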

More generally than sequences of real numbers, functions $f : \Omega \to \mathbb{R}$ are endowed with a norm that replaces the above sum by the Lebesgue integral
$$\|f\|_p := \left(\int_\Omega |f(x)|^p \, d\mu(x)\right)^{\frac{1}{p}}.$$

The space of integrable functions on a given domain $\Omega$ (for example an interval) satisfying $\|f\|_p < \infty$, and equipped with this norm, is called a Lebesgue space, denoted $L^p(\Omega)$.[nb 7]

These spaces are complete.[75] (If one uses the Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration theory.[nb 8]) Concretely this means that for any sequence of Lebesgue-integrable functions $f_1, f_2, \ldots, f_n, \ldots$ with $\|f_n\|_p < \infty$, satisfying the condition
$$\lim_{k,\, n \to \infty} \int_\Omega |f_k(x) - f_n(x)|^p \, d\mu(x) = 0,$$
there exists a function $f(x)$ belonging to the vector space $L^p(\Omega)$ such that
$$\lim_{k \to \infty} \int_\Omega |f(x) - f_k(x)|^p \, d\mu(x) = 0.$$

Imposing boundedness conditions not only on the function, but also on its derivatives leads to Sobolev spaces.[76]

Hilbert spaces

Main article: Hilbert space
Successive snapshots show the summation of 1 to 5 terms in approximating a periodic function (blue) by a finite sum of sine functions (red).

Complete inner product spaces are known as Hilbert spaces, in honor of David Hilbert.[77] The Hilbert space $L^2(\Omega)$, with inner product given by
$$\langle f, g \rangle = \int_\Omega f(x) \overline{g(x)} \, dx,$$
where $\overline{g(x)}$ denotes the complex conjugate of $g(x)$,[78][nb 9] is a key case.

By definition, in a Hilbert space, any Cauchy sequence converges to a limit. Conversely, finding a sequence of functions $f_n$ with desirable properties that approximate a given limit function is equally crucial. Early analysis, in the guise of the Taylor approximation, established an approximation of differentiable functions $f$ by polynomials.[79] By the Stone–Weierstrass theorem, every continuous function on $[a, b]$ can be approximated as closely as desired by a polynomial.[80] A similar approximation technique by trigonometric functions is commonly called Fourier expansion, and is much applied in engineering. More generally, and more conceptually, the theorem yields a simple description of what "basic functions", or, in abstract Hilbert spaces, what basic vectors suffice to generate a Hilbert space $H$, in the sense that the closure of their span (that is, finite linear combinations and limits of those) is the whole space. Such a set of functions is called a basis of $H$; its cardinality is known as the Hilbert space dimension.[nb 10] Not only does the theorem exhibit suitable basis functions as sufficient for approximation purposes, but together with the Gram–Schmidt process, it also enables one to construct a basis of orthogonal vectors.[81] Such orthogonal bases are the Hilbert space generalization of the coordinate axes in finite-dimensional Euclidean space.
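The Gram–Schmidt process mentioned above admits a short implementation; this is a sketch for vectors in $\mathbf{R}^n$ under the dot product (numpy; the input vectors are assumed linearly independent):

```python
# Sketch: Gram-Schmidt orthogonalization for vectors in R^n.
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        # subtract the components of v along the already-built basis vectors
        for b in basis:
            v = v - (v @ b) / (b @ b) * b
        basis.append(v)
    return basis

u, v = np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])
b1, b2 = gram_schmidt([u, v])
assert np.isclose(b1 @ b2, 0.0)   # the resulting vectors are orthogonal
```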

The solutions to various differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations, and frequently solutions with particular physical properties are used as basis functions, often orthogonal.[82] As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time by means of a partial differential equation, whose solutions are called wavefunctions.[83] Definite values for physical properties such as energy, or momentum, correspond to eigenvalues of a certain (linear) differential operator and the associated wavefunctions are called eigenstates. The spectral theorem decomposes a linear compact operator acting on functions in terms of these eigenfunctions and their eigenvalues.[84]

Algebras over fields

Main articles: Algebra over a field and Lie algebra
A hyperbola, given by the equation $x \cdot y = 1$. The coordinate ring of functions on this hyperbola is given by $\mathbf{R}[x, y]/(x \cdot y - 1)$, an infinite-dimensional vector space over $\mathbf{R}$.

General vector spaces do not possess a multiplication between vectors. A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field (or F-algebra if the field F is specified).[85]

For example, the set of all polynomials $p(t)$ forms an algebra known as the polynomial ring: using that the sum of two polynomials is a polynomial, they form a vector space; they form an algebra since the product of two polynomials is again a polynomial. Rings of polynomials (in several variables) and their quotients form the basis of algebraic geometry, because they are rings of functions of algebraic geometric objects.[86]
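As a sketch of this algebra structure (using numpy's polynomial class; the particular polynomials are arbitrary choices), the product interacts bilinearly with the vector space operations:

```python
# Sketch: polynomials add like vectors and also multiply; the product is
# bilinear with respect to addition and scalar multiplication.
from numpy.polynomial import Polynomial as P

p, q = P([1, 2]), P([0, 1, 3])       # 1 + 2t and t + 3t^2
assert (p + q) * q == p * q + q * q  # distributivity of the product
assert (2 * p) * q == 2 * (p * q)    # scalars move through the product
```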

Another crucial example are Lie algebras, which are neither commutative nor associative, but the failure to be so is limited by the constraints ($[x, y]$ denotes the product of $x$ and $y$): $[x, y] = -[y, x]$ (anticommutativity), and $[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0$ (the Jacobi identity).[87]

Examples include the vector space of $n$-by-$n$ matrices, with $[x, y] = xy - yx$, the commutator of two matrices, and $\mathbf{R}^3$, endowed with the cross product.
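A sketch (numpy; the three matrices are standard generators of the Lie algebra of traceless 2-by-2 matrices, chosen here for illustration) verifying the two constraints for the matrix commutator:

```python
# Sketch: the commutator bracket is anticommutative and satisfies Jacobi.
import numpy as np

def bracket(x, y):
    return x @ y - y @ x   # [x, y] = xy - yx

x = np.array([[0.0, 1.0], [0.0, 0.0]])
y = np.array([[0.0, 0.0], [1.0, 0.0]])
z = np.array([[1.0, 0.0], [0.0, -1.0]])

assert np.allclose(bracket(x, y), -bracket(y, x))   # [x, y] = -[y, x]
jacobi = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
assert np.allclose(jacobi, 0)                       # Jacobi identity
```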

The tensor algebra $\operatorname{T}(V)$ is a formal way of adding products to any vector space $V$ to obtain an algebra.[88] As a vector space, it is spanned by symbols, called simple tensors
$$\mathbf{v}_1 \otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_n,$$
where the degree $n$ varies. The multiplication is given by concatenating such symbols, imposing the distributive law under addition, and requiring that scalar multiplication commute with the tensor product ⊗, much the same way as with the tensor product of two vector spaces introduced in the above section on tensor products. In general, there are no relations between $\mathbf{v}_1 \otimes \mathbf{v}_2$ and $\mathbf{v}_2 \otimes \mathbf{v}_1$. Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing $\mathbf{v}_1 \otimes \mathbf{v}_2 = -\mathbf{v}_2 \otimes \mathbf{v}_1$ yields the exterior algebra.[89]

Related structures


Vector bundles

Main articles: Vector bundle and Tangent bundle
A Möbius strip. Locally, it looks like U × R.

A vector bundle is a family of vector spaces parametrized continuously by a topological space X.[90] More precisely, a vector bundle over X is a topological space E equipped with a continuous map
$$\pi : E \to X$$
such that for every x in X, the fiber π−1(x) is a vector space. The case dim V = 1 is called a line bundle. For any vector space V, the projection X × V → X makes the product X × V into a "trivial" vector bundle. Vector bundles over X are required to be locally a product of X and some (fixed) vector space V: for every x in X, there is a neighborhood U of x such that the restriction of π to π−1(U) is isomorphic[nb 11] to the trivial bundle U × V → U. Despite their locally trivial character, vector bundles may (depending on the shape of the underlying space X) be "twisted" in the large (that is, the bundle need not be (globally isomorphic to) the trivial bundle X × V). For example, the Möbius strip can be seen as a line bundle over the circle S1 (by identifying open intervals with the real line). It is, however, different from the cylinder S1 × R, because the latter is orientable whereas the former is not.[91]

Properties of certain vector bundles provide information about the underlying topological space. For example, the tangent bundle consists of the collection of tangent spaces parametrized by the points of a differentiable manifold. The tangent bundle of the circle S1 is globally isomorphic to S1 × R, since there is a global nonzero vector field on S1.[nb 12] In contrast, by the hairy ball theorem, there is no (tangent) vector field on the 2-sphere S2 which is everywhere nonzero.[92] K-theory studies the isomorphism classes of all vector bundles over some topological space.[93] In addition to deepening topological and geometrical insight, it has purely algebraic consequences, such as the classification of finite-dimensional real division algebras: R, C, the quaternions H, and the octonions O.

The cotangent bundle of a differentiable manifold consists, at every point of the manifold, of the dual of the tangent space, the cotangent space. Sections of that bundle are known as differential one-forms.

Modules

Main article: Module

Modules are to rings what vector spaces are to fields: the same axioms, applied to a ring R instead of a field F, yield modules.[94] The theory of modules, compared to that of vector spaces, is complicated by the presence of ring elements that do not have multiplicative inverses. For example, modules need not have bases, as the Z-module (that is, abelian group) Z/2Z shows; those modules that do (including all vector spaces) are known as free modules. Nevertheless, a vector space can be compactly defined as a module over a ring which is a field, with the elements being called vectors. Some authors use the term vector space to mean modules over a division ring.[95] The algebro-geometric interpretation of commutative rings via their spectrum allows the development of concepts such as locally free modules, the algebraic counterpart to vector bundles.

Affine and projective spaces

Main articles: Affine space and Projective space
An affine plane (light blue) in R3. It is a two-dimensional subspace shifted by a vector x (red).

Roughly, affine spaces are vector spaces whose origins are not specified.[96] More precisely, an affine space is a set with a free transitive vector space action. In particular, a vector space is an affine space over itself, by the map
$$V \times V \to V, \quad (\mathbf{v}, \mathbf{a}) \mapsto \mathbf{a} + \mathbf{v}.$$
If W is a vector space, then an affine subspace is a subset of W obtained by translating a linear subspace V by a fixed vector x ∈ W; this space is denoted by x + V (it is a coset of V in W) and consists of all vectors of the form x + v for v ∈ V. An important example is the space of solutions of a system of inhomogeneous linear equations
$$A\mathbf{v} = \mathbf{b},$$
generalizing the homogeneous case discussed in the above section on linear equations, which can be found by setting $\mathbf{b} = \mathbf{0}$ in this equation.[97] The space of solutions is the affine subspace x + V where x is a particular solution of the equation, and V is the space of solutions of the homogeneous equation (the nullspace of A).
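A numerical sketch of this decomposition (numpy/scipy; the matrix A and right-hand side b are arbitrary choices for the example): a particular solution plus any element of the null space again solves $A\mathbf{v} = \mathbf{b}$:

```python
# Sketch: the solution set of an inhomogeneous system as x + V, with x a
# particular solution and V the null space of A.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])
b = np.array([1.0, 2.0])

x = np.linalg.lstsq(A, b, rcond=None)[0]   # a particular solution of Av = b
V = null_space(A)                          # solutions of the homogeneous system
for t in (-1.0, 0.0, 2.0):
    assert np.allclose(A @ (x + t * V[:, 0]), b)   # every x + v solves Av = b
```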

The set of one-dimensional subspaces of a fixed finite-dimensional vector space V is known as projective space; it may be used to formalize the idea of parallel lines intersecting at infinity.[98] Grassmannians and flag manifolds generalize this by parametrizing linear subspaces of fixed dimension k and flags of subspaces, respectively.

Notes

  1. It is also common, especially in physics, to denote vectors with an arrow on top: $\vec{v}$. It is also common, especially in higher mathematics, to not use any typographical method for distinguishing vectors from other mathematical objects.
  2. Scalar multiplication is not to be confused with the scalar product, which is an additional operation on some specific vector spaces, called inner product spaces. Scalar multiplication is the multiplication of a vector by a scalar that produces a vector, while the scalar product is a multiplication of two vectors that produces a scalar.
  3. This axiom is not an associative property, since it refers to two different operations, scalar multiplication and field multiplication. So, it is independent of the associativity of field multiplication, which is assumed by the field axioms.
  4. This is typically the case when a vector space is also considered as an affine space. In this case, a linear subspace contains the zero vector, while an affine subspace does not necessarily contain it.
  5. Some authors, such as Roman (2005), choose to start with this equivalence relation and derive the concrete shape of $V/W$ from this.
  6. This requirement implies that the topology gives rise to a uniform structure, Bourbaki (1989), ch. II.
  7. The triangle inequality $\|f+g\|_p \leq \|f\|_p + \|g\|_p$ is provided by the Minkowski inequality. For technical reasons, in the context of functions one has to identify functions that agree almost everywhere to get a norm, and not only a seminorm.
  8. "Many functions in $L^2$ of Lebesgue measure, being unbounded, cannot be integrated with the classical Riemann integral. So spaces of Riemann integrable functions would not be complete in the $L^2$ norm, and the orthogonal decomposition would not apply to them. This shows one of the advantages of Lebesgue integration.", Dudley (1989), §5.3, p. 125.
  9. For $p \neq 2$, $L^p(\Omega)$ is not a Hilbert space.
  10. A basis of a Hilbert space is not the same thing as a basis in the sense of linear algebra. For distinction, a linear algebra basis for a Hilbert space is called a Hamel basis.
  11. That is, there is a homeomorphism from π−1(U) to V × U which restricts to linear isomorphisms between fibers.
  12. A line bundle, such as the tangent bundle of S1, is trivial if and only if there is a section that vanishes nowhere, see Husemoller (1994), Corollary 8.3. The sections of the tangent bundle are just vector fields.

Citations

  1. Lang 2002.
  2. Brown 1991, p. 86.
  3. Roman 2005, ch. 1, p. 27.
  4. Brown 1991, p. 87.
  5. Springer 2000, p. 185; Brown 1991, p. 86.
  6. Atiyah & Macdonald 1969, p. 17.
  7. Bourbaki 1998, §1.1, Definition 2.
  8. Brown 1991, p. 94.
  9. Brown 1991, pp. 99–101.
  10. Brown 1991, p. 92.
  11. Stoll & Wong 1968, p. 14.
  12. Roman 2005, pp. 41–42.
  13. Lang 1987, pp. 10–11; Anton & Rorres 2010, p. 212.
  14. Blass 1984.
  15. Joshi 1989, p. 450.
  16. Heil 2011, p. 126.
  17. Halmos 1948, p. 12.
  18. Bourbaki 1969, ch. "Algèbre linéaire et algèbre multilinéaire", pp. 78–91.
  19. Bolzano 1804.
  20. Möbius 1827.
  21. Bellavitis 1833.
  22. Dorier 1995.
  23. Hamilton 1853.
  24. Grassmann 2000.
  25. Peano 1888, ch. IX.
  26. Guo 2021.
  27. Moore 1995, pp. 268–271.
  28. Banach 1922.
  29. Dorier 1995; Moore 1995.
  30. Kreyszig 2020, p. 355.
  31. Kreyszig 2020, pp. 358–359.
  32. Jain 2001, p. 11.
  33. Lang 1987, ch. I.1.
  34. Lang 2002, ch. V.1.
  35. Lang 1993, ch. XII.3., p. 335.
  36. Lang 1987, ch. VI.3.
  37. Roman 2005, ch. 2, p. 45.
  38. Lang 1987, ch. IV.4, Corollary, p. 106.
  39. Nicholson 2018, ch. 7.3.
  40. Lang 1987, Example IV.2.6.
  41. Lang 1987, ch. VI.6.
  42. Halmos 1974, p. 28, Ex. 9.
  43. Lang 1987, Theorem IV.2.1, p. 95.
  44. Roman 2005, Th. 2.5 and 2.6, p. 49.
  45. Lang 1987, ch. V.1.
  46. Lang 1987, ch. V.3., Corollary, p. 106.
  47. Lang 1987, Theorem VII.9.8, p. 198.
  48. Roman 2005, ch. 8, pp. 135–156.
  49. Lang 1987, ch. IX.4.
  50. Roman 2005, ch. 8, p. 140.
  51. Roman 2005, ch. 1, p. 29.
  52. Roman 2005, ch. 1, p. 35.
  53. Nicholson 2018, ch. 10.4.
  54. Roman 2005, ch. 3, p. 64.
  55. Lang 1987, ch. IV.3.
  56. Roman 2005, ch. 2, p. 48.
  57. Nicholson 2018, ch. 7.4.
  58. Mac Lane 1998.
  59. Roman 2005, ch. 1, pp. 31–32.
  60. Lang 2002, ch. XVI.1.
  61. Roman 2005, Th. 14.3. See also Yoneda lemma.
  62. Rudin 1991, p. 3.
  63. Schaefer & Wolff 1999, pp. 204–205.
  64. Bourbaki 2004, ch. 2, p. 48.
  65. Roman 2005, ch. 9.
  66. Naber 2003, ch. 1.2.
  67. Treves 1967; Bourbaki 1987.
  68. Schaefer & Wolff 1999, p. 7.
  69. Kreyszig 1989, §4.11-5.
  70. Kreyszig 1989, §1.5-5.
  71. Choquet 1966, Proposition III.7.2.
  72. Treves 1967, pp. 34–36.
  73. Lang 1983, Cor. 4.1.2, p. 69.
  74. Treves 1967, ch. 11.
  75. Treves 1967, Theorem 11.2, p. 102.
  76. Evans 1998, ch. 5.
  77. Treves 1967, ch. 12.
  78. Dennery & Krzywicki 1996, p. 190.
  79. Lang 1993, Th. XIII.6, p. 349.
  80. Lang 1993, Th. III.1.1.
  81. Choquet 1966, Lemma III.16.11.
  82. Kreyszig 1999, Chapter 11.
  83. Griffiths 1995, Chapter 1.
  84. Lang 1993, ch. XVII.3.
  85. Lang 2002, ch. III.1, p. 121.
  86. Eisenbud 1995, ch. 1.6.
  87. Varadarajan 1974.
  88. Lang 2002, ch. XVI.7.
  89. Lang 2002, ch. XVI.8.
  90. Spivak 1999, ch. 3.
  91. Kreyszig 1991, §34, p. 108.
  92. Eisenberg & Guy 1979.
  93. Atiyah 1989.
  94. Artin 1991, ch. 12.
  95. Grillet 2007.
  96. Meyer 2000, Example 5.13.5, p. 436.
  97. Meyer 2000, Exercise 5.13.15–17, p. 442.
  98. Coxeter 1987.


External links

The Wikibook Linear Algebra has a page on the topic of: Real vector spaces
The Wikibook Linear Algebra has a page on the topic of: Vector spaces
