In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace[1][note 1] is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.
If V is a vector space over a field K, a subset W of V is a linear subspace of V if it is a vector space over K for the operations of V. Equivalently, a linear subspace of V is a nonempty subset W such that, whenever w1, w2 are elements of W and α, β are elements of K, it follows that αw1 + βw2 is in W.[2][3][4][5][6]
The singleton set consisting of the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space.[7]
In the vector space V = R3 (the real coordinate space over the field R of real numbers), take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V.
Proof:
Given u and v in W, they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1 + v1, u2 + v2, 0 + 0) = (u1 + v1, u2 + v2, 0). Thus, u + v is an element of W, too.
Given u in W and a scalar c in R, if u = (u1, u2, 0) again, then cu = (cu1, cu2, c·0) = (cu1, cu2, 0). Thus, cu is an element of W too.
Let the field be R again, but now let the vector space V be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2.
Proof:
Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1 + q1, p2 + q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W.
Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W.
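For readers who like to experiment, the closure conditions in examples I and II can be spot-checked numerically. The following Python sketch is an illustration, not a proof: it tests finitely many random rational samples, and the predicate names in_W1 and in_W2 are chosen here for the exposition.

from fractions import Fraction
import random

def in_W1(v):          # example I: last component is 0
    return v[2] == 0

def in_W2(p):          # example II: points with x = y
    return p[0] == p[1]

def rand_frac():
    return Fraction(random.randint(-9, 9), random.randint(1, 9))

for _ in range(100):
    # example I: u, v in W, c a scalar
    u = (rand_frac(), rand_frac(), Fraction(0))
    v = (rand_frac(), rand_frac(), Fraction(0))
    c = rand_frac()
    assert in_W1(tuple(a + b for a, b in zip(u, v)))   # u + v stays in W
    assert in_W1(tuple(c * a for a in u))              # cu stays in W
    # example II: p, q on the line x = y
    x, y = rand_frac(), rand_frac()
    p, q = (x, x), (y, y)
    assert in_W2((p[0] + q[0], p[1] + q[1]))           # p + q stays in W
    assert in_W2((c * p[0], c * p[1]))                 # cp stays in W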
In general, any subset of the real coordinate space Rn that is defined by a homogeneous system of linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y.)
Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of RR.
Proof:
We know from calculus that 0 ∈ C(R) ⊂ RR.
We know from calculus that the sum of continuous functions is continuous.
Again, we know from calculus that the product of a continuous function and a number is continuous.
Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too.
From the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples.[8] Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. By induction, it suffices to consider linear combinations of two elements at a time.
Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin.
A natural description of a 1-subspace is the set of all scalar multiples of one non-zero vector v. Two 1-subspaces specified by non-zero vectors are equal if and only if one vector can be obtained from the other by scalar multiplication, that is, v′ = cv for some non-zero scalar c.
This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple.
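As a concrete illustration, equality of two 1-subspaces can be tested mechanically: two non-zero vectors span the same line exactly when the matrix having those vectors as rows has rank 1. A minimal sympy sketch (the function name same_line is chosen here):

from sympy import Matrix

def same_line(v, w):
    # non-zero vectors v and w span the same 1-subspace
    # iff one is a scalar multiple of the other, i.e. rank 1
    return Matrix([v, w]).rank() == 1

print(same_line([1, 2, 3], [-2, -4, -6]))   # True: second = -2 * first
print(same_line([1, 2, 3], [1, 2, 4]))      # False: not collinear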
A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Two subspaces of codimension 1 specified by non-zero linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication (in the dual space), that is, F′ = aF for some non-zero scalar a.
This is generalized for higher codimensions with a system of equations. The following two subsections present this latter description in detail, and the remaining four subsections further describe the idea of linear span.
For example, the set of all vectors (x, y, z) (over the real or rational numbers) satisfying a pair of independent homogeneous linear equations is a one-dimensional subspace. More generally, given a set of n independent linear equations, the subspace of Kk they define is the null space of A, the matrix assembled from the n equations, and its dimension is k − n.
In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation Ax = 0.
The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the coefficient matrix of its defining equations.
Every subspace of Kn can be described as the null space of some matrix (see § Algorithms below for more).
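In practice, a basis for a null space can be computed directly; sympy's nullspace method, for instance, does this via row reduction. A small sketch with a made-up coefficient matrix:

from sympy import Matrix

# each row encodes one homogeneous equation (a hypothetical example)
A = Matrix([[1, 3, 2],
            [2, 0, 5]])
for b in A.nullspace():      # basis for {x : A*x = 0}
    print(b.T, (A * b).T)    # A*b is the zero vector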
In linear algebra, a system of parametric equations such as x = 2t1 + 3t2, y = 5t1 − 4t2, z = −t1 + 2t2 can be written as a single vector equation: (x, y, z) = t1(2, 5, −1) + t2(3, −4, 2). The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.
In general, a linear combination of vectors v1, v2, ..., vk is any vector of the form t1v1 + t2v2 + ··· + tkvk, where t1, ..., tk are scalars. The set of all possible linear combinations is called the span: Span{v1, ..., vk} = {t1v1 + ··· + tkvk : t1, ..., tk ∈ K}.
If the vectors v1, ..., vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ..., vk.
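Membership in a span reduces to a rank test: a vector w lies in Span{v1, ..., vk} exactly when appending w as an extra column does not increase the rank. A sympy sketch using the two spanning vectors from the text (the function name in_span is chosen here):

from sympy import Matrix

v1, v2 = Matrix([2, 5, -1]), Matrix([3, -4, 2])
A = Matrix.hstack(v1, v2)        # columns span the subspace

def in_span(w):
    # w is in the span iff the rank is unchanged by appending w
    return Matrix.hstack(A, Matrix(w)).rank() == A.rank()

print(in_span([5, 1, 1]))    # True: this is v1 + v2
print(in_span([1, 0, 0]))    # False: off the plane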
Example
The xz-plane in R3 can be parameterized by the equations x = t1, y = 0, z = t2. As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the xz-plane can be written as a linear combination of these two: (t1, 0, t2) = t1(1, 0, 0) + t2(0, 0, 1).
Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).
A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation x = At; in the example above, A is the matrix with columns (2, 5, −1) and (3, −4, 2). In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A.
The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).
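Both spaces are easy to extract computationally; sympy exposes them directly. A sketch using the matrix whose columns are the two spanning vectors from the parametric example above:

from sympy import Matrix

A = Matrix([[2, 3],
            [5, -4],
            [-1, 2]])    # columns are (2, 5, -1) and (3, -4, 2)
print(A.columnspace())   # basis for the column space (image) of A
print(A.rowspace())      # basis for the row space of A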
(Figure: the vectors u and v form a basis for a two-dimensional subspace of R3.)
In general, a subspace of Kn determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of t1, t2, t3.
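The rank of the matrix formed from the spanning vectors detects such exceptions: it equals the actual dimension of the span. A quick sympy check for the three vectors just mentioned:

from sympy import Matrix

M = Matrix([[1, 0, 0],
            [0, 0, 1],
            [2, 0, 3]])   # rows are the three spanning vectors
print(M.rank())           # 2, not 3: the span is only the xz-plane
print(M.rowspace())       # a basis with just two vectors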
In general, vectors v1, ..., vk are called linearly independent if t1v1 + ··· + tkvk ≠ u1v1 + ··· + ukvk whenever (t1, t2, ..., tk) ≠ (u1, u2, ..., uk).[note 3] If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined.
A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more).
Example
Let S be the subspace of R4 defined by the equations x1 = 2x2 and x3 = 5x4. Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors: (2t, t, 5u, u) = t(2, 1, 0, 0) + u(0, 0, 5, 1).
The subspace S is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1).
In R3, the intersection of two distinct two-dimensional subspaces is one-dimensional.
Given subspaces U and W of a vector space V, their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V.[10]
Proof:
Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, v + w belongs to U. Similarly, since W is a subspace, v + w belongs to W. Thus, v + w belongs to U ∩ W.
Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W. Thus, cv belongs to U ∩ W.
Since U and W are vector spaces, 0 belongs to both sets. Thus, 0 belongs to U ∩ W.
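Computationally, a basis for U ∩ W can be found from spanning vectors: a vector lies in both subspaces exactly when it can be written as Bx = Cy, where the columns of B span U and the columns of C span W. The sympy sketch below (intersection_basis is a name chosen here) solves [B | −C](x; y) = 0 and maps the solutions back through B:

from sympy import Matrix

def intersection_basis(B, C):
    # columns of B span U, columns of C span W;
    # common vectors are B*x with [B | -C] * (x; y) = 0
    k = B.cols
    vecs = [B * n[:k, :] for n in Matrix.hstack(B, -C).nullspace()]
    if not vecs:
        return []
    return Matrix.hstack(*vecs).columnspace()   # prune dependent vectors

U = Matrix([[1, 0], [0, 1], [0, 0]])   # columns span the xy-plane
W = Matrix([[1, 0], [0, 0], [0, 1]])   # columns span the xz-plane
print(intersection_basis(U, W))        # spans the x-axis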
For every vector space V, the set {0} and V itself are subspaces of V.[11][12]
If U and W are subspaces, their sum is the subspace U + W = {u + w : u ∈ U, w ∈ W}.[13][14]
For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality max(dim U, dim W) ≤ dim(U + W) ≤ dim U + dim W.
Here, the minimum is attained only if one subspace is contained in the other, while the maximum is the most general case. The dimensions of the intersection and the sum are related by the equation dim(U + W) = dim U + dim W − dim(U ∩ W).[15]
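The inequality and the dimension formula are easy to check numerically, since each dimension is the rank of a matrix of spanning vectors. A small sketch with two coordinate planes in R3 (an example chosen here):

from sympy import Matrix

U = [[1, 0, 0], [0, 1, 0]]            # spans the xy-plane
W = [[0, 1, 0], [0, 0, 1]]            # spans the yz-plane
dim_U, dim_W = Matrix(U).rank(), Matrix(W).rank()
dim_sum = Matrix(U + W).rank()        # stack all spanning vectors
print(dim_sum)                        # 3: the sum is all of R^3
print(dim_U + dim_W - dim_sum)        # 1: dim(U ∩ W), here the y-axis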
A set of subspaces is independent when each subspace intersects the sum of the others only in the trivial subspace; for two subspaces U and W, this means U ∩ W = {0}. The direct sum is the sum of independent subspaces, written as U ⊕ W. An equivalent restatement is that every element of a direct sum has a unique expression as a sum of one element from each subspace.[16][17][18][19]
The dimension of a direct sum U ⊕ W equals the sum of the dimensions of the subspaces; this follows from the dimension formula above, because the dimension of the trivial intersection is zero.[20]
If V is an inner product space and N is a subset of V, then the orthogonal complement of N, denoted N⊥, is again a subspace.[21] If V is finite-dimensional and N is a subspace, then the dimensions of N and N⊥ satisfy the complementary relationship dim N + dim N⊥ = dim V.[22] Moreover, no non-zero vector is orthogonal to itself, so N ∩ N⊥ = {0} and V is the direct sum of N and N⊥.[23] Applying orthogonal complements twice returns the original subspace: (N⊥)⊥ = N for every subspace N.[24]
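Over Rn with the standard inner product, the orthogonal complement is the null space of the transpose of any matrix whose columns span the subspace, since x is orthogonal to every column of B exactly when BTx = 0. A sketch with a made-up subspace of R3:

from sympy import Matrix

B = Matrix([[1, 0],
            [0, 1],
            [1, 1]])               # columns span a subspace N of R^3
perp = (B.T).nullspace()           # basis of N-perp: solutions of B^T x = 0
print([p.T for p in perp])         # here: the line through (-1, -1, 1)
print(B.rank() + len(perp))        # 3 = dim N + dim N-perp = dim R^3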
This operation, understood as negation (N ↦ N⊥), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice).[citation needed]
In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N ∩ N⊥ ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra).[citation needed]
If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal.
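A direct implementation of this equality test compares the nonzero rows of the two reduced row echelon forms. A sympy sketch (same_row_space is a name chosen here):

from sympy import Matrix

def same_row_space(A, B):
    # equal row spaces iff the RREFs agree after dropping zero rows
    def nonzero_rows(M):
        R, _ = Matrix(M).rref()
        return [R.row(i) for i in range(R.rows) if any(R.row(i))]
    return nonzero_rows(A) == nonzero_rows(B)

print(same_row_space([[1, 0], [0, 1]], [[2, 1], [1, 1]]))   # True: both span R^2
print(same_row_space([[1, 0]], [[0, 1]]))                   # False: different lines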
This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
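In sympy this is the pivot-column rule: row-reduce, then keep the columns of the original matrix at the pivot positions. A sketch reusing the three vectors (1, 0, 0), (0, 0, 1), (2, 0, 3) from the earlier example, taken as columns:

from sympy import Matrix

A = Matrix([[1, 0, 2],
            [0, 0, 0],
            [0, 1, 3]])              # columns are the original spanning vectors
R, pivots = A.rref()
basis = [A.col(j) for j in pivots]   # pivot columns of the ORIGINAL matrix
print(pivots)                        # (0, 1): the first two columns suffice
print(basis)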
Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v ∈ S
Output Numbers t1, t2, ..., tk such that v = t1b1 + ··· + tkbk
Create an augmented matrix A whose columns are b1, ..., bk, with the last column being v.
Use elementary row operations to put A into reduced row echelon form.
Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.)
If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
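The whole procedure, including the failure case, fits in a few lines of sympy (coordinates is a name chosen here), checked against the basis from the earlier example:

from sympy import Matrix

def coordinates(basis, v):
    # augmented matrix: basis vectors as columns, v as the last column
    A = Matrix.hstack(*[Matrix(b) for b in basis], Matrix(v))
    R, pivots = A.rref()
    k = len(basis)
    if k in pivots:                      # pivot in the final column: v not in S
        return None
    return [R[i, k] for i in range(k)]   # first k entries of the last column

b1, b2 = [2, 1, 0, 0], [0, 0, 5, 1]
print(coordinates([b1, b2], [4, 2, 5, 1]))   # [2, 1]: v = 2*b1 + 1*b2
print(coordinates([b1, b2], [1, 0, 0, 0]))   # None: v does not lie in S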
Use elementary row operations to put A into reduced row echelon form.
Using the reduced row echelon form, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables.
For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A.
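The free-variable construction can be written out explicitly; the sketch below (nullspace_basis is a name chosen here) reproduces what sympy's built-in A.nullspace() computes:

from sympy import Matrix, zeros

def nullspace_basis(A):
    R, pivots = A.rref()
    free = [j for j in range(A.cols) if j not in pivots]
    basis = []
    for f in free:
        x = zeros(A.cols, 1)
        x[f] = 1                        # this free variable set to 1
        for row, p in enumerate(pivots):
            x[p] = -R[row, f]           # dependent variables read off the RREF
        basis.append(x)
    return basis

A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4]])              # a made-up example
for b in nullspace_basis(A):
    print(b.T, (A * b).T)               # A*b is the zero vector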
Input A basis {b1, b2, ..., bk} for a subspace S of Kn
Output An (n − k) × n matrix whose null space is S.
Create a matrix A whose rows are b1, b2, ..., bk.
Use elementary row operations to put A into reduced row echelon form.
Let c1, c2, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
This results in a homogeneous system of n − k linear equations involving the variables c1, ..., cn. The (n − k) × n matrix corresponding to this system is the desired matrix with null space S.
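An equivalent route to such a matrix (over the rationals, say): its rows must span the space of linear equations vanishing on S, which is the null space of the matrix A whose rows are b1, ..., bk. A sympy sketch (equations_for is a name chosen here), checked against the basis from the earlier example:

from sympy import Matrix

def equations_for(basis):
    # rows of the result span the annihilator of S = span(basis),
    # computed as the null space of the matrix with the basis as rows
    A = Matrix(basis)
    return Matrix.vstack(*[v.T for v in A.nullspace()])

M = equations_for([[2, 1, 0, 0], [0, 0, 5, 1]])
print(M)                            # an (n - k) x n = 2 x 4 matrix
print((M * Matrix([2, 1, 0, 0])).T)   # zero vector: b1 is in the null space
print((M * Matrix([0, 0, 5, 1])).T)   # zero vector: b2 is in the null space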
Example
If the reduced row echelon form of A is
then the column vectors c1, ..., c6 satisfy the equations
It follows that the row vectors of A satisfy the equations
In particular, the row vectors of A are a basis for the null space of the corresponding matrix.
^ The term linear subspace is sometimes used for referring to flats and affine subspaces. In the case of vector spaces over the reals, linear subspaces, flats, and affine subspaces are also called linear manifolds for emphasizing that they are also manifolds.
^ Generally, K can be any field of such characteristic that the given integer matrix has the appropriate rank over it. Every field contains an image of the integers, but some integers may equal zero in fields of positive characteristic.
^ This definition is often stated differently: vectors v1, ..., vk are linearly independent if t1v1 + ··· + tkvk ≠ 0 for (t1, t2, ..., tk) ≠ (0, 0, ..., 0). The two definitions are equivalent.