Linear subspace

From Wikipedia, the free encyclopedia

In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace[1][note 1] is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.

Definition


If V is a vector space over a field K, a subset W of V is a linear subspace of V if it is a vector space over K for the operations of V. Equivalently, a linear subspace of V is a nonempty subset W such that, whenever w1, w2 are elements of W and α, β are elements of K, it follows that αw1 + βw2 is in W.[2][3][4][5][6]

The singleton set consisting of the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space.[7]

Examples


Example I


In the vector space V = R3 (the real coordinate space over the field R of real numbers), take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V.

Proof:

  1. Given u and v in W, they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1 + v1, u2 + v2, 0 + 0) = (u1 + v1, u2 + v2, 0). Thus, u + v is an element of W, too.
  2. Given u in W and a scalar c in R, if u = (u1, u2, 0) again, then cu = (cu1, cu2, c·0) = (cu1, cu2, 0). Thus, cu is an element of W too.
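The two closure steps above can be mirrored in a few lines of code. This is an illustrative sketch (the helper names `in_W`, `add`, and `scale` are ad-hoc choices for this example, not standard library functions):

```python
# W = the set of vectors in R^3 whose last component is 0 (Example I)
def in_W(v):
    return v[2] == 0

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    return tuple(c * a for a in u)

u, v = (1.0, 2.0, 0.0), (-3.0, 0.5, 0.0)
assert in_W(add(u, v))      # step 1: closed under vector addition
assert in_W(scale(7.0, u))  # step 2: closed under scalar multiplication
```

Of course, sample checks like these only illustrate the proof; the algebraic argument above covers all vectors at once.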

Example II


Let the field be R again, but now let the vector space V be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2.

Proof:

  1. Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1 + q1, p2 + q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W.
  2. Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W.

In general, any subset of the real coordinate space Rn that is defined by a homogeneous system of linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y.)

Example III


Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of RR.

Proof:

  1. We know from calculus that 0 ∈ C(R) ⊂ RR.
  2. We know from calculus that the sum of continuous functions is continuous.
  3. Again, we know from calculus that the product of a continuous function and a number is continuous.

Example IV


Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too.

Examples that extend these themes are common in functional analysis.

Properties of subspaces


From the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples.[8] Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. In fact, it suffices to consider linear combinations of two elements at a time.

In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed.[9] The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals).

Descriptions


Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin.

A natural description of a 1-subspace is the scalar multiplication of one non-zero vector v by all possible scalar values. Two 1-subspaces specified by non-zero vectors are equal if and only if one vector can be obtained from the other by scalar multiplication:

{\displaystyle \exists c\in K:\mathbf {v} '=c\mathbf {v} {\text{ (or }}\mathbf {v} ={\frac {1}{c}}\mathbf {v} '{\text{)}}}

This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple.

A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication (in the dual space):

{\displaystyle \exists c\in K:\mathbf {F} '=c\mathbf {F} {\text{ (or }}\mathbf {F} ={\frac {1}{c}}\mathbf {F} '{\text{)}}}

It is generalized for higher codimensions with a system of equations. The following two subsections present this latter description in detail, and the remaining four subsections further describe the idea of linear span.

Systems of linear equations


The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space Kn:

{\displaystyle \left\{\left[\!\!{\begin{array}{c}x_{1}\\x_{2}\\\vdots \\x_{n}\end{array}}\!\!\right]\in K^{n}:{\begin{alignedat}{6}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=0&\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=0&\\&&&&&&&&&&\vdots \quad &\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=0&\end{alignedat}}\right\}.}

For example, the set of all vectors (x, y, z) (over the real or rational numbers) satisfying the equations

{\displaystyle x+3y+2z=0\quad {\text{and}}\quad 2x-4y+5z=0}

is a one-dimensional subspace. More generally, if the m equations are independent, the solution set of m homogeneous linear equations in n variables is an (n − m)-dimensional subspace of Kn; equivalently, it is the null space of the m × n coefficient matrix A.
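The dimension count can be checked computationally: the solution space has dimension n − rank(A). A sketch in exact rational arithmetic (the `rank` helper is ad-hoc, not a library routine):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination in exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                          # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 3, 2], [2, -4, 5]]   # coefficients of the two equations
assert rank(A) == 2           # the two equations are independent
assert 3 - rank(A) == 1       # so the solution subspace is one-dimensional
```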

Null space of a matrix

Main article: Null space

In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation:

{\displaystyle A\mathbf {x} =\mathbf {0} .}

The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix

{\displaystyle A={\begin{bmatrix}1&3&2\\2&-4&5\end{bmatrix}}.}

Every subspace of Kn can be described as the null space of some matrix (see § Algorithms below for more).

Linear parametric equations


The subset of Kn described by a system of homogeneous linear parametric equations is a subspace:

{\displaystyle \left\{\left[\!\!{\begin{array}{c}x_{1}\\x_{2}\\\vdots \\x_{n}\end{array}}\!\!\right]\in K^{n}:{\begin{alignedat}{7}x_{1}&&\;=\;&&a_{11}t_{1}&&\;+\;&&a_{12}t_{2}&&\;+\cdots +\;&&a_{1m}t_{m}&\\x_{2}&&\;=\;&&a_{21}t_{1}&&\;+\;&&a_{22}t_{2}&&\;+\cdots +\;&&a_{2m}t_{m}&\\&&\vdots \;\;&&&&&&&&&&&\\x_{n}&&\;=\;&&a_{n1}t_{1}&&\;+\;&&a_{n2}t_{2}&&\;+\cdots +\;&&a_{nm}t_{m}&\\\end{alignedat}}{\text{ for some }}t_{1},\ldots ,t_{m}\in K\right\}.}

For example, the set of all vectors (x, y, z) parameterized by the equations

{\displaystyle x=2t_{1}+3t_{2},\;\;\;\;y=5t_{1}-4t_{2},\;\;\;\;{\text{and}}\;\;\;\;z=-t_{1}+2t_{2}}

is a two-dimensional subspace of K3, if K is a number field (such as the real or rational numbers).[note 2]
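One way to confirm that this subspace is two-dimensional is to check that the two coefficient vectors (2, 5, −1) and (3, −4, 2) are linearly independent, for example by computing the rank of the matrix having them as rows (a sketch; `rank` is an ad-hoc helper using exact arithmetic):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination in exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# the two direction vectors of the parameterization are independent,
# so the parameterized set is a two-dimensional subspace
assert rank([[2, 5, -1], [3, -4, 2]]) == 2
```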

Span of vectors

Main article: Linear span

In linear algebra, the system of parametric equations can be written as a single vector equation:

{\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}\;=\;t_{1}\!{\begin{bmatrix}2\\5\\-1\end{bmatrix}}+t_{2}\!{\begin{bmatrix}3\\-4\\2\end{bmatrix}}.}

The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.

In general, a linear combination of vectors v1, v2, ..., vk is any vector of the form

{\displaystyle t_{1}\mathbf {v} _{1}+\cdots +t_{k}\mathbf {v} _{k}.}

The set of all possible linear combinations is called the span:

{\displaystyle {\text{Span}}\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}\}=\left\{t_{1}\mathbf {v} _{1}+\cdots +t_{k}\mathbf {v} _{k}:t_{1},\ldots ,t_{k}\in K\right\}.}

If the vectors v1, ..., vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ..., vk.

Example
The xz-plane in R3 can be parameterized by the equations
{\displaystyle x=t_{1},\;\;\;y=0,\;\;\;z=t_{2}.}
As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the xz-plane can be written as a linear combination of these two:
{\displaystyle (t_{1},0,t_{2})=t_{1}(1,0,0)+t_{2}(0,0,1){\text{.}}}
Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).

Column space and row space

Main article: Row and column spaces

A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation:

{\displaystyle \mathbf {x} =A\mathbf {t} \;\;\;\;{\text{where}}\;\;\;\;A=\left[{\begin{alignedat}{2}2&&3&\\5&&\;\;-4&\\-1&&2&\end{alignedat}}\,\right]{\text{.}}}

In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A.

The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).

Independence, basis, and dimension

Main articles: Linear independence, Basis (linear algebra), and Dimension (vector space)
Figure: The vectors u and v are a basis for a two-dimensional subspace of R3.

In general, a subspace of Kn determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of t1, t2, t3.

In general, vectors v1, ..., vk are called linearly independent if

{\displaystyle t_{1}\mathbf {v} _{1}+\cdots +t_{k}\mathbf {v} _{k}\;\neq \;u_{1}\mathbf {v} _{1}+\cdots +u_{k}\mathbf {v} _{k}}

for (t1, t2, ..., tk) ≠ (u1, u2, ..., uk).[note 3] If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined.
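Linear independence can be tested mechanically: k vectors are independent exactly when the matrix having them as rows has rank k. A sketch, using the xz-plane example above (`rank` is an ad-hoc helper in exact arithmetic):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination in exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

assert rank([[1, 0, 0], [0, 0, 1]]) == 2              # independent
# (2, 0, 3) = 2*(1, 0, 0) + 3*(0, 0, 1): the third vector is redundant
assert rank([[1, 0, 0], [0, 0, 1], [2, 0, 3]]) == 2   # dependent, rank < 3
```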

A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more).

Example
Let S be the subspace of R4 defined by the equations
{\displaystyle x_{1}=2x_{2}\;\;\;\;{\text{and}}\;\;\;\;x_{3}=5x_{4}.}
Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors:
{\displaystyle (2t_{1},t_{1},5t_{2},t_{2})=t_{1}(2,1,0,0)+t_{2}(0,0,5,1).}
The subspace S is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1).

Operations and relations on subspaces


Inclusion


The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension).

A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊂ W, then dim W = k if and only if U = W.

Intersection

Figure: In R3, the intersection of two distinct two-dimensional subspaces is one-dimensional.

Given subspaces U and W of a vector space V, their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V.[10]

Proof:

  1. Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, v + w belongs to U. Similarly, since W is a subspace, v + w belongs to W. Thus, v + w belongs to U ∩ W.
  2. Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W, and hence to U ∩ W.
  3. Since U and W are vector spaces, 0 belongs to both sets. Thus, 0 belongs to U ∩ W.

For every vector space V, the set {0} and V itself are subspaces of V.[11][12]

Sum


If U and W are subspaces, their sum is the subspace[13][14]

{\displaystyle U+W=\left\{\mathbf {u} +\mathbf {w} \colon \mathbf {u} \in U,\mathbf {w} \in W\right\}.}

For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality

{\displaystyle \max(\dim U,\dim W)\leq \dim(U+W)\leq \dim(U)+\dim(W).}

Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimensions of the intersection and the sum are related by the following equation:[15]

{\displaystyle \dim(U+W)=\dim(U)+\dim(W)-\dim(U\cap W).}
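The inequality and the dimension formula can be illustrated with two coordinate planes in R3 (a sketch; dimensions are computed as ranks of stacked basis vectors, and `rank` is an ad-hoc helper in exact arithmetic):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination in exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

U = [[1, 0, 0], [0, 1, 0]]   # a basis for the xy-plane
W = [[0, 1, 0], [0, 0, 1]]   # a basis for the yz-plane
# stacking the two bases spans U + W
dim_U, dim_W, dim_sum = rank(U), rank(W), rank(U + W)
assert max(dim_U, dim_W) <= dim_sum <= dim_U + dim_W
# the formula gives dim(U ∩ W) = 2 + 2 - 3 = 1: the intersection is the y-axis
assert dim_U + dim_W - dim_sum == 1
```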

A set of subspaces is independent when the only intersection between any pair of subspaces is the trivial subspace. The direct sum is the sum of independent subspaces, written as U ⊕ W. Equivalently, a sum is a direct sum when every vector in it can be decomposed in exactly one way into a sum of vectors, one from each subspace.[16][17][18][19]

The dimension of a direct sum U ⊕ W is given by the same formula as for the sum of subspaces, but the formula simplifies because the dimension of the trivial intersection is zero:[20]

{\displaystyle \dim(U\oplus W)=\dim(U)+\dim(W)}

Lattice of subspaces


The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the entire space V, the greatest element, is an identity element of the intersection operation.

Orthogonal complements


If V is an inner product space and N is a subset of V, then the orthogonal complement of N, denoted N⊥, is again a subspace.[21] If V is finite-dimensional and N is a subspace, then the dimensions of N and N⊥ satisfy the complementary relationship dim(N) + dim(N⊥) = dim(V).[22] Moreover, no vector is orthogonal to itself, so N ∩ N⊥ = {0} and V is the direct sum of N and N⊥.[23] Applying orthogonal complements twice returns the original subspace: (N⊥)⊥ = N for every subspace N.[24]

This operation, understood as negation (¬), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice).[citation needed]

In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N ∩ N⊥ ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra).[citation needed]

Algorithms


Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:

  1. The reduced matrix has the same null space as the original.
  2. Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
  3. Row reduction does not affect the linear dependence of the column vectors.
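A minimal reduced-row-echelon routine illustrating the elementary operations (a sketch in exact rational arithmetic, so no floating-point issues arise; production numerical software uses pivoting strategies instead):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via elementary row operations.
    These operations preserve both the row space and the null space."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                          # no pivot in this column
        m[r], m[piv] = m[piv], m[r]           # swap rows
        m[r] = [x / m[r][c] for x in m[r]]    # scale pivot row to leading 1
        for i in range(len(m)):
            if i != r:                        # eliminate column c elsewhere
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

A = [[1, 3, 2], [2, -4, 5]]
R = rref(A)
assert R == [[1, 0, Fraction(23, 10)], [0, 1, Fraction(-1, 10)]]
```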

Basis for a row space

Input: An m × n matrix A.
Output: A basis for the row space of A.
  1. Use elementary row operations to put A into row echelon form.
  2. The nonzero rows of the echelon form are a basis for the row space of A.

See the article on row space for an example.

If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal.

Subspace membership

Input: A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v with n components.
Output: Determines whether v is an element of S.
  1. Create a (k + 1) × n matrix A whose rows are the vectors b1, ..., bk and v.
  2. Use elementary row operations to put A into row echelon form.
  3. If the echelon form has a row of zeroes, then the vectors {b1, ..., bk, v} are linearly dependent, and therefore v ∈ S; otherwise, v ∉ S.
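The test amounts to checking whether appending v raises the rank: a zero row in the echelon form of the (k + 1) × n matrix means it does not. A sketch (ad-hoc helpers, exact arithmetic):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination in exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(basis, v):
    """v lies in span(basis) iff appending v does not raise the rank."""
    return rank(basis + [v]) == rank(basis)

basis = [[1, 0, 0], [0, 0, 1]]        # a basis for the xz-plane in R^3
assert in_span(basis, [3, 0, -2])     # echelon form gains a zero row: v in S
assert not in_span(basis, [1, 1, 0])  # no zero row: v not in S
```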

Basis for a column space

Input: An m × n matrix A.
Output: A basis for the column space of A.
  1. Use elementary row operations to put A into row echelon form.
  2. Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space.

See the article on column space for an example.

This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
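The pivot-column selection can be sketched as follows (the `rref` and `pivot_columns` helpers are ad-hoc, in exact arithmetic):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via elementary row operations."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def pivot_columns(rows):
    """Indices of the columns that contain a pivot after row reduction."""
    pivots = []
    for row in rref(rows):
        for j, x in enumerate(row):
            if x != 0:
                pivots.append(j)
                break
    return pivots

# column 2 equals 3 * column 1, so only columns 1 and 3 carry pivots
A = [[1, 3, 2],
     [2, 6, 5],
     [0, 0, 1]]
assert pivot_columns(A) == [0, 2]   # columns 1 and 3 of A form a basis
```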

Coordinates for a vector

Input: A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v ∈ S.
Output: Numbers t1, t2, ..., tk such that v = t1b1 + ··· + tkbk.
  1. Create an augmented matrix A whose columns are b1, ..., bk, with the last column being v.
  2. Use elementary row operations to put A into reduced row echelon form.
  3. Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.)

If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
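The coordinate computation can be sketched as follows, using the basis (2, 1, 0, 0), (0, 0, 5, 1) from the earlier example (ad-hoc helpers, exact arithmetic):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via elementary row operations."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def coordinates(basis, v):
    """First k entries of the last column of the RREF of the augmented
    matrix whose columns are the basis vectors followed by v
    (v is assumed to lie in the span of the basis)."""
    k, n = len(basis), len(basis[0])
    aug = [[basis[j][i] for j in range(k)] + [v[i]] for i in range(n)]
    R = rref(aug)
    return [R[i][k] for i in range(k)]

b1, b2 = [2, 1, 0, 0], [0, 0, 5, 1]
assert coordinates([b1, b2], [4, 2, 5, 1]) == [2, 1]   # v = 2*b1 + 1*b2
```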

Basis for a null space

Input: An m × n matrix A.
Output: A basis for the null space of A.
  1. Use elementary row operations to put A in reduced row echelon form.
  2. Using the reduced row echelon form, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables.
  3. For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A.

See the article on null space for an example.
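A sketch of this procedure, setting each free variable to 1 in turn (the helpers are ad-hoc, in exact arithmetic), applied to the matrix A = [[1, 3, 2], [2, −4, 5]] from the earlier example:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via elementary row operations."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def null_space_basis(rows):
    """One basis vector per free variable: set that free variable to 1,
    the other free variables to 0, and solve for the pivot variables."""
    R = rref(rows)
    ncols = len(rows[0])
    pivots = {}                       # pivot column -> row containing it
    for ri, row in enumerate(R):
        for j, x in enumerate(row):
            if x != 0:
                pivots[j] = ri
                break
    free = [j for j in range(ncols) if j not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * ncols
        v[f] = Fraction(1)
        for j, ri in pivots.items():
            v[j] = -R[ri][f]          # pivot variable expressed via the free one
        basis.append(v)
    return basis

A = [[1, 3, 2], [2, -4, 5]]
(v,) = null_space_basis(A)            # one free variable, so one basis vector
assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```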

Basis for the sum and intersection of two subspaces


Given two subspaces U and W of V, a basis of the sum U + W and the intersection U ∩ W can be calculated using the Zassenhaus algorithm.

Equations for a subspace

Input: A basis {b1, b2, ..., bk} for a subspace S of Kn.
Output: An (n − k) × n matrix whose null space is S.
  1. Create a matrix A whose rows are b1, b2, ..., bk.
  2. Use elementary row operations to put A into reduced row echelon form.
  3. Let c1, c2, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
  4. This results in a homogeneous system of n − k linear equations involving the variables c1, ..., cn. The (n − k) × n matrix corresponding to this system is the desired matrix with null space S.
Example
If the reduced row echelon form of A is
{\displaystyle \left[{\begin{alignedat}{6}1&&0&&-3&&0&&2&&0\\0&&1&&5&&0&&-1&&4\\0&&0&&0&&1&&7&&-9\\0&&\;\;\;\;\;0&&\;\;\;\;\;0&&\;\;\;\;\;0&&\;\;\;\;\;0&&\;\;\;\;\;0\end{alignedat}}\,\right]}
then the column vectors c1, ..., c6 satisfy the equations
{\displaystyle {\begin{alignedat}{1}\mathbf {c} _{3}&=-3\mathbf {c} _{1}+5\mathbf {c} _{2}\\\mathbf {c} _{5}&=2\mathbf {c} _{1}-\mathbf {c} _{2}+7\mathbf {c} _{4}\\\mathbf {c} _{6}&=4\mathbf {c} _{2}-9\mathbf {c} _{4}\end{alignedat}}}
It follows that the row vectors of A satisfy the equations
{\displaystyle {\begin{alignedat}{1}x_{3}&=-3x_{1}+5x_{2}\\x_{5}&=2x_{1}-x_{2}+7x_{4}\\x_{6}&=4x_{2}-9x_{4}.\end{alignedat}}}
In particular, the row vectors of A are a basis for the null space of the corresponding matrix.
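The three column relations can be verified directly from the displayed reduced row echelon form (a quick arithmetic sanity check; `col` and `comb` are ad-hoc helpers):

```python
# the reduced row echelon form given in the example above
R = [[1, 0, -3, 0, 2, 0],
     [0, 1, 5, 0, -1, 4],
     [0, 0, 0, 1, 7, -9],
     [0, 0, 0, 0, 0, 0]]

def col(j):
    return [row[j] for row in R]

def comb(*terms):
    """Linear combination of (coefficient, column) pairs."""
    return [sum(c * v[i] for c, v in terms) for i in range(len(R))]

assert col(2) == comb((-3, col(0)), (5, col(1)))               # c3 = -3c1 + 5c2
assert col(4) == comb((2, col(0)), (-1, col(1)), (7, col(3)))  # c5 = 2c1 - c2 + 7c4
assert col(5) == comb((4, col(1)), (-9, col(3)))               # c6 = 4c2 - 9c4
```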


Notes

  1. ^ The term linear subspace is sometimes used for referring to flats and affine subspaces. In the case of vector spaces over the reals, linear subspaces, flats, and affine subspaces are also called linear manifolds for emphasizing that they are also manifolds.
  2. ^ Generally, K can be any field of such characteristic that the given integer matrix has the appropriate rank over it. All fields contain the integers, but in fields of positive characteristic some integers may equal zero.
  3. ^ This definition is often stated differently: vectors v1, ..., vk are linearly independent if t1v1 + ··· + tkvk ≠ 0 for (t1, t2, ..., tk) ≠ (0, 0, ..., 0). The two definitions are equivalent.

Citations

  1. ^Halmos (1974) pp. 16–17, § 10
  2. ^Anton (2005, p. 155)
  3. ^Beauregard & Fraleigh (1973, p. 176)
  4. ^Herstein (1964, p. 132)
  5. ^Kreyszig (1972, p. 200)
  6. ^Nering (1970, p. 20)
  7. ^Hefferon (2020) p. 100, ch. 2, Definition 2.13
  8. ^MathWorld (2021) Subspace.
  9. ^DuChateau (2002) Basic facts about Hilbert Space — class notes from Colorado State University on Partial Differential Equations (M645).
  10. ^Nering (1970, p. 21)
  11. ^Hefferon (2020) p. 100, ch. 2, Definition 2.13
  12. ^Nering (1970, p. 20)
  13. ^Nering (1970, p. 21)
  14. ^Vector space related operators.
  15. ^Nering (1970, p. 22)
  16. ^Hefferon (2020) p. 148, ch. 2, §4.10
  17. ^Axler (2015) p. 21 § 1.40
  18. ^Katznelson & Katznelson (2008) pp. 10–11, § 1.2.5
  19. ^Halmos (1974) pp. 28–29, § 18
  20. ^Halmos (1974) pp. 30–31, § 19
  21. ^Axler (2015) p. 193, § 6.46
  22. ^Axler (2015) p. 195, § 6.50
  23. ^Axler (2015) p. 194, § 6.47
  24. ^Axler (2015) p. 195, § 6.51

