Linear independence

From Wikipedia, the free encyclopedia
Vectors whose nontrivial linear combinations are nonzero
This article is about the mathematical concept. For the statistical concept, see Independence (statistics) and Covariance.
Figure: Linearly independent vectors in $\mathbb{R}^3$.
Figure: Linearly dependent vectors in a plane in $\mathbb{R}^3$.

In linear algebra, a set of vectors is said to be linearly independent if no vector in the set is equal to a linear combination of the other vectors in the set. If such a vector exists, then the vectors are said to be linearly dependent. Linear independence is part of the definition of a linear basis.[1]

A vector space is of finite or infinite dimension depending on the maximum number of linearly independent vectors it contains. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space.

Definition

A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ from a vector space $V$ is said to be linearly dependent if there exist scalars $a_1, a_2, \dots, a_k$, not all zero, such that

$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k = \mathbf{0},$

where $\mathbf{0}$ denotes the zero vector.

If $k = 1$, this implies that a single vector is linearly dependent if and only if it is the zero vector.

If $k > 1$, this implies that at least one of the scalars is nonzero, say $a_1 \neq 0$, and the above equation can be written as

$\mathbf{v}_1 = \frac{-a_2}{a_1}\mathbf{v}_2 + \cdots + \frac{-a_k}{a_1}\mathbf{v}_k.$

Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others.

A sequence of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ is said to be linearly independent if it is not linearly dependent, that is, if the equation

$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_n\mathbf{v}_n = \mathbf{0}$

can only be satisfied by $a_i = 0$ for $i = 1, \dots, n$. This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of $\mathbf{0}$ as a linear combination of its vectors is the trivial representation in which all the scalars $a_i$ are zero.[2] Even more concisely, a sequence of vectors is linearly independent if and only if $\mathbf{0}$ can be represented as a linear combination of its vectors in a unique way.
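
In computational practice this criterion is often checked via matrix rank: the vectors are independent exactly when the matrix whose columns are the vectors has rank equal to the number of vectors. A minimal numerical sketch (an illustration added here, not part of the formal definition; NumPy is assumed, and the function name and tolerance are arbitrary choices):

```python
import numpy as np

def linearly_independent(vectors, tol=1e-10):
    """Return True if the given vectors are linearly independent.

    The vectors are independent exactly when the matrix having them as
    columns has rank equal to the number of vectors, i.e. when
    a_1 v_1 + ... + a_n v_n = 0 forces every a_i = 0.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A, tol=tol) == len(vectors)

print(linearly_independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
print(linearly_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False: (2,4) = 2*(1,2)
```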

If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependency of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: a finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result, which is often useful.

A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent.

Infinite case

An infinite set of vectors is linearly independent if every finite subset is linearly independent. This definition also applies to finite sets of vectors, since a finite set is a finite subset of itself, and every subset of a linearly independent set is also linearly independent.

Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set.

An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be linearly dependent.

A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in $x$ over the reals has the (infinite) subset $\{1, x, x^2, \ldots\}$ as a basis.

Definition via span

Let $V$ be a vector space. A set $X \subseteq V$ is linearly independent if and only if $X$ is a minimal element of

$\{Y \subseteq V \mid X \subseteq \operatorname{Span}(Y)\}$

under the inclusion order. In contrast, $X$ is linearly dependent if it has a proper subset whose span is a superset of $X$.

Geometric examples

Geographic location

A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true, but it is not necessary to find the location.

In this example the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent, that is, one of the three vectors is unnecessary to define a specific location on a plane.

Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general,n linearly independent vectors are required to describe all locations inn-dimensional space.
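
As a numerical illustration of this example (a sketch added here, assuming NumPy and (east, north) coordinates), the three direction vectors span only two dimensions:

```python
import numpy as np

# "3 miles north", "4 miles east", "5 miles northeast" as (east, north) pairs.
north = np.array([0.0, 3.0])
east = np.array([4.0, 0.0])
northeast = north + east  # (4, 3), whose length is 5 miles

A = np.column_stack([north, east, northeast])
print(np.linalg.matrix_rank(A))  # 2: three vectors, but only two independent directions
```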

Evaluating linear independence

The zero vector

If one or more vectors from a given sequence of vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ is the zero vector $\mathbf{0}$ then the vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that $i$ is an index (i.e. an element of $\{1, \ldots, k\}$) such that $\mathbf{v}_i = \mathbf{0}$. Then let $a_i := 1$ (letting $a_i$ be equal to any other non-zero scalar will also work) and let all other scalars be $0$; explicitly, for any index $j \neq i$, let $a_j := 0$, so that $a_j \mathbf{v}_j = 0\mathbf{v}_j = \mathbf{0}$. Simplifying $a_1\mathbf{v}_1 + \cdots + a_k\mathbf{v}_k$ gives:

$a_1\mathbf{v}_1 + \cdots + a_k\mathbf{v}_k = \mathbf{0} + \cdots + \mathbf{0} + a_i\mathbf{v}_i + \mathbf{0} + \cdots + \mathbf{0} = a_i\mathbf{v}_i = a_i\mathbf{0} = \mathbf{0}.$

Because not all scalars are zero (in particular, $a_i \neq 0$), this proves that the vectors $\mathbf{v}_1, \dots, \mathbf{v}_k$ are linearly dependent.

As a consequence, the zero vector cannot possibly belong to any collection of vectors that is linearly independent.

Now consider the special case where the sequence $\mathbf{v}_1, \dots, \mathbf{v}_k$ has length $1$ (i.e. the case where $k = 1$). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if $\mathbf{v}_1$ is any vector then the sequence $\mathbf{v}_1$ (which is a sequence of length $1$) is linearly dependent if and only if $\mathbf{v}_1 = \mathbf{0}$; alternatively, the collection $\mathbf{v}_1$ is linearly independent if and only if $\mathbf{v}_1 \neq \mathbf{0}$.

Linear dependence and independence of two vectors

This example considers the special case where there are exactly two vectors $\mathbf{u}$ and $\mathbf{v}$ from some real or complex vector space. The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent if and only if at least one of the following is true:

  1. $\mathbf{u}$ is a scalar multiple of $\mathbf{v}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{u} = c\mathbf{v}$), or
  2. $\mathbf{v}$ is a scalar multiple of $\mathbf{u}$ (explicitly, this means that there exists a scalar $c$ such that $\mathbf{v} = c\mathbf{u}$).

If $\mathbf{u} = \mathbf{0}$ then by setting $c := 0$ we have $c\mathbf{v} = 0\mathbf{v} = \mathbf{0} = \mathbf{u}$ (this equality holds no matter what the value of $\mathbf{v}$ is), which shows that (1) is true in this particular case. Similarly, if $\mathbf{v} = \mathbf{0}$ then (2) is true because $\mathbf{v} = 0\mathbf{u}$. If $\mathbf{u} = \mathbf{v}$ (for instance, if they are both equal to the zero vector $\mathbf{0}$) then both (1) and (2) are true (by using $c := 1$ for both).

If $\mathbf{u} = c\mathbf{v}$ then $\mathbf{u} \neq \mathbf{0}$ is only possible if $c \neq 0$ and $\mathbf{v} \neq \mathbf{0}$; in this case, it is possible to multiply both sides by $\frac{1}{c}$ to conclude $\mathbf{v} = \frac{1}{c}\mathbf{u}$. This shows that if $\mathbf{u} \neq \mathbf{0}$ and $\mathbf{v} \neq \mathbf{0}$ then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). If $\mathbf{u} = c\mathbf{v}$ but instead $\mathbf{u} = \mathbf{0}$ then at least one of $c$ and $\mathbf{v}$ must be zero. Moreover, if exactly one of $\mathbf{u}$ and $\mathbf{v}$ is $\mathbf{0}$ (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false).

The vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly independent if and only if $\mathbf{u}$ is not a scalar multiple of $\mathbf{v}$ and $\mathbf{v}$ is not a scalar multiple of $\mathbf{u}$.
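
Numerically, this pairwise criterion says the matrix with columns $\mathbf{u}$ and $\mathbf{v}$ has rank at most $1$; a brief sketch (illustrative only, assuming NumPy):

```python
import numpy as np

def dependent_pair(u, v, tol=1e-10):
    # u and v are dependent exactly when one is a scalar multiple of the
    # other, i.e. when the matrix [u v] has rank at most 1.
    return np.linalg.matrix_rank(np.column_stack([u, v]), tol=tol) <= 1

print(dependent_pair(np.array([1.0, 2.0]), np.array([-2.0, -4.0])))  # True: v = -2u
print(dependent_pair(np.array([1.0, 2.0]), np.array([2.0, 1.0])))    # False
```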

Vectors in $\mathbb{R}^2$

Three vectors: Consider the set of vectors $\mathbf{v}_1 = (1, 1)$, $\mathbf{v}_2 = (-3, 2)$, and $\mathbf{v}_3 = (2, 4)$. The condition for linear dependence seeks a set of scalars, not all zero, such that

$a_1\begin{bmatrix}1\\1\end{bmatrix} + a_2\begin{bmatrix}-3\\2\end{bmatrix} + a_3\begin{bmatrix}2\\4\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix},$

or

$\begin{bmatrix}1&-3&2\\1&2&4\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$

Row reduce this matrix equation by subtracting the first row from the second to obtain,

$\begin{bmatrix}1&-3&2\\0&5&2\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$

Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying the second row by 3 and adding it to the first row, to obtain

$\begin{bmatrix}1&0&16/5\\0&1&2/5\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$

Rearranging this equation, we obtain

$\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix} = \begin{bmatrix}a_1\\a_2\end{bmatrix} = -a_3\begin{bmatrix}16/5\\2/5\end{bmatrix},$

which shows that non-zero $a_i$ exist such that $\mathbf{v}_3 = (2, 4)$ can be defined in terms of $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2)$. Thus, the three vectors are linearly dependent.
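
The row reduction above can be reproduced with a computer algebra system; for instance, this SymPy sketch (an illustrative addition) returns exactly the reduced matrix obtained above:

```python
import sympy as sp

# Columns are v1 = (1, 1), v2 = (-3, 2), v3 = (2, 4).
A = sp.Matrix([[1, -3, 2],
               [1,  2, 4]])
R, pivots = A.rref()
print(R)       # Matrix([[1, 0, 16/5], [0, 1, 2/5]])
print(pivots)  # (0, 1): the third column is free, signalling dependence
```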

Two vectors: Now consider the linear dependence of the two vectors $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2)$, and check,

$a_1\begin{bmatrix}1\\1\end{bmatrix} + a_2\begin{bmatrix}-3\\2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix},$

or

$\begin{bmatrix}1&-3\\1&2\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$

The same row reduction presented above yields,

$\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}.$

This shows that $a_i = 0$, which means that the vectors $\mathbf{v}_1 = (1, 1)$ and $\mathbf{v}_2 = (-3, 2)$ are linearly independent.

Vectors in $\mathbb{R}^4$

In order to determine if the three vectors in $\mathbb{R}^4$,

$\mathbf{v}_1 = \begin{bmatrix}1\\4\\2\\-3\end{bmatrix},\quad \mathbf{v}_2 = \begin{bmatrix}7\\10\\-4\\-1\end{bmatrix},\quad \mathbf{v}_3 = \begin{bmatrix}-2\\1\\5\\-4\end{bmatrix}$

are linearly dependent, form the matrix equation,

$\begin{bmatrix}1&7&-2\\4&10&1\\2&-4&5\\-3&-1&-4\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}.$

Row reduce this equation to obtain,

$\begin{bmatrix}1&7&-2\\0&-18&9\\0&0&0\\0&0&0\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}.$

Rearrange to solve for $a_1$ and $a_2$ in terms of $a_3$ and obtain,

$\begin{bmatrix}1&7\\0&-18\end{bmatrix}\begin{bmatrix}a_1\\a_2\end{bmatrix} = -a_3\begin{bmatrix}-2\\9\end{bmatrix}.$

This equation is easily solved to define non-zero $a_i$,

$a_1 = -3a_3/2,\quad a_2 = a_3/2,$

where $a_3$ can be chosen arbitrarily. Thus, the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ are linearly dependent.
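
Equivalently, the dependency just found is a basis vector of the null space of the coefficient matrix, which a computer algebra system produces directly (illustrative SymPy sketch):

```python
import sympy as sp

A = sp.Matrix([[ 1,  7, -2],
               [ 4, 10,  1],
               [ 2, -4,  5],
               [-3, -1, -4]])
# One basis vector of the null space: (-3/2, 1/2, 1),
# i.e. a1 = -3*a3/2 and a2 = a3/2 with a3 = 1.
print(A.nullspace())
```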

Alternative method using determinants

An alternative method relies on the fact that $n$ vectors in $\mathbb{R}^n$ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero.

In this case, the matrix formed by the vectors is

$A = \begin{bmatrix}1&-3\\1&2\end{bmatrix}.$

We may write a linear combination of the columns as

$A\Lambda = \begin{bmatrix}1&-3\\1&2\end{bmatrix}\begin{bmatrix}\lambda_1\\\lambda_2\end{bmatrix}.$

We are interested in whether $A\Lambda = \mathbf{0}$ for some nonzero vector $\Lambda$. This depends on the determinant of $A$, which is

$\det A = 1 \cdot 2 - 1 \cdot (-3) = 5 \neq 0.$

Since the determinant is non-zero, the vectors $(1, 1)$ and $(-3, 2)$ are linearly independent.
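
For example, the determinant test is a one-line computation (illustrative NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, -3.0],
              [1.0,  2.0]])
# det A = 1*2 - 1*(-3) = 5; nonzero, so the columns are independent.
print(np.linalg.det(A))  # 5.0 (up to floating-point rounding)
```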

Otherwise, suppose we have $m$ vectors of $n$ coordinates, with $m < n$. Then $A$ is an $n \times m$ matrix and $\Lambda$ is a column vector with $m$ entries, and we are again interested in $A\Lambda = \mathbf{0}$. As we saw previously, this is equivalent to a list of $n$ equations. Consider the first $m$ rows of $A$, the first $m$ equations; any solution of the full list of equations must also be true of the reduced list. In fact, if $\langle i_1, \dots, i_m \rangle$ is any list of $m$ rows, then the equation must hold for those rows:

$A_{\langle i_1, \dots, i_m \rangle}\Lambda = \mathbf{0}.$

Furthermore, the reverse is true. That is, we can test whether the $m$ vectors are linearly dependent by testing whether

$\det A_{\langle i_1, \dots, i_m \rangle} = 0$

for all possible lists of $m$ rows. (In case $m = n$, this requires only one determinant, as above. If $m > n$, then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available.
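
A sketch of this minor-based test follows (illustrative only, assuming NumPy; the helper name `dependent_by_minors` is hypothetical, and as noted above this enumeration is inefficient in practice):

```python
import itertools
import numpy as np

def dependent_by_minors(A, tol=1e-10):
    """The columns of the n-by-m matrix A (m <= n) are linearly dependent
    exactly when every m-by-m minor formed from m of the n rows vanishes."""
    n, m = A.shape
    return all(abs(np.linalg.det(A[list(rows), :])) < tol
               for rows in itertools.combinations(range(n), m))

# The three dependent vectors in R^4 from the earlier example, as columns.
A = np.array([[ 1.0,  7.0, -2.0],
              [ 4.0, 10.0,  1.0],
              [ 2.0, -4.0,  5.0],
              [-3.0, -1.0, -4.0]])
print(dependent_by_minors(A))  # True: all four 3-by-3 minors are zero
```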

More vectors than dimensions

If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in $\mathbb{R}^2$.

Natural basis vectors

Let $V = \mathbb{R}^n$ and consider the following elements in $V$, known as the natural basis vectors:

$\begin{matrix}\mathbf{e}_1 &=& (1, 0, 0, \ldots, 0)\\\mathbf{e}_2 &=& (0, 1, 0, \ldots, 0)\\&\vdots&\\\mathbf{e}_n &=& (0, 0, 0, \ldots, 1).\end{matrix}$

Then $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$ are linearly independent.

Proof

Suppose that $a_1, a_2, \ldots, a_n$ are real numbers such that

$a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + \cdots + a_n\mathbf{e}_n = \mathbf{0}.$

Since

$a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + \cdots + a_n\mathbf{e}_n = \left(a_1, a_2, \ldots, a_n\right),$

then $a_i = 0$ for all $i = 1, \ldots, n$.
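
Numerically, the natural basis vectors are the columns of the identity matrix, which has full rank (illustrative NumPy check):

```python
import numpy as np

n = 4
E = np.eye(n)  # columns are e_1, ..., e_n
print(np.linalg.matrix_rank(E) == n)  # True: the natural basis is independent
```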

Linear independence of functions

Let $V$ be the vector space of all differentiable functions of a real variable $t$. Then the functions $e^t$ and $e^{2t}$ in $V$ are linearly independent.

Proof

Suppose $a$ and $b$ are two real numbers such that

$ae^t + be^{2t} = 0$

Take the first derivative of the above equation:

$ae^t + 2be^{2t} = 0$

for all values of $t$. We need to show that $a = 0$ and $b = 0$. To do this, we subtract the first equation from the second, giving $be^{2t} = 0$. Since $e^{2t}$ is never zero, $b = 0$. It follows that $a = 0$ too. Therefore, according to the definition of linear independence, $e^t$ and $e^{2t}$ are linearly independent.
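
The subtraction argument above is packaged by the Wronskian criterion, a standard alternative test (not the proof given here): if the Wronskian determinant of the functions is nonzero for some $t$, the functions are linearly independent. An illustrative SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')
f, g = sp.exp(t), sp.exp(2 * t)

# Wronskian: determinant of the matrix of the functions and their derivatives.
W = sp.Matrix([[f, g],
               [sp.diff(f, t), sp.diff(g, t)]]).det()
print(sp.simplify(W))  # exp(3*t), which is never zero, so f and g are independent
```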

Space of linear dependencies

A linear dependency or linear relation among vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ is a tuple $(a_1, \ldots, a_n)$ with $n$ scalar components such that

$a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n = \mathbf{0}.$

If such a linear dependence exists with at least one nonzero component, then the $n$ vectors are linearly dependent. Linear dependencies among $\mathbf{v}_1, \ldots, \mathbf{v}_n$ form a vector space.

If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination.
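
For instance, SymPy's `nullspace` carries out exactly this Gaussian elimination; applied to the three vectors of the earlier $\mathbb{R}^2$ example (illustrative sketch):

```python
import sympy as sp

# Vectors as columns; the dependencies (a1, a2, a3) with
# a1*v1 + a2*v2 + a3*v3 = 0 form the null space of this matrix.
A = sp.Matrix([[1, -3, 2],
               [1,  2, 4]])
print(A.nullspace())  # [Matrix([[-16/5], [-2/5], [1]])]: a basis of the dependency space
```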

Generalizations

Affine independence

See also: Affine space

A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Contrapositively, every linearly independent set is affinely independent. Note that an affinely independent set is not necessarily linearly independent.

Consider a set of $m$ vectors $\mathbf{v}_1, \ldots, \mathbf{v}_m$ of size $n$ each, and consider the set of $m$ augmented vectors $\left(\begin{bmatrix}1\\\mathbf{v}_1\end{bmatrix}, \ldots, \begin{bmatrix}1\\\mathbf{v}_m\end{bmatrix}\right)$ of size $n+1$ each. The original vectors are affinely independent if and only if the augmented vectors are linearly independent.[3]: 256
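
A brief numerical sketch of this augmentation test (illustrative, assuming NumPy; the helper name `affinely_independent` is hypothetical):

```python
import numpy as np

def affinely_independent(points, tol=1e-10):
    # Prepend 1 to each point: affine independence of the points is
    # linear independence of the augmented vectors.
    P = np.column_stack([np.concatenate(([1.0], p)) for p in points])
    return np.linalg.matrix_rank(P, tol=tol) == len(points)

# Three collinear points in the plane are affinely dependent.
print(affinely_independent([np.array([0.0, 0.0]),
                            np.array([1.0, 1.0]),
                            np.array([2.0, 2.0])]))  # False
```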

Linearly independent vector subspaces

Two vector subspaces $M$ and $N$ of a vector space $X$ are said to be linearly independent if $M \cap N = \{0\}$.[4] More generally, a collection $M_1, \ldots, M_d$ of subspaces of $X$ are said to be linearly independent if $M_i \cap \sum_{k \neq i} M_k = \{0\}$ for every index $i$, where $\sum_{k \neq i} M_k = \left\{m_1 + \cdots + m_{i-1} + m_{i+1} + \cdots + m_d : m_k \in M_k \text{ for all } k\right\} = \operatorname{span} \bigcup_{k \in \{1, \ldots, i-1, i+1, \ldots, d\}} M_k$.[4] The vector space $X$ is said to be a direct sum of $M_1, \ldots, M_d$ if these subspaces are linearly independent and $M_1 + \cdots + M_d = X$.
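
For two subspaces given by spanning vectors, the condition $M \cap N = \{0\}$ can be tested through the dimension formula $\dim(M+N) = \dim M + \dim N - \dim(M \cap N)$: the intersection is trivial exactly when the ranks add. An illustrative NumPy sketch for the two-subspace case:

```python
import numpy as np

def subspaces_independent(basis_M, basis_N, tol=1e-10):
    # M ∩ N = {0} exactly when dim(M + N) = dim M + dim N, i.e. when
    # stacking the two spanning sets side by side does not lose rank.
    M = np.column_stack(basis_M)
    N = np.column_stack(basis_N)
    rank = lambda X: np.linalg.matrix_rank(X, tol=tol)
    return rank(np.column_stack([M, N])) == rank(M) + rank(N)

M = [np.array([1.0, 0.0, 0.0])]                             # the x-axis
N = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]  # the yz-plane
print(subspaces_independent(M, N))  # True: R^3 is the direct sum of M and N
```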

See also

  • Matroid – Abstraction of linear independence of vectors

References

  1. ^ Shilov, G. E. (1977). Linear Algebra (Trans. R. A. Silverman). New York: Dover Publications.
  2. ^ Friedberg, Stephen; Insel, Arnold; Spence, Lawrence (2003). Linear Algebra (4th ed.). Pearson. pp. 48–49. ISBN 0130084514.
  3. ^ Lovász, László; Plummer, M. D. (1986). Matching Theory. Annals of Discrete Mathematics, vol. 29. North-Holland. ISBN 0-444-87916-1. MR 0859549.
  4. ^ Bachman, George; Narici, Lawrence (2000). Functional Analysis (2nd ed.). Mineola, New York: Dover Publications. pp. 3–7. ISBN 978-0486402512. OCLC 829157984.
