Singular value

Square roots of the eigenvalues of the self-adjoint operator $T^*T$

In mathematics, in particular functional analysis, the singular values of a compact operator $T : X \to Y$ acting between Hilbert spaces $X$ and $Y$ are the square roots of the (necessarily non-negative) eigenvalues of the self-adjoint operator $T^*T$ (where $T^*$ denotes the adjoint of $T$).

The singular values are non-negative real numbers, usually listed in decreasing order ($\sigma_1(T), \sigma_2(T), \ldots$). The largest singular value $\sigma_1(T)$ is equal to the operator norm of $T$ (see Min-max theorem).

Visualization of a singular value decomposition (SVD) of a 2-dimensional, real shearing matrix $M$. First, we see the unit disc in blue together with the two canonical unit vectors. We then see the action of $M$, which distorts the disc to an ellipse. The SVD decomposes $M$ into three simple transformations: a rotation $V^*$, a scaling $\Sigma$ along the rotated coordinate axes, and a second rotation $U$. $\Sigma$ is a (square, in this example) diagonal matrix containing in its diagonal the singular values of $M$, which represent the lengths $\sigma_1$ and $\sigma_2$ of the semi-axes of the ellipse.

If $T$ acts on the Euclidean space $\mathbb{R}^n$, there is a simple geometric interpretation of the singular values: consider the image by $T$ of the unit sphere; this is an ellipsoid, and the lengths of its semi-axes are the singular values of $T$ (the figure provides an example in $\mathbb{R}^2$).
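As a concrete numerical illustration of the definition, the following minimal NumPy sketch (the shearing matrix entries are an illustrative choice, not taken from the figure, and all variable names are assumptions of the sketch) computes the singular values both as square roots of the eigenvalues of $M^*M$ and directly via an SVD routine, and checks that the largest one equals the operator norm:

    import numpy as np

    # 2x2 real shearing matrix (illustrative choice)
    M = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

    # Singular values as square roots of the eigenvalues of the self-adjoint M^T M
    eig = np.linalg.eigvalsh(M.T @ M)
    sv_from_eig = np.sqrt(np.sort(eig)[::-1])        # listed in decreasing order

    # Singular values straight from the SVD
    sv_from_svd = np.linalg.svd(M, compute_uv=False)

    assert np.allclose(sv_from_eig, sv_from_svd)     # both are ~ [1.618, 0.618]
    # The largest singular value equals the operator (2-)norm of M
    assert np.isclose(sv_from_svd[0], np.linalg.norm(M, 2))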

The singular values are the absolute values of the eigenvalues of a normal matrix $A$, because the spectral theorem can be applied to obtain the unitary diagonalization of $A$ as $A = U \Lambda U^*$. Therefore, $\sqrt{A^*A} = \sqrt{U \Lambda^* \Lambda U^*} = U |\Lambda| U^*$.

Most norms on Hilbert space operators studied are defined using singular values. For example, the Ky Fan k-norm is the sum of the first $k$ singular values, the trace norm is the sum of all singular values, and the Schatten norm is the $p$-th root of the sum of the $p$-th powers of the singular values. Note that each norm is defined only on a special class of operators, hence singular values can be useful in classifying different operators.
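In the matrix case these norms can all be evaluated from the vector of singular values. A short sketch (assuming NumPy, with a random matrix and arbitrary illustrative choices of k and p) that also cross-checks the trace norm against NumPy's built-in nuclear norm:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))
    s = np.linalg.svd(A, compute_uv=False)      # singular values in decreasing order

    k, p = 2, 3                                 # illustrative choices
    ky_fan_k   = s[:k].sum()                    # Ky Fan k-norm: sum of the k largest singular values
    trace_norm = s.sum()                        # trace (nuclear) norm: sum of all singular values
    schatten_p = (s**p).sum() ** (1.0 / p)      # Schatten p-norm: p-th root of the sum of p-th powers

    assert np.isclose(trace_norm, np.linalg.norm(A, 'nuc'))   # cross-check against NumPy
    assert np.isclose(s[0], np.linalg.norm(A, 2))             # largest singular value = operator norm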

In the finite-dimensional case, a matrix can always be decomposed in the form $\mathbf{U\Sigma V^*}$, where $\mathbf{U}$ and $\mathbf{V^*}$ are unitary matrices and $\mathbf{\Sigma}$ is a rectangular diagonal matrix with the singular values lying on the diagonal. This is the singular value decomposition.
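A minimal sketch of this factorization (NumPy, random matrix chosen purely for illustration): np.linalg.svd returns $U$, the singular values, and $V^*$, from which the rectangular diagonal $\Sigma$ can be rebuilt to recover the original matrix.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 3))

    U, s, Vh = np.linalg.svd(A)              # full SVD: U is 4x4, Vh = V* is 3x3
    Sigma = np.zeros((4, 3))                 # rectangular diagonal matrix
    np.fill_diagonal(Sigma, s)               # singular values on the diagonal

    assert np.allclose(U @ Sigma @ Vh, A)    # A = U Sigma V*
    assert np.allclose(U.T @ U, np.eye(4))   # U unitary (orthogonal in the real case)
    assert np.allclose(Vh @ Vh.T, np.eye(3)) # V unitary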

Basic properties

For $A \in \mathbb{C}^{m \times n}$ and $i = 1, 2, \ldots, \min\{m, n\}$:

Min-max theorem for singular values. Here $U$ is a subspace of $\mathbb{C}^n$ of dimension $i$.

$\begin{aligned}\sigma_i(A) &= \min_{\dim(U) = n-i+1}\;\max_{x \in U,\ \|x\|_2 = 1} \|Ax\|_2, \\ \sigma_i(A) &= \max_{\dim(U) = i}\;\min_{x \in U,\ \|x\|_2 = 1} \|Ax\|_2.\end{aligned}$
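In the second characterization the maximum is attained by the span of the first $i$ right singular vectors. The sketch below (an illustration of that attainment under this particular choice, not a search over all subspaces; matrix size and seed are arbitrary) verifies it: for unit $x$ in that span, the minimum of $\|Ax\|_2$ equals the smallest singular value of $A V_i$, which is $\sigma_i(A)$.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 4))
    U, s, Vh = np.linalg.svd(A)

    for i in range(1, 5):                          # i = 1, ..., min(m, n)
        Vi = Vh[:i].T                              # orthonormal basis of span{v_1, ..., v_i}
        # min over unit x in span(Vi) of ||Ax||_2 = smallest singular value of A @ Vi
        inner_min = np.linalg.svd(A @ Vi, compute_uv=False)[-1]
        assert np.isclose(inner_min, s[i - 1])     # the maximum over i-dimensional subspaces is attained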

Matrix transpose and conjugate do not alter singular values.

$\sigma_i(A) = \sigma_i\!\left(A^{\mathsf T}\right) = \sigma_i\!\left(A^*\right).$

For any unitary $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$:

$\sigma_i(A) = \sigma_i(UAV).$
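Both invariances are easy to check numerically; a small sketch (random complex matrix, with unitaries generated from QR factorizations, all purely illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
    sv = lambda M: np.linalg.svd(M, compute_uv=False)

    # Transpose and conjugate (adjoint) leave the singular values unchanged
    assert np.allclose(sv(A), sv(A.T))
    assert np.allclose(sv(A), sv(A.conj().T))

    # Multiplication by unitary matrices on either side leaves them unchanged as well
    U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
    V, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
    assert np.allclose(sv(A), sv(U @ A @ V))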

Relation to eigenvalues:

$\sigma_i^2(A) = \lambda_i\!\left(AA^*\right) = \lambda_i\!\left(A^*A\right).$

Relation to trace:

$\sum_{i=1}^{n} \sigma_i^2 = \operatorname{tr}(A^*A).$
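The eigenvalue and trace relations above can be verified together; a minimal sketch with a random real matrix (names and seed are illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 3))
    s = np.linalg.svd(A, compute_uv=False)                 # three singular values, decreasing

    lam_AhA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # eigenvalues of A*A (3 values)
    lam_AAh = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]   # eigenvalues of AA* (5 values, two ~ 0)

    assert np.allclose(s**2, lam_AhA)                      # sigma_i^2 = lambda_i(A*A)
    assert np.allclose(s**2, lam_AAh[:3])                  # = nonzero lambda_i(AA*)
    assert np.isclose((s**2).sum(), np.trace(A.T @ A))     # sum sigma_i^2 = tr(A*A)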

If $A^*A$ has full rank, the product of the singular values is $\det\sqrt{A^*A}$.

If $AA^*$ has full rank, the product of the singular values is $\det\sqrt{AA^*}$.

If $A$ is square and has full rank, the product of the singular values is $|\det A|$.
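A quick check of the determinant relations for a square full-rank matrix (random and illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((4, 4))                       # square; full rank with probability 1
    s = np.linalg.svd(A, compute_uv=False)

    assert np.isclose(s.prod(), abs(np.linalg.det(A)))    # product of singular values = |det A|
    # equivalently det(sqrt(A*A)) = sqrt(det(A*A))
    assert np.isclose(s.prod(), np.sqrt(np.linalg.det(A.T @ A)))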

If $A$ is normal, then $\sigma(A) = |\lambda(A)|$, that is, its singular values are the absolute values of its eigenvalues.

For a generic rectangular matrix $A$, let $\tilde{A} = \begin{bmatrix} 0 & A \\ A^* & 0 \end{bmatrix}$ be its augmented matrix. It has eigenvalues $\pm\sigma(A)$ (where $\sigma(A)$ are the singular values of $A$), and the remaining eigenvalues are zero. Let $A = U \Sigma V^*$ be the singular value decomposition; then the eigenvectors of $\tilde{A}$ for $\pm\sigma_i$ are $\begin{bmatrix} \mathbf{u}_i \\ \pm\mathbf{v}_i \end{bmatrix}$.[1]: 52
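The following sketch builds this augmented matrix for a random rectangular $A$ and confirms that its nonzero eigenvalues are exactly the pairs $\pm\sigma_i(A)$ (matrix size and names are illustrative):

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((4, 2))
    s = np.linalg.svd(A, compute_uv=False)                   # two singular values

    # Augmented matrix [[0, A], [A*, 0]] of size (m + n) x (m + n)
    A_tilde = np.block([[np.zeros((4, 4)), A],
                        [A.T, np.zeros((2, 2))]])

    eig = np.sort(np.linalg.eigvalsh(A_tilde))               # symmetric, so real eigenvalues
    expected = np.sort(np.concatenate([s, -s, np.zeros(2)])) # +/- sigma_i plus |m - n| zeros
    assert np.allclose(eig, expected)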

The smallest singular value

The smallest singular value of a matrix $A$ is $\sigma_n(A)$. It has the following properties for a non-singular matrix $A$:

  • The 2-norm of the inverse matrix $A^{-1}$ equals the inverse $\sigma_n^{-1}(A)$.[2]: Thm.3.3
  • The absolute values of all elements of the inverse matrix $A^{-1}$ are at most the inverse $\sigma_n^{-1}(A)$.[2]: Thm.3.3

Intuitively, if $\sigma_n(A)$ is small, then the rows of $A$ are "almost" linearly dependent. If $\sigma_n(A) = 0$, then the rows of $A$ are linearly dependent and $A$ is not invertible.
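Both properties above can be confirmed numerically; a minimal sketch with a random non-singular matrix (illustrative):

    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.standard_normal((5, 5))                     # non-singular with probability 1
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]  # smallest singular value sigma_n(A)
    A_inv = np.linalg.inv(A)

    # The 2-norm of the inverse equals 1 / sigma_n(A)
    assert np.isclose(np.linalg.norm(A_inv, 2), 1.0 / sigma_min)
    # Every entry of the inverse is bounded in absolute value by 1 / sigma_n(A)
    assert np.max(np.abs(A_inv)) <= 1.0 / sigma_min + 1e-12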

Inequalities about singular values

See also.[3]

Singular values of sub-matrices

For $A \in \mathbb{C}^{m \times n}$:

  1. Let $B$ denote $A$ with one of its rows or columns deleted. Then $\sigma_{i+1}(A) \leq \sigma_i(B) \leq \sigma_i(A)$ (a numerical check of this bound is sketched after the list).
  2. Let $B$ denote $A$ with two of its rows and/or columns deleted. Then $\sigma_{i+2}(A) \leq \sigma_i(B) \leq \sigma_i(A)$.
  3. Let $B$ denote an $(m-k) \times (n-\ell)$ submatrix of $A$. Then $\sigma_{i+k+\ell}(A) \leq \sigma_i(B) \leq \sigma_i(A)$.
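A sketch of the interlacing bound in item 1, deleting one row of a random matrix (choices are illustrative, with the convention $\sigma_{i+1}(A) = 0$ when the index exceeds $\min\{m, n\}$):

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((5, 4))
    B = A[1:, :]                                          # A with its first row deleted

    sA = np.linalg.svd(A, compute_uv=False)
    sB = np.linalg.svd(B, compute_uv=False)

    tol = 1e-12
    for i in range(len(sB)):
        upper = sA[i]                                     # sigma_i(B) <= sigma_i(A)
        lower = sA[i + 1] if i + 1 < len(sA) else 0.0     # sigma_{i+1}(A) <= sigma_i(B)
        assert lower - tol <= sB[i] <= upper + tol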

Singular values of A + B

For $A, B \in \mathbb{C}^{m \times n}$:

  1. $\sum_{i=1}^{k} \sigma_i(A+B) \leq \sum_{i=1}^{k} \bigl(\sigma_i(A) + \sigma_i(B)\bigr), \quad k = \min\{m, n\}$
  2. $\sigma_{i+j-1}(A+B) \leq \sigma_i(A) + \sigma_j(B), \quad i, j \in \mathbb{N},\ i + j - 1 \leq \min\{m, n\}$ (both inequalities are spot-checked in the sketch below)
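A numerical spot-check of both inequalities (random matrices, purely illustrative; indices are 1-based as in the statements):

    import numpy as np

    rng = np.random.default_rng(9)
    m, n = 4, 3
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((m, n))

    sA  = np.linalg.svd(A, compute_uv=False)
    sB  = np.linalg.svd(B, compute_uv=False)
    sAB = np.linalg.svd(A + B, compute_uv=False)

    tol = 1e-12
    # 1. sum of singular values of A + B is at most the sum for A plus the sum for B (k = min(m, n))
    assert sAB.sum() <= (sA + sB).sum() + tol
    # 2. sigma_{i+j-1}(A + B) <= sigma_i(A) + sigma_j(B)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i + j - 1 <= n:
                assert sAB[i + j - 2] <= sA[i - 1] + sB[j - 1] + tol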

Singular values of AB

For $A, B \in \mathbb{C}^{n \times n}$:

  1. $\begin{aligned}\prod_{i=n-k+1}^{n} \sigma_i(A)\,\sigma_i(B) &\leq \prod_{i=n-k+1}^{n} \sigma_i(AB), \\ \prod_{i=1}^{k} \sigma_i(AB) &\leq \prod_{i=1}^{k} \sigma_i(A)\,\sigma_i(B), \\ \sum_{i=1}^{k} \sigma_i^p(AB) &\leq \sum_{i=1}^{k} \sigma_i^p(A)\,\sigma_i^p(B)\end{aligned}$
  2. $\sigma_n(A)\,\sigma_i(B) \leq \sigma_i(AB) \leq \sigma_1(A)\,\sigma_i(B), \quad i = 1, 2, \ldots, n.$ (checked in the sketch below)

For $A, B \in \mathbb{C}^{m \times n}$:[4] $2\,\sigma_i(AB^*) \leq \sigma_i\!\left(A^*A + B^*B\right), \quad i = 1, 2, \ldots, n.$
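The bounds in item 2 and the last inequality can be spot-checked with random square matrices (a minimal sketch; sizes and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(10)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))

    sA  = np.linalg.svd(A, compute_uv=False)
    sB  = np.linalg.svd(B, compute_uv=False)
    sAB = np.linalg.svd(A @ B, compute_uv=False)
    sS  = np.linalg.svd(A.T @ A + B.T @ B, compute_uv=False)  # A*A + B*B (real case)
    sP  = np.linalg.svd(A @ B.T, compute_uv=False)            # A B*

    tol = 1e-12
    for i in range(n):
        # sigma_n(A) sigma_i(B) <= sigma_i(AB) <= sigma_1(A) sigma_i(B)
        assert sA[-1] * sB[i] - tol <= sAB[i] <= sA[0] * sB[i] + tol
        # 2 sigma_i(A B*) <= sigma_i(A*A + B*B)
        assert 2 * sP[i] <= sS[i] + tol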

Singular values and eigenvalues

For $A \in \mathbb{C}^{n \times n}$:

  1. See [5]: $\lambda_i\!\left(A + A^*\right) \leq 2\,\sigma_i(A), \quad i = 1, 2, \ldots, n.$ (all three bounds in this section are spot-checked in the sketch after the list)
  2. Assume $\left|\lambda_1(A)\right| \geq \cdots \geq \left|\lambda_n(A)\right|$. Then for $k = 1, 2, \ldots, n$:
    1. Weyl's theorem: $\prod_{i=1}^{k} \left|\lambda_i(A)\right| \leq \prod_{i=1}^{k} \sigma_i(A).$
    2. For $p > 0$: $\sum_{i=1}^{k} \left|\lambda_i^p(A)\right| \leq \sum_{i=1}^{k} \sigma_i^p(A).$
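A sketch verifying all three inequalities on a random example (illustrative; eigenvalues are sorted by decreasing modulus, and p = 2 is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(11)
    n = 5
    A = rng.standard_normal((n, n))

    s = np.linalg.svd(A, compute_uv=False)
    lam = np.linalg.eigvals(A)
    lam = lam[np.argsort(-np.abs(lam))]                  # |lambda_1| >= ... >= |lambda_n|
    lam_H = np.sort(np.linalg.eigvalsh(A + A.T))[::-1]   # eigenvalues of A + A*, decreasing

    tol = 1e-12
    for i in range(n):
        assert lam_H[i] <= 2 * s[i] + tol                # lambda_i(A + A*) <= 2 sigma_i(A)
    p = 2.0
    for k in range(1, n + 1):
        assert np.prod(np.abs(lam[:k])) <= np.prod(s[:k]) + tol       # Weyl's theorem
        assert np.sum(np.abs(lam[:k])**p) <= np.sum(s[:k]**p) + tol   # p-th power sums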

History

This concept was introduced by Erhard Schmidt in 1907. Schmidt called singular values "eigenvalues" at that time. The name "singular value" was first quoted by Smithies in 1937. In 1957, Allahverdiev proved the following characterization of the nth singular number:[6]

$\sigma_n(T) = \inf\bigl\{\, \|T - L\| : L \text{ is an operator of finite rank} < n \,\bigr\}.$

This formulation made it possible to extend the notion of singular values to operators in Banach spaces. Note that there is a more general concept of s-numbers, which also includes the Gelfand and Kolmogorov widths.
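For matrices, the infimum in Allahverdiev's characterization is attained by a truncated SVD (this is the Eckart–Young–Mirsky theorem in the spectral norm); a minimal sketch of that connection, with an arbitrary random matrix:

    import numpy as np

    rng = np.random.default_rng(12)
    A = rng.standard_normal((5, 4))
    U, s, Vh = np.linalg.svd(A)

    # sigma_n(A) = inf { ||A - L||_2 : rank(L) < n }, attained by the rank-(n-1) truncated SVD
    for n_idx in range(1, 5):                            # n_idx plays the role of n in the formula
        L = U[:, :n_idx - 1] @ np.diag(s[:n_idx - 1]) @ Vh[:n_idx - 1, :]
        assert np.isclose(np.linalg.norm(A - L, 2), s[n_idx - 1])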


References

  1. Tao, Terence (2012). Topics in Random Matrix Theory. Graduate Studies in Mathematics. Providence, RI: American Mathematical Society. ISBN 978-0-8218-7430-1.
  2. Demmel, James W. (1997). Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611971446. ISBN 978-0-89871-389-3.
  3. R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991. Chap. 3.
  4. X. Zhan. Matrix Inequalities. Springer-Verlag, Berlin, Heidelberg, 2002. p. 28.
  5. R. Bhatia. Matrix Analysis. Springer-Verlag, New York, 1997. Prop. III.5.1.
  6. I. C. Gohberg and M. G. Krein. Introduction to the Theory of Linear Non-selfadjoint Operators. Translations of Mathematical Monographs, Vol. 18. American Mathematical Society, Providence, RI, 1969. Translated from the Russian by A. Feinstein.