Square root of a matrix


In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product BB is equal to A.[1]

Some authors use the name square root or the notation A^{1/2} only for the specific case when A is positive semidefinite, to denote the unique matrix B that is positive semidefinite and such that BB = B^T B = A (for real-valued matrices, where B^T is the transpose of B).

Less frequently, the name square root may be used for any factorization of a positive semidefinite matrix A as B^T B = A, as in the Cholesky factorization, even if BB ≠ A. This distinct meaning is discussed in Positive definite matrix § Decomposition.

Examples


In general, a matrix can have several square roots. In particular, if A = B^2 then A = (−B)^2 as well.

For example, the 2×2 identity matrix I_2 has infinitely many square roots. They are given by

\[ \begin{bmatrix} \pm 1 & 0 \\ 0 & \pm 1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} a & b \\ c & -a \end{bmatrix} \]

where (a, b, c) are any numbers (real or complex) such that a^2 + bc = 1. In particular, if (r, s, t) is any Pythagorean triple,[a] then

\[ \frac{1}{t} \begin{bmatrix} r & s \\ s & -r \end{bmatrix} \]

is one of the matrix square roots of I_2, which happens to be symmetric and has rational entries.[2] Thus

\[ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}^{2} = \begin{bmatrix} \tfrac{4}{5} & \tfrac{3}{5} \\ \tfrac{3}{5} & -\tfrac{4}{5} \end{bmatrix}^{2} . \]
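The Pythagorean-triple construction above is easy to check numerically. The following sketch (using NumPy; the helper name `triple_root` is illustrative) verifies it for the triple (3, 4, 5):

```python
import numpy as np

# Sketch: any Pythagorean triple (r, s, t) gives a symmetric, rational
# square root (1/t) * [[r, s], [s, -r]] of the 2x2 identity matrix.
def triple_root(r, s, t):
    # the triple must satisfy r^2 + s^2 = t^2
    assert r * r + s * s == t * t, "not a Pythagorean triple"
    return np.array([[r, s], [s, -r]]) / t

B = triple_root(3, 4, 5)
print(np.allclose(B @ B, np.eye(2)))  # True
```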

Minus I_2 also has a square root, for example:

\[ -\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}^{2} , \]

which can be used to represent the imaginary unit i and hence all complex numbers using 2×2 real matrices; see matrix representation of complex numbers.

Just as with the real numbers, a real matrix may fail to have a real square root, but have a square root with complex-valued entries. Some matrices have no square root at all. An example is the matrix

\[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} . \]

Notice that some ideas from number theory do not carry over to matrices: the square root of a nonnegative integer must be either another integer or an irrational number, excluding non-integer rationals. Contrast that with a matrix of integers, which can have a square root whose entries are all non-integer rational numbers, as demonstrated in some of the above examples.

Positive semidefinite matrices

See also: Positive definite matrix § Decomposition

A symmetric real n × n matrix A is called positive semidefinite if x^T A x ≥ 0 for all x ∈ R^n (here x^T denotes the transpose, changing a column vector x into a row vector). A square real matrix A is positive semidefinite if and only if A = B^T B for some matrix B. There can be many different such matrices B. A positive semidefinite matrix A can also have many matrices B such that A = BB. However, A always has precisely one square root B that is both positive semidefinite and symmetric. In particular, since B is required to be symmetric, B = B^T, so the two conditions A = BB and A = B^T B are equivalent.

For complex-valued matrices, the conjugate transpose B^* is used instead, and positive semidefinite matrices are Hermitian, meaning B^* = B.

Theorem.[3] Let A be a positive semidefinite matrix that is also symmetric.[b] Then there is exactly one positive semidefinite and symmetric matrix B such that A = BB.[c]

This unique matrix is called the principal, non-negative, or positive square root (the latter in the case of positive definite matrices).

The principal square root of a real positive semidefinite matrix is real.[3] The principal square root of a positive definite matrix is positive definite; more generally, the rank of the principal square root of A is the same as the rank of A.[3]

The operation of taking the principal square root is continuous on this set of matrices.[4] These properties are consequences of the holomorphic functional calculus applied to matrices.[5][6] The existence and uniqueness of the principal square root can be deduced directly from the Jordan normal form (see below).

Matrices with distinct eigenvalues


An n × n matrix with n distinct nonzero eigenvalues has 2^n square roots. Such a matrix, A, has an eigendecomposition VDV^{−1} where V is the matrix whose columns are eigenvectors of A and D is the diagonal matrix whose diagonal elements are the corresponding n eigenvalues λ_i. Thus the square roots of A are given by VD^{1/2}V^{−1}, where D^{1/2} is any square root matrix of D, which, for distinct eigenvalues, must be diagonal with diagonal elements equal to square roots of the diagonal elements of D; since there are two possible choices for a square root of each diagonal element of D, there are 2^n choices for the matrix D^{1/2}.
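The eigendecomposition recipe can be sketched directly in code. The helper name `all_square_roots` is illustrative; the test matrix has distinct eigenvalues 81 and 9, so it has 2^2 = 4 square roots:

```python
import itertools
import numpy as np

# Sketch: enumerate all 2^n square roots V D^(1/2) V^(-1) of a matrix with
# n distinct nonzero eigenvalues by flipping the sign of each sqrt(lambda_i).
def all_square_roots(A):
    lam, V = np.linalg.eig(A)
    V_inv = np.linalg.inv(V)
    roots = []
    for signs in itertools.product((1, -1), repeat=len(lam)):
        D_half = np.diag(np.array(signs) * np.sqrt(lam.astype(complex)))
        roots.append(V @ D_half @ V_inv)
    return roots

A = np.array([[33.0, 24.0], [48.0, 57.0]])   # eigenvalues 81 and 9
roots = all_square_roots(A)
print(len(roots))                                  # 4
print(all(np.allclose(R @ R, A) for R in roots))   # True
```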

This also leads to a proof of the above observation that a positive definite matrix has precisely one positive definite square root: a positive definite matrix has only positive eigenvalues, and each of these eigenvalues has only one positive square root; since the eigenvalues of the square root matrix are the diagonal elements of D^{1/2}, for the square root matrix to be itself positive definite it must use only the unique positive square roots of the original eigenvalues.

Solutions in closed form

See also: Square root of a 2 by 2 matrix

If a matrix is idempotent, meaning A^2 = A, then by definition one of its square roots is the matrix itself.

Diagonal and triangular matrices


If D is a diagonal n × n matrix D = diag(λ_1, …, λ_n), then some of its square roots are the diagonal matrices diag(μ_1, …, μ_n), where μ_i = ±√λ_i. If the diagonal elements of D are real and non-negative, then D is positive semidefinite, and if the square roots are all taken with the + sign (i.e. all non-negative), the resulting matrix is the principal root of D. A diagonal matrix may have additional non-diagonal roots if some entries on the diagonal are equal, as exemplified by the identity matrix above.

If U is an upper triangular matrix (meaning its entries are u_{i,j} = 0 for i > j) and at most one of its diagonal entries is zero, then one upper triangular solution of the equation B^2 = U can be found as follows. Since the equation u_{i,i} = b_{i,i}^2 must be satisfied, let b_{i,i} be the principal square root of the complex number u_{i,i}. The assumption u_{i,i} ≠ 0 guarantees that b_{i,i} + b_{j,j} ≠ 0 for all i, j (because the principal square roots of complex numbers all lie on one half of the complex plane). From the equation

\[ u_{i,j} = b_{i,i} b_{i,j} + b_{i,i+1} b_{i+1,j} + b_{i,i+2} b_{i+2,j} + \dots + b_{i,j} b_{j,j} \]

we deduce that b_{i,j} can be computed recursively for j − i increasing from 1 to n − 1 as:

\[ b_{i,j} = \frac{1}{b_{i,i} + b_{j,j}} \left( u_{i,j} - b_{i,i+1} b_{i+1,j} - b_{i,i+2} b_{i+2,j} - \dots - b_{i,j-1} b_{j-1,j} \right) . \]
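The recurrence translates directly into code: fill the diagonal with principal square roots, then sweep the superdiagonals in order of increasing j − i. The helper name `sqrtm_triu` and the 3×3 example matrix are illustrative choices:

```python
import numpy as np

# Sketch of the recurrence: compute an upper triangular B with B^2 = U,
# taking principal square roots on the diagonal.
def sqrtm_triu(U):
    n = U.shape[0]
    B = np.zeros_like(U, dtype=complex)
    for i in range(n):
        B[i, i] = np.sqrt(complex(U[i, i]))
    for d in range(1, n):                  # superdiagonal j - i = d
        for i in range(n - d):
            j = i + d
            s = sum(B[i, k] * B[k, j] for k in range(i + 1, j))
            B[i, j] = (U[i, j] - s) / (B[i, i] + B[j, j])
    return B

U = np.array([[4.0, 2.0, 1.0], [0.0, 9.0, 3.0], [0.0, 0.0, 16.0]])
B = sqrtm_triu(U)
print(np.allclose(B @ B, U))  # True
```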

If U is upper triangular but has multiple zeroes on the diagonal, then a square root might not exist, as exemplified by \( \begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix} \). Note that the diagonal entries of a triangular matrix are precisely its eigenvalues (see Triangular matrix § Properties).

By diagonalization


An n × n matrix A is diagonalizable if there is a matrix V and a diagonal matrix D such that A = VDV^{−1}. This happens if and only if A has n eigenvectors which constitute a basis for C^n. In this case, V can be chosen to be the matrix with the n eigenvectors as columns, and thus a square root of A is

\[ R = V S V^{-1} , \]

where S is any square root of D. Indeed,

\[ \left( V D^{\frac{1}{2}} V^{-1} \right)^{2} = V D^{\frac{1}{2}} \left( V^{-1} V \right) D^{\frac{1}{2}} V^{-1} = V D V^{-1} = A . \]

For example, the matrix A = \( \begin{smallmatrix} 33 & 24 \\ 48 & 57 \end{smallmatrix} \) can be diagonalized as VDV^{−1}, where

\[ V = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} 81 & 0 \\ 0 & 9 \end{pmatrix} . \]

D has principal square root

\[ D^{\frac{1}{2}} = \begin{pmatrix} 9 & 0 \\ 0 & 3 \end{pmatrix} , \]

giving the square root

\[ A^{\frac{1}{2}} = V D^{\frac{1}{2}} V^{-1} = \begin{pmatrix} 5 & 2 \\ 4 & 7 \end{pmatrix} . \]
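The worked example can be checked numerically in a few lines:

```python
import numpy as np

# Reproduce the worked example: A = V D V^(-1) with D = diag(81, 9),
# so a square root is V @ diag(9, 3) @ V^(-1).
A = np.array([[33.0, 24.0], [48.0, 57.0]])
V = np.array([[1.0, 1.0], [2.0, -1.0]])
D_half = np.diag([9.0, 3.0])

R = V @ D_half @ np.linalg.inv(V)
print(R)                      # [[5. 2.] [4. 7.]]
print(np.allclose(R @ R, A))  # True
```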

When A is symmetric, the diagonalizing matrix V can be made an orthogonal matrix by suitably choosing the eigenvectors (see spectral theorem). Then the inverse of V is simply the transpose, so that

\[ R = V S V^{\mathsf{T}} . \]

By Schur decomposition


Every complex-valued square matrix A, regardless of diagonalizability, has a Schur decomposition given by A = QUQ^* where U is upper triangular and Q is unitary (meaning Q^* = Q^{−1}). The eigenvalues of A are exactly the diagonal entries of U; if at most one of them is zero, then the following is a square root:[7]

\[ A^{\frac{1}{2}} = Q U^{\frac{1}{2}} Q^{*} , \]

where a square root U^{1/2} of the upper triangular matrix U can be found as described above.
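A minimal sketch of this route using SciPy: `scipy.linalg.schur` produces the decomposition, and the triangular factor is rooted here with `scipy.linalg.sqrtm` (which itself implements a refined, blocked variant of the Schur method). The helper name `sqrt_via_schur` is illustrative:

```python
import numpy as np
from scipy.linalg import schur, sqrtm

# Sketch of the Schur route: A = Q U Q*, take a square root of the
# triangular factor U, and conjugate back by Q.
def sqrt_via_schur(A):
    U, Q = schur(A, output='complex')
    return Q @ sqrtm(U) @ Q.conj().T

A = np.array([[33.0, 24.0], [48.0, 57.0]])
R = sqrt_via_schur(A)
print(np.allclose(R @ R, A))  # True
```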

If A is positive definite, then the eigenvalues are all positive reals, so the chosen diagonal of U^{1/2} also consists of positive reals. Hence the eigenvalues of QU^{1/2}Q^* are positive reals, which means the resulting matrix is the principal root of A.

By Jordan decomposition


As with the Schur decomposition, every square matrix A can be decomposed as A = P^{−1}JP, where P is invertible and J is in Jordan normal form.

To see that any complex matrix with positive eigenvalues has a square root of the same form, it suffices to check this for a Jordan block. Any such block has the form λ(I + N) with λ > 0 and N nilpotent. If (1 + z)^{1/2} = 1 + a_1 z + a_2 z^2 + ⋯ is the binomial expansion for the square root (valid in |z| < 1), then as a formal power series its square equals 1 + z. Substituting N for z, only finitely many terms will be non-zero, and S = √λ (I + a_1 N + a_2 N^2 + ⋯) gives a square root of the Jordan block with eigenvalue √λ.

It suffices to check uniqueness for a Jordan block with λ = 1. The square root constructed above has the form S = I + L, where L is a polynomial in N without constant term. Any other square root T with positive eigenvalues has the form T = I + M with M nilpotent, commuting with N and hence with L. But then 0 = S^2 − T^2 = 2(L − M)(I + (L + M)/2). Since L and M commute, the matrix L + M is nilpotent and I + (L + M)/2 is invertible with inverse given by a Neumann series. Hence L = M.

If A is a matrix with positive eigenvalues and minimal polynomial p(t), then the Jordan decomposition into generalized eigenspaces of A can be deduced from the partial fraction expansion of p(t)^{−1}. The corresponding projections onto the generalized eigenspaces are given by real polynomials in A. On each eigenspace, A has the form λ(I + N) as above. The power series expression for the square root on the eigenspace shows that the principal square root of A has the form q(A), where q(t) is a polynomial with real coefficients.

Power series


Recall the formal power series

\[ (1 - z)^{\frac{1}{2}} = \sum_{n=0}^{\infty} (-1)^{n} \binom{1/2}{n} z^{n} , \]

which converges provided ‖z‖ ≤ 1 (since the coefficients of the power series are summable). Plugging z = I − A into this expression yields

\[ A^{\frac{1}{2}} := \sum_{n=0}^{\infty} (-1)^{n} \binom{1/2}{n} (I - A)^{n} , \]

provided that lim sup_n ‖(I − A)^n‖^{1/n} < 1. By the Gelfand formula, that condition is equivalent to the requirement that the spectrum of A is contained within the disk D(1, 1) ⊆ C. This method of defining or computing A^{1/2} is especially useful in the case where A is positive semidefinite. In that case, we have ‖I − A/‖A‖‖ ≤ 1 and therefore ‖(I − A/‖A‖)^n‖ ≤ ‖I − A/‖A‖‖^n ≤ 1, so that the expression

\[ A^{\frac{1}{2}} = \|A\|^{\frac{1}{2}} \sum_{n=0}^{\infty} (-1)^{n} \binom{1/2}{n} \left( I - \frac{A}{\|A\|} \right)^{n} \]

defines a square root of A, which moreover turns out to be the unique positive semidefinite root. This method remains valid for defining square roots of operators on infinite-dimensional Banach or Hilbert spaces, or of certain elements of (C*) Banach algebras.
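The scaled series can be sketched numerically for a positive definite example. The coefficients (−1)^n C(1/2, n) are generated by the recurrence a_n = a_{n−1}(n − 3/2)/n; the truncation length, helper name `sqrtm_series`, and test matrix are arbitrary illustrative choices:

```python
import numpy as np

# Sketch of the scaled power-series root: with Z = I - A/||A||,
# A^(1/2) = ||A||^(1/2) * sum_n (-1)^n C(1/2, n) Z^n.
def sqrtm_series(A, terms=500):
    norm = np.linalg.norm(A, 2)       # spectral norm, so ||Z|| <= 1
    Z = np.eye(len(A)) - A / norm
    S = np.eye(len(A))                # n = 0 term
    P = np.eye(len(A))                # running power Z^n
    a = 1.0                           # running coefficient (-1)^n C(1/2, n)
    for n in range(1, terms):
        a *= (n - 1.5) / n            # a_0 = 1, a_1 = -1/2, a_2 = -1/8, ...
        P = P @ Z
        S = S + a * P
    return np.sqrt(norm) * S

A = np.array([[2.0, 1.0], [1.0, 2.0]])       # positive definite
R = sqrtm_series(A)
print(np.allclose(R @ R, A, atol=1e-6))      # True
```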

Iterative solutions


By Denman–Beavers iteration


Another way to find the square root of an n × n matrix A is the Denman–Beavers square root iteration.[8]

Let Y_0 = A and Z_0 = I, where I is the n × n identity matrix. The iteration is defined by

\[ Y_{k+1} = \frac{1}{2} \left( Y_{k} + Z_{k}^{-1} \right) , \qquad Z_{k+1} = \frac{1}{2} \left( Z_{k} + Y_{k}^{-1} \right) . \]

As this uses a pair of sequences of matrix inverses whose later elements change comparatively little, only the first elements have a high computational cost, since the remainder can be computed from earlier elements with only a few passes of a variant of Newton's method for computing inverses,

\[ X_{n+1} = 2 X_{n} - X_{n} B X_{n} . \]

With this, for later values of k one would set X_0 = Z_{k−1}^{−1} and B = Z_k, and then use Z_k^{−1} = X_n for some small n (perhaps just 1), and similarly for Y_k^{−1}.

Convergence is not guaranteed, even for matrices that do have square roots, but if the process converges, the matrix Y_k converges quadratically to a square root A^{1/2}, while Z_k converges to its inverse, A^{−1/2}.
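A minimal sketch of the iteration (the helper name `denman_beavers` is illustrative). Note the simultaneous update: Z_{k+1} must use the old Y_k, which the tuple assignment guarantees:

```python
import numpy as np

# Denman-Beavers iteration: Y_k -> A^(1/2), Z_k -> A^(-1/2)
# (when the iteration converges).
def denman_beavers(A, iters=30):
    Y = A.astype(float)
    Z = np.eye(len(A))
    for _ in range(iters):
        # simultaneous update: both right-hand sides use the old Y, Z
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y, Z

A = np.array([[33.0, 24.0], [48.0, 57.0]])
Y, Z = denman_beavers(A)
print(np.allclose(Y @ Y, A))          # Y is a square root of A
print(np.allclose(Y @ Z, np.eye(2)))  # Z is its inverse
```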

By the Babylonian method


Yet another iterative method is obtained by taking the well-known formula of the Babylonian method for computing the square root of a real number and applying it to matrices. Let X_0 = I, where I is the identity matrix. The iteration is defined by

\[ X_{k+1} = \frac{1}{2} \left( X_{k} + A X_{k}^{-1} \right) . \]

Again, convergence is not guaranteed, but if the process converges, the matrix X_k converges quadratically to a square root A^{1/2}. Compared to Denman–Beavers iteration, an advantage of the Babylonian method is that only one matrix inverse need be computed per iteration step. As with Denman–Beavers iteration, the later elements of the sequence of inverses change comparatively little, so the remainder can be computed cheaply from earlier elements with only a few passes of a variant of Newton's method for computing inverses (see Denman–Beavers iteration above). However, unlike Denman–Beavers iteration, the Babylonian method is numerically unstable and more likely to fail to converge.[1]

The Babylonian method follows from Newton's method for the equation X^2 − A = 0, using the fact that A X_k = X_k A for all k.[9]
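A minimal sketch of the Babylonian iteration, assuming convergence for the chosen matrix (the one from the diagonalization example, whose eigenvalues 81 and 9 are positive); the helper name `babylonian_sqrtm` is illustrative:

```python
import numpy as np

# Babylonian (Newton) iteration on matrices: X_{k+1} = (X_k + A X_k^{-1}) / 2,
# starting from X_0 = I so that every iterate commutes with A.
def babylonian_sqrtm(A, iters=30):
    X = np.eye(len(A))
    for _ in range(iters):
        X = 0.5 * (X + A @ np.linalg.inv(X))
    return X

A = np.array([[33.0, 24.0], [48.0, 57.0]])
X = babylonian_sqrtm(A)
print(np.allclose(X @ X, A))  # True
```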

Square roots of positive operators


In linear algebra and operator theory, given a bounded positive semidefinite operator (a non-negative operator) T on a complex Hilbert space, B is a square root of T if T = B* B, where B* denotes the Hermitian adjoint of B. According to the spectral theorem, the continuous functional calculus can be applied to obtain an operator T^{1/2} such that T^{1/2} is itself positive and (T^{1/2})^2 = T. The operator T^{1/2} is the unique non-negative square root of T.

A bounded non-negative operator on a complex Hilbert space is self-adjoint by definition. So T = (T^{1/2})* T^{1/2}. Conversely, it is trivially true that every operator of the form B* B is non-negative. Therefore, an operator T is non-negative if and only if T = B* B for some B (equivalently, T = CC* for some C).

The Cholesky factorization provides another particular example of a square root, which should not be confused with the unique non-negative square root.

Unitary freedom of square roots


If T is a non-negative operator on a finite-dimensional Hilbert space, then all square roots of T are related by unitary transformations. More precisely, if T = A*A = B*B, then there exists a unitary U such that A = UB.

Indeed, take B = T^{1/2} to be the unique non-negative square root of T. If T is strictly positive, then B is invertible, and so U = AB^{−1} is unitary:

\[ U^{*} U = \left( \left( B^{*} \right)^{-1} A^{*} \right) \left( A B^{-1} \right) = \left( B^{*} \right)^{-1} T B^{-1} = \left( B^{*} \right)^{-1} B^{*} B B^{-1} = I . \]

If T is non-negative without being strictly positive, then the inverse of B cannot be defined, but the Moore–Penrose pseudoinverse B^+ can be. In that case, the operator B^+ A is a partial isometry, that is, a unitary operator from the range of T to itself. This can then be extended to a unitary operator U on the whole space by setting it equal to the identity on the kernel of T. More generally, this is true on an infinite-dimensional Hilbert space if, in addition, T has closed range. In general, if A, B are closed and densely defined operators on a Hilbert space H, and A* A = B* B, then A = UB, where U is a partial isometry.
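For the finite-dimensional, strictly positive case, the relating unitary can be computed directly. The sketch below compares two standard but different factors of the same T (a Cholesky-style factor and the non-negative square root); the variable names are illustrative:

```python
import numpy as np

# Unitary freedom: two factorizations T = A*A = B*B of the same strictly
# positive T are related by the (here: real orthogonal) U = A B^(-1).
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
T = M.T @ M + np.eye(3)              # strictly positive definite

# A: Cholesky-style factor, T = A^T A with A upper triangular
A = np.linalg.cholesky(T).T
# B: the non-negative square root T^(1/2) via the eigendecomposition
w, V = np.linalg.eigh(T)
B = V @ np.diag(np.sqrt(w)) @ V.T

U = A @ np.linalg.inv(B)
print(np.allclose(U.T @ U, np.eye(3)))  # U is orthogonal
print(np.allclose(U @ B, A))            # A = U B
```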

Some applications


Square roots, and the unitary freedom of square roots, have applications throughout functional analysis and linear algebra.

Polar decomposition

Main article: Polar decomposition

If A is an invertible operator on a finite-dimensional Hilbert space, then there is a unique unitary operator U and positive operator P such that

\[ A = UP ; \]

this is the polar decomposition of A. The positive operator P is the unique positive square root of the positive operator A*A, and U is defined by U = AP^{−1}.

If A is not invertible, then it still has a polar decomposition in which P is defined in the same way (and is unique). The unitary operator U is not unique. Rather, it is possible to determine a "natural" unitary operator as follows: AP^+ is a unitary operator from the range of A to itself, which can be extended by the identity on the kernel of A. The resulting unitary operator U then yields the polar decomposition of A.
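A sketch of this construction for an invertible real matrix; the helper name `polar_decompose` is illustrative (SciPy ships an equivalent as `scipy.linalg.polar`):

```python
import numpy as np

# Polar decomposition via the square root: P = (A* A)^(1/2), computed here
# from the eigendecomposition of the Hermitian matrix A*A, then U = A P^(-1)
# (valid when A is invertible).
def polar_decompose(A):
    w, V = np.linalg.eigh(A.conj().T @ A)
    P = V @ np.diag(np.sqrt(w)) @ V.conj().T
    U = A @ np.linalg.inv(P)
    return U, P

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # invertible (det = -2)
U, P = polar_decompose(A)
print(np.allclose(U @ P, A))             # A = U P
print(np.allclose(U.T @ U, np.eye(2)))   # U is orthogonal
```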

Kraus operators

Main article: Choi's theorem on completely positive maps

By Choi's result, a linear map

\[ \Phi : \mathbb{C}^{n \times n} \to \mathbb{C}^{m \times m} \]

is completely positive if and only if it is of the form

\[ \Phi(A) = \sum_{i=1}^{k} V_{i} A V_{i}^{*} \]

where k ≤ nm. Let {E_{pq}} ⊂ C^{n×n} be the n^2 elementary matrix units. The positive matrix

\[ M_{\Phi} = \left( \Phi \left( E_{pq} \right) \right)_{pq} \in \mathbb{C}^{nm \times nm} \]

is called the Choi matrix of Φ. The Kraus operators correspond to the (not necessarily square) square roots of M_Φ: for any square root B of M_Φ, one can obtain a family of Kraus operators V_i by undoing the vec operation on each column b_i of B. Thus all sets of Kraus operators are related by partial isometries.

Mixed ensembles

Main article: Density matrix

In quantum physics, a density matrix for an n-level quantum system is an n × n complex matrix ρ that is positive semidefinite with trace 1. If ρ can be expressed as

\[ \rho = \sum_{i} p_{i} v_{i} v_{i}^{*} \]

where p_i > 0 and Σ p_i = 1, the set

\[ \left\{ p_{i}, v_{i} \right\} \]

is said to be an ensemble that describes the mixed state ρ. Notice that {v_i} is not required to be orthogonal. Different ensembles describing the state ρ are related by unitary operators, via the square roots of ρ. For instance, suppose

\[ \rho = \sum_{j} a_{j} a_{j}^{*} . \]

The trace-1 condition means

\[ \sum_{j} a_{j}^{*} a_{j} = 1 . \]

Let

\[ p_{i} = a_{i}^{*} a_{i} , \]

and let v_i be the normalized a_i. We see that

\[ \left\{ p_{i}, v_{i} \right\} \]

gives the mixed state ρ.

Footnotes

  1. ^ A Pythagorean triple is any triplet of positive integers (r, s, t) such that r^2 + s^2 = t^2.
  2. ^ Note that positive semidefinite matrices can be asymmetric, so the symmetry condition is a distinct, necessary requirement.
  3. ^ There can be more than one non-symmetric positive semidefinite matrix B such that A = B^T B.


Citations

  1. ^ a b Higham, Nicholas J. (April 1986). "Newton's Method for the Matrix Square Root" (PDF). Mathematics of Computation. 46 (174): 537–549. doi:10.2307/2007992. JSTOR 2007992.
  2. ^ Mitchell, Douglas W. (November 2003). "Using Pythagorean triples to generate square roots of I_2". The Mathematical Gazette. 87 (510): 499–500. doi:10.1017/s0025557200173723.
  3. ^ a b c Horn & Johnson (2013), p. 439, Theorem 7.2.6 with k = 2.
  4. ^ Horn, Roger A.; Johnson, Charles R. (1990). Matrix Analysis. Cambridge: Cambridge University Press. p. 411. ISBN 9780521386326.
  5. ^ For analytic functions of matrices, see
  6. ^ For the holomorphic functional calculus, see:
  7. ^ Deadman, Edvin; Higham, Nicholas J.; Ralha, Rui (2013). "Blocked Schur Algorithms for Computing the Matrix Square Root" (PDF). Applied Parallel and Scientific Computing. Springer Berlin Heidelberg. pp. 171–182. doi:10.1007/978-3-642-36803-5_12. ISBN 978-3-642-36802-8.
  8. ^ Denman & Beavers 1976; Cheng et al. 2001.
  9. ^ Higham, Nicholas J. (1997). "Stable iterations for the matrix square root". Numerical Algorithms. 15 (2): 227–242. Bibcode:1997NuAlg..15..227H. doi:10.1023/A:1019150005407.
