Matrix splitting

From Wikipedia, the free encyclopedia
Representation of a matrix as a sum

In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods (for example, for systems of differential equations) depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.[1]

Regular splittings

We seek to solve the matrix equation

\mathbf{A}\mathbf{x} = \mathbf{k}, \qquad (1)

where A is a given n × n non-singular matrix, and k is a given column vector with n components. We split the matrix A into

\mathbf{A} = \mathbf{B} - \mathbf{C}, \qquad (2)

where B and C are n × n matrices. If, for an arbitrary n × n matrix M, M has nonnegative entries, we write M ≥ 0. If M has only positive entries, we write M > 0. Similarly, if the matrix M1 − M2 has nonnegative entries, we write M1 ≥ M2.

Definition: A = B − C is a regular splitting of A if B−1 ≥ 0 and C ≥ 0.
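The definition can be checked numerically. The following sketch (in Python with NumPy; an illustration, not part of the original article) tests both conditions directly:

```python
import numpy as np

def is_regular_splitting(B, C):
    """Return True if A = B - C is a regular splitting in Varga's sense:
    B is non-singular with B^{-1} >= 0 (entrywise) and C >= 0 (entrywise)."""
    try:
        B_inv = np.linalg.inv(B)
    except np.linalg.LinAlgError:
        return False  # B singular: B^{-1} does not exist
    return bool(np.all(B_inv >= 0) and np.all(C >= 0))
```

For example, a positive diagonal B paired with a nonnegative C passes, while a B whose inverse has a negative entry fails.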

We assume that matrix equations of the form

\mathbf{B}\mathbf{x} = \mathbf{g}, \qquad (3)

where g is a given column vector, can be solved directly for the vector x. If (2) represents a regular splitting of A, then the iterative method

\mathbf{B}\mathbf{x}^{(m+1)} = \mathbf{C}\mathbf{x}^{(m)} + \mathbf{k}, \quad m = 0, 1, 2, \ldots, \qquad (4)

where x(0) is an arbitrary vector, can be carried out. Equivalently, we write (4) in the form

\mathbf{x}^{(m+1)} = \mathbf{B}^{-1}\mathbf{C}\mathbf{x}^{(m)} + \mathbf{B}^{-1}\mathbf{k}, \quad m = 0, 1, 2, \ldots \qquad (5)

The matrix D = B−1C has nonnegative entries if (2) represents a regular splitting of A.[2]

It can be shown that if A−1 > 0, then ρ(D) < 1, where ρ(D) represents the spectral radius of D, and thus D is a convergent matrix. As a consequence, the iterative method (5) is necessarily convergent.[3][4]

If, in addition, the splitting (2) is chosen so that the matrix B is a diagonal matrix (with the diagonal entries all non-zero, since B must be invertible), then B can be inverted in linear time (see Time complexity).
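The iteration (4)–(5) can be sketched in a few lines of Python with NumPy (an illustration, not part of the original article), here applied to the diagonal splitting of the 3 × 3 example matrix used later in the article:

```python
import numpy as np

def splitting_iteration(B, C, k, x0, num_iters):
    """Carry out B x^{(m+1)} = C x^{(m)} + k, solving with B at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = np.linalg.solve(B, C @ x + k)
    return x

# Example: A = B - C with B the diagonal part of A (a regular splitting).
A = np.array([[6., -2., -3.], [-1., 4., -2.], [-3., -1., 5.]])
B = np.diag(np.diag(A))
C = B - A                       # off-diagonal entries of A, negated
k = np.array([5., -12., 10.])
x = splitting_iteration(B, C, k, np.zeros(3), 200)  # approaches (2, -1, 3)
```

Since B is diagonal here, each `solve` is a cheap componentwise division; any B for which (3) is directly solvable would do.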

Matrix iterative methods

Many iterative methods can be described as a matrix splitting. If the diagonal entries of the matrix A are all nonzero, and we express the matrix A as the matrix sum

\mathbf{A} = \mathbf{D} - \mathbf{U} - \mathbf{L}, \qquad (6)

where D is the diagonal part of A, and U and L are respectively strictly upper and lower triangular n × n matrices, then we have the following.

The Jacobi method can be represented in matrix form as a splitting

\mathbf{x}^{(m+1)} = \mathbf{D}^{-1}(\mathbf{U} + \mathbf{L})\mathbf{x}^{(m)} + \mathbf{D}^{-1}\mathbf{k}.[5][6] \qquad (7)

The Gauss–Seidel method can be represented in matrix form as a splitting

\mathbf{x}^{(m+1)} = (\mathbf{D} - \mathbf{L})^{-1}\mathbf{U}\mathbf{x}^{(m)} + (\mathbf{D} - \mathbf{L})^{-1}\mathbf{k}.[7][8] \qquad (8)

The method of successive over-relaxation can be represented in matrix form as a splitting

\mathbf{x}^{(m+1)} = (\mathbf{D} - \omega\mathbf{L})^{-1}[(1 - \omega)\mathbf{D} + \omega\mathbf{U}]\mathbf{x}^{(m)} + \omega(\mathbf{D} - \omega\mathbf{L})^{-1}\mathbf{k}.[9][10] \qquad (9)
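The three iterations (7)–(9) differ only in how D, L, and U are recombined. A sketch in Python with NumPy (an illustration, not part of the original article; the helper names are ours) makes the correspondence explicit:

```python
import numpy as np

def split_DLU(A):
    """Decompose A = D - L - U: D diagonal, L strictly lower, U strictly upper."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return D, L, U

def jacobi_step(A, k, x):
    D, L, U = split_DLU(A)
    return np.linalg.solve(D, (U + L) @ x + k)            # equation (7)

def gauss_seidel_step(A, k, x):
    D, L, U = split_DLU(A)
    return np.linalg.solve(D - L, U @ x + k)              # equation (8)

def sor_step(A, k, x, omega):
    D, L, U = split_DLU(A)
    M = D - omega * L
    return np.linalg.solve(M, ((1 - omega) * D + omega * U) @ x + omega * k)  # equation (9)
```

Each step solves with a triangular (or diagonal) matrix, which is exactly the "directly solvable" B of the splitting.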

Example

Regular splitting

In equation (1), let

\mathbf{A} = \begin{pmatrix} 6 & -2 & -3 \\ -1 & 4 & -2 \\ -3 & -1 & 5 \end{pmatrix}, \quad \mathbf{k} = \begin{pmatrix} 5 \\ -12 \\ 10 \end{pmatrix}. \qquad (10)

Let us apply the splitting (7) which is used in the Jacobi method: we split A in such a way that B consists of all of the diagonal elements of A, and C consists of all of the off-diagonal elements of A, negated. (Of course this is not the only useful way to split a matrix into two matrices.) We have

\mathbf{B} = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{pmatrix}, \quad \mathbf{C} = \begin{pmatrix} 0 & 2 & 3 \\ 1 & 0 & 2 \\ 3 & 1 & 0 \end{pmatrix}, \qquad (11)

\mathbf{A}^{-1} = \frac{1}{47} \begin{pmatrix} 18 & 13 & 16 \\ 11 & 21 & 15 \\ 13 & 12 & 22 \end{pmatrix}, \quad \mathbf{B}^{-1} = \begin{pmatrix} \frac{1}{6} & 0 & 0 \\ 0 & \frac{1}{4} & 0 \\ 0 & 0 & \frac{1}{5} \end{pmatrix},

\mathbf{D} = \mathbf{B}^{-1}\mathbf{C} = \begin{pmatrix} 0 & \frac{1}{3} & \frac{1}{2} \\ \frac{1}{4} & 0 & \frac{1}{2} \\ \frac{3}{5} & \frac{1}{5} & 0 \end{pmatrix}, \quad \mathbf{B}^{-1}\mathbf{k} = \begin{pmatrix} \frac{5}{6} \\ -3 \\ 2 \end{pmatrix}.

Since B−1 ≥ 0 and C ≥ 0, the splitting (11) is a regular splitting. Since A−1 > 0, the spectral radius ρ(D) < 1. (The approximate eigenvalues of D are λi ≈ −0.4599820, −0.3397859, 0.7997679.) Hence, the matrix D is convergent and the method (5) necessarily converges for the problem (10). Note that the diagonal elements of A are all greater than zero, the off-diagonal elements of A are all less than zero and A is strictly diagonally dominant.[11]
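These claims are easy to confirm numerically; a short check in Python with NumPy (an illustration, not part of the original article):

```python
import numpy as np

A = np.array([[6., -2., -3.], [-1., 4., -2.], [-3., -1., 5.]])
B = np.diag(np.diag(A))        # the splitting (11)
C = B - A                      # off-diagonal entries of A, negated
D = np.linalg.solve(B, C)      # D = B^{-1} C

positive_inverse = np.all(np.linalg.inv(A) > 0)   # A^{-1} > 0 ...
rho = max(abs(np.linalg.eigvals(D)))              # ... so rho(D) < 1
```

The computed spectral radius matches the largest eigenvalue magnitude quoted above, about 0.7998.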

The method (5) applied to the problem (10) then takes the form

\mathbf{x}^{(m+1)} = \begin{pmatrix} 0 & \frac{1}{3} & \frac{1}{2} \\ \frac{1}{4} & 0 & \frac{1}{2} \\ \frac{3}{5} & \frac{1}{5} & 0 \end{pmatrix} \mathbf{x}^{(m)} + \begin{pmatrix} \frac{5}{6} \\ -3 \\ 2 \end{pmatrix}, \quad m = 0, 1, 2, \ldots \qquad (12)

The exact solution to equation (12) is

\mathbf{x} = \begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix}. \qquad (13)

The first few iterates for equation (12) are listed in the table below, beginning with x(0) = (0.0, 0.0, 0.0)^T. From the table one can see that the method is evidently converging to the solution (13), albeit rather slowly.

 m    x1(m)      x2(m)      x3(m)
 0    0.0         0.0        0.0
 1    0.83333    -3.0000     2.0000
 2    0.83333    -1.7917     1.9000
 3    1.1861     -1.8417     2.1417
 4    1.2903     -1.6326     2.3433
 5    1.4608     -1.5058     2.4477
 6    1.5553     -1.4110     2.5753
 7    1.6507     -1.3235     2.6510
 8    1.7177     -1.2618     2.7257
 9    1.7756     -1.2077     2.7783
 10   1.8199     -1.1670     2.8238

Jacobi method

As stated above, the Jacobi method (7) is exactly the regular splitting (11) demonstrated above.

Gauss–Seidel method

Since the diagonal entries of the matrix A in problem (10) are all nonzero, we can express the matrix A as the splitting (6), where

\mathbf{D} = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{pmatrix}, \quad \mathbf{U} = \begin{pmatrix} 0 & 2 & 3 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}, \quad \mathbf{L} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix}. \qquad (14)

We then have

(\mathbf{D} - \mathbf{L})^{-1} = \frac{1}{120} \begin{pmatrix} 20 & 0 & 0 \\ 5 & 30 & 0 \\ 13 & 6 & 24 \end{pmatrix},

(\mathbf{D} - \mathbf{L})^{-1}\mathbf{U} = \frac{1}{120} \begin{pmatrix} 0 & 40 & 60 \\ 0 & 10 & 75 \\ 0 & 26 & 51 \end{pmatrix}, \quad (\mathbf{D} - \mathbf{L})^{-1}\mathbf{k} = \frac{1}{120} \begin{pmatrix} 100 \\ -335 \\ 233 \end{pmatrix}.
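These matrices can be verified with a few lines of Python and NumPy (an illustration, not part of the original article):

```python
import numpy as np

# The splitting (14) of the example matrix A.
D = np.diag([6., 4., 5.])
L = np.array([[0., 0., 0.], [1., 0., 0.], [3., 1., 0.]])
U = np.array([[0., 2., 3.], [0., 0., 2.], [0., 0., 0.]])
k = np.array([5., -12., 10.])

M_inv = np.linalg.inv(D - L)   # (D - L)^{-1}
T = M_inv @ U                  # Gauss-Seidel iteration matrix
c = M_inv @ k                  # constant term of iteration (15)
```

Scaling each result by 120 recovers the integer matrices displayed above.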

The Gauss–Seidel method (8) applied to the problem (10) takes the form

\mathbf{x}^{(m+1)} = \frac{1}{120} \begin{pmatrix} 0 & 40 & 60 \\ 0 & 10 & 75 \\ 0 & 26 & 51 \end{pmatrix} \mathbf{x}^{(m)} + \frac{1}{120} \begin{pmatrix} 100 \\ -335 \\ 233 \end{pmatrix}, \quad m = 0, 1, 2, \ldots \qquad (15)

The first few iterates for equation (15) are listed in the table below, beginning with x(0) = (0.0, 0.0, 0.0)^T. From the table one can see that the method is evidently converging to the solution (13), somewhat faster than the Jacobi method described above.

 m    x1(m)      x2(m)      x3(m)
 0    0.0         0.0        0.0
 1    0.8333     -2.7917     1.9417
 2    0.8736     -1.8107     2.1620
 3    1.3108     -1.5913     2.4682
 4    1.5370     -1.3817     2.6459
 5    1.6957     -1.2531     2.7668
 6    1.7990     -1.1668     2.8461
 7    1.8675     -1.1101     2.8985
 8    1.9126     -1.0726     2.9330
 9    1.9423     -1.0479     2.9558
 10   1.9619     -1.0316     2.9708

Successive over-relaxation method

Let ω = 1.1. Using the splitting (14) of the matrix A in problem (10) for the successive over-relaxation method, we have

(\mathbf{D} - \omega\mathbf{L})^{-1} = \frac{1}{12} \begin{pmatrix} 2 & 0 & 0 \\ 0.55 & 3 & 0 \\ 1.441 & 0.66 & 2.4 \end{pmatrix},

(\mathbf{D} - \omega\mathbf{L})^{-1}[(1 - \omega)\mathbf{D} + \omega\mathbf{U}] = \frac{1}{12} \begin{pmatrix} -1.2 & 4.4 & 6.6 \\ -0.33 & 0.01 & 8.415 \\ -0.8646 & 2.9062 & 5.0073 \end{pmatrix},

\omega(\mathbf{D} - \omega\mathbf{L})^{-1}\mathbf{k} = \frac{1}{12} \begin{pmatrix} 11 \\ -36.575 \\ 25.6135 \end{pmatrix}.

The successive over-relaxation method (9) applied to the problem (10) takes the form

\mathbf{x}^{(m+1)} = \frac{1}{12} \begin{pmatrix} -1.2 & 4.4 & 6.6 \\ -0.33 & 0.01 & 8.415 \\ -0.8646 & 2.9062 & 5.0073 \end{pmatrix} \mathbf{x}^{(m)} + \frac{1}{12} \begin{pmatrix} 11 \\ -36.575 \\ 25.6135 \end{pmatrix}, \quad m = 0, 1, 2, \ldots \qquad (16)

The first few iterates for equation (16) are listed in the table below, beginning with x(0) = (0.0, 0.0, 0.0)^T. From the table one can see that the method is evidently converging to the solution (13), slightly faster than the Gauss–Seidel method described above.

 m    x1(m)      x2(m)      x3(m)
 0    0.0         0.0        0.0
 1    0.9167     -3.0479     2.1345
 2    0.8814     -1.5788     2.2209
 3    1.4711     -1.5161     2.6153
 4    1.6521     -1.2557     2.7526
 5    1.8050     -1.1641     2.8599
 6    1.8823     -1.0930     2.9158
 7    1.9314     -1.0559     2.9508
 8    1.9593     -1.0327     2.9709
 9    1.9761     -1.0185     2.9829
 10   1.9862     -1.0113     2.9901
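The relative speeds of the three methods on this example can be compared empirically. The sketch below (Python with NumPy; an illustration, not part of the original article, and the tolerance is our choice) counts iterations until the error falls below a fixed threshold:

```python
import numpy as np

A = np.array([[6., -2., -3.], [-1., 4., -2.], [-3., -1., 5.]])
k = np.array([5., -12., 10.])
x_exact = np.array([2., -1., 3.])           # solution (13)

# Splitting (14): A = D - L - U.
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)
omega = 1.1

def iterations_until(B, C, rhs, tol=1e-8, max_iter=1000):
    """Run x <- B^{-1}(C x + rhs) from x = 0 until within tol of x_exact."""
    x = np.zeros(3)
    for m in range(1, max_iter + 1):
        x = np.linalg.solve(B, C @ x + rhs)
        if np.max(np.abs(x - x_exact)) < tol:
            return m
    return max_iter

n_jacobi = iterations_until(D, U + L, k)                               # (7)
n_gs     = iterations_until(D - L, U, k)                               # (8)
n_sor    = iterations_until(D - omega * L, (1 - omega) * D + omega * U,
                            omega * k)                                 # (9)
```

Consistent with the tables, SOR with ω = 1.1 needs no more iterations than Gauss–Seidel, which in turn needs fewer than Jacobi.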

Notes

  1. Varga (1960)
  2. Varga (1960, pp. 121–122)
  3. Varga (1960, pp. 122–123)
  4. Varga (1962, p. 89)
  5. Burden & Faires (1993, p. 408)
  6. Varga (1962, p. 88)
  7. Burden & Faires (1993, p. 411)
  8. Varga (1962, p. 88)
  9. Burden & Faires (1993, p. 416)
  10. Varga (1962, p. 88)
  11. Burden & Faires (1993, p. 371)

References

Burden, Richard L.; Faires, J. Douglas (1993). Numerical Analysis (5th ed.). Boston: Prindle, Weber and Schmidt.
Varga, Richard S. (1960). "Factorization and Normalized Iterative Methods". In Langer, Rudolph E. (ed.). Boundary Problems in Differential Equations. Madison: University of Wisconsin Press.
Varga, Richard S. (1962). Matrix Iterative Analysis. Englewood Cliffs, NJ: Prentice-Hall.
