Laplacian matrix

From Wikipedia, the free encyclopedia

Matrix representation of a graph

In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix, or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.

The Laplacian matrix relates to many functional graph properties. Kirchhoff's theorem can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector (the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian), as established by Cheeger's inequality. The spectral decomposition of the Laplacian matrix allows the construction of low-dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing. Graph-based signal processing is based on the graph Fourier transform, which extends the traditional discrete Fourier transform by replacing the standard basis of complex sinusoids with the eigenvectors of the Laplacian matrix of the graph corresponding to the signal.

The Laplacian matrix is easiest to define for a simple graph but is more common in applications for an edge-weighted graph, i.e., with weights on its edges, which are the entries of the graph adjacency matrix. Spectral graph theory relates properties of a graph to its spectrum, i.e., the eigenvalues and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization, a column/row scaling of the matrix entries, resulting in normalized adjacency and Laplacian matrices.

Definitions for simple graphs

Laplacian matrix


Given a simple graph {\displaystyle G} with {\displaystyle n} vertices {\displaystyle v_{1},\ldots ,v_{n}}, its Laplacian matrix {\textstyle L_{n\times n}} is defined element-wise as[1]

{\displaystyle L_{i,j}:={\begin{cases}\deg(v_{i})&{\mbox{if}}\ i=j\\-1&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}},\end{cases}}}

or equivalently by the matrix

{\displaystyle L=D-A,}

where D is the degree matrix, and A is the graph's adjacency matrix. Since {\textstyle G} is a simple graph, {\textstyle A} only contains 1s or 0s and its diagonal elements are all 0s.

Here is a simple example of a labelled, undirected graph and its Laplacian matrix.

Labelled graph: (figure omitted)
Degree matrix: {\textstyle \left({\begin{array}{rrrrrr}2&0&0&0&0&0\\0&3&0&0&0&0\\0&0&2&0&0&0\\0&0&0&3&0&0\\0&0&0&0&3&0\\0&0&0&0&0&1\\\end{array}}\right)}
Adjacency matrix: {\textstyle \left({\begin{array}{rrrrrr}0&1&0&0&1&0\\1&0&1&0&1&0\\0&1&0&1&0&0\\0&0&1&0&1&1\\1&1&0&1&0&0\\0&0&0&1&0&0\\\end{array}}\right)}
Laplacian matrix: {\textstyle \left({\begin{array}{rrrrrr}2&-1&0&0&-1&0\\-1&3&-1&0&-1&0\\0&-1&2&-1&0&0\\0&0&-1&3&-1&-1\\-1&-1&0&-1&3&0\\0&0&0&-1&0&1\\\end{array}}\right)}

We observe for the undirected graph that both the adjacency matrix and the Laplacian matrix are symmetric, and that the row- and column-sums of the Laplacian matrix are all zeros (which directly implies that the Laplacian matrix is singular).
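
The construction L = D - A and these properties can be reproduced with a short NumPy sketch (illustrative only, not part of the article's sources), using the example adjacency matrix above:

```python
import numpy as np

# Adjacency matrix of the 6-vertex example graph above
A = np.array([[0, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 0, 1, 0, 0]])

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

assert (L == L.T).all()             # symmetric
assert (L.sum(axis=0) == 0).all()   # column sums are zero
assert (L.sum(axis=1) == 0).all()   # row sums are zero, so L is singular
```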

For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example:

Labelled graph: (figure omitted)
Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&1\\0&0&1\\1&0&0\\\end{array}}\right)}
Out-Degree matrix: {\textstyle \left({\begin{array}{rrr}2&0&0\\0&1&0\\0&0&1\\\end{array}}\right)}
Out-Degree Laplacian: {\textstyle \left({\begin{array}{rrr}2&0&-1\\-1&1&0\\-1&-1&1\\\end{array}}\right)}
In-Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&1&0\\0&0&2\\\end{array}}\right)}
In-Degree Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1&-1\\0&1&-1\\-1&0&2\\\end{array}}\right)}

In the directed graph, the adjacency matrix and the Laplacian matrix are asymmetric. In its Laplacian matrix, column-sums or row-sums are zero, depending on whether the indegree or outdegree has been used.

Laplacian matrix for an undirected graph via the oriented incidence matrix


The {\textstyle |v|\times |e|} oriented incidence matrix B with element B_{ve} for the vertex v and the edge e (connecting vertices {\textstyle v_{i}} and {\textstyle v_{j}}, with i ≠ j) is defined by

{\displaystyle B_{ve}=\left\{{\begin{array}{rl}1,&{\text{if }}v=v_{i}\\-1,&{\text{if }}v=v_{j}\\0,&{\text{otherwise}}.\end{array}}\right.}

Even though the edges in this definition are technically directed, their directions can be arbitrary, still resulting in the same symmetric {\textstyle |v|\times |v|} Laplacian matrix L defined as

{\displaystyle L=BB^{\textsf {T}}}

where {\textstyle B^{\textsf {T}}} is the matrix transpose of B.

Undirected graph: (figure omitted)
Incidence matrix: {\textstyle \left({\begin{array}{rrrr}1&1&1&0\\-1&0&0&0\\0&-1&0&1\\0&0&-1&-1\\\end{array}}\right)}
Laplacian matrix: {\textstyle \left({\begin{array}{rrrr}3&-1&-1&-1\\-1&1&0&0\\-1&0&2&-1\\-1&0&-1&2\\\end{array}}\right)}

An alternative product {\displaystyle B^{\textsf {T}}B} defines the so-called {\textstyle |e|\times |e|} edge-based Laplacian, as opposed to the original commonly used vertex-based Laplacian matrix L.
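
A brief NumPy sketch (illustrative, assuming only NumPy) that verifies L = BB^T for the example incidence matrix above, and also forms the edge-based Laplacian B^TB:

```python
import numpy as np

# Oriented incidence matrix of the 4-vertex, 4-edge example above
# (rows = vertices, columns = edges; edge orientations are arbitrary)
B = np.array([[ 1,  1,  1,  0],
              [-1,  0,  0,  0],
              [ 0, -1,  0,  1],
              [ 0,  0, -1, -1]])

L = B @ B.T          # vertex-based Laplacian, matches the matrix shown above
edge_L = B.T @ B     # edge-based Laplacian (|e| x |e|)

assert (L == L.T).all()
assert (L.sum(axis=1) == 0).all()
```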

Symmetric Laplacian for a directed graph


The Laplacian matrix of a directed graph is by definition generally non-symmetric, while, e.g., traditional spectral clustering is primarily developed for undirected graphs with symmetric adjacency and Laplacian matrices. A trivial approach to applying techniques requiring the symmetry is to turn the original directed graph into an undirected graph and build the Laplacian matrix for the latter.

In matrix notation, the adjacency matrix of the undirected graph could, e.g., be defined as a Boolean sum of the adjacency matrix {\displaystyle A} of the original directed graph and its matrix transpose {\displaystyle A^{T}}, where the zero and one entries of {\displaystyle A} are treated as logical, rather than numerical, values, as in the following example:

Adjacency matrix: {\textstyle \left({\begin{array}{ccc}0&1&1\\0&0&1\\1&0&0\\\end{array}}\right)}
Symmetrized adjacency: {\textstyle \left({\begin{array}{ccc}0&1&1\\1&0&1\\1&1&0\\\end{array}}\right)}
Symmetric Laplacian matrix: {\textstyle \left({\begin{array}{ccc}2&-1&-1\\-1&2&-1\\-1&-1&2\\\end{array}}\right)}
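
The Boolean symmetrization can be illustrated with the following NumPy sketch (not part of the cited sources):

```python
import numpy as np

# Directed adjacency matrix from the example above
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])

A_sym = np.logical_or(A, A.T).astype(int)    # Boolean (logical) sum of A and A^T
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym   # Laplacian of the symmetrized graph

print(L_sym)   # [[ 2 -1 -1], [-1  2 -1], [-1 -1  2]]
```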

Laplacian matrix normalization


A vertex with a large degree, also called a heavy node, results in a large diagonal entry in the Laplacian matrix that dominates the matrix properties. Normalization aims to make the influence of such vertices more equal to that of other vertices by dividing the entries of the Laplacian matrix by the vertex degrees. To avoid division by zero, isolated vertices with zero degrees are excluded from the normalization.

Symmetrically normalized Laplacian


The symmetrically normalized Laplacian matrix is defined as:[1]

{\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=I-(D^{+})^{1/2}A(D^{+})^{1/2},}

where {\displaystyle D^{+}} is the Moore–Penrose inverse of the degree matrix.

The elements of {\textstyle L^{\text{sym}}} are thus given by

{\displaystyle L_{i,j}^{\text{sym}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\sqrt {\deg(v_{i})\deg(v_{j})}}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}

The symmetrically normalized Laplacian matrix is symmetric if and only if the adjacency matrix is symmetric.

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&0\\1&0&1\\0&1&0\\\end{array}}\right)}
Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&2&0\\0&0&1\\\end{array}}\right)}
Normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-{\sqrt {1/2}}&0\\-{\sqrt {1/2}}&1&-{\sqrt {1/2}}\\0&-{\sqrt {1/2}}&1\\\end{array}}\right)}
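
A small NumPy sketch (illustrative) computing the symmetrically normalized Laplacian of the 3-vertex path graph above, treating (D^+)^{1/2} as the diagonal matrix of reciprocal square roots of the nonzero degrees:

```python
import numpy as np

# 3-vertex path graph from the example above
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(axis=1)

# (D^+)^{1/2}: reciprocal square roots of the degrees (0 for isolated vertices)
D_inv_sqrt = np.diag([1.0 / np.sqrt(d) if d > 0 else 0.0 for d in deg])

L = np.diag(deg) - A
L_sym = D_inv_sqrt @ L @ D_inv_sqrt
print(L_sym)   # off-diagonal entries equal -1/sqrt(deg(v_i) * deg(v_j))
```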

For a non-symmetric adjacency matrix of a directed graph, either the indegree or the outdegree can be used for normalization:

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&1\\0&0&1\\1&0&0\\\end{array}}\right)}
Out-Degree matrix: {\textstyle \left({\begin{array}{rrr}2&0&0\\0&1&0\\0&0&1\\\end{array}}\right)}
Out-Degree normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-{\sqrt {1/2}}&-{\sqrt {1/2}}\\0&1&-1\\-{\sqrt {1/2}}&0&1\\\end{array}}\right)}
In-Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&1&0\\0&0&2\\\end{array}}\right)}
In-Degree normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1&-{\sqrt {1/2}}\\0&1&-{\sqrt {1/2}}\\-{\sqrt {1/2}}&0&1\\\end{array}}\right)}

Left (random-walk) and right normalized Laplacians


The left (random-walk) normalized Laplacian matrix is defined as:

{\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A,}

where {\displaystyle D^{+}} is the Moore–Penrose inverse. The elements of {\textstyle L^{\text{rw}}} are given by

{\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}

Similarly, the right normalized Laplacian matrix is defined as

{\displaystyle LD^{+}=I-AD^{+}}.

The left or right normalized Laplacian matrix is symmetric if the adjacency matrix is symmetric and the graph is regular. Otherwise, the left or right normalized Laplacian matrix is asymmetric. For example,

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&0\\1&0&1\\0&1&0\\\end{array}}\right)}
Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&2&0\\0&0&1\\\end{array}}\right)}
Left normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1&0\\-1/2&1&-1/2\\0&-1&1\\\end{array}}\right)}
Right normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1/2&0\\-1&1&-1\\0&-1/2&1\\\end{array}}\right)}

The example also demonstrates that if {\displaystyle G} has no isolated vertices, then {\displaystyle D^{+}A} is right stochastic and hence is the matrix of a random walk, so that the left normalized Laplacian {\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A} has each row summing to zero. Thus we sometimes alternatively call {\displaystyle L^{\text{rw}}} the random-walk normalized Laplacian. In the less commonly used right normalized Laplacian {\displaystyle LD^{+}=I-AD^{+}}, each column sums to zero since {\displaystyle AD^{+}} is left stochastic.
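
The row- and column-sum properties can be checked with a short NumPy sketch (illustrative only):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
D_plus = np.diag([1.0 / d if d > 0 else 0.0 for d in A.sum(axis=1)])
I = np.eye(3)

L_rw    = I - D_plus @ A   # left (random-walk) normalized Laplacian
L_right = I - A @ D_plus   # right normalized Laplacian

assert np.allclose(L_rw.sum(axis=1), 0)     # D^+ A is right stochastic: zero row sums
assert np.allclose(L_right.sum(axis=0), 0)  # A D^+ is left stochastic: zero column sums
```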

For a non-symmetric adjacency matrix of a directed graph, one also needs to choose the indegree or outdegree for normalization:

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&1\\0&0&1\\1&0&0\\\end{array}}\right)}
Out-Degree matrix: {\textstyle \left({\begin{array}{rrr}2&0&0\\0&1&0\\0&0&1\\\end{array}}\right)}
Out-Degree left normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1/2&-1/2\\0&1&-1\\-1&0&1\\\end{array}}\right)}
In-Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&1&0\\0&0&2\\\end{array}}\right)}
In-Degree right normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1&-1/2\\0&1&-1/2\\-1&0&1\\\end{array}}\right)}

The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic {\displaystyle D_{\text{out}}^{+}A}, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic {\displaystyle AD_{\text{in}}^{+}}.

Definitions for graphs with weighted edges


Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative, with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization.

Laplacian matrix


The Laplacian matrix is defined by

{\displaystyle L=D-A,}

where D is the degree matrix and A is the adjacency matrix of the graph.

For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example:

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&2\\3&0&5\\6&7&0\\\end{array}}\right)}
In-Degree matrix: {\textstyle \left({\begin{array}{rrr}9&0&0\\0&8&0\\0&0&7\\\end{array}}\right)}
In-Degree Laplacian: {\textstyle \left({\begin{array}{rrr}9&-1&-2\\-3&8&-5\\-6&-7&7\\\end{array}}\right)}
Out-Degree matrix: {\textstyle \left({\begin{array}{rrr}3&0&0\\0&8&0\\0&0&13\\\end{array}}\right)}
Out-Degree Laplacian: {\textstyle \left({\begin{array}{rrr}3&-1&-2\\-3&8&-5\\-6&-7&13\\\end{array}}\right)}
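
A short NumPy sketch (illustrative) reproducing the in-degree and out-degree Laplacians of this weighted example:

```python
import numpy as np

# Weighted directed adjacency matrix from the example above
A = np.array([[0, 1, 2],
              [3, 0, 5],
              [6, 7, 0]], dtype=float)

D_out = np.diag(A.sum(axis=1))   # out-degrees: row sums (3, 8, 13)
D_in  = np.diag(A.sum(axis=0))   # in-degrees: column sums (9, 8, 7)

L_out = D_out - A   # out-degree Laplacian: rows sum to zero
L_in  = D_in  - A   # in-degree Laplacian: columns sum to zero

assert np.allclose(L_out.sum(axis=1), 0)
assert np.allclose(L_in.sum(axis=0), 0)
```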

Graph self-loops, manifesting themselves by non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values.

Symmetric Laplacian via the incidence matrix

A 2-dimensional spring system.

For graphs with weighted edges one can define a weighted incidence matrix B and use it to construct the corresponding symmetric Laplacian as {\displaystyle L=BB^{\textsf {T}}}. An alternative cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix just holding the values of the weights. A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges.

We thus reuse the definition of the weightless {\textstyle |v|\times |e|} incidence matrix B with element B_{ve} for the vertex v and the edge e (connecting vertices {\textstyle v_{i}} and {\textstyle v_{j}}, with i > j) defined by

{\displaystyle B_{ve}=\left\{{\begin{array}{rl}1,&{\text{if }}v=v_{i}\\-1,&{\text{if }}v=v_{j}\\0,&{\text{otherwise}}.\end{array}}\right.}

We now also define a diagonal {\textstyle |e|\times |e|} matrix W containing the edge weights. Even though the edges in the definition of B are technically directed, their directions can be arbitrary, still resulting in the same symmetric {\textstyle |v|\times |v|} Laplacian matrix L defined as

{\displaystyle L=BWB^{\textsf {T}}}

where {\textstyle B^{\textsf {T}}} is the matrix transpose of B.

The construction is illustrated in the following example, where every edge {\textstyle e_{i}} is assigned the weight value i, with {\textstyle i=1,2,3,4}.

Undirected graph: (figure omitted)
Incidence matrix: {\textstyle \left({\begin{array}{rrrr}1&1&1&0\\-1&0&0&0\\0&-1&0&1\\0&0&-1&-1\\\end{array}}\right)}
Edge weights: {\textstyle \left({\begin{array}{rrrr}1&0&0&0\\0&2&0&0\\0&0&3&0\\0&0&0&4\\\end{array}}\right)}
Laplacian matrix: {\textstyle \left({\begin{array}{rrrr}6&-1&-2&-3\\-1&1&0&0\\-2&0&6&-4\\-3&0&-4&7\\\end{array}}\right)}
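
The product L = BWB^T for this example can be reproduced with a brief NumPy sketch (illustrative only):

```python
import numpy as np

# Incidence matrix B and edge-weight matrix W from the example above
B = np.array([[ 1,  1,  1,  0],
              [-1,  0,  0,  0],
              [ 0, -1,  0,  1],
              [ 0,  0, -1, -1]])
W = np.diag([1, 2, 3, 4])   # edge e_i carries weight i

L = B @ W @ B.T
print(L)   # reproduces the weighted Laplacian matrix shown above
```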

Symmetric Laplacian for a directed graph


Just like for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph first, before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as a sum of the adjacency matrix {\displaystyle A} of the original directed graph and its matrix transpose {\displaystyle A^{T}}, as in the following example:

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&1\\0&0&1\\1&0&0\\\end{array}}\right)}
Symmetrized adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&2\\1&0&1\\2&1&0\\\end{array}}\right)}
Symmetric Laplacian matrix: {\textstyle \left({\begin{array}{rrr}3&-1&-2\\-1&2&-1\\-2&-1&3\\\end{array}}\right)}

where the zero and one entries of {\displaystyle A} are treated as numerical, rather than logical, values (in contrast to the case of simple graphs), which explains the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values; e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2.

Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the indegree and outdegree, as in the following example:

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&1\\0&0&1\\1&0&0\\\end{array}}\right)}
Out-Degree matrix: {\textstyle \left({\begin{array}{rrr}2&0&0\\0&1&0\\0&0&1\\\end{array}}\right)}
Out-Degree Laplacian: {\textstyle \left({\begin{array}{rrr}2&-1&-1\\0&1&-1\\-1&0&1\\\end{array}}\right)}
In-Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&1&0\\0&0&2\\\end{array}}\right)}
In-Degree Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1&-1\\0&1&-1\\-1&0&2\\\end{array}}\right)}

The sum of the out-degree Laplacian transposed and the in-degree Laplacian equals the symmetric Laplacian matrix.
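
This identity can be checked numerically with a short NumPy sketch (illustrative, using the example adjacency matrix above):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

L_out = np.diag(A.sum(axis=1)) - A           # out-degree Laplacian
L_in  = np.diag(A.sum(axis=0)) - A           # in-degree Laplacian
A_sym = A + A.T
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym   # symmetric Laplacian of the symmetrized graph

assert np.allclose(L_out.T + L_in, L_sym)
```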

Laplacian matrix normalization


The goal of normalization is, like for simple graphs, to make the diagonal entries of the Laplacian matrix all unit, also scaling the off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges with large weights, just as well as because of a large number of connected edges with unit weights.

Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted for calculation of the normalization factors.

Symmetrically normalized Laplacian


The symmetrically normalized Laplacian is defined as

{\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=I-(D^{+})^{1/2}A(D^{+})^{1/2},}

where L is the unnormalized Laplacian, A is the adjacency matrix, D is the degree matrix, and {\displaystyle D^{+}} is the Moore–Penrose inverse. Since the degree matrix D is diagonal, its reciprocal square root {\textstyle (D^{+})^{1/2}} is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of D. If all the edge weights are nonnegative then all the degree values are automatically also nonnegative and so every degree value has a unique positive square root. To avoid the division by zero, vertices with zero degrees are excluded from the process of the normalization, as in the following example:

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&0\\4&0&0\\0&0&0\\\end{array}}\right)}
In-Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&4&0\\0&0&0\\\end{array}}\right)}
In-Degree normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1/2&0\\-2&1&0\\0&0&0\\\end{array}}\right)}
Out-Degree matrix: {\textstyle \left({\begin{array}{rrr}4&0&0\\0&1&0\\0&0&0\\\end{array}}\right)}
Out-Degree normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1/2&0\\-2&1&0\\0&0&0\\\end{array}}\right)}

The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix A is symmetric and the diagonal entries of D are nonnegative, in which case we can use the term the symmetric normalized Laplacian.

The symmetric normalized Laplacian matrix can also be written as

{\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=(D^{+})^{1/2}BWB^{\textsf {T}}(D^{+})^{1/2}=SS^{T}}

using the weightless {\textstyle |v|\times |e|} incidence matrix B and the diagonal {\textstyle |e|\times |e|} matrix W containing the edge weights, and defining the new {\textstyle |v|\times |e|} weighted incidence matrix {\textstyle S=(D^{+})^{1/2}BW^{{1}/{2}}} whose rows are indexed by the vertices and whose columns are indexed by the edges of G, such that each column corresponding to an edge e = {u, v} has an entry {\textstyle {\frac {1}{\sqrt {d_{u}}}}} in the row corresponding to u, an entry {\textstyle -{\frac {1}{\sqrt {d_{v}}}}} in the row corresponding to v, and has 0 entries elsewhere.
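
A small NumPy sketch (illustrative) verifying the factorization L^sym = SS^T with S = (D^+)^{1/2}BW^{1/2} for the weighted example above, which has no isolated vertices:

```python
import numpy as np

B = np.array([[ 1,  1,  1,  0],
              [-1,  0,  0,  0],
              [ 0, -1,  0,  1],
              [ 0,  0, -1, -1]], dtype=float)
W = np.diag([1.0, 2.0, 3.0, 4.0])

L = B @ W @ B.T
deg = np.diag(L)                          # weighted vertex degrees (all positive here)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))

S = D_inv_sqrt @ B @ np.sqrt(W)           # weighted incidence matrix S = (D^+)^{1/2} B W^{1/2}
L_sym = D_inv_sqrt @ L @ D_inv_sqrt

assert np.allclose(S @ S.T, L_sym)
```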

Random walk normalized Laplacian


The random walk normalized Laplacian is defined as

{\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A}

where D is the degree matrix. Since the degree matrix D is diagonal, its inverse {\textstyle D^{+}} is simply defined as a diagonal matrix, having diagonal entries which are the reciprocals of the corresponding diagonal entries of D. For the isolated vertices (those with degree 0), a common choice is to set the corresponding element {\textstyle L_{i,i}^{\text{rw}}} to 0. The matrix elements of {\textstyle L^{\text{rw}}} are given by

{\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if}}\ i=j\ {\mbox{and}}\ \deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}

The name of the random-walk normalized Laplacian comes from the fact that this matrix is {\textstyle L^{\text{rw}}=I-P}, where {\textstyle P=D^{+}A} is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let {\textstyle e_{i}} denote the i-th standard basis vector. Then {\textstyle x=e_{i}P} is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex {\textstyle i}; i.e., {\textstyle x_{j}=\mathbb {P} \left(v_{i}\to v_{j}\right)}. More generally, if the vector {\textstyle x} is a probability distribution of the location of a random walker on the vertices of the graph, then {\textstyle x'=xP^{t}} is the probability distribution of the walker after {\textstyle t} steps.
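
A minimal NumPy sketch (illustrative) of the random-walk interpretation, propagating a probability vector with the transition matrix P = D^+A on the 3-vertex path graph used earlier:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
P = np.diag(1.0 / A.sum(axis=1)) @ A   # transition matrix P = D^+ A (rows sum to 1)

x = np.array([1.0, 0.0, 0.0])          # walker starts at the first vertex
for _ in range(3):
    x = x @ P                          # distribution after each step
print(x)                               # probability distribution after 3 steps
```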

The random walk normalized Laplacian can also be called the left normalized Laplacian {\displaystyle L^{\text{rw}}:=D^{+}L} since the normalization is performed by multiplying the Laplacian by the normalization matrix {\displaystyle D^{+}} on the left. It has each row summing to zero since {\displaystyle P=D^{+}A} is right stochastic, assuming all the weights are non-negative.

In the less commonly used right normalized Laplacian {\displaystyle LD^{+}=I-AD^{+}}, each column sums to zero since {\displaystyle AD^{+}} is left stochastic.

For a non-symmetric adjacency matrix of a directed graph, one also needs to choose the indegree or outdegree for normalization:

Adjacency matrix: {\textstyle \left({\begin{array}{rrr}0&1&0\\0&0&2\\1&0&0\\\end{array}}\right)}
Out-Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&2&0\\0&0&1\\\end{array}}\right)}
Out-Degree left normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1&0\\0&1&-1\\-1&0&1\\\end{array}}\right)}
In-Degree matrix: {\textstyle \left({\begin{array}{rrr}1&0&0\\0&1&0\\0&0&2\\\end{array}}\right)}
In-Degree right normalized Laplacian: {\textstyle \left({\begin{array}{rrr}1&-1&0\\0&1&-1\\-1&0&1\\\end{array}}\right)}

The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic {\displaystyle D_{\text{out}}^{+}A}, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic {\displaystyle AD_{\text{in}}^{+}}.

Negative weights


Negative weights present several challenges for normalization:

  • The presence of negative weights may naturally result in zero row- and/or column-sums for non-isolated vertices. A vertex with a large row-sum of positive weights and an equally large (in absolute value) row-sum of negative weights, together summing up to zero, could be considered a heavy node and both large values scaled, while the diagonal entry remains zero, like for an isolated vertex.
  • Negative weights may also give negative row- and/or column-sums, so that the corresponding diagonal entry in the non-normalized Laplacian matrix would be negative and a positive square root needed for the symmetric normalization would not exist.
  • Arguments can be made to take the absolute value of the row- and/or column-sums for the purpose of normalization, thus treating a possible value -1 as a legitimate unit entry of the main diagonal of the normalized Laplacian matrix.

Properties


For an (undirected) graph G and its Laplacian matrix L with eigenvalues {\textstyle \lambda _{0}\leq \lambda _{1}\leq \cdots \leq \lambda _{n-1}}: writing {\textstyle L=M^{\textsf {T}}M}, where {\textstyle M=B^{\textsf {T}}} is the transpose of the oriented incidence matrix defined above, each eigenvalue {\textstyle \lambda _{i}} with corresponding unit eigenvector {\textstyle \mathbf {v} _{i}} satisfies

{\displaystyle {\begin{aligned}\lambda _{i}&=\mathbf {v} _{i}^{\textsf {T}}L\mathbf {v} _{i}\\&=\mathbf {v} _{i}^{\textsf {T}}M^{\textsf {T}}M\mathbf {v} _{i}\\&=\left(M\mathbf {v} _{i}\right)^{\textsf {T}}\left(M\mathbf {v} _{i}\right).\\\end{aligned}}}

Because {\textstyle \lambda _{i}} can be written as the inner product of the vector {\textstyle M\mathbf {v} _{i}} with itself, this shows that {\textstyle \lambda _{i}\geq 0} and so the eigenvalues of {\textstyle L} are all non-negative.

  • All eigenvalues of the normalized symmetric Laplacian satisfy 0 = μ0 ≤ … ≤ μn−1 ≤ 2. These eigenvalues (known as the spectrum of the normalized Laplacian) relate well to other graph invariants for general graphs.[1]
  • One can check that:
{\displaystyle L^{\text{rw}}=I-D^{-{\frac {1}{2}}}\left(I-L^{\text{sym}}\right)D^{\frac {1}{2}}},

i.e., {\textstyle L^{\text{rw}}} is similar to the normalized Laplacian {\textstyle L^{\text{sym}}}. For this reason, even if {\textstyle L^{\text{rw}}} is in general not symmetric, it has real eigenvalues, exactly the same as the eigenvalues of the normalized symmetric Laplacian {\textstyle L^{\text{sym}}}.
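
The similarity of L^rw and L^sym, and hence the equality of their eigenvalues, can be checked with a short NumPy sketch (illustrative only):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A

L_rw  = np.diag(1.0 / deg) @ L                                          # not symmetric in general
L_sym = np.diag(1.0 / np.sqrt(deg)) @ L @ np.diag(1.0 / np.sqrt(deg))   # symmetric

ev_rw  = np.sort(np.linalg.eigvals(L_rw).real)
ev_sym = np.sort(np.linalg.eigvalsh(L_sym))
assert np.allclose(ev_rw, ev_sym)   # similar matrices share the same eigenvalues
```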

Interpretation as the discrete Laplace operator approximating the continuous Laplacian


The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian operator obtained by the finite difference method (see Discrete Poisson equation).[2] In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary. Such an interpretation allows one, e.g., to generalize the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of an infinite size.

Generalizations and extensions of the Laplacian matrix


Generalized Laplacian


The generalized Laplacian {\displaystyle Q} is defined as:[3]

{\displaystyle {\begin{cases}Q_{i,j}<0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\Q_{i,j}=0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is not adjacent to }}v_{j}\\{\mbox{any number}}&{\mbox{otherwise}}.\end{cases}}}

Notice that the ordinary Laplacian is a generalized Laplacian.

Admittance matrix of an AC circuit


The Laplacian of a graph was first introduced to model electrical networks. In an alternating current (AC) electrical network, real-valued resistances are replaced by complex-valued impedances. The weight of edge (i, j) is, by convention, minus the reciprocal of the impedance directly between i and j. In models of such networks, the entries of the adjacency matrix are complex, but the Kirchhoff matrix remains symmetric, rather than being Hermitian. Such a matrix is usually called an "admittance matrix", denoted {\displaystyle Y}, rather than a "Laplacian". This is one of the rare applications that give rise to complex symmetric matrices.

Adjacency matrix: {\textstyle \left({\begin{array}{rrrr}0&i&0&0\\i&0&1-2i&0\\0&1-2i&0&1\\0&0&1&0\\\end{array}}\right)}
Weighted degree matrix: {\textstyle \left({\begin{array}{rrrr}i&0&0&0\\0&1-i&0&0\\0&0&2-2i&0\\0&0&0&1\\\end{array}}\right)}
Admittance matrix: {\textstyle \left({\begin{array}{rrrr}-i&i&0&0\\i&-1+i&1-2i&0\\0&1-2i&-2+2i&1\\0&0&1&-1\\\end{array}}\right)}

Magnetic Laplacian


There are other situations in which entries of the adjacency matrix are complex-valued, and the Laplacian does become a Hermitian matrix. The magnetic Laplacian for a directed graph with real weights {\displaystyle w_{ij}} is constructed as the Hadamard product of the real symmetric matrix of the symmetrized Laplacian and the Hermitian phase matrix with the complex entries

{\displaystyle \gamma _{q}(i,j)=e^{i2\pi q(w_{ij}-w_{ji})}}

which encode the edge direction into the phase in the complex plane. In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator that describes the phenomenology of a free charged particle on a graph, which is subject to the action of a magnetic field, and the parameter {\displaystyle q} is called the electric charge.[4] In the following example {\displaystyle q=1/4}:

Adjacency matrix: {\textstyle \left({\begin{array}{rrrr}0&1&0&0\\1&0&1&0\\0&0&0&0\\0&0&1&0\\\end{array}}\right)}
Symmetrized Laplacian: {\textstyle \left({\begin{array}{rrrr}2&-2&0&0\\-2&3&-1&0\\0&-1&2&-1\\0&0&-1&1\\\end{array}}\right)}
Phase matrix: {\textstyle \left({\begin{array}{rrrr}1&1&1&1\\1&1&i&1\\1&-i&1&-i\\1&1&i&1\\\end{array}}\right)}
Magnetic Laplacian: {\textstyle \left({\begin{array}{rrrr}2&-2&0&0\\-2&3&-i&0\\0&i&2&i\\0&0&-i&1\\\end{array}}\right)}
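
A short NumPy sketch (illustrative) reproducing the magnetic Laplacian of this example and checking that it is Hermitian:

```python
import numpy as np

q = 0.25   # electric charge parameter from the example above
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

A_sym = A + A.T                                  # symmetrized weights
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym       # symmetrized Laplacian
Gamma = np.exp(1j * 2 * np.pi * q * (A - A.T))   # phase matrix gamma_q(i, j), entrywise
L_mag = L_sym * Gamma                            # Hadamard (entrywise) product

print(L_mag)
assert np.allclose(L_mag, L_mag.conj().T)        # the magnetic Laplacian is Hermitian
```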

Deformed Laplacian


The deformed Laplacian is commonly defined as

{\displaystyle \Delta (s)=I-sA+s^{2}(D-I)}

where {\textstyle I} is the identity matrix, {\textstyle A} is the adjacency matrix, {\textstyle D} is the degree matrix, and {\textstyle s} is a (complex-valued) number.[5]
The standard Laplacian is just {\textstyle \Delta (1)} and {\textstyle \Delta (-1)=D+A} is the signless Laplacian.
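
A minimal NumPy sketch (illustrative) checking the two special cases, Delta(1) = D - A and Delta(-1) = D + A, on a small example graph:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
I = np.eye(3)

def deformed_laplacian(s):
    return I - s * A + s**2 * (D - I)

assert np.allclose(deformed_laplacian(1.0),  D - A)   # standard Laplacian
assert np.allclose(deformed_laplacian(-1.0), D + A)   # signless Laplacian
```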

Signless Laplacian


The signless Laplacian is defined as

{\displaystyle Q=D+A}

where {\displaystyle D} is the degree matrix, and {\displaystyle A} is the adjacency matrix.[6] Like the signed Laplacian {\displaystyle L}, the signless Laplacian {\displaystyle Q} is also positive semi-definite, as it can be factored as

{\displaystyle Q=RR^{\textsf {T}}}

where {\textstyle R} is the unoriented incidence matrix (each column has entries 1 in the rows of the two endpoints of the corresponding edge). {\displaystyle Q} has a 0-eigenvector if and only if the graph has a bipartite connected component (isolated vertices being bipartite connected components). This can be shown as

{\displaystyle \mathbf {x} ^{\textsf {T}}Q\mathbf {x} =\mathbf {x} ^{\textsf {T}}RR^{\textsf {T}}\mathbf {x} \implies R^{\textsf {T}}\mathbf {x} =\mathbf {0} .}

This has a solution where {\displaystyle \mathbf {x} \neq \mathbf {0} } if and only if the graph has a bipartite connected component.
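
A short NumPy sketch (illustrative; the path and triangle graphs used here are assumptions chosen for illustration) contrasting a bipartite and a non-bipartite graph:

```python
import numpy as np

def signless_laplacian(A):
    return np.diag(A.sum(axis=1)) + A

path     = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # bipartite
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # not bipartite

print(np.linalg.eigvalsh(signless_laplacian(path)).min())      # ~0: bipartite component exists
print(np.linalg.eigvalsh(signless_laplacian(triangle)).min())  # 1.0: no bipartite component
```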

Directed multigraphs


An analogue of the Laplacian matrix can be defined for directed multigraphs.[7] In this case the Laplacian matrix L is defined as

{\displaystyle L=D-A}

where D is a diagonal matrix with D_{i,i} equal to the outdegree of vertex i and A is a matrix with A_{i,j} equal to the number of edges from i to j (including loops).

Open source software implementations


Application software

  • scikit-learn Spectral Clustering[11]
  • PyGSP: Graph Signal Processing in Python[12]
  • megaman: Manifold Learning for Millions of Points[13]
  • smoothG[14]
  • Laplacian Change Point Detection for Dynamic Graphs (KDD 2020)[15]
  • LaplacianOpt (A Julia Package for Maximizing Laplacian's Second Eigenvalue of Weighted Graphs)[16]
  • LigMG (Large Irregular Graph MultiGrid)[17]
  • Laplacians.jl[18]


References

  1. ^ a b c Chung, Fan (1997) [1992]. Spectral Graph Theory. American Mathematical Society. ISBN 978-0821803158.
  2. ^ Smola, Alexander J.; Kondor, Risi (2003), "Kernels and regularization on graphs", Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24–27, 2003, Proceedings, Lecture Notes in Computer Science, vol. 2777, Springer, pp. 144–158, CiteSeerX 10.1.1.3.7020, doi:10.1007/978-3-540-45167-9_12, ISBN 978-3-540-40720-1.
  3. ^ Godsil, C.; Royle, G. (2001). Algebraic Graph Theory, Graduate Texts in Mathematics. Springer-Verlag.
  4. ^ Satoshi Furutani; Toshiki Shibahara; Mitsuaki Akiyama; Kunio Hato; Masaki Aida (2020). Graph Signal Processing for Directed Graphs based on the Hermitian Laplacian (PDF). ECML PKDD 2019: Machine Learning and Knowledge Discovery in Databases. pp. 447–463. doi:10.1007/978-3-030-46150-8_27.
  5. ^ Morbidi, F. (2013). "The Deformed Consensus Protocol" (PDF). Automatica. 49 (10): 3049–3055. doi:10.1016/j.automatica.2013.07.006. S2CID 205767404.
  6. ^ Cvetković, Dragoš; Simić, Slobodan K. (2010). "Towards a Spectral Theory of Graphs Based on the Signless Laplacian, III". Applicable Analysis and Discrete Mathematics. 4 (1): 156–166. doi:10.2298/AADM1000001C. ISSN 1452-8630. JSTOR 43671298.
  7. ^ Chaiken, S.; Kleitman, D. (1978). "Matrix Tree Theorems". Journal of Combinatorial Theory, Series A. 24 (3): 377–381. doi:10.1016/0097-3165(78)90067-5. ISSN 0097-3165.
  8. ^ "SciPy". GitHub. 4 October 2023.
  9. ^ "NetworkX". GitHub. 4 October 2023.
  10. ^ "Julia". GitHub. 4 October 2023.
  11. ^ "2.3. Clustering".
  12. ^ "PyGSP: Graph Signal Processing in Python". GitHub. 23 March 2022.
  13. ^ "Megaman: Manifold Learning for Millions of Points". GitHub. 14 March 2022.
  14. ^ "SmoothG". GitHub. 17 September 2020.
  15. ^ "Announcing Our Paper at KDD 2020". 2 July 2020.
  16. ^ "Harshangrjn/LaplacianOpt.jl". GitHub. 2 February 2022.
  17. ^ "LigMG (Large Irregular Graph MultiGrid) – A distributed memory graph Laplacian solver for large irregular graphs". GitHub. 5 January 2022.
  18. ^ "Laplacians.jl". GitHub. 11 March 2022.