In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not within the graph.
In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected (i.e. all of its edges are bidirectional), the adjacency matrix is symmetric. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory.
The adjacency matrix of a graph should be distinguished from its incidence matrix, a different matrix representation whose elements indicate whether vertex–edge pairs are incident or not, and its degree matrix, which contains information about the degree of each vertex.
For a simple graph with vertex set U = {u1, ..., un}, the adjacency matrix is a square n × n matrix A such that its element Aij is 1 when there is an edge from vertex ui to vertex uj, and 0 when there is no edge.[1] The diagonal elements of the matrix are all 0, since edges from a vertex to itself (loops) are not allowed in simple graphs. It is also sometimes useful in algebraic graph theory to replace the nonzero elements with algebraic variables.[2] The same concept can be extended to multigraphs and graphs with loops by storing the number of edges between each two vertices in the corresponding matrix element, and by allowing nonzero diagonal elements. Loops may be counted either once (as a single edge) or twice (as two vertex–edge incidences), as long as a consistent convention is followed. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention.
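The definition above translates directly into code. The following is a minimal Python sketch (the function name and the 0-based vertex numbering are our own conventions, not part of the article):

```python
def adjacency_matrix(n, edges):
    """Build the n x n adjacency matrix of a simple undirected graph.

    Vertices are numbered 0..n-1; edges is an iterable of (i, j) pairs.
    """
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
        A[j][i] = 1  # undirected: the matrix is symmetric
    return A

# Path graph on three vertices: 0 -- 1 -- 2
A = adjacency_matrix(3, [(0, 1), (1, 2)])
# A == [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

Note that the diagonal stays all zeros and the matrix comes out symmetric, matching the simple undirected case described above.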
The adjacency matrix A of a bipartite graph whose two parts have r and s vertices can be written in the form

A = \begin{pmatrix} 0_{r,r} & B \\ B^{\mathsf{T}} & 0_{s,s} \end{pmatrix},

where B is an r × s matrix, and 0r,r and 0s,s represent the r × r and s × s zero matrices. In this case, the smaller matrix B uniquely represents the graph, and the remaining parts of A can be discarded as redundant. B is sometimes called the biadjacency matrix.
Formally, let G = (U, V, E) be a bipartite graph with parts U = {u1, ..., ur}, V = {v1, ..., vs} and edges E. The biadjacency matrix is the r × s 0–1 matrix B in which bi,j = 1 if and only if (ui, vj) ∈ E.
If G is a bipartite multigraph or weighted graph, then the elements bi,j are taken to be the number of edges between the vertices or the weight of the edge (ui, vj), respectively.
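A biadjacency matrix for the multigraph case can be sketched as follows (a hypothetical helper, assuming the two parts are numbered 0..r−1 and 0..s−1 independently):

```python
def biadjacency_matrix(r, s, edges):
    """Build the r x s biadjacency matrix B of a bipartite (multi)graph.

    Parts are U = {0, ..., r-1} and V = {0, ..., s-1}; edges are (u, v)
    pairs with u in U and v in V. Parallel edges accumulate, so for a
    simple bipartite graph every entry is 0 or 1.
    """
    B = [[0] * s for _ in range(r)]
    for u, v in edges:
        B[u][v] += 1
    return B

# Two vertices in U, three in V; the edge (1, 1) appears twice.
B = biadjacency_matrix(2, 3, [(0, 0), (0, 2), (1, 1), (1, 1)])
# B == [[1, 0, 1], [0, 2, 0]]
```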
An (a, b, c)-adjacency matrix A of a simple graph has Ai,j = a if (i, j) is an edge, b if it is not, and c on the diagonal. The Seidel adjacency matrix is a (−1, 1, 0)-adjacency matrix. This matrix is used in studying strongly regular graphs and two-graphs.[3]
The distance matrix has in position (i, j) the distance between vertices vi and vj. The distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix, but instead of telling only whether or not two vertices are connected (i.e., the connection matrix, which contains Boolean values), it gives the exact distance between them.
The convention followed here (for undirected graphs) is that each edge adds 1 to the appropriate cell in the matrix, and each loop (an edge from a vertex to itself) adds 2 to the appropriate cell on the diagonal in the matrix.[4] This allows the degree of a vertex to be easily found by taking the sum of the values in either its respective row or column in the adjacency matrix.
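The loops-count-twice convention and the resulting degree-by-row-sum property can be illustrated with a short Python sketch (function names are our own):

```python
def adjacency_with_loops(n, edges):
    """Undirected adjacency matrix; a loop (v, v) adds 2 to A[v][v]."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        if i == j:
            A[i][i] += 2  # loop counted twice: two vertex-edge incidences
        else:
            A[i][j] += 1
            A[j][i] += 1
    return A

def degree(A, v):
    """With this convention, deg(v) is simply the sum of row v."""
    return sum(A[v])

# One ordinary edge 0 -- 1 plus a loop at vertex 1.
A = adjacency_with_loops(2, [(0, 1), (1, 1)])
# A == [[0, 1], [1, 2]]; degree(A, 1) == 3 (the loop contributes 2)
```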
[Figure: a labeled graph alongside its adjacency matrix]
The adjacency matrix of a directed graph can be asymmetric. One can define the adjacency matrix of a directed graph either such that a non-zero element Aij indicates an edge from vertex i to vertex j, or such that it indicates an edge from vertex j to vertex i.
The former definition is commonly used in graph theory and social network analysis (e.g., sociology, political science, economics, psychology).[5] The latter is more common in other applied sciences (e.g., dynamical systems, physics, network science) where A is sometimes used to describe linear dynamics on graphs.[6]
Using the first definition, the in-degree of a vertex can be computed by summing the entries of the corresponding column, and the out-degree of a vertex by summing the entries of the corresponding row. When using the second definition, the in-degree of a vertex is given by the corresponding row sum and the out-degree is given by the corresponding column sum.
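Under the first definition, the row/column sums work out as in this minimal sketch (the helper names are ours):

```python
def directed_adjacency(n, edges):
    """First convention: A[i][j] = 1 when there is an edge i -> j."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
    return A

def out_degree(A, v):
    return sum(A[v])                   # row sum

def in_degree(A, v):
    return sum(row[v] for row in A)    # column sum

# Edges 0 -> 1, 0 -> 2, 2 -> 1
A = directed_adjacency(3, [(0, 1), (0, 2), (2, 1)])
# out_degree(A, 0) == 2, in_degree(A, 1) == 2
```

Swapping the roles of rows and columns gives exactly the second definition.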
[Figure: a labeled directed graph alongside its adjacency matrix]
The adjacency matrix of a complete graph contains all ones except along the diagonal where there are only zeros. The adjacency matrix of an empty graph is a zero matrix.
The adjacency matrix of an undirected simple graph is symmetric, and therefore has a complete set of real eigenvalues and an orthogonal eigenvector basis. The set of eigenvalues of a graph is the spectrum of the graph.[7] It is common to denote the eigenvalues by λ1 ≥ λ2 ≥ ⋯ ≥ λn.
The greatest eigenvalue λ1 is bounded above by the maximum degree. This can be seen as a result of the Perron–Frobenius theorem, but it can also be proved easily. Let v be an eigenvector associated to λ1 and x the entry in which v has maximum absolute value. Without loss of generality assume vx is positive, since otherwise one simply takes the eigenvector −v, also associated to λ1. Then

λ1 vx = (Av)x = Σy Ax,y vy ≤ Σy Ax,y vx = deg(x) vx ≤ Δ vx,

where Δ is the maximum degree; dividing by vx gives λ1 ≤ Δ.
For d-regular graphs, d is the first eigenvalue of A for the vector v = (1, ..., 1) (it is easy to check that it is an eigenvalue, and it is the maximum because of the above bound). The multiplicity of this eigenvalue is the number of connected components of G; in particular it is a simple eigenvalue for connected graphs. It can be shown that for each eigenvalue λi, its opposite −λi is also an eigenvalue of A if G is a bipartite graph.[8] In particular, −d is an eigenvalue of any d-regular bipartite graph.
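Both facts can be verified directly in exact integer arithmetic, without a numerical eigensolver. A small sketch (the helper is hypothetical, using the 4-cycle C4, which is 2-regular and bipartite):

```python
def is_eigenpair(A, v, lam):
    """Check that A v == lam * v entrywise."""
    n = len(A)
    return all(sum(A[i][j] * v[j] for j in range(n)) == lam * v[i]
               for i in range(n))

# The 4-cycle C4: vertices 0-1-2-3-0; it is 2-regular and bipartite.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]

# The all-ones vector is an eigenvector with eigenvalue d = 2.
assert is_eigenpair(C4, [1, 1, 1, 1], 2)
# C4 is bipartite, so -d = -2 is also an eigenvalue:
# alternate signs across the two parts.
assert is_eigenpair(C4, [1, -1, 1, -1], -2)
```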
The difference λ1 − λ2 is called the spectral gap and it is related to the expansion of G. It is also useful to introduce the spectral radius of A, denoted by λ(G) = max{|λi| : |λi| < d}. This number is bounded below by λ(G) ≥ 2√(d − 1) − o(1). This bound is tight for Ramanujan graphs.
Suppose two directed or undirected graphs G1 and G2 with adjacency matrices A1 and A2 are given. G1 and G2 are isomorphic if and only if there exists a permutation matrix P such that

P A1 P−1 = A2.
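Since conjugating by a permutation matrix is the same as relabeling the vertices, this condition can be checked by brute force for small graphs. A sketch (exponential in the number of vertices, so only illustrative):

```python
from itertools import permutations

def isomorphic_by_permutation(A1, A2):
    """Brute-force test for a permutation pi with A1[i][j] == A2[pi(i)][pi(j)].

    Equivalent to the existence of a permutation matrix P with
    P A1 P^-1 == A2; feasible only for small n.
    """
    n = len(A1)
    if len(A2) != n:
        return False
    for pi in permutations(range(n)):
        if all(A1[i][j] == A2[pi[i]][pi[j]]
               for i in range(n) for j in range(n)):
            return True
    return False

# Two labelings of the path on 3 vertices (center 1 vs. center 0):
P3a = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
P3b = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
# isomorphic_by_permutation(P3a, P3b) is True;
# comparing P3a against the triangle K3 gives False.
```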
In particular, A1 and A2 are similar and therefore have the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace. These can therefore serve as isomorphism invariants of graphs. However, two graphs may possess the same set of eigenvalues but not be isomorphic.[9] Such linear operators are said to be isospectral.
If A is the adjacency matrix of the directed or undirected graph G, then the matrix An (i.e., the matrix product of n copies of A) has an interesting interpretation: the element (i, j) gives the number of (directed or undirected) walks of length n from vertex i to vertex j.[10] If n is the smallest nonnegative integer such that for some i, j, the element (i, j) of An is positive, then n is the distance between vertex i and vertex j. A good example of how this is useful is in counting the number of triangles in an undirected graph G, which is exactly the trace of A3 divided by 6; in a directed graph the trace is divided by 3 instead. The division compensates for the overcounting of each triangle: in an undirected graph, each triangle is counted six times as a closed walk of length three, once for each of its three starting vertices and in each of the two directions (ijk or ikj), while in a directed graph each directed triangle is counted once per starting vertex. The adjacency matrix can be used to determine whether or not the graph is connected.
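The triangle count via trace(A3)/6 can be sketched in a few lines of Python (the helper names are ours; a small hand-rolled matrix product keeps the example dependency-free):

```python
def matmul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_triangles(A):
    """Triangles in an undirected simple graph: trace(A^3) / 6."""
    A3 = matmul(matmul(A, A), A)
    return sum(A3[i][i] for i in range(len(A))) // 6

# The complete graph K3 is a single triangle.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# count_triangles(K3) == 1
```

Intermediate powers are also informative: the (i, j) entry of A2, for instance, counts the common neighbors of i and j.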
If a directed graph has a nilpotent adjacency matrix (i.e., if there exists n such that An is the zero matrix), then it is a directed acyclic graph.[11]
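Because any walk of length n in an n-vertex graph must repeat a vertex, it suffices to test whether An is the zero matrix. A minimal sketch (helper names are ours):

```python
def matmul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_acyclic(A):
    """A directed graph on n vertices is acyclic iff A^n is the zero matrix."""
    n = len(A)
    P = A
    for _ in range(n - 1):
        P = matmul(P, A)
    return all(x == 0 for row in P for x in row)

# Chain 0 -> 1 -> 2 is acyclic; the 3-cycle 0 -> 1 -> 2 -> 0 is not.
```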
The adjacency matrix may be used as a data structure for the representation of graphs in computer programs for manipulating graphs; its entries can be stored with a Boolean data type, such as True and False in Python. The main alternative data structure, also in use for this application, is the adjacency list.[12][13]
The space needed to represent an adjacency matrix and the time needed to perform operations on it depend on the matrix representation chosen for the underlying matrix. Sparse matrix representations only store non-zero matrix entries and implicitly represent the zero entries. They can, for example, be used to represent sparse graphs without incurring the space overhead from storing the many zero entries in the adjacency matrix of the sparse graph. In the following section the adjacency matrix is assumed to be represented by an array data structure so that zero and non-zero entries are all directly represented in storage.
Because each entry in the adjacency matrix requires only one bit, it can be represented in a very compact way, occupying only |V|²/8 bytes to represent a directed graph, or (by using a packed triangular format and only storing the lower triangular part of the matrix) approximately |V|²/16 bytes to represent an undirected graph. Although slightly more succinct representations are possible, this method gets close to the information-theoretic lower bound for the minimum number of bits needed to represent all n-vertex graphs.[14] For storing graphs in text files, fewer bits per byte can be used to ensure that all bytes are text characters, for instance by using a Base64 representation.[15] Besides avoiding wasted space, this compactness encourages locality of reference. However, for a large sparse graph, adjacency lists require less storage space, because they do not waste any space representing edges that are not present.[13][16]
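The one-bit-per-entry layout for a directed graph can be sketched as follows (a hypothetical class, packing row-major into a bytearray; the |V|²/8-byte figure shows up as the buffer size):

```python
class BitAdjacencyMatrix:
    """Directed adjacency matrix packed one bit per entry in a bytearray.

    Uses ceil(n*n / 8) bytes, as opposed to at least one byte per entry
    for an ordinary two-dimensional array.
    """

    def __init__(self, n):
        self.n = n
        self.bits = bytearray((n * n + 7) // 8)

    def _locate(self, i, j):
        k = i * self.n + j          # row-major bit index
        return k // 8, k % 8        # (byte offset, bit offset)

    def add_edge(self, i, j):
        byte, bit = self._locate(i, j)
        self.bits[byte] |= 1 << bit

    def has_edge(self, i, j):
        byte, bit = self._locate(i, j)
        return bool(self.bits[byte] >> bit & 1)

# A 10-vertex graph needs ceil(100 / 8) = 13 bytes of storage.
g = BitAdjacencyMatrix(10)
g.add_edge(2, 3)
# g.has_edge(2, 3) is True; g.has_edge(3, 2) is False (directed)
```

The undirected triangular variant mentioned above would index only pairs with i > j, roughly halving the buffer again.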
An alternative form of adjacency matrix (which, however, requires a larger amount of space) replaces the numbers in each element of the matrix with pointers to edge objects (when edges are present) or null pointers (when there is no edge).[16] It is also possible to store edge weights directly in the elements of an adjacency matrix.[13]
Besides the space tradeoff, the different data structures also facilitate different operations. Finding all vertices adjacent to a given vertex in an adjacency list is as simple as reading the list, and takes time proportional to the number of neighbors. With an adjacency matrix, an entire row must instead be scanned, which takes a larger amount of time, proportional to the number of vertices in the whole graph. On the other hand, whether there is an edge between two given vertices can be determined at once with an adjacency matrix, while the adjacency list requires time proportional to the minimum degree of the two vertices.[13][16]