The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social network analysis by Paul W. Holland et al.[1] The stochastic block model is important in statistics, machine learning, and network science, where it serves as a useful benchmark for the task of recovering community structure in graph data.
The stochastic block model takes the following parameters:
- the number of vertices $n$;
- a partition of the vertex set $\{1, \ldots, n\}$ into disjoint subsets $C_1, \ldots, C_r$, called communities;
- a symmetric $r \times r$ matrix $P$ of edge probabilities.
The edge set is then sampled at random as follows: any two vertices $u \in C_i$ and $v \in C_j$ are connected by an edge with probability $P_{ij}$. An example problem is: given a graph with $n$ vertices, where the edges are sampled as described, recover the groups $C_1, \ldots, C_r$.
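The sampling procedure can be written out directly. Below is a minimal sketch in Python, assuming NumPy; the function name sample_sbm, its arguments, and the example parameters are illustrative choices, not taken from the original text.

```python
import numpy as np

def sample_sbm(sizes, P, rng=None):
    """Sample an undirected simple graph from a stochastic block model.

    sizes : community sizes, e.g. [50, 50]
    P     : symmetric matrix of edge probabilities; P[i][j] is the
            probability that a vertex in community i links to one in j
    Returns the adjacency matrix and the community label of each vertex.
    """
    rng = np.random.default_rng() if rng is None else rng
    labels = np.repeat(np.arange(len(sizes)), sizes)    # community of each vertex
    n = labels.size
    # Edge probability for every pair, read off the block of its endpoints.
    probs = np.asarray(P)[labels[:, None], labels[None, :]]
    upper = np.triu(rng.random((n, n)) < probs, k=1)    # sample each pair once
    A = (upper | upper.T).astype(int)                   # symmetric, no self-loops
    return A, labels

# Example: assortative planted partition with two communities of 50 vertices.
A, z = sample_sbm([50, 50], [[0.3, 0.05], [0.05, 0.3]])
```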

If the probability matrix is a constant, in the sense that $P_{ij} = p$ for all $i, j$, then the result is the Erdős–Rényi model. This case is degenerate (the partition into communities becomes irrelevant), but it illustrates a close relationship to the Erdős–Rényi model.
The planted partition model is the special case in which the values of the probability matrix $P$ are a constant $p$ on the diagonal and another constant $q$ off the diagonal. Thus two vertices within the same community share an edge with probability $p$, while two vertices in different communities share an edge with probability $q$. Sometimes it is this restricted model that is called the stochastic block model. The case where $p > q$ is called an assortative model, while the case $p < q$ is called disassortative.
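As a concrete illustration (a worked form of the definition above, not spelled out in the original text), with $r = 2$ communities the planted partition probability matrix is

$$P = \begin{pmatrix} p & q \\ q & p \end{pmatrix}, \qquad \text{and in general} \qquad P = (p - q)\,I + q\,J,$$

where $I$ is the $r \times r$ identity matrix and $J$ is the all-ones matrix.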
Returning to the general stochastic block model, a model is called strongly assortative if $P_{ii} > P_{jk}$ whenever $j \neq k$: all diagonal entries dominate all off-diagonal entries. A model is called weakly assortative if $P_{ii} > P_{ij}$ whenever $i \neq j$: each diagonal entry is only required to dominate the rest of its own row and column.[2] Disassortative forms of this terminology exist, obtained by reversing all inequalities. For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.[2]
Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery.
The goal of detection algorithms is simply to determine, given a sampled graph, whether the graph has latent community structure. More precisely, a graph might be generated, with some known prior probability, from a known stochastic block model, and otherwise from a similar Erdős–Rényi model. The algorithmic task is to correctly identify which of these two underlying models generated the graph.[3]
In partial recovery, the goal is to approximately determine the latent partition into communities, in the sense of finding a partition that is correlated with the true partition significantly better than a random guess.[4]
In exact recovery, the goal is to recover the latent partition into communities exactly. The community sizes and probability matrix may be known[5] or unknown.[6]
Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds.[7][3][8] Suppose that we allow the size $n$ of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate as $n$ increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used.
For partial recovery, the appropriate scaling is to take $P_{ij} = \tilde{P}_{ij}/n$ for a fixed matrix $\tilde{P}$, resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with probability matrix

$$P = \begin{pmatrix} a/n & b/n \\ b/n & a/n \end{pmatrix},$$

partial recovery is feasible[4] with probability $1 - o(1)$ whenever $(a - b)^2 > 2(a + b)$, whereas any estimator fails[3] partial recovery with probability $1 - o(1)$ whenever $(a - b)^2 < 2(a + b)$.
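A quick numerical check of this threshold, with illustrative values not taken from the original text:

$$a = 7,\ b = 2:\quad (a - b)^2 = 25 > 2(a + b) = 18,\ \text{so partial recovery is feasible;}$$
$$a = 4,\ b = 2:\quad (a - b)^2 = 4 < 2(a + b) = 12,\ \text{so no estimator succeeds.}$$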
For exact recovery, the appropriate scaling is to take $P_{ij} = \tilde{P}_{ij} \ln n / n$, resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model with two equal-sized communities and probabilities $p = a \ln n / n$ and $q = b \ln n / n$, the threshold lies at $\sqrt{a} - \sqrt{b} = \sqrt{2}$. In fact, the exact recovery threshold is known for the fully general stochastic block model.[5]
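Again with illustrative values not taken from the original text, for two equal-sized communities:

$$a = 9,\ b = 1:\quad \sqrt{a} - \sqrt{b} = 2 > \sqrt{2},\ \text{so exact recovery is feasible;}$$
$$a = 4,\ b = 1:\quad \sqrt{a} - \sqrt{b} = 1 < \sqrt{2},\ \text{so exact recovery is impossible.}$$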
In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection that is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case.
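For concreteness, the likelihood being maximized has the following standard form (not spelled out in the original text): given an observed adjacency matrix $A$ and a candidate labelling $z$ assigning each vertex to a community,

$$\log \Pr(A \mid z, P) = \sum_{u < v} \Big( A_{uv} \log P_{z_u z_v} + (1 - A_{uv}) \log\big(1 - P_{z_u z_v}\big) \Big).$$

In the assortative planted partition case with known $p > q$, maximizing this over balanced labellings amounts to minimizing the number of edges crossing between the two groups, which is the minimum bisection problem.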
However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms include spectral clustering of the vertices,[9][4][5][10] semidefinite programming,[2][8] forms of belief propagation,[7][11] and community detection,[12] among others.
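As a rough illustration of the spectral approach, here is a minimal sketch in Python for the two-community case, assuming NumPy; the function name and the sign-splitting rule are illustrative simplifications rather than any specific published algorithm.

```python
import numpy as np

def spectral_two_communities(A):
    """Rough spectral bipartition of a graph into two communities.

    A : symmetric 0/1 adjacency matrix (NumPy array).
    Returns a label in {0, 1} for each vertex.  In the assortative
    regime the eigenvector of the second-largest eigenvalue of A
    tends to separate the two planted communities by sign.
    """
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    second = eigvecs[:, -2]                # eigenvector of 2nd-largest eigenvalue
    return (second > 0).astype(int)        # split vertices by sign

# Example usage, together with the sample_sbm sketch above:
# A, z = sample_sbm([50, 50], [[0.3, 0.05], [0.05, 0.3]])
# labels = spectral_two_communities(A)
```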
Several variants of the model exist. One minor tweak allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition.[5] More significant variants include the degree-corrected stochastic block model,[13] the hierarchical stochastic block model,[14] the geometric block model,[15] the censored block model, and the mixed-membership block model.[16]
Stochastic block models have been recognised as topic models on bipartite networks.[17] In a network of documents and words, the stochastic block model can identify topics: groups of words with a similar meaning.
Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be extended to signed graphs in a straightforward way, by assigning both positive and negative edge weights, or equivalently by taking the difference of the adjacency matrices of two stochastic block models.[18]
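A minimal sketch of the second construction mentioned above, reusing the hypothetical sample_sbm helper from the earlier sketch; the parameters are illustrative.

```python
# Signed graph as a difference of two block models (illustrative sketch).
A_pos, z = sample_sbm([50, 50], [[0.3, 0.05], [0.05, 0.3]])  # favorable edges
A_neg, _ = sample_sbm([50, 50], [[0.05, 0.3], [0.3, 0.05]])  # adverse edges
S = A_pos - A_neg   # signed weights in {-1, 0, +1}; coinciding edges cancel
```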
GraphChallenge[19] encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data, with the aim of discovering relationships between events as they unfold in the field. Streaming stochastic block partition has been one of the challenges since 2017.[20] Spectral clustering has demonstrated outstanding performance compared to the original and even an improved[21] base algorithm, matching its quality of clusters while being multiple orders of magnitude faster.[22][23]