
Kruskal's algorithm

From Wikipedia, the free encyclopedia
Minimum spanning forest algorithm that greedily adds edges
Not to be confused with Kruskal's principle.

Kruskal's algorithm
Animation of Kruskal's algorithm on a complete graph with weights based on Euclidean distance
Class: Minimum spanning tree algorithm
Data structure: Graph
Worst-case performance: O(|E| log |V|)

Kruskal's algorithm[1] finds a minimum spanning forest of an undirected edge-weighted graph. If the graph is connected, it finds a minimum spanning tree. It is a greedy algorithm that in each step adds to the forest the lowest-weight edge that will not form a cycle.[2] The key steps of the algorithm are sorting and the use of a disjoint-set data structure to detect cycles. Its running time is dominated by the time to sort all of the graph edges by their weight.

A minimum spanning tree of a connected weighted graph is a connected subgraph, without cycles, for which the sum of the weights of all the edges in the subgraph is minimal. For a disconnected graph, a minimum spanning forest is composed of a minimum spanning tree for each connected component.

This algorithm was first published by Joseph Kruskal in 1956,[3] and was rediscovered soon afterward by Loberman & Weinberger (1957).[4] Other algorithms for this problem include Prim's algorithm, Borůvka's algorithm, and the reverse-delete algorithm.

Algorithm


The algorithm performs the following steps:

  • Create a forest (a set of trees) initially consisting of a separate single-vertex tree for each vertex in the input graph.
  • Sort the graph edges by weight.
  • Loop through the edges of the graph, in ascending sorted order by their weight. For each edge:
    • Test whether adding the edge to the current forest would create a cycle.
    • If not, add the edge to the forest, combining two trees into a single tree.

At the termination of the algorithm, the forest forms a minimum spanning forest of the graph. If the graph is connected, the forest has a single component and forms a minimum spanning tree.

Pseudocode


The following code is implemented with a disjoint-set data structure. It represents the forest F as a set of undirected edges, and uses the disjoint-set data structure to efficiently determine whether two vertices are part of the same tree.

function Kruskal(Graph G) is
    F := ∅
    for each v in G.Vertices do
        MAKE-SET(v)
    for each {u, v} in G.Edges ordered by increasing weight({u, v}) do
        if FIND-SET(u) ≠ FIND-SET(v) then
            F := F ∪ { {u, v} }
            UNION(FIND-SET(u), FIND-SET(v))
    return F
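The pseudocode above can be sketched as a short runnable Python function. The function and variable names below are this sketch's own choices, not part of the original pseudocode; the disjoint-set structure is a parent/rank map with path halving and union by rank:

```python
def kruskal(vertices, edges):
    """Return a minimum spanning forest as a list of (weight, u, v) edges.

    vertices: iterable of hashable vertex labels.
    edges: iterable of (weight, u, v) tuples for an undirected weighted graph.
    """
    # MAKE-SET(v) for every vertex: each vertex starts as its own tree.
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):  # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):  # UNION by rank
        x, y = find(x), find(y)
        if rank[x] < rank[y]:
            x, y = y, x
        parent[y] = x
        if rank[x] == rank[y]:
            rank[x] += 1

    forest = []
    for w, u, v in sorted(edges):  # edges in ascending order of weight
        if find(u) != find(v):     # adding {u, v} would not create a cycle
            forest.append((w, u, v))
            union(u, v)
    return forest
```

On the example graph shown later in this article (with the edge weights given there), this selects the six edges AD, CE, DF, AB, BE and EG.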

Complexity


For a graph with E edges and V vertices, Kruskal's algorithm can be shown to run in O(E log E) time with simple data structures. This time bound is often written instead as O(E log V), which is equivalent for graphs with no isolated vertices, because for these graphs V / 2 ≤ E < V², and the logarithms of V and V² are again within a constant factor of each other.

To achieve this bound, first sort the edges by weight using a comparison sort in O(E log E) time. Once sorted, it is possible to loop through the edges in sorted order in constant time per edge. Next, use a disjoint-set data structure, with a set of vertices for each component, to keep track of which vertices are in which components. Creating this structure, with a separate set for each vertex, takes V operations and O(V) time. The final iteration through all edges performs two find operations and possibly one union operation per edge. These operations take amortized O(α(V)) time per operation, giving worst-case total time O(E α(V)) for this loop, where α is the extremely slowly growing inverse Ackermann function. This part of the time bound is much smaller than the time for the sorting step, so the total time for the algorithm can be simplified to the time for the sorting step.

In cases where the edges are already sorted, or where they have small enough integer weight to allow integer sorting algorithms such as counting sort or radix sort to sort them in linear time, the disjoint set operations are the slowest remaining part of the algorithm and the total time is O(E α(V)).
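As a concrete illustration of this case, here is a hypothetical Python sketch that replaces the comparison sort with a counting sort over small integer weights (the function name, parameters, and bucket scheme are this example's own; the union-find is simplified to parent pointers with path halving):

```python
def kruskal_counting_sort(n, edges, max_weight):
    """Kruskal's algorithm using counting sort for small integer weights.

    n: number of vertices, labeled 0..n-1.
    edges: list of (u, v, w) tuples with integer 0 <= w <= max_weight.
    Sorting takes O(E + max_weight) time instead of O(E log E);
    the union-find loop then dominates at O(E α(n)).
    """
    # Counting sort: bucket edges by weight, then scan buckets in order.
    buckets = [[] for _ in range(max_weight + 1)]
    for u, v, w in edges:
        buckets[w].append((u, v, w))

    parent = list(range(n))

    def find(x):  # path halving keeps the trees shallow
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    for bucket in buckets:            # ascending weight order
        for u, v, w in bucket:
            ru, rv = find(u), find(v)
            if ru != rv:              # edge joins two different trees
                parent[ru] = rv
                forest.append((u, v, w))
    return forest
```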

Example

The images in this example illustrate the following steps:

  1. AD and CE are the shortest edges, with length 5, and AD has been arbitrarily chosen, so it is highlighted.
  2. CE is now the shortest edge that does not form a cycle, with length 5, so it is highlighted as the second edge.
  3. The next edge, DF with length 6, is highlighted using much the same method.
  4. The next-shortest edges are AB and BE, both with length 7. AB is chosen arbitrarily, and is highlighted. The edge BD has been highlighted in red, because there already exists a path (in green) between B and D, so it would form a cycle (ABD) if it were chosen.
  5. The process continues to highlight the next-smallest edge, BE with length 7. Many more edges are highlighted in red at this stage: BC because it would form the loop BCE, DE because it would form the loop DEBA, and FE because it would form FEBAD.
  6. Finally, the process finishes with the edge EG of length 9, and the minimum spanning tree is found.

Proof of correctness


The proof consists of two parts. First, it is proved that the algorithm produces aspanning tree. Second, it is proved that the constructed spanning tree is of minimal weight.

Spanning tree


Let G be a connected, weighted graph and let Y be the subgraph of G produced by the algorithm. Y cannot have a cycle, as by definition an edge is not added if it results in a cycle. Y cannot be disconnected, since the first encountered edge that joins two components of Y would have been added by the algorithm. Thus, Y is a spanning tree of G.

Minimality


We show that the following proposition P is true by induction: If F is the set of edges chosen at any stage of the algorithm, then there is some minimum spanning tree that contains F and none of the edges rejected by the algorithm.

  • Clearly P is true at the beginning, when F is empty: any minimum spanning tree will do, and there exists one because a weighted connected graph always has a minimum spanning tree.
  • Now assume P is true for some non-final edge set F and let T be a minimum spanning tree that contains F.
    • If the next chosen edge e is also in T, then P is true for F + e.
    • Otherwise, if e is not in T then T + e has a cycle C. The cycle C contains edges which do not belong to F + e, since e does not form a cycle when added to F but does in T. Let f be an edge which is in C but not in F + e. Note that f also belongs to T, since f belongs to T + e but not F + e. By P, f has not been considered by the algorithm. f must therefore have a weight at least as large as e. Then T − f + e is a tree, and it has the same or smaller weight than T. However, since T is a minimum spanning tree, T − f + e must have the same weight as T; otherwise T would not be a minimum spanning tree. So T − f + e is a minimum spanning tree containing F + e, and again P holds.
  • Therefore, by the principle of induction, P holds when F has become a spanning tree, which is only possible if F is a minimum spanning tree itself.

Parallel algorithm


Kruskal's algorithm is inherently sequential and hard to parallelize. It is, however, possible to perform the initial sorting of the edges in parallel or, alternatively, to use a parallel implementation of a binary heap to extract the minimum-weight edge in every iteration.[5] As parallel sorting is possible in time O(n) on O(log n) processors,[6] the runtime of Kruskal's algorithm can be reduced to O(E α(V)), where α again is the inverse of the single-valued Ackermann function.

A variant of Kruskal's algorithm, named Filter-Kruskal, has been described by Osipov et al.[7] and is better suited for parallelization. The basic idea behind Filter-Kruskal is to partition the edges in a similar way to quicksort and filter out edges that connect vertices of the same tree to reduce the cost of sorting. The following pseudocode demonstrates this.

function filter_kruskal(G) is
    if |G.E| < kruskal_threshold:
        return kruskal(G)
    pivot = choose_random(G.E)
    E≤, E> = partition(G.E, pivot)
    A = filter_kruskal(E≤)
    E> = filter(E>)
    A = A ∪ filter_kruskal(E>)
    return A

function partition(E, pivot) is
    E≤ = ∅, E> = ∅
    foreach (u, v) in E do
        if weight(u, v) ≤ pivot then
            E≤ = E≤ ∪ {(u, v)}
        else
            E> = E> ∪ {(u, v)}
    return E≤, E>

function filter(E) is
    Ef = ∅
    foreach (u, v) in E do
        if find_set(u) ≠ find_set(v) then
            Ef = Ef ∪ {(u, v)}
    return Ef
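The recursion can be sketched sequentially in Python. Note this is only an illustration of the idea, not the published parallel algorithm: the `threshold` parameter and the fallback when every remaining edge weight equals the pivot are choices of this sketch.

```python
import random

def filter_kruskal(edges, parent, threshold=32):
    """Filter-Kruskal sketch: quicksort-style partitioning plus filtering.

    edges: list of (weight, u, v) tuples.
    parent: shared union-find map, initially {v: v} for every vertex.
    Returns the edges of a minimum spanning forest.
    """
    def find(x):  # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def kruskal(es):  # plain Kruskal on a small edge set
        result = []
        for w, u, v in sorted(es):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                result.append((w, u, v))
        return result

    if len(edges) < threshold:
        return kruskal(edges)
    pivot = random.choice(edges)[0]
    light = [e for e in edges if e[0] <= pivot]
    heavy = [e for e in edges if e[0] > pivot]
    if not heavy:  # every weight equals the pivot; avoid infinite recursion
        return kruskal(edges)
    result = filter_kruskal(light, parent, threshold)
    # filter: drop heavy edges whose endpoints already lie in the same tree
    heavy = [(w, u, v) for w, u, v in heavy if find(u) != find(v)]
    return result + filter_kruskal(heavy, parent, threshold)
```

Because every light edge is processed before any heavy edge, the edges are still examined in nondecreasing weight order overall, so the result matches plain Kruskal's; the filtering step merely discards heavy edges that can no longer be accepted, before paying to sort them.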

Filter-Kruskal lends itself better to parallelization as sorting, filtering, and partitioning can easily be performed in parallel by distributing the edges between the processors.[7]

Finally, other variants of a parallel implementation of Kruskal's algorithm have been explored. Examples include a scheme that uses helper threads to remove edges that are definitely not part of the MST in the background,[8] and a variant which runs the sequential algorithm on p subgraphs, then merges those subgraphs until only one, the final MST, remains.[9]


References

  1. ^ Kleinberg, Jon; Tardos, Éva (2006). Algorithm Design. Boston: Pearson/Addison-Wesley. pp. 142–151. ISBN 0-321-29535-8. OCLC 57422612.
  2. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). Introduction to Algorithms (Third ed.). MIT Press. p. 631. ISBN 978-0262258104.
  3. ^ Kruskal, J. B. (1956). "On the shortest spanning subtree of a graph and the traveling salesman problem". Proceedings of the American Mathematical Society. 7 (1): 48–50. doi:10.1090/S0002-9939-1956-0078686-7. JSTOR 2033241.
  4. ^ Loberman, H.; Weinberger, A. (October 1957). "Formal Procedures for connecting terminals with a minimum total wire length". Journal of the ACM. 4 (4): 428–437. doi:10.1145/320893.320896. S2CID 7320964.
  5. ^ Quinn, Michael J.; Deo, Narsingh (1984). "Parallel graph algorithms". ACM Computing Surveys. 16 (3): 319–348. doi:10.1145/2514.2515. S2CID 6833839.
  6. ^ Grama, Ananth; Gupta, Anshul; Karypis, George; Kumar, Vipin (2003). Introduction to Parallel Computing. Addison-Wesley. pp. 412–413. ISBN 978-0201648652.
  7. ^ a b Osipov, Vitaly; Sanders, Peter; Singler, Johannes (2009). "The filter-kruskal minimum spanning tree algorithm". Proceedings of the Eleventh Workshop on Algorithm Engineering and Experiments (ALENEX). Society for Industrial and Applied Mathematics: 52–61. doi:10.1137/1.9781611972894.5. ISBN 978-0-89871-930-7.
  8. ^ Katsigiannis, Anastasios; Anastopoulos, Nikos; Nikas, Konstantinos; Koziris, Nectarios (2012). "An Approach to Parallelize Kruskal's Algorithm Using Helper Threads". 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum. pp. 1601–1610. doi:10.1109/IPDPSW.2012.201. ISBN 978-1-4673-0974-5. S2CID 14430930.
  9. ^ Lončar, Vladimir; Škrbić, Srdjan; Balaž, Antun (2014). "Parallelization of Minimum Spanning Tree Algorithms Using Distributed Memory Architectures". Transactions on Engineering Technologies. pp. 543–554. doi:10.1007/978-94-017-8832-8_39. ISBN 978-94-017-8831-1.

