Fibonacci heap

From Wikipedia, the free encyclopedia
Data structure for priority queue operations
Fibonacci heap
Type: heap
Invented: 1984
Invented by: Michael L. Fredman and Robert E. Tarjan

Time complexities in big O notation (amortized):
Insert: Θ(1)
Find-min: Θ(1)
Delete-min: O(log n)
Decrease-key: Θ(1)
Merge: Θ(1)

In computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures, including the binary heap and binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. Fibonacci heaps are named after the Fibonacci numbers, which are used in their running time analysis.

The amortized times of all operations on Fibonacci heaps are constant, except delete-min.[1][2] Deleting an element (most often used in the special case of deleting the minimum element) works in O(log n) amortized time, where n is the size of the heap.[2] This means that, starting from an empty data structure, any sequence of a insert and decrease-key operations and b delete-min operations takes O(a + b log n) worst-case time, where n is the maximum heap size. In a binary or binomial heap, such a sequence of operations would take O((a + b) log n) time. A Fibonacci heap is thus better than a binary or binomial heap when b is smaller than a by a non-constant factor. It is also possible to merge two Fibonacci heaps in constant amortized time, improving on the logarithmic merge time of a binomial heap and on binary heaps, which cannot handle merges efficiently.

Using Fibonacci heaps improves the asymptotic running time of algorithms that utilize priority queues. For example, Dijkstra's algorithm and Prim's algorithm can be made to run in O(|E| + |V| log |V|) time.

Structure

Figure 1. Example of a Fibonacci heap. It has three trees of degrees 0, 1 and 3. Three vertices are marked (shown in blue). Therefore, the potential of the heap is 9 (3 trees + 2 × 3 marked vertices).

A Fibonacci heap is a collection of trees satisfying the minimum-heap property: the key of a child is always greater than or equal to the key of its parent. This implies that the minimum key is always at the root of one of the trees. Compared with binomial heaps, the structure of a Fibonacci heap is more flexible. The trees do not have a prescribed shape, and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a lazy manner, postponing the work for later operations. For example, merging heaps is done simply by concatenating the two lists of trees, and the decrease-key operation sometimes cuts a node from its parent and forms a new tree.

However, at some point order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes (here degree means the number of direct children) are kept quite low: every node has degree at most O(log n), and the size of a subtree rooted in a node of degree k is at least F_{k+2}, where F_i is the i-th Fibonacci number. This is achieved by the rule that at most one child may be cut from each non-root node. When a second child is cut, the node itself needs to be cut from its parent and becomes the root of a new tree (see Proof of degree bounds, below). The number of trees is decreased in the operation delete-min, where trees are linked together.

As a result of this relaxed structure, some operations can take a long time while others are done very quickly. For the amortized running time analysis, we use the potential method: we pretend that very fast operations take a little bit longer than they actually do. This additional time is then later combined and subtracted from the actual running time of slow operations. The amount of time saved for later use is measured at any given moment by a potential function. The potential φ of a Fibonacci heap is given by

φ = t + 2m,

where t is the number of trees in the Fibonacci heap, and m is the number of marked nodes. A node is marked if at least one of its children has been cut since the node was made a child of another node (all roots are unmarked). The amortized time for an operation is given by the sum of the actual time and c times the difference in potential, where c is a constant (chosen to match the constant factors in the big O notation for the actual time).

Thus, the root of each tree in a heap has one unit of time stored. This unit of time can be used later to link this tree with another tree at amortized time 0. Also, each marked node has two units of time stored. One can be used to cut the node from its parent. If this happens, the node becomes a root and the second unit of time will remain stored in it as in any other root.
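The potential bookkeeping can be checked against Figure 1, which has 3 trees and 3 marked vertices; a one-line sketch:

```python
# Potential function of a Fibonacci heap: phi = t + 2m,
# where t is the number of trees and m the number of marked nodes.
def potential(t, m):
    return t + 2 * m

# The heap in Figure 1: 3 trees, 3 marked vertices.
print(potential(3, 3))  # → 9
```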

Operations


To allow fast deletion and concatenation, the roots of all trees are linked using a circular doubly linked list. The children of each node are also linked using such a list. For each node, we maintain its number of children and whether the node is marked.
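This linked representation can be sketched as follows; a minimal illustration with assumed field names, not a full implementation:

```python
# Minimal sketch of a Fibonacci-heap node (field names are illustrative).
# Siblings form a circular doubly linked list, so two root lists can be
# concatenated in O(1) by splicing.
class Node:
    def __init__(self, key):
        self.key = key
        self.degree = 0      # number of direct children
        self.marked = False  # has this node lost a child since becoming a child?
        self.parent = None
        self.child = None    # pointer to any one child
        self.left = self     # circular doubly linked sibling list
        self.right = self

def splice(a, b):
    """Concatenate the circular lists containing a and b in O(1)."""
    a_next, b_prev = a.right, b.left
    a.right, b.left = b, a
    a_next.left, b_prev.right = b_prev, a_next
```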

Find-min


We maintain a pointer to the root containing the minimum key, allowing O(1) access to the minimum. This pointer must be updated during the other operations, which adds only a constant time overhead.

Merge


The merge operation simply concatenates the root lists of two heaps together and sets the minimum pointer to the smaller of the two heaps' minima. This can be done in constant time, and the potential does not change, leading again to constant amortized time.

Insert


The insertion operation can be considered a special case of the merge operation, with a single node. The node is simply appended to the root list, increasing the potential by one. The amortized cost is thus still constant.
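Merge and insert can be sketched together. Here a heap is a plain dict with a root list and a cached minimum (illustrative names); note that Python list concatenation is O(n), whereas a real implementation splices circular linked lists in O(1), but the logic is the same:

```python
# Toy sketch of merge and insert (names are illustrative).
def make_heap(keys=()):
    keys = list(keys)
    return {"roots": keys, "min": min(keys) if keys else None}

def merge(h1, h2):
    # Concatenate root lists; new minimum is the smaller of the two minima.
    mins = [m for m in (h1["min"], h2["min"]) if m is not None]
    return {"roots": h1["roots"] + h2["roots"],
            "min": min(mins) if mins else None}

def insert(heap, key):
    # Insert = merge with a single-node heap: append to the root list
    # and update the minimum pointer if needed.
    heap["roots"].append(key)
    if heap["min"] is None or key < heap["min"]:
        heap["min"] = key
```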

Delete-min

Figure 2. First phase of delete-min.
Figure 3. Third phase of delete-min.

The delete-min operation does most of the work in restoring the structure of the heap. It has three phases:

  1. The root containing the minimum element is removed, and each of its d children becomes a new root. It takes time O(d) to process these new roots, and the potential increases by d − 1. Therefore, the amortized running time of this phase is O(d) = O(log n).
  2. There may be up to n roots. We therefore decrease the number of roots by successively linking together roots of the same degree. When two roots have the same degree, we make the one with the larger key a child of the other, so that the minimum-heap property is preserved. The degree of the smaller root increases by one. This is repeated until every root has a different degree. To find trees of the same degree efficiently, we use an array of length O(log n) in which we keep a pointer to one root of each degree. When a second root of the same degree is found, the two are linked and the array is updated. The actual running time is O(log n + m), where m is the number of roots at the beginning of the second phase. In the end, we will have at most O(log n) roots (because each has a different degree). Therefore, the difference in the potential from before to after this phase is O(log n) − m, and the amortized running time is O(log n + m) + c(O(log n) − m). By choosing a sufficiently large c such that the terms in m cancel out, this simplifies to O(log n).
  3. Search the final list of roots to find the minimum, and update the minimum pointer accordingly. This takes O(log n) time, because the number of roots has been reduced.

Overall, the amortized time of this operation is O(log n), provided that d = O(log n). The proof of this is given in the following section.
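The three phases can be sketched with trees represented as [key, children] lists (an illustrative simplification; a real implementation works on the linked node structure):

```python
# Sketch of delete-min on trees represented as [key, children] lists.
def link(t1, t2):
    """Make the root with the larger key a child of the other (O(1))."""
    if t1[0] > t2[0]:
        t1, t2 = t2, t1
    t1[1].append(t2)
    return t1

def consolidate(roots):
    """Phase 2: repeatedly link roots of equal degree until all differ."""
    by_degree = {}              # degree -> the one root kept for that degree
    for t in roots:
        d = len(t[1])
        while d in by_degree:   # another root of the same degree exists
            t = link(t, by_degree.pop(d))
            d = len(t[1])
        by_degree[d] = t
    return list(by_degree.values())

def delete_min(roots):
    m = min(roots, key=lambda t: t[0])                  # root holding the minimum
    roots = [t for t in roots if t is not m] + m[1]     # phase 1: children become roots
    roots = consolidate(roots)                          # phase 2
    new_min = min((t[0] for t in roots), default=None)  # phase 3: rescan the roots
    return m[0], roots, new_min
```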

Decrease-key

Figure 4. Fibonacci heap from Figure 1 after decreasing the key of node 9 to 0.

If decreasing the key of a node x causes it to become smaller than its parent, then x is cut from its parent, becoming a new unmarked root. If the new key is also less than the minimum key, then the minimum pointer is updated.

We then initiate a series of cascading cuts, starting with the parent of x. As long as the current node is marked, it is cut from its parent and made an unmarked root, and its original parent is considered next. This process stops when we reach an unmarked node y. If y is not a root, it is marked. In this process we introduce some number, say k, of new trees. Except possibly x, each of these new trees loses its original mark, and the terminating node y may become marked. Therefore, the change in the number of marked nodes is between −k and −k + 2. The resulting change in potential is at most k + 2(−k + 2) = −k + 4. The actual time required to perform the cutting was O(k). Hence, the amortized time is O(k) + c(−k + 4), which is constant, provided c is sufficiently large.
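The cut and cascading-cut logic can be sketched with a simplified node type (children kept in a Python list rather than a linked list; names are illustrative):

```python
# Sketch of decrease-key with cascading cuts (simplified node type).
class Node:
    def __init__(self, key):
        self.key, self.parent = key, None
        self.children, self.marked = [], False

def cut(node, roots):
    """Detach node from its parent; it becomes an unmarked root."""
    node.parent.children.remove(node)
    node.parent = None
    node.marked = False
    roots.append(node)

def decrease_key(node, new_key, roots):
    node.key = new_key
    parent = node.parent
    if parent is None or node.key >= parent.key:
        return                      # heap property still holds
    cut(node, roots)
    # Cascading cuts: cut each marked ancestor in turn; stop at the
    # first unmarked node, marking it if it is not a root.
    while parent.parent is not None:
        if not parent.marked:
            parent.marked = True
            break
        grandparent = parent.parent
        cut(parent, roots)
        parent = grandparent
```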

Proof of degree bounds


The amortized performance of a Fibonacci heap depends on the degree (number of children) of any tree root being O(log n), where n is the size of the heap. Here we show that the (sub)tree rooted at any node x of degree d in the heap must have size at least F_{d+2}, where F_i is the i-th Fibonacci number. The degree bound follows from this and the fact (easily proved by induction) that F_{d+2} ≥ φ^d for all integers d ≥ 0, where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio. We then have n ≥ F_{d+2} ≥ φ^d, and taking the logarithm to base φ of both sides gives d ≤ log_φ n, as required.
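The inequality F_{d+2} ≥ φ^d underlying the degree bound can be checked numerically:

```python
# Numeric check of F(d+2) >= phi**d, the inequality behind the
# degree bound d <= log_phi(n).
from math import sqrt

phi = (1 + sqrt(5)) / 2
fib = [0, 1]
while len(fib) < 32:
    fib.append(fib[-1] + fib[-2])

assert all(fib[d + 2] >= phi ** d for d in range(30))
```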

Let x be an arbitrary node in a Fibonacci heap, not necessarily a root. Define size(x) to be the size of the tree rooted at x (the number of descendants of x, including x itself). We prove by induction on the height of x (the length of the longest path from x to a descendant leaf) that size(x) ≥ F_{d+2}, where d is the degree of x.

Base case: If x has height 0, then d = 0, and size(x) = 1 ≥ F_2.

Inductive case: Suppose x has positive height and degree d > 0. Let y_1, y_2, …, y_d be the children of x, indexed in order of the times they were most recently made children of x (y_1 being the earliest and y_d the latest), and let c_1, c_2, …, c_d be their respective degrees.

We claim that c_i ≥ i − 2 for each i. Just before y_i was made a child of x, the nodes y_1, …, y_{i−1} were already children of x, and so x must have had degree at least i − 1 at that time. Since trees are combined only when the degrees of their roots are equal, y_i must also have had degree at least i − 1 when it became a child of x. From that time to the present, y_i can have lost at most one child (as guaranteed by the marking process), and so its current degree c_i is at least i − 2. This proves the claim.

Since the heights of all the y_i are strictly less than that of x, we can apply the inductive hypothesis to them to get size(y_i) ≥ F_{c_i + 2} ≥ F_{(i−2)+2} = F_i. The nodes x and y_1 each contribute at least 1 to size(x), and so we have

size(x) ≥ 2 + Σ_{i=2}^{d} size(y_i) ≥ 2 + Σ_{i=2}^{d} F_i = 1 + Σ_{i=0}^{d} F_i = F_{d+2},

where the last step uses the Fibonacci identity Σ_{i=0}^{d} F_i = F_{d+2} − 1. This gives the desired lower bound on size(x).

Performance


Although Fibonacci heaps look very efficient, they have the following two drawbacks:[3]

  1. They are complicated to implement.
  2. They are not as efficient in practice when compared with theoretically less efficient forms of heaps.[4] In their simplest version, they require manipulation of four pointers per node, whereas only two or three pointers per node are needed in other structures, such as the binomial heap or pairing heap. This results in large memory consumption per node and high constant factors on all operations.

Although the total running time of a sequence of operations starting with an empty structure is bounded by the bounds given above, some (very few) operations in the sequence can take a very long time to complete (in particular, delete-min has linear running time in the worst case). For this reason, Fibonacci heaps and other amortized data structures may not be appropriate for real-time systems.

It is possible to create a data structure with the same worst-case performance as the Fibonacci heap's amortized performance. One such structure, the Brodal queue,[5] is, in the words of its creator, "quite complicated" and "[not] applicable in practice." Invented in 2012, the strict Fibonacci heap[6] is a simpler structure (compared to Brodal's) with the same worst-case bounds. Despite being simpler, experiments show that in practice the strict Fibonacci heap performs slower than the more complicated Brodal queue and also slower than the basic Fibonacci heap.[7][8] The run-relaxed heaps of Driscoll et al. give good worst-case performance for all Fibonacci heap operations except merge.[9] Recent experimental results suggest that the Fibonacci heap is more efficient in practice than most of its later derivatives, including quake heaps, violation heaps, strict Fibonacci heaps, and rank-pairing heaps, but less efficient than pairing heaps or array-based heaps.[8]

Summary of running times


Here are time complexities[10] of various heap data structures. The abbreviation am. indicates that the given complexity is amortized; otherwise it is a worst-case complexity. For the meaning of "O(f)" and "Θ(f)" see Big O notation. Names of operations assume a min-heap.

Operation              | find-min | delete-min   | decrease-key    | insert       | meld         | make-heap[a]
Binary[10]             | Θ(1)     | Θ(log n)     | Θ(log n)        | Θ(log n)     | Θ(n)         | Θ(n)
Skew[11]               | Θ(1)     | O(log n) am. | O(log n) am.    | O(log n) am. | O(log n) am. | Θ(n) am.
Leftist[12]            | Θ(1)     | Θ(log n)     | Θ(log n)        | Θ(log n)     | Θ(log n)     | Θ(n)
Binomial[10][14]       | Θ(1)     | Θ(log n)     | Θ(log n)        | Θ(1) am.     | Θ(log n)[b]  | Θ(n)
Skew binomial[15]      | Θ(1)     | Θ(log n)     | Θ(log n)        | Θ(1)         | Θ(log n)[b]  | Θ(n)
2–3 heap[17]           | Θ(1)     | O(log n) am. | Θ(1)            | Θ(1) am.     | O(log n)[b]  | Θ(n)
Bottom-up skew[11]     | Θ(1)     | O(log n) am. | O(log n) am.    | Θ(1) am.     | Θ(1) am.     | Θ(n) am.
Pairing[18]            | Θ(1)     | O(log n) am. | o(log n) am.[c] | Θ(1)         | Θ(1)         | Θ(n)
Rank-pairing[21]       | Θ(1)     | O(log n) am. | Θ(1) am.        | Θ(1)         | Θ(1)         | Θ(n)
Fibonacci[10][2]       | Θ(1)     | O(log n) am. | Θ(1) am.        | Θ(1)         | Θ(1)         | Θ(n)
Strict Fibonacci[22][d]| Θ(1)     | Θ(log n)     | Θ(1)            | Θ(1)         | Θ(1)         | Θ(n)
Brodal[23][d]          | Θ(1)     | Θ(log n)     | Θ(1)            | Θ(1)         | Θ(1)         | Θ(n)[24]
  1. ^ make-heap is the operation of building a heap from a sequence of n unsorted elements. It can be done in Θ(n) time whenever meld runs in O(log n) time (where both complexities can be amortized).[11][12] Another algorithm achieves Θ(n) for binary heaps.[13]
  2. ^abc For persistent heaps (not supporting decrease-key), a generic transformation reduces the cost of meld to that of insert, while the new cost of delete-min is the sum of the old costs of delete-min and meld.[16] Here, it makes meld run in Θ(1) time (amortized, if the cost of insert is) while delete-min still runs in O(log n). Applied to skew binomial heaps, it yields Brodal–Okasaki queues, persistent heaps with optimal worst-case complexities.[15]
  3. ^ Lower bound of Ω(log log n),[19] upper bound of O(2^(2√(log log n))).[20]
  4. ^ab Brodal queues and strict Fibonacci heaps achieve optimal worst-case complexities for heaps. They were first described as imperative data structures. The Brodal–Okasaki queue is a persistent data structure achieving the same optimum, except that decrease-key is not supported.

References

  1. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "Chapter 20: Fibonacci Heaps". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 476–497. ISBN 0-262-03293-7. Third edition p. 518.
  2. ^abc Fredman, Michael Lawrence; Tarjan, Robert E. (July 1987). "Fibonacci heaps and their uses in improved network optimization algorithms" (PDF). Journal of the Association for Computing Machinery. 34 (3): 596–615. CiteSeerX 10.1.1.309.8927. doi:10.1145/28869.28874.
  3. ^ Fredman, Michael L.; Sedgewick, Robert; Sleator, Daniel D.; Tarjan, Robert E. (1986). "The pairing heap: a new form of self-adjusting heap" (PDF). Algorithmica. 1 (1–4): 111–129. doi:10.1007/BF01840439. S2CID 23664143.
  4. ^ http://www.cs.princeton.edu/~wayne/kleinberg-tardos/pdf/FibonacciHeaps.pdf, p. 79
  5. ^ Gerth Stølting Brodal (1996), "Worst-Case Efficient Priority Queues", Proc. 7th ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics: 52–58, CiteSeerX 10.1.1.43.8133, ISBN 0-89871-366-8
  6. ^ Brodal, G. S. L.; Lagogiannis, G.; Tarjan, R. E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. p. 1177. doi:10.1145/2213977.2214082. ISBN 978-1-4503-1245-5.
  7. ^ Mrena, Michal; Sedlacek, Peter; Kvassay, Miroslav (June 2019). "Practical Applicability of Advanced Implementations of Priority Queues in Finding Shortest Paths". 2019 International Conference on Information and Digital Technologies (IDT). Zilina, Slovakia: IEEE. pp. 335–344. doi:10.1109/DT.2019.8813457. ISBN 9781728114019. S2CID 201812705.
  8. ^ab Larkin, Daniel; Sen, Siddhartha; Tarjan, Robert (2014). "A Back-to-Basics Empirical Study of Priority Queues". Proceedings of the Sixteenth Workshop on Algorithm Engineering and Experiments: 61–72. arXiv:1403.0252. Bibcode:2014arXiv1403.0252L. doi:10.1137/1.9781611973198.7. ISBN 978-1-61197-319-8. S2CID 15216766.
  9. ^ Driscoll, James R.; Gabow, Harold N.; Shrairman, Ruth; Tarjan, Robert E. (November 1988). "Relaxed heaps: An alternative to Fibonacci heaps with applications to parallel computation". Communications of the ACM. 31 (11): 1343–1354. doi:10.1145/50087.50096. S2CID 16078067.
  10. ^abcd Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
  11. ^abc Sleator, Daniel Dominic; Tarjan, Robert Endre (February 1986). "Self-Adjusting Heaps". SIAM Journal on Computing. 15 (1): 52–69. CiteSeerX 10.1.1.93.6678. doi:10.1137/0215004. ISSN 0097-5397.
  12. ^ab Tarjan, Robert (1983). "3.3. Leftist heaps". Data Structures and Network Algorithms. pp. 38–42. doi:10.1137/1.9781611970265. ISBN 978-0-89871-187-5.
  13. ^ Hayward, Ryan; McDiarmid, Colin (1991). "Average Case Analysis of Heap Building by Repeated Insertion" (PDF). J. Algorithms. 12: 126–153. CiteSeerX 10.1.1.353.7888. doi:10.1016/0196-6774(91)90027-v. Archived from the original (PDF) on 2016-02-05. Retrieved 2016-01-28.
  14. ^ "Binomial Heap | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-09-30.
  15. ^ab Brodal, Gerth Stølting; Okasaki, Chris (November 1996), "Optimal purely functional priority queues", Journal of Functional Programming, 6 (6): 839–857, doi:10.1017/s095679680000201x
  16. ^ Okasaki, Chris (1998). "10.2. Structural Abstraction". Purely Functional Data Structures (1st ed.). pp. 158–162. ISBN 9780521631242.
  17. ^ Takaoka, Tadao (1999), Theory of 2–3 Heaps (PDF), p. 12
  18. ^ Iacono, John (2000), "Improved upper bounds for pairing heaps", Proc. 7th Scandinavian Workshop on Algorithm Theory (PDF), Lecture Notes in Computer Science, vol. 1851, Springer-Verlag, pp. 63–77, arXiv:1110.4428, CiteSeerX 10.1.1.748.7812, doi:10.1007/3-540-44985-X_5, ISBN 3-540-67690-2
  19. ^ Fredman, Michael Lawrence (July 1999). "On the Efficiency of Pairing Heaps and Related Data Structures" (PDF). Journal of the Association for Computing Machinery. 46 (4): 473–501. doi:10.1145/320211.320214.
  20. ^ Pettie, Seth (2005). Towards a Final Analysis of Pairing Heaps (PDF). FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science. pp. 174–183. CiteSeerX 10.1.1.549.471. doi:10.1109/SFCS.2005.75. ISBN 0-7695-2468-0.
  21. ^ Haeupler, Bernhard; Sen, Siddhartha; Tarjan, Robert E. (November 2011). "Rank-pairing heaps" (PDF). SIAM J. Computing. 40 (6): 1463–1485. doi:10.1137/100785351.
  22. ^ Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. pp. 1177–1184. CiteSeerX 10.1.1.233.1740. doi:10.1145/2213977.2214082. ISBN 978-1-4503-1245-5.
  23. ^ Brodal, Gerth S. (1996), "Worst-Case Efficient Priority Queues" (PDF), Proc. 7th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 52–58
  24. ^ Goodrich, Michael T.; Tamassia, Roberto (2004). "7.3.6. Bottom-Up Heap Construction". Data Structures and Algorithms in Java (3rd ed.). pp. 338–341. ISBN 0-471-46983-1.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Fibonacci_heap&oldid=1313117863"