
Priority queue

From Wikipedia, the free encyclopedia
Abstract data type in computer science

In computer science, a priority queue is an abstract data type similar to a regular queue or stack abstract data type.

In a priority queue, each element has an associated priority, which determines its order of service.[1] A priority queue serves the highest-priority items first.[1] Priority values have to be instances of an ordered data type, and higher priority can be given either to the lesser or to the greater values with respect to the given order relation. For example, in the Java standard library's PriorityQueue, the least elements with respect to the given order have the highest priority.[2] This implementation detail is of little practical significance, since passing to the opposite order relation turns the least values into the greatest, and vice versa.

While priority queues are often implemented using heaps, they are conceptually distinct. A priority queue can be implemented with a heap or with other methods, just as a list can be implemented with a linked list or with an array.

Operations


A priority queue has the following operations:[3][4][5]

Max-priority queue

  • insert(S, element, priority):[4][5] add an element to set S with an associated priority.
  • maximum(S): return the element with highest priority.
    This is also known as "find_max".
  • extract_max(S): remove the element with highest priority from set S, and return it.
    This is also known as "delete"[4] or "extract".[5]
  • increase_key(S, element, k): increase the priority associated with an element to the new value k.

Min-priority queue

  • insert(S, element, priority):[4][5] add an element to set S with an associated priority.
  • minimum(S): return the element with lowest priority.
    This is also known as "find_min".
  • extract_min(S): remove the element with lowest priority from set S, and return it.
    This is also known as "delete"[4] or "extract".[5]
  • decrease_key(S, element, k): decrease the priority associated with an element to the new value k.
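
The following sketch shows one way these min-priority-queue operations can be realized in Python on top of the standard heapq module; the class name and the lazy-invalidation approach to decrease_key are illustrative choices, not the only possible implementation.

import heapq
import itertools

class MinPriorityQueue:
    """Illustrative min-priority queue; decrease_key uses lazy invalidation."""
    REMOVED = object()               # sentinel marking an invalidated entry

    def __init__(self):
        self._heap = []              # entries are lists: [priority, sequence, element]
        self._entries = {}           # element -> its live entry
        self._counter = itertools.count()

    def insert(self, element, priority):
        entry = [priority, next(self._counter), element]
        self._entries[element] = entry
        heapq.heappush(self._heap, entry)

    def minimum(self):               # a.k.a. find_min
        self._discard_invalid()
        return self._heap[0][2]

    def extract_min(self):           # a.k.a. delete / extract
        self._discard_invalid()
        priority, _, element = heapq.heappop(self._heap)
        del self._entries[element]
        return element

    def decrease_key(self, element, k):
        old = self._entries[element]
        if k < old[0]:
            old[2] = MinPriorityQueue.REMOVED   # invalidate the old entry
            self.insert(element, k)             # re-insert with the lower priority

    def _discard_invalid(self):
        while self._heap and self._heap[0][2] is MinPriorityQueue.REMOVED:
            heapq.heappop(self._heap)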

Stacks and queues can be implemented as particular kinds of priority queues, with the priority determined by the order in which the elements are inserted. In a stack, the priority of each inserted element is monotonically increasing; thus, the last element inserted is always the first retrieved. In a queue, the priority of each inserted element is monotonically decreasing; thus, the first element inserted is always the first retrieved.

In some implementations, if two elements have the same priority, they are served in the same order in which they were enqueued. In other implementations, the order of elements with the same priority is undefined.
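
One common way to get first-in-first-out behaviour among equal priorities is to store an increasing sequence number next to each priority, as in the following Python sketch (the tuple layout is a convention assumed here, not mandated by the heapq module):

import heapq
import itertools

pq = []
counter = itertools.count()          # monotonically increasing tie-breaker

def enqueue(element, priority):
    # Entries with equal priority are ordered by insertion number.
    heapq.heappush(pq, (priority, next(counter), element))

def dequeue():
    priority, _, element = heapq.heappop(pq)
    return element

enqueue("a", 5)
enqueue("b", 5)
enqueue("c", 1)
print(dequeue(), dequeue(), dequeue())   # prints: c a b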

Implementation


Naive implementations


One can create a simple but inefficient priority queue in a number of ways. These naive implementations demonstrate the expected behaviour of a priority queue in a simple manner.

  • insert elements into an unsorted array; find and extract the element with highest priority
    Performance: "insert" performs in O(1) constant time, while "extract_max" performs in O(n) linear time.

insert(element, priority):
    node.element ← element
    node.priority ← priority
    list.append(node)

extract_max():
    highest ← list.get_at_index(0)
    foreach node in list:
        if highest.priority < node.priority:
            highest ← node
    list.remove(highest)
    return highest.element

  • insert elements into a list kept sorted by descending priority; extract the element at the front
    Performance: "insert" performs in O(n) linear time, while "extract_max" performs in O(1) constant time.

insert(element, priority):
    node.element ← element
    node.priority ← priority
    for i in [0...N-1]:
        if list.get_at_index(i).priority < node.priority:
            list.insert_at_index(node, i)
            return
    list.append(node)

extract_max():
    highest ← list.get_at_index(0)
    list.remove(highest)
    return highest.element

Usual implementation


To improve performance, priority queues are typically based on a heap, giving O(log n) performance for inserts and removals, and O(n) to build the heap initially from a set of n elements. Variants of the basic heap data structure such as pairing heaps or Fibonacci heaps can provide better bounds for some operations.[6]
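
As an illustration of the heap-based approach, the following minimal array-backed binary min-heap supports insert and extract-min in O(log n) time (a Python sketch; production libraries refine the same idea):

class BinaryMinHeap:
    """Array-backed binary min-heap keyed by priority."""
    def __init__(self):
        self._a = []                         # list of (priority, element) pairs

    def insert(self, element, priority):     # O(log n): append, then sift up
        self._a.append((priority, element))
        i = len(self._a) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self._a[parent][0] <= self._a[i][0]:
                break
            self._a[parent], self._a[i] = self._a[i], self._a[parent]
            i = parent

    def extract_min(self):                   # O(log n): move last to root, sift down
        a = self._a
        a[0], a[-1] = a[-1], a[0]
        priority, element = a.pop()          # the old root, i.e. the minimum
        i, n = 0, len(a)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and a[left][0] < a[smallest][0]:
                smallest = left
            if right < n and a[right][0] < a[smallest][0]:
                smallest = right
            if smallest == i:
                break
            a[i], a[smallest] = a[smallest], a[i]
            i = smallest
        return element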

Alternatively, when a self-balancing binary search tree is used, insertion and removal also take O(log n) time, although building a tree from an existing sequence of elements takes O(n log n) time; this is typical where one might already have access to these data structures, such as through third-party or standard libraries. From a space-complexity standpoint, using a self-balancing binary search tree with a linked list takes more storage, since it requires storing extra references to other nodes.

From a computational-complexity standpoint, priority queues are congruent to sorting algorithms. The section on the equivalence of priority queues and sorting algorithms, below, describes how efficient sorting algorithms can create efficient priority queues.

Specialized heaps


There are several specialized heap data structures that either supply additional operations or outperform heap-based implementations for specific types of keys, specifically integer keys. Suppose the set of possible keys is {1, 2, ..., C}.

For applications that do many "peek" operations for every "extract-min" operation, the time complexity for peek actions can be reduced to O(1) in all tree and heap implementations by caching the highest-priority element after every insertion and removal. For insertion, this adds at most a constant cost, since the newly inserted element is compared only to the previously cached minimum element. For deletion, this adds at most an additional "peek" cost, which is typically cheaper than the deletion cost, so overall time complexity is not significantly impacted.
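
A sketch of the caching idea in Python (the wrapper class is illustrative): the current minimum is kept beside the heap, so peek never touches the heap, while insert pays one extra comparison and extract-min pays one extra peek.

import heapq

class CachedMinHeap:
    def __init__(self):
        self._heap = []
        self._min = None                 # cached highest-priority (smallest) item

    def insert(self, item):              # items must be comparable, e.g. (priority, value)
        heapq.heappush(self._heap, item)
        # One extra comparison against the cached minimum.
        if self._min is None or item < self._min:
            self._min = item

    def peek(self):                      # O(1): no heap access at all
        return self._min

    def extract_min(self):
        item = heapq.heappop(self._heap)
        # One extra "peek" into the heap to refresh the cache.
        self._min = self._heap[0] if self._heap else None
        return item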

Monotone priority queues are specialized queues that are optimized for the case where no item is ever inserted that has a lower priority (in the case of a min-heap) than any item previously extracted. This restriction is met by several practical applications of priority queues.

Summary of running times


Here are time complexities[10] of various heap data structures. The abbreviation am. indicates that the given complexity is amortized, otherwise it is a worst-case complexity. For the meaning of "O(f)" and "Θ(f)" see Big O notation. Names of operations assume a min-heap.

Operation                find-min  delete-min     decrease-key     insert        meld           make-heap[a]
Binary[10]               Θ(1)      Θ(log n)       Θ(log n)         Θ(log n)      Θ(n)           Θ(n)
Skew[11]                 Θ(1)      O(log n) am.   O(log n) am.     O(log n) am.  O(log n) am.   Θ(n) am.
Leftist[12]              Θ(1)      Θ(log n)       Θ(log n)         Θ(log n)      Θ(log n)       Θ(n)
Binomial[10][14]         Θ(1)      Θ(log n)       Θ(log n)         Θ(1) am.      Θ(log n)[b]    Θ(n)
Skew binomial[15]        Θ(1)      Θ(log n)       Θ(log n)         Θ(1)          Θ(log n)[b]    Θ(n)
2–3 heap[17]             Θ(1)      O(log n) am.   Θ(1)             Θ(1) am.      O(log n)[b]    Θ(n)
Bottom-up skew[11]       Θ(1)      O(log n) am.   O(log n) am.     Θ(1) am.      Θ(1) am.       Θ(n) am.
Pairing[18]              Θ(1)      O(log n) am.   o(log n) am.[c]  Θ(1)          Θ(1)           Θ(n)
Rank-pairing[21]         Θ(1)      O(log n) am.   Θ(1) am.         Θ(1)          Θ(1)           Θ(n)
Fibonacci[10][22]        Θ(1)      O(log n) am.   Θ(1) am.         Θ(1)          Θ(1)           Θ(n)
Strict Fibonacci[23][d]  Θ(1)      Θ(log n)       Θ(1)             Θ(1)          Θ(1)           Θ(n)
Brodal[24][d]            Θ(1)      Θ(log n)       Θ(1)             Θ(1)          Θ(1)           Θ(n)[25]
  1. ^ make-heap is the operation of building a heap from a sequence of n unsorted elements. It can be done in Θ(n) time whenever meld runs in O(log n) time (where both complexities can be amortized).[11][12] Another algorithm achieves Θ(n) for binary heaps.[13]
  2. ^ For persistent heaps (not supporting decrease-key), a generic transformation reduces the cost of meld to that of insert, while the new cost of delete-min is the sum of the old costs of delete-min and meld.[16] Here, it makes meld run in Θ(1) time (amortized, if the cost of insert is) while delete-min still runs in O(log n). Applied to skew binomial heaps, it yields Brodal-Okasaki queues, persistent heaps with optimal worst-case complexities.[15]
  3. ^ Lower bound of Ω(log log n),[19] upper bound of O(2^(2√(log log n))).[20]
  4. ^ Brodal queues and strict Fibonacci heaps achieve optimal worst-case complexities for heaps. They were first described as imperative data structures. The Brodal-Okasaki queue is a persistent data structure achieving the same optimum, except that decrease-key is not supported.

Equivalence of priority queues and sorting algorithms


Using a priority queue to sort


The semantics of priority queues naturally suggest a sorting method: insert all the elements to be sorted into a priority queue, and sequentially remove them; they will come out in sorted order. This is actually the procedure used by several sorting algorithms, once the layer of abstraction provided by the priority queue is removed. This sorting method is equivalent to the following sorting algorithms (a sketch of the general pattern follows the table):

Name            Priority Queue Implementation      Best     Average  Worst
Heapsort        Heap                               n log n  n log n  n log n
Smoothsort      Leonardo heap                      n        n log n  n log n
Selection sort  Unordered array                    n²       n²       n²
Insertion sort  Ordered array                      n        n²       n²
Tree sort       Self-balancing binary search tree  n log n  n log n  n log n
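
As a sketch of this general pattern in Python, using the heapq module as the priority queue (the function name pq_sort is illustrative):

import heapq

def pq_sort(items):
    """Sort by inserting everything into a priority queue, then draining it."""
    pq = []
    for x in items:
        heapq.heappush(pq, x)            # each item acts as its own priority
    return [heapq.heappop(pq) for _ in range(len(pq))]

print(pq_sort([5, 1, 4, 2, 3]))          # prints: [1, 2, 3, 4, 5]

With a binary heap as the underlying priority queue this is essentially heapsort; swapping in an ordered list reproduces insertion sort, and an unordered list reproduces selection sort.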

Using a sorting algorithm to make a priority queue


A sorting algorithm can also be used to implement a priority queue. Specifically, Thorup says:[26]

We present a general deterministic linear space reduction from priority queues to sorting implying that if we can sort up to n keys in S(n) time per key, then there is a priority queue supporting delete and insert in O(S(n)) time and find-min in constant time.

That is, if there is a sorting algorithm which can sort in O(S) time per key, where S is some function of n and word size,[27] then one can use the given procedure to create a priority queue where pulling the highest-priority element is O(1) time, and inserting new elements (and deleting elements) is O(S) time. For example, if one has an O(n log n) sort algorithm, one can create a priority queue with O(1) pulling and O(log n) insertion.

Libraries


A priority queue is often considered to be a "container data structure".

The Standard Template Library (STL), and the C++ 1998 standard, specify std::priority_queue as one of the STL container adaptor class templates. However, it does not specify how two elements with the same priority should be served, and indeed, common implementations will not return them according to their order in the queue. It implements a max-priority-queue, and has three parameters: a comparison object for sorting such as a function object (defaults to less<T> if unspecified), the underlying container for storing the data structures (defaults to std::vector<T>), and two iterators to the beginning and end of a sequence. Unlike actual STL containers, it does not allow iteration of its elements (it strictly adheres to its abstract data type definition). The STL also has utility functions for manipulating another random-access container as a binary max-heap. The Boost libraries also have an implementation in the library heap.

Python's heapq module implements a binary min-heap on top of a list.
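For example, a brief usage sketch (the entries shown are illustrative):

import heapq

pq = []                          # heapq treats a plain list as a binary min-heap
heapq.heappush(pq, (2, "code"))
heapq.heappush(pq, (1, "eat"))
heapq.heappush(pq, (3, "sleep"))
print(heapq.heappop(pq))         # prints (1, 'eat'): smallest priority first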

Java's library contains a PriorityQueue (java.util.PriorityQueue) class, which implements a min-priority-queue as a binary heap.

.NET's library contains a System.Collections.Generic.PriorityQueue class, which implements an array-backed, quaternary min-heap.

Scala's library contains a scala.collection.mutable.PriorityQueue class, which implements a max-priority-queue.

Go's library contains a container/heap module, which implements a min-heap on top of any compatible data structure.

Rust's standard library contains a std::collections::BinaryHeap struct, which implements a priority queue with a binary heap.

The Standard PHP Library extension contains the class SplPriorityQueue.

Apple's Core Foundation framework contains a CFBinaryHeap structure, which implements a min-heap.

Applications


Bandwidth management


Priority queuing can be used to manage limited resources such as bandwidth on a transmission line from a network router. In the event of outgoing traffic queuing due to insufficient bandwidth, all other queues can be halted to send the traffic from the highest priority queue upon arrival. This ensures that the prioritized traffic (such as real-time traffic, e.g. an RTP stream of a VoIP connection) is forwarded with the least delay and the least likelihood of being rejected due to a queue reaching its maximum capacity. All other traffic can be handled when the highest priority queue is empty. Another approach used is to send disproportionately more traffic from higher priority queues.

Many modern protocols for local area networks also include the concept of priority queues at the media access control (MAC) sub-layer to ensure that high-priority applications (such as VoIP or IPTV) experience lower latency than other applications which can be served with best-effort service. Examples include IEEE 802.11e (an amendment to IEEE 802.11 which provides quality of service) and ITU-T G.hn (a standard for high-speed local area networks using existing home wiring, i.e. power lines, phone lines and coaxial cables).

Usually a limitation (policer) is set to limit the bandwidth that traffic from the highest priority queue can take, in order to prevent high priority packets from choking off all other traffic. This limit is usually never reached due to high-level control instances such as the Cisco Callmanager, which can be programmed to inhibit calls which would exceed the programmed bandwidth limit.

Discrete event simulation


Another use of a priority queue is to manage the events in a discrete event simulation. The events are added to the queue with their simulation time used as the priority. The execution of the simulation proceeds by repeatedly pulling the top of the queue and executing the event thereon.
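
A minimal event loop along these lines might look as follows (a Python sketch; the representation of events as (time, action) pairs is an assumption made here for illustration):

import heapq
import itertools

def simulate(initial_events, horizon):
    """initial_events: iterable of (time, action) pairs; action(t) may return new events."""
    seq = itertools.count()                       # tie-breaker for simultaneous events
    queue = [(t, next(seq), action) for t, action in initial_events]
    heapq.heapify(queue)
    while queue:
        time, _, action = heapq.heappop(queue)    # always the earliest pending event
        if time > horizon:
            break
        for new_time, new_action in action(time) or []:
            heapq.heappush(queue, (new_time, next(seq), new_action))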

See also: Scheduling (computing), queueing theory

Dijkstra's algorithm


When the graph is stored in the form of an adjacency list or matrix, a priority queue can be used to extract the minimum efficiently when implementing Dijkstra's algorithm, although one also needs the ability to alter the priority of a particular vertex in the priority queue efficiently.

If instead a graph is stored as node objects, and priority-node pairs are inserted into a heap, altering the priority of a particular vertex is not necessary if one tracks visited nodes. Once a node is visited, if it comes up in the heap again (having had a lower priority number associated with it earlier), it is popped off and ignored.
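
A Python sketch of this variant with lazy deletion (the adjacency-dict representation of the graph is an assumption made for illustration):

import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]}. Returns shortest distances from source."""
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]                     # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:                     # stale entry with an outdated priority
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            if v not in visited and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist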

Huffman coding


Huffman coding requires one to repeatedly obtain the two lowest-frequency trees. A priority queue is one method of doing this.
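
For instance, the following Python sketch builds a Huffman tree with heapq; representing tree nodes as nested tuples is just one of many possible encodings:

import heapq
import itertools

def huffman_tree(frequencies):
    """frequencies: {symbol: weight}. Returns the root of a Huffman tree."""
    seq = itertools.count()                  # tie-breaker so trees are never compared
    heap = [(weight, next(seq), symbol) for symbol, weight in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # the two lowest-frequency trees
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(seq), (left, right)))
    return heap[0][2]

print(huffman_tree({"a": 5, "b": 2, "c": 1, "d": 1}))   # prints a nested-tuple tree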

Best-first search algorithms


Best-first search algorithms, like the A* search algorithm, find the shortest path between two vertices or nodes of a weighted graph, trying out the most promising routes first. A priority queue (also known as the fringe) is used to keep track of unexplored routes; the one for which the estimate (a lower bound in the case of A*) of the total path length is smallest is given highest priority. If memory limitations make best-first search impractical, variants like the SMA* algorithm can be used instead, with a double-ended priority queue to allow removal of low-priority items.
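
A Python sketch of this pattern for A* (the neighbours and heuristic callables are assumptions of this example; real implementations add further bookkeeping):

import heapq
import itertools

def a_star(start, goal, neighbours, heuristic):
    """neighbours(n) yields (m, cost) pairs; heuristic(n) is a lower bound on the
    remaining distance to goal. Returns a list of nodes from start to goal, or None."""
    seq = itertools.count()                  # tie-breaker for equal estimates
    g = {start: 0}                           # best known cost from start
    parent = {start: None}
    fringe = [(heuristic(start), next(seq), start)]
    while fringe:
        _, _, node = heapq.heappop(fringe)   # most promising route first
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt, cost in neighbours(node):
            tentative = g[node] + cost
            if tentative < g.get(nxt, float("inf")):
                g[nxt] = tentative
                parent[nxt] = node
                heapq.heappush(fringe, (tentative + heuristic(nxt), next(seq), nxt))
    return None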

ROAM triangulation algorithm


The Real-time Optimally Adapting Meshes (ROAM) algorithm computes a dynamically changing triangulation of a terrain. It works by splitting triangles where more detail is needed and merging them where less detail is needed. The algorithm assigns each triangle in the terrain a priority, usually related to the decrease in error if that triangle were split. The algorithm uses two priority queues, one for triangles that can be split and another for triangles that can be merged. In each step the triangle from the split queue with the highest priority is split, or the triangle from the merge queue with the lowest priority is merged with its neighbours.

Prim's algorithm for minimum spanning tree


Using a min heap priority queue in Prim's algorithm to find the minimum spanning tree of a connected and undirected graph, one can achieve a good running time. This min heap priority queue uses the min heap data structure, which supports operations such as insert, minimum, extract-min and decrease-key.[28] In this implementation, the weight of the edges is used to decide the priority of the vertices: the lower the weight, the higher the priority, and the higher the weight, the lower the priority.[29]
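
A Python sketch of this approach, using lazy deletion in place of an explicit decrease-key (the adjacency-dict graph representation is an assumption of the example):

import heapq

def prim_mst(graph, root):
    """graph: {vertex: [(neighbour, weight), ...]} for a connected undirected graph.
    Returns a list of (parent, vertex, weight) edges of a minimum spanning tree."""
    in_tree = set()
    mst_edges = []
    heap = [(0, root, None)]                 # (edge weight = priority, vertex, parent)
    while heap:
        weight, u, parent = heapq.heappop(heap)
        if u in in_tree:
            continue                         # a lighter edge already connected u
        in_tree.add(u)
        if parent is not None:
            mst_edges.append((parent, u, weight))
        for v, w in graph[u]:
            if v not in in_tree:
                heapq.heappush(heap, (w, v, u))
    return mst_edges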

Parallel priority queue


Parallelization can be used to speed up priority queues, but requires some changes to the priority queue interface. The reason for such changes is that a sequential update usually has only O(1) or O(log n) cost, and there is no practical gain in parallelizing such an operation. One possible change is to allow the concurrent access of multiple processors to the same priority queue. The second possible change is to allow batch operations that work on k elements, instead of just one element. For example, extractMin will remove the first k elements with the highest priority.

Concurrent parallel access


If the priority queue allows concurrent access, multiple processes can perform operations concurrently on that priority queue. However, this raises two issues. First of all, the definition of the semantics of the individual operations is no longer obvious. For example, if two processes want to extract the element with the highest priority, should they get the same element or different ones? This restricts parallelism on the level of the program using the priority queue. In addition, because multiple processes have access to the same element, this leads to contention.

Node 3 is inserted and sets the pointer of node 2 to node 3. Immediately after that, node 2 is deleted and the pointer of node 1 is set to node 4. Now node 3 is no longer reachable.

Concurrent access to a priority queue can be implemented on a Concurrent Read, Concurrent Write (CRCW) PRAM model. In the following, the priority queue is implemented as a skip list.[30][31] In addition, an atomic synchronization primitive, CAS, is used to make the skip list lock-free. The nodes of the skip list consist of a unique key, a priority, an array of pointers (one per level) to the next nodes, and a delete mark. The delete mark indicates whether the node is about to be deleted by a process. This ensures that other processes can react to the deletion appropriately.

  • insert(e): First, a new node with a key and a priority is created. In addition, the node is assigned a number of levels, which dictates the size of the array of pointers. Then a search is performed to find the correct position where to insert the new node. The search starts from the first node and from the highest level. Then the skip list is traversed down to the lowest level until the correct position is found. During the search, for every level the last traversed node will be saved as the parent node of the new node at that level. In addition, the node to which the pointer of the parent node at that level points will be saved as the successor node of the new node at that level. Afterwards, for every level of the new node, the pointers of the parent node will be set to the new node. Finally, the pointers, for every level, of the new node will be set to the corresponding successor nodes.
  • extract-min: First, the skip list is traversed until a node is reached whose delete mark is not set. This delete mark is then set to true for that node. Finally, the pointers of the parent nodes of the deleted node are updated.

If concurrent access to a priority queue is allowed, conflicts may arise between two processes. For example, a conflict arises if one process is trying to insert a new node, but at the same time another process is about to delete the predecessor of that node.[30] There is a risk that the new node is added to the skip list, yet it is no longer reachable (see the example illustrated above).

K-element operations


In this setting, operations on a priority queue are generalized to a batch of k elements. For instance, k_extract-min deletes the k smallest elements of the priority queue and returns those.
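
Sequentially, the batch semantics can be sketched as follows in Python; this only illustrates what k_extract-min returns, not the parallel implementations discussed below:

import heapq

def k_extract_min(heap, k):
    """Remove and return the k smallest elements of an already heapified list."""
    return [heapq.heappop(heap) for _ in range(min(k, len(heap)))]

pq = [7, 2, 9, 4, 1, 8]
heapq.heapify(pq)
print(k_extract_min(pq, 3))    # prints: [1, 2, 4]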

In a shared-memory setting, the parallel priority queue can be easily implemented using parallel binary search trees and join-based tree algorithms. In particular, k_extract-min corresponds to a split on the binary search tree that has O(log n) cost and yields a tree that contains the k smallest elements. k_insert can be applied by a union of the original priority queue and the batch of insertions. If the batch is already sorted by the key, k_insert has O(k log(1 + n/k)) cost. Otherwise, we need to first sort the batch, so the cost will be O(k log(1 + n/k) + k log k) = O(k log n). Other operations for the priority queue can be applied similarly. For instance, k_decrease-key can be done by first applying difference and then union, which first deletes the elements and then inserts them back with the updated keys. All these operations are highly parallel, and the theoretical and practical efficiency can be found in related research papers.[32][33]

The rest of this section discusses a queue-based algorithm on distributed memory. We assume each processor has its own local memory and a local (sequential) priority queue. The elements of the global (parallel) priority queue are distributed across all processors.

k_extract-min is executed on a priority queue with three processors. The green elements are returned and removed from the priority queue.

A k_insert operation assigns the elements uniformly at random to the processors, which insert the elements into their local queues. Note that single elements can still be inserted into the queue. Using this strategy, the global smallest elements are in the union of the local smallest elements of every processor with high probability. Thus each processor holds a representative part of the global priority queue.

This property is used when k_extract-min is executed, as the smallest m elements of each local queue are removed and collected in a result set. The elements in the result set are still associated with their original processor. The number of elements m that is removed from each local queue depends on k and the number of processors p.[34] By parallel selection the k smallest elements of the result set are determined. With high probability these are the global k smallest elements. If not, m elements are again removed from each local queue and put into the result set. This is done until the global k smallest elements are in the result set. Now these k elements can be returned. All other elements of the result set are inserted back into their local queues. The running time of k_extract-min is expected O((k/p)·log n), where k = Ω(p·log p) and n is the size of the priority queue.[34]

The priority queue can be further improved by not moving the remaining elements of the result set directly back into the local queues after a k_extract-min operation. This avoids repeatedly moving elements back and forth between the result set and the local queues.

By removing several elements at once a considerable speedup can be achieved. But not all algorithms can use this kind of priority queue. Dijkstra's algorithm, for example, cannot work on several nodes at once. The algorithm takes the node with the smallest distance from the priority queue and calculates new distances for all its neighbor nodes. If k nodes were taken out at once, working at one node could change the distance of another one of the k nodes. So using k-element operations destroys the label-setting property of Dijkstra's algorithm.

References

  1. ^ Miller Jr., Robert G. (1960). "Priority queues" (PDF). The Annals of Mathematical Statistics. 31. Stanford University: 86–103. doi:10.1214/aoms/1177705990.
  2. ^ "PriorityQueue (Java SE 9 & JDK 9)". docs.oracle.com. Retrieved 2025-03-13.
  3. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2022) [1990]. "Chapter 6.5: Priority queues". Introduction to Algorithms (4th ed.). MIT Press and McGraw-Hill. pp. 172–176. ISBN 0-262-04630-X.
  4. ^ Rönngren, Robert; Ayani, Rassul (1997-04-01). "A comparative study of parallel and sequential priority queue algorithms". ACM Trans. Model. Comput. Simul. 7 (2): 157–209. doi:10.1145/249204.249205. ISSN 1049-3301.
  5. ^ Ayani, R. (December 1990). "LR-algorithm: Concurrent operations on priority queues". Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing 1990. pp. 22–25. doi:10.1109/SPDP.1990.143500. ISBN 0-8186-2087-0.
  6. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "Chapter 20: Fibonacci Heaps". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 476–497. ISBN 0-262-03293-7. Third edition, p. 518.
  7. ^ Skiena, Steven (2010). The Algorithm Design Manual (2nd ed.). Springer Science+Business Media. ISBN 978-1-849-96720-4.
  8. ^ P. van Emde Boas. Preserving order in a forest in less than logarithmic time. In Proceedings of the 16th Annual Symposium on Foundations of Computer Science, pages 75–84. IEEE Computer Society, 1975.
  9. ^ Michael L. Fredman and Dan E. Willard. Surpassing the information theoretic bound with fusion trees. Journal of Computer and System Sciences, 48(3): 533–551, 1994.
  10. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
  11. ^ Sleator, Daniel Dominic; Tarjan, Robert Endre (February 1986). "Self-Adjusting Heaps". SIAM Journal on Computing. 15 (1): 52–69. CiteSeerX 10.1.1.93.6678. doi:10.1137/0215004. ISSN 0097-5397.
  12. ^ Tarjan, Robert (1983). "3.3. Leftist heaps". Data Structures and Network Algorithms. pp. 38–42. doi:10.1137/1.9781611970265. ISBN 978-0-89871-187-5.
  13. ^ Hayward, Ryan; McDiarmid, Colin (1991). "Average Case Analysis of Heap Building by Repeated Insertion" (PDF). J. Algorithms. 12: 126–153. CiteSeerX 10.1.1.353.7888. doi:10.1016/0196-6774(91)90027-v. Archived from the original (PDF) on 2016-02-05. Retrieved 2016-01-28.
  14. ^ "Binomial Heap | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-09-30.
  15. ^ Brodal, Gerth Stølting; Okasaki, Chris (November 1996), "Optimal purely functional priority queues", Journal of Functional Programming, 6 (6): 839–857, doi:10.1017/s095679680000201x
  16. ^ Okasaki, Chris (1998). "10.2. Structural Abstraction". Purely Functional Data Structures (1st ed.). pp. 158–162. ISBN 9780521631242.
  17. ^ Takaoka, Tadao (1999), Theory of 2–3 Heaps (PDF), p. 12
  18. ^ Iacono, John (2000), "Improved upper bounds for pairing heaps", Proc. 7th Scandinavian Workshop on Algorithm Theory (PDF), Lecture Notes in Computer Science, vol. 1851, Springer-Verlag, pp. 63–77, arXiv:1110.4428, CiteSeerX 10.1.1.748.7812, doi:10.1007/3-540-44985-X_5, ISBN 3-540-67690-2
  19. ^ Fredman, Michael Lawrence (July 1999). "On the Efficiency of Pairing Heaps and Related Data Structures" (PDF). Journal of the Association for Computing Machinery. 46 (4): 473–501. doi:10.1145/320211.320214.
  20. ^ Pettie, Seth (2005). Towards a Final Analysis of Pairing Heaps (PDF). FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science. pp. 174–183. CiteSeerX 10.1.1.549.471. doi:10.1109/SFCS.2005.75. ISBN 0-7695-2468-0.
  21. ^ Haeupler, Bernhard; Sen, Siddhartha; Tarjan, Robert E. (November 2011). "Rank-pairing heaps" (PDF). SIAM J. Computing. 40 (6): 1463–1485. doi:10.1137/100785351.
  22. ^ Fredman, Michael Lawrence; Tarjan, Robert E. (July 1987). "Fibonacci heaps and their uses in improved network optimization algorithms" (PDF). Journal of the Association for Computing Machinery. 34 (3): 596–615. CiteSeerX 10.1.1.309.8927. doi:10.1145/28869.28874.
  23. ^ Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. pp. 1177–1184. CiteSeerX 10.1.1.233.1740. doi:10.1145/2213977.2214082. ISBN 978-1-4503-1245-5.
  24. ^ Brodal, Gerth S. (1996), "Worst-Case Efficient Priority Queues" (PDF), Proc. 7th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 52–58
  25. ^ Goodrich, Michael T.; Tamassia, Roberto (2004). "7.3.6. Bottom-Up Heap Construction". Data Structures and Algorithms in Java (3rd ed.). pp. 338–341. ISBN 0-471-46983-1.
  26. ^ Thorup, Mikkel (2007). "Equivalence between priority queues and sorting". Journal of the ACM. 54 (6): 28. doi:10.1145/1314690.1314692. S2CID 11494634.
  27. ^ "Archived copy" (PDF). Archived (PDF) from the original on 2011-07-20. Retrieved 2011-02-10.
  28. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. p. 634. ISBN 0-262-03384-4. "In order to implement Prim's algorithm efficiently, we need a fast way to select a new edge to add to the tree formed by the edges in A."
  29. ^ "Prim's Algorithm". Geek for Geeks. 18 November 2012. Archived from the original on 9 September 2014. Retrieved 12 September 2014.
  30. ^ Sundell, Håkan; Tsigas, Philippas (2005). "Fast and lock-free concurrent priority queues for multi-thread systems". Proceedings International Parallel and Distributed Processing Symposium. Vol. 65. pp. 609–627. CiteSeerX 10.1.1.67.1310. doi:10.1109/IPDPS.2003.1213189. ISBN 0-7695-1926-1. S2CID 20995116.
  31. ^ Lindén, Jonsson (2013), "A Skiplist-Based Concurrent Priority Queue with Minimal Memory Contention", Technical Report 2018-003.
  32. ^ Blelloch, Guy E.; Ferizovic, Daniel; Sun, Yihan (2016), "Just Join for Parallel Ordered Sets", Symposium on Parallel Algorithms and Architectures, Proc. of 28th ACM Symp. Parallel Algorithms and Architectures (SPAA 2016), ACM, pp. 253–264, arXiv:1602.02120, doi:10.1145/2935764.2935768, ISBN 978-1-4503-4210-0, S2CID 2897793
  33. ^ Blelloch, Guy E.; Ferizovic, Daniel; Sun, Yihan (2018), "PAM: parallel augmented maps", Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, ACM, pp. 290–304
  34. ^ Sanders, Peter; Mehlhorn, Kurt; Dietzfelbinger, Martin; Dementiev, Roman (2019). Sequential and Parallel Algorithms and Data Structures - The Basic Toolbox. Springer International Publishing. pp. 226–229. doi:10.1007/978-3-030-25209-0. ISBN 978-3-030-25208-3. S2CID 201692657.
