Commit a52875e

README: clarify BinaryHeapStrategy is default

[closes #12]

1 parent f291ac1 · commit a52875e
File tree: 1 file changed, +14 −13


README.md

Lines changed: 14 additions & 13 deletions
```diff
@@ -66,18 +66,18 @@ Strategies
 ==========
 
 We can implement this with a regular `Array`. We'll keep it sorted inversely,
-so `queue.dequeue()` maps to `array.pop()`.
+so `queue.dequeue()` maps to `array.pop()`. Each `queue()` is a `splice()`,
+which rewrites the entire array. This is fast for tiny queues.
 
-But with an `Array`, we'll need to `splice()`, which can affect every single
-element in the array. An alternative is to create a
-[Binary Heap](http://en.wikipedia.org/wiki/Binary_heap), which writes far
-fewer array elements when queueing (though each element is written more slowly).
+An alternative is a [Binary Heap](http://en.wikipedia.org/wiki/Binary_heap): it
+modifies just a few array elements when queueing (though each modification has
+a cost).
 
 Finally, we can use a [B-Heap](http://en.wikipedia.org/wiki/B-heap). It's like a
-binary heap, except it orders elements such that during a single operation,
-writes occur closer to each other in memory. Unfortunately, it's slower to
-calculate where in memory each write should occur (it costs a function call
-instead of a bit-shift). So while it's fast in theory, it's slower in practice.
+binary heap, except its modifications often occur close together in memory.
+Unfortunately, calculating _where_ in memory the modifications should occur is
+slower. (It costs a function call instead of a bit-shift.) So while B-heap is
+fast in theory, it's slow in practice.
 
 Create the queues like this:
 
@@ -97,10 +97,11 @@ You'll see running times like this:
 | Dequeue | O(1) (fast) | O(lg n) | O(lg n) |
 
 According to [JsPerf](http://jsperf.com/js-priority-queue-queue-dequeue), the
-fastest strategy for most cases is `BinaryHeapStrategy`. Only use `ArrayStrategy`
-only if you're queuing items in a very particular order. Don't use
-`BHeapStrategy`, except as a lesson in how sometimes miracles in one
-programming language aren't great in other languages.
+fastest strategy for most cases is `BinaryHeapStrategy`. Use `ArrayStrategy`
+in edge cases, after performance-testing your specific data. Don't use
+`BHeapStrategy`: it's a lesson that a miracle in C can flop in JavaScript.
+
+The default strategy is `BinaryHeapStrategy`.
 
 Contributing
 ============
```
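The README text above explains why the inverse-sorted array strategy makes `dequeue()` cheap (a `pop()`) while `queue()` pays for a `splice()`. Here is a minimal sketch of that idea; the class name `ArrayStrategySketch` is hypothetical and this is not the library's actual implementation, just an illustration of the trade-off under the stated assumptions (a min-queue with a numeric-style comparator):

```javascript
// Illustrative sketch (not the library's code): keep the array sorted in
// *descending* order, so the smallest element sits at the end.
class ArrayStrategySketch {
  constructor(comparator) {
    this.comparator = comparator; // e.g. (a, b) => a - b for a min-queue
    this.data = [];               // sorted descending
  }

  // Binary-search for the insertion point, then splice() the value in.
  // splice() may shift every element after that point -- the O(n) cost
  // the README describes.
  queue(value) {
    let lo = 0;
    let hi = this.data.length;
    while (lo < hi) {
      const mid = (lo + hi) >>> 1;
      // Elements that sort *after* `value` stay to the left (descending).
      if (this.comparator(this.data[mid], value) > 0) lo = mid + 1;
      else hi = mid;
    }
    this.data.splice(lo, 0, value);
  }

  // The smallest element is last, so dequeue() maps to a cheap pop().
  dequeue() {
    return this.data.pop();
  }
}
```

With this layout, a sequence of `queue()` calls keeps the array ordered, and repeated `dequeue()` calls return elements in comparator order without moving any other element, which is why this strategy wins only when queues are tiny or insertions are rare.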
