Talk:Priority queue



That Bandwidth Management section seems to be talking about priority queueing (Queueing theory) rather than priority queues (the abstract data type).

The peeking time complexity can be reduced to O(1) in all tree and heap implementations by caching the highest priority element after every insertion (with greater or equal priority) and removal (since those are already Ω(log n) on average, their own time complexity is not affected by this change). This is typically pointless with respect to overall performance, though. Aragorn2 16:10, 4 May 2006 (UTC)
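The caching idea above can be sketched in Python. Note that for a binary heap the root is already readable in O(1), so the cache mainly matters for tree implementations where locating the minimum otherwise costs O(log n); this sketch uses `heapq` only to keep the example short.

```python
import heapq

class CachedPeekPQ:
    """Min-priority queue that caches the highest-priority (smallest)
    element so peek is O(1), as described in the comment above."""

    def __init__(self):
        self._heap = []
        self._min = None  # cached smallest element

    def push(self, item):
        heapq.heappush(self._heap, item)          # O(log n)
        if self._min is None or item <= self._min:
            self._min = item                      # O(1) cache update

    def pop(self):
        item = heapq.heappop(self._heap)          # O(log n)
        # Refresh the cache; this is free for a heap, and O(log n)
        # for a balanced tree, which removal already costs anyway.
        self._min = self._heap[0] if self._heap else None
        return item

    def peek(self):
        return self._min                          # O(1)
```

As the comment notes, since insertion and removal are already Ω(log n), maintaining the cache does not change their asymptotic cost.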

Sorted List Implementation

Surely this should be O(log(n)) insertion time, as it would be a simple binary search. —Preceding unsigned comment added by (talk) 11:08, 26 November 2008 (UTC)

I agree, for random access lists insertion is log(n). I think the main bad thing about sorted lists is the initial O(n*log(n)) build, compared to O(n) for a heap. Emeraldemon (talk) 02:19, 12 May 2009 (UTC)

Ok, someone reverted my change with the comment "whether it's a linked-list or a vector, you need O(n) to do this." As explained in Dynamic array and Amortized analysis, a dynamically resized array can do an insert in amortized O(1), although it is O(n) worst-case. But given that we do amortized analysis for the Fibonacci heap, I think it makes sense to allow the same standard for a linked list. There is a table on the dynamic array page comparing list implementations (which I think might be better on the List (computing) page) where dynamic arrays are listed as O(n) insert, but really I think it makes more sense to describe them as O(1). A similar table comparing big-O complexity for various heaps, etc. might be nice here. I'm changing it back, although if someone can convince me we shouldn't use amortized numbers, I may relent. Emeraldemon (talk) 01:38, 16 August 2009 (UTC)

I can't see how inserting an element in the middle of a dynamically resized array can take O(1). Dynamic array does say that it's O(n); in fact you have to shift elements forward (ignoring possible array reallocation). Thus, I agree with whoever wrote that a sorted insert is O(n) with both a linked list and a vector. Not changing the article right now to avoid a revert war; waiting for further comments. SalvoIsaja (talk) 15:13, 27 August 2009 (UTC)

Insert has to be O(n) for a linked list or dynamic array (whether worst-case, average-case, or amortised). Anything else (e.g. a skip list) doesn't belong under "Simple implementations". If simple lists gave O(lg n) then there would have been no motivation for the binary heap, which is O(lg n) average case and O(n) worst case (when it degenerates to a list). HenryAyoola (talk) 09:33, 8 September 2009 (UTC)
PS But building a list-based queue before doing any accesses is O(n lg n) because you can use a O(n lg n) sort on the whole thing. HenryAyoola (talk) 09:35, 8 September 2009 (UTC)
You're right, of course. I was confusing this with in-place update. Thanks for clarifying. Emeraldemon (talk) 02:05, 9 October 2009 (UTC)
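The distinction settled in this thread can be made concrete with a small Python sketch: binary search locates the slot in O(log n) comparisons, but the element shift still makes the insert O(n). Keeping the list sorted in descending order puts the minimum at the end, where removal is O(1).

```python
class SortedListPQ:
    """Sorted-list priority queue: O(n) insert, O(1) pop-min.

    The list is kept sorted in descending order so the minimum
    sits at the end, where list.pop() is O(1)."""

    def __init__(self, items=()):
        # Initial build: a single O(n log n) sort, as noted above.
        self._data = sorted(items, reverse=True)

    def insert(self, item):
        # Binary search finds the slot in O(log n) comparisons...
        lo, hi = 0, len(self._data)
        while lo < hi:
            mid = (lo + hi) // 2
            if self._data[mid] > item:
                lo = mid + 1
            else:
                hi = mid
        # ...but list.insert must shift elements: O(n) overall.
        self._data.insert(lo, item)

    def pop_min(self):
        return self._data.pop()    # minimum is at the end: O(1)

    def peek_min(self):
        return self._data[-1]      # O(1)
```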

O(1) implementation?

Should we mention that if the range of all possible priorities is known and limited to a small number m (such as process execution priorities in operating systems), a priority queue can be implemented as an array of m simple FIFO queues, making both insertion and removal O(1) operations (technically, O(1) to add and O(m) to read), assuming the underlying FIFO queues are O(1), such as linked lists? I don't have a reference handy, but I'm pretty sure I've seen this approach in use in an OS scheduler, and, generally, it seems obvious. --Cubbi (talk) 16:14, 17 August 2009 (UTC)
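The scheme described above (often called a bucket queue) is easy to sketch in Python; `deque` plays the role of the O(1) linked-list FIFO:

```python
from collections import deque

class BucketQueue:
    """Priority queue over a small fixed priority range 0..m-1,
    as described in the comment above: one FIFO bucket per priority."""

    def __init__(self, m):
        self._buckets = [deque() for _ in range(m)]

    def insert(self, priority, item):
        self._buckets[priority].append(item)   # O(1)

    def pop_min(self):
        # O(m) scan for the first non-empty bucket; schedulers often
        # track the current minimum (or use a bitmap) to make this O(1).
        for bucket in self._buckets:
            if bucket:
                return bucket.popleft()        # O(1) FIFO removal
        raise IndexError("pop from empty BucketQueue")
```

Equal-priority items leave the queue in insertion order, which is exactly the behavior wanted from per-priority run queues.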

obvious implementation

I have implemented in the past a very simple, obvious priority queue based on a sorted dictionary of <int, Queue>. It has the basic operations, and simplicity is its purpose. Even though it is naive, I don't think it would be slower: O(1), since a queue is like a linked list, and the sorted dictionary is O(1) as well, except maybe for insertion. I don't know how SortedDictionary is implemented in .NET; it might be that the internal structure that keeps the order (maybe a sorted list, or an rb-tree) has O(log n) insertion cost. see here: — Preceding unsigned comment added by (talk) 13:15, 31 May 2011 (UTC)
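A Python sketch of the same design, with a heap of the distinct priorities standing in for the sorted dictionary (for reference, .NET's SortedDictionary is a red-black tree, so its inserts are indeed O(log n)):

```python
import heapq
from collections import deque

class KeyedQueuePQ:
    """Sketch of the 'sorted dictionary of <priority, Queue>' design.

    Items of equal priority share one FIFO queue, so they dequeue in
    insertion order. Enqueue is O(1) for an existing priority and
    O(log k) for a new one, where k is the number of distinct
    priorities."""

    def __init__(self):
        self._queues = {}   # priority -> deque of items
        self._keys = []     # heap of the distinct priorities

    def enqueue(self, priority, item):
        if priority not in self._queues:
            self._queues[priority] = deque()
            heapq.heappush(self._keys, priority)   # O(log k)
        self._queues[priority].append(item)        # O(1)

    def dequeue(self):
        p = self._keys[0]          # smallest priority present
        q = self._queues[p]
        item = q.popleft()
        if not q:                  # drop the bucket once emptied
            del self._queues[p]
            heapq.heappop(self._keys)
        return item
```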

Operational Support Criteria

The following is stated at the beginning of the article (27th December 09):

"A priority queue is an abstract data type in computer programming that supports the following three operations..."

Yet the third operation, PeekAtNext, is declared "optional" in parentheses. If supporting the operation is optional, a structure clearly does not need it to actually be a priority queue. The article is therefore contradictory in its definition of what constitutes a priority queue ("A priority queue is... that supports the following three operations..." is not the same as a specification of two mandatory operations and one that may optionally be supported).

Does PeekAtNext deserve its listing at the top along with the two necessary operations, and should the opening line be edited to specify mandatory support for only two operations?

Owing to its merely supplementary nature, it seems no more significant than any other possible optional operation that could be of some non-fundamental use. Therefore, I believe that if PeekAtNext is deemed worthy of listing, then so are all other possibly useful operations for a priority queue. However, I certainly would not advocate the latter, thus I would vote "remove". (talk) 02:59, 27 December 2009 (UTC)

Priority queues and the A* algorithm

This article mentions the A* algorithm as an important application of priority queues. However, I'm implementing the A* algorithm, and have come across a problem: the A* algorithm will often change the key of an element (it does this whenever a shorter path is found to the element). This will break any implementation that uses a Heap, which is most implementations. What A* needs is a priority queue with O(log(n)) resorting of a single element. Does anyone know what implementation does this? Tcotco (talk) 01:50, 10 June 2010 (UTC)
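One common answer to the question above is "lazy deletion": rather than re-sorting an entry in place, push a fresh (key, item) pair in O(log n) and discard stale entries when they surface during pop. A minimal Python sketch of that pattern:

```python
import heapq

class LazyPQ:
    """Priority queue supporting the key updates A* needs.

    Updating a key pushes a duplicate entry; the dictionary records
    each item's best key so stale entries can be skipped on pop.
    Each update is O(log n)."""

    def __init__(self):
        self._heap = []
        self._best = {}   # item -> best (smallest) key seen so far

    def push_or_update(self, key, item):
        # Only accept improvements, as A* does when a shorter path
        # to an already-queued node is found.
        if key < self._best.get(item, float("inf")):
            self._best[item] = key
            heapq.heappush(self._heap, (key, item))   # O(log n)

    def pop(self):
        while self._heap:
            key, item = heapq.heappop(self._heap)
            if self._best.get(item) == key:   # skip stale duplicates
                del self._best[item]
                return key, item
        raise IndexError("pop from empty queue")
```

The alternative is a heap that tracks each item's position (e.g. via an index map or a pairing/Fibonacci heap) so a genuine decrease-key can restore the heap property in O(log n) or better.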

This should not be just a complexity theory article

This article is written as though good complexity equals a good algorithm in practice which is not at all the case. Most people who look up "priority queue" are going to be wanting a good practical data structure, and this article says precisely nothing about that -- it only appears to do so by substituting complexity statements for performance statements. So this article at best confuses the practical reader by pretending that complexity is a good way to evaluate algorithms and data structures for practicality. —Preceding unsigned comment added by (talk) 20:33, 12 May 2011 (UTC)

I personally think that the article manages pretty well to point out which complexity statements are misleading. Anyhow, complexity is a good way to evaluate algorithms; it is not perfect by any means, but you make it seem like it does not mean anything. What would you have preferred? I think perhaps the article has gotten better in the meantime.


Under the "Usual implementation" heading, the text says:

To get better performance, priority queues typically use a heap as their backbone, giving O(log n) performance for inserts and removals. Alternatively, if a self-balancing binary search tree is used, all three operations take O(log n) time...

All three? It's not clear what this refers to. ChesMartin (talk) 09:51, 10 July 2011 (UTC)

Building a heap in O(n) time?

The following statement under "Usual Implementation" is not entirely true:

"giving O(log n) performance for inserts and removals, and O(n) to build initially"

It will take O(n) to build initially on average but the worst case runtime is O(n log n). I think we should change it to:

"giving O(log n) performance for inserts and removals, and O(n) to build initially on average, O(n log n) in the worst case" Krohn211 (talk) 20:11, 16 November 2011 (UTC)
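For reference, the two build strategies differ: building by repeated insertion is O(n log n) in the worst case, while bottom-up (Floyd) heap construction is O(n) even in the worst case, which is what the O(n) figure usually refers to. A Python sketch of both (Python's `heapq.heapify` uses the bottom-up method):

```python
import heapq

def build_by_insertion(items):
    # n pushes of O(log n) each: O(n log n) worst case.
    heap = []
    for x in items:
        heapq.heappush(heap, x)
    return heap

def build_bottom_up(items):
    # Floyd's bottom-up construction: O(n) even in the worst case.
    heap = list(items)
    heapq.heapify(heap)
    return heap
```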

Specialized Heaps

" in O(log log C) time, but has a space cost for small queues of about O(2^(m/2)), where m is the number of bits in the priority value ". If 1,...,C are the possible priority values, then m should be about log C, right? Why doesn't it just say O(sqrt C) then? I'm not sure I've understood it correctly enough to correct it, though.
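The arithmetic the question points at, assuming priorities are m-bit values so that C = 2^m:

```latex
2^{m/2} = \sqrt{2^m} = \sqrt{C},
\qquad\text{so}\qquad
O\!\left(2^{m/2}\right) = O\!\left(\sqrt{C}\right).
```

So under that assumption the two bounds are the same; the article's phrasing in terms of m just avoids committing to C being an exact power of two.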