
Talk:Shellsort

From Wikipedia, the free encyclopedia


This sounds like it might have been taken straight from a copyrighted textbook. Anyone know the source for sure? Wesley

I found a German version which seems to be a word-by-word translation. Even the examples are the same. Since the German version is longer, it seems that the English text in Wikipedia was taken from there. The URL is http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/shell/shell.htm

80.145.80.78 20:36, 29 Nov 2003 (UTC) (de:Benutzer:Hubi)

Yes, it seems that page has an English version whence much of this article was copied. But it's gone so long without detection that we've made significant changes. I don't know how we can possibly separate the violating from the nonviolating material at this juncture. We may just have to rewrite the article. Deco 18:10, 15 December 2005 (UTC)

Is shell sort different from comb sort?

They are similar but different. Comb sort is to bubble sort what shell sort is to insertion sort. Bubba73 16:16, August 31, 2005 (UTC)

According to NIST, the difference is that comb sort applies only one pass through the elements at each iteration whereas shell sort completely sorts at each iteration.
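For illustration, here is a minimal comb sort sketch in C (my own code, not from either article), showing the single pass per gap; contrast it with shellsort, which completely gap-insertion-sorts at each increment:

 /* Comb sort (sketch): one bubble-style pass per gap, shrinking the
    gap by a constant factor; nothing is completely sorted until the
    final gap-1 passes. */
 void comb_sort(int a[], int n) {
     int gap = n, swapped = 1;
     while (gap > 1 || swapped) {
         gap = gap * 10 / 13;          /* common shrink factor of about 1.3 */
         if (gap < 1) gap = 1;
         swapped = 0;
         for (int i = 0; i + gap < n; i++)
             if (a[i] > a[i + gap]) {
                 int t = a[i]; a[i] = a[i + gap]; a[i + gap] = t;
                 swapped = 1;
             }
     }
 }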


Has anyone actually *used* shell sort at all recently? Quicksort is much faster in practice, and just as simple to implement; merge sort gives you guaranteed n log n performance and only requires sequential access; heapsort is also guaranteed n log n, in-place, with pretty much no overhead whatsoever; and there are a variety of specialised sorting algorithms - for instance, for strings - which have much lower constant factors. Anybody? --Robert Merkel 14:14 May 14, 2003 (UTC)

I actually use Shell sort regularly. Quicksort isn't that much faster than a good shell sort until you get into millions of items. Bubba73 16:16, August 31, 2005 (UTC)


Shell sort is a very powerful algorithm, especially when used in combination with insertion sort.

---

The sample claims to be in Java, and yet is in C.

No, it uses Java's array syntax. All it's missing is the method access qualifier. Deco 18:12, 15 December 2005 (UTC)

Stover

I didn't know about the work of Stover until a couple of days ago, when I saw it in the article. I did some tests, and it is pretty good. First, however, the Sedgewick sequence he gives in figure 3 is not as good as the Sedgewick sequence in (11) of Knuth, vol 3, 2nd ed, 5.2.1 (which is about 5% faster than the one Stover uses). Nevertheless, Stover's #7 sometimes beats the better Sedgewick sequence, but usually not. Bubba73 16:16, August 31, 2005 (UTC)
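For reference, the sequence "1 8 23 77 281 1073 4193" quoted further down this page is, as far as I know, the Sedgewick sequence in question; a sketch of generating it in C (the helper name is mine):

 /* Sketch: Sedgewick's increments 1, 8, 23, 77, 281, 1073, 4193, ...
    (1 followed by 4^k + 3*2^(k-1) + 1 for k = 1, 2, ...). Returns the
    number of gaps below n written into gaps[]. */
 int sedgewick_gaps(int n, int gaps[]) {
     int count = 0;
     gaps[count++] = 1;
     for (int k = 1; ; k++) {
         long g = (1L << (2 * k)) + 3 * (1L << (k - 1)) + 1;
         if (g >= n) break;
         gaps[count++] = (int)g;
     }
     return count;
 }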

Nitrogen peroxide?

O(N2) means something else in the context of calculation. Please write it plain. Thanks. --DLL 12:36, 26 January 2006 (UTC)

Shell or shell?

For many years I tried to visualize the shells, much as there are moving bubbles in bubble sort. Then I figured out that it was named after its inventor, and it was not an analogy.

Would it be worth capitalizing shell where it appears?

Not all names remain capitalized in compound words, but it seems like a lot of people who write about it do capitalize it. Perhaps we should. Deco 00:21, 3 February 2006 (UTC)

Algorithm question

First of all, I think that instead of: for (j = i - gap; j >= 0 && a[j] > value; --j) there should be: for (j = i - gap; j >= 0 && a[j] > value; j -= gap)


The code still yields the correct result because in the final step gap = 1 and the algorithm does an insertion sort. But in terms of complexity it won't perform well.


Secondly, I think the algorithm should also be explained in words; most of the time this helps a lot. An explanation similar to the one in the first link of the external links section would be great. I will insert it if you guys don't mind.

I agree with your second point. However, your first point is incorrect; if we were to move j by gap, we would only sort one column of the matrix containing the list rearranged into gap columns, so the sort would not be effective. This is crucial. Deco 00:34, 19 June 2006 (UTC)
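For reference, a common textbook formulation of the whole sort does step j by gap; because the outer loop over i visits every index, every "column" of the gap matrix still gets sorted. A sketch in C (not the article's exact code):

 /* Shellsort sketch: for each gap, a gap-insertion sort. The outer
    loop over i covers all positions, so stepping j by gap sorts all
    gap "columns", not just one. Shell's original halving sequence
    is used here for brevity. */
 void shellsort(int a[], int n) {
     for (int gap = n / 2; gap > 0; gap /= 2)
         for (int i = gap; i < n; i++) {
             int value = a[i], j;
             for (j = i - gap; j >= 0 && a[j] > value; j -= gap)
                 a[j + gap] = a[j];
             a[j + gap] = value;
         }
 }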

Contradiction regarding running time?

At the top of the article, it says "estimates range from O(n ^ 1.25) to O(n ^ 1.5)" - that is, it is believed to be impossible to go below n^1.25. Later it says - "Depending on the choice of gap sequence, Shellsort has a worst-case running time of ... O(n log^2 n)" - which, as far as I understand, means that a sequence is known for which it is n (ln n)^2. Does it mean something else? This is either a contradiction, or a very unclear wording which should be rewritten. Any ideas? -- Meni Rosenfeld (talk) 18:34, 18 June 2006 (UTC)

The person who wrote the 1.25 was probably unfamiliar with the sequence permitting O(n log^2 n) shellsort. I've updated it. Deco 00:24, 19 June 2006 (UTC)

Now that's more like it :). Thanks. -- Meni Rosenfeld (talk) 15:00, 19 June 2006 (UTC)

Speaking of which, it provides formulas for the O(n ^ 1.25) and O(n ^ 1.5) sequences, but none for the O(n log^2 n) sequence. Why isn't that there? 66.32.36.227 17:46, 7 July 2007 (UTC)
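For what it's worth, the sequence usually cited for the O(n log^2 n) bound is Pratt's: all numbers of the form 2^p * 3^q below n. A sketch of generating it in C (the helper name and the tiny insertion sort are mine):

 /* Sketch: collect Pratt's increments (all 2^p * 3^q < n). They are
    generated out of order, so a small insertion sort puts them in
    ascending order at the end. Returns how many were written. */
 int pratt_gaps(int n, int gaps[]) {
     int count = 0;
     for (long p = 1; p < n; p *= 2)
         for (long q = p; q < n; q *= 3)
             gaps[count++] = (int)q;
     for (int i = 1; i < count; i++) {    /* sort gaps ascending */
         int v = gaps[i], j = i - 1;
         while (j >= 0 && gaps[j] > v) { gaps[j + 1] = gaps[j]; j--; }
         gaps[j + 1] = v;
     }
     return count;
 }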

Need discussion of gap insertion sort and sequence details

I've unlinked gap insertion sort in the article because this algorithm has virtually no use outside of shellsort, and is therefore most appropriate to discuss in this article (correct me if I'm wrong). We need to add discussion of gap insertion sort and of the specific sequences that produce certain asymptotic runtimes. I think it would be helpful to use the matrix characterization, where the list is rewritten as a table of k columns, where k is the gap, and the columns are sorted. Even the summary of this algorithm on the sorting algorithm page is already more complete than this article - maybe we should steal that. Deco 15:06, 19 June 2006 (UTC)

big O/theta consistency

All instances of big O notation are written like O(x); however, in the second line Ω(n^2) appears. Is this on purpose? If not, which version should be used?

Dummies102 02:38, 10 February 2007 (UTC)

Ω(n^2) simply means that the algorithm has an asymptotic lower bound of n^2. Big O is roughly the asymptotic upper bound. Best123 00:55, 28 February 2007 (UTC)

Dividing by 2.2

There is a citation needed for the following quote on the page:

For some unexplained reason, when the increment is divided by 2.2 as opposed to being continuously halved, this algorithm works about 25 to 30 per cent faster on large inputs.

While I cannot offer an actual citation, I have tested this sequence and it proves to be very effective. Here is the output from sorting 1000 random elements using the shell sorter on Sedgewick's analysis page linked to in the External Links section:

 19054 comparisons and 19054 exchanges for 1 2 3 4 6 9 8 12 18 27 16.
 11182 comparisons and 11182 exchanges for 1 8 23 77 281 1073 4193.  <Usually considered to be one of the best>
 7412 comparisons and 7412 exchanges for 1 3 7 21 48 112 336 861 1968.
 6668 comparisons and 6668 exchanges for 1 2 4 9 20 44 97 213.  <Divide by 2.2>
 5258 comparisons and 5258 exchanges for 1 3 6 10 15 21 28 36 45 55 66 78 91 105 120 136 153 171 190 210.  <Triangular numbers>

Aaron Rotenberg 17:27, 12 March 2007 (UTC)
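For illustration, a gap loop in the divide-by-2.2 style might look like the following sketch in C (the forced step from 2 down to 1 is my addition; plain truncation would jump from 2 to 0 and skip the final insertion-sort pass):

 /* Sketch: gaps produced by repeatedly dividing by 2.2,
    e.g. 213, 96, 43, 19, 8, 3, 1 for n = 426. */
 for (int gap = n / 2; gap > 0; gap = (gap == 2) ? 1 : (int)(gap / 2.2)) {
     /* ... one gap-insertion pass over the array here ... */
 }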

These numbers are wrong. Are you really claiming that the number of exchanges is identical to the number of comparisons for random numbers? That would mean every comparison results in an exchange. Stephen Howe 11:43, 11 May 2007 (UTC)

Indeed. It is well-known that 2.2 performs very well. I haven't seen any papers on it either (perhaps because nobody knows why).

I have been using that page from Princeton. One criticism I can give is that the number of comparisons shown in the output may be bugged, because in reality the number of comparisons will always be greater than or equal to the number of swaps. This is evident when you try to Shellsort more than 100,000 numbers using Pratt's increments. Pratt's increments give the least number of swaps among the roster of increment sequences yet take a longer time to finish (especially if you use ALL possible increments.) Hibbard's increments finish faster even though the sorting algorithm will say that it performed more swaps than when using Pratt's. Try comparing the performance with array sizes of 1,000,000 and up, and you will see what I mean ^_^ (Also, make sure that you're using all the increments corresponding to your array size, otherwise you might not get a favorable result.)
I made variations of Hibbard's increments {1,3,7,15,31...} and Knuth's increments {1,4,13,40,121...} by replacing 2 or 3 with numbers in between, and my empirical results show that the most efficient divisor is approximately between 2.2 and 2.212, for some odd reason that still baffles me now. Most articles about Shellsort talk of the same number, although no one has come up with an answer as to why 2.2 seems to work better than the rest. Best123 05:58, 7 April 2007 (UTC)


I can offer an explanation. The 2.2 factor comes from the "Handbook of Algorithms and Data Structures" by Gonnet and Baeza-Yates. This was an incredible book on algorithms and data structures in its time; it is out of print, but I have a copy. Worth getting second-hand. Gonnet and Baeza-Yates say that if you decrease the gap by a constant factor, ending on 1, the constant factor that appears to work best (empirically determined) for Shell sort is 0.4545..., which is 5/11. The reciprocal is, of course, 2.2. An online link to some of the book's algorithms is here

www.dcc.uchile.cl/~rbaeza/handbook/hbook.html

Have a look at the index for an idea of the book. And if you look here

www.dcc.uchile.cl/~rbaeza/handbook/algs/4/414.sort.c.html

you can see the 5/11th factor.

The theory behind this is: given N elements, a suitable sequence of gaps is N * α, N * α^2, N * α^3, ..., 1, where α is between 0 and 1. If α is close to 1, the gap sequence is too long and each gap does few exchanges. If α is close to 0, the gap sequence is too short and each gap does more exchanges than it should. So α being 0.4545... was determined empirically as minimising the number of comparisons/exchanges, depending on your criteria. Stephen Howe 22:47, 9 May 2007 (UTC)
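In code form, the scheme described above might look like this sketch in C (my rendering, not the handbook's actual routine):

 /* Sketch: gaps N*(5/11), N*(5/11)^2, ..., forced to end at 1.
    Integer arithmetic gap*5/11 truncates, so 0 is bumped to 1
    to guarantee a final gap-1 pass. */
 int gap = n;
 do {
     gap = gap * 5 / 11;          /* alpha = 5/11, i.e. divide by 2.2 */
     if (gap == 0) gap = 1;
     /* ... one gap-insertion pass here ... */
 } while (gap > 1);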


I don't have an account, and I don't know if I'm doing this right, either. I don't have much experience with programming (I've only taken three courses at my local high school), but I think I understand part of why this is. Integer rounding, in Java at least, always rounds down (try storing Math.random() as an integer--you always get zero). Maybe offsetting what you're dividing it by can upset the balance a little? --Patrick C.

incrmnt = 3

Why on earth does the C/C++ implementation have a hard-coded starting increment of 3! That certainly isn't right! It should be "incrmnt = size/2"; otherwise it's not much of a 'sequence', is it (3, 1)! —The preceding unsigned comment was added by 203.109.182.198 (talk) 05:28, 18 March 2007 (UTC).

This may have fixed the problem, but it is not exactly an optimal solution. Quoted from higher up on the page: "Shellsort has a proven worst-case running time of O(n^2) (using Shell's increments that start with 1/2 the array size and divide by 2 each time)." In other words, the C++ code uses the worst possible gap sequence. Aaron Rotenberg 03:39, 21 March 2007 (UTC)

for (incrmnt = 1; incrmnt < size; incrmnt = incrmnt * 3 + 1);

Can you explain this line of code in a better way?
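An attempt at a plainer explanation: the line grows incrmnt through Knuth's sequence 1, 4, 13, 40, 121, ... (each term is 3h + 1) until it reaches the array size, and the trailing semicolon means the loop body is intentionally empty. An annotated sketch:

 /* Find the starting increment: walk up Knuth's sequence
    1, 4, 13, 40, 121, ... until incrmnt >= size. */
 for (incrmnt = 1; incrmnt < size; incrmnt = incrmnt * 3 + 1)
     ;   /* empty body: only the final value of incrmnt is wanted */
 /* Presumably (an assumption about the surrounding code) the sort
    then steps back down with incrmnt = (incrmnt - 1) / 3 before
    each gap-insertion pass. */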

Missing Citation

The run time of O(n log^2 n) in the list of worst cases isn't cited with a source. Flamesplash 00:17, 16 August 2007 (UTC)

Is the Python Example Correct?

I am seeing two yield statements, where the second one won't be reached at all. It also uses the numpy module, which is unnecessary. Got to figure out a better one. Phoe6 10:27, 27 August 2007 (UTC)

I see nothing wrong with the generator function. The second yield statement will be reached because when the generator resumes, it will continue from the point at which it yielded.
The only point I question is whether the Python implementation should be in the article at all. Its inclusion suggests incorrectly that it is a good routine to use; in fact, any pure Python sort implementation will be an order of magnitude slower than the built-in, stable list.sort method, which is implemented in C. Peristarkawan 22:26, 3 October 2007 (UTC)

Algorithm

Does someone want to write a "pure" (not language-specific) algorithm for this? Nnkx00 23:55, 29 October 2007 (UTC)