Analysis of Shellsort Algorithms
Annesa Ganguly
Final year student, BCA
The Heritage Academy
Kolkata, India
Abstract: Shellsort is a comparison sort that applies insertion sort at each iteration to sublists of interleaved elements so that, by the last iteration, the whole list is almost sorted. The time complexity of Shellsort depends upon the method of interleaving (called the increment sequence), which gives rise to the variants of Shellsort. However, the problem of finding a method of interleaving that achieves the minimum time complexity of O(n log n) is still open. In this paper, we analyze the performance of variants of Shellsort based on their time complexity. Our measure of time complexity is independent of the machine configuration and considers all the operations of a sorting process. We found that the interleaving method, or increment sequence, proposed by Sedgewick performs best among the analyzed variants.
Keywords: Shellsort; increment sequence; variants; survey; time complexity; algorithm; data structure
II. SURVEY OF SHELLSORT VARIANTS

At the i-th iteration of Shellsort, the list A gets subdivided into h_i sublists, each of size ⌊n/h_i⌋ [12], and insertion sort is applied to each of these sublists. So, the time complexity of the sorting algorithm depends upon the time complexity of sorting each of the sublists. Moreover, since each of these sublists gets sorted, leaving the whole list partially sorted, it is expected that at subsequent iterations there will be fewer swaps than comparisons.
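To make this mechanism concrete, the following is a minimal sketch, not taken from the paper, of Shellsort driven by an arbitrary increment sequence; the function name and the example increments are illustrative assumptions only.

def shellsort(a, increments):
    # Sort list a in place using the given decreasing increment sequence.
    # For each increment h, the elements a[j], a[j+h], a[j+2h], ... form an
    # interleaved sublist that is sorted by insertion sort.
    n = len(a)
    for h in increments:              # the last increment must be 1
        for i in range(h, n):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]       # shift the larger element one stride right
                j -= h
            a[j] = key
    return a

# Illustrative usage with an arbitrary (not surveyed) increment sequence:
# shellsort([9, 1, 8, 2, 7, 3, 6, 4, 5], [4, 2, 1]) -> [1, 2, 3, 4, 5, 6, 7, 8, 9]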
The sequence proposed by Shell [1] has length ⌊log n⌋. Its worst-case time complexity is proved to be O(n^2) when n is a power of 2. To reduce the time complexity, [3] proposed that an even skip length be replaced by the next odd number, resulting in a time complexity of O(n^{3/2}). [4] also achieved the same worst-case complexity using the increment sequence (n/2 + 1, ..., n/2^{⌊log n⌋} + 1, 1). [5] obtained a tighter bound of Θ(n^{3/2}) using the reverse of the following generated sequence (note that mod denotes the modulo operation):

For i = 1 to 2^{⌊log(n−1)⌋}; i = i*2   //Double value of i at each iteration
{
    J = i
    Do {
        Store J            // Note J
        J = (3*J)/2        // Integer division
    } while (J mod 3 == 0 and J < n)   //End of Do-while loop
}   //End of For-loop
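The pseudocode can be transliterated into runnable form for inspection of the generated values; the sketch below assumes that "Store J" means appending J to the output and that log is the base-2 logarithm (both assumptions, since the paper does not say).

import math

def generated_sequence(n):
    # Transliteration of the generator above: for each power of two i up to
    # 2^floor(log2(n-1)), note J and replace J by (3*J)//2 while the new J is
    # divisible by 3 and smaller than n.
    seq = []
    i = 1
    while i <= 2 ** math.floor(math.log2(n - 1)):
        j = i
        while True:
            seq.append(j)          # "Store J"
            j = (3 * j) // 2       # integer division
            if not (j % 3 == 0 and j < n):
                break
        i *= 2                     # double i at each iteration
    return seq

# [5] uses the reverse of this generated sequence as the increments, per the text above:
# generated_sequence(50) -> [1, 2, 3, 4, 6, 9, 8, 12, 18, 27, 16, 24, 36, 32, 48]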
h_i = 9·2^i − 9·2^{i/2} + 1       if i is even
h_i = 8·2^i − 6·2^{(i+1)/2} + 1   if i is odd
Using this increment sequence, O(n^{4/3}) time complexity is achieved. 3-tuple increment sequences for Shellsort are again explored in [10], where the increment sequence (n^{7/15}, n^{1/5}, 1) is proposed, giving a time complexity of O(n^{23/15}). [11] proposed an increment sequence where h_i = 2^i − 1 (i ≥ 1) until h_i ≥ n; the achieved time complexity is almost the same as in [3]. [12] analyzed three increment sequences: (1) (n^{1/3}, 1) with time complexity Ω(n^{5/3}); (2) (n^{1/2}, n^{1/4}, 1) with time complexity O(n^{3/2}); (3) (n^{11/16}, n^{7/16}, n^{3/16}, 1) with time complexity Ω(n^{21/16}). [13], with the increment sequence (⌈n^{1/3} + 1⌉, ⌈n^{1/3}⌉, 1), obtained a time complexity of O(n^{5/3}). A genetic algorithm is used in [14] to generate two increment sequences based on the size of the list A: a 7-tuple increment sequence is used when the size of A is between 1000 and 1 million, and a 15-tuple increment sequence is used when the size of A exceeds 1 million. [15] uses a binary search algorithm for each skip length so that the time complexity of the algorithm becomes O(n log n); the algorithm implicitly uses the reverse of the increment sequence h_i = ⌈(9(9/4)^i − 4)/5⌉ such that 9h_i/4 < n ∧ i ≥ 0. The discussed variants did not achieve the lower bound Θ(n log n) [20] of a comparison sort algorithm. During the writing of this paper, this bound was probabilistically achieved by [2], and [17] proved that to achieve the lower bound the length of the increment sequence must be Θ(log n); such a sequence is yet to be found.
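As an illustration of how the closed-form sequences quoted above expand for a given list size n, the following sketch computes the piecewise sequence displayed earlier (the O(n^{4/3}) variant) and the sequence used by [15]. The function names and the cut-off h_i < n used for the piecewise sequence are assumptions; the cut-off 9h_i/4 < n for [15] is as quoted.

import math

def piecewise_increments(n):
    # h_i = 9*2^i - 9*2^(i/2) + 1 for even i, 8*2^i - 6*2^((i+1)/2) + 1 for odd i,
    # collected in ascending order while h_i < n (this cut-off is assumed).
    seq, i = [], 0
    while True:
        if i % 2 == 0:
            h = 9 * 2**i - 9 * 2**(i // 2) + 1
        else:
            h = 8 * 2**i - 6 * 2**((i + 1) // 2) + 1
        if h >= n:
            return seq
        seq.append(h)
        i += 1

def tokuda_increments(n):
    # Sequence of [15]: h_i = ceil((9*(9/4)^i - 4)/5) for i >= 0, kept while
    # 9*h_i/4 < n; the sort applies the reverse of this ascending sequence.
    seq, i = [], 0
    while True:
        h = math.ceil((9 * (9 / 4) ** i - 4) / 5)
        if 9 * h / 4 >= n:
            return seq
        seq.append(h)
        i += 1

# Example: piecewise_increments(1000) -> [1, 5, 19, 41, 109, 209, 505, 929]
#          tokuda_increments(1000)    -> [1, 4, 9, 20, 46, 103, 233]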
Most of the time-complexity analyses of the Shellsort variants have considered either the number of swaps or the number of comparisons, except [14] and [21], which have additionally used a specialized machine to measure the actual time taken. However, during a run of the algorithm, associated variables such as counter variables and temporary variables, and instructions for increment or decrement, etc., add to the time complexity, and their contribution is proportional to the number of times a loop that uses them runs. In the next section we define a parameter for measuring time complexity that includes these factors, and using this parameter we compare the performances of the Shellsort variants.

III. COMPARATIVE ANALYSIS OF SHELLSORT VARIANTS

The definition (1) includes the number of comparisons, the number of exchanges and the number of times associated variables are used. The definition is also independent of the underlying platform used for implementation of the algorithm.
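Definition (1) itself is not reproduced in this excerpt; purely to illustrate the three kinds of quantities it is said to count, the sketch below instruments the insertion-sort pass of Shellsort with separate counters. The counter names and the exact events tallied as uses of associated variables are assumptions, not the paper's definition.

def instrumented_shellsort(a, increments):
    # Shellsort that tallies comparisons, exchanges (element moves) and uses of
    # associated loop/temporary variables; how these counts are combined into a
    # single complexity parameter is left to definition (1).
    counts = {"comparisons": 0, "exchanges": 0, "variable_uses": 0}
    n = len(a)
    for h in increments:
        for i in range(h, n):
            key = a[i]                      # temporary variable
            j = i                           # counter variable
            counts["variable_uses"] += 2
            while j >= h:
                counts["comparisons"] += 1
                if a[j - h] <= key:
                    break
                a[j] = a[j - h]             # element move
                counts["exchanges"] += 1
                j -= h                      # counter update
                counts["variable_uses"] += 1
            a[j] = key
            counts["exchanges"] += 1
    return a, counts

# Example: instrumented_shellsort(list(range(10, 0, -1)), [4, 2, 1])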
B. Methodology for Comparison

We compare the average-case complexities of the Shellsort variants. We have selected the variants [1], [3]-[5], [7]-[13] and [15]. We have not taken [2] as it is a probabilistic algorithm where sorting is not guaranteed (though the probability of getting a sorted list is very high). The increment sequence
[10] Svante Janson and Donald E. Knuth, "Shellsort with Three Increments", Random Structures and Algorithms, volume 10, issue 1, pp 125-142, January 1997.
[11] Thomas N. Hibbard, "An Empirical Study of Minimal Storage Sorting", Communications of the ACM, volume 6, issue 5, pp 206-213, 1963.
[12] Paul Vitanyi, "On the Average-case Complexity of Shellsort", Random Structures and Algorithms, volume 52, issue 2, pp 354-363, 2018.
[13] M. A. Weiss, "Shellsort with Constant Number of Increments", Algorithmica, volume 16, issue 6, pp 649-654, December 1996.
[14] Richard Simpson and Shashidhar Yachavaram, "Faster Shellsort Sequences: A Genetic Algorithm Application", Computers and Their Applications, 1999.
[15] Naoyuki Tokuda, "An Improved Shellsort", IFIP 12th World Computer Congress on Algorithms, Software, Architecture – Information Processing, 1992.
[16] Bronislava Brejova, "Analyzing Variants of Shellsort", Information Processing Letters, volume 79, issue 5, pp 223-227, September 2001.
[17] Tao Jiang, Ming Li and Paul Vitanyi, "Average-case Complexity of Shellsort", International Colloquium on Automata, Languages and Programming, 1999.
[18] Janet Incerpi and Robert Sedgewick, "Practical Variations of Shellsort", Doctoral Dissertation, INRIA, 1986.
[19] J. L. Ramirez-Alfonsin, "Complexity of the Frobenius Problem", Combinatorica, volume 16, issue 1, pp 143-147, March 1996.
[20] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, "Introduction to Algorithms", 3rd Edition, MIT Press and Prentice Hall of India, February 2010.
[21] D. Ghoshdastidar and Mohit Kumar Roy, "A Study on the Evaluation of Shell's Sorting Technique", The Computer Journal, volume 18, issue 3, pp 234-235, 1975.