Internal Sorting Array
ALGORITHMS
Source code examples based on "A Practical Introduction to Data Structures and Algorithm Analysis"
by Clifford A. Shaffer, Prentice Hall, 1998. Copyright 1998 by Clifford A. Shaffer.
// DSutil.java -- contains a bunch of utility functions.
import java.util.*;
//Elem.java -- Elem interface. This is just an Object with support for a key field.
// IElem.java -- Sample implementation for Elem interface. A record with just an int field.
//Sortmain.java
import java.io.*;
public class Sortmain {
/*
Insertion sort
Idea:
Let a0, ..., an-1 be the sequence to be sorted. At the beginning and after each
iteration of the algorithm the sequence consists of two parts: the first part
a0, ..., ai-1 is already sorted, the second part ai, ..., an-1 is still unsorted
(i = 0, ..., n-1).
In order to insert element ai into the sorted part, it is compared with ai-1, ai-2,
etc. When an element aj with aj <= ai is found, ai is inserted behind it. If no
such element is found, ai is inserted at the beginning of the sequence.
After inserting element ai, the length of the sorted part has increased by one.
In the next iteration, ai+1 is inserted into the sorted part, etc.
While at the beginning the sorted part consists of element a0 only, at the end
it consists of all elements a0, ..., an-1.
Example: The following table shows the steps for sorting the sequence
5 7 0 3 4 2 6 1; in each row the leading elements, up to and including the one
just inserted, form the sorted part. For each iteration, the number of positions
the inserted element has moved is shown in brackets. Altogether this amounts to
17 steps.
5 7 0 3 4 2 6 1 (0)
5 7 0 3 4 2 6 1 (0)
0 5 7 3 4 2 6 1 (2)
0 3 5 7 4 2 6 1 (2)
0 3 4 5 7 2 6 1 (2)
0 2 3 4 5 7 6 1 (4)
0 2 3 4 5 6 7 1 (1)
0 1 2 3 4 5 6 7 (6)
*/
static void insertionSort(Elem[] array) { // Insertion Sort
  for (int i=1; i<array.length; i++) // Insert i'th record
    for (int j=i; (j>0) && (array[j].key() < array[j-1].key()); j--)
      DSutil.swap(array, j, j-1);
}
/*
Properties
• Stable
• O(1) extra space
• O(n²) comparisons and swaps
• Adaptive: Θ(n) time when nearly sorted
• Very low overhead
Discussion
Because of its low overhead and adaptivity, and because it is also stable,
insertion sort is often used as the base case (when the problem size is small)
for more complex algorithms that have higher overhead, such as quicksort.
*/
/*
Shell sort
// [D.L. Shell: A High-Speed Sorting Procedure. Communications of the ACM, 2, 7, 30-32 (1959)]
Idea: the sequence is arranged in a two-dimensional array with h columns, and each
column is sorted individually (by insertion sort). The effect is that the data
sequence is partially sorted. This process is repeated, but each time with a
narrower array, i.e. with a smaller number of columns. In the last step, the array
consists of only one column. In each step, the sortedness of the sequence is
increased, until in the last step it is completely sorted. However, the number of
sorting operations necessary in each step is limited, due to the presortedness of
the sequence obtained in the preceding steps.
Example: the sequence 3 7 9 0 5 1 6 8 4 2 0 6 1 5 7 3 4 9 8 2 is arranged in
7 columns (read row by row, left); sorting each column yields the arrangement
on the right:

3 7 9 0 5 1 6        3 3 2 0 5 1 5
8 4 2 0 6 1 5        7 4 4 0 6 1 6
7 3 4 9 8 2          8 7 9 9 8 2

Data elements 8 and 9 have now already come to the end of the sequence, but a
small element (2) is also still there. In the next step, the sequence is
arranged in 3 columns, which are again sorted:

3 3 2        0 0 1
0 5 1        1 2 2
5 7 4        3 3 4
4 0 6        4 5 6
1 6 8        5 6 8
7 9 9        7 7 9
8 2          8 9
Now the sequence is almost completely sorted. When arranging it in one column in the last step, it is
only a 6, an 8 and a 9 that have to move a little bit to their correct position.
Actually, the data sequence is not arranged in a two-dimensional array, but held in a one-dimensional
array that is indexed appropriately. For instance, data elements at positions 0, 5, 10, 15 etc. would form
the first column of an array with 5 columns. The "columns" obtained by indexing in this way are sorted
with Insertion Sort, since this method has a good performance with presorted sequences.
*/
static void shellSort(Elem[] array) { // Shell Sort
  int[] cols = {1391376, 463792, 198768, 86961,
                33936, 13776, 4592, 1968,
                861, 336, 112, 48,
                21, 7, 3, 1};
  for (int k = 0; k < cols.length; k++) {
    int h = cols[k];
    // h-sort: insertion sort on every chain of elements h apart;
    // i must advance by 1 (not by h) so that every offset is covered
    for (int i=h; i<array.length; i++)
      for (int j=i; (j>=h) && (array[j].key()<array[j-h].key()); j-=h)
        DSutil.swap(array, j, j-h);
  } // end for k
} // end shellSort
/*
Theorem: With the h-sequence 1, 3, 7, 15, 31, 63, 127, ..., 2^k - 1, ... Shellsort needs O(n^(3/2))
steps for sorting a sequence of length n. [A. Papernov, G. Stasevic: A Method of Information Sorting
in Computer Memories. Problems of Information Transmission 1, 63-75 (1965)]
Theorem: With the h-sequence 1, 2, 3, 4, 6, 8, 9, 12, 16, ..., 2^p·3^q, ... Shellsort needs
O(n·log(n)²) steps for sorting a sequence of length n. [V. Pratt: Shellsort and Sorting Networks.
Garland, New York (1979)]
Theorem: With the h-sequence 1, 5, 19, 41, 109, 209, ... Shellsort needs O(n^(4/3)) steps for sorting
a sequence of length n. [R. Sedgewick: Algorithms. Addison-Wesley (1988)]
Neither tight upper bounds on time complexity nor the best h-sequence are known.
Properties
Because of its low overhead, relatively simple implementation, adaptive properties, and sub-quadratic
time complexity, shell sort may be a viable alternative to the O(n·lg(n)) sorting algorithms for some
applications.
*/
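Pratt's h-sequence from the second theorem consists of the numbers of the form 2^p·3^q. As an illustration (not part of the handout), a small generator for the gaps below a given bound, largest first:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch (illustrative, not from the handout): generate Pratt's h-sequence,
// the numbers 2^p * 3^q, in descending order below a given bound.
public class PrattGaps {
    static List<Integer> gaps(int bound) {
        List<Integer> result = new ArrayList<>();
        for (long p2 = 1; p2 < bound; p2 *= 2)    // powers of 2
            for (long v = p2; v < bound; v *= 3)  // times powers of 3
                result.add((int) v);
        result.sort(Collections.reverseOrder());  // largest gap first
        return result;
    }

    public static void main(String[] args) {
        System.out.println(gaps(20)); // → [18, 16, 12, 9, 8, 6, 4, 3, 2, 1]
    }
}
```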
/*
Bubblesort
Idea: repeatedly traverse the array, swapping adjacent elements that are out of
order. After the i-th pass the i-th largest element has reached its final
position, so each pass may stop one element earlier. If a pass performs no
swap, the array is already sorted and the algorithm stops early.
*/
static void bubbleSort(Elem[] array) { // Bubble Sort
  boolean flag = true; // set when a swap occurs during a pass
  for (int i=0; flag && i<array.length-1; i++) { // Bubble up i'th elem
    flag = false;
    for (int j=0; j<array.length-i-1; j++) {
      if (array[j+1].key() < array[j].key()) {
        DSutil.swap(array, j+1, j);
        flag = true;
      }
    }
  }
}
/*
Properties
• Stable
• O(1) extra space
• Worst case n²/2 comparisons and swaps
• Adaptive: Θ(n) when nearly sorted
Discussion
Bubble sort has many of the same properties as insertion sort, but has slightly higher overhead.
*/
/*
Selection Sort
*/
static void selectionSort(Elem[] array) { // Selection Sort
  for (int i=0; i<array.length-1; i++) { // Select i'th record
    int lowindex = i; // Remember its index
    for (int j=i+1; j<array.length; j++) // Find the least value
      if (array[j].key() < array[lowindex].key())
        lowindex = j;
    DSutil.swap(array, i, lowindex); // Put it in place
  }
}
/*
Properties
• Not stable
• O(1) extra space
• Θ(n²) comparisons, Θ(n) swaps
• Not adaptive
Discussion
From the comparisons presented here, one might conclude that selection sort should never be used. It
does not adapt to the data in any way, so its runtime is always quadratic.
However, selection sort has the property of minimizing the number of swaps (or write-operations). In
applications where the cost of swapping items is high, selection sort very well may be the algorithm of
choice.
*/
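The claim about minimizing writes can be checked by counting: selection sort performs at most n-1 swaps on any input. A small sketch (not from the handout) on plain int arrays:

```java
// Sketch (illustrative, not from the handout): count the swaps selection sort
// performs; the count never exceeds n-1, whatever the input order.
public class SelectionSwapCount {
    static int selectionSortCountingSwaps(int[] a) {
        int swaps = 0;
        for (int i = 0; i < a.length - 1; i++) {
            int lowindex = i; // index of the least remaining value
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[lowindex]) lowindex = j;
            if (lowindex != i) { // swap only when something must move
                int t = a[i]; a[i] = a[lowindex]; a[lowindex] = t;
                swaps++;
            }
        }
        return swaps;
    }

    public static void main(String[] args) {
        int[] a = {7, 6, 5, 4, 3, 2, 1, 0}; // reverse order
        System.out.println(selectionSortCountingSwaps(a)); // → 4
    }
}
```

For the reverse-ordered input above only 4 swaps are needed (each swap places two elements), while bubble sort would perform 28.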
/*
Mergesort
The Mergesort algorithm is based on a divide and conquer strategy. First, the sequence to be sorted is
decomposed into two halves (Divide). Each half is sorted independently (Conquer). Then the two
sorted halves are merged to a sorted sequence (Combine)
*/
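The handout does not show the merge sort code at this point. A minimal sketch of the Divide/Conquer/Combine steps, written for plain int arrays rather than the Elem interface used elsewhere in these notes, could look like this:

```java
import java.util.Arrays;

// Sketch (the handout's own implementation is not shown): top-down merge sort
// on int[], using a temp buffer for the combine step.
public class MergeSortSketch {
    static void mergeSort(int[] a) {
        mergeSort(a, new int[a.length], 0, a.length - 1);
    }

    static void mergeSort(int[] a, int[] temp, int lo, int hi) {
        if (lo >= hi) return;            // base case: 0 or 1 element
        int mid = (lo + hi) / 2;         // divide
        mergeSort(a, temp, lo, mid);     // conquer left half
        mergeSort(a, temp, mid + 1, hi); // conquer right half
        for (int i = lo; i <= hi; i++) temp[i] = a[i];
        int i = lo, j = mid + 1;         // combine: merge the two sorted runs
        for (int k = lo; k <= hi; k++) {
            if (i > mid) a[k] = temp[j++];
            else if (j > hi) a[k] = temp[i++];
            else if (temp[j] < temp[i]) a[k] = temp[j++]; // '<' keeps it stable
            else a[k] = temp[i++];
        }
    }

    public static void main(String[] args) {
        int[] a = {72, 6, 57, 88, 60, 42, 83, 73, 48, 85};
        mergeSort(a);
        System.out.println(Arrays.toString(a));
        // → [6, 42, 48, 57, 60, 72, 73, 83, 85, 88]
    }
}
```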
/*
Properties
• Stable
• Θ(n) extra space
• Θ(n·lg(n)) time
• Not adaptive
Discussion
If using Θ(n) extra space is of no concern, then merge sort is an excellent choice. It is simple to
implement, and it is the only one of the standard O(n·lg(n)) sorting algorithms that is stable.
There do exist linear time in-place merge algorithms (for the last step of the algorithm), but they are
extremely complex. The complexity is justified for applications such as external sorting where the
extra space is not available, but one of the important properties (stability) may be lost.
When sorting linked lists, merge sort requires only Θ(lg(n)) extra space for recursion. Given its many
other advantages, merge sort is an excellent choice for sorting linked lists.
*/
/*
Quicksort
Quicksort is one of the fastest and simplest sorting algorithms. It works recursively by a divide-and-
conquer strategy.
First, the sequence to be sorted is partitioned into two parts, such that all elements of the first part are
less than or equal to all elements of the second part (divide).
Then the two parts are sorted separately by recursive application of the same procedure (conquer).
Recombination of the two parts yields the sorted sequence (combine).
Example: partitioning trace for the sequence 72 6 57 88 60 42 83 73 48 85
(each row shows the parts after a further divide step; the last row is the
sorted result):
72 6 57 88 60 42 83 73 48 85
48 6 57 42 60 88 83 73 72 85
6 42 57 48 72 73 85 88 83
42 48 57 85 83 88
42 48 83 85
6 42 48 57 60 72 73 83 85 88
*/
static void quickSort(Elem[] array) { // Quicksort wrapper
  quicksort(array, 0, array.length-1);
}
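The recursive quicksort(array, i, j) and its partition step are not shown in this excerpt. A sketch of the usual scheme on plain int arrays, with the middle element as an illustrative pivot choice (not necessarily Shaffer's exact code):

```java
import java.util.Arrays;

// Sketch (illustrative, not shown in this excerpt): the recursive quicksort
// and partition behind the quickSort wrapper, for plain int arrays.
// The pivot is the middle element, moved to the end before partitioning.
public class QuickSortSketch {
    static void quickSort(int[] a) { quicksort(a, 0, a.length - 1); }

    static void quicksort(int[] a, int i, int j) {
        if (j <= i) return;                   // 0 or 1 element: done
        swap(a, (i + j) / 2, j);              // move pivot out of the way
        int k = partition(a, i, j - 1, a[j]); // divide
        swap(a, k, j);                        // put pivot in its final place
        quicksort(a, i, k - 1);               // conquer left part
        quicksort(a, k + 1, j);               // conquer right part
    }

    static int partition(int[] a, int l, int r, int pivot) {
        while (l <= r) {
            while (l <= r && a[l] < pivot) l++;  // find a key >= pivot
            while (r >= l && a[r] >= pivot) r--; // find a key < pivot
            if (l < r) swap(a, l, r);            // exchange the pair
        }
        return l; // first position of the right part
    }

    static void swap(int[] a, int x, int y) { int t = a[x]; a[x] = a[y]; a[y] = t; }

    public static void main(String[] args) {
        int[] a = {72, 6, 57, 88, 60, 42, 83, 73, 48, 85};
        quickSort(a);
        System.out.println(Arrays.toString(a));
    }
}
```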
/*
Properties
• Not stable
• O(lg(n)) extra space
• O(n²) time, but typically O(n·lg(n)) time
• Not adaptive
The average-case running time satisfies the recurrence
T(n) = c·n + (2/n)·Σ_{k=0..n-1} T(k),  T(1) = d.
Multiplying both sides by n:
n·T(n) = c·n² + 2·Σ_{k=0..n-1} T(k)
Writing the same equation for n+1:
(n+1)·T(n+1) = c·(n+1)² + 2·Σ_{k=0..n} T(k)
Subtracting the first from the second:
(n+1)·T(n+1) - n·T(n) = c·(2n+1) + 2·T(n)
⇒ T(n+1) = c·(2n+1)/(n+1) + ((n+2)/(n+1))·T(n)
which unfolds to T(n) = O(n·lg(n)) on average.
*/
/*
Discussion
When carefully implemented, quicksort has low overhead. When a stable sort is not needed, quicksort
is a decent general-purpose sort.
The time complexity ranges between n·lg(n) and n² depending on the key distribution and the pivot
choices. When the key distribution has many different values, choosing a pivot randomly guarantees
O(n·lg(n)) time with very high probability. However, when there are O(1) distinct keys, the standard
2-way partitioning quicksort exhibits its worst case O(n²) time complexity no matter the pivot choices.
The algorithm requires O(lg(n)) extra space for the recursion stack in the worst case if recursion is
performed on only the smaller sub-array; the second recursive call, being tail recursion, may be done
with iteration. If both sub-arrays are sorted recursively, then the worst case space requirement grows to
O(n).
*/
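The O(lg(n)) space bound described above (recurse only on the smaller sub-array, iterate on the larger one) can be sketched as follows; this is an illustration on plain int arrays with a Lomuto-style partition, not code from the handout:

```java
import java.util.Arrays;

// Sketch (illustrative, not from the handout): quicksort that recurses only on
// the smaller partition and loops on the larger one, bounding the recursion
// depth by lg(n).
public class QuickSortSmallFirst {
    static void sort(int[] a, int lo, int hi) {
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p - lo < hi - p) {  // left part is smaller
                sort(a, lo, p - 1); // recurse on it ...
                lo = p + 1;         // ... iterate on the right part
            } else {                // right part is smaller
                sort(a, p + 1, hi);
                hi = p - 1;
            }
        }
    }

    // Lomuto-style partition with a[hi] as pivot (illustrative choice).
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t; // pivot into final position
        return i;
    }

    public static void main(String[] args) {
        int[] a = {5, 7, 0, 3, 4, 2, 6, 1};
        sort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // → [0, 1, 2, 3, 4, 5, 6, 7]
    }
}
```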
static void print_array(Elem[] A) { // print "[ a, b, ..., z ]"
  System.err.print("\n [ ");
  for (int i=0; i<A.length-1; i++) System.err.print(A[i]+", ");
  System.err.print(A[A.length-1] + " ]\n ");
}
public static void main(String[] args) {
// a1 ... a6: six copies of the same input array
// (their initialization is omitted in this excerpt)
System.err.print("original : \n ");print_array(a1);
insertionSort(a1);
System.err.print("insertionSort : \n ");print_array(a1);
shellSort(a2);
System.err.print("shellSort : \n ");print_array(a2);
bubbleSort(a3);
System.err.print("bubbleSort : \n ");print_array(a3);
selectionSort(a4);
System.err.print("selectionSort : \n ");print_array(a4);
mergeSort(a5);
System.err.print("mergeSort : \n ");print_array(a5);
quickSort(a6);
System.err.print("quickSort : \n ");print_array(a6);
}
}