Computer Science Sorting Algorithms Runtime Analysis - Big O
Bubble Sort (stable)

Runtime Analysis:
# of Comparisons = based off of the inner loop ->
(n-1) + (n-2) + … + 1 = ∑ k=1 to n-1 of k = n(n-1)/2
Runtime Complexity: Θ(n²)

Explanation: In a typical iteration, each adjacent pair of elements is compared, starting with the first two elements, then the second and third, and all the way to the final two elements. -> Each time two elements are compared, if they are already in sorted order, nothing is done to them, and the next pair of elements is compared. In the case where the two elements are not in sorted order, the two elements are swapped, putting them in order. (There will be exactly (n-1) iterations in the outer loop.)
Outer loop: n-1 iterations
Inner loop: n-i iterations

Loop Invariant: 2 ways to phrase it:
1) At the start of the kth iteration (when i = k), the largest k-1 elements are in the last positions in sorted order
2) At the end of the kth iteration, the rightmost k elements are sorted and in place
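A minimal Python sketch of the bubble sort described above (illustrative only; the function name bubble_sort and returning the list are assumptions, not from the notes):

def bubble_sort(a):
    """Sort list a in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):               # outer loop: n-1 passes
        for j in range(n - 1 - i):       # inner loop shrinks each pass; the largest i elements are already in place
            if a[j] > a[j + 1]:          # adjacent pair out of order -> swap them
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

# bubble_sort([5, 2, 4, 1, 3]) -> [1, 2, 3, 4, 5]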
Selection Sort (not stable)

Runtime Analysis:
# of Comparisons = based off of the inner loop ->
(n-1) + (n-2) + … + 1 = ∑ k=1 to n-1 of k = n(n-1)/2
Runtime Complexity: Θ(n²)

Explanation: The Selection Sort is a very basic sort. It works by finding the smallest element in the array, putting it at the beginning of the list, and then repeating that process on the unsorted remainder of the data. -> Rather than making successive swaps with adjacent elements like bubble sort, selection sort makes only one swap per pass, swapping the smallest number with the number occupying its correct position.
Outer loop: n-1 iterations
Inner loop: n-i iterations

Loop Invariant: 2 ways to phrase it:
1) At the start of the kth iteration (when i = k), the first k-1 elements are the smallest elements and are sorted
2) Geeks: two loop invariant conditions
   - in the outer loop, the array is sorted for the first i elements (n-1 iterations)
   - in the inner loop, min is always the minimum value of the part of the array examined so far
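A minimal Python sketch of the selection sort described above (illustrative only; the function name selection_sort is an assumption):

def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum of the unsorted remainder."""
    n = len(a)
    for i in range(n - 1):                   # after this pass, a[:i+1] holds the smallest elements, sorted
        min_idx = i
        for j in range(i + 1, n):            # scan the unsorted remainder for its minimum
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # a single swap per pass, unlike bubble sort
    return a

# selection_sort([5, 2, 4, 1, 3]) -> [1, 2, 3, 4, 5]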
Insertion Sort (stable)

Runtime Analysis:
# of Comparisons = based off of the inner loop ->
(n-1) + (n-2) + … + 1 = ∑ k=1 to n-1 of k = n(n-1)/2
Runtime Complexity:
- Worst Case: Θ(n²)
- Best Case: Θ(n) (n-1 comparisons)

Explanation: In general, in the jth step of the insertion sort, the jth element of the list is inserted into the correct position in the list of the previously sorted j-1 elements.
Outer loop: n-1 iterations
Inner loop: at most j-1 iterations

Loop Invariant:
1) At the start of the kth iteration (when i = k+1), the first k elements are in sorted order
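A minimal Python sketch of the insertion sort described above (illustrative only; the function name insertion_sort is an assumption):

def insertion_sort(a):
    """Sort list a in place; element j is inserted into the already-sorted prefix a[:j]."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:    # shift larger elements of the sorted prefix to the right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                  # drop the jth element into its correct position
    return a

# insertion_sort([5, 2, 4, 1, 3]) -> [1, 2, 3, 4, 5]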
Merge Sort (stable)

Runtime Analysis:
# of Comparisons = each level of the recursive tree takes cn time, and there are log n levels in the tree
Total Cost = ∑ i=1 to lg n of (cn) = cn(lg n)
Runtime Complexity: cn(log n) = Θ(nlogn)

Explanation: To understand the efficiency of the mergesort algorithm it is useful to separate the merging from the sorting. The sorting takes place indirectly, by repeatedly splitting the data in half until sorted singleton sets are created. The merging then rebuilds the complete, original data set by splicing together the sorted mini-lists.
- breaking down of the data occurs with efficiency (log n)
- the merging process is linear each time two lists have to be merged (n)

Loop Invariant:
1) Initialization:
2) Maintenance:
3) Termination:
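A minimal Python sketch of the split-then-merge scheme described above (illustrative only; the function names merge_sort and merge are assumptions):

def merge_sort(a):
    """Return a sorted copy of a: split in half, sort each half recursively, then merge."""
    if len(a) <= 1:                  # a singleton (or empty) list is already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    return merge(left, right)

def merge(left, right):
    """Splice two sorted lists together into one sorted list in linear time."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal elements in their original order, so the sort is stable
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])             # append whatever remains of either list
    out.extend(right[j:])
    return out

# merge_sort([5, 2, 4, 1, 3]) -> [1, 2, 3, 4, 5]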
Quick Sort

- if your list consists of 1 element, you are done
- otherwise, choose any element in your list to be the "pivot"

Code: For QUICK SORT:

QuickSort (A, p, r)
    if r > p then {
        q = Partition (A, p, r)
        QuickSort (A, p, q-1)
        QuickSort (A, q+1, r)
    }

Partition (A, p, r)
    leftp = p
    rightp = r - 1
    pivot = A[r]
    while True do {
        while A[leftp] < pivot do {
            leftp = leftp + 1
        }
        while A[rightp] > pivot && rightp > p do {
            rightp = rightp - 1
        }
        if leftp >= rightp { break }
        else { swap A[leftp] and A[rightp] }
    }
    swap A[leftp] and A[r]
    return leftp

Runtime Analysis:
Best Case Scenario:
- At each step we divide the list into two equal parts
- Performance in this case would be the same as MergeSort -> Θ(nlogn) (this is also the expected runtime when the pivot is chosen randomly)
Worst Case Scenario:
- At each step, if the pivot element is the largest element, then there will only be one subproblem, which is only one element smaller -> Θ(n²)

Explanation: Good choice of a pivot element is key for performance. Strategies to choose the pivot: 1) median of the first, middle, and last elements 2) median of the first, middle, last, n/4th, and 3n/4th elements 3) choose the pivot element randomly

Loop Invariant: After Partition, the pivot is in the exact correct position.
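Pivot strategy 1) above is the median-of-three rule. A small Python sketch of it (illustrative only; the helper name median_of_three_index and the idea of swapping the chosen element into position r before partitioning are assumptions, not from the notes):

def median_of_three_index(a, p, r):
    """Return whichever of the indices p, (p+r)//2, r holds the median of the three values."""
    mid = (p + r) // 2
    first, second, third = sorted([(a[p], p), (a[mid], mid), (a[r], r)])
    return second[1]                 # index of the median value

# Usage with the Partition pseudocode above: swap A[median_of_three_index(A, p, r)]
# with A[r] first, so that "pivot = A[r]" picks up the median-of-three value.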
Linear Search

Code:
LinearSearch (Array, length, x)
    for i = 1 to length
        if (Array[i] == x) { return i }
    return -1

Runtime Analysis:
Best Case Scenario: c1 * 1 + c2 = c1 + c2
Worst Case Scenario: c1 * length + c2

Binary Search

Code:
BinarySearch (value, Array, low, high)
    if (low > high) { return -1 }
    mid = (low + high)/2
    if (value == Array[mid]) { return mid }
    if (value < Array[mid]) {
        return BinarySearch (value, Array, low, mid - 1)
    } else {
        return BinarySearch (value, Array, mid + 1, high)
    }

Counting Sort

beats the Ω(nlogn) lower bound for comparison sorts
Runtime Complexity: Θ(n)
- Takes Θ(n + k), and if k = Θ(n), then it simply takes Θ(n)
Explanation: Applies to the case where all the elements of the array are integers in the range 1 to k. Basic idea is to go through the list and count the number of occurrences of each element. Then, construct a sorted list based on this count information.
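A minimal Python sketch of the counting idea described above, for integers in the range 1 to k (illustrative only; the function name counting_sort is an assumption, and this simple count-and-rebuild version ignores satellite data):

def counting_sort(a, k):
    """Sort a list of integers in the range 1..k by counting occurrences: Theta(n + k)."""
    count = [0] * (k + 1)
    for x in a:                          # one pass over the input to count each value
        count[x] += 1
    out = []
    for value in range(1, k + 1):        # rebuild the sorted list from the counts
        out.extend([value] * count[value])
    return out

# counting_sort([4, 1, 3, 4, 2], k=4) -> [1, 2, 3, 4, 4]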
Radix Sort

Applies to the case where all the elements have multiple keys, each within a fixed range, e.g. Dates (year, month, day)
beats the Ω(nlogn) lower bound for comparison sorts
Runtime Complexity: Θ(d * (n+k)), where k is the largest range of any of the fields involved in the sort
- Runtime will be Θ(dn) if k = O(n)
Explanation: Basic idea is that we use a stable linear-time sort (such as counting sort) to sort the array d times -> d is the number of fields involved in the sort.
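A small Python sketch of the d-pass idea for the Dates example above (illustrative only; representing dates as (year, month, day) tuples and using Python's stable built-in sorted in place of counting sort are assumptions):

def radix_sort_dates(dates):
    """Sort (year, month, day) tuples by stably sorting on each field, least significant first."""
    for field in (2, 1, 0):                              # day, then month, then year: d = 3 passes
        dates = sorted(dates, key=lambda t: t[field])    # sorted() is stable, standing in for counting sort
    return dates

# radix_sort_dates([(2021, 5, 3), (2020, 12, 1), (2021, 5, 1)])
# -> [(2020, 12, 1), (2021, 5, 1), (2021, 5, 3)]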
Bucket Sort

beats the Ω(nlogn) lower bound for comparison sorts
Runtime Complexity: Θ(n) (expected, for uniformly distributed input)
Explanation: Sorting algorithm in which the elements are separated into several groups that are called buckets. Each bucket is then sorted individually, using any other algorithm or recursively using bucket sort itself. Then the sorted buckets are gathered together.
- mainly useful when the input is uniformly distributed over a range
-> we divide the range into n equal-sized buckets. We then distribute the n elements into the buckets (in one linear-time pass). Since the elements have a uniform distribution, there won't be many elements within each bucket. So we can quickly sort the elements within each bucket and then go through the buckets in order. (The buckets can be sorted using insertion sort.)
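A minimal Python sketch of the bucket sort described above, assuming the inputs are uniformly distributed floats in the range [0, 1) (the function name bucket_sort and the use of sorted() in place of per-bucket insertion sort are assumptions):

def bucket_sort(a):
    """Sort floats in [0, 1): scatter into n buckets, sort each bucket, then concatenate."""
    n = len(a)
    if n == 0:
        return []
    buckets = [[] for _ in range(n)]            # n equal-sized buckets covering the range [0, 1)
    for x in a:                                 # one linear pass to distribute the elements
        buckets[min(int(x * n), n - 1)].append(x)
    out = []
    for b in buckets:                           # each bucket is small on average, so sorting it is cheap
        out.extend(sorted(b))                   # the notes suggest insertion sort; sorted() stands in here
    return out

# bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72]) -> [0.17, 0.26, 0.39, 0.72, 0.78]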