This document compares and contrasts several sorting algorithms: insertion sort, merge sort, and quicksort. It provides pseudocode examples and analyzes the time complexity of each using Big O notation. Insertion sort runs in O(n) time if the list is already sorted but O(n^2) in the worst case. Merge sort divides the list in half at each step and runs in O(n log n) time. Quicksort selects a pivot and partitions the list around it, running in O(n log n) time on average but O(n^2) in the worst case.
Insertion sort: advantage over bubble sort.
It requires fewer comparisons than bubble sort unless the list is in reverse order.
If the array is already sorted, we make only one comparison per element (N comparisons in total). It can also be used to add a new element to an already-sorted array.
Time complexity.
Insertion sort: 1) an outer for loop runs over the index of the entry to be inserted. 2) an inner while loop performs the insertion. 3) the while loop swaps the insertion element with the one before it as long as the insertion element is lesser.
for i = 1 to n-1
    j = i
    while j > 0 and A[j] < A[j-1]
        swap(A[j], A[j-1])
        j = j - 1
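The loop above can be sketched in Java (a minimal illustration; the class and array names are arbitrary):

```java
public class InsertionSort {
    // Insertion sort: for each index i, swap the element leftwards
    // while it is smaller than its left neighbor.
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int j = i;
            while (j > 0 && a[j] < a[j - 1]) {
                int tmp = a[j];        // swap a[j] and a[j-1]
                a[j] = a[j - 1];
                a[j - 1] = tmp;
                j = j - 1;
            }
        }
    }
}
```

Note the while condition fails immediately on already-sorted input, giving the O(n) best case described above.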
Merge sort:
Takes two sorted sequences and merges them into one sorted sequence.
The outer loop's run width i starts at one and doubles each pass. The inner loop index j starts at the index of the start of each pair of runs to be merged. The merge function's first argument is &A[j], a pointer to (the address of) the start of the first run; the second argument is the length of the first run, i; the third argument is the combined size of both runs, which fixes where the second run ends relative to the start of the first. This is usually 2i, but may be shorter for the last merge of each pass.
for i = 1; i < size; i = 2i
    for j = 0; j < size; j = j + 2i
        merge(&A[j], i, min(2i, size - j))
// temp is dynamically allocated
merge(A, end1, end2)
    i = 0, j = end1, k = 0
    while i < end1 and j < end2
        if A[i] < A[j]
            temp[k] = A[i]
            i = i + 1, k = k + 1
        else
            temp[k] = A[j]
            j = j + 1, k = k + 1
When the while loop above completes, we have copied all the elements from one of the two runs into temp.
Then copy the remaining (largest) elements from whichever run is not yet exhausted:
    while i < end1
        temp[k] = A[i]
        i = i + 1, k = k + 1
    while j < end2
        temp[k] = A[j]
        j = j + 1, k = k + 1
Finally, copy the merged result from temp back to the original array:
    for i = 0; i < end2; i = i + 1
        A[i] = temp[i]
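The bottom-up scheme above can be sketched in Java. Java has no pointer arithmetic, so instead of passing &A[j] this sketch passes the start offset as an extra parameter (class and method names are illustrative):

```java
public class BottomUpMergeSort {
    // Run width i doubles each pass; each pass merges adjacent runs
    // a[j .. j+i) and a[j+i .. j+min(2i, size-j)).
    public static void sort(int[] a) {
        int size = a.length;
        for (int i = 1; i < size; i = 2 * i) {
            for (int j = 0; j + i < size; j = j + 2 * i) {
                merge(a, j, i, Math.min(2 * i, size - j));
            }
        }
    }

    // Merge sorted runs a[start .. start+end1) and a[start+end1 .. start+end2).
    private static void merge(int[] a, int start, int end1, int end2) {
        int[] temp = new int[end2];                  // dynamically allocated scratch array
        int i = 0, j = end1, k = 0;
        while (i < end1 && j < end2) {               // take the smaller head element
            if (a[start + i] < a[start + j]) {
                temp[k] = a[start + i]; i++;
            } else {
                temp[k] = a[start + j]; j++;
            }
            k++;
        }
        while (i < end1) { temp[k] = a[start + i]; i++; k++; }   // leftovers from run 1
        while (j < end2) { temp[k] = a[start + j]; j++; k++; }   // leftovers from run 2
        for (i = 0; i < end2; i++) a[start + i] = temp[i];       // copy back
    }
}
```

The inner loop condition `j + i < size` skips runs that have no second half to merge with, matching the "maybe shorter for the last merge" note above.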
Quicksort: select a pivot and partition everything relative to the pivot point. Use recursion to select a pivot within each partition, and keep partitioning until the array is fully sorted.
1) Pick a pivot. 2) Scan with a top and a bottom index: if a bottom-side element is greater than the pivot and a top-side element is less than the pivot, swap them; if an element is already on the correct side of the pivot, no swap is required and we move to the next element. When the scan finishes, everything to the right of the pivot is greater than it, and everything to the left is less than it.
Recurse on each side, picking a new pivot (e.g. a middle element) within each subarray.
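A quicksort sketch in Java. The notes' top/bottom scanning corresponds to the Hoare partition scheme; this sketch uses the simpler Lomuto scheme (last element as pivot), which partitions the same way conceptually (names are illustrative):

```java
public class QuickSort {
    public static void sort(int[] a) {
        quicksort(a, 0, a.length - 1);
    }

    private static void quicksort(int[] a, int low, int high) {
        if (low < high) {
            int p = partition(a, low, high);  // pivot lands at its final index p
            quicksort(a, low, p - 1);         // recurse on elements less than the pivot
            quicksort(a, p + 1, high);        // recurse on elements greater than the pivot
        }
    }

    // Lomuto partition: after the loop, everything left of index i is < pivot.
    private static int partition(int[] a, int low, int high) {
        int pivot = a[high];
        int i = low;
        for (int j = low; j < high; j++) {
            if (a[j] < pivot) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[high]; a[high] = tmp;  // place pivot
        return i;
    }
}
```

Picking the last element as pivot is what produces the O(n^2) worst case on already-sorted input; picking a middle or random element avoids that common case.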
mergesort(L) produces a new sorted list. If the size of the list is one, L = L; return — the sorted list is the original list itself.
Else: split the list into two non-empty halves, L1 and L2. L1 = mergesort(L1) and L2 = mergesort(L2) recursively sort each half.
L = merge routine applied to the two sorted lists.
Recursively, we work with the indices of the array: pass the indices of the sublist, so splitting the list just means splitting the index range.
mergesort(A, i, j)
    if i = j, return
    else: split the list into two parts, using an index between i and j
        mid = (i + j) / 2
        mergesort(A, i, mid)
        mergesort(A, mid+1, j)
        merge(A, i, j, mid)
When merging, compare the front elements of the two halves; the smaller of the two gets chosen. Once one index crosses the mid value, the rest of the other half can be copied down.
merge(A, i, j, mid) — implemented in the Java code below.
package de.vogella.algorithms.sort.mergesort;

public class Mergesort {
    private int[] numbers;
    private int[] helper;
    private int number;

    public void sort(int[] values) {
        this.numbers = values;
        number = values.length;
        this.helper = new int[number];
        mergesort(0, number - 1);
    }

    private void mergesort(int low, int high) {
        // Check if low is smaller than high; if not, the array is sorted
        if (low < high) {
            // Get the index of the element which is in the middle
            int middle = low + (high - low) / 2;
            // Sort the left side of the array
            mergesort(low, middle);
            // Sort the right side of the array
            mergesort(middle + 1, high);
            // Combine them both
            merge(low, middle, high);
        }
    }

    private void merge(int low, int middle, int high) {
        // Copy both parts into the helper array
        for (int i = low; i <= high; i++) {
            helper[i] = numbers[i];
        }
        int i = low;
        int j = middle + 1;
        int k = low;
        // Copy the smallest values from either the left or the right side back
        // to the original array
        while (i <= middle && j <= high) {
            if (helper[i] <= helper[j]) {
                numbers[k] = helper[i];
                i++;
            } else {
                numbers[k] = helper[j];
                j++;
            }
            k++;
        }
        // Copy the rest of the left side of the array into the target array.
        // (Any remaining right-side elements are already in place, since
        // helper and numbers still agree on that range.)
        while (i <= middle) {
            numbers[k] = helper[i];
            k++;
            i++;
        }
    }
}
During the mergesort process, the objects in the collection are divided into two collections. To split a collection, mergesort takes the middle of the collection and splits it into its left and its right part.
Once the sorting of the two collections is finished, their results are combined. To combine both collections, mergesort starts at the beginning of each. It picks the object which is smaller and inserts it into the new collection; it then selects the next element from that collection and again takes the smaller element from the two collections.
To avoid the creation of too many collections, typically one new collection is created, and the left and right sides are treated as different collections.
Big O notation: Big O notation is used to measure how well a computer algorithm scales as the amount of data involved increases. It isn't always a measure of raw speed, however, as you'll see; instead it measures how well an algorithm scales.
Example: f(n) = 45n^3 + 20n^2 + 19. If n = 2, f(n) = 459.
If n = 10, f(n) = 47,019.
The constant 19 has little bearing, and the 20n^2 term contributes very little; the 45n^3 term accounts for almost all of the algorithm's work, and it is what determines how badly it scales. Dropping the lower-order terms and the constant factor 45, the complexity is O(n^3).
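A small Java check of those numbers, and of how completely the n^3 term dominates as n grows (class and method names are illustrative):

```java
public class GrowthDemo {
    // f(n) = 45n^3 + 20n^2 + 19, the example polynomial above.
    public static long f(long n) {
        return 45 * n * n * n + 20 * n * n + 19;
    }

    // Fraction of f(n) contributed by the 45n^3 term alone.
    public static double cubicShare(long n) {
        return (45.0 * n * n * n) / f(n);
    }
}
```

f(2) = 459 and f(10) = 47,019 as computed above; by n = 1000 the cubic term already accounts for more than 99.9% of the total, which is why only O(n^3) is kept.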
O(1): executes in the same amount of time regardless of how big the array is.
O(N) (e.g. finding the minimum): time to complete is in direct proportion to the amount of data. Linear search: look at each array element in turn. In the worst-case scenario, every single item has to be examined.
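A linear-search sketch in Java (names illustrative), showing the worst case where every element is examined:

```java
public class LinearSearch {
    // O(N): returns the index of target, or -1 if absent.
    public static int search(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i;  // found: stop early
        }
        return -1;  // worst case: all n elements were examined
    }
}
```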
O(N^2): time is proportional to the square of the amount of data (e.g. bubble sort), caused by nested iterations: on each pass of the outer loop, the inner loop goes through the entire list again. SLOWER.
Binary search: O(log N). The amount of data being considered decreases by 50% each time through the algorithm, which is pretty fast. As N increases, log N increases dramatically more slowly than N, so an increase in data has little effect on the running time — the amount of data is halved each time. Binary search requires the data to be sorted first (e.g. by a sort such as bubble sort). Efficient.
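A binary-search sketch in Java; it assumes the input array is already sorted, as noted above (names illustrative):

```java
public class BinarySearch {
    // O(log N): halves the search range [low, high] each iteration.
    // Returns the index of target, or -1 if absent. Assumes 'a' is sorted.
    public static int search(int[] a, int target) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;  // avoids overflow of (low + high)
            if (a[mid] == target) return mid;
            else if (a[mid] < target) low = mid + 1;   // discard lower half
            else high = mid - 1;                       // discard upper half
        }
        return -1;
    }
}
```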
O(n log n): comparing and moving values — very efficient sorting without repeated shifting (e.g. merge sort); each element is moved only O(log n) times. Each comparison cuts the set of possible final sorted orders in half, so the minimum number of comparisons is log(n!) — the log of the factorial of n — which equals log n + log(n-1) + ... + log(1). This sum grows like n log n, so comparisons = n log n.
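A quick numeric check of that last claim, assuming base-2 logarithms (class and method names are illustrative): log2(n!) is the sum log2(n) + log2(n-1) + ... + log2(1), and it stays below, but grows proportionally to, n * log2(n).

```java
public class ComparisonBound {
    // log2(n!) computed as the sum of logs, avoiding huge factorials.
    public static double log2Factorial(int n) {
        double sum = 0;
        for (int k = 2; k <= n; k++) {
            sum += Math.log(k) / Math.log(2);
        }
        return sum;
    }

    // The n log n quantity it is compared against.
    public static double nLog2n(int n) {
        return n * Math.log(n) / Math.log(2);
    }
}
```

For n = 1000, log2(n!) is roughly 85% of n*log2(n), and the ratio approaches 1 as n grows — the lower bound that makes O(n log n) the best a comparison sort can do.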