Computational Complexity
[Figure: running time versus size of problem, comparing growth rates such as log(n)]
We can also determine bubble sort's running time, albeit with a little more math. Remember that bubble sort
involves comparing elements by pairs. In a list of length n, there are n - 1 pairs to compare. For example, in an
array of size 6, we would have to compare array[0] and array[1], then array[1] and array[2], and so on until
array[4] and array[5]: that's 5 pairs for an array of size 6. Bubble sort ensures that after k passthroughs of the
array, the last k elements will be in the correct location. So the first passthrough requires n - 1 comparisons, the
next passthrough only n - 2, and so forth until only 1 pair remains to be compared.
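To make the passthrough structure concrete, here is a minimal bubble sort sketch in C; the function name bubble_sort and the sample values are just for illustration. On passthrough k, the inner loop compares only n - 1 - k pairs, since the last k elements are already in place, and a swapped flag lets the sort stop early once a passthrough makes no swaps.

#include <stdio.h>

// A minimal bubble sort sketch: compare adjacent pairs, and after
// passthrough k the last k elements are in their correct locations.
void bubble_sort(int array[], int n)
{
    for (int k = 0; k < n - 1; k++)
    {
        int swapped = 0;
        for (int i = 0; i < n - 1 - k; i++)
        {
            // Compare the pair array[i] and array[i + 1], swapping if needed
            if (array[i] > array[i + 1])
            {
                int temp = array[i];
                array[i] = array[i + 1];
                array[i + 1] = temp;
                swapped = 1;
            }
        }
        // If a passthrough made no swaps, the array is already sorted
        if (!swapped)
        {
            break;
        }
    }
}

int main(void)
{
    int array[6] = {23, 42, 4, 16, 8, 15};
    bubble_sort(array, 6);
    for (int i = 0; i < 6; i++)
    {
        printf("%i ", array[i]);
    }
    printf("\n");
}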
In math, (n - 1) + (n - 2) + ... + 1 can be simplified to n(n - 1)/2, which expands further to n²/2 - n/2. For our
array of size 6, that is 5 + 4 + 3 + 2 + 1 = 15 comparisons. Looking at the expression n²/2 - n/2, the leading term
is n²/2, which is the same as (1/2)n². Dropping the coefficient, we are left with n². Therefore, in the worst case
scenario, bubble sort is on the order of n², which can be expressed as O(n²).
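The count can also be checked directly. The loop bounds below mirror the sketch above, and for n = 6 the tally comes out to 15, exactly n(n - 1)/2; this is just a sanity check on the arithmetic, not part of the sort itself.

#include <stdio.h>

int main(void)
{
    int n = 6;
    int comparisons = 0;

    // Passthrough k compares n - 1 - k pairs, just like the sort above,
    // so the total is (n - 1) + (n - 2) + ... + 1
    for (int k = 0; k < n - 1; k++)
    {
        for (int i = 0; i < n - 1 - k; i++)
        {
            comparisons++;
        }
    }

    // Prints "15 15": the loop total matches the formula n(n - 1)/2
    printf("%i %i\n", comparisons, n * (n - 1) / 2);
}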
Similar to big O, we have big Ω (omega) notation. Big Ω refers to the best case scenario. In linear search, the
best case is that the desired element is the first in the array. Because the time needed to find it does not depend
on the size of the array, we can say the operation happens in constant time. In other words, linear search is Ω(1).
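As a sketch of that best case, consider a simple linear search in C; the function name linear_search and the array values are illustrative. If the desired value happens to sit at index 0, the loop makes exactly one comparison no matter how large the array is.

#include <stdio.h>

// A minimal linear search sketch: return the index of value in array,
// or -1 if it is not present.
int linear_search(int array[], int n, int value)
{
    for (int i = 0; i < n; i++)
    {
        if (array[i] == value)
        {
            return i;
        }
    }
    return -1;
}

int main(void)
{
    int array[6] = {8, 15, 16, 23, 42, 4};

    // Best case: 8 is the first element, so only one comparison is made
    // regardless of the size of the array
    printf("%i\n", linear_search(array, 6, 8));
}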
In bubble sort, the best case scenario is an already sorted array. Since bubble sort only knows the array is sorted
once a full passthrough makes no swaps, this still requires n - 1 comparisons. Again, keeping only the leading
term and dropping the coefficients, we say that bubble sort is Ω(n).
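To see that best case in action, here is a small counting version of the same swapped-flag idea sketched earlier, run on an already sorted array of size 6; the array contents are arbitrary, and the program reports 5 comparisons, i.e. n - 1.

#include <stdio.h>

int main(void)
{
    int n = 6;
    int array[6] = {4, 8, 15, 16, 23, 42};   // already sorted: best case
    int comparisons = 0;

    for (int k = 0; k < n - 1; k++)
    {
        int swapped = 0;
        for (int i = 0; i < n - 1 - k; i++)
        {
            comparisons++;
            if (array[i] > array[i + 1])
            {
                int temp = array[i];
                array[i] = array[i + 1];
                array[i + 1] = temp;
                swapped = 1;
            }
        }
        // The first passthrough makes no swaps, so the sort stops here
        if (!swapped)
        {
            break;
        }
    }

    // Prints 5: even a sorted array needs n - 1 comparisons to confirm
    // that no swaps are required, which is why bubble sort is Omega(n)
    printf("%i\n", comparisons);
}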