CSDS233 HW5
a) Merge Sort on [12, 5, 9, 3, 15, 7, 2, 10, 6, 8]
i. Split the list into a left half [12, 5, 9, 3, 15] and a right half [7, 2, 10, 6, 8]
ii. Split the left half in two: [12, 5] and [9, 3, 15]
iii. Split and sort the pieces of the left half:
   [12, 5] splits into [12] and [5]; [9, 3, 15] splits into [9] and [3, 15]
   [12, 5] sorts to [5, 12] (from 12 > 5); [3, 15] is already sorted (3 < 15)
iv. Merge [9] and [3, 15] in sorted order: [3, 9, 15] (from 9 > 3)
v. Merge [5, 12] and [3, 9, 15] in sorted order: [3, 5, 9, 12, 15]
vi. Split the right half in two: [7, 2] and [10, 6, 8]
vii. Split and sort the pieces of the right half:
   [10, 6, 8] splits into [10] and [6, 8]
   [7, 2] sorts to [2, 7] (from 7 > 2); [6, 8] is already sorted (6 < 8)
viii. Merge [10] and [6, 8] in sorted order: [6, 8, 10] (from 10 > 6 and 10 > 8)
ix. Merge [2, 7] and [6, 8, 10] in sorted order: [2, 6, 7, 8, 10] (from 2 < 6, then 7 > 6 and 7 < 8)
x. Merge [3, 5, 9, 12, 15] and [2, 6, 7, 8, 10] in sorted order:
   2 < 3, then 3 < 6 and 5 < 6, then 6 < 9, 7 < 9, 8 < 9, then 9 < 10, then 10 < 12,
   and the remaining [12, 15] is appended.
   Final sorted list: [2, 3, 5, 6, 7, 8, 9, 10, 12, 15]
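For reference, here is a minimal Java sketch of the top-down merge sort traced above (the class, method, and variable names are my own, not the assignment's starter code):

    import java.util.Arrays;

    class MergeSortDemo {
        // Recursively split the array in half, sort each half, then merge them.
        static int[] mergeSort(int[] a) {
            if (a.length <= 1) return a;                 // base case: already sorted
            int mid = a.length / 2;
            int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
            int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
            return merge(left, right);
        }

        // Merge two sorted arrays into one sorted array.
        static int[] merge(int[] left, int[] right) {
            int[] out = new int[left.length + right.length];
            int i = 0, j = 0, k = 0;
            while (i < left.length && j < right.length) {
                out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
            }
            while (i < left.length) out[k++] = left[i++];   // copy any leftovers
            while (j < right.length) out[k++] = right[j++];
            return out;
        }

        public static void main(String[] args) {
            int[] data = {12, 5, 9, 3, 15, 7, 2, 10, 6, 8};
            System.out.println(Arrays.toString(mergeSort(data)));
            // prints [2, 3, 5, 6, 7, 8, 9, 10, 12, 15]
        }
    }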
b) QuickSort on [12, 5, 9, 3, 15, 7, 2, 10, 6, 8]
i. Choose a pivot: 8 (the last element)
ii. Partitioning:
   a. 12 > 8: no action
   b. 5 <= 8: swap 5 with 12 -> [5, 12, 9, 3, 15, 7, 2, 10, 6, 8]
   c. 9 > 8: no action
   d. 3 <= 8: swap 3 with 12 -> [5, 3, 9, 12, 15, 7, 2, 10, 6, 8]
   e. 15 > 8: no action
   f. 7 <= 8: swap 7 with 9 -> [5, 3, 7, 12, 15, 9, 2, 10, 6, 8]
   g. 2 <= 8: swap 2 with 12 -> [5, 3, 7, 2, 15, 9, 12, 10, 6, 8]
   h. 10 > 8: no action
   i. 6 <= 8: swap 6 with 15 -> [5, 3, 7, 2, 6, 9, 12, 10, 15, 8]
   The pivot 8 is now placed in its correct position by swapping it with 9 -> [5, 3, 7, 2, 6, 8, 12, 10, 15, 9]
iii. Recursively sort the left sublist [5, 3, 7, 2, 6], choosing pivot 6
iv. Partitioning:
   a. 5 <= 6: no swap needed
   b. 3 <= 6: no swap needed
   c. 7 > 6: no action
   d. 2 <= 6: swap 2 with 7 -> [5, 3, 2, 7, 6]
   e. Place 6 correctly by swapping it with 7 -> [5, 3, 2, 6, 7]
v. Recursively sort the left sublist [5, 3, 2] with pivot 2:
   5 > 2 and 3 > 2, so no swaps occur during partitioning; swap the pivot 2 with the
   first element 5 -> [2, 3, 5]
   The remaining right sublists ([7] and [12, 10, 15, 9]) are sorted the same way,
   giving the final sorted list [2, 3, 5, 6, 7, 8, 9, 10, 12, 15]
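The partitioning above follows a Lomuto-style scheme with the last element as the pivot; a minimal Java sketch under that assumption (illustrative only, not the course's reference implementation):

    import java.util.Arrays;

    class QuickSortDemo {
        static void quickSort(int[] a, int low, int high) {
            if (low < high) {
                int p = partition(a, low, high);  // pivot ends up at index p
                quickSort(a, low, p - 1);         // sort elements left of the pivot
                quickSort(a, p + 1, high);        // sort elements right of the pivot
            }
        }

        // Lomuto partition: elements <= pivot are moved to the front of the range.
        static int partition(int[] a, int low, int high) {
            int pivot = a[high];                  // last element is the pivot
            int i = low - 1;
            for (int j = low; j < high; j++) {
                if (a[j] <= pivot) {
                    i++;
                    swap(a, i, j);
                }
            }
            swap(a, i + 1, high);                 // place the pivot in its final position
            return i + 1;
        }

        static void swap(int[] a, int i, int j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }

        public static void main(String[] args) {
            int[] data = {12, 5, 9, 3, 15, 7, 2, 10, 6, 8};
            quickSort(data, 0, data.length - 1);
            System.out.println(Arrays.toString(data));  // [2, 3, 5, 6, 7, 8, 9, 10, 12, 15]
        }
    }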
Worst Case:
This means that the list is sorted in reverse order. In this case, the algorithm must swap
every pair of adjacent elements it compares. It requires n-1 passes, with multiple swaps per
pass to bubble each element up to its correct position. The inner loop performs n-1,
n-2, n-3, …, 1 comparisons across those n-1 passes, so there are O(n^2) total
comparisons. Since every comparison finds an out-of-order adjacent pair, there are also
O(n^2) total swaps. This results in an overall time complexity of O(n^2).
Average Case:
This means that the list is in random order. In this case, on average about half of the
element pairs are out of order, so the number of comparisons and swaps falls
somewhere between the best and worst cases. While fewer swaps and comparisons
are made than in the worst case, the complexity remains on the same order as the worst
case, so the comparisons, swaps, and overall time complexity are all O(n^2).
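For reference, a minimal Java sketch of the adjacent-swap bubble sort these cases describe (illustrative only; in the reverse-sorted worst case every comparison in the inner loop triggers a swap):

    class BubbleSortDemo {
        static void bubbleSort(int[] a) {
            // Each pass bubbles the largest unsorted element to the end of the range,
            // so pass k performs n - 1 - k comparisons.
            for (int pass = 0; pass < a.length - 1; pass++) {
                boolean swapped = false;
                for (int j = 0; j < a.length - 1 - pass; j++) {
                    if (a[j] > a[j + 1]) {        // adjacent pair out of order
                        int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                        swapped = true;
                    }
                }
                if (!swapped) break;              // no swaps in a pass: already sorted
            }
        }
    }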
If there are not enough buckets, then the algorithm becomes less efficient: each bucket
will contain more elements, leading to more time spent sorting individual buckets and
reducing performance. With fewer buckets there are more elements per bucket, and the time
complexity tends toward O(n log n) rather than the O(n) that is possible in the best case. In
terms of accuracy, there can be a slight reduction in effectiveness, since elements that are
too widely varied end up in the same bucket, which reduces how much the bucketing itself
accomplishes. This is generally a small impact, however.

If there are too many buckets, then the algorithm becomes less efficient because there are
fewer elements per bucket. This creates more memory overhead, as there is a high
likelihood of empty or one-element buckets. The largest deficiency that arises with more
buckets is this memory overhead, which decreases performance. In terms of accuracy,
there should be no impact from using too many buckets: most buckets will hold zero or
one element, so there is no sorting to be done within them, and the algorithm simply
iterates over the input, which results in no accuracy impact.
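To make the bucket-count trade-off concrete, here is a minimal Java sketch of bucket sort for values assumed to lie in a known range [min, max], with the number of buckets exposed as a parameter (the names are my own, not from the assignment):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class BucketSortDemo {
        // Sorts values assumed to lie in [min, max] using numBuckets buckets.
        static List<Integer> bucketSort(List<Integer> values, int min, int max, int numBuckets) {
            List<List<Integer>> buckets = new ArrayList<>();
            for (int i = 0; i < numBuckets; i++) buckets.add(new ArrayList<>());

            double width = (double) (max - min + 1) / numBuckets;  // value range per bucket
            for (int v : values) {
                int idx = (int) ((v - min) / width);               // scatter into buckets
                buckets.get(idx).add(v);
            }

            List<Integer> sorted = new ArrayList<>();
            for (List<Integer> bucket : buckets) {
                Collections.sort(bucket);   // few buckets: these calls dominate the cost
                sorted.addAll(bucket);      // many buckets: scanning mostly-empty lists dominates
            }
            return sorted;
        }
    }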
With a non-uniform dataset, the effectiveness is significantly reduced, as some buckets will
hold extremely large numbers of elements while others hold none. This drives up the time
complexity, and also the space complexity, since there is a large number of buckets with
very few elements. There are a few things we could do to handle this type of data:
1. Dynamic Bucket Allocation: We could define buckets based off of statistics of the
   data, such as the mean or median, so that the distribution of data across buckets is
   more balanced (see the sketch after this list).
2. Increasing the Number of Buckets: Using even more buckets reduces the number of
   overflowing buckets, with the downside of increasing the overhead of managing too
   many buckets.
3. Weighted Bucket Ranges: Instead of using a fixed increment for the bucket ranges, we
   could make some ranges cover more possible values and others fewer. This reduces the
   number of buckets with too few elements, and also prevents other buckets from growing
   too large.
4. Preprocess the Data: To better distribute the data, we could normalize it via
   exponentiation or a logarithmic transformation so that the values are more evenly
   distributed.
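As one possible version of the first idea (statistics-based bucket boundaries), the sketch below derives cut points from the sorted data so that each bucket receives roughly the same number of elements; the class and method names are hypothetical, and in practice the cut points would usually be computed from a sample rather than a full sort of the input:

    import java.util.Arrays;

    class QuantileBuckets {
        // Builds bucket boundaries so each bucket receives roughly the same number
        // of elements, even when the data is heavily skewed.
        // Assumes data is non-empty and numBuckets >= 1.
        static int[] boundaries(int[] data, int numBuckets) {
            int[] sorted = data.clone();
            Arrays.sort(sorted);                          // O(n log n) preprocessing
            int[] bounds = new int[numBuckets - 1];
            for (int i = 1; i < numBuckets; i++) {
                bounds[i - 1] = sorted[i * sorted.length / numBuckets];  // i-th cut point
            }
            return bounds;
        }

        // Index of the bucket whose range contains v.
        static int bucketIndex(int v, int[] bounds) {
            int idx = 0;
            while (idx < bounds.length && v >= bounds[idx]) idx++;
            return idx;
        }
    }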
Bucket Sort can be most efficient (compared to Quick Sort and Merge Sort) on evenly or
uniformly distributed ranges, where it achieves near-linear time complexity. It can also pull
ahead in scenarios where the data is dense and small with respect to its range. Here, the
data can be arranged into well-filled, evenly laid out buckets and the algorithm can shine.
Part 2) Insertion Sort on LinkedList
Algorithm:
1. Initialize the algorithm
A. Maintain a dummy node pointing to the head to allow for handling
edge cases like inserting at the start of the list.
B. Iterate through the original list one node at a time
2. Insert into Sorted List
A. For each node in the unsorted part of the list, find its correct
position within the sorted part of the list and insert it there
3. Update Pointers
A. Relink the pointers of the affected nodes to keep the list's
structure intact, and repeat until there are no more elements left in the
unsorted part of the list
Pseudocode:

    // Initialize dummy node
    dummy = new Node()
    dummy.next = head
    // Track current node and loop
    current = head
    while current.next is not null
        if current.next.val < current.val
            // Grab the node to be moved
            toInsert = current.next
            current.next = toInsert.next
            // Find correct position for toInsert
            position = dummy
            while position.next.val < toInsert.val
                position = position.next
            // Insert the node
            toInsert.next = position.next
            position.next = toInsert
        else
            current = current.next
    // The sorted list starts at dummy.next
    return dummy.next

Time Complexity:
Each node is visited exactly once, contributing O(n). To find the correct position, the
algorithm traverses part of the sorted portion, resulting in a worst-case time complexity of
O(n^2) for the entire list. Thus, the time complexity of this implementation is O(n^2),
because there are n nodes that need to be checked, effectively n times each. Since the
algorithm works in place, there is no extra memory overhead and the space complexity is O(1).
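A runnable Java version of this pseudocode, assuming a simple singly linked Node class with int val and Node next fields (these names are my own, not the course's provided list class):

    class Node {
        int val;
        Node next;
        Node(int val) { this.val = val; }
    }

    class LinkedListInsertionSort {
        // Sorts the list in place and returns the new head.
        static Node insertionSort(Node head) {
            Node dummy = new Node(0);     // dummy node simplifies inserting at the front
            dummy.next = head;
            Node current = head;
            while (current != null && current.next != null) {
                if (current.next.val < current.val) {
                    Node toInsert = current.next;   // grab the node to be moved
                    current.next = toInsert.next;   // unlink it from the list
                    Node position = dummy;          // scan the sorted prefix for the spot
                    while (position.next.val < toInsert.val) {
                        position = position.next;
                    }
                    toInsert.next = position.next;  // splice the node back in
                    position.next = toInsert;
                } else {
                    current = current.next;         // already in order, advance
                }
            }
            return dummy.next;
        }

        public static void main(String[] args) {
            int[] vals = {12, 5, 9, 3, 15, 7, 2, 10, 6, 8};
            Node head = null, tail = null;
            for (int v : vals) {                    // build the list from the HW example
                Node n = new Node(v);
                if (head == null) { head = tail = n; } else { tail.next = n; tail = n; }
            }
            for (Node n = insertionSort(head); n != null; n = n.next) {
                System.out.print(n.val + " ");      // prints 2 3 5 6 7 8 9 10 12 15
            }
            System.out.println();
        }
    }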