
Al-Balqa Applied University

Faculty of Engineering

Computer Engineering Department

Improved Selection Sort & Quick Sort Algorithms

Prepared By:
Areej Ameen
Asmaa Al-anati
Esraa Alswaeer
Layan Mouallah

Supervised By:
ENG. Ibtehal Mishal
Abstract:
One of the basic problems of Computer Science is sorting a list of items. It refers to the arrangement of numerical, alphabetical, or character data in statistical order. Bubble, insertion, selection, merge, and quick sort are the most common ones, and they all have different performances based on the size of the list to be sorted, with complexity of O(n^2) for bubble and insertion sort and O(n log n) for merge, heap, and quick sort. As the size of a list increases, some sorting algorithms turn out to perform better than others. As the size of the dataset increases, there is always the chance of duplication or some form of redundancy occurring in the list. Sorting is considered a fundamental operation in Computer Science, as it is used as an intermediate step in many programs. For example, the binary search algorithm (one of the fastest search algorithms) requires that data be sorted before the search can be done accurately at all times. Data is generally sorted to facilitate the process of searching. As a result of its vital role in computing, several sorting techniques have been proposed: bubble, insertion, selection, merge, and quick sort. The formal definition of the sorting problem is as follows:

Input: A sequence of n numbers in some random order (a1, a2, a3, ..., an).
Output: A permutation (a'1, a'2, a'3, ..., a'n) of the input sequence such that a'1 ≤ a'2 ≤ a'3 ≤ ... ≤ a'n. [1]

1. Introduction:
The sorting method used on a list of elements is linked to efficient data management and usage. With the rise of data sizes and the need for repeated access, it is important to use a faster, less complex sorting method. According to Jadoon et al. (2011), choosing a sorting method depends on various factors, such as:
• The size of the list (number of elements to be sorted).
• The extent to which the given input sequence is already sorted.
• The probable constraints on the given input values.
• The system architecture on which the sorting operation will be performed.
• The type of storage devices to be used: main memory or disks [2].

One of the sorting methods is selection sort, an in-place comparison-based algorithm that divides the list into two parts: the sorted part, which is empty at first, and the unsorted part. It searches the unsorted part for the minimum (or maximum) value, swaps it with the leftmost (or rightmost) value of the unsorted part so that it joins the sorted part, and repeatedly does the same with the remaining values in the unsorted part until all values are sorted. In the enhanced selection sort (ESSA), the concept is to memorize the position of the last maximum value and start the next search from there, to avoid wasting time on repeated searching.

On the other hand, the quick sort method is a divide-and-conquer method: it breaks the problem into sub-problems, finds a solution to each, and then combines them using recursive calls. The concept of quick sort is to divide the list into two parts, greater than and less than a chosen value (usually the rightmost one) called the pivot, using two pointers moving in opposite directions. The left-to-right pointer advances until it points to a value not less than the pivot, while the right-to-left pointer advances until it points to a value not greater than the pivot; the two pointers then swap values. This continues until the two pointers cross, at which point the pivot is swapped with the closer pointer's value. After this, the list is broken at the pivot point into two lists, each representing a sub-problem, and the previous steps are applied to each until the whole list is sorted.

In this study, our main purpose is to examine how two sorting algorithms of different complexity levels work, comparing the simplest method (selection sort), whose average complexity is O(n^2), with a highly efficient one (quick sort), which has an average complexity of O(n log n). In this notation, the O represents the complexity of the algorithm and n represents the size of the input data.

Example of Selection Sort :

59 41 31 41 26 58

1- We take the first value as the minimum value.
2- Compare the next value with the minimum value.
3- Swap if it is smaller.
4- Repeat steps 2-3 until i equals n.

The sorted array is:


26 31 41 41 58 59
2. Improved Selection Sort Algorithm (ISSA)
Existing selection sort (SSA) algorithm:

ARRAY

Unsorted part Sorted part

The SSA partitions the list into two main logical parts: the sorted part and the unsorted part. Each iteration picks a value from the unsorted part and places it in the sorted part, making the sorted partition grow while the unsorted partition shrinks with each iteration. The process terminates when the size of the unsorted part is one. The procedure that selects a value to be moved to the sorted part returns the minimum (or maximum) value in the unsorted partition, which is then swapped into its correct position.

There are several types of improved selection sort algorithms, and all of them work better than the plain Selection Sort Algorithm:

Optimized Selection Sort Algorithm (OSSA): "starts sorting the array from both ends. In a single iteration, the smallest and largest elements in the unsorted part of the array are searched and swapped [1]. The array is logically partitioned into three parts: lower-sorted, unsorted, and upper-sorted. The search for the maximum and minimum is done in the unsorted partition; the minimum is moved to the lower-sorted part and the maximum to the upper-sorted part. All values in the upper-sorted part are greater than or equal to the values in the lower-sorted part. The process is continued until the whole list or array is sorted. The algorithm is able to halve the run time of selection sort: O(n^2)/2." [1]

Enhanced Selection Sort Algorithm (ESSA): The concept of the ESSA is to memorize the location of the past maximum and start searching from that point in the subsequent iteration. The arrangement of the elements of the list influences the time greatly: the same set of data may take different times to be sorted as a result of their arrangement. The average case of the algorithm is, however, O(n^2).

Hybrid Selection Sort Algorithm (HSSA): "The HSSA uses a single Boolean variable FLAG to signal the termination of execution based on the order of the list, a[i-1] >= a[i] >= a[i+1]. The best scenario is when the list is already ordered; here the algorithm terminates during the first pass, hence it will have a run time of O(n). What this means is that, when data is not ordered, the algorithm behaves generally like the old selection sort algorithm." [2]

lower-sorted | unsorted | upper-sorted


The principle of the Improved Selection Sort:

"1. Initialize i to 1.
2. Repeat steps 3-5 until i equals n.
3. Search from the beginning of the unsorted part of the list to the end.
4. Enqueue the locations of all values that are the same as the maximum value.
5. Use the indices on the queue to perform swapping.

List – A[n]
Queue – Q[n]" [3]

EX: A[0] A[1] A[2] A[3] A[4] A[5] A[6] A[7] A[8]
A[n] 1 2 3 1 2 4 5 2 1
Q[n]
PASS 1

A[n] 1 2 3 1 2 4 5 2 1
Q[n] A[3] A[8]

PASS 2

A[n] 1 1 1 2 2 4 5 2 3
Q[n] A[7]
PASS 3

A[n] 1 1 1 2 2 2 5 4 3
Q[n] A[8]
PASS 4

A[n] 1 1 1 2 2 2 3 4 5
The list is sorted at the end of the fourth iteration or pass. The existing selection sort
will take more time to sort the same list.

The run time of the ISSA depends on the number of distinct values found in the list to be sorted. If the number of distinct values is large (close to n), the run time of the algorithm can be approximated as O(n^2). However, if the number is very small, the algorithm completes the sorting in the order of O(n).
The complexity of the ISSA:

Number of Distinct Values | Run Time   | Big-O
1                         | T = n      | O(n)
2                         | T = 2n     | O(n)
3                         | T = 3n     | O(n)
...                       | ...        | ...
n-2                       | T = (n-2)n | O(n^2)
n                         | T = n^2    | O(n^2)

The principle of work of the ISSA is based on repetition: if the list contains only one distinct value, repeated throughout, a single pass suffices, so the run time is n and the complexity is O(n). But if there are many distinct values with few repetitions, the run time approaches n^2, so the complexity is O(n^2).
Worst and best case run time:

Figure 1: The figure shows the relationship between the run time for a list of size n and the number of distinct values when using the ISSA. The relationship is directly proportional: as the number of distinct values increases, the run time increases linearly.

Figure 2: illustrates the relationship between the number of distinct values in a list and the time needed to sort it. The number is expressed as a ratio of the size of the list (n). If the number of distinct values is half the size of the list, then the algorithm takes about half the time the old selection sort algorithm takes. From Figure 2, as the number of distinct values decreases, the run time of the sort also decreases.
Analysis of the performance of the various selection sort algorithms:

The number of iterations was measured on 11 different sets of 1000 values each, with the percentage of redundancies in each set ranging from 0% to 100% in steps of 10%.

The result: from the table, the improved selection sort algorithm (ISSA) recorded the best performance when the percentage of redundancies exceeds 50%.
3. Quick Sort Algorithm

Quick Sort: The Quick Sort algorithm works on the principle of divide and conquer. The idea of this principle is to divide the problem to be solved into several smaller problems ("big" and "small" here refer to the number of elements addressed) that are similar in principle to the big problem; the small problems are usually independent of each other, and each is solved separately. Then comes the stage of assembling the solved parts, so that together they form a solution to the complete problem.

The basic stages that the algorithm passes through are:

1. The Divide Step: In this stage, the sizes of the small problems into which the big problem is divided are determined.

2. The Conquer Step: Here, the small problems are solved separately.

3. The Combine Step: Here, the solutions to the small problems are assembled, which together form the solution of the original problem.

After explaining how Divide and Conquer works, let's explain how the Quick Sort
algorithm works:

First:
Pivot Selection: Select an element from the array. This chosen element is called the pivot. Usually, this element is in the middle, at the beginning, or at the end of the array.

Second:
Partitioning: After the first step, the rearrangement of the elements of the array begins, so that all the elements that are smaller than the selected pivot are placed before it and the larger ones are placed after it.
The principle of Quick Sort (on an example where the pivot value is 3):

1. Determine the pivot: "3".

2. Start the pointers at the left and right ends.

3. Move the left pointer right while its value is less than the pivot; since 4 > 3, stop. Move the right pointer left while its value is greater than the pivot: since 5 > 3, shift the right pointer; since 3 == 3, stop.

4. Swap the values at the two pointers.

5. Move both pointers one more step and repeat until they cross.

The code of Quick Sort:

int partition(int a[], int left, int right) {
    int i = left, j = right;
    int tmp;
    int pivot = a[(left + right) / 2];   /* pivot: the middle element */
    while (i <= j) {
        while (a[i] < pivot) i++;        /* advance until a value >= pivot */
        while (a[j] > pivot) j--;        /* retreat until a value <= pivot */
        if (i <= j) {
            tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++; j--;
        }
    }
    return i;
}

void quicksort(int a[], int left, int right) {
    int index = partition(a, left, right);
    if (left < index - 1)
        quicksort(a, left, index - 1);   /* sort the left sub-list */
    if (index < right)
        quicksort(a, index, right);      /* sort the right sub-list */
}
Worst Case of Quicksort:

In the worst case, the pivot element may land in either the first or the last position. In that case the elements are always divided in 1 : (n-1) proportions. The recurrence relation for such a division is:

T(n) = T(1) + T(n-1) + c·n = O(n²)

There are O(n) levels of recursion, each doing O(n) work: O(n) · O(n) = O(n²). Thus, the worst-case running time of quick sort is O(n²).
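Unrolling the worst-case recurrence step by step makes the quadratic bound explicit:

```latex
T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = \cdots
     = T(1) + c\,(2 + 3 + \cdots + n)
     = T(1) + c\left(\tfrac{n(n+1)}{2} - 1\right)
     = O(n^2)
```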

Best Case of Quicksort:


“The Best case of quick sort: We cut the array size in half each time,
So the depth of the recursion in log2 n
At each level of the recursion, all the partitions at that level do work that is
linear in n.

So, the Best case running time of quick sort is O(n log2 n ) “[4]

O( log2 n )* O(n)= O(n log2 n ).


T(n)= 2T(n/2)+O(n)= O(n log2 n )
Empirical Evaluation:
"[The experiment] shows the Quick Sort algorithm is significantly faster (about 3 times faster) than the Merge Sort algorithm for array sizes from 4k to 10k, and only marginally faster (about 1x) when the array size is less than 3k. All in all, quicksort beats merge sort in all the cases." [5]

4. Conclusion
Selection sort in its modified form may perform better than even quick sort, but that depends heavily on the number of distinct values being very small, as the strength of the algorithm appears where there is more redundancy and repetition. In that case the modified selection sort may have a complexity of O(n), which is ideal for relatively large data, compared to the quick sort algorithm, whose best-case complexity of O(n log n) is approached by choosing the pivot as a median to reduce the comparison iterations.

5. Summary

Space complexity: Selection sort & ISSA = O(1); Quick sort = O(log n).
Best-case time complexity: Selection sort & ISSA = O(n); Quick sort = O(n log n).
6. References:

[1] Improved Selection Sort Algorithm Paper.


[2] Improved Selection Sort Algorithm Paper.
[3] Improved Selection Sort Algorithm Paper.
[4] https://en.ppt-online.org/77315 .
[5] Quick sort Algorithm Paper.
