
Algorithms Design & Analysis

An Introduction to Algorithms
• Definition
• Algorithm Characteristics
• Algorithm Properties

Prof. Oqeili Saleh Lectures


An Introduction to Algorithms
Prof. Oqeili Saleh Lectures

 The word “Algorithm” comes from the name of the Persian author Abu Jaafar Mohammed Ibn Mousa al-Khwarizmi (c. 825 A.D.), who wrote a textbook on mathematics.

• The word Algorithm means “a process or set of rules to be followed in calculations or other problem-solving operations”. An algorithm therefore refers to a set of rules/instructions that define, step by step, how a task is to be executed in order to get the expected results.

• An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independent of underlying languages, i.e., an algorithm can be implemented in more than one programming language.

• An algorithm is a well-ordered collection of unambiguous and effectively computable operations that, when executed, produces a result and halts in a finite amount of time.
Why Do We Study Algorithms?

• Easier to solve and code the problem
The programming process is a complicated one. You must first understand the program specifications, of course, and then you need to organize your thoughts and create the program. This is a difficult task when the program is not trivial (i.e. easy). You must break the main tasks that must be accomplished into smaller ones in order to be able to eventually write fully developed code. Writing an algorithm WILL save you time later during the construction and testing phase of a program's development.
• To be familiar with different strategies and approaches for solving different problems.
• To find out which one of the known algorithms solves the problem in an efficient manner.
Algorithm Characteristics

• Correctness
• Clear and Unambiguous: The algorithm should be clear and unambiguous. Each of its steps should be clear in all aspects and must lead to only one meaning.

• Well-Defined Inputs: Number, Meaning, Type.

• Well-Defined Outputs: Number, Meaning, Type.

• Finite-ness: The algorithm must be finite, i.e. it should not end up in an infinite loop or similar.
o It should solve the problem in a finite (and reasonable) time.

• Feasible: The algorithm must be simple, generic and practical, such that it can be executed with the available resources. It must not rely on some future technology.

• Language Independent: The algorithm designed must be language-independent, i.e. it must consist of plain instructions that can be implemented in any language, with the same expected output.
Characteristics of an Algorithm

• Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or
phases), and their inputs/outputs should be clear and must lead to only one meaning.

• Input − An algorithm should have 0 or more well-defined inputs.

• Output − An algorithm should have 1 or more well-defined outputs and should match
the desired output.

• Finiteness − Algorithms must terminate after a finite number of steps.

• Feasibility − Should be feasible with the available resources.

• Independent − An algorithm should have step-by-step directions, which should be independent of any programming code.
Algorithm Properties

• Algorithms are well-ordered.


Since an algorithm is a collection of operations or
instructions, we must know the correct order in which to
execute the instructions. If the order is unclear, we may
perform the wrong instruction or we may be uncertain
which instruction should be performed next. This
characteristic is especially important for computers. A
computer can only execute an algorithm if it knows the
exact order of steps to perform.
Algorithm Properties

• Algorithms have unambiguous operations.

Each operation in an algorithm must be sufficiently clear so that it does not need to be simplified further. The basic operations used for writing algorithms are known as primitive operations, or primitives. When an algorithm is written in computer primitives, the algorithm is unambiguous and the computer can execute it.
Algorithm Properties
• Algorithms have effectively computable operations.

Each operation in an algorithm must be doable, that is, the operation must be something that is possible to do.

For computers, many mathematical operations, such as division by zero or finding the square root of a negative number, are impossible. These operations are not effectively computable, so they cannot be used in writing algorithms.
Algorithm Properties
• Algorithms Produce a Result.

The result of the algorithm must be produced after the execution of a finite number of operations.

– Can the user of the algorithm observe a result produced by the algorithm?

– A result can be a sign, a sound, an alarm, a number, an error message, etc.
Algorithm Properties
• It halts in a finite amount of time.
– Infinite loop
o The algorithm has no provisions to terminate
o A common error in the designing of algorithms
o Do not confuse "not finite" with "very, very large".
Notation of an Algorithm
Algorithms Design & Analysis

Algorithm Phases

Prof. Oqeili Saleh Lectures


Algorithm Phases
Design Phase
• The first stage is to identify the problem and fully understand it.

• Consult with people interested in similar problems.
Algorithm Phases
Analysis Phase
• Analysis of an algorithm is the theoretical study of computer program performance and resource usage.

• Many solution algorithms can be derived for a given problem.

The next step is to analyze the proposed solution algorithms and implement the best suitable solution.
Algorithm Phases

• Implement
Writing and coding the algorithm
Algorithm Phases

• Experiment
Experiment with different variables

• The design and analysis of algorithms is a circular process.

Algorithm Phases
• The qualities of a good algorithm:
 Time
 Memory
 Accuracy
 Sequence
 Generality
Operations in an Algorithm

• Sequential operations.
A sequential operation carries out a single well-defined task. When
that task is finished, the algorithm moves on to the next operation.
• Conditional operations.
A conditional operation is the “question-asking” instruction of an algorithm. It asks a question and then selects the next operation to be executed according to the answer.

• Iterative operations.
An iterative operation is a “looping” instruction of an algorithm. It tells us not to go on to the next instruction but, instead, to go back and repeat the execution of a previous block of instructions.
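All three kinds of operations can be seen together in a short Python sketch (a hypothetical example, not from the slides): summing the even numbers in a list.

```python
# A hypothetical example combining all three operation kinds:
# sum the even numbers in a list.
def sum_of_evens(numbers):
    total = 0                 # sequential operation: one well-defined task
    for n in numbers:         # iterative operation: repeat a block
        if n % 2 == 0:        # conditional operation: choose what to do next
            total += n        # sequential operation inside the loop
    return total

print(sum_of_evens([1, 2, 3, 4, 5, 6]))  # 2 + 4 + 6 = 12
```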
Algorithm Analysis and Design
Method for Developing an Algorithm

1. Define the problem: State the problem you are trying to solve in clear and concise terms.
2. List the inputs (information needed to solve the
problem) and the outputs (what the algorithm will
produce as a result)
3. Describe the steps needed to convert or
manipulate the inputs to produce the outputs.
Start at a high level first, and keep refining the steps
until they are effectively computable operations.
4. Test the algorithm: choose data sets and verify that
your algorithm works!
In order to write an algorithm, the following things are needed as a pre-requisite:

• The problem that is to be solved by this algorithm.
• The constraints of the problem that must be considered while solving the problem.
• The input to be taken to solve the problem.
• The output to be expected when the problem is solved.
• The solution to this problem, within the given constraints.

The algorithm is then written with the help of the above parameters such that it solves the problem.
Algorithms Design & Analysis

Representing Algorithms

Prof. Oqeili Saleh Lectures


Representing Algorithms
• How to express algorithms in a clear, precise, and unambiguous manner?

• How to represent algorithms?

• In Terms of Natural Language
• In Terms of Formal Programming Language
• In Terms of Pseudocode
• Flowcharts
(1) In Terms of Natural Language

Advantages:
• Familiar

Disadvantages:
• Verbose
• Imprecise -> ambiguity
• Rarely used for complex or technical algorithms
(2) In Terms of a Formal Programming Language

Advantages:
• Precise
• Unambiguous
• Will ultimately program in these languages

Disadvantages:
• Can be too low-level for algorithm design.
• It has syntactic details which are not important at the algorithm design phase.
• Not familiar to a person who is not interested in this programming language.
(3) In Terms of Pseudocode

Example (find the maximum value in a list):
1. Get array list and its size n
2. Assign max = list[1]
3. For i = 2 to n Do
   1. IF (list[i] > max) THEN
      1. max = list[i]
   2. End If
4. End For
5. Display max

Advantages:
• A middle-ground compromise.
• Resembles many popular programming languages.
• Relatively free of grammatical rules.
• Only well-defined statements are included: “a programming language without details”.

Disadvantages:
• Compared with a flowchart, it is more difficult to understand the program logic.
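The find-maximum pseudocode above maps almost line for line onto Python (a sketch; the pseudocode's 1-based indices become 0-based here, and find_max is our name):

```python
def find_max(lst):
    # Assign max = first element, then scan the rest (0-based indexing)
    maximum = lst[0]
    for i in range(1, len(lst)):
        if lst[i] > maximum:   # IF (list[i] > max) THEN
            maximum = lst[i]
    return maximum             # Display max

print(find_max([8, 2, 7, 17, 12, 54, 21]))  # 54
```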
What is Pseudocode?

• Consists of natural language-like statements that precisely describe the steps of an algorithm or program.

• Statements describe actions.

• Focuses on the logic of the algorithm or program; avoids language-specific elements.

• Written at a level so that the desired programming code can be generated almost automatically from each statement.

• Steps are numbered. Subordinate numbers and/or indentation are used for dependent statements in selection and repetition structures.
Pseudocode Language Constructs
Pseudocode Example
• Express an algorithm to get two numbers from the
user (dividend and divisor), testing to make sure
that the divisor number is not zero, and displaying
their quotient using pseudocode.

Solution
1. Get dividend, divisor
2. IF divisor = 0 THEN
1) Display error message “divisor must be non-zero”
2) Exit algorithm
3. Compute quotient = dividend/divisor
4. Display dividend, divisor, quotient
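The same algorithm can be sketched directly in Python (a minimal translation; the name safe_divide is ours, not from the slides):

```python
def safe_divide(dividend, divisor):
    # Step 2: test that the divisor is non-zero before dividing
    if divisor == 0:
        print("divisor must be non-zero")
        return None            # exit the algorithm
    # Step 3: compute the quotient
    quotient = dividend / divisor
    # Step 4: display dividend, divisor, quotient
    print(dividend, divisor, quotient)
    return quotient

safe_divide(10, 4)   # prints: 10 4 2.5
safe_divide(1, 0)    # prints: divisor must be non-zero
```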
(4) Flowcharts
 A flowchart is a graphical representation of an algorithm. Once the
flowchart is drawn, it becomes easy to write the program in any high-level
language.
Flowcharts

• Benefits of Flowcharts:
  Makes logic clear
  Effective analysis
  Easy to code and test

• Limits of Flowcharts:
  Difficult to use for large programs
  Difficult to modify
Flowchart Constructs: Sequence, Decision
Flowchart Constructs: Loop
Flowchart Constructs: Switch Case
Flowchart Constructs

for(i = 2; i <= 6; i = i + 2) {
    printf("%d\t", i + 1);
}

• Start of a repeated block: a for statement or similar. The end of the loop body connects back to the loop symbol. How many times the loop executes depends on what is written inside the loop symbol.
Flowchart Constructs
• Loops are often drawn using the decision symbol. In some diagrams, you may encounter a notation using the Preparation symbol or the Loop-limit symbol.
Other Flowchart symbols
Example 1: Calculate Profit and Loss
Example 2: Draw a flowchart to find the largest of three numbers A, B, and C.
Example 3: Find the average of the first n natural numbers.
Example 4: Draw a flowchart to find the sum 1+3+5+7+... up to N terms.
Example 5: Find all the roots of a quadratic equation ax² + bx + c = 0
(Flowchart logic:)
Start
Read a, b, c
D = b² – 4ac
IF D ≥ 0 THEN
    X1 = (-b + √D)/(2a)
    X2 = (-b – √D)/(2a)
ELSE (complex roots)
    r = -b/(2a)
    c = √(-D)/(2a)
    X1 = r + ci
    X2 = r – ci
Display X1, X2
End


Example 6: Find the Fibonacci series till term ≤ 1000
Algorithm to create a Fibonacci series:

Step 1: Start the program
Step 2: Initialize the variables i = 0, j = 1, k = 0 and fib = 0
Step 3: If fib ≤ N, where N = 1000
Step 4: Then output fib as the next Fibonacci term
Step 5: Perform the calculations fib = j + k
        j = k
        k = fib
        i = i + 1
Step 6: The loop executes continuously until the condition in Step 3 fails
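The step listing above can be turned into a runnable Python sketch (fibonacci_upto is our name; the variable roles j, k and fib follow the slide):

```python
# Fibonacci series with terms <= 1000, following the steps above
def fibonacci_upto(limit):
    terms = []
    j, k, fib = 1, 0, 0
    while fib <= limit:       # Step 3: loop while the term is within N
        terms.append(fib)     # Step 4: output the current term
        fib = j + k           # Step 5: compute the next term
        j = k
        k = fib
    return terms

print(fibonacci_upto(1000))
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
```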
Algorithms Design & Analysis

Efficiency of Algorithms

Prof. Oqeili Saleh Lectures


Space Efficiency
For large quantities of data space/memory used should be
analyzed.

The major components of space are:

 instruction space
 data space
 run-time stack space
Analysis of Time Complexity
Running time depends upon:
• Compiler used
• Read/write speed of memory and disk
• Machine architecture: 32-bit vs 64-bit
• Input size (rate of growth of time)

When analyzing for time complexity we can take two approaches:

1. Estimation of running time – Random Access Machine (hypothetical Model Machine )


By analysis of the code we can do:
a. Operation counts - select operation(s) that are executed most frequently and determine
how many times each is done.
b. Step counts - determine the total number of steps, possibly lines of code, executed by
the program.

2. Order of magnitude/asymptotic categorization - gives a general idea of performance.


Random-Access Machine (RAM).
• Memory consists of an infinite array of cells.
• Each cell can store at most one data item (bit, byte, a record, ..).
• Any memory cell can be accessed in unit time.
• Instructions are executed sequentially.
• All basic instructions take unit time:
• Load/Store
• Arithmetic (e.g. +, −, ∗, /)
• Logic (e.g. >)
• Running time of an algorithm is the number of RAM instructions it
executes

RAM Model
Single Processor
Sequential Execution
1 Time unit / Operation:
(assignment, +, - , *, / , logical operations)

list_sum(A, n)
{
                           Cost    # of times
  sum = 0                   1        1
  for i = 0 to n-1          2        n+1
    sum = sum + A(i)        2        n
  return sum                1        1
}
T(list_sum) = 1 + 2(n+1) + 2n + 1 = 4n + 4
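The 4n + 4 count can be checked by instrumenting the same loop in Python (a sketch; the ops counter and the per-line costs are our encoding of the unit-cost model above):

```python
def list_sum_with_cost(A):
    ops = 0
    s = 0;  ops += 1        # sum = 0               : cost 1, once
    for x in A:
        ops += 2            # loop test + increment : cost 2, n times
        s += x; ops += 2    # add + assign          : cost 2, n times
    ops += 2                # final (failing) loop test
    ops += 1                # return sum            : cost 1, once
    return s, ops

total, ops = list_sum_with_cost([5, 1, 2, 3])
print(total, ops)           # n = 4, so ops = 4*4 + 4 = 20
```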
The RAM model is not realistic:

• Memory is finite (even though we often imagine it to be infinite when we program).

• Not all memory accesses take the same time (cache, main memory, disk).

• Not all arithmetic operations take the same time (e.g. multiplications are expensive).
Time Efficiency – Asymptotic Notation
In asymptotic analysis, the performance of an algorithm is evaluated in terms of input size (we don't measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.

3 cases should be considered:
1. Worst case
2. Average case
3. Best case

Usually, the worst case is considered, since it gives an upper bound on the performance that can be expected.

The best case is rarely considered.

The average case is usually harder to determine - it is more data dependent - and is often the same as the worst case.
Big-O Notation

Given f(n) and g(n), functions defined for positive integers,

f(n) = O(g(n))

if there exists a constant c (c ≥ 1) such that
    f(n) ≤ c·g(n)
for all sufficiently large positive integers n (n ≥ n0).

Big-O notation defines an upper bound of an algorithm; it bounds a function only from above.
Example 1:  f(n) = 4n + 12 = O(n)
Example 2:  f(n) = 3n² + 3n + 10 = O(n²)

Function    Big-O        Name
1           O(1)         constant
log n       O(log n)     logarithmic
n           O(n)         linear
n log n     O(n log n)   n log n
n²          O(n²)        quadratic
n³          O(n³)        cubic
2ⁿ          O(2ⁿ)        exponential
n!          O(n!)        factorial
Steps for finding Big-O runtime:

1. Figure out what the input is and what n represents.
2. Express the maximum number of operations the algorithm performs in terms of n.
3. Eliminate all but the highest-order terms.
4. Remove all the constant factors.
Ω Notation
Big O notation provides an asymptotic upper bound on a function;
Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we want a lower bound on the time complexity of an algorithm.

Ω(g(n)) = {f(n): there exist positive constants c and n0 such that
           0 <= c*g(n) <= f(n) for all n >= n0}.

Since the best-case performance of an algorithm is generally not useful, the Omega notation is the least used notation among all three.
Θ Notation

Theta notation bounds a function from above and below, so it defines exact asymptotic behavior.

Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that:
           0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}

The above definition means: if f(n) is theta of g(n), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0).

A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading constants.

2n³ + 7n² + 2000 = Θ(n³)

Dropping lower-order terms is always OK, since there will always be an n0 after which the n³ term dominates the n² term, regardless of the constants.
Function    Big-O        Name
1           O(1)         constant
log n       O(log n)     logarithmic
n^(1/2)     O(n^(1/2))   square root
n           O(n)         linear
n log n     O(n log n)   n log n
n²          O(n²)        quadratic
n³          O(n³)        cubic
2ⁿ          O(2ⁿ)        exponential
n^c         O(n^c)       polynomial
n!          O(n!)        factorial
Example 1

for (i = 0; i < n; i++)
{
    Block of statements;
}
The loop runs n times, so the complexity is O(n).

Example 2

for (i = n; i > 0; i--)
{
    Block of statements;
}
The loop also runs n times: O(n).
Examples 3, 4: The time complexity of nested loops is equal to the number of times the innermost statement is executed.

Example 3:
for (int i = 1; i <= n; i += c)
{
    for (int j = 1; j <= n; j += c)
    {
        // statements;
    }
}

Example 4:
for (int i = n; i > 0; i -= c)
{
    for (int j = i+1; j <= n; j += c)
    {
        // statements;
    }
}

In both, the innermost statement runs O(n²) times (for constant c).
Example 5

for (i = 0; i*i < n; i++)
{
    Block of statements;
}
The loop runs while i² < n and stops once i² ≥ n, i.e. at i = n^(1/2), so the complexity is O(√n).

Example 6

for (i = 1; i < n; i = i*2)
{
    Block of statements;
}
The loop stops when 2^k ≥ n, i.e. after k = log₂ n iterations, so the complexity is O(log n).
Example 7

for (i = 0; i < n; i++)
{
    for (j = 1; j < n; j = j*3)
        Block of statements;
}

O(n log₃ n)
Example 8

int count = 0;
for (int i = n; i > 0; i /= 2)
    for (int j = 0; j < i; j++)
        count++;

How many times will count++ run?
When i = n, it runs n times.
When i = n/2, it runs n/2 times.
When i = n/4, it runs n/4 times, and so on.

The total number of times count++ runs is
n + n/2 + n/4 + ... + 1 ≈ 2n, so the complexity is O(n).
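The 2n bound can be checked empirically with a direct Python translation of Example 8 (count_ops is our name):

```python
# Count how many times count++ runs in Example 8
def count_ops(n):
    count = 0
    i = n
    while i > 0:
        for j in range(i):   # inner loop runs i times
            count += 1
        i //= 2              # i is halved each outer iteration
    return count

for n in [16, 64, 256]:
    print(n, count_ops(n))   # always below 2*n
```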
Searching Algorithms

Prof. Oqeili Saleh Lectures


Searching Algorithms

There are three popular algorithms available:

• Linear Search
• Binary Search
• Jump Search

In linear search, we search for an element or value in a given array by traversing the array from the start, till the desired element or value is found.

Binary search is useful when there is a large number of elements in an array and they are sorted.

Jump search can be implemented by skipping some fixed number of array elements, or jumping ahead by a fixed number of steps, in every iteration.
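The slides give no code for jump search, so here is a minimal sketch (the √n block size is the conventional choice, and jump_search is our name; the input array must be sorted):

```python
import math

def jump_search(arr, x):
    n = len(arr)
    step = max(1, int(math.sqrt(n)))   # fixed number of elements to skip
    prev = 0
    # Jump block by block until the current block's last element
    # is >= x; that block is the only one that can contain x.
    while prev < n and arr[min(prev + step, n) - 1] < x:
        prev += step
    # Linear search inside the identified block
    for i in range(prev, min(prev + step, n)):
        if arr[i] == x:
            return i
    return -1

arr = [1, 2, 7, 9, 11, 50]
print(jump_search(arr, 9))   # 3
```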
Python Program for Linear Search

list1 = [8, 2, 7, 17, 12, 54, 21, 64, 12, 32]
print('List has the items: ', list1)
searchItem = int(input('Enter a number to search for: '))
found = False
for i in range(len(list1)):
    if list1[i] == searchItem:
        found = True
        print(searchItem, ' was found in the list at index ', i)
        break
if found == False:
    print(searchItem, ' was not found in the list!')

# The same search, written as a reusable function:
def search(arr, x):
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1
Binary Search

Binary search is a search algorithm that finds the position of a target value within a sorted array.

A binary search begins by comparing the middle element of the array with the target value. If the target value matches the middle element, its position in the array is returned. If the target value is less or greater than the middle element, the search continues in the lower or upper half of the array, respectively, with a new middle element, eliminating the other half from consideration.
# Python3 code to implement iterative Binary Search.
# It returns the location of x in arr if present, else returns -1
def binarySearch(arr, left, right, x):
    while left <= right:
        mid = left + (right - left) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            left = mid + 1
        else:
            right = mid - 1
    # element was not present
    return -1

arr = [1, 2, 7, 9, 11, 50]
x = 9

# Function call
result = binarySearch(arr, 0, len(arr) - 1, x)
if result != -1:
    print("Element is present at index %d" % result)
else:
    print("Element is not present in array")
# Python Program for Binary search using Recursion
def binary(a, fir, las, term):
    if fir > las:                       # base case: empty range
        print("Number is not there in the array")
        return
    mid = (fir + las) // 2
    if term > a[mid]:
        binary(a, mid + 1, las, term)   # search the upper half
    elif term < a[mid]:
        binary(a, fir, mid - 1, term)   # search the lower half
    else:
        print("Number found at", mid + 1)

x = [1, 2, 3, 6, 8, 11, 16, 20, 45]
term = 20
binary(x, 0, len(x) - 1, term)
Sorting Methods
Bubble Sort

Prof. Oqeili Saleh Lectures


Bubble Sort (sinking sort)

• Bubble sort is a simple sorting algorithm which compares the adjacent elements in an array and swaps them if they are in the wrong order.
# Python Program for Bubble Sort
arr = [7, 5, 9, 3, 6, 2, 1]
n = len(arr)
for i in range(n):
    for j in range(0, n - i - 1):
        if arr[j] > arr[j + 1]:
            temp = arr[j + 1]
            arr[j + 1] = arr[j]
            arr[j] = temp
print("Sorted Array : ", arr)

# Output: Sorted Array : [1, 2, 3, 5, 6, 7, 9]
# Time complexity: O(n²)
Sorting Methods
Insertion Sort

Prof. Oqeili Saleh Lectures


Insertion Sort

The basic idea of insertion sort is that one element from the
input elements is consumed in each iteration to find its correct
position i.e., the position to which it belongs in a sorted array.

It iterates over the elements of the array, comparing the current element with the largest value in the sorted part:

• If the current element is greater than the largest value in the sorted part, it leaves the element in its place and moves on to the next element.

Else

• It finds the element's correct position in the sorted part and moves it there. This is done by shifting all the elements in the sorted part that are larger than the current element one position ahead.
Insertion Sort Steps

• The first step involves the comparison of the element in question with its adjacent element.

• If the element in question can be inserted at a particular position, then space is created for it by shifting the other elements one position to the right and inserting the element at the suitable position.

• The above procedure is repeated until all the elements in the array are sorted.
Insertion Sort
Pseudocode

INSERTION-SORT(A)
for i = 1 to n
key ← A [i]
j←i–1
while j > = 0 and A[j] > key
A[j+1] ← A[j]
j←j–1
End while
A[j+1] ← key
End for
# Insertion Sort - Python
def insertionSort(arr):
    # Traverse through 1 to len(arr)
    for i in range(1, len(arr)):
        key = arr[i]
        # Shift array elements greater than the key to the right.
        # These elements are arr[0..i-1].
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

# Complexity of Insertion Sort: O(n²) in the worst case

# test the program
arr = [5, 6, 1, 9, 3, 4]
insertionSort(arr)
for i in range(len(arr)):
    print("%d" % arr[i])
Sorting Algorithms
Merge Sort

Prof. Oqeili Saleh Lectures


Merge Sort

• In Merge Sort, the given unsorted array with n elements, is


divided into n subarrays, each having one element, because a
single element is always sorted in itself. Then, it repeatedly
merges these subarrays, to produce new sorted subarrays, and
in the end, one complete sorted array is produced.
Merge Sort
• Merge Sort follows the rule of Divide and Conquer to sort a given set of
numbers/elements, recursively, hence consuming less time.
Divide and Conquer
If we can break a single big problem into smaller sub-problems, solve the smaller
sub-problems and combine their solutions to find the solution for the original big
problem, it becomes easier to solve the whole problem.

• The concept of Divide and Conquer involves three steps:

1. Divide the problem into multiple small problems.


2. Conquer the subproblems by solving them. The idea is to break down the
problem into atomic subproblems, where they are actually solved.
3. Combine the solutions of the subproblems to find the solution of the actual
problem.
Example
Arr = [13, 6, 3, 11, 9, 2, 4, 5, 8]
How Merge Sort Works?
• merge sort utilizes divide-and-conquer rule to break the problem into sub-problems,
the problem in this case being, sorting a given array.

• In merge sort, we break the given array midway, for example if the original array had
100 elements, then merge sort will break it down into two subarrays with 50 elements
each.
 These subarrays are repeatedly broken into smaller subarrays, until
we have multiple subarrays with single element in them.

• An array with a single element is already sorted, so once we break the original array
into subarrays which has only a single element, we have successfully broken down
our problem into base problems.

• Next, all these sorted subarrays should be merged - step by step to form one single
sorted array.
Algorithm Steps

Given arr.
• left start index, right last index.
• find the middle of the array: mid = (left + right ) / 2.
• break the array into two subarrays:
• ( left to mid ) and (mid + 1 ) to right
• Continue the process of breaking into halves until reaching single
elements.
• Merge the subarrays.

Example
Arr = [13, 6, 3, 11, 9, 2, 4, 5, 8]
Merge Sort

MergeSort(arr[], left, right)


If right > left
1. Find the middle point to divide the array into two halves:
middle m = (left +right)/2
2. Call mergeSort for first half:
Call mergeSort(arr, left, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, right)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, left, m, right)
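The pseudocode above calls a separate merge step; an index-based Python sketch of the whole scheme (our translation of the pseudocode, distinct from the slicing version shown later) could look like:

```python
def merge(arr, left, m, right):
    # Merge the sorted halves arr[left..m] and arr[m+1..right]
    L = arr[left:m + 1]
    R = arr[m + 1:right + 1]
    i = j = 0
    k = left
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            arr[k] = L[i]; i += 1
        else:
            arr[k] = R[j]; j += 1
        k += 1
    while i < len(L):              # copy any leftovers from the left half
        arr[k] = L[i]; i += 1; k += 1
    while j < len(R):              # copy any leftovers from the right half
        arr[k] = R[j]; j += 1; k += 1

def merge_sort(arr, left, right):
    if right > left:
        m = (left + right) // 2
        merge_sort(arr, left, m)       # sort first half
        merge_sort(arr, m + 1, right)  # sort second half
        merge(arr, left, m, right)     # merge the two sorted halves

a = [13, 6, 3, 11, 9, 2, 4, 5, 8]
merge_sort(a, 0, len(a) - 1)
print(a)  # [2, 3, 4, 5, 6, 8, 9, 11, 13]
```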
Arr = [13, 6, 3, 11, 9, 2, 4, 5, 8]
Complexity Analysis of Merge Sort

• Merge Sort is quite fast and has a time complexity of O(n*log n). It is also a
stable sort, which means the "equal" elements are ordered in the same order
in the sorted list.

Whenever we divide a number in half at every step, it can be represented using a logarithmic function, log n.

• To merge the subarrays made by dividing the original array of n elements, a running time of O(n) will be required.

Worst Case Time Complexity [Big-O]: O(n*log n)


# Merge Sort - Python
def mergeSort(arr):
    print("Splitting ", arr)
    if len(arr) > 1:
        mid = len(arr) // 2
        left = arr[:mid]
        right = arr[mid:]

        mergeSort(left)
        mergeSort(right)

        i = j = k = 0
        while i < len(left) and j < len(right):
            if left[i] < right[j]:
                arr[k] = left[i]
                i = i + 1
            else:
                arr[k] = right[j]
                j = j + 1
            k = k + 1

        while i < len(left):
            arr[k] = left[i]
            i = i + 1
            k = k + 1

        while j < len(right):
            arr[k] = right[j]
            j = j + 1
            k = k + 1
    print("Merging ", arr)

arr = [8, 45, 13, 20, 55, 21, 16, 21, 9]
mergeSort(arr)
print(arr)
Sorting Algorithms
Quick Sort

Prof. Oqeili Saleh Lectures


Quick Sort is also called partition-exchange sort. This algorithm
divides the list into three main parts:
• Elements less than the Pivot element
• Pivot element(Central element)
• Elements greater than the pivot element

• Pivot element can be any element from the array, it can be the first
element, the last element or any random element.
For example:

In the array {49, 36, 63, 16, 17, 8, 6, 24}, Pivot = 24.
So after the first pass, the list will be changed like this:
{6 8 17 16 24 63 36 49}
Hence after the first pass, the pivot is set at its position, with all the elements smaller than the pivot on its left and all the elements larger than the pivot on its right.
Now {6 8 17 16} and {63 36 49} are considered as two separate subarrays, the same recursive logic is applied to them, and we keep doing this until the complete array is sorted.
Quick Sort Functions

def partition(arr, low, high):
    i = (low - 1)           # index of smaller element
    pivot = arr[high]
    for j in range(low, high):
        if arr[j] < pivot:  # If current element < pivot
            i = i + 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return (i + 1)

# QuickSort main function
def quickSort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quickSort(arr, low, pi - 1)
        quickSort(arr, pi + 1, high)

# After partition: [elements < pivot] PIVOT [elements > pivot]
Partition trace for Arr = [3, 9, 5, 14, 15, 8, 6, 2, 4, 7], low = 0, high = 9, pivot = arr[9] = 7:
i starts at -1; each element smaller than the pivot is swapped forward:
[3, 9, 5, 14, 15, 8, 6, 2, 4, 7]   (j = 0: 3 < 7, i = 0)
[3, 5, 9, 14, 15, 8, 6, 2, 4, 7]   (j = 2: 5 < 7, i = 1)
[3, 5, 6, 14, 15, 8, 9, 2, 4, 7]   (j = 6: 6 < 7, i = 2)
[3, 5, 6, 2, 15, 8, 9, 14, 4, 7]   (j = 7: 2 < 7, i = 3)
[3, 5, 6, 2, 4, 8, 9, 14, 15, 7]   (j = 8: 4 < 7, i = 4)
After the final swap of arr[i+1] with the pivot:
[3, 5, 6, 2, 4, 7, 9, 14, 15, 8]
Index 5 (the pivot's final position) is returned to the calling statement.
Complexity Analysis of Quick Sort
• If partitioning leads to almost equal subarrays, then the running time is the best, with time complexity O(n*log n).

• If partitioning leads to unbalanced subarrays, then the running time is the worst case, which is O(n²).

To avoid this, the pivot can be picked at random.

Worst Case Time Complexity [Big-O]: O(n²)
Best Case Time Complexity [Big-omega]: Ω(n*log n)
Average Time Complexity [Big-theta]: Θ(n*log n)

• Quick sort is not a stable sorting technique, so it might change the relative order of two equal elements in the list while sorting.
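Picking the pivot at random, as suggested above, only needs one extra swap before the standard last-element partition (a sketch; the names rand_partition and rand_quicksort are ours):

```python
import random

def rand_partition(arr, low, high):
    # Swap a randomly chosen element into the last position, then
    # run the usual last-element partition.
    r = random.randint(low, high)
    arr[r], arr[high] = arr[high], arr[r]
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def rand_quicksort(arr, low, high):
    if low < high:
        pi = rand_partition(arr, low, high)
        rand_quicksort(arr, low, pi - 1)
        rand_quicksort(arr, pi + 1, high)

a = [3, 7, 5, 8, 9, 1, 2, 11, 6]
rand_quicksort(a, 0, len(a) - 1)
print(a)  # [1, 2, 3, 5, 6, 7, 8, 9, 11]
```

The random swap makes the expected running time O(n log n) on every input, because no fixed input can force the worst-case splits.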
# QuickSort - Python program, pivot = the last element
def partition(arr, low, high):
    i = (low - 1)           # index of smaller element
    pivot = arr[high]
    for j in range(low, high):
        if arr[j] < pivot:  # If current element < pivot
            i = i + 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return (i + 1)

# QuickSort main function
def quickSort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quickSort(arr, low, pi - 1)
        quickSort(arr, pi + 1, high)

# Driver code for testing
arr = [3, 7, 5, 8, 9, 1, 2, 11, 6]
n = len(arr)
quickSort(arr, 0, n - 1)
print("Sorted array is:", arr)
Quick Sort vs Merge Sort
Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array around the picked pivot. There are many different versions of quickSort that pick the pivot in different ways:
• Always pick the first element as pivot.
• Always pick the last element as pivot (implemented above).
• Pick a random element as pivot.
• Pick the median as pivot.

                      Merge Sort        Quick Sort
Paradigm              Both Divide and Conquer
In-place              Not in-place      In-place
Space Complexity      O(n)              O(log n)
Worst-case Time       O(n log n)        O(n²)
Average-case Time     O(n log n)        O(n log n)

Counting Sort

Prof. Oqeili Saleh Lectures


Counting Sort
Counting sort is a sorting technique based on keys between a specific range. It works by
counting the number of objects having distinct key values (kind of hashing). Then doing
some arithmetic to calculate the position of each object in the output sequence
For simplicity, consider the data in the range 0 to 9.
Input data: 1, 4, 1, 2, 7, 5, 2
1) Take a count array to store the count of each unique object.
Index: 0 1 2 3 4 5 6 7 8 9
Count: 0 2 2 0 1 1 0 1 0 0
2) Modify the count array such that each element at each index stores the sum of previous counts.
Index: 0 1 2 3 4 5 6 7 8 9
Count: 0 2 4 4 5 6 6 7 7 7
The modified count array indicates the position of each object in the output sequence.
3) Output each object from the input sequence followed by decreasing its count by 1.
Process the input data: 1, 4, 1, 2, 7, 5, 2. Position of 1 is 2. Put data 1 at index 2 in output. Decrease the count by 1 to place the next data 1 at an index 1 smaller than this index.
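Steps 1) and 2) above can be reproduced in a few lines of Python (an illustrative sketch using the same input data):

```python
data = [1, 4, 1, 2, 7, 5, 2]
count = [0] * 10              # keys assumed to lie in the range 0..9

for x in data:                # step 1: count occurrences of each key
    count[x] += 1
print(count)                  # frequencies: [0, 2, 2, 0, 1, 1, 0, 1, 0, 0]

for i in range(1, 10):        # step 2: turn counts into running (prefix) sums
    count[i] += count[i - 1]
print(count)                  # positions:   [0, 2, 4, 4, 5, 6, 6, 7, 7, 7]
```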
Counting Sort
Counting sort is a sorting technique based on keys within a specific range. It is based on the frequency/count of each element to be sorted and works using the following steps:
Input: Unsorted array A[] of n elements in the range 0 to k (n and k are positive integers)

Step 1: The count/frequency of each distinct element in A is computed and stored in another array, say count, of size k+1. Let x be an element in A such that its frequency is stored at count[x].

Step 2: Update the count array so that the element at each index, say i, is the running sum of the frequencies:
Count[i] = count[0] + count[1] + … + count[i]

The updated count array gives the index of each element of array A in the sorted sequence. Assume that the sorted sequence is stored in an output array, say B, of size n.
Step 3: Add each element from input array A to B as follows:
a. Set i = 0 and c = A[i]
b. Insert c into B[x]; x = count[c] – 1
c. Decrement count[c] by 1
d. Increment i by 1
Repeat steps (a) to (d) till i = n-1

Step 4: Print array B
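Steps 1–4 can be combined into a self-contained function (an illustrative sketch; note that the forward scan in Step 3 places equal keys from right to left, so this version is not stable — scanning A in reverse would make it stable):

```python
def counting_sort(A, k):
    count = [0] * (k + 1)
    for c in A:                      # Step 1: frequency of each key
        count[c] += 1
    for i in range(1, k + 1):        # Step 2: running sums give end positions
        count[i] += count[i - 1]
    B = [0] * len(A)
    for c in A:                      # Step 3: place c at B[count[c] - 1], then decrement
        count[c] -= 1
        B[count[c]] = c
    return B                         # Step 4: B is the sorted sequence

print(counting_sort([1, 7, 2, 3, 2, 2, 1, 4, 7, 5, 3, 9, 4], 9))
```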
Example
Input array: 1, 7, 2, 3, 2, 2, 1, 4, 7, 5, 3, 9, 4
1) Count the occurrences of each value:
Index: 0 1 2 3 4 5 6 7 8 9
Count: 0 2 3 2 2 1 0 2 0 1
2) Modify the count array such that each element at each index stores the sum of previous counts.
Index: 0 1 2 3 4 5 6 7 8 9
Count: 0 2 3 2 2 1 0 2 0 1 (original count)
Count: 0 2 5 7 9 10 10 12 12 13 (modified count)
The modified count array indicates the position of each object in the output array.
3) Output each object from the input sequence followed by decreasing its count by 1.
Process the input array: 1, 7, 2, 3, 2, 2, 1, 4, 7, 5, 3, 9, 4.
Position of 1 is 2. Put data 1 at index 2 in output. Decrease the count by 1 to place the next data 1 at an index 1 smaller than this index.

Input: 1, 7, 2, 3, 2, 2, 1, 4, 7, 5, 3, 9, 4
Count: 0 2 5 7 9 10 10 12 12 13 (positions of input array elements in the sorted array)
Sorted array: 1 1 2 2 2 3 3 4 4 5 7 7 9
Counting Sort – Implementation 2
• We need just the count array.
• No need to scan the input array again.
• The sorted array is generated directly from the count/frequency of each element, by expanding the occurrences of each element.
Index: 0 1 2 3 4 5 6 7 8 9
Count: 0 2 3 2 2 1 0 2 0 1

Input array: 1, 7, 2, 3, 2, 2, 1, 4, 7, 5, 3, 9, 4
Sorted array: 1 1 2 2 2 3 3 4 4 5 7 7 9
# Python Program – counting sort (implementation 2)
def count_sort(array, k):
    m = k + 1
    count = [0] * m
    for x in array:
        # count occurrences of each value
        count[x] += 1
    i = 0
    for x in range(m):
        for j in range(count[x]):
            array[i] = x        # expand each value count[x] times
            i += 1
    return array

print(count_sort([1, 2, 5, 3, 2, 1, 4, 8, 2, 3, 2, 1], 8))
Time Complexity: O(n+k), where n is the number of elements in the input array and k is the range of the input.

Auxiliary Space: O(n+k)

Disadvantages of counting sort:
• Array elements must be non-negative integers.
• If the array size (n) is small and the largest value (k) is large, the count array dominates and the algorithm becomes inefficient in both time and space.
Algorithms Design & Analysis
Sorting Algorithms – Odd-Even Sort
Odd-Even Sort (Brick Sort)
• Odd-Even Sort (or brick sort) is a variation of bubble sort. It is a simple sorting algorithm, which was developed for use on parallel processors with local interconnection.
• It works by comparing all odd/even indexed pairs of adjacent elements in the list and, if a pair is in the wrong order, the elements are swapped. The next step repeats this for even/odd indexed pairs. It then alternates between odd/even and even/odd steps until the list is sorted.
• Time Complexity (Worst Case): O(n^2), where n is the number of elements in the input array.
Odd-Even Sort
0 1 2 3 4 5 6 7
8 6 3 1 7 2 4 5
Odd-Even Sort
# Python Program – Odd-Even Sort
def OddEvenSort(arr, n):
    is_sorted = False
    while not is_sorted:
        is_sorted = True
        # compare odd-indexed pairs
        for i in range(1, n-1, 2):
            if arr[i] > arr[i+1]:
                arr[i], arr[i+1] = arr[i+1], arr[i]
                is_sorted = False
        # compare even-indexed pairs
        for i in range(0, n-1, 2):
            if arr[i] > arr[i+1]:
                arr[i], arr[i+1] = arr[i+1], arr[i]
                is_sorted = False
    return arr
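The routine can be traced on the example array from the previous slide with a compact re-implementation that prints the list after every pass (an illustrative sketch, not the slides' code):

```python
def odd_even_sort(arr):
    # Alternate odd-indexed and even-indexed passes until a full
    # sweep makes no swap, at which point the list is sorted.
    is_sorted = False
    while not is_sorted:
        is_sorted = True
        for start in (1, 0):         # odd pairs first, then even pairs
            for i in range(start, len(arr) - 1, 2):
                if arr[i] > arr[i + 1]:
                    arr[i], arr[i + 1] = arr[i + 1], arr[i]
                    is_sorted = False
            print(arr)               # show the list after each pass
    return arr

odd_even_sort([8, 6, 3, 1, 7, 2, 4, 5])
```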
Selection Sort
• Selection sort is an algorithm that selects the smallest element from an unsorted array in each iteration and places that element at the beginning of the unsorted array.
• Selection sort is an in-place comparison-based algorithm in which the list is divided into two parts, the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is empty, and the unsorted part is the entire list.
• The smallest element is selected from the unsorted array and swapped with the leftmost element, and that element becomes a part of the sorted array. This process continues, moving the unsorted array boundary one element to the right.
Algorithm Steps
Step 1 − Set MIN to location 0
Step 2 − Search for the minimum element in the array
Step 3 − Swap with the value at location MIN
Step 4 − Increment MIN to point to the next element
Step 5 − Repeat until the array is sorted
Selection Sort
7 8 3 6 5 9 1
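The selection passes on this array can be traced with a small self-contained sketch (illustrative only) that prints the list after each swap, showing the sorted part grow from the left:

```python
def selection_sort_trace(arr):
    for i in range(len(arr) - 1):
        min_idx = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # move the smallest remaining element to the front of the unsorted part
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
        print(arr)
    return arr

selection_sort_trace([7, 8, 3, 6, 5, 9, 1])
```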
Selection Sort

procedure selectionSort
  list : array of items
  n : size of list
  for i = 1 to n - 1
    /* set current element as minimum */
    min = i
    /* check the element to be minimum */
    for j = i+1 to n
      if list[j] < list[min] then
        min = j
      end if
    end for
    /* swap the minimum element with the current element */
    if min != i then
      swap list[min] and list[i]
    end if
  end for
end procedure

In plain language:

selectionSort(array, size)
  repeat (size - 1) times
    set the first unsorted element as the minimum
    for each of the unsorted elements
      if element < currentMinimum
        set element as new minimum
    swap minimum with first unsorted position
end selectionSort
Selection Sort – Time Complexity: O(n^2)

procedure selectionSort
  arr : array of items
  n : size of the array
  for i = 1 to n - 1
    min = i
    for j = i+1 to n
      if arr[j] < arr[min] then
        min = j
      end if
    end for
    if min != i then
      swap arr[min] and arr[i]
    end if
  end for
end procedure
# Selection sort in Python
def selectionSort(array, size):
    for step in range(size):
        min_idx = step
        for i in range(step + 1, size):
            # select the minimum element in each loop
            # (to sort in descending order, change < to > in the next line)
            if array[i] < array[min_idx]:
                min_idx = i
        # put min at the correct position
        array[step], array[min_idx] = array[min_idx], array[step]

data = [-2, 45, 0, 11, -9]
size = len(data)
selectionSort(data, size)
print('Sorted Array in Ascending Order:')
print(data)