DSA - Module 5

The document provides an overview of various algorithmic techniques, focusing on brute force methods such as selection sort, bubble sort, and sequential search. It also discusses exhaustive search for combinatorial problems like the Traveling Salesman Problem, and introduces divide and conquer strategies with examples like merge sort and binary search. Additionally, it outlines the Master Theorem for analyzing the time complexity of divide and conquer algorithms.

Algorithms

Mrs. Pushpanjali Y Jillanda


Department of MCA, DSATM
Brute Force
Introduction to Brute Force
▪ It is a straightforward approach to solving a problem, usually
directly based on the problem's statement and definitions of the
concepts involved
▪ Ex:
▪ Computing aⁿ
▪ Computing n!
▪ Sequential search
Approaches in Brute force
Technique
▪ Unlike some other strategies, brute force is applicable to a very
wide variety of problems
▪ It is used for many elementary but important algorithmic tasks
such as computing sum of n numbers, finding largest element in
a list and so on
▪ For some important problems the brute force approach yields
reasonable algorithms of at least some practical value, with no
limitation on instance size
Approaches in Brute force
Technique
▪ The expense of designing a more efficient algorithm may be
unjustifiable if only a few instances of the problem need to be
solved and a brute force algorithm can solve those instances
with acceptable speed
▪ Even if too inefficient in general, a brute-force algorithm can
still be useful for solving small-size instances of a problem.
▪ Finally, a brute-force algorithm can serve an important
theoretical or educational purpose, e.g., as a yardstick with
which to judge more efficient alternatives for solving a problem.
Selection Sort
Approach
▪ We start by scanning the entire list to find the smallest element and
exchange it with the first element.
▪ This puts the smallest element in its final position in the sorted list,
i.e., A[0].
▪ Then we scan the list starting with the second element, to find the
smallest among the remaining n-1 elements, and exchange it with the
second element.
▪ This puts the second smallest element in its final position, i.e., A[1].
▪ On the i-th pass, which we number from 0 to n-2, the algorithm
searches for the smallest item among the last n-i elements and swaps
it with A[i]
Algorithm
SelectionSort(A[0….n-1])
// Sorts given array using selection sort.
// Input: An array A[0…..n-1] orderable elements.
// Output: An array A[0…..n-1] sorted in ascending order.
for i←0 to n-2 do
min←i
for j←i+1 to n-1 do
if A[j]<A[min]
min←j
swap A[i] and A[min]
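The pseudocode above translates almost line for line into Python; this is a minimal sketch (the function name is mine):

```python
def selection_sort(a):
    """Sort list a in place using selection sort; returns a for convenience."""
    n = len(a)
    for i in range(n - 1):            # i = 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):     # j = i+1 .. n-1
            if a[j] < a[min_idx]:     # basic operation: key comparison
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```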
Example: Sort 10 5 2 0 4

First iteration (i = 0), scanning j = 1 … 4 over 10 5 2 0 4:
j=1: A[1]=5 < A[0]=10 → min=1
j=2: A[2]=2 < 5 → min=2
j=3: A[3]=0 < 2 → min=3
j=4: A[4]=4 > 0 → min stays 3
Swap A[0] and A[3] → 0 5 2 10 4

Second iteration (i = 1), scanning j = 2 … 4 over 0 5 2 10 4:
j=2: A[2]=2 < A[1]=5 → min=2
j=3: A[3]=10 > 2 → min stays 2
j=4: A[4]=4 > 2 → min stays 2
Swap A[1] and A[2] → 0 2 5 10 4

Third iteration (i = 2), scanning j = 3 … 4 over 0 2 5 10 4:
j=3: A[3]=10 > A[2]=5 → min stays 2
j=4: A[4]=4 < 5 → min=4
Swap A[2] and A[4] → 0 2 4 10 5

Fourth iteration (i = 3), scanning j = 4 over 0 2 4 10 5:
j=4: A[4]=5 < A[3]=10 → min=4
Swap A[3] and A[4] → 0 2 4 5 10

Final Sorted Array: 0 2 4 5 10


Analysis
▪ Input Size: Number of elements n
▪ Basic Operation: Comparison A[j] < A[min]
$$C(n) = \sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} 1 = \sum_{i=0}^{n-2} \big[(n-1)-(i+1)+1\big] = \sum_{i=0}^{n-2} (n-1-i) = \frac{n(n-1)}{2} \in \Theta(n^2)$$
Bubble Sort
Approach
▪ In this technique the two successive items A[j] and A[j+1] are
exchanged whenever A[j] > A[j+1]
▪ By doing this repeatedly we end up "bubbling up" the largest
element to the last position in the list
▪ The next pass bubbles up the second largest element, and so on
until, after n-1 passes, the list is sorted
Algorithm
BubbleSort(A[0…n-1])
// Sorts given array using bubble sort.
// Input: An array A[0…..n-1] orderable elements.
// Output: An array A[0…..n-1] sorted in ascending order.
for i←0 to n-2 do
for j←0 to n-2-i do
if A[j+1]<A[j]
swap A[j+1] and A[j]
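The pseudocode above as a Python sketch (function name mine):

```python
def bubble_sort(a):
    """Sort list a in place using bubble sort; returns a for convenience."""
    n = len(a)
    for i in range(n - 1):             # passes: i = 0 .. n-2
        for j in range(n - 1 - i):     # j = 0 .. n-2-i
            if a[j + 1] < a[j]:        # basic operation: key comparison
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```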
Example: Sort 2 -1 4 1 0

First iteration (i = 0):
j=0: A[0]=2 > A[1]=-1 → swap → -1 2 4 1 0
j=1: A[1]=2 < A[2]=4 → no swap → -1 2 4 1 0
j=2: A[2]=4 > A[3]=1 → swap → -1 2 1 4 0
j=3: A[3]=4 > A[4]=0 → swap → -1 2 1 0 4
After the first pass the largest element, 4, has bubbled up to the last position.
Analysis
▪ The number of key comparisons for bubble sort is the same for
all arrays of size n:

$$C(n) = \sum_{i=0}^{n-2} \sum_{j=0}^{n-2-i} 1 = \sum_{i=0}^{n-2} \big[(n-2-i)-0+1\big] = \sum_{i=0}^{n-2} (n-1-i) = \frac{n(n-1)}{2} \in \Theta(n^2)$$
Sequential Search
Approach
▪ The algorithm simply compares successive elements of a given
list with a given search key until either a match is encountered
(successful search) or the list is exhausted without finding a
match (unsuccessful search).
▪ A simple extra trick is often employed in implementing
sequential search
▪ If we append the search key to the end of the list, the search for
the key will have to be successful, and therefore we can
eliminate a check for the list’s end on each iteration of the
algorithm.
Algorithm
SequentialSearch(A[0…n-1],k)
// Searches the given array using sequential search with a sentinel.
// Input: An array A[0…..n-1] and a key element k to be searched.
// Output: If found, returns the position where the element was found //else returns -1.
A[n]←k
i←0
while A[i] ≠ k do
i←i+1
if i<n
return i
else
return -1
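A Python sketch of the sentinel version (the input list is copied before the sentinel is appended, an implementation choice not in the pseudocode):

```python
def sequential_search(a, k):
    """Return the index of k in a, or -1 if absent (sentinel version)."""
    n = len(a)
    a = a + [k]          # copy with the sentinel appended at position n
    i = 0
    while a[i] != k:     # no bounds check needed: the sentinel stops the loop
        i += 1
    return i if i < n else -1
```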
Exhaustive Search
Approach
▪ It is simply a brute-force approach for combinatorial problems
▪ List all the potential solutions to the problem, with no repetition of
solutions
▪ Evaluate the solutions one by one
▪ Discard infeasible solutions and keep track of the best solution found so
far
▪ When the search ends, the best solution recorded is the optimal solution
Examples
▪ Travelling Salesman Problem (TSP)
▪ Knapsack Problem
▪ Assignment Problem
TSP by Exhaustive Search
▪ Given n cities with known distances between each pair, find the
shortest tour that passes through all the cities exactly once
before returning to the starting city.
▪ Alternatively: find the shortest Hamiltonian circuit in a
weighted connected graph
TSP by Exhaustive Search
▪ In graph theory, a Hamiltonian circuit is a
cycle that visits every vertex in a graph
exactly once and returns to the starting
vertex
▪ Starting and Ending: A Hamiltonian circuit
must begin and end at the same vertex.
▪ Hamiltonian circuits have applications in
areas like transportation and logistics,
where they can be used to optimize
delivery routes.
TSP by Exhaustive Search
▪ It is easy to see that a Hamiltonian circuit can be defined as a
sequence of n+1 adjacent vertices
▪ Here the first vertex of the sequence is the same as the last one, while
the other n-1 vertices are distinct
▪ The figure reveals three pairs of tours that differ only by the
tour's direction
▪ Hence, we can cut the number of vertex permutations by half
▪ This cannot improve the efficiency very much, because the number of
permutations needed will still be (n-1)!/2
▪ This indicates the approach is suitable only when n is small
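A brute-force TSP sketch in Python, assuming a distance-matrix representation (the function name and matrix are mine). Fixing city 0 as the start removes rotations of the same tour; the further halving by direction discussed above is omitted for simplicity:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """dist: n x n distance matrix. Returns (best_cost, best_tour).
    Enumerates all (n-1)! tours that start and end at city 0."""
    n = len(dist)
    best_cost, best_tour = float('inf'), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)                       # Hamiltonian circuit
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:                            # keep best so far
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Example: 4 symmetric cities; the optimal tour 0-1-3-2-0 costs 80
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
```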
String Matching
Approach
▪ Here we are given a string of n characters called the text and a
string of m characters (m ≤ n) called the pattern; the task is to find a
substring of the text that matches the pattern.
Algorithm
BruteForceStringMatching (T[0…n-1],p[0…m-1])
//Implements String matching
//Input: text array T of n characters, and pattern array p of m characters.
//Output: Position of first character of pattern if successful otherwise -1
for i←0 to n-m do
j←0
while j<m and P[j]=T[i+j]
j←j+1
if j=m
return i
return -1
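The same algorithm as a Python sketch (function name mine):

```python
def brute_force_string_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # last feasible alignment starts at n-m
        j = 0
        while j < m and pattern[j] == text[i + j]:
            j += 1
        if j == m:                      # all m characters matched
            return i
    return -1
```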
Analysis
▪ Align the pattern against the first m characters of the text and start
matching the corresponding pairs of characters from left to right
until either all m pairs of characters match (then the
algorithm can stop) or a mismatching pair is encountered.
▪ In the latter case the pattern is shifted one position to the right and
character comparisons are resumed, starting again with the first
character of the pattern and its counterpart in the text.
▪ The alignments go up to position n-m; beyond that position there
are not enough characters left to match the entire pattern.
Tracing of String Matching
▪ Text: “WAIT AND WATCH”
▪ Pattern: “WAT”

▪ j=m=3
▪ return 9
▪ i.e., The starting position of the substring in
the given string
▪ Pattern P is present in the text T starting at
position 9
Analysis
▪ In this example we can see that the algorithm shifts the pattern
almost always after a single character comparison
▪ In the worst case, the algorithm may have to make
all m comparisons before shifting the pattern, and this can
happen for each of the n-m+1 tries.
▪ The worst-case efficiency is therefore Θ(nm)
Analysis
$$C_{worst}(n) = \sum_{i=0}^{n-m} \sum_{j=0}^{m} 1 = \sum_{i=0}^{n-m} (m-0+1) = \sum_{i=0}^{n-m} m + \sum_{i=0}^{n-m} 1$$

$$= m(n-m+1) + (n-m+1) = (m+1)(n-m+1)$$

$$\Rightarrow C_{worst}(n) \in \Theta(nm)$$

In case of searching in random text, the average-case efficiency has been shown to be linear, i.e., $\Theta(n+m) = \Theta(n)$.
Divide and Conquer
Divide and Conquer
▪ Divide-and-Conquer is probably the best-known algorithm design
technique.
▪ It is well suited for parallel computation, in which the subproblems
can be solved simultaneously, each by its own processor.
Divide and Conquer - Approach
▪ Divide instance of problem into two
or more smaller instances
▪ Solve smaller instances recursively
▪ Obtain solution to original (larger)
instance by combining these
solutions
Divide and Conquer - Approach
▪ As shown in the figure, dividing a problem into two subproblems
is by far the most widely occurring case
▪ These algorithms are designed in such a way that they can also
run on a single processor.
▪ Applying this method to the sum of n numbers, we can solve the
problem by:
▪ Dividing the array of n numbers into two subarrays of size n/2
▪ Recursively dividing each subarray into two equal parts
▪ When there is only one number, the sum is that number itself
▪ Then adding the partial sums as the recursion unwinds
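The steps above can be sketched as a recursive Python function (assumes a non-empty list; names are mine):

```python
def dc_sum(a, lo=0, hi=None):
    """Sum a non-empty list by divide and conquer on the index range [lo, hi]."""
    if hi is None:
        hi = len(a) - 1
    if lo == hi:                 # base case: a single number is its own sum
        return a[lo]
    mid = (lo + hi) // 2
    # divide into two halves, solve each recursively, combine with +
    return dc_sum(a, lo, mid) + dc_sum(a, mid + 1, hi)
```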
Divide and Conquer - Approach
▪ In this approach an input of size n is divided into several
instances of size n/b, a of which need to be solved.
(Here, a and b are constants; a ≥ 1 and b > 1.)
▪ Assuming that size n is a power of b (to simplify our analysis),
we get the following recurrence for the running time T(n):

$$T(n) = a \cdot T(n/b) + f(n)$$

▪ where f(n) is the time spent on dividing the problem into
smaller ones and on combining their solutions.
▪ T(n) depends on the values of the constants a and b and on the
order of growth of the function f(n)
Divide and Conquer - Examples
▪ Merge Sort
▪ Quick Sort
▪ Binary Search
▪ Karatsuba Algorithm – Fast algorithm for multiplying large integers
by dividing the numbers into smaller parts and recursively
multiplying them
▪ Strassen’s Algorithm – Algorithm for matrix multiplication that
reduces the number of scalar multiplications needed by dividing
matrices into submatrices
▪ Master Theorem – Tool to analyze the time complexity of divide and
conquer algorithm
Master Theorem
▪ If $f(n) \in \Theta(n^d)$ where $d \ge 0$ in the general divide-and-conquer
recurrence $T(n) = aT(n/b) + f(n)$ with $a \ge 1$ and $b > 1$, then

$$T(n) \in \begin{cases} \Theta(n^d) & \text{if } a < b^d \\ \Theta(n^d \log n) & \text{if } a = b^d \\ \Theta(n^{\log_b a}) & \text{if } a > b^d \end{cases}$$
Master Theorem - Example
▪ Consider the problem of computing the sum of n numbers
recursively:

$$a_0 + a_1 + a_2 + \dots + a_{n-1} = (a_0 + \dots + a_{n/2-1}) + (a_{n/2} + \dots + a_{n-1})$$

$$A(n) = 2A(n/2) + 1$$

Here $a = 2$, $b = 2$ and $d = 0$, so $a > b^d$ (since $2 > 2^0 = 1$).

$$\therefore A(n) \in \Theta(n^{\log_b a}) = \Theta(n^{\log_2 2}) = \Theta(n)$$
Merge Sort
Approach
▪ Merge sort is a perfect example of a successful application of the
divide-and-conquer technique
▪ It sorts a given array A[0…n-1] by dividing it into two halves
A[0…⌊n/2⌋-1] and A[⌊n/2⌋…n-1]
▪ Sorting each of them recursively
▪ Then merging the two smaller sorted arrays into a single sorted one
Algorithm
MergeSort(A[0…n-1])
//Sorts array A[0…n-1] by recursive Merge Sort
//Input: An array A[0…n-1] of orderable elements
//Output: Array A[0…n-1] sorted in ascending order
if n>1
copy A[0…⌊n/2⌋-1] to B[0…⌊n/2⌋-1]
copy A[⌊n/2⌋…n-1] to C[0…⌈n/2⌉-1]
MergeSort(B[0…⌊n/2⌋-1])
MergeSort(C[0…⌈n/2⌉-1])
Merge(B,C,A)
Algorithm
Merge (B[0…p-1],C[0…q-1],A[0…p+q-1])
//Merges two sorted array into one sorted array
//Input: 2 Sorted Arrays B[0…p-1] and C[0…q-1]
//Output: Sorted Array A[0…p+q-1] sorted in ascending order
i0; j0; k0
while i<p and j<q do
if B[i] ≤ C[j]
A[k]B[i]; ii+1
else
A[k]C[j]; jj+1
kk+1
if i=p
Copy C[j…q-1] to A[k…p+q-1]
else
Copy B[i…p-1] to A[k…p+q-1]
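The two pseudocode routines above, sketched in Python (B and C become list slices; function names are mine):

```python
def merge(b, c, a):
    """Merge sorted lists b and c into a (which has len(b) + len(c) slots)."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    # one input is exhausted: copy the tail of the other
    a[k:] = b[i:] if i < len(b) else c[j:]

def merge_sort(a):
    """Sort list a in place by recursive merge sort; returns a for convenience."""
    if len(a) > 1:
        b = a[:len(a) // 2]      # first half
        c = a[len(a) // 2:]      # second half
        merge_sort(b)
        merge_sort(c)
        merge(b, c, a)
    return a
```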
Analysis
▪ Two pointers (array indices) are initialized to point to the first
elements of the arrays being merged
▪ The elements they point to are compared, and the smaller of them is
added to the new array being constructed
▪ The index of the smaller element's array is then incremented to
point to its immediate successor in the array it was copied from
▪ This operation continues until one of the two given arrays is
exhausted
▪ The remaining elements of the other array are then copied to the end
of the new array
Example
Split:
8 3 2 9 7 1 5 4
8 3 2 9 | 7 1 5 4
8 3 | 2 9 | 7 1 | 5 4
8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
Merge:
3 8 | 2 9 | 1 7 | 4 5
2 3 8 9 | 1 4 5 7
1 2 3 4 5 7 8 9
Efficiency of Merge Sort
▪ Assuming n is a power of 2:

$$C(n) = 2C(n/2) + C_{merge}(n) \text{ for } n > 1, \quad C(1) = 0$$

▪ For the worst case, $C_{merge}(n) = n - 1$: after every key comparison the
number of elements still to be placed decreases by 1, and in the worst
case the smaller element comes from the two arrays alternately, so all
n - 1 comparisons are made.

$$C_{worst}(n) = 2C_{worst}(n/2) + n - 1 \text{ for } n > 1, \quad C_{worst}(1) = 0$$

$$C_{worst}(n) = n \log_2 n - n + 1 \in \Theta(n \log n)$$
Quick Sort
Approach
▪ Divide and conquer technique
▪ Picks an element as pivot and partitions the given array around
the picked pivot element by placing the pivot element in its
correct position in the sorted array
Approach
▪ Choose a pivot: Select an element from the array as pivot. The
choice of pivot can vary (first element, last element, random element
or mid element)
▪ Partition the array: Rearrange the array around the pivot. After
partitioning, all elements smaller than the pivot will be on its left,
and all elements greater than the pivot will be on its right. The pivot
is then in its correct position, and we obtain the index of the pivot.
▪ Recursive Calls: Recursively apply the same process to the two
partitioned sub-arrays (left and right of the pivot)
▪ Base Case: The recursion stops when there is only one element left
in the sub-array, as a single element is already sorted
Algorithm
QuickSort(A[l…r])
//Sorts a subarray by quick sort technique
//Input: Subarray A[l…r] of A[0…n-1] defined by left l and right r
//Output: Subarray A[l…r] in sorted order
if l<r
sPartition(A[l…r]) //s is a split position
QuickSort(A[l…s-1])
QuickSort(A[s+1…r])
Analysis
▪ Left-to-right scan: repeat i←i+1 until A[i] ≥ p
▪ It starts with the second element of the subarray, skips all elements
smaller than the pivot, and stops on finding a bigger or equal element
▪ Right-to-left scan: repeat j←j-1 until A[j] ≤ p
▪ It starts with the last element of the subarray, skips all elements
bigger than the pivot, and stops on finding a smaller or equal element
▪ Three cases may arise depending on whether or not the scanning
indices have crossed.
Analysis – Case 1
▪ if(i<j) i.e., i and j have not crossed:
▪ We simply exchange a[i] and a[j] and continue scanning by
incrementing i and decrementing j
Analysis – Case 2
▪ if(i>j) i.e., i and j have crossed over:
▪ Then we have partitioned the array after exchanging the pivot
with the A[j].
Analysis – Case 3
▪ if(i==j) i.e., the scanning indices stop while pointing to the
same element:
▪ Here the value they are pointing to must be equal to p.
Algorithm
Partition(A[l…r])
//Partitions a sub array by using its first element as a pivot
//Input: A Sub array A[l…r] of A[0...n-1], defined by its left and right indices l and r (l<r)
//Output: A partition of A[l…r], with the split position returned as this function’s value
p  A[l]
il; jr+1
while(true)
repeat ii+1 until A[i]≥p
repeat ji-1 until A[i]≤p
if (i<j)
Swap(A[i],A[j])
else
Swap(A[l],A[j])
return j
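A Python sketch of QuickSort with the Partition routine above (names are mine; an extra bound check guards the left scan when the pivot is the largest element, a detail the pseudocode leaves implicit):

```python
def partition(a, l, r):
    """Partition a[l..r] around pivot a[l]; return the split position."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:   # left-to-right scan, guarded at r
            i += 1
        j -= 1
        while a[j] > p:             # right-to-left scan; stops at a[l] == p
            j -= 1
        if i >= j:                  # indices crossed (or met): partition done
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]         # put the pivot in its final position
    return j

def quick_sort(a, l=0, r=None):
    """Sort a[l..r] in place by quick sort; returns a for convenience."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)
        quick_sort(a, l, s - 1)
        quick_sort(a, s + 1, r)
    return a
```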
Binary Search
Approach
▪ Binary search is a simple technique which can be applied only if
the items to be compared are in either ascending or descending
order
▪ It is a very efficient algorithm for searching in a sorted array
Algorithm
BinarySearch(A[0…n-1],k)
//Implements non-recursive Binary Search
//Input: A sorted array A[0…n-1] and search key k
//Output: An index of the array’s element that is equal to k or -1 //if search is unsuccessful
l0; hn-1
while l≤h do
m(l+h)/2
if k = A[m]
return m
else if k < A[m]
hm-1
else
lm+1
return -1
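The non-recursive pseudocode above as a minimal Python sketch (function name mine):

```python
def binary_search(a, k):
    """Return an index of k in sorted list a, or -1 if k is absent."""
    l, h = 0, len(a) - 1
    while l <= h:
        m = (l + h) // 2        # middle index, floor division
        if k == a[m]:
            return m
        elif k < a[m]:
            h = m - 1           # continue in the left half
        else:
            l = m + 1           # continue in the right half
    return -1
```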
Analysis
▪ To analyze the efficiency of binary search we count the number
of key comparisons made
▪ In the best case: $C_{best}(n) = 1$
▪ In the worst case: $C_{worst}(n) = C_{worst}(\lfloor n/2 \rfloor) + 1$ for $n > 1$, $C_{worst}(1) = 1$
▪ Solution: $C_{worst}(n) = \lfloor \log_2 n \rfloor + 1 \in \Theta(\log n)$
Example – Successful Search: key k = 22

Index  0  1  2  3   4   5   6   7   8   9
A[]    1  4  7  11  15  22  37  55  71  90

Step 1: l=0, h=9 → m=4; A[4]=15 < 22 → l=5
Step 2: l=5, h=9 → m=7; A[7]=55 > 22 → h=6
Step 3: l=5, h=6 → m=5; A[5]=22 = k → return 5
