
Module-II

 Searching and Sorting: Linear Search, Insertion sort


 Divide and Conquer: Binary Search, Merge sort, Quick sort.
 Graph Traversal: Breadth First Search and Traversal, Depth First Search
and Traversal.
Searching
 Searching algorithms are methods or procedures used to find a
specific item or element within a collection of data.
 These algorithms are widely used in computer science and are
crucial for tasks like searching for a particular record in a
database, finding an element in a sorted list, or locating a file
on a computer.
 Linear Search:
Searching is the process of finding a particular element in a list. If the element is present in the list, the search is called successful and the process returns the location of that element; otherwise, the search is called unsuccessful.

 Linear search is also called the sequential search
algorithm. It is the simplest searching algorithm.
 In Linear search, we simply traverse the list completely
and match each element of the list with the item whose
location is to be found.
 If the match is found, then the location of the item is
returned; otherwise, the algorithm returns NULL.
Sequential search algorithm
 The steps used in the implementation of Linear Search are
listed as follows -
 First, we have to traverse the array elements using a for loop.
 In each iteration of for loop, compare the search element with
the current array element, and -
◦ If the element matches, then return the index of the
corresponding array element.
◦ If the element does not match, then move to the next
element.
 If there is no match or the search element is not present in the
given array, the algorithm returns NULL (element not found).
Working of Linear search
 Let the elements of the array be as shown in the figure, and let the element to be searched be K = 41.
 Now, start from the first element and compare K with each element of the array.
 The value of K, i.e., 41, does not match the first element of the array, so move to the next element, and follow the same process until the respective element is found.
 When the element to be searched is found, the algorithm returns the index of the matched element.
Program:
int i;
for (i = 0; i < n; i++)
{
    if (A[i] == k)
    {
        System.out.println(i);
        break;
    }
}
if (i == n)
{
    System.out.println("Element not found");
}
Analysis
 Best Case Complexity - In Linear search, best case occurs
when the element we are finding is at the first position of the
array. The best-case time complexity of linear search is O(1).
 Average Case Complexity - On average, the search element is
compared with half of the array elements, so the average case of
linear search takes (n+1)/2 comparisons, i.e., O(n).
 Worst Case Complexity - In Linear search, the worst case
occurs when the element we are looking for is present at the end
of the array, or is not present in the given array at all, and we have to
traverse the entire array. The worst-case time complexity of
linear search is O(n).
 Average number of comparisons = sum of all the cases divided by the number of cases
= (1 + 2 + 3 + …… + n) / n = (n+1)/2
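The analysis above can be exercised with a compact, runnable version of the earlier program (the class name and sample array are ours, chosen to match the K = 41 example):

```java
public class LinearSearchDemo {
    // Returns the index of k in A, or -1 if k is not present.
    static int linearSearch(int[] A, int k) {
        for (int i = 0; i < A.length; i++) {
            if (A[i] == k) {
                return i;      // best case: k at index 0, one comparison
            }
        }
        return -1;             // worst case: n comparisons, k absent
    }

    public static void main(String[] args) {
        int[] a = {70, 40, 30, 11, 57, 41, 25, 14, 52};  // sample data
        System.out.println(linearSearch(a, 41));  // prints 5
        System.out.println(linearSearch(a, 99));  // prints -1
    }
}
```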
Decrease and Conquer
1. Reduce problem instance to smaller instance of the
same problem and extend solution
2. Solve smaller instance
3. Extend solution of smaller instance to obtain solution
to original problem
 Also referred to as inductive or incremental approach

Examples of Decrease and Conquer
 Decrease by one:
◦ Insertion sort
◦ Graph search algorithms:
 DFS
 BFS
 Topological sorting
◦ Algorithms for generating permutations, subsets

 Decrease by a constant factor
◦ Binary search
◦ Fake-coin problem
◦ Multiplication à la russe
◦ Josephus problem

 Variable-size decrease
◦ Euclid’s algorithm
◦ Selection by partition

Basic Idea

Insertion Sort
 To sort array A[0..n-1], sort A[0..n-2] recursively and
then insert A[n-1] in its proper place among the sorted
A[0..n-2]
 Usually implemented bottom up (nonrecursively)

 Example: Sort 6, 4, 1, 8, 5

6|4 1 8 5
4 6|1 8 5
1 4 6|8 5
1 4 6 8|5
1 4 5 6 8
Pseudocode of Insertion Sort
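The pseudocode slide appears as an image in the original; a minimal Java sketch of the bottom-up insertion sort just described (the class and method names are ours):

```java
public class InsertionSortDemo {
    // Sorts A in place by repeatedly inserting A[i]
    // into its proper place among the already-sorted A[0..i-1].
    static void insertionSort(int[] A) {
        for (int i = 1; i < A.length; i++) {
            int v = A[i];          // element to insert
            int j = i - 1;
            while (j >= 0 && A[j] > v) {
                A[j + 1] = A[j];   // shift larger elements one slot right
                j--;
            }
            A[j + 1] = v;
        }
    }

    public static void main(String[] args) {
        int[] a = {6, 4, 1, 8, 5};   // the slide's example
        insertionSort(a);
        System.out.println(java.util.Arrays.toString(a));  // [1, 4, 5, 6, 8]
    }
}
```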
Decrease by One:
Insertion Sort

Insertion Sort
Algorithm to sort n elements:
Sort n-1 elements of the array
Insert the n-th element

Complexity: (n2) in the worst and the average case, and


(n) on almost sorted arrays.

17
Analysis of Insertion Sort

 Time efficiency
Cworst(n) = n(n-1)/2 ∈ Θ(n²)
Cavg(n) ≈ n²/4 ∈ Θ(n²)
Cbest(n) = n - 1 ∈ Θ(n) (also fast on almost sorted arrays)

 Space efficiency: in-place

 Stability: yes

 Best elementary sorting algorithm overall
◦ Can be improved further with binary insertion sort
Divide and Conquer
Three Steps of The Divide and Conquer
Approach
The most well known algorithm design strategy:
1. Divide the problem into two or more smaller
subproblems.

2. Conquer the subproblems by solving them


recursively.

3. Combine the solutions to the subproblems into the


solutions for the original problem.
A Typical Divide and Conquer Case

A problem of size n
is divided into
subproblem 1 of size n/2 and subproblem 2 of size n/2,
which are solved to obtain
a solution to subproblem 1 and a solution to subproblem 2,
which are then combined into
a solution to the original problem.
Divide and Conquer Examples
 Sorting: mergesort and quicksort

 Tree traversals

 Binary search

 Matrix multiplication-Strassen’s algorithm


Steps
 Step1: The merge sort algorithm recursively divides an array into
equal halves until we reach an atomic value. If there is an
odd number of elements in the array, then one of the halves will have
one more element than the other.
 Step2: After dividing an array into two subarrays, we will notice that
it did not hamper the order of elements as they were in the original
array. After now, we will further divide these two arrays into other
halves.
 Step3: Again, we will divide these arrays until we achieve an atomic
value, i.e., a value that cannot be further divided.
 Step4: Next, we will merge them back in the same way
as they were broken down.
 Step5: For each list, we will first compare the element
and then combine them to form a new sorted list.
 Step6: In the next iteration, we will compare the lists
of two data values and merge them back into a list of
four data values, all placed in a sorted manner.
Mergesort
Algorithm:
 The base case: n = 1, the problem is naturally solved.
 The general case:
◦ Divide: Divide array A[0..n-1] in two and make copies of each half
in arrays B[0.. n/2 - 1] and C[0.. n/2 - 1]
◦ Conquer: Sort arrays B and C recursively using merge sort
◦ Combine: Merge sorted arrays B and C into array A as follows:
(Example: 1 3 7, 2 4 5 9 )
 Repeat the following until no elements remain in one of the arrays:
 compare the first elements in the remaining unprocessed portions of the arrays
 copy the smaller of the two into A, while incrementing the index indicating the
unprocessed portion of that array
 Once one of the arrays is exhausted, copy the remaining unprocessed
elements from the other array into A.
The Mergesort Algorithm
ALGORITHM Mergesort(A[0..n-1])
//Sorts array A[0..n-1] by recursive mergesort
//Input: An array A[0..n-1] of orderable elements
//Output: Array A[0..n-1] sorted in nondecreasing order

Mergesort(A, lb, ub)
{
    if (lb < ub)
    {
        mid = (lb + ub) / 2;
        Mergesort(A, lb, mid);
        Mergesort(A, mid + 1, ub);
        Merge(A, lb, mid, ub);
    }
}
Merge Algorithm
Merge(A, lb, mid, ub)
{
    i = lb; j = mid + 1; k = lb;
    while (i <= mid && j <= ub)
    {
        if (a[i] <= a[j]) { b[k] = a[i]; i++; }
        else { b[k] = a[j]; j++; }
        k++;
    }
    if (i > mid)
    {
        while (j <= ub) { b[k] = a[j]; j++; k++; }
    }
    else
    {
        while (i <= mid) { b[k] = a[i]; i++; k++; }
    }
    for (k = lb; k <= ub; k++) { a[k] = b[k]; }
}
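The Mergesort/Merge pair above translates into a runnable Java sketch as follows (the class name and the shared auxiliary array `b` are our choices; variable names follow the slides):

```java
public class MergeSortDemo {
    static int[] b;   // auxiliary array shared by all merge steps

    static void mergeSort(int[] a, int lb, int ub) {
        if (lb < ub) {
            int mid = (lb + ub) / 2;
            mergeSort(a, lb, mid);        // sort the left half
            mergeSort(a, mid + 1, ub);    // sort the right half
            merge(a, lb, mid, ub);        // combine the two sorted halves
        }
    }

    static void merge(int[] a, int lb, int mid, int ub) {
        int i = lb, j = mid + 1, k = lb;
        while (i <= mid && j <= ub) {     // copy the smaller front element
            if (a[i] <= a[j]) b[k++] = a[i++];
            else              b[k++] = a[j++];
        }
        while (i <= mid) b[k++] = a[i++]; // copy leftovers, if any
        while (j <= ub)  b[k++] = a[j++];
        for (k = lb; k <= ub; k++) a[k] = b[k];
    }

    public static void main(String[] args) {
        int[] a = {8, 3, 2, 9, 7, 1, 5, 4};   // the slides' example
        b = new int[a.length];
        mergeSort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a));
    }
}
```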
Mergesort Examples
 8 3 2 9 7 1 5 4
 7 2 1 6 4
Merge Sort
General Divide and Conquer recurrence

The Master Theorem

T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^k)
◦ aT(n/b) is the time spent on solving the a subproblems of size n/b
◦ f(n) is the time spent on dividing the problem into smaller ones and combining their solutions

1. a < b^k : T(n) ∈ Θ(n^k)
2. a = b^k : T(n) ∈ Θ(n^k log n)
3. a > b^k : T(n) ∈ Θ(n^(log_b a))
Analysis
 C(n) = 2C(n/2) + Cmerge(n) for n>1
 If n=1, C(1) =0
 Best Case Complexity: The merge sort algorithm has a best-case
time complexity of Θ(n*log n) for the already sorted array.
 Average Case Complexity: The average-case time complexity for
the merge sort algorithm is O(n*log n), which happens when 2 or
more elements are jumbled, i.e., neither in the ascending order nor
in the descending order.
 Worst Case Complexity: The worst-case time complexity is
also O(n*log n), which occurs when we sort the descending order
of an array into the ascending order.
 Space Complexity: The space complexity of merge sort is O(n).
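These bounds also follow from the Master Theorem applied to the recurrence above (a worked check, using the theorem's notation from the earlier slide):

```latex
C(n) = 2\,C(n/2) + C_{\mathrm{merge}}(n), \qquad C_{\mathrm{merge}}(n) \in \Theta(n)
\;\Rightarrow\; a = 2,\; b = 2,\; k = 1,\; a = b^{k}
\;\Rightarrow\; C(n) \in \Theta\!\big(n^{k}\log n\big) = \Theta(n \log n)
```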
The Divide, Conquer and Combine Steps
in Quicksort
 Divide: Partition array A[l..r] into 2 subarrays, A[l..s-1] and A[s+1..r]
such that each element of the first array is ≤A[s] and each element of
the second array is ≥ A[s]. (computing the index of s is part of
partition.)
◦ Implication: A[s] will be in its final position in the sorted array.

 Conquer: Sort the two subarrays A[l..s-1] and A[s+1..r] by recursive


calls to quicksort
 Combine: No work is needed, because A[s] is already in its correct
place after the partition is done, and the two subarrays have been sorted.
Quicksort
 Select a pivot w.r.t. whose value we are going to divide the sublist.
(e.g., p = A[l])
 Rearrange the list so that it starts with the pivot followed by a ≤ sublist
(a sublist whose elements are all smaller than or equal to the pivot)
and a ≥ sublist (a sublist whose elements are all greater than or equal
to the pivot ) (See algorithm Partition in section 4.2)
 Exchange the pivot with the last element in the first sublist(i.e., ≤
sublist) – the pivot is now in its final position
 Sort the two sublists recursively using quicksort.

(After partitioning: the elements with A[i] ≤ p precede the pivot, and the elements with A[i] ≥ p follow it.)
The Quicksort Algorithm

ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: A subarray A[l..r] of A[0..n-1], defined by its
left and right indices l and r
//Output: The subarray A[l..r] sorted in nondecreasing
order
if l < r
    s ← Partition(A[l..r]) // s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])
The Partition Algorithm
ALGORITHM Partition(A[l..r])
//Partitions a subarray by using its first element as a pivot
//Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
//Output: A partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] >= p // left-to-right scan
    repeat j ← j - 1 until A[j] <= p // right-to-left scan
    if i < j // need to continue with the scan
        swap(A[i], A[j])
until i >= j // no need to scan
swap(A[l], A[j])
return j
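The Quicksort and Partition routines above can be transcribed into runnable Java as follows (class and method names are ours; where the pseudocode relies on a sentinel to stop the left scan, this sketch adds an explicit `i <= r` bound instead):

```java
public class QuickSortDemo {
    static void quickSort(int[] A, int l, int r) {
        if (l < r) {
            int s = partition(A, l, r);   // s is a split position
            quickSort(A, l, s - 1);
            quickSort(A, s + 1, r);
        }
    }

    // Partitions A[l..r] around pivot A[l]; returns the pivot's final index.
    static int partition(int[] A, int l, int r) {
        int p = A[l];
        int i = l, j = r + 1;
        while (true) {
            do { i++; } while (i <= r && A[i] < p);  // left-to-right scan
            do { j--; } while (A[j] > p);            // right-to-left scan
            if (i >= j) break;                       // scans have crossed
            swap(A, i, j);
        }
        swap(A, l, j);   // put the pivot into its final position
        return j;
    }

    static void swap(int[] A, int x, int y) {
        int t = A[x]; A[x] = A[y]; A[y] = t;
    }

    public static void main(String[] args) {
        int[] a = {15, 10, 13, 27, 12, 22, 20, 25};  // the slides' example
        quickSort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a));
    }
}
```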
Quicksort Example

Array (indices 0 to 7):
0 1 2 3 4 5 6 7
15 10 13 27 12 22 20 25

l = 0, r = 7; since 0 < 7 is true, call Partition.
Pivot p = A[l] = 15; i = 0, j = 8.

Left-to-right scan: A[1] = 10 and A[2] = 13 are less than p, so increment i each time; A[3] = 27 >= p is true, so place i at 3.
Right-to-left scan: A[7] = 25, A[6] = 20, and A[5] = 22 are greater than p, so decrement j each time; A[4] = 12 <= p is true, so place j at 4.
Since i < j (3 < 4) is true, swap A[i] and A[j]:
0 1 2 3 4 5 6 7
15 10 13 12 27 22 20 25

Resume the scans on 15 10 13 12 27 22 20 25:
A[4] = 27 >= p is true, so place i at 4; A[3] = 12 <= p is true, so place j at 3.
Here i > j, so the scans have crossed: swap A[l] and A[j]:
0 1 2 3 4 5 6 7
12 10 13 15 27 22 20 25

 So now the left half of the array holds elements less than the pivot, the right half holds elements greater than the pivot, and the pivot 15 is in its final position (split position j = 3).
Next, sort the left subarray (indices 0 to 2):
0 1 2
12 10 13

Pivot p = 12; i = 0, j = 3.
Left-to-right scan: A[1] = 10 < p, so increment i; A[2] = 13 >= p is true, so place i at 2.
Right-to-left scan: A[2] = 13 > p, so decrement j; A[1] = 10 <= p is true, so place j at 1.
Check if i < j: 2 < 1 is false, so swap A[l] and A[j]:
0 1 2
10 12 13

The right subarray (indices 4 to 7) is partitioned in the same way with pivot p = 27:
4 5 6 7
27 22 20 25
Efficiency of Quicksort

Based on whether the partitioning is balanced.


 Best case: split in the middle — Θ(n log n)
◦ C(n) = 2C(n/2) + Θ(n) // 2 subproblems of size n/2 each
 Worst case: sorted array! — Θ(n²)
◦ C(n) = C(n-1) + n + 1 // 2 subproblems of size 0 and n-1 respectively
 Average case: random arrays — Θ(n log n)
Analysis
T(n) = T(1) + T(n-1) + n
T(1) is the time taken by the pivot element.
T(n-1) is the time taken by the remaining elements except the pivot.
n is the number of comparisons required to identify the exact position of the pivot: the pivot is compared with every other element, so if there are n items there are n comparisons (for example, comparing a first-element pivot with the others in a 5-item list takes 5 comparisons).

Relational formula for the worst case, unrolling the recurrence:

T(n) = (n-1)T(1) + T(1) + 2 + 3 + 4 + …… + n

T(n) = (n-1)T(1) + T(1) + (1 + 2 + 3 + 4 + …… + n) - 1
[adding 1 and subtracting 1 to make an AP series]

T(n) = (n-1)T(1) + T(1) + n(n+1)/2 - 1

Stopping condition: T(1) = 0,
because at the end only one element is left and no comparison is required.

T(n) = (n-1)(0) + 0 + n(n+1)/2 - 1

The worst-case complexity of quicksort is therefore T(n) = O(n²).
Example
44 33 11 55 77 90 40 60 99 22 88
Binary Search

 In Binary Search technique, we search an element in a sorted array by recursively

dividing the interval in half.


 Firstly, we take the whole array as an interval.
 If the search Element (the item to be searched) is less than the item in the middle

of the interval, We discard the second half of the list and recursively repeat the

process for the first half of the list by calculating the new middle and last element.
 If the search Element (the item to be searched) is greater than the item in the

middle of the interval, we discard the first half of the list and work recursively on

the second half by calculating the new beginning and middle element.
 Repeatedly, check until the value is found or interval is empty.
Binary Search
Binary Search – an Iterative Algorithm
ALGORITHM BinarySearch(A[0..n-1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n-1] sorted in ascending order and
// a search key K
//Output: An index of the array's element that is equal to K
// or -1 if there is no such element
l ← 0; r ← n - 1
while l ≤ r do // when l and r cross over, K cannot be found
    m ← ⌊(l + r) / 2⌋
    if K = A[m] return m // the key is found
    else
        if K < A[m] r ← m - 1 // the key is in the left half of the array
        else l ← m + 1 // the key is in the right half of the array
return -1
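The iterative algorithm translates directly to Java (the class name and the sample sorted array are ours):

```java
public class BinarySearchDemo {
    // Nonrecursive binary search: returns an index of K in sorted A, or -1.
    static int binarySearch(int[] A, int K) {
        int l = 0, r = A.length - 1;
        while (l <= r) {                  // when l and r cross, K is absent
            int m = (l + r) / 2;
            if (K == A[m]) return m;      // the key is found
            else if (K < A[m]) r = m - 1; // the key is in the left half
            else l = m + 1;               // the key is in the right half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98};
        System.out.println(binarySearch(a, 70));  // prints 7
        System.out.println(binarySearch(a, 10));  // prints -1
    }
}
```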
Binary Search -- Efficiency

 What is the recurrence relation?
C(n) = C(n/2) + 2

 Efficiency
C(n) ∈ Θ(log n)
Graph Traversal

 Graph traversal is the process of visiting all the vertices (nodes) in a


graph in a systematic way. It involves exploring or traversing each
vertex in a graph by following the edges that connect the vertices.
There are two common methods for graph traversal
1. Breadth-First Search (BFS)
2. Depth-First Search (DFS).
 BFS explores all the neighboring nodes at the current depth before

moving on to nodes at the next depth level, while DFS explores the
deepest vertices of a graph before backtracking to explore other
vertices.
 Graph traversal is used in many graph algorithms, such as finding the

shortest path between two vertices, checking if a graph is connected,


finding cycles in a graph, and more. By visiting all the vertices in a
graph, graph traversal helps to uncover the structure and properties of
the graph, which can be used to solve various problems.
Types of Graph Traversal

There are two common methods for graph traversal:


 Breadth-First Search (BFS)
 Depth-First Search (DFS)
Breadth-First Search (BFS)

 Breadth-First Search (BFS) is a graph traversal algorithm that


explores all the neighboring nodes at the current depth before moving
on to nodes at the next depth level. It starts at the root node (or any
other randomly chosen node) and explores all the vertices in the
graph.
 The BFS algorithm uses a queue data structure to keep track of the
nodes to visit.

Breadth-First Search (BFS) Algorithm

 Step 1: Define a queue whose size equals the number of vertices in the graph.

 Step 2: Start the traversal from the starting vertex, put it into the queue, and count it as visited.

 Step 3: Enqueue all the neighboring nodes of the visited node that have not been visited yet.

 Step 4: When there are no more unvisited neighbors to enqueue, dequeue the front node from the queue.

 Step 5: Repeat steps 3 and 4 until the queue is empty.

 Step 6: When the queue is empty, form the final spanning tree by removing the unused edges from the graph.
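The steps above can be sketched in Java with adjacency lists and an `ArrayDeque` as the queue (the graph representation and the small sample graph are our own; the slides' example graph is an image):

```java
import java.util.*;

public class BfsDemo {
    // Returns vertices in BFS visit order, starting from `start`.
    static List<Integer> bfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        List<Integer> order = new ArrayList<>();
        Deque<Integer> queue = new ArrayDeque<>();
        visited[start] = true;            // count the start as visited
        queue.add(start);
        while (!queue.isEmpty()) {
            int v = queue.remove();       // dequeue the front node
            order.add(v);
            for (int w : adj.get(v)) {    // enqueue unvisited neighbors
                if (!visited[w]) {
                    visited[w] = true;
                    queue.add(w);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Sample graph with edges 0-1, 0-2, 1-3, 2-3
        List<List<Integer>> adj = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0, 3), List.of(1, 2));
        System.out.println(bfs(adj, 0));   // [0, 1, 2, 3]
    }
}
```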
Consider the following Graph to perform
Breadth First Traversal (BFS)
Depth First Search Traversal

 We use the following steps to implement DFS


traversal...
 Step 1 - Define a stack of size equal to the number of
vertices in the graph.

 Step 2 - Choose any vertex as the starting point for the
traversal. Visit that vertex and push it onto the stack.

 Step 3 - Push any non-visited adjacent vertex of the
vertex on top of the stack onto the stack.

 Step 4 - Repeat step 3 until there is no new vertex to
visit from the vertex on top of the stack.


Depth First Search Traversal
 Step 5 - Backtrack and remove one vertex from the
stack if there are no new vertices.
 Step 6 - Repeat steps 3, 4 and 5 until stack becomes

Empty.
 Step 7 - When the stack is empty, remove the unused graph
edges to create the final spanning tree.
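The stack-based procedure above can be sketched in Java as follows (the adjacency-list representation, method names, and sample graph are our own; a vertex is visited when it is pushed and popped when it becomes a dead end):

```java
import java.util.*;

public class DfsDemo {
    // Iterative DFS following the stack-based steps above.
    static List<Integer> dfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        List<Integer> order = new ArrayList<>();
        Deque<Integer> stack = new ArrayDeque<>();
        visited[start] = true;              // visit the starting vertex
        order.add(start);
        stack.push(start);
        while (!stack.isEmpty()) {
            int v = stack.peek();
            int next = -1;
            for (int w : adj.get(v)) {      // find a non-visited neighbor
                if (!visited[w]) { next = w; break; }
            }
            if (next == -1) {
                stack.pop();                // dead end: backtrack
            } else {
                visited[next] = true;       // visit it and push it
                order.add(next);
                stack.push(next);
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Sample graph with edges 0-1, 0-2, 1-3, 2-3
        List<List<Integer>> adj = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0, 3), List.of(1, 2));
        System.out.println(dfs(adj, 0));   // [0, 1, 3, 2]
    }
}
```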


Consider the following graph for example of DFS

Step 1: Select the vertex 1 as a starting point (Visit 1). Push 1 into the Stack
Step 2: Visit any non-visited adjacent vertex of 1. Push 2 in
the stack
Step 3: Visit any non-visited adjacent vertex
of 2. Push 3 in the stack
Step 4: There is no vertex to visit from 3. Now
Backtrack from 3 and POP 3 from Stack
Step 5: Visit any non-visited
adjacent vertex of 2. Push 5 in the stack.
Step 6: Visit any non-visited adjacent vertex of 5. Push 4 in the
stack.
Step 7: There is no unvisited vertex from 4. Now
start backtracking and POP 4 from stack.
Step 8: There is no unvisited vertex from 5.
Now start backtracking and POP 5 from stack.
Step 9: There is no unvisited vertex from 2. Now start
backtracking and POP 2 from stack.
Step 10: There is no unvisited
vertex from 1. POP 1 from stack.
Step 11: Stack is empty now. Stop DFS Traversal.
Final Spanning tree after DFS traversal is given
below
Graph Traversal

Many problems require processing all graph vertices (and


edges) in systematic fashion

Graph traversal algorithms:

◦ Depth-first search (DFS)

◦ Breadth-first search (BFS)


Depth-First Search (DFS)
Visits graph’s vertices by always moving away from last
visited vertex to an unvisited one, backtracks if no
adjacent unvisited vertex is available.

 Recursive, or it uses a stack


◦ a vertex is pushed onto the stack when it’s reached for the first
time
◦ a vertex is popped off the stack when it becomes a dead end,
i.e., when there is no adjacent unvisited vertex

 “Redraws” graph in tree-like fashion (with tree edges and


back edges for undirected graph)
Decrease by One:
Graph Search

Depth-first search algorithm:


dfs(v)
    process(v)
    mark v as visited
    for all vertices i adjacent to v and not visited
        dfs(i)

Complexity: Θ(|V|²) if the graph is represented by an adjacency matrix, Θ(|V| + |E|) if
represented by adjacency lists

Pseudocode of DFS
Example: DFS traversal of undirected graph

a b c d
e f g h

(Figure: in the resulting DFS tree, red edges are tree edges and white edges are back edges.)

Starting at a, the vertices are pushed onto the traversal stack in the order a, b, f, e, g, c, d, h; the first-visit numbers in the figure are a=1, b=2, f=3, e=4, g=5, c=6, d=7, h=8.

DFS traversal stack evolution:
a → ab → abf → abfe (e is a dead end: pop e, then f) → abg → abgc → abgcd → abgcdh, after which h, d, c, g, b, and a become dead ends and are popped in turn.
Graph Search: DFS

DFS
 DFS can be implemented with graphs represented as:
◦ adjacency matrices: Θ(|V|2). Why?
◦ adjacency lists: Θ(|V|+|E|). Why?

 Yields two distinct orderings of vertices:


◦ order in which vertices are first encountered (pushed onto
stack)
◦ order in which vertices become dead-ends (popped off stack)

 Applications:
◦ checking connectivity, finding connected components
◦ checking acyclicity (if no back edges)
◦ finding articulation points and biconnected components
◦ searching the state-space of problems for solutions (in AI)
