
Design and Analysis of Algorithms
UNIT 2

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Brute Force and Exhaustive Search

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Introduction
• After introducing algorithm analysis, the discussion shifts to algorithm design strategies.
• The chapters will explore various strategies, starting with brute force and its special case, exhaustive search.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Brute Force Defined
• Brute force is a direct problem-solving approach based on the problem statement and concept definitions.
• It emphasizes computational power rather than intellectual insight.
• Often described as "Just do it!" because it directly follows the problem's definition.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Example: Exponentiation Problem
• Problem Statement: Illustrate brute force with the exponentiation problem: computing a^n for a nonzero number a and a nonnegative integer n.
• Brute Force Approach: Directly compute a^n by multiplying a by itself n times.
• Examples encountered earlier include consecutive integer checking for GCD and definition-based matrix multiplication.

Importance of Brute Force:
• Practical Value: For critical problems like sorting, searching, matrix multiplication, and string matching, brute force yields practical algorithms, especially for small instances.
• It provides reasonable solutions regardless of the instance size.
• Handling Small Problems: Despite its general inefficiency, brute force is useful for solving small-size instances efficiently and can offer practical solutions for specific cases.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
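A minimal Python sketch of the brute-force exponentiation idea described above; the function name power_brute is illustrative, not from the text.

def power_brute(a, n):
    # Compute a^n by multiplying a by itself n times (n is a nonnegative integer).
    result = 1
    for _ in range(n):
        result *= a
    return result

print(power_brute(2, 10))  # -> 1024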
Selection Sort and Bubble Sort

Sorting Problem:
• Focus on applying the brute-force approach to the sorting problem.
• Sorting involves arranging a list of n orderable items in nondecreasing order.
• Numerous sorting algorithms exist, but we explore two basic ones: Selection Sort and Bubble Sort.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Brute Force Approach for Sorting

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Selection Sort

Basic Idea:
• Start by scanning the entire list to find the smallest element and exchange it with the first element.
• Iterate through the list, finding the smallest among the remaining elements and placing it in its final position.
• Repeat this process until the entire list is sorted.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


ALGORITHM SelectionSort(A[0..n−1])
for i ← 0 to n−2 do
    min ← i
    for j ← i+1 to n−1 do
        if A[j] < A[min] min ← j
    swap A[i] and A[min]

Analysis:
• The input size is n, and the basic operation is the key comparison A[j] < A[min].
• The number of key comparisons is given by
  C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = (n−1) + (n−2) + ... + 1 = n(n−1)/2.
• Solving the sum shows that selection sort is an O(n²) algorithm.

Example: Illustration using the list (see the visualizers below).
https://visualgo.net/en/sorting
https://algo-verse.vercel.app/selection_sort.html

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
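A direct Python translation of the SelectionSort pseudocode above, shown as an illustrative sketch; the sample list is chosen only for demonstration.

def selection_sort(A):
    n = len(A)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if A[j] < A[min_idx]:          # basic operation: key comparison
                min_idx = j
        A[i], A[min_idx] = A[min_idx], A[i]  # one swap per pass
    return A

print(selection_sort([89, 45, 68, 90, 29, 34, 17]))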


● The analysis shows that the number of key comparisons made by the selection sort algorithm grows quadratically with the size of the input, making it an O(n²) algorithm.

● While the number of key comparisons is O(n²), the number of key swaps is only O(n), making selection sort more efficient in this respect than some other sorting algorithms.

● The space complexity of selection sort is O(1), commonly referred to as constant space. Regardless of the size of the input array, the additional memory used by the algorithm remains constant.

● The brute-force nature of selection sort lies in its simplicity and directness. It doesn't involve complex calculations or advanced strategies.

● The algorithm doesn't exploit any specific patterns in the data; instead, it systematically selects the smallest remaining element and places it in its correct position.

● While not the most efficient sorting algorithm, selection sort serves as a basic illustration of the brute-force approach to problem solving.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Bubble Sort

Basic Idea:
• Compare adjacent elements of the list.
• If they are out of order, swap them.
• Repeat this process until the entire list is sorted.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


ALGORITHM BubbleSort(A[0..n−1])
// Sorts a given array by bubble sort
// Input: An array A[0..n − 1] of orderable elements
// Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n−2 do
    for j ← 0 to n−2−i do
        if A[j + 1] < A[j] swap A[j] and A[j + 1]

Analysis:
• The number of key comparisons is given by
  C(n) = Σ_{i=0}^{n−2} Σ_{j=0}^{n−2−i} 1 = (n−1) + (n−2) + ... + 1 = n(n−1)/2.
• The algorithm therefore grows quadratically with the size of the input: C(n) ∈ O(n²).

Example: Illustration using the list (see the visualizer below).
https://visualgo.net/en/sorting

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
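A Python sketch of the basic BubbleSort pseudocode above, for illustration only.

def bubble_sort(A):
    n = len(A)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if A[j + 1] < A[j]:
                A[j], A[j + 1] = A[j + 1], A[j]   # swap out-of-order neighbours
    return A

print(bubble_sort([89, 45, 68, 90, 29, 34, 17]))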


Number of Key Swaps:
• The number of key swaps in the worst case, S_worst(n), is also O(n²), making bubble sort inefficient for worst-case inputs.
• Although the basic version of bubble sort is not efficient, it can often be improved with modest effort.
• Specifically, the algorithm can be enhanced to stop early if a pass makes no exchanges (the list is already sorted). This optimization reduces the running time on some inputs.

Example:
Consider the input list [1, 2, 3, 4, 5]. In the enhanced version, the algorithm would detect that the list is already sorted after the first pass (no swaps were made) and would terminate early without unnecessary additional passes.
● This enhancement improves the efficiency of bubble sort in cases where the list is nearly sorted or already sorted.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
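A sketch of the early-termination enhancement just described, assuming the same Python setting as the basic version above.

def bubble_sort_improved(A):
    n = len(A)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if A[j + 1] < A[j]:
                A[j], A[j + 1] = A[j + 1], A[j]
                swapped = True
        if not swapped:          # no exchanges: the list is already sorted
            break
    return A

print(bubble_sort_improved([1, 2, 3, 4, 5]))  # terminates after a single pass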


Comparison:
• Both selection sort and bubble sort have the same worst-case time complexity, O(n²).
• The number of key swaps for bubble sort can be higher than that of selection sort, making bubble sort less efficient in terms of swaps.

Efficiency Based on Brute Force:
• In terms of key swaps, selection sort is more efficient than bubble sort, especially for worst-case inputs.
• However, both algorithms are considered inefficient for large datasets compared to more advanced sorting algorithms.
• No comparison-based sorting algorithm has a best-case time of O(1).

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Brute Force Approach for Searching

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Sequential Search

Sequential search is a straightforward algorithm for finding a target element in a list. It checks each element one by one until a match is found or the entire list has been searched.

ALGORITHM SequentialSearch2(A[0..n], K)
A[n] ← K // Append the search key to the end of the list as a sentinel
i ← 0
while A[i] ≠ K do
    i ← i + 1
if i < n then return i
else return −1

https://algo-verse.vercel.app/linear_search.html

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
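A Python sketch of the sentinel version of sequential search shown above; the list is copied before the sentinel is appended so the caller's data is untouched.

def sequential_search_sentinel(A, K):
    A = A + [K]                 # append the key as a sentinel
    i = 0
    while A[i] != K:            # guaranteed to stop at the sentinel
        i += 1
    return i if i < len(A) - 1 else -1

print(sequential_search_sentinel([3, 14, 27, 31, 39, 42], 31))  # -> 3
print(sequential_search_sentinel([3, 14, 27, 31, 39, 42], 8))   # -> -1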


Explanation:
1. Initialization: Append the search key K to the end of the array A to act as a sentinel.
2. Search Loop: Use a while loop to iterate through the array elements until a match is found (i.e., A[i] = K).
3. Check: If a match is found within the original list (i.e., i < n), return the index i where the match occurred.
4. Unsuccessful Search: If the loop stops only at the sentinel, return −1.

Enhancement for Sorted Lists:
• If the list is sorted, the search can be optimized by stopping as soon as an element greater than or equal to the search key is encountered.

Efficiency:
• The efficiency of sequential search remains linear (O(n)) in both the worst and average cases. While it is simple, its efficiency can be inferior, especially for large datasets.

Sentinel Technique:
• Appending the search key as a sentinel simplifies the algorithm by eliminating the need for an additional check for reaching the end of the list.
• This algorithm provides a basic illustration of the brute-force approach, emphasizing simplicity but not necessarily the most efficient performance.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Sentinel
● A sentinel, in the context of algorithms and data structures, is a special value used to mark the end of a data structure or to simplify algorithm implementation.
● It serves as a signal or marker, often taking a unique value that is unlikely to be confused with actual data.

Search Method        | Best Case (Found at Start) | Worst Case (Last Element) | Not Found
Standard Sequential  | O(1)                       | O(n)                      | O(n) + 1 (extra end-of-list check)
Sentinel Search      | O(1)                       | O(n)                      | O(n) (no extra check)

• If the target is present, both methods behave the same.
• The sentinel is useful mainly when the element is not found!

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Brute-Force String Matching

ALGORITHM BruteForceStringMatch(T[0..n − 1], P[0..m − 1])
// Implements brute-force string matching
// Input: An array T[0..n − 1] of n characters representing a text and
//        an array P[0..m − 1] of m characters representing a pattern
// Output: The index of the first character in the text that starts a
//         matching substring, or −1 if the search is unsuccessful

for i ← 0 to n − m do
    j ← 0
    while j < m and P[j] = T[i + j] do
        j ← j + 1
    if j = m return i   // i is the index of the starting position
return −1
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
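A Python sketch of the BruteForceStringMatch pseudocode above; the sample text and pattern are illustrative only.

def brute_force_string_match(T, P):
    n, m = len(T), len(P)
    for i in range(n - m + 1):          # n - m + 1 possible starting positions
        j = 0
        while j < m and P[j] == T[i + j]:
            j += 1
        if j == m:
            return i                     # index where the match starts
    return -1

print(brute_force_string_match("abracadabra", "cad"))  # -> 4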
● Algorithm Overview:
Given a text T of length n and a pattern P of length m, the algorithm aims to find the index of the first occurrence of P within T.
If a match is found, the algorithm returns the index of the starting character of the matching substring. If no match is found, it returns −1.

● Looping through the Text:
The algorithm tries each starting position T[i] of the text T, where i ranges from 0 to n − m.
This ensures that the algorithm doesn't go past the boundary where the remaining characters in T are fewer than the characters in P.

● Worst-case scenario: In the worst case, the algorithm may have to perform m comparisons for each of the (n − m + 1) tries. Here, n is the length of the text and m is the length of the pattern.
• For our example:
  • n = 10 (length of the text)
  • m = 3 (length of the pattern)
  We have 10 − 3 + 1 = 8 possible starting positions for the pattern within the text.
  In the worst case, the algorithm may have to perform 3 comparisons for each starting position, leading to a total of 3 × 8 = 24 comparisons.
This demonstrates the worst-case time complexity of m(n − m + 1) ∈ O(nm).
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Average-Case Scenario: In typical scenarios, shifts occur after very few comparisons, leading to a much better average-case time complexity of O(n).
● For example, in this case the text length (n) is 10 and the pattern length (m) is 3.
● The input size can be expressed as a combination of both the text length and the pattern length, such as (n, m) or (10, 3). As the text and pattern lengths increase, the input size increases accordingly.
● On average, the number of comparisons made by the algorithm grows linearly with the size of the input.

● The best-case scenario for the Brute-Force String Matching algorithm occurs when the pattern is found at the very beginning of the text.
● In this case, the algorithm only needs to perform a single comparison between each character of the pattern and the corresponding character in the text.
● Therefore, the best-case time complexity is O(m), where m is the length of the pattern.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Closest-Pair by Brute Force

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Closest-Pair Problem

ALGORITHM BruteForceClosestPair(P)
// Finds the distance between the two closest points in the plane by brute force
// Input: A list P of n (n ≥ 2) points p1(x1, y1), ..., pn(xn, yn)
// Output: The distance between the closest pair of points

d ← ∞ // Initialize the minimum distance to infinity
for i ← 1 to n − 1 do
    for j ← i + 1 to n do
        d ← min(d, sqrt((xi − xj)² + (yi − yj)²)) // Update the minimum distance
return d // Return the minimum distance

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
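A Python sketch of the BruteForceClosestPair pseudocode above, using the standard library's Euclidean distance helper.

from math import dist, inf

def brute_force_closest_pair(points):
    d = inf                                   # start with an infinite minimum
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = min(d, dist(points[i], points[j]))   # Euclidean distance
    return d

print(brute_force_closest_pair([(1, 2), (3, 4), (5, 6), (7, 8)]))  # -> ~2.8284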


Initialization: Set the initial value of the minimum distance d to infinity, since any actual distance between points will be smaller than infinity.

Nested Loop:
● Iterate over each pair of points in the list P.
● The outer loop (for i ← 1 to n − 1) iterates over each point in the list except the last one.
● The inner loop (for j ← i + 1 to n) iterates over the points following the current point i.

Distance Calculation: For each pair of points (pi, pj), calculate the Euclidean distance between them using the formula
d(pi, pj) = sqrt((xi − xj)² + (yi − yj)²),
where (xi, yi) and (xj, yj) are the coordinates of points pi and pj respectively.

Update Minimum Distance: Compare the calculated distance with the current minimum distance d. If the calculated distance is smaller than d, update d to the calculated distance.

Return Minimum Distance: After completing the loops, return the final value of d, which represents the distance between the two closest points in the list.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Example: P = [(1, 2), (3, 4), (5, 6), (7, 8)]

1. Initialize d to infinity.
2. Iterate over each pair of points:
   1. For (1, 2) and (3, 4): sqrt((1−3)² + (2−4)²) = sqrt(8) ≈ 2.8284
   2. For (1, 2) and (5, 6): sqrt(32) ≈ 5.6569
   3. For (1, 2) and (7, 8): sqrt(72) ≈ 8.4853
   4. For (3, 4) and (5, 6): sqrt(8) ≈ 2.8284
   5. For (3, 4) and (7, 8): sqrt(32) ≈ 5.6569
   6. For (5, 6) and (7, 8): sqrt(8) ≈ 2.8284

Return the minimum distance d.

In this example, the closest pairs are (1, 2) and (3, 4), (3, 4) and (5, 6), and (5, 6) and (7, 8); the distance between them is sqrt(8) ≈ 2.8284.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Exhaustive Search

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Exhaustive Search

● It involves systematically generating and evaluating every possible candidate solution within the problem domain.
● Although conceptually simple, implementing exhaustive search often requires algorithms for generating specific combinatorial objects, such as permutations, combinations, or subsets.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Traveling Salesman Problem

The Traveling Salesman Problem (TSP) is a classic optimization problem where the objective is to find the shortest possible tour that visits each city exactly once and returns to the starting city.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


1. Modeling the Problem:
   1. The TSP can be represented as a weighted graph, where each vertex represents a city and the edges between vertices represent the distances between cities.
   2. The objective is to find the shortest Hamiltonian circuit in the graph, which is a cycle that visits each vertex exactly once.

2. Generating Tours:
   1. To find all possible tours, we need to generate all permutations of the intermediate cities.
   2. Assume we have n cities, labeled 1, 2, 3, ..., n, excluding the starting city (which is also the ending city).

3. Computing Tour Lengths:
   1. For each permutation (potential tour), calculate the total length of the tour by summing the distances between consecutive cities.
   2. This involves traversing the edges of the graph according to the permutation and summing the weights (distances) of the edges.

4. Finding the Shortest Tour:
   1. Compare the lengths of all generated tours and select the tour with the shortest length as the optimal solution.
   2. This shortest tour represents the shortest route that visits each city exactly once and returns to the starting city.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
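A Python sketch of the exhaustive-search procedure above. The distance matrix shown is one weight assignment consistent with the tour lengths listed on the example slide (P–Q = 3, P–R = 6, P–S = 8, Q–R = 9, Q–S = 4, R–S = 2); the exact per-edge values in the original figure may differ.

from itertools import permutations

def tsp_brute_force(dist, start=0):
    # dist is a symmetric matrix of pairwise distances; cities are 0..n-1.
    n = len(dist)
    others = [c for c in range(n) if c != start]
    best_len, best_tour = float("inf"), None
    for perm in permutations(others):                    # (n-1)! candidate tours
        tour = [start] + list(perm) + [start]
        length = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

# Cities P, Q, R, S encoded as 0, 1, 2, 3 (assumed weights, see note above).
D = [[0, 3, 6, 8],
     [3, 0, 9, 4],
     [6, 9, 0, 2],
     [8, 4, 2, 0]]
print(tsp_brute_force(D))  # -> ([0, 1, 3, 2, 0], 15), i.e. P-Q-S-R-P with length 15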


Example graph: nodes P, Q, R, S are cities; the edges carry the distances between them (weights 3, 4, 6, 8, 9, 2).

Tour lengths:
● P-Q-R-S-P = 22
● P-Q-S-R-P = 15
● P-R-Q-S-P = 27
● P-R-S-Q-P = 15
● P-S-Q-R-P = 27
● P-S-R-Q-P = 22

The shortest tours are P-Q-S-R-P and its reverse P-R-S-Q-P, each of length 15.

● https://visualgo.net/en/tsp

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Efficiency Considerations:
• The number of permutations of n cities (excluding the starting city) is (n − 1)!.
• Even for moderate values of n, the number of permutations becomes prohibitively large.
• For example, with just 10 cities there are 9! = 362,880 possible tours to consider, making exhaustive search impractical for larger instances.

Limitations:
• Despite reducing the number of permutations, exhaustive search remains impractical for large instances due to the exponential growth in the number of possible tours.
• The time complexity of the exhaustive-search approach to the Traveling Salesman Problem (TSP) is O((n − 1)!), where n is the number of cities (excluding the starting city).

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Knapsack Problem

The Knapsack Problem involves selecting a subset of items from a given set of items with known weights and values, such that the total weight of the selected items does not exceed a given capacity and the total value of the selected items is maximized.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


1. Generate all subsets: The exhaustive search approach starts by generating all possible subsets of the given set of items. If there are n items, there are 2^n possible subsets.

2. Compute subset properties: For each generated subset, compute the total weight and total value of the items in that subset by summing up the weights and values of the individual items.

3. Identify feasible subsets: Check whether each subset satisfies the constraint of not exceeding the knapsack capacity. Only consider feasible subsets, i.e., those whose total weight does not exceed the capacity.

4. Find the subset with the maximum value: Among the feasible subsets, find the one with the maximum total value. This subset represents the most valuable set of items that can fit into the knapsack without exceeding its capacity.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
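A Python sketch of the four steps just listed, applied to the slide's example data; itertools.combinations is used to enumerate all 2^n subsets.

from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):                       # subsets of every size: 2^n in total
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)  # step 2: subset properties
            v = sum(values[i] for i in subset)
            if w <= capacity and v > best_value: # steps 3 and 4: feasible and best
                best_value, best_subset = v, subset
    return best_subset, best_value

weights = [7, 3, 4, 5]          # items 1..4 from the example below
values = [42, 12, 40, 25]
print(knapsack_brute_force(weights, values, 10))  # -> ((2, 3), 65), i.e. items {3, 4}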


Let's consider an example to illustrate this approach:

Given items: 4 items
• Item 1: Weight = 7, Value = $42
• Item 2: Weight = 3, Value = $12
• Item 3: Weight = 4, Value = $40
• Item 4: Weight = 5, Value = $25
● Knapsack capacity: 10

1. Generate all subsets:
   Possible subsets include {}, {1}, {2}, {3}, {4}, {1,2}, {1,3}, {1,4}, {2,3}, {2,4}, {3,4}, {1,2,3}, {1,2,4}, {1,3,4}, {2,3,4}, {1,2,3,4}.

2. Compute subset properties:
   For each subset, calculate the total weight and total value.

3. Identify feasible subsets:
   Check whether the total weight of each subset exceeds the knapsack capacity (10).

4. Find the subset with maximum value:
   Among the feasible subsets, find the one with the maximum total value.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
● These calculations show the total
weight and total value for each
subset. Feasible subsets, i.e., those
with a total weight not exceeding the
knapsack capacity, are marked as
feasible.

● The subset {3,4} has the maximum


total value of $65 and is feasible
within the knapsack capacity.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


● The time complexity of the brute-force approach to the knapsack problem is exponential, specifically Ω(2^n), where n is the number of items. This is because we need to generate and evaluate all possible subsets of the items, and there are 2^n such subsets.

● The space complexity also grows exponentially with the number of items if we store information about all possible subsets; it is therefore also Ω(2^n).

In the provided example of the knapsack problem, there are 4 items given. Therefore, n = 4.
Now, let's calculate 2^n:
● 2^4 = 16
● So, there are 16 possible subsets of the given items.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Decrease and Conquer

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Decrease and Conquer

● Decrease-and-conquer techniques offer a structured way to solve problems by progressively reducing their size until reaching a base case that can be directly solved.
● Each variation provides a different strategy for reducing the problem size, leading to different time complexities and efficiency levels.
● There are two main approaches: top-down (recursive) and bottom-up (iterative).

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Variations of Decrease-and-Conquer:

• Decrease-by-a-Constant: The size of the problem instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one.

• Decrease-by-a-Constant-Factor: The problem instance is reduced by the same constant factor on each iteration. Typically, this factor is equal to two.

• Variable-Size Decrease: The size-reduction pattern varies from one iteration of the algorithm to another.

Example: Decrease-by-a-Constant (Decrease-by-One):
Factorial Calculation — the factorial of a number n is defined as n! = n × (n − 1)!.
• At each step, the problem size is reduced by 1.
• If n = 5, we compute 5! by calling 4!, then 3!, and so on, until we reach the base case.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Example: Decrease-by-a-Constant-Factor (Decrease-by-Half, or by a fixed fraction):

Binary Search — binary search works on a sorted array and repeatedly reduces the search space by half.
Algorithm Steps:
1. Find the middle element of the array.
2. If it matches the target, return the index.
3. If the target is smaller, search the left half.
4. If the target is larger, search the right half.
The array size is halved at each step. If we start with 8 elements, the sequence of search sizes would be 8 → 4 → 2 → 1.

Examples of Variable-Size Decrease:
• Euclid's algorithm for computing the greatest common divisor, where the size-reduction pattern varies depending on the input values.
Steps for gcd(48, 18):
1. gcd(48, 18) → 18 and remainder 12.
2. gcd(18, 12) → 12 and remainder 6.
3. gcd(12, 6) → 6 and remainder 0 (base case).

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
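A short Python sketch of Euclid's algorithm, the variable-size-decrease example just mentioned.

def gcd(m, n):
    # Variable-size decrease: how much the problem shrinks depends on the inputs.
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(48, 18))  # -> 6, via gcd(48,18) -> gcd(18,12) -> gcd(12,6) -> gcd(6,0)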


Insertion Sort (decrease-by-one technique)

Insertion Sort is a simple sorting algorithm that builds the final sorted array one element at a time.

Here's a step-by-step explanation of how it works:

1. Initialization: Assume we have an array A of size n containing unsorted elements. We start with the second element of the array (index 1), since a single element is already considered sorted.

2. Insertion:
● For each element starting from the second one, we consider it the key and compare it with the elements to its left in the sorted subarray.
● We move elements greater than the key to the right to make space for the key element in its correct position.

3. Sorting: We repeat this process for each element in the array until all elements are in their correct sorted positions.

https://algo-verse.vercel.app/insertion_sort.html

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Algorithm InsertionSort(A[0..n-1]):
Input: An array A[0..n-1] of n orderable elements
Output: Array A[0..n-1] sorted in nondecreasing order

for i ← 1 to n - 1 do:
    key ← A[i]                      // select the element to be inserted
    j ← i - 1
    // Find the correct position for the key within the sorted subarray A[0..i-1]
    while j ≥ 0 and A[j] > key do:
        A[j + 1] ← A[j]             // shift elements greater than key to the right
        j ← j - 1
    A[j + 1] ← key                  // insert key into its correct position

Analysis:
● In the worst case, the time complexity is Θ(n²), indicating that the growth rate is exactly n².
● In the best-case scenario for insertion sort, where the array is already sorted, the inner loop runs only once for each element in the array; the time complexity is Θ(n).
● For insertion sort, we observe that the total number of comparisons and shifts, which determine the running time, grows quadratically with the input size in the worst case.
● Therefore, we use the Θ notation to express the precise growth rate of the algorithm's worst-case time complexity as n².
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
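A Python sketch of the InsertionSort pseudocode above, for illustration.

def insertion_sort(A):
    for i in range(1, len(A)):
        key = A[i]
        j = i - 1
        while j >= 0 and A[j] > key:    # shift larger elements to the right
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = key                  # insert the key into its final position
    return A

print(insertion_sort([89, 45, 68, 90, 29, 34, 17]))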
Algorithms for Generating Combinatorial Objects

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Generating Permutations — Minimal-Change Algorithm

● The minimal-change requirement in generating permutations ensures that each permutation in the sequence can be obtained from its immediate predecessor by exchanging just two adjacent elements.
● This property is advantageous because it simplifies the process of generating permutations and ensures that we cover all possible permutations without redundancy.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Start: We start with the initial permutation, which is usually the sequence of integers from 1 to n.

Insert 2 into 1, right to left:
• To generate the next permutations, we insert the next element (2) into each possible position in the current permutation.
• We start from the right and move towards the left.
• This ensures that we maintain the minimal-change requirement.

Insert 3 into 12, right to left:
• Similarly, we insert the next element (3) into each position in the permutation [1, 2], again moving from right to left.

Insert 3 into 21, left to right:
• Now we insert the next element (3) into each position in the permutation [2, 1].
• However, this time we start from the left and move towards the right.
• This alternation in direction ensures that we cover all possible permutations while satisfying the minimal-change requirement.

Repeat these steps until we generate all permutations.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


For n distinct elements, the total number of permutations is P(n) = n!.

Insertion        | Permutations  | Action
Insert 1         | 1             | —
Insert 2         | 12, 21        | Insert 2 from right to left
Insert 3 into 12 | 123, 132, 312 | Insert 3 from right to left
Insert 3 into 21 | 321, 231, 213 | Insert 3 from left to right

Time Complexity: O(n!)
• Total Permutations: There are n! distinct permutations of n elements.
• Minimal Change: Each new permutation is generated by making a single adjacent swap, which is an O(1) operation.
• Overall Complexity: Since there are n! permutations and each requires only constant time to generate the next one, the total time complexity becomes O(n!) × O(1) = O(n!).

Space Complexity: O(n), required for storing the current permutation and additional metadata such as direction indicators (if applicable).
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Johnson-Trotter Algorithm

1. Initialization:
   Start with the initial permutation, the sequence of integers from 1 to n arranged in ascending order.

2. Finding Mobile Elements:
   Identify the mobile elements in the current permutation. An element is mobile if it is greater than the adjacent element to which its arrow points.

3. Finding the Largest Mobile Element:
   Among the mobile elements, find the largest one, denoted by k.

4. Swapping Elements:
   Swap the element k with the adjacent element to which its arrow points.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
5. Reversing Direction:
   Reverse the direction of the arrows for all elements greater than k.

6. Generating Permutations:
   Add the new permutation to the list of permutations generated.

7. Repeat:
   Repeat steps 2 to 6 until there are no more mobile elements in the permutation.

● Let's illustrate these steps with an example for n = 3 (initial permutation [1, 2, 3], all arrows pointing left):
● 1 2 3 — 2 and 3 are mobile; the larger mobile element, 3, points to 2, so swap them.
● 1 3 2 — 3 is still mobile and points to 1, so swap them.
● 3 1 2 — 3 is no longer mobile; 2 is mobile and points to 1, so swap them and reverse the direction of 3.
● 3 2 1 — 3 is now mobile and points to 2, so swap them.
● 2 3 1 — 3 is mobile and points to 1, so swap them.
● 2 1 3 — no more mobile elements, so stop.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Generating Subsets — Bottom-Up Approach

● Start with an empty set, which represents the subset with no elements.
● Add each element of the original set to the existing subsets one by one.
● For each new element, create new subsets by adding that element to each existing subset.
● Combine the new subsets with the existing ones to form a larger set of subsets.
● This process is repeated until all subsets are generated.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Bit Manipulation Approach

• For a set with n elements, we aim to generate all 2^n subsets.
• Each subset can be represented by a binary string of length n, where each bit indicates whether the corresponding element is included (1) or excluded (0) from the subset.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Steps:
1. Initialize: Start with an empty subset (all bits are 0).
2. Iteration: Generate the next bit string by adding 1 to the current one, as in a binary counter, so that every bit string from all 0s to all 1s is produced exactly once.
3. Repeat this process until all bits are 1 (indicating the last subset).

Example:
• For a set {a, b, c}, with the bit for a written first, the process would generate the subsets:
• ∅ (000)
• {a} (100)
• {b} (010)
• {a, b} (110)
• {c} (001)
• {a, c} (101)
• {b, c} (011)
• {a, b, c} (111)

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
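A Python sketch of the bit-string idea: each integer from 0 to 2^n − 1 is used as a mask whose bits select the elements. The bit-to-element mapping here (bit i selects items[i]) is one convention; it produces the same subset order as the slide's example.

def all_subsets(items):
    n = len(items)
    subsets = []
    for mask in range(2 ** n):                               # every bit string 0 .. 2^n - 1
        subsets.append({items[i] for i in range(n) if mask & (1 << i)})
    return subsets

print(all_subsets(['a', 'b', 'c']))
# -> [set(), {'a'}, {'b'}, {'a','b'}, {'c'}, {'a','c'}, {'b','c'}, {'a','b','c'}]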


Binary Reflected Gray Code Approach

Initialization:
• As in the bit-manipulation approach, start with an empty subset.

Steps:
1. Generate the Gray Code: Use a recursive algorithm to generate the binary reflected Gray code of order n.
2. At each step, construct the next bit string by modifying the previous bit string in such a way that only one bit changes.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Example:
• For a set {a, b, c}, the Gray code approach would generate the subsets:
• ∅ (000)
• {a} (001)
• {a, b} (011)
• {b} (010)
• {b, c} (110)
• {a, b, c} (111)
• {a, c} (101)
• {c} (100)

Key Differences:
1. Transition Between Values:
   1. In regular binary strings, adjacent values can differ in multiple bits.
   2. In Gray code, adjacent values differ by only one bit.
2. Transition Complexity:
   1. Transitioning between consecutive values in Gray code is smoother and involves fewer bit changes compared to regular binary strings.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
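A small recursive Python sketch of the binary reflected Gray code construction described above: the code of order n is the order-(n−1) code prefixed with 0, followed by its reverse prefixed with 1.

def gray_codes(n):
    # Binary reflected Gray code of order n, built recursively.
    if n == 0:
        return ['']
    prev = gray_codes(n - 1)
    return ['0' + code for code in prev] + ['1' + code for code in reversed(prev)]

print(gray_codes(3))  # -> ['000', '001', '011', '010', '110', '111', '101', '100']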


Binary Bit String: In a regular binary bit string, each bit can change independently from 0 to 1 or vice versa. Gray Code: In Gray code, adjacent values change by only one bit at a time. For example:

Decimal | Binary | Gray code
0       | 000    | 000
1       | 001    | 001
2       | 010    | 011
3       | 011    | 010
4       | 100    | 110
5       | 101    | 111
6       | 110    | 101
7       | 111    | 100

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Decrease by a Constant Factor (Decrease-by-Half): Binary Search

1. Initialize pointers: Start with two pointers, left and right, which represent the indices of the array's boundaries. Initially, left is set to the first index (0) and right is set to the last index (n − 1), where n is the size of the array.

2. Find the midpoint: Calculate the midpoint index mid as (left + right) / 2.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


3. Check the midpoint value:
   1. If the value at index mid matches the target value, the search is successful and the index mid is returned.
   2. If the value at index mid is greater than the target value, update right to mid − 1, effectively discarding the right half of the array.
   3. If the value at index mid is less than the target value, update left to mid + 1, effectively discarding the left half of the array.

4. Repeat the process: Continue steps 2 and 3 until either the target value is found or left becomes greater than right, indicating that the target value is not present in the array.

Time Complexity:
• In each step of binary search, the size of the search space is halved.
• Therefore, the time complexity of binary search is O(log n), where n is the number of elements in the array.
• This is because the algorithm divides the array in half at each step, resulting in a logarithmic time complexity.
• This makes binary search extremely efficient, especially for large arrays.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
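A Python sketch of the iterative binary search just described.

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] > target:
            right = mid - 1          # discard the right half
        else:
            left = mid + 1           # discard the left half
    return -1

print(binary_search([3, 14, 27, 31, 39, 42, 55, 70], 39))  # -> 4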
Fake Coin Problem

● The fake coin problem involves identifying a single fake coin among a collection of identical-looking coins, with the assumption that the fake coin is lighter than the genuine ones.
● The goal is to design an efficient algorithm to detect the fake coin using a balance scale that compares sets of coins.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Initial Approach (Dividing into Two Piles), illustrated with 12 coins:
• We start by dividing the coins into two piles of approximately equal size: one pile containing 6 coins and the other containing 6 coins.
• We place each pile on one side of the balance scale.

Scenario 1: Balanced Scale:
• If the scale balances, it means the fake coin is not in either of the weighed piles. In this case, we take the remaining coins (the ones we didn't weigh) and repeat the process.
• We divide the remaining coins into two piles of equal size and weigh them on the balance scale.

Scenario 2: Unbalanced Scale (Left Side Lighter):
• If the scale tips to one side, say the left side is lighter, it indicates that the fake coin is among the 6 coins on the left side of the scale.
• We then take these 6 coins and repeat the process recursively, dividing them into two piles and weighing them until we find the fake coin.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Solution with Halving:
• The number of weighings satisfies the recurrence relation W(n) = W(⌊n/2⌋) + 1 with W(1) = 0, whose solution W(n) = ⌊log₂ n⌋ gives the maximum number of weighings needed in the worst case.
• For 12 coins, log₂ 12 ≈ 3.58, meaning that at most 3 weighings will be required to find the fake coin.

To solve the recurrence relation W(n) = W(n/2) + 1 for n > 1 with the initial condition W(1) = 0 for n = 12, we can follow these steps:
1. Substitute n = 12 into the recurrence relation.
2. Continue substituting n/2 until n becomes 1, and count the number of times we perform this operation.
3. Add 1 for each substitution to account for the +1 term in the recurrence relation.
4. Sum up all the added 1s to get the total number of weighings.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Improvement with Three Piles:
• Instead of dividing the coins into two equal piles, we can divide them into three piles of 4 coins each.
• Weigh two of the piles against each other. If one side is lighter, the fake coin is in that pile; if the scale balances, it is in the third (unweighed) pile.
• Either way, each weighing reduces the search space by a larger factor than in the binary approach, giving W(n) = log₃ n.

• Divide-by-2 Approach: Each weighing eliminates half of the remaining coins. Time complexity: O(log₂ n), since the problem size reduces by half in each step.
• Comparing with Divide-by-3:
  • Divide by 3 → O(log₃ n) (fewer weighings)
  • Divide by 2 → O(log₂ n) (slower than divide-by-3, but still much faster than a linear search, O(n))

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Russian Peasant Multiplication

● The Russian peasant multiplication method is a technique for multiplying two positive integers using only halving, doubling, and addition operations.
● The algorithm relies on the observation that n · m = (n/2) · (2m) when n is even, and n · m = ((n − 1)/2) · (2m) + m when n is odd, so halving one factor while doubling the other preserves the product.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Here's how the algorithm works step by step:

1. Initialize: Start with the two integers you want to multiply, n and m, and set the product initially to 0.
2. Repeat until n = 1:
   1. If n is even, double m and halve n.
   2. If n is odd, add m to the product, double m, and then halve n (integer division).
3. Terminate: When n = 1, add the current m to the product; the accumulated sum is the result.

Computing 50 × 65:
1. Start with n = 50 and m = 65, and set the product to 0.
2. Since 50 is even, double 65 to get 130 and halve 50 to get 25.
3. 25 is odd, so add 130 to the product (which is currently 0), double 130 to get 260, and halve 25 to get 12. product = 0 + 130 = 130.
4. 12 is even, so double 260 to get 520 and halve 12 to get 6.
5. 6 is even, so double 520 to get 1040 and halve 6 to get 3.
6. 3 is odd, so add 1040 to the product (which is currently 130), double 1040 to get 2080, and halve 3 to get 1. product = 130 + 1040 = 1170.
7. n = 1 is the termination condition, so we add the final 2080: product = 1170 + 2080 = 3250 = 50 × 65.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
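A Python sketch of the halving/doubling procedure traced above.

def russian_peasant(n, m):
    product = 0
    while n > 1:
        if n % 2 == 1:          # n odd: add m to the running product
            product += m
        m, n = m * 2, n // 2    # double m, halve n (integer division)
    return product + m          # n == 1: add the final m

print(russian_peasant(50, 65))  # -> 3250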
Josephus Problem

Here's a formal statement of the problem:
● Suppose there are n people numbered from 1 to n standing in a circle.
● Starting with person 1, every k-th person is eliminated until only one person remains.
● The problem is to determine the position of the survivor.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Figure: elimination in a circle for n = 6 with k = 2 — people 1 to 6 stand in a circle and every second person is eliminated.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Even n Case (n = 2k):
1. In this case, after the first pass, every second person is eliminated, leaving behind k survivors.
2. The survivor's position among the remaining k people is denoted by J(k).
3. To find the survivor's position among the original 2k people, we double J(k) and subtract 1, since the position numbering changes after the first pass.
4. So, for even n, we have the recurrence relation J(2k) = 2J(k) − 1.

For example, if n = 6 and k = 2, the elimination process goes as follows:
1. Person 1 remains.
2. Person 2 is eliminated.
3. Person 3 remains.
4. Person 4 is eliminated.
5. Person 5 remains.
6. Person 6 is eliminated.
● Among the remaining: 1 (skipped), 3 (eliminated), 5 (skipped).
● Then: 1 (eliminated), 5 (skipped).
● So, the survivor is person 5.
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Odd n Case (n = 2k + 1):
1. After the first pass, every second person is eliminated, leaving k survivors. Additionally, the person in the first position is eliminated.
2. The survivor's position among the remaining k people is denoted by J(k).
3. To find the survivor's position among the original 2k + 1 people, we double J(k) and add 1, since the position numbering changes after the first pass and the person in the first position is eliminated.
4. So, for odd n, we have the recurrence relation J(2k + 1) = 2J(k) + 1.

Closed-Form Solution:
• Interestingly, we can represent n in binary form.
• The survivor's position J(n) is obtained by cyclically shifting the binary representation of n one bit to the left.
• For example, if n = 6, represented in binary as 110, the survivor's position is 101, which is 5.
• Similarly, if n = 7, represented in binary as 111, the survivor's position is 111, which is 7.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
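A Python sketch of the two recurrences and the bit-rotation closed form for k = 2, as described on the slides above.

def josephus(n):
    # Survivor position for k = 2 via the recurrences.
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * josephus(n // 2) - 1         # J(2k) = 2J(k) - 1
    return 2 * josephus((n - 1) // 2) + 1       # J(2k+1) = 2J(k) + 1

def josephus_shift(n):
    b = bin(n)[2:]
    return int(b[1:] + b[0], 2)                 # cyclic left shift of n's binary form

print(josephus(6), josephus_shift(6))   # -> 5 5
print(josephus(7), josephus_shift(7))   # -> 7 7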


Variable-Size-Decrease Algorithms: Computing a Median and the Selection Problem

● The selection problem involves finding the k-th smallest element in a list of n numbers. This problem is often referred to as finding the k-th order statistic. If k = 1 or k = n, it is trivial to find the smallest or largest element respectively by simply scanning the list.

● For a more interesting case, consider k = n/2. This asks us to find an element that is not larger than half of the list's elements and not smaller than the other half. This middle value is called the median and is a crucial concept in statistics.

● One naive approach to finding the k-th smallest element is to sort the entire list and then select the k-th element. However, this is inefficient because we only need to find the k-th smallest element, not to order the entire list.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
● Instead, we can use the idea of partitioning the list around a chosen pivot value p. Lomuto's partitioning algorithm is one way to do this.

● We choose p (usually the first element) and rearrange the list so that all elements smaller than p are on the left, p itself is in the middle, and all elements greater than or equal to p are on the right.

● The variable-size-decrease aspect of Lomuto partitioning comes from the fact that the size of the array segments being partitioned decreases with each recursive call until they reach a base case, at which point the algorithm terminates.

LomutoPartition(A[L..R])
    pivot ← A[L]
    current ← L
    for i ← L + 1 to R do
        if A[i] < pivot then
            current ← current + 1
            swap(A[i], A[current])
    swap(A[L], A[current])
    return current
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Determining that we need to find the k-th smallest element typically depends on the problem statement or requirements. In many cases, you may need to find:
• The smallest element: this is when k = 1.
• The largest element: this is when k = n (where n is the size of the array).
• A median element: this is when k is approximately n/2.
• An arbitrary k-th smallest element: for example, the 3rd smallest element in a list.

● Lomuto's partitioning algorithm is a specific method for partitioning an array around a pivot element, which is a crucial step in various algorithms, including Quickselect.
● Quickselect is an algorithm for finding the k-th smallest element in an unsorted list. It utilizes partitioning (often Lomuto's partitioning) to recursively narrow down the search space until it finds the desired element.
● In essence, Lomuto's partitioning is a subroutine used within Quickselect to partition the array efficiently, while Quickselect encompasses the entire process of finding the k-th smallest element using this partitioning technique.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
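A Python sketch of Quickselect built on the Lomuto partition shown above; k is 1-based and the input list is modified in place.

def lomuto_partition(A, left, right):
    pivot = A[left]
    s = left                                # boundary of the "< pivot" segment
    for i in range(left + 1, right + 1):
        if A[i] < pivot:
            s += 1
            A[s], A[i] = A[i], A[s]
    A[left], A[s] = A[s], A[left]           # place the pivot in its final position
    return s

def quickselect(A, k):
    # Returns the k-th smallest element (1 <= k <= len(A)).
    left, right = 0, len(A) - 1
    while True:
        s = lomuto_partition(A, left, right)
        if s == k - 1:
            return A[s]
        elif s > k - 1:
            right = s - 1                   # continue in the left segment
        else:
            left = s + 1                    # continue in the right segment

print(quickselect([4, 1, 10, 8, 7, 12, 9, 2, 15], 5))  # median of 9 elements -> 8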


Interpolation Search

● Interpolation search is a searching algorithm used to find a specific value within a sorted array.
● It differs from binary search by taking the value of the search key into account to estimate the position of the target element.
● Let's break down the steps of the interpolation search algorithm with an example:

Suppose we have a sorted array 'arr' and we want to find the index of a specific value target within this array.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


1. Initialize Variables:
   1. Set left = 0 and right = arr.length − 1.
   2. Define the target value target.

2. Interpolation Formula:
   Calculate the probe index using the interpolation formula:
   index = left + ((target - arr[left]) * (right - left)) / (arr[right] - arr[left])
   This formula estimates the index where the target might be located, based on the assumption of linearly increasing array values.

3. Search Iteration:
   Compare the target value with the value at the calculated index:
   1. If arr[index] == target, return index (target found).
   2. If arr[index] < target, update left = index + 1 (search in the right part).
   3. If arr[index] > target, update right = index − 1 (search in the left part).

4. Repeat:
   Repeat steps 2 and 3 until the target value is found or the search space is exhausted (left > right).

5. Result:
   1. If the target value is found, return the index.
   2. If the target value is not found, return −1 (indicating the value is not present in the array).
Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
Let's illustrate this with an example:

● Suppose we have the sorted array arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19] and we want to find the index of the value target = 13.

1. Initialization:
   1. left = 0, right = 9.
   2. target = 13.

2. Interpolation Formula:
   index = left + ((target - arr[left]) * (right - left)) / (arr[right] - arr[left])
         = 0 + ((13 - 1) * (9 - 0)) / (19 - 1)
         = 0 + (12 * 9) / 18
         = 6

3. Search Iteration:
   1. Compare arr[6] with target: arr[6] = 13 == target, so return index = 6.

4. Result:
   The index of the target is 6.

So, the interpolation search algorithm successfully found the index of the value 13 in the array.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
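A Python sketch of the interpolation search steps above; a guard against equal endpoint values (division by zero) is added as an assumption, since the slides do not discuss that case.

def interpolation_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right and arr[left] <= target <= arr[right]:
        if arr[right] == arr[left]:                 # all values equal in this range
            return left if arr[left] == target else -1
        # Probe position estimated from linearly increasing values.
        index = left + (target - arr[left]) * (right - left) // (arr[right] - arr[left])
        if arr[index] == target:
            return index
        elif arr[index] < target:
            left = index + 1                        # search the right part
        else:
            right = index - 1                       # search the left part
    return -1

print(interpolation_search([1, 3, 5, 7, 9, 11, 13, 15, 17, 19], 13))  # -> 6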


Game of Nim

• The Game of Nim is a mathematical combinatorial game in which we have piles of some objects, say coins in this case, and two players who take turns one after the other.
• In each turn, a player must pick at least one object (coin in this case) and may remove any number of objects (coins), given that they all belong to the same pile.
• The game is won by the one who picks the last object (coin).

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Example 1
● Given piles of coins are:
Piles[]=2,5,6
● Let us look at the step-by-step
process in which the game will
proceed if Player A starts first.
● As shown in the example, Player
A takes the last coin and is
the winner.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Example 2
● Given piles of coins are:
Piles[]=4,1,5
● Let us look at the step-by-step
process in which the game will
proceed if Player A starts first.
● As shown in the example, Player
B takes the last coin and is
the winner.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


● From the examples that we saw above, we can say that the winner of the game depends on the configuration of the piles and on which player starts first.

● Let us now introduce a new term called the Nim sum, which will play an important role in finding the final solution, and we'll see how.

Nim-Sum
● At any point of time during the game, we define the Nim sum as the cumulative exclusive-OR (XOR) value of the numbers of coins in the piles.

Lemma:
● Suppose both players A and B play the game optimally, which means that they don't make any moves that may hamper their chances of winning.
● In that case, if the Nim sum of the initial configuration is non-zero, the player making the first move is guaranteed to win the game; if the Nim sum of the initial configuration is zero, the second player will win the game.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept


Example 1: the initial configuration is 2, 5, 6.
  2   = 0 1 0
  5   = 1 0 1
  6   = 1 1 0
  XOR = 0 0 1
Since the Nim sum is non-zero, the player who makes the first move wins.

Example 2: the initial configuration is 4, 1, 5.
  4   = 1 0 0
  1   = 0 0 1
  5   = 1 0 1
  XOR = 0 0 0
Since the Nim sum is zero, the second player wins.

Prepared by M.V.Bhuvaneswari, Asst.Prof, CSE (AI&ML,DS) Dept
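A Python sketch of the Nim-sum test from the lemma above: XOR all pile sizes and check whether the result is non-zero.

from functools import reduce

def first_player_wins(piles):
    # Nim sum: XOR of all pile sizes; non-zero means the first player
    # can force a win with optimal play.
    return reduce(lambda x, y: x ^ y, piles) != 0

print(first_player_wins([2, 5, 6]))  # -> True  (Nim sum = 001)
print(first_player_wins([4, 1, 5]))  # -> False (Nim sum = 000)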
