
Design and Analysis of Algorithms

Prof. Dr. K ADISESHA


2
Design and Analysis of Algorithms
Introduction

Algorithm Characteristics

Problem Solving

Algorithm Design

Algorithm Analysis

Asymptotic Notations
3
INTRODUCTION

Algorithm:
A finite set of instructions that specifies a sequence of operations to be carried out in
order to solve a specific problem or class of problems is called an Algorithm.
 The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammed
ibn Musa al Khowarizmi (c. 825 A.D.), who wrote a textbook on mathematics.
 Algorithms are generally created independent of underlying languages.
 The notion of the algorithm illustrates some important points:
 The non-ambiguity requirement for each step of an algorithm cannot be compromised.
 The range of inputs for which an algorithm works has to be specified carefully.
 There may exist several algorithms for solving the same problem.
 Algorithms for the same problem can be based on very different ideas and can solve
the problem with dramatically different speeds.
Prof. Dr. K. Adisesha
4
INTRODUCTION

Algorithm:
A finite set of instructions that specifies a sequence of operations to be carried out in
order to solve a specific problem or class of problems is called an Algorithm.
 Steps for writing an algorithm:
 An Algorithm is a procedure. It has two parts: the head section and the body section.
 The Head section consists of the keyword Algorithm and the name of the algorithm with a
parameter list. E.g. Algorithm name1(p1, p2,…,pn)
 In the body of an algorithm various programming constructs like if, for, while and
some statements like assignments are used. The compound statements may be
enclosed with { and } brackets.
 Comments are written using // at the beginning.
Prof. Dr. K. Adisesha
5
Algorithm

Euclid’s algorithm:
Euclid’s algorithm is based on applying repeatedly the equality gcd(m, n) = gcd(n, m
mod n), where m mod n is the remainder of the division of m by n, until m mod n is
equal to 0.
 Euclid’s algorithm for computing gcd(m, n) in simple steps:
Step 1: If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.

ALGORITHM Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two non-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
Prof. Dr. K. Adisesha
6
Algorithm

Euclid’s algorithm:
Euclid’s algorithm is based on applying repeatedly the equality gcd(m, n) = gcd(n, m
mod n), where m mod n is the remainder of the division of m by n, until m mod n is
equal to 0.
 Euclid’s algorithm is based on repeated application of equality
 gcd(m,n) = gcd(n, m mod n)
 Example:
 Euclid’s algorithm for computing gcd(60, 24) in simple steps
 gcd(60,24) = gcd(24,12) = gcd(12,0) = 12
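As a brief illustration (not part of the original slides), the pseudocode above can be written in JavaScript in the same style as the tree-traversal examples later in this deck; the function name euclid_gcd is illustrative:

// Computes gcd(m, n) by Euclid's algorithm (iterative form of the pseudocode above)
function euclid_gcd(m, n) {
    while (n !== 0) {
        var r = m % n;   // remainder of m divided by n
        m = n;
        n = r;
    }
    return m;            // when n becomes 0, m holds the gcd
}

euclid_gcd(60, 24); // => 12, matching gcd(60,24) = gcd(24,12) = gcd(12,0) = 12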

Prof. Dr. K. Adisesha


7
Algorithm

Consecutive integer checking algorithm:


There are other methods for computing gcd(m, n) as well, for example computing the product
of all the common prime factors of m and n. The consecutive integer checking algorithm below
tries candidate divisors starting from min{m, n}:
 Algorithm gcd(m,n)
Step 1: Assign the value of min{m,n} to t
Step 2: Divide m by t. If the remainder is 0, go to Step 3;
otherwise, go to Step 4
Step 3: Divide n by t. If the remainder is 0, return t and stop;
otherwise, go to Step 4
Step 4: Decrease t by 1 and go to Step 2
 This is slower than Euclid’s algorithm
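A minimal JavaScript sketch of the four steps above (the function name gcd_consecutive is illustrative):

// Consecutive integer checking: try t = min(m, n), min(m, n) - 1, ... until t divides both
function gcd_consecutive(m, n) {
    var t = Math.min(m, n);                // Step 1: start from min{m, n}
    while (t >= 1) {
        if (m % t === 0 && n % t === 0) {  // Steps 2 and 3: t divides both m and n
            return t;                      // return t and stop
        }
        t = t - 1;                         // Step 4: decrease t by 1 and try again
    }
    return 1;                              // not reached for positive m and n
}

gcd_consecutive(60, 24); // => 12 (slower than Euclid's algorithm, as noted above)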
Prof. Dr. K. Adisesha
8
Algorithm

Characteristics of an Algorithm:
An algorithm should have the following characteristics:
 Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or
phases), and their inputs/outputs should be clear and must lead to only one meaning.
 Input: Zero / more quantities are externally supplied.
 Output: At least one quantity is produced.
 Definiteness: Each instruction is clear and unambiguous.
 Finiteness: If the instructions of an algorithm are traced, then for all cases the algorithm
must terminate after a finite number of steps.
 Efficiency: Every instruction must be very basic and must run in a short time.

Prof. Dr. K. Adisesha


9
Algorithm

Algorithmic Problem Solving:


A sequence of steps involved in designing and analyzing an algorithm is shown in the
figure below.
 Understanding the Problem
 Read the problem’s description carefully to understand
the problem statement completely.
 Ask questions for clarifying the doubts about the
problem.
 Identify the problem type and use an existing algorithm to
find the solution.
 The input (instance) to the problem and the range of the
input get fixed.
Prof. Dr. K. Adisesha
10
Algorithm

Algorithmic Problem Solving:


A sequence of steps involved in designing and analyzing an algorithm is shown in the
figure below.
 Analyzing an Algorithm
 For an algorithm, the most important property is efficiency. In fact,
there are two kinds of algorithm efficiency. They are:
 Time efficiency: indicating how fast the algorithm runs.
 Space efficiency: indicating how much extra memory it
uses.
 So factors to analyze an algorithm are:
 Time efficiency of an algorithm
 Space efficiency of an algorithm
 Simplicity of an algorithm
Prof. Dr. K. Adisesha
 Generality of an algorithm
11
Algorithm

Problem Types :
The most important problem types are:
 Sorting
 Searching
 String processing
 Graph problems
 Combinatorial problems
 Geometric problems
 Numerical problems

Prof. Dr. K. Adisesha


12
Algorithm

Design Strategies:
There are various types of strategies in order to design algorithms for various
problems. Some of them are listed as follows:
 Divide & Conquer Approach
 Greedy Approach
 Dynamic Programming Approach
 Randomization Approach
 Approximation Approach
 Recursive Approach
 Branch and Bound Approach

Prof. Dr. K. Adisesha


13
Algorithm

Design Strategies:
There are various types of strategies in order to design algorithms for various
problems. Some of them are listed as follows:
 Divide and Conquer Approach: It is a top-down approach. The algorithms which
follow the divide & conquer techniques involve three steps:
 Divide the original problem into a set of subproblems.
 Solve every subproblem individually, recursively.
 Combine the solution of the subproblems (top level) into a solution of the whole original
problem.
 Greedy Technique: The greedy method is used to solve optimization problems in which
we are given a set of input values and an objective that is required either to be maximized or
minimized, possibly subject to some constraints or conditions.
Prof. Dr. K. Adisesha
14
Algorithm

Design Strategies:
There are various types of strategies in order to design algorithms for various
problems. Some of them are listed as follows:
 Dynamic Programming: Dynamic Programming is a bottom-up approach we solve all
possible small problems and then combine them to obtain solutions for bigger problems.
 Branch and Bound: In Branch & Bound algorithm a given subproblem, which cannot be
bounded, has to be divided into at least two new restricted subproblems.
 Randomized Algorithms: A randomized algorithm is defined as an algorithm that is
allowed to access a source of independent, unbiased random bits, and it is then allowed to
use these random bits to influence its computation.
 Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the
right one. It is a depth-first search of the set of possible solutions.
Prof. Dr. K. Adisesha
15
Algorithm

Algorithm Analysis:
Efficiency of an algorithm can be analyzed at two different stages, before implementation
and after implementation. They are the following:
 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an
algorithm is measured by assuming factors, like processor speed, are constant and have
no effect on the implementation.

 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected


algorithm is implemented using programming language. In this analysis, actual
statistics like running time and space required, are collected.

Prof. Dr. K. Adisesha


16
Algorithm

Analysis of Algorithm Efficiency:


The efficiency of an algorithm can be in terms of time and space.
 The algorithm efficiency can be analyzed by the following ways.
 Analysis Framework.
 Asymptotic Notations and its properties.
 Mathematical analysis for Recursive algorithms.
 Mathematical analysis for Non-recursive algorithms.

Prof. Dr. K. Adisesha


17
Algorithm

Algorithm Complexity:
Step Count Method − The step count method is also called the frequency count method:
 we count the number of times each instruction is executed.

Prof. Dr. K. Adisesha


Algorithm

 Calculate the sum Σ i for i = 1 to N.
 Frequency counts per line of the code that computes this sum:
Line 1: 1
Line 2: 2N + 2
Line 3: 4N
Line 4: 1
 Lines 1 and 4 count for one unit each.
 Line 3: executed N times, each time four units.
 Line 2: 1 for initialization, N+1 for all the tests, N for all the increments, total 2N + 2.
 Total cost: 6N + 4, which is O(N).
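The exact code being counted is not shown on the slide; the following JavaScript sketch is one assumed version that is consistent with the counts above:

// Assumed code for the counted example: summing i for i = 1 to N
function sumToN(N) {
    var s = 0;                       // line 1: executes once                         -> 1
    for (var i = 1; i <= N; i++) {   // line 2: 1 init + (N+1) tests + N increments   -> 2N + 2
        s = s + i;                   // line 3: executed N times, four units each     -> 4N
    }
    return s;                        // line 4: executes once                         -> 1
}
// Total frequency count: 6N + 4, i.e., O(N)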
19
Algorithm

Analysis of Algorithms:
To analyse the feasibility of a designed algorithm, we calculate its complexity.
This is represented in three notations, called asymptotic notations. The asymptotic
notations are as follows −
 Worst-Case Scenario − Big Oh and Little Oh Notations
 Best-Case Scenario − Big Omega and Little Omega Notations
 Average-Case Scenario − Theta Notation

Prof. Dr. K. Adisesha


20
Algorithm

Asymptotic Analysis of an algorithm:


Asymptotic analysis of an algorithm refers to computing the running time of any operation
in mathematical units of computation, i.e., describing the mathematical bounds of its run-time
performance.
 The Asymptotic analysis of an algorithm falls under three types:
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.

Prof. Dr. K. Adisesha


21
Algorithm

Asymptotic Notations of an algorithm:


Following are the commonly used asymptotic notations to calculate the running time
complexity of an algorithm.
 The Asymptotic Notations of an algorithm falls under three types:
 Big Oh Notation, Ο− It measures the worst case time complexity.
 Omega Notation, Ω− It measures the best case time complexity.
 Theta Notation, θ− It measures the Average case time complexity.

Prof. Dr. K. Adisesha


22
Algorithm

Big Oh Notation, ( Ο )of an algorithm:


The notation Ο(n) is the formal way to express the upper bound of an algorithm's
running time.
 It measures the worst case time complexity or the longest amount of time an algorithm
can possibly take to complete.
 For a function f(n), f(n) = O(g(n)) if there exist a constant k > 0 and a value n0 such that
f(n) ≤ k·g(n) for all n > n0.
 Example:
 3n+2=O(n) as 3n+2≤4n for all n≥2
 3n+3=O(n) as 3n+3≤4n for all n≥3
Prof. Dr. K. Adisesha
23
Algorithm

Omega Notation, Ω of an algorithm:


The notation Ω(n) is the formal way to express the lower bound of an algorithm's
running time. It measures the best case time complexity or the best amount of time an
algorithm can possibly take to complete
• For a function f(n), f(n) = Ω(g(n)) if there exist constants k > 0 and n0 such that
f(n) ≥ k·g(n) for all n ≥ n0.
• Example:
f(n) = 8n² + 2n − 3 ≥ 8n² − 3
     = 7n² + (n² − 3) ≥ 7n² for all n ≥ 2, taking g(n) = n²
Thus k = 7 and n0 = 2, so f(n) = Ω(n²).
• Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≥ c·f(n) for all n > n0 }
Prof. Dr. K. Adisesha
24
Algorithm

Theta Notation, θ of an algorithm:


The notation θ(n) is the formal way to express both the lower bound and the upper
bound of an algorithm's running time.
 For a function f(n), f(n) = θ(g(n)) if there exist constants k1, k2 > 0 and n0 such that
k1·g(n) ≤ f(n) ≤ k2·g(n) for all n ≥ n0.
 Example:
3n + 2 = θ(n), since 3n + 2 ≥ 3n and 3n + 2 ≤ 4n for all n ≥ 2;
here k1 = 3, k2 = 4, and n0 = 2.

 θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }
Prof. Dr. K. Adisesha
25
Algorithm

Common Asymptotic Notations:


Following is a list of some common asymptotic notations.
 constant − Ο(1)
 logarithmic − Ο(log n)
 linear − Ο(n)
 n log n − Ο(n log n)
 quadratic − Ο(n²)
 cubic − Ο(n³)
 polynomial − n^Ο(1)
 exponential − 2^Ο(n)

Prof. Dr. K. Adisesha


26
Algorithm

Compare the Rates of Change of Two Functions Using Limits:


In order to determine the relationship between f(n) and g(n), it is often useful to apply
L’Hopital’s rule. Let f(x) and g(x) be differentiable functions with derivatives f′(x) and g′(x),
respectively, such that lim (x→∞) f(x) = lim (x→∞) g(x) = ∞. Then
lim (x→∞) f(x)/g(x) = lim (x→∞) f′(x)/g′(x).

Prof. Dr. K. Adisesha


27
Algorithm

Compare the Rates of Change of Two Functions Using Limits:


 Example: Given f(n) = 5n + 3, prove that f(n) = O(n).
Take g(n) = n. Then lim (n→∞) f(n)/g(n) = lim (n→∞) (5n + 3)/n = 5.
Since the limit is a non-zero constant, f(n) is of the same growth order as g(n), hence f(n) = O(n).

Prof. Dr. K. Adisesha


28
Recurrence Relation

Recurrence:
Recursion is a fundamental concept in computer science and mathematics that
allows functions to call themselves, enabling the solution of complex problems
through iterative steps.
 There are four methods for solving Recurrence:
 Substitution Method
 Iteration Method
 Recursion Tree Method
 Master Method
 For example, the worst-case running time T(n) of the
MERGE SORT procedure is described by the recurrence
T(n) = 2T(n/2) + Θ(n) for n > 1, with T(1) = Θ(1).
Prof. Dr. K. Adisesha
29
Recurrence Relation

Iteration Methods:
It means to expand the recurrence and express it as a summation of terms of n and the
initial condition.
 Example: Consider the recurrence
T(n) = 1          if n = 1
T(n) = 2T(n-1)    if n > 1
 Solution:
T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2^2 T(n-2)
     = 4[2T(n-3)] = 2^3 T(n-3)
     = 8[2T(n-4)] = 2^4 T(n-4)    (Eq. 1)
Repeating the procedure i times gives T(n) = 2^i T(n-i).
Put n − i = 1, i.e., i = n − 1, in (Eq. 1):
T(n) = 2^(n-1) T(1) = 2^(n-1) · 1 = 2^(n-1)    {T(1) = 1, given}
Prof. Dr. K. Adisesha
30
Recurrence Relation

Recursion Tree:
A recursion tree is a graphical representation that illustrates the execution flow of a
recursive function. It provides a visual breakdown of recursive calls.
 Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it.
// find factorial of a number
factorial(n) {
    if n is less than 2:            // Factorial of 0, 1 is 1
        return 1
    else                            // Recursive step
        return n * factorial(n-1);  // Factorial of 5 => 5 * Factorial(4)...
}

/* How the function calls are made:
Factorial(5) [ 120 ]
  5 * Factorial(4) ==> 120
    4 * Factorial(3) ==> 24
      3 * Factorial(2) ==> 6
        2 * Factorial(1) ==> 2
          1
*/
Prof. Dr. K. Adisesha
31
Brute Force Approach

Brute force approach:


A brute force approach is an approach that finds all the possible solutions to find a
satisfactory solution to a given problem.
 The brute force algorithm tries out all the possibilities till a satisfactory solution is
found.
 Such an algorithm can be of two types:
 Optimizing: In this case, the best solution is found. To find the best solution, it
may have to examine all the possible solutions. For example: finding the best path for
the travelling salesman problem.
 Satisficing: It stops as soon as a satisfactory solution is
found. For example, finding a travelling salesman path which is within 10% of
optimal.
Prof. Dr. K. Adisesha
32
Brute Force Approach

Brute force approach:


A brute force approach is an approach that finds all the possible solutions to find a
satisfactory solution to a given problem.
 The following are the advantages of the brute-force algorithm:
 This algorithm finds all the possible solutions, and it also guarantees that it finds
the correct solution to a problem.
 This type of algorithm is applicable to a wide range of domains.
 It is mainly used for solving simpler and small problems.
 It can be considered a comparison benchmark to solve a simple problem and
does not require any particular domain knowledge.

Prof. Dr. K. Adisesha


33
Brute Force Approach

Linear search:
Linear Searching: Also called Sequential Searching.
 It is used to find out the location of the data item if it exists in the given collection of
data items.
 Example: Searching for element 33 in the given array of elements:

 The complexity of linear search is therefore O(n).
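A minimal JavaScript sketch of sequential search (the function name linearSearch and the sample array are illustrative):

// Sequential (linear) search: returns the index of key in arr, or -1 if absent
function linearSearch(arr, key) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] === key) {
            return i;       // found: best case O(1) when key is at index 0
        }
    }
    return -1;              // not found after scanning all n elements: worst case O(n)
}

linearSearch([10, 14, 19, 26, 27, 31, 33, 35, 42, 44], 33); // => 6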


Prof. Dr. K. Adisesha
34
Brute Force Approach
Linear search:
Linear Searching: Also called Sequential Searching.

Prof. Dr. K. Adisesha


35
Brute Force Approach
Linear search:
Time and Space Complexity of Linear Search Algorithm:
 Time Complexity:
 Best Case: In the best case, the key might be present at the first index. So the best
case complexity is O(1)
 Worst Case: In the worst case, the key might be present at the last index i.e.,
opposite to the end from which the search has started in the list. So the worst-case
complexity is O(N) where N is the size of the list.
 Average Case: O(N)
 Auxiliary Space: O(1) as except for the variable to iterate through the list, no other
variable is used.
Prof. Dr. K. Adisesha
36
Brute Force Approach

Sorting in Arrays:
A Sorting Algorithm is used to rearrange the elements of a given array or list according to a
comparison operator on the elements:
 The comparison operator is used to decide the new order of the elements in the respective
data structure.
 Types of Sorting Algorithms are:
 Bubble Sort
 Selection Sort
 Insertion Sort
 Merge Sort
 Quick Sort
 Heap Sort
 Radix Sort
 Bucket Sort
 Shell Sort
Prof. Dr. K. Adisesha
37
Brute Force Approach

Bubble Sort in Arrays:


Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent
elements if they are in the wrong order.
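A minimal JavaScript sketch of this idea (the function name bubbleSort and the sample input are illustrative):

// Bubble sort: repeatedly swap adjacent elements that are in the wrong order
function bubbleSort(arr) {
    for (var i = 0; i < arr.length - 1; i++) {
        for (var j = 0; j < arr.length - 1 - i; j++) {   // the last i elements are already in place
            if (arr[j] > arr[j + 1]) {
                var temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
    return arr;
}

bubbleSort([5, 1, 4, 2, 8]); // => [1, 2, 4, 5, 8]; O(n^2) comparisons in the worst case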

Prof. Dr. K. Adisesha


38
Brute Force Approach

Selection Sort:

Prof. Dr. K. Adisesha


39
Brute Force Approach

Selection Sort:

Prof. Dr. K. Adisesha


40
Brute Force Approach

Selection Sort:
Complexity Analysis of Selection Sort
Input: Given n input elements.
Output: Number of steps incurred to sort a list.
Logic: If we are given n elements, then in the
 First pass, it will do n-1 comparisons;
 Second pass, it will do n-2; in the third pass,
 it will do n-3 and so on.
Therefore, the selection sort algorithm has a time complexity of O(n²) and a
space complexity of O(1), since it needs only a constant amount of extra memory
(a temp variable for swapping).
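A minimal JavaScript sketch matching the pass-by-pass analysis above (the function name selectionSort is illustrative):

// Selection sort: in pass i, find the minimum of the unsorted part and swap it into position i
function selectionSort(arr) {
    for (var i = 0; i < arr.length - 1; i++) {            // n-1 passes
        var minIndex = i;
        for (var j = i + 1; j < arr.length; j++) {         // n-1-i comparisons in this pass
            if (arr[j] < arr[minIndex]) {
                minIndex = j;
            }
        }
        var temp = arr[i];                                 // single temp variable: O(1) extra space
        arr[i] = arr[minIndex];
        arr[minIndex] = temp;
    }
    return arr;
}

selectionSort([64, 25, 12, 22, 11]); // => [11, 12, 22, 25, 64]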
Prof. Dr. K. Adisesha
41
Exhaustive search Approach

Exhaustive search:
Exhaustive search, also known as brute-force search or generate and test, is a problem-
solving technique that involves systematically checking all possible candidates to see if
they meet the problem's requirements.
Exhaustive Search:
 Traveling Salesman Problem
 Knapsack Problem
 Assignment Problem
 Graph Traversals

Prof. Dr. K. Adisesha


42
Exhaustive search Approach

Traveling Salesman Problem:


The Travelling Salesman Problem (TSP) is a classic algorithmic problem in the field of
computer science and operations research, focusing on optimization. It seeks the
shortest possible route that visits every point in a set of locations just once.
 To solve the TSP using the brute-force approach, you
must calculate the total number of routes and then
draw and list all the possible routes.
 Calculate the distance of each route and then choose
the shortest one — this is the optimal solution.

Prof. Dr. K. Adisesha


43
Exhaustive search Approach

Traveling Salesman Problem:


To solve the TSP using the brute-force approach, you must calculate the total number
of routes and then draw and list all the possible routes just once.

 Calculate the distance of each route and then choose the


shortest one — this is the optimal solution.
Prof. Dr. K. Adisesha
44
Exhaustive search Approach

Knapsack Problem:
In the knapsack problem, you need to pack a set of items, with given values and sizes
(such as weights or volumes), into a container with a maximum capacity.
 If the total size of the items exceeds the capacity, you can't pack them all. In that case,
the problem is to choose a subset of the items of maximum total value that will fit in the
container.
 The data includes the following:
 Weights: A vector containing the weights of the items.
 Values: A vector containing the values of the items.
 Capacities: A vector with just one entry, the capacity of the knapsack.

Prof. Dr. K. Adisesha


45
Exhaustive search Approach

Knapsack Problem:
The Knapsack problem is an example of a combinatorial optimization problem. This
problem is also commonly known as the “Rucksack Problem“.
 The knapsack problem can be classified into the following types:
 Fractional Knapsack Problem
 0/1 Knapsack Problem
 Bounded Knapsack Problem
 Unbounded Knapsack Problem

Prof. Dr. K. Adisesha


46
Exhaustive search Approach

Fractional Knapsack Problem:


Given the weights and values of N items, put these items in a knapsack of capacity W to
get the maximum total value in the knapsack. In Fractional Knapsack, we can break
items for maximizing the total value of the knapsack.

Prof. Dr. K. Adisesha


47
Exhaustive search Approach

Prof. Dr. K. Adisesha


48
Exhaustive search Approach

Fractional Knapsack Problem:


Example: Consider the example: arr[] = {{100, 20}, {60, 10}, {120, 30}}, W = 50.
Sorting: Initially sort the array based on the profit/weight ratio.
The sorted array will be {{60, 10}, {100, 20}, {120, 30}}.
Iteration:
For i = 0, weight = 10 which is less than W. So add this element in the knapsack. profit = 60 and
remaining W = 50 – 10 = 40.
For i = 1, weight = 20 which is less than W. So add this element too. profit = 60 + 100 = 160 and
remaining W = 40 – 20 = 20.
For i = 2, weight = 30 is greater than W. So add 20/30 fraction = 2/3 fraction of the element.
Therefore profit = 2/3 * 120 + 160 = 80 + 160 = 240 and remaining W becomes 0.
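The greedy iteration above can be sketched in JavaScript as follows (the function name fractionalKnapsack and the item-object layout are assumptions, not part of the slides):

// Greedy fractional knapsack: sort by profit/weight ratio, then take items (or fractions) greedily
function fractionalKnapsack(items, W) {                 // items: [{profit, weight}, ...]
    items.sort(function (a, b) {
        return (b.profit / b.weight) - (a.profit / a.weight);   // best ratio first
    });
    var total = 0;
    for (var i = 0; i < items.length && W > 0; i++) {
        if (items[i].weight <= W) {                     // the whole item fits
            total += items[i].profit;
            W -= items[i].weight;
        } else {                                        // take only the fraction that still fits
            total += items[i].profit * (W / items[i].weight);
            W = 0;
        }
    }
    return total;
}

fractionalKnapsack([{profit: 100, weight: 20}, {profit: 60, weight: 10}, {profit: 120, weight: 30}], 50); // => 240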

Prof. Dr. K. Adisesha


49
Exhaustive search Approach

0/1 Knapsack Problem:


We are given N items where each item has some weight (wi) and value (vi) associated
with it. We are also given a bag with capacity W. The target is to put the items into the
bag such that the sum of values associated with them is the maximum possible.

Prof. Dr. K. Adisesha


50
Exhaustive search Approach

0/1 Knapsack Problem:


The 0/1 Knapsack Problem states that you have a backpack with a weight limit, and
you are in a room full of treasures, each treasure with a value and a weight.
 To solve the 0/1 Knapsack Problem using brute force means to:
 Calculate the value of every possible combination of
items in the knapsack.
 Discard the combinations that are heavier than the
knapsack weight limit.
 Choose the combination of items with the highest
total value.

Prof. Dr. K. Adisesha


51
Exhaustive search Approach

0/1 Knapsack Problem:


We are given N items where each item has some weight (wi) and value (vi) associated
with it. We are also given a bag with capacity W. The target is to put the items into the
bag such that the sum of values associated with them is the maximum possible.

Prof. Dr. K. Adisesha


52
Exhaustive search Approach

0/1 Knapsack Problem:


A similar dynamic programming solution for the 0-1 knapsack problem also runs in
pseudo-polynomial time. Assume w1, w2,…, wn, W are strictly positive integers.
 We can define m[i, w], the maximum value achievable using the first i items with capacity w,
recursively as follows:
m[0, w] = 0
m[i, w] = m[i-1, w]                                  if wi > w
m[i, w] = max( m[i-1, w], m[i-1, w-wi] + vi )        otherwise
 Example:
Wi={4, 3, 2, 1}
Vi={5, 4, 3, 2 } Capacity=6
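A minimal JavaScript sketch of this recurrence computed bottom-up (the function name knapsack01 is illustrative):

// Dynamic-programming sketch of the m[i, w] recurrence above (pseudo-polynomial time)
function knapsack01(weights, values, W) {
    var n = weights.length;
    var m = [];
    for (var i = 0; i <= n; i++) {                     // m[i][w]: best value using items 1..i with capacity w
        m.push(new Array(W + 1).fill(0));
    }
    for (var i = 1; i <= n; i++) {
        for (var w = 0; w <= W; w++) {
            if (weights[i - 1] > w) {
                m[i][w] = m[i - 1][w];                 // item i does not fit
            } else {                                   // best of skipping item i or taking it
                m[i][w] = Math.max(m[i - 1][w], m[i - 1][w - weights[i - 1]] + values[i - 1]);
            }
        }
    }
    return m[n][W];
}

knapsack01([4, 3, 2, 1], [5, 4, 3, 2], 6); // => 9 for the example above (items of weight 3, 2 and 1)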
Prof. Dr. K. Adisesha
53
Exhaustive search Approach

0/1 Knapsack Problem:


Given weights and profits of N items, put items into a knapsack of capacity W. List the
possible solutions in which no remaining item has a weight less than or equal to the remaining
capacity of the knapsack, and report the one with maximum profit.
Input: Profits[] = {60, 100, 120, 50}, Weights[] = {10, 20, 30, 40}, W = 40
Output:
10: 60, 20: 100,
10: 60, 30: 120,
Maximum Profit = 180
Explanation: Maximum profit from all the possible solutions is 180

Input: Profits[] = {60, 100, 120, 50}, Weights[] = {10, 20, 30, 40}, W = 50
Output:
10: 60, 20: 100,
10: 60, 30: 120,
20: 100, 30: 120,
Maximum Profit = 220
Explanation: Maximum profit from all the possible solutions is 220
Prof. Dr. K. Adisesha
54
Exhaustive search Approach

Graphs Traversal Algorithms:


A graph is a non-linear data structure that consists of nodes and the edges connecting
them. The edges may be directed or undirected.
 A graph is represented as G(V, E). For example, the following graph can be represented as
G({A, B, C, D, E}, {(A, B), (B, D), (D, E), (B, C), (C, A)})
 The graph has two types of traversal algorithms.
 Breadth First Search
 Depth First Search.

Prof. Dr. K. Adisesha


55
Exhaustive search Approach

BFS algorithm:
Breadth-first search is a graph traversal algorithm that starts traversing the graph
from the root node and explores all the neighboring nodes.
 The applications of breadth-first-algorithm are given as follows -
 BFS can be used to find the neighboring locations from a given source location.
 In a peer-to-peer network, BFS algorithm can be used as a traversal method to find all the
neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc. employ this
process to find "seeds" and "peers" in the network.
 BFS can be used in web crawlers to create web page indexes. It is one of the main
algorithms that can be used to index web pages.
 BFS is used to determine the shortest path and minimum spanning tree.
 BFS is also used in Cheney's algorithm for copying garbage collection.
Prof. Dr. K. Adisesha
56
Exhaustive search Approach

BFS algorithm:
Breadth-first search is a graph traversal algorithm that starts traversing the graph
from the root node and explores all the neighboring nodes.
 The steps involved in the BFS algorithm to explore a graph are given as follows -
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose
STATUS = 1) and set their STATUS = 2
(waiting state)
[END OF LOOP]
Step 6: EXIT.
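A minimal JavaScript sketch of these steps, using a plain visited flag in place of the three STATUS states (the adjacency-list graph g is illustrative, not from the slides):

// BFS using an adjacency list and a queue
function bfs(graph, start) {
    var visited = {};                       // true once a node has been enqueued (waiting/processed)
    var queue = [start];
    var order = [];
    visited[start] = true;
    while (queue.length > 0) {
        var node = queue.shift();           // dequeue a node N and process it
        order.push(node);
        var neighbours = graph[node] || [];
        for (var i = 0; i < neighbours.length; i++) {
            if (!visited[neighbours[i]]) {  // enqueue only neighbours still in the "ready" state
                visited[neighbours[i]] = true;
                queue.push(neighbours[i]);
            }
        }
    }
    return order;
}

var g = { A: ['B', 'C'], B: ['A', 'D'], C: ['A'], D: ['B', 'E'], E: ['D'] };
bfs(g, 'A'); // => [ 'A', 'B', 'C', 'D', 'E' ]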
Prof. Dr. K. Adisesha
57
Exhaustive search Approach

Depth First Search:


The depth-first search (DFS) algorithm starts with the initial node of graph G and goes
deeper until we find the goal node or the node with no children.
 Applications of DFS algorithm
 The applications of using the DFS algorithm are given as follows -
 DFS algorithm can be used to implement the topological sorting.
 It can be used to find the paths between two vertices.
 It can also be used to detect cycles in the graph.
 The DFS algorithm is also used for puzzles that have a single solution (such as mazes).
 DFS is used to determine if a graph is bipartite or not.

Prof. Dr. K. Adisesha


58
Exhaustive search Approach

Depth First Search:


The depth-first search (DFS) algorithm starts with the initial node of graph G and goes
deeper until we find the goal node or the node with no children.
The step by step process to implement the DFS traversal is given as follows -
1) First, create a stack with the total number of vertices in the graph.
2) Now, choose any vertex as the starting point of traversal, and push that vertex into the stack.
3) After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) to the top
of the stack.
4) Now, repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's top.
5) If no vertex is left, go back and pop a vertex from the stack.
6) Repeat steps 2, 3, and 4 until the stack is empty.

Prof. Dr. K. Adisesha


59
Exhaustive search Approach

Depth First Search:


The depth-first search (DFS) algorithm starts with the initial node of graph G and goes
deeper until we find the goal node or the node with no children.
Algorithm
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose
STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
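A minimal JavaScript sketch of these steps, again using a visited flag instead of the STATUS states (the graph g2 is illustrative):

// DFS using an explicit stack, mirroring the push/pop steps above
function dfs(graph, start) {
    var visited = {};
    var stack = [start];
    var order = [];
    while (stack.length > 0) {
        var node = stack.pop();             // pop the top node N
        if (!visited[node]) {
            visited[node] = true;           // process N
            order.push(node);
            var neighbours = graph[node] || [];
            for (var i = neighbours.length - 1; i >= 0; i--) {
                if (!visited[neighbours[i]]) {
                    stack.push(neighbours[i]);   // push the unvisited neighbours of N
                }
            }
        }
    }
    return order;
}

var g2 = { A: ['B', 'C'], B: ['A', 'D'], C: ['A'], D: ['B', 'E'], E: ['D'] };
dfs(g2, 'A'); // => [ 'A', 'B', 'D', 'E', 'C' ]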
Prof. Dr. K. Adisesha
60
Exhaustive search Approach

Depth First Search:


The depth-first search (DFS) algorithm starts with the initial node of graph G and goes
deeper until we find the goal node or the node with no children.

Prof. Dr. K. Adisesha
61
Decrease and Conquer

Decrease and Conquer:


A problem-solving strategy known as "Decrease and Conquer" involves slicing the size
of the input at every stage of the solution procedure.
 Similar to divide-and-conquer in that it breaks the problem down into smaller sub-
problems, decrease-and-conquer reduces the input to a single smaller instance at each stage
rather than splitting it into several independent sub-problems.
 Algorithm
Step 1: Start
Step 2: Set p (the element whose position is to be found)
Step 3: Set the pointers first (at the first position) and last (at the last position)
Step 4: Find the middle element (mid)
Step 5: If p == mid element, then print the middle element
Step 6: If p > mid element, check p against the elements right of mid
Step 7: If p < mid element, check p against the elements left of mid
Step 8: Repeat steps 4 to 7 till first meets last
Step 9: Return the result
Step 10: Stop
Prof. Dr. K. Adisesha
62
Decrease and Conquer

Binary Searching:
The binary search algorithm can be used with
only sorted list of elements.
 Binary Search first divides a large array into
two smaller sub-arrays and then recursively
operates on the appropriate sub-array.
 Binary Search basically reduces the search
space to half at each step

Prof. Dr. K. Adisesha


63
Decrease and Conquer
Binary Searching:
The binary search algorithm can be used with only sorted list of elements.
 Example: Searching the element 57 from the array of elements

Prof. Dr. K. Adisesha


64
Decrease and Conquer
Binary Searching:
 Example:

Prof. Dr. K. Adisesha


65
Decrease and Conquer

Binary Searching:

Prof. Dr. K. Adisesha


66
Decrease and Conquer

Binary Searching Analysis:


 Input: an array A of size n, already sorted in the ascending or descending order.
 Output: analyze to search an element item in the sorted array of size n.
 Logic: Let T (n) = number of comparisons of an item with n elements in a sorted array.
Set BEG = 1 and END = n
 Compare the search item with the mid item.
 Case 1: item = A[mid]; then LOC = mid. This is the best case and T(n) = 1.
 Case 2: item ≠ A[mid]; then we split the array into two equal parts of size n/2 and repeat the
same process on one half until the search element is found.
 The time is one comparison of the search element with the mid element, plus the time to search
the selected half of the array: T(n) = T(n/2) + 1, which solves to T(n) = O(log n).
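A minimal JavaScript sketch of the iterative search analysed above (the function name binarySearch and the sample array are illustrative):

// Iterative binary search on a sorted array: T(n) = T(n/2) + 1 comparisons, i.e., O(log n)
function binarySearch(arr, item) {
    var beg = 0;
    var end = arr.length - 1;
    while (beg <= end) {
        var mid = Math.floor((beg + end) / 2);
        if (arr[mid] === item) {
            return mid;               // Case 1: item found at mid
        } else if (arr[mid] < item) {
            beg = mid + 1;            // Case 2: search the right half
        } else {
            end = mid - 1;            // Case 2: search the left half
        }
    }
    return -1;                        // item not present
}

binarySearch([10, 14, 19, 27, 31, 35, 42, 44, 57, 63], 57); // => 8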

Prof. Dr. K. Adisesha


67
Decrease and Conquer

Insertion Sorting:
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list)
one item at a time.
 This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained
which is always sorted.

Prof. Dr. K. Adisesha


68
Decrease and Conquer

Insertion Sorting:
This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained
which is always sorted.

Prof. Dr. K. Adisesha


69
Decrease and Conquer

Insertion Sorting:
 ALGORITHM: Insertion Sort (A, N), where A is an array with N unsorted elements.
 Step 1: for I = 1 to N-1
 Step 2:   J = I
           While (J >= 1)
               if ( A[J] < A[J-1] ) then
                   Temp = A[J];
                   A[J] = A[J-1];
                   A[J-1] = Temp;
               [End if]
               J = J-1
           [End of While loop]
         [End of For loop]
 Step 3: Exit
Prof. Dr. K. Adisesha
70
Divide and Conquer

Merging from Array:


Merging: It is used to combine the data items of two sorted files into single file in the
sorted form.
 We have a sorted linear array A as below:
Index:  1   2   3   4   5    6
Value: 10  40  50  80  95  100
 And a sorted linear array B as below:
Index:  1   2   3   4
Value: 20  35  45  90
 After merging, the merged array C is as below:
Index:  1   2   3   4   5   6   7   8   9   10
Value: 10  20  35  40  45  50  80  90  95  100
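A minimal JavaScript sketch of this merge step (the function name merge is illustrative):

// Merge two sorted arrays A and B into a single sorted array C
function merge(A, B) {
    var C = [];
    var i = 0, j = 0;
    while (i < A.length && j < B.length) {
        if (A[i] <= B[j]) {           // take the smaller front element
            C.push(A[i]); i++;
        } else {
            C.push(B[j]); j++;
        }
    }
    while (i < A.length) { C.push(A[i]); i++; }   // copy any leftovers of A
    while (j < B.length) { C.push(B[j]); j++; }   // copy any leftovers of B
    return C;
}

merge([10, 40, 50, 80, 95, 100], [20, 35, 45, 90]);
// => [10, 20, 35, 40, 45, 50, 80, 90, 95, 100]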
Prof. Dr. K. Adisesha
71
Divide and Conquer

Binary tree traversal:


Binary tree traversal is a process that involves visiting each node in a binary tree data
structure.
 Tree Traversal refers to the process of visiting or accessing each node of the tree
exactly once in a certain order.
 A Tree Data Structure can be traversed in following ways:
 Depth First Search or DFS
 Inorder Traversal
 Preorder Traversal
 Postorder Traversal
 Level Order Traversal or Breadth First Search or BFS
Prof. Dr. K. Adisesha
72
Divide and Conquer

Binary tree traversal (DFS):


Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 Pre-order: A pre-order traversal on a tree performs the following steps starting from
the root:
1) Return the root node value.
2) Traverse the left subtree by recursively calling the pre-order function.
3) Traverse the right subtree by recursively calling the pre-order function.
 Pre-order traversal would output the node values in the following
order: A, B, D, E, C
Prof. Dr. K. Adisesha
73
Divide and Conquer

Binary tree traversal (DFS):


A pre-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
    this.data = data;
    this.left = null;
    this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;

function pre_order(root, nodes) {
    nodes.push(root.data);
    if (root && root.left) { pre_order(root.left, nodes); }
    if (root && root.right) { pre_order(root.right, nodes); }
    return nodes;
}

pre_order(root, []); // => [ A, B, D, E, C ]
Prof. Dr. K. Adisesha
74
Divide and Conquer

Binary tree traversal (DFS):


Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 In-order: An in-order traversal on a tree performs the following steps starting from the
root:
1) Traverse the left subtree by recursively calling the in-order function.
2) Return the root node value.
3) Traverse the right subtree by recursively calling the in-order function.
 In-order traversal would output the node values in the following order:
D, B, E, A, C
Prof. Dr. K. Adisesha
75
Divide and Conquer

Binary tree traversal (DFS):


An in-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
    this.data = data;
    this.left = null;
    this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;

function in_order(root, nodes) {
    if (root && root.left) { in_order(root.left, nodes); }
    nodes.push(root.data);
    if (root && root.right) { in_order(root.right, nodes); }
    return nodes;
}

in_order(root, []); // => [ D, B, E, A, C ]
Prof. Dr. K. Adisesha
76
Divide and Conquer

Binary tree traversal (DFS):


Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 Post-order: A post-order traversal on a tree performs the following steps starting from
the root.
1) Traverse the left subtree by recursively calling the post-order function.
2) Traverse the right subtree by recursively calling the post-order function.
3) Return the root node value.
 Post-order traversal would output the node values in the following order:
D, E, B, C, A
Prof. Dr. K. Adisesha
77
Divide and Conquer

Binary tree traversal (DFS):


A post-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
    this.data = data;
    this.left = null;
    this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
function post_order(root, nodes) {
    if (root && root.left) { post_order(root.left, nodes); }
    if (root && root.right) { post_order(root.right, nodes); }
    nodes.push(root.data);
    return nodes;
}
post_order(root, []); // => [ D, E, B, C, A ]
Prof. Dr. K. Adisesha
78
Divide and Conquer

Binary tree traversal (BFS):


Tree traversal is the process of visiting each node in a tree, such as a binary tree or
binary search tree, exactly once. There are several effective traversal algorithms which
we will cover below.
 Level-order: A level-order traversal on a tree performs the following steps starting
from the root.
1) Add the root to a queue.
2) Remove the node at the front of the queue, and return its value.
3) Add all children of popped node to queue, and continue from step 2 until
queue is empty.
 Level-order traversal would output the node values in the following
order: A, B, C, D, E
Prof. Dr. K. Adisesha
79
Divide and Conquer

Binary tree traversal (BFS):


A level-order traversal on a tree performs the following steps starting from the root:
function Node(data) {
    this.data = data;
    this.left = null;
    this.right = null;
}
// create nodes
var root = new Node('A');
var n1 = new Node('B');
var n2 = new Node('C');
var n3 = new Node('D');
var n4 = new Node('E');
// setup children
root.left = n1;
root.right = n2;
n1.left = n3;
n1.right = n4;
function level_order(root, nodes) {
    var queue = [root];
    while (queue.length > 0) {
        var n = queue.shift();
        nodes.push(n.data);
        if (n.left !== null) { queue.push(n.left); }
        if (n.right !== null) { queue.push(n.right); }
    }
    return nodes;
}
level_order(root, []); // => [ A, B, C, D, E ]
Prof. Dr. K. Adisesha
80
Divide and Conquer

Advantages of Divide and Conquer:


Divide and Conquer Algorithm is a problem-solving method in Data Structures working
on recursion principle.
 Advantages of Divide and Conquer:
 Efficiency: Divide and conquer algorithms typically have a time complexity of O(n log
n), which is more efficient than many other algorithms for large datasets.
 Simplicity: Divide and conquer algorithms are often easy to understand and implement.
 Parallelizability: Divide and conquer algorithms can be easily parallelized, as each
subproblem can be solved independently.
 Cache-friendliness: Divide and conquer algorithms tend to have good cache
performance, as they access data in a predictable pattern.
Prof. Dr. K. Adisesha
81
Divide and Conquer

Disadvantages of Divide and Conquer:


Divide and Conquer Algorithm is a problem-solving method in Data Structures working
on recursion principle.
 Disadvantages of Divide and Conquer:
 Recursion overhead: Divide and conquer algorithms use recursion, which can lead to
significant overhead in terms of stack space and function calls.
 Not suitable for all problems: Divide and conquer algorithms are not suitable for all
types of problems. They are most effective for problems that can be recursively divided
into smaller subproblems.
 Limited memory efficiency: Divide and conquer algorithms can require a significant
amount of memory, as they create multiple copies of the input data.
Prof. Dr. K. Adisesha
82
Greedy Technique

Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 This technique is basically used to determine the feasible solution that may or may not
be optimal.
 The following are the characteristics of a greedy method:
 To construct the solution in an optimal way, this algorithm creates two sets where one
set contains all the chosen items, and another set contains the rejected items.
 A Greedy algorithm makes good local choices in the hope that the solution should be
either feasible or optimal.

Prof. Dr. K. Adisesha


83
Greedy Technique

Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 The components that can be used in the greedy algorithm are:
 Candidate set: A solution that is created from the set is known as a candidate set.
 Selection function: This function is used to choose the candidate or subset which can be
added in the solution.
 Feasibility function: A function that is used to determine whether the candidate or subset can
be used to contribute to the solution or not.
 Objective function: A function is used to assign the value to the solution or the partial
solution.
 Solution function: This function is used to intimate whether the complete function has been
reached or not.
Prof. Dr. K. Adisesha
85
Greedy Technique

Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 Applications of Greedy Algorithm
 It is used in finding the shortest path.
 It is used to find the minimum spanning tree using the prim's algorithm or the
Kruskal's algorithm.
 It is used in a job sequencing with a deadline.
 This algorithm is also used to solve the fractional knapsack problem.

Prof. Dr. K. Adisesha


86
Greedy Technique

Greedy Algorithm:
A greedy algorithm is an approach for solving a problem by selecting the best option
available at the moment. It works in a top-down approach.
 Types of Greedy Algorithms
 Knapsack Problem.
 Minimum Spanning Tree.
 Single-Source Shortest Path Problem.
 Job Scheduling Problem.
 Prim's Minimal Spanning Tree Algorithm.
 Kruskal's Minimal Spanning Tree Algorithm.
 Dijkstra's Shortest Path Algorithm.
Prof. Dr. K. Adisesha
87
Greedy Technique

Knapsack Problem using Greedy Algorithm:


In this method, the knapsack is filled so that the maximum capacity of the
knapsack is utilized and the maximum profit can be earned from it.
 The knapsack problem using the Greedy Method is referred to as:
 Given a list of n objects, say {I1, I2,……, In) and a knapsack (or bag).
 The capacity of the knapsack is M.
 Each object Ij has a weight wj and a profit of pj
 If a fraction xj (where xj ∈ [0, 1]) of an object Ij is placed into the knapsack, then a profit of pj·xj
is earned.
 The problem is to fill the knapsack (up to its maximum capacity M), maximizing the total profit
earned.

Prof. Dr. K. Adisesha


88
Greedy Technique

Minimum Spanning Tree (MST):


In a weighted graph, a minimum spanning tree is a spanning tree that has minimum total
weight among all spanning trees of the same graph.
 In real-world situations, this weight can be measured as distance, congestion, traffic load
or any arbitrary value denoted to the edges.
 The two most important spanning tree algorithms (both greedy algorithms) are:
 Kruskal's Algorithm
 Prim's Algorithm

Prof. Dr. K. Adisesha


89
Greedy Technique

Kruskal's Algorithm:
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input
and finds the subset of the edges of that graph which:
 form a tree that includes every vertex
 has the minimum sum of weights among all the trees that can be formed from the
graph
 The steps for implementing Kruskal's algorithm are as follows:
 Sort all the edges from low weight to high
 Take the edge with the lowest weight and add it to the spanning tree. If adding the
edge created a cycle, then reject this edge.
 Keep adding edges until we reach all vertices.
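A hedged JavaScript sketch of these steps, using a simple union-find structure to detect cycles (the function name kruskal, the edge-object layout and the tiny sample graph are assumptions):

// Kruskal sketch: sort edges by weight, add an edge when it joins two different components
function kruskal(vertices, edges) {                      // edges: [{u, v, w}, ...]
    var parent = {};
    vertices.forEach(function (v) { parent[v] = v; });
    function find(x) {                                   // representative of x's component
        while (parent[x] !== x) { x = parent[x]; }
        return x;
    }
    edges.sort(function (a, b) { return a.w - b.w; });   // low weight to high
    var mst = [];
    var cost = 0;
    for (var i = 0; i < edges.length && mst.length < vertices.length - 1; i++) {
        var ru = find(edges[i].u), rv = find(edges[i].v);
        if (ru !== rv) {                                 // different components: no cycle is created
            parent[ru] = rv;                             // union the two components
            mst.push(edges[i]);
            cost += edges[i].w;
        }                                                // otherwise the edge would create a cycle: reject it
    }
    return { mst: mst, cost: cost };
}

kruskal(['A', 'B', 'C'], [{u: 'A', v: 'B', w: 6}, {u: 'B', v: 'C', w: 5}, {u: 'A', v: 'C', w: 9}]).cost; // => 11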
Prof. Dr. K. Adisesha
90
Greedy Technique

Kruskal's Algorithm:
Kruskal's algorithm finds a subgraph that connects all the vertices present in the main graph
with the least possible number of edges and minimum total cost.
 Examples: Construct a minimum spanning tree using kruskal’s algorithm for the graph given
 Solution:
 As the first step, sort all the edges in the given graph in an
ascending order and store the values in an array.
Edge:  BD  AB  CF  FE  BC  GF  AG  CD  DE  CG
Cost:   5   6   9  10  11  12  15  17  22  25
Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}
Prof. Dr. K. Adisesha
91
Greedy Technique

Prim's algorithm:
Prim's algorithm to find minimum cost spanning tree uses the greedy approach.
 Algorithm
 Declare an array visited[] to store the visited vertices and, add the arbitrary root, say S, to the visited
array.
 Check whether the adjacent vertices of the last visited vertex are present in the visited[] array or not.
 If the vertices are not in the visited[] array, compare the cost of edges and add the least cost edge to the
output spanning tree.
 The adjacent unvisited vertex with the least cost edge is added into the visited[] array and the least cost
edge is added to the minimum spanning tree output.
 Steps 2 and 4 are repeated for all the unvisited vertices in the graph to obtain the full minimum spanning
tree output for the given graph.
 Calculate the cost of the minimum spanning tree obtained.
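A hedged JavaScript sketch of this procedure, scanning the edges that leave the visited set for the least-cost one (the function name prim, the adjacency-list layout and the small graph wg are assumptions):

// Prim sketch: grow the tree from an arbitrary root, always adding the cheapest edge to an unvisited vertex
function prim(graph, root) {                 // graph[u] = [{v, w}, ...] adjacency list with weights
    var visited = {};
    visited[root] = true;
    var visitedCount = 1;
    var total = 0;
    var vertexCount = Object.keys(graph).length;
    while (visitedCount < vertexCount) {
        var best = null;
        for (var u in visited) {             // scan edges leaving the visited set
            graph[u].forEach(function (edge) {
                if (!visited[edge.v] && (best === null || edge.w < best.w)) {
                    best = edge;             // least-cost edge to an unvisited vertex
                }
            });
        }
        visited[best.v] = true;              // add that vertex to the spanning tree
        visitedCount++;
        total += best.w;
    }
    return total;                            // cost of the minimum spanning tree
}

var wg = {
    A: [{v: 'B', w: 2}, {v: 'C', w: 3}],
    B: [{v: 'A', w: 2}, {v: 'C', w: 1}],
    C: [{v: 'A', w: 3}, {v: 'B', w: 1}]
};
prim(wg, 'A'); // => 3 (edges A-B and B-C)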
Prof. Dr. K. Adisesha
92
Greedy Technique

Prim's algorithm:
Prim's algorithm to find a minimum cost spanning tree (like Kruskal's algorithm) uses the
greedy approach. Prim's algorithm shares a similarity with the shortest path first
algorithms.
 The algorithm, similar to any shortest path algorithm, begins from a vertex that is set as a root
and walks through all the vertices in the graph by determining the least cost adjacent edges.
 Step 1 - Remove all loops and parallel edges
 Step 2 - Choose any arbitrary node as root node

Prof. Dr. K. Adisesha


93
Greedy Technique

Dijkstra’s Algorithm:
Dijkstra's shortest path algorithm was invented in 1956 by the Dutch computer scientist
Edsger W. Dijkstra during a twenty minutes coffee break, while out shopping with his
fiancée in Amsterdam.
 Dijkstra's algorithm is used for solving single-source
shortest path problems for directed or undirected paths.
 Dijkstra's algorithm finds the shortest path from one vertex
to all other vertices.
 It does so by repeatedly selecting the nearest unvisited
vertex and calculating the distance to all the unvisited
neighboring vertices.
Prof. Dr. K. Adisesha
94
Greedy Technique

Dijkstra’s Algorithm:
Dijkstra's algorithm is used for solving single-source shortest path problems for directed
or undirected paths.
1. Set initial distances for all vertices: 0 for the source vertex, and infinity for all the others.
2. Choose the unvisited vertex with the shortest distance from the start to be current vertex.
3. For each of the current vertex's unvisited neighbor vertices, calculate the distance from
the source and update the distance if the new, calculated, distance is lower.
4. We are now done with the current vertex, so we mark it as visited. A visited vertex is not
checked again.
5. Go back to step 2 to choose a new current vertex, and keep repeating these steps until all
vertices are visited.
6. In the end we are left with the shortest path from the source vertex to every other vertex.
Prof. Dr. K. Adisesha
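A minimal JavaScript sketch of these six steps in their simple O(V^2) form (the function name dijkstra is illustrative, and the small graph dg below uses only a few of the vertices from the example on the next slide):

// Dijkstra sketch: repeatedly pick the nearest unvisited vertex and relax its outgoing edges
function dijkstra(graph, src) {              // graph[u] = [{v, w}, ...] with non-negative weights
    var dist = {};
    var visited = {};
    for (var u in graph) { dist[u] = Infinity; }     // step 1: all distances start at infinity
    dist[src] = 0;                                   // ... except the source, which is 0
    for (var i = 0; i < Object.keys(graph).length; i++) {
        var current = null;
        for (var v in graph) {                       // step 2: nearest unvisited vertex
            if (!visited[v] && (current === null || dist[v] < dist[current])) {
                current = v;
            }
        }
        if (current === null || dist[current] === Infinity) { break; }
        visited[current] = true;                     // step 4: mark as visited
        graph[current].forEach(function (edge) {     // step 3: relax edges to unvisited neighbours
            if (dist[current] + edge.w < dist[edge.v]) {
                dist[edge.v] = dist[current] + edge.w;
            }
        });
    }
    return dist;                                     // shortest distance from src to every vertex
}

var dg = {
    0: [{v: '1', w: 4}, {v: '7', w: 8}],
    1: [{v: '2', w: 8}],
    2: [], 7: []
};
dijkstra(dg, '0'); // => { '0': 0, '1': 4, '2': 12, '7': 8 }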
95
Greedy Technique

Dijkstra’s Algorithm:
Given a weighted graph and a source vertex in the graph, find the shortest paths from the
source to all the other vertices in the given graph.
Input: src = 0, the graph.
Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8
Prof. Dr. K. Adisesha
96
Greedy Technique

Lower Bound Theory:


According to the lower bound theory, for a lower bound L(n) of an algorithm, it is not
possible to have any other algorithm (for a common problem) whose time complexity is
less than L(n) for random input.
 Let L(n) be the running time of an algorithm A(say), then g(n) is the Lower Bound of A if
there exist two constants C and N such that L(n) >= C*g(n) for n > N.
 Lower bound of an algorithm is shown by the asymptotic notation called Omega.
 Consider sorting three numbers a1, a2, and a3. There are 3! = 6 possible combinations:
 (a1, a2, a3), (a1, a3, a2),
 (a2, a1, a3), (a2, a3, a1),
 (a3, a1, a2), (a3, a2, a1)
Prof. Dr. K. Adisesha
97
Greedy Technique

Decision Tree Algorithms:


Decision trees are a type of machine-learning algorithm that can be used for both
classification and regression tasks.
 The algorithm works by recursively splitting the data into smaller and smaller subsets based
on the feature values.
 Before we dive into decision trees, we need to know about the following important terms:
 Root Node: It is the topmost node in the tree, which represents the complete dataset. It is the
starting point of the decision-making process.
 Internal Node: A node that symbolizes a choice regarding an input feature. Branching off of
internal nodes connects them to leaf nodes or other internal nodes.
 Leaf/Terminal Node: A node without any child nodes that indicates a class label or a
numerical value.
Prof. Dr. K. Adisesha
98
Greedy Technique

Decision Tree Algorithms:


A decision tree is a full binary tree that shows the comparisons between elements that are
executed by an appropriate sorting algorithm operating on an input of a given size.
Control, data movement, and all other conditions of the algorithm are ignored.
 In a decision tree for comparison sorting, the input is an array of length n.
 So the total number of leaves will be n! (i.e., the total number of possible orderings).
 If the tree height is h, then surely
n! ≤ 2^h (since the tree is binary),
which gives h ≥ log2(n!) = Ω(n log n) comparisons in the worst case.

Prof. Dr. K. Adisesha


99
Greedy Technique

P Class Problem:
P-type problems are a class of problems in computational complexity theory that can be
solved by a deterministic Turing machine within a polynomial amount of time.
 This is also known as polynomial-time deterministic algorithms.
 P: the class of problems that have polynomial-time deterministic algorithms.
 A deterministic algorithm is (essentially) one that always computes the correct
answer.
 This problem is easy to understand and tractable.
 Sample Problems in P
 Searching
 Element uniqueness
 Graph connectivity
Prof. Dr. K. Adisesha
100
Greedy Technique

NP Class Problem:
A problem is said to be in NP (nondeterministic polynomial time) if it can be solved in
polynomial time by a nondeterministic Turing machine.
 NP stands for “Nondeterministic Polynomial-time”
 The solutions to the NP class problem are hard to find since they are being solved by a
non-deterministic machine.
 Some of the examples of NP problems are given as follows:
 Traveling Salesman
 N-Queens
 Classroom Scheduling
 Packing
 Scheduling
Prof. Dr. K. Adisesha
101
Greedy Technique

NP-Complete Problems:
NP-Complete (or NPC) problems are a set of problems that are well connected.
 If anyone finds a polynomial-time algorithm for even one NP-Complete
problem, it implies a polynomial-time algorithm for all NP-Complete problems.
 NP-complete problems are a subset of the larger class of NP (nondeterministic
polynomial time) problems.
 NP problems are a class of computational problems that can be solved in polynomial
time by a non-deterministic machine and can be verified in polynomial time by a
deterministic Machine.

Prof. Dr. K. Adisesha


102
Greedy Technique

N Queen Problem:
The N Queen is the problem of placing N chess queens on an N×N chessboard so that no
two queens attack each other.
 Given an integer n, return all distinct solutions to the n-queens puzzle. You may return
the answer in any order.
Input: n = 4
Output: [[".Q..","...Q","Q...","..Q."],
["..Q.","Q...","...Q",".Q.."]]

Prof. Dr. K. Adisesha


103
Design and Analysis of Algorithms

Thank you:

All the best

Prof. Dr. K. Adisesha
