Unit II Search-Final Updated Version-1
Search Algorithms
1
Outline
(What is searching, state space search, state space search examples, types of searching)
Uninformed Search(Blind search)
Breadth first search
Depth first search
Uniform cost search
2
Searching-Examples
General Searching: This involves finding
solutions to problems.
Website Searching: It refers to finding answers
to questions on websites.
City Searching: In a city, it’s about finding the
correct path.
Chess Searching: In chess, it’s finding the next
best move.
3
Searching-Examples
4
Searching-Examples
Identify:
• Initial State
• Goal
• Actions
• Draw the state
space for the
next iteration.
6
Difference between Informed and
Uninformed Search
Uninformed Search (Blind Search) | Informed Search
8
Breadth-First Search(BFS)
• It is the simplest form of blind search. In this technique the root node is expanded first, then all of its successors are expanded, then their successors, and so on.
• It explores all the nodes at a given depth before proceeding to the next level.
• This means that all immediate children of a node are explored before any of the children's children are considered.
• It uses a queue for its implementation.
9
Breadth-First Search(BFS)
Algorithm:
i) Place the starting node on the queue.
ii) If the queue is empty, return failure and stop.
iii) If the first element on the queue is a GOAL node, return success and stop.
ELSE
iv) Remove and expand the first element of the queue and place its children at the end of the queue.
v) Go to step (ii).
10
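The steps above can be sketched in Python. This is a minimal sketch; the example graph and node names are illustrative, not from the slides.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand the shallowest node first using a queue."""
    queue = deque([[start]])                # queue of paths; start with the root
    visited = {start}
    while queue:                            # step ii: fail if the queue is empty
        path = queue.popleft()              # step iv: remove the first element
        node = path[-1]
        if node == goal:                    # step iii: goal test
            return path
        for child in graph.get(node, []):   # place children at the end
            if child not in visited:
                visited.add(child)
                queue.append(path + [child])
    return None                             # failure

# Illustrative graph as an adjacency list
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G']}
print(bfs(graph, 'A', 'G'))  # ['A', 'B', 'E', 'G']
```

Because whole levels are explored in order, the first path found to the goal is the shallowest one.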
This is the travelled path
11
Breadth-First Search(BFS):Exercise
12
Breadth-First Search(BFS):Exercise
13
Breadth-First Search(BFS):Exercise
14
Breadth-First Search(BFS):Exercise
15
Breadth-First Search(BFS):Exercise
16
Breadth-First Search(BFS):Exercise
17
Advantages and disadvantages of BFS
Advantages of BFS
BFS has some advantages, given below:
1. BFS will never get trapped exploring a blind alley.
2. It is guaranteed to find a solution if one exists.
Disadvantages of BFS
BFS also has certain disadvantages, given below:
1. Time complexity and space complexity are both exponential. This is a very big hurdle.
2. All nodes are generated in BFS, so even unwanted nodes have to be remembered (stored in the queue), which is of no practical use to the search.
18
Depth First Search (DFS)
• DFS begins by expanding the initial node, generating all successors of the initial node and testing them.
• DFS is characterized by the expansion of the most recently generated, or deepest, node first.
• DFS needs to store the path from the root to the leaf node as well as the unexpanded nodes.
• It is neither complete nor optimal: if DFS goes down an infinite branch, it will not end until a goal state is found, so a depth cut-off point d is often imposed.
• If d is set too shallow, the goal may be missed.
• If d is set too deep, extra computation may be performed.
19
Depth First Search (DFS)
Starts from the root node and follows each path to its greatest depth before moving to the next path.
20
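DFS can be sketched in Python with an explicit stack instead of a queue; the example graph is illustrative, not from the slides.

```python
def dfs(graph, start, goal):
    """Depth-first search: expand the most recently generated (deepest) node first."""
    stack = [[start]]                       # stack of paths
    visited = set()
    while stack:
        path = stack.pop()                  # LIFO: deepest path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the leftmost child is explored first
        for child in reversed(graph.get(node, [])):
            if child not in path:
                stack.append(path + [child])
    return None

# Illustrative graph
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
print(dfs(graph, 'A', 'F'))  # ['A', 'C', 'F']
```

Note that only the current path and the unexpanded siblings are stored, which is why DFS needs much less memory than BFS.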
This is the explored path
21
Depth First Search (DFS):Exercise
22
This is the explored path
23
Depth First Search (DFS):Exercise
24
Depth First Search (DFS):Exercise
25
Advantages of DFS
DFS has some advantages, given below:
1. Memory requirements in DFS are low, as only the nodes on the current path are stored.
2. It takes less time to reach the goal node if the traversal follows the right path.
Disadvantages of DFS
1. There is no guarantee of finding a solution.
2. This type of search can go on and on, deeper and deeper into the search, and thus we can get lost. This is referred to as a blind alley.
26
Depth-Limited Search Algorithm
27
Depth-Limited Search Algorithm
28
Sr.No | Breadth First Search (BFS) | Depth First Search (DFS)
1 | BFS stands for Breadth First Search. | DFS stands for Depth First Search.
2 | BFS uses a Queue data structure for finding the shortest path. | DFS uses a Stack data structure.
3 | BFS can be used to find the single-source shortest path in an unweighted graph, because in BFS we reach a vertex with the minimum number of edges from a source vertex. | In DFS, we might traverse through more edges to reach a destination vertex from a source.
4 | BFS is more suitable for searching vertices that are closer to the given source. | DFS is more suitable when there are solutions away from the source.
5 | BFS considers all neighbours first and is therefore not suitable for decision trees used in games or puzzles. | DFS is more suitable for game or puzzle problems: we make a decision, then explore all paths through this decision, and if this decision leads to a win situation, we stop.
29
Water jug problem and its state representation
Suppose we have 2 jugs of different capacities, for example 2 litres and 3 litres.
Goal: get exactly 1 litre of water in the 2-litre jug, i.e. reach state < 1, 0 >.
A state is represented as < x, y >, where x is the integer amount of water in the 2-litre jug and y is the integer amount of water in the 3-litre jug.
State space for the first iterations:
< 0, 0 > (initial state)
< 0, 3 >   < 2, 0 >
< 2, 1 >   < 2, 3 >
< 0, 1 >   < 2, 0 > (repeated state: not possible to select this option)
30
Water jug problem and its state representation
We are given two jugs, a four-gallon one and a three-gallon one. Neither has any measuring markers on it. There is a pump which can be used to fill the jugs with water. How can we get exactly two gallons of water into the four-gallon jug? This is shown in the figure.
The state space for this problem can be described as the set of
ordered pairs of integers (x, y), such that x = 0, 1, 2, 3, or 4 and y = 0,
1, 2, or 3.
Here x denotes the number of gallons of water in the four-gallon jug,
and y is the quantity of water in the three-gallon jug.
Please note that the start state is (0, 0), and the goal state is (2, n) for any
value of n, as the problem does not specify how many gallons need to be
filled in the three-gallon jug (0 ≤ n ≤ 3). Also, note that such problems may
have multiple initial states and many goal states.
31
One solution can be :
32
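A solution can also be found mechanically by running BFS over the state space (x, y) described above. The fill, empty, and pour operators follow the problem statement; the function and variable names are illustrative.

```python
from collections import deque

def water_jug(cap_x=4, cap_y=3, target_x=2):
    """BFS over states (x, y): fill a jug, empty a jug, or pour between them."""
    def successors(x, y):
        return {
            (cap_x, y), (x, cap_y),                              # fill a jug
            (0, y), (x, 0),                                      # empty a jug
            (x - min(x, cap_y - y), y + min(x, cap_y - y)),      # pour x -> y
            (x + min(y, cap_x - x), y - min(y, cap_x - x)),      # pour y -> x
        }
    start = (0, 0)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if x == target_x:                     # goal: (2, n) for any n
            return path
        for s in successors(x, y):
            if s not in seen:
                seen.add(s)
                queue.append(path + [s])
    return None

print(water_jug())
```

Because BFS explores level by level, the returned sequence of states uses the fewest operator applications.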
Uniform Cost Search Algorithm
35
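Uniform cost search expands the frontier node with the lowest path cost g(n) first, using a priority queue. A minimal Python sketch; the weighted graph below is illustrative, not the slide's example.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n) first."""
    frontier = [(0, start, [start])]          # (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                      # goal test on expansion: optimal
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step_cost in graph.get(node, []):
            if child not in explored:
                heapq.heappush(frontier, (cost + step_cost, child, path + [child]))
    return None

# Illustrative weighted graph: node -> list of (neighbour, edge cost)
graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 2), ('G', 9)], 'B': [('G', 2)]}
print(uniform_cost_search(graph, 'S', 'G'))  # (5, ['S', 'A', 'B', 'G'])
```

Testing the goal only when a node is popped (not when it is generated) is what guarantees the returned cost is minimal.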
Uniform Cost Search Algorithm: Problem Example
36
Find the path using Uniform Cost Search algorithm
37
Informed Search
Informed search (guided):
In this type of search, we use additional knowledge or
heuristics to guide the exploration process.
Informed search algorithms, also known as heuristic search algorithms, use additional information (heuristics) to find solutions more efficiently.
Here are some of the key informed search algorithms:
• Greedy Best-First Search
• A* Search
38
Heuristic functions
Heuristic functions are the most common form in
which additional knowledge of the problem is passed
to the search algorithm.
It is denoted by h(n).
h(n)=estimated cost of the cheapest path from the
state at node n to a goal state. (for goal state: h(n)=0)
Example: in route planning, the estimate of the cost of the cheapest path might be the straight-line distance between two cities.
39
Informed Search:GBFS
Greedy Best-First Search:
This algorithm expands the node that appears most promising
based on a heuristic function h(n), which estimates the cost
from the node to the goal.
It is efficient but not guaranteed to find the most cost-effective
path.
For example, Greedy Best-First Search is efficient because it
quickly finds a path to the goal by always expanding the most
promising node based on a heuristic.
40
Informed Search:GBFS
41
GBFS Working
• Greedy Best First Search works by evaluating the cost of each
possible path and then expanding the path with the lowest
heuristic cost.
• The algorithm uses a heuristic function to determine which
path is the most promising.
• If the heuristic cost of the current path is lower than the estimated cost of the remaining paths, then the current path is chosen. This process is repeated until the goal is reached.
42
GBFS Problem 1
44
45
46
47
48
49
50
GBFS Problem 2
51
Solution: GBFS Problem 2
52
Disadvantages of Greedy Best-First Search
• Not Always Optimal: It doesn’t guarantee finding the
shortest path because it only considers the heuristic value
and ignores the actual path cost.
• Can Get Stuck: The algorithm can get stuck in loops or dead-
end paths if the heuristic is misleading.
• Local Optima: It may get trapped in local optima, where it
finds a solution that appears best locally but is not the best
overall.
• Heuristic Dependency: The performance heavily depends on
the quality of the heuristic function. A poor heuristic can
lead to inefficient searches.
• Incomplete: It is not guaranteed to find a solution if one
exists, especially in cases where the search space is large and
complex.
53
Informed Search: A* Search
• This algorithm combines the cost to reach a node
and the estimated cost from that node to the goal.
• It uses the function f(n)=g(n)+h(n),
• where g(n) is the cost from the start node to node n,
and h(n) is the estimated cost from n to the goal.
• A* is both complete and optimal when the
heuristic is admissible.
54
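The evaluation f(n) = g(n) + h(n) can be sketched in Python; the weighted graph and the (admissible) heuristic values below are illustrative, not from the slides.

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(child, float('inf')):   # better path to child
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h[child], new_g, child, path + [child]))
    return None

# Illustrative weighted graph with an admissible heuristic (h never overestimates)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5)],
         'B': [('C', 2)], 'C': [('G', 3)]}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (8, ['S', 'A', 'B', 'C', 'G'])
```

With an admissible h, the first time the goal is popped its g value is the optimal path cost.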
A* Problem 1
55
Solution: A* Problem 1
56
Solution: A* Problem 1
57
Solution: A* Problem 1
58
A* Problem 2
59
Disadvantages of A* Search
60
Example of Heuristic function for 8 Puzzle
Problem
61
Example of Heuristic function for 8 Puzzle
Problem
62
Hamming Distance
63
Manhattan Distance
64
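Both distances can be computed in a few lines of Python for a 3x3 board. The encoding below (boards as flat tuples, blank as 0 and excluded from the count) is an assumed convention, and the example boards are illustrative.

```python
def hamming(state, goal):
    """Hamming distance: number of tiles not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """Manhattan distance: sum over tiles of |row diff| + |column diff| to the goal cell."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:                      # skip the blank
            continue
        j = goal.index(tile)               # where this tile should be
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

# Boards as flat tuples, row by row; 0 is the blank
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(hamming(state, goal), manhattan(state, goal))  # 2 2
```

Both functions can serve as h(n) for the 8-puzzle; Manhattan distance dominates Hamming distance and usually guides the search better.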
Concept of Heuristic Admissibility
Admissibility
For the heuristic to be admissible, it must never overestimate the actual cost to reach
the goal. In this case, the Manhattan Distance is admissible because it always provides a
lower or equal estimate compared to the actual shortest path in a grid with only
horizontal and vertical movements.
Example Calculation
•Current Node: (2, 3)
•Goal Node: (4, 4)
Using the Manhattan Distance:
h(2,3)=∣2−4∣+∣3−4∣=2+1=3
The actual shortest path cost might be 3 or more, but it will never be less than 3.
Therefore, the heuristic is admissible because it does not overestimate the cost.
65
Applying Greedy Best First Search For 8
Puzzle Problem
66
Applying Greedy Best First Search For 8
Puzzle Problem
67
Applying Greedy Best First Search For 8
Puzzle Problem
68
Applying Greedy Best First Search For 8
Puzzle Problem
69
Difference between Informed and
Uninformed Search
Uninformed Search (Blind Search) | Informed Search
70
Local Search Algorithm-Hill Climbing
71
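Hill climbing keeps moving to a better neighbour until none exists, so it can stop at a local maximum. A minimal Python sketch on a 1-D objective; the function, bounds, and step size are illustrative assumptions.

```python
def hill_climb(f, x, step=1, lo=0, hi=10):
    """Move to a better neighbour until no neighbour improves (a local maximum)."""
    while True:
        neighbours = [n for n in (x - step, x + step) if lo <= n <= hi]
        best = max(neighbours, key=f)
        if f(best) <= f(x):        # no improvement: stuck at a (local) peak
            return x
        x = best

# Illustrative objective with a local maximum at x=2 and the global maximum at x=8
f = lambda x: -(x - 8) ** 2 + 36 if x > 5 else -(x - 2) ** 2 + 10
print(hill_climb(f, 0))   # climbs to the local maximum 2
print(hill_climb(f, 6))   # climbs to the global maximum 8
```

The two calls show the classic weakness: the result depends entirely on the starting point.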
Local Search Algorithm-Hill Climbing
72
Local Search Algorithm-Hill Climbing
73
Local Search Algorithm-Hill Climbing
74
Local Search Algorithm-Hill Climbing
75
Local Search Algorithm-Hill Climbing
76
Local Search Algorithm-Hill Climbing
77
MiniMax Algorithm
78
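Minimax evaluates the game tree recursively: Max picks the largest child value, Min picks the smallest, and leaves return their utility. A minimal Python sketch; the game tree and leaf utilities are illustrative.

```python
def minimax(node, maximizing, tree, values):
    """Max picks the largest child value; Min the smallest; leaves return their utility."""
    if node not in tree:                 # leaf node
        return values[node]
    children = [minimax(c, not maximizing, tree, values) for c in tree[node]]
    return max(children) if maximizing else min(children)

# Illustrative two-ply game tree: A is a Max node, B and C are Min nodes
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
values = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
print(minimax('A', True, tree, values))  # 3
```

Here B backs up min(3, 5) = 3, C backs up min(2, 9) = 2, and Max at A chooses max(3, 2) = 3.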
MiniMax Algorithm
79
MiniMax Algorithm
80
MiniMax Algorithm
81
MiniMax Algorithm
82
MiniMax Algorithm
83
MiniMax Algorithm-Practice Problem
84
Alpha Beta Pruning
• Alpha-beta pruning is a modified version of the minimax
algorithm.
• It is an optimization technique for the minimax
algorithm.
• As we have seen, the number of game states the minimax algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can roughly cut it in half.
• Hence there is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning.
• It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning.
85
Alpha Beta Pruning
Efficiency:
By pruning these branches, the algorithm significantly
reduces the number of nodes it needs to evaluate,
making the search process faster and more efficient.
87
How Alpha-Beta Pruning Works
The Alpha-Beta pruning algorithm traverses the game tree similarly to Minimax but
prunes branches that do not need to be explored. The steps are discussed below:
Initialization: Start with alpha set to negative infinity and beta set to positive infinity.
Max Node Evaluation:
For each child of a Max node:
• Evaluate the child node using the Minimax algorithm with Alpha-Beta pruning.
• Update alpha: α = max(α, child value).
• If alpha is greater than or equal to beta, prune the remaining children (beta cutoff).
Min Node Evaluation:
For each child of a Min node:
• Evaluate the child node using the Minimax algorithm with Alpha-Beta pruning.
• Update beta: β = min(β, child value).
• If beta is less than or equal to alpha, prune the remaining children (alpha cutoff).
88
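The steps above can be sketched in Python; the game tree and leaf values are illustrative (in this tree, leaf G is pruned by a beta cutoff at Min node C).

```python
def alphabeta(node, alpha, beta, maximizing, tree, values):
    """Minimax with alpha-beta cutoffs; skips children that cannot affect the result."""
    if node not in tree:                       # leaf: return its utility
        return values[node]
    if maximizing:
        best = float('-inf')
        for child in tree[node]:
            best = max(best, alphabeta(child, alpha, beta, False, tree, values))
            alpha = max(alpha, best)           # update alpha
            if alpha >= beta:                  # beta cutoff
                break
        return best
    best = float('inf')
    for child in tree[node]:
        best = min(best, alphabeta(child, alpha, beta, True, tree, values))
        beta = min(beta, best)                 # update beta
        if beta <= alpha:                      # alpha cutoff
            break
    return best

# Illustrative tree: after B backs up 3, node C sees F = 2, so beta (2) <= alpha (3)
# and G is never evaluated
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
values = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
print(alphabeta('A', float('-inf'), float('inf'), True, tree, values))  # 3
```

The pruned result is identical to plain minimax on the same tree; only the number of nodes examined changes.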
How Alpha-Beta Pruning Works
90
Problem on Alpha-Beta Pruning
• Consider an example of two-player search tree to understand the working of
Alpha-beta pruning.
Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
91
Problem on Alpha-Beta Pruning
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.
92
Problem on Alpha-Beta Pruning
93
Problem on Alpha-Beta Pruning
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A.
At node A, the value of alpha is changed: the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞.
These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0: max(3, 0) = 3; and then with the right child, which is 1: max(3, 1) = 3. α remains 3, but the node value of F becomes 1.
94
Problem on Alpha-Beta Pruning
95
Problem on Alpha-Beta Pruning
96
Problem on Alpha-Beta Pruning
97
Problem on Alpha-Beta Pruning
98
Problem on Alpha-Beta Pruning
99
Problem on Alpha-Beta Pruning
100
Problem on Alpha-Beta Pruning-Homework
Which nodes will be pruned?
101
Problem on Alpha-Beta Pruning-Homework
Which nodes will be pruned?
102
Problem on Alpha-Beta Pruning-Homework
Which nodes will be pruned?
103
Genetic Algorithm
What is a Genetic Algorithm?
A genetic algorithm (GA) is a search heuristic inspired by Charles Darwin’s theory of
natural evolution.
This algorithm reflects the process of natural selection where the fittest individuals are
selected for reproduction to produce the offspring of the next generation.
Key Concepts
• Population: A set of potential solutions to the problem.
• Chromosomes: A representation of a solution, often encoded as a string of bits.
• Genes: Elements of a chromosome, representing a part of the solution.
• Fitness Function: A function that evaluates how close a given solution is to the
optimum.
• Selection: The process of choosing the fittest individuals for reproduction.
• Crossover (Recombination): Combining two parent solutions to produce offspring.
• Mutation: Randomly altering genes in a chromosome to maintain genetic diversity.
104
Genetic Algorithm
Steps of a Genetic Algorithm
1.Initialization: Generate an initial population of potential solutions
randomly.
2.Evaluation: Calculate the fitness of each individual in the
population.
3.Selection: Select pairs of individuals based on their fitness to
reproduce.
4.Crossover: Combine pairs of individuals to produce offspring.
5.Mutation: Apply random changes to individual genes in the
offspring.
6.Replacement: Form a new population by replacing some or all of the
old population with the new offspring.
7. Termination: Repeat the process until a stopping criterion is met (e.g., a solution is good enough or a maximum number of generations is reached).
105
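The seven steps above can be sketched as a toy GA that maximizes the number of 1-bits in a bit string (the classic OneMax problem). All parameters (string length, population size, rates) are illustrative.

```python
import random

def genetic_algorithm(length=12, pop_size=20, generations=60, mutation_rate=0.05):
    """Toy GA maximizing the number of 1-bits in a bit string (OneMax)."""
    random.seed(0)
    fitness = lambda ind: sum(ind)                        # 2. evaluation
    pop = [[random.randint(0, 1) for _ in range(length)]  # 1. initialization
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():                                     # 3. tournament selection
            return max(random.sample(pop, 3), key=fitness)
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, length)             # 4. one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]                      # 5. mutation
            offspring.append(child)
        pop = offspring                                   # 6. replacement
        if max(map(fitness, pop)) == length:              # 7. termination
            break
    return max(pop, key=fitness)

best = genetic_algorithm()
print(best, sum(best))
```

OneMax is a standard sanity check for GA implementations: the fitness landscape is smooth, so selection plus crossover should push the population toward the all-ones string.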
Genetic Algorithm: Flow Chart
Introduction to Genetic Algorithms (youtube.com)
107
Genetic Algorithm: Applications
108
Genetic Algorithm: Advantages
• Robustness: Genetic algorithms are highly robust and can handle a wide range of
problem types, including those with complex, non-linear, and multi-modal
landscapes.
• Parallelism: They can explore multiple solutions simultaneously, reducing the risk
of getting stuck in local optima.
• Flexibility: GAs can be applied to various types of optimization problems, including
those with discrete, continuous, or mixed variables.
• No Derivative Information Needed: Unlike gradient-based methods, GAs do not
require derivative information, making them suitable for problems where the
objective function is not differentiable.
• Global Search Capability: They are effective at performing global searches, which
is useful for finding the global optimum in large and complex search spaces.
• Adaptability: GAs can adapt to changes in the problem environment, making them
suitable for dynamic and real-time applications.
109
Genetic Algorithm: Limitations
• Computational Cost: GAs can be computationally expensive, especially for large
populations and many generations.
• Parameter Sensitivity: The performance of GAs can be sensitive to the choice of
parameters such as population size, mutation rate, and crossover rate.
• Premature Convergence: There is a risk of premature convergence to suboptimal
solutions, especially if the diversity in the population is not maintained.
• No Guarantee of Optimality: GAs do not guarantee finding the absolute optimal
solution, but rather a good enough solution within a reasonable time frame.
• Complexity in Implementation: Designing an effective fitness function and
encoding scheme can be challenging and problem-specific.
• Stochastic Nature: The random processes involved in selection, crossover, and
mutation can lead to variability in results, requiring multiple runs to obtain reliable
solutions.
110
Genetic Algorithm
Genetic Algorithm How Genetic Algorithm Works Evolutionary Algorithm Machine - YouTube
111
Thank You
112