
Unit II

Search Algorithms

Dr. Praveen Kumar Loharkar

1
Outline
(What is searching, state space search, state space search
examples, types of searching)
• Uninformed Search (Blind Search)
Breadth first search
Depth first search
Uniform cost search

2
Searching-Examples
General Searching: This involves finding
solutions to problems.
Website Searching: It refers to finding answers
to questions on websites.
City Searching: In a city, it’s about finding the
correct path.
Chess Searching: In chess, it’s finding the next
best move.

3
Searching-Examples

4
Searching-Examples

Search Space for 8 Puzzle Problem

5
Problem Formulation
1. The puzzle consists of a 3x3 square frame board with eight movable tiles
numbered from 1 to 8.
2. One square on the board is empty, allowing adjacent tiles to be shifted.
3. The objective is to find a sequence of tile movements that leads from an
initial configuration (START STATE) to a final configuration (GOAL STATE).

Identify:
• Initial State
• Goal
• Actions
• Draw the state space for the next iteration.

6
Difference between Informed and Uninformed Search

Uninformed Search (Blind Search) vs Informed Search:
• Knowledge: Uninformed search operates without any domain-specific information or knowledge about the problem space; informed search utilizes domain-specific knowledge or information to find solutions more efficiently.
• Information: Uninformed search is search without information; informed search is search with information.
• Speed: Uninformed search tends to be time-consuming; informed search typically reaches solutions quickly.
• Complexity (Time and Space): Uninformed search may lead to more complex solutions in terms of time and space requirements; informed search often results in less complex solutions in terms of time and space requirements.
• Examples: Uninformed: Depth-First Search (DFS), Breadth-First Search (BFS), etc.; Informed: GBFS, A* search, etc.

7
Types of Searching
There are many different types of search algorithms. Basically, there are two
types of searches:
1. Uninformed or blind search (unguided): This approach explores
options without using any additional information or heuristics.
2. Informed or heuristic search (guided): In this type of search, we use
additional knowledge or heuristics to guide the exploration process.

8
Breadth-First Search(BFS)
• It is the simplest form of blind search. In this
technique the root node is expanded first, then all of its
successors are expanded, then their successors, and so on.
• It explores all the nodes at a given depth before proceeding
to the next level.
• This means that all immediate children of a node are
explored before any of the children's children are
considered.
• It uses a queue (FIFO) for implementation.

9
Breadth-First Search(BFS)
Algorithm:
i) Place the starting node on the queue.
ii) If the queue is empty, then return failure and stop.
iii) If the first element on the queue is the GOAL node, then return
success and stop.
ELSE
iv) Remove and expand the first element from the queue and place its
children at the end of the queue.
v) Go to step (ii).

10
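The queue-based procedure above can be sketched in Python. This is a minimal illustration, not part of the slides; the small `graph` dictionary, `start`, and `goal` values below are assumed examples.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue."""
    queue = deque([[start]])              # the queue holds whole paths, beginning with the root
    visited = {start}
    while queue:                          # step (ii): fail if the queue empties
        path = queue.popleft()            # step (iv): remove the first element
        node = path[-1]
        if node == goal:                  # step (iii): goal test
            return path
        for child in graph.get(node, []):
            if child not in visited:      # skip already-seen nodes (a standard addition to the slide's steps)
                visited.add(child)
                queue.append(path + [child])   # children go to the end of the queue
    return None                           # no solution exists

# Usage on a small assumed unweighted graph
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(bfs(graph, 'S', 'G'))               # ['S', 'A', 'C', 'G']
```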
This is the traversal path

11
Breadth-First Search(BFS):Exercise

12
Breadth-First Search(BFS):Exercise

13
Breadth-First Search(BFS):Exercise

14
Breadth-First Search(BFS):Exercise

15
Breadth-First Search(BFS):Exercise

16
Breadth-First Search(BFS):Exercise

17
Advantages and disadvantages of BFS

Advantages of BFS
BFS has some advantages, which are given below:
1. BFS will never get trapped exploring a blind alley.
2. It is guaranteed to find a solution if one exists.

Disadvantages of BFS
BFS also has certain disadvantages, given below:
1. Time complexity and space complexity are both exponential, i.e., O(b^d).
This is a very big hurdle.
2. All nodes must be generated in BFS, so even unwanted nodes that are of no
practical use to the search have to be remembered (stored in the queue).

18
Depth First Search (DFS)
• DFS begins by expanding the initial node, generating
all successors of the initial node and testing them.
• DFS is characterized by the expansion of the
most recently generated, or deepest, node first.
• DFS needs to store the path from the root to the leaf node
as well as the unexpanded nodes.
• It is neither complete nor optimal. If DFS goes down an
infinite branch, it will not terminate until a goal state is
found, unless a depth cut-off point d is imposed.
• If d is set too shallow, the goal may be missed.
• If d is set too deep, extra computation may be
performed.
19
Depth First Search (DFS)

• Starts from the root node and follows each path to its greatest depth before
moving to the next path.
• It is implemented using a STACK (LIFO).
• This algorithm is a recursive algorithm.
ALGORITHM:
i) Push the root node onto the stack.
ii) Repeat until the stack is empty:
a) Pop a node.
1) If the node is the Goal, then stop.
2) Otherwise, push all of its children onto the stack.

20
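A stack-based (LIFO) sketch of the procedure above in Python; the example graph and node names are assumptions for illustration only.

```python
def dfs(graph, start, goal):
    """Depth-first search: always expand the most recently generated (deepest) node."""
    stack = [[start]]                          # stack of paths; the root goes on first
    while stack:                               # repeat until the stack is empty
        path = stack.pop()                     # pop the most recently pushed node
        node = path[-1]
        if node == goal:                       # goal test
            return path
        for child in reversed(graph.get(node, [])):
            if child not in path:              # avoid cycles along the current path
                stack.append(path + [child])   # push all children onto the stack
    return None

# Usage on a small assumed graph
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E'], 'C': [], 'D': ['G'], 'E': []}
print(dfs(graph, 'S', 'G'))                    # ['S', 'A', 'D', 'G']
```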
This is the explored path

21
Depth First Search (DFS): Exercise

22
This is the explored path

23
Depth First Search (DFS):Exercise

24
Depth First Search (DFS):Exercise

25
Advantages of DFS
DFS has some advantages. They are given below:
1. Memory requirements in DFS are low, as only the nodes on the current path
are stored.
2. It takes less time to reach the goal node if it happens to traverse the right path.

Disadvantages of DFS
1. There is no guarantee of finding a solution.
2. This type of search can go on and on, deeper and deeper into the
search, and thus we can get lost. This is referred to as a blind alley.

26
Depth-Limited Search Algorithm

• A depth-limited search algorithm is similar to depth-first search
with a pre-determined depth limit.
• Depth-limited search can solve the drawback of the infinite path
in depth-first search.
• In this algorithm, the node at the depth limit is treated as if it has no
further successor nodes.
Advantages:
• Depth-limited search is memory efficient.
Disadvantages:
• Depth-limited search also has the disadvantage of incompleteness.
• It may not be optimal if the problem has more than one solution.

27
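A small recursive Python sketch of depth-limited search, assuming the same kind of graph dictionary as in the earlier sketches; a node at the depth limit is simply treated as having no successors.

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited search: ordinary DFS, but nodes at depth `limit` get no successors."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:                            # depth limit reached: treat this node as a leaf
        return None
    for child in graph.get(node, []):
        if child not in path:                 # avoid cycles along the current path
            result = dls(graph, child, goal, limit - 1, path + [child])
            if result is not None:
                return result
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': []}
print(dls(graph, 'S', 'G', limit=1))          # None: limit too shallow, the goal is missed
print(dls(graph, 'S', 'G', limit=2))          # ['S', 'B', 'G']
```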
Depth-Limited Search Algorithm

28
Breadth First Search (BFS) vs Depth First Search (DFS)

1. BFS stands for Breadth First Search. DFS stands for Depth First Search.
2. BFS uses a Queue data structure for finding the shortest path. DFS uses a Stack data structure.
3. BFS can be used to find the single-source shortest path in an unweighted graph, because in BFS we reach a vertex with the minimum number of edges from a source vertex. In DFS, we might traverse through more edges to reach a destination vertex from a source.
4. BFS is more suitable for searching vertices that are closer to the given source. DFS is more suitable when there are solutions away from the source.
5. BFS considers all neighbours first and is therefore not suitable for decision-making trees used in games or puzzles. DFS is more suitable for game or puzzle problems: we make a decision, then explore all paths through this decision, and if this decision leads to a win situation, we stop.

29
Water jug problem and its state representation
Suppose we have 2 jugs of different capacities.
Example: the capacities of the two jugs are 2 litre and 3 litre.
Goal: to get exactly 1 litre of water in the 2-litre jug, i.e. reach the state < 1, 0 >.

The state is represented as < x, y >, where
x = integer amount of water in the 'x'-litre jug, and
y = integer amount of water in the 'y'-litre jug.

State space (partial tree):
< 0, 0 >  (Initial state)
  < 0, 3 >      < 2, 0 >
    < 2, 1 >    < 2, 3 >
      < 0, 1 >  < 2, 0 >  (not possible to select this option)
        < 1, 0 >  (Goal state)

30
Water jug problem and its state representation
We are given two jugs, a four-gallon one and a three-gallon one. Neither has
any measuring markers on it. There is a pump which can be used to fill the
jugs with water. How can we get exactly two gallons of water into the four-
gallon jug? This is shown in the figure.
The state space for this problem can be described as the set of
ordered pairs of integers (x, y), such that x = 0, 1, 2, 3, or 4 and y = 0,
1, 2, or 3.
Here x denotes the number of gallons of water in the four-gallon jug,
and y is the quantity of water in the three-gallon jug.
Please note that the start state is (0, 0), and the goal state is (2, n) for any
value of n, as the problem does not specify how many gallons need to be
filled in the three-gallon jug (0 ≤ n ≤ 3). Also, note that such problems may
have multiple initial states and many goal states.

31
One solution can be :

Second solution can be :

32
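The four-gallon / three-gallon problem above can be solved mechanically by searching the (x, y) state space. The sketch below uses BFS and assumes the usual fill / empty / pour operators; that rule set is an interpretation of the production rules the slides intend.

```python
from collections import deque

def water_jug_bfs(capacity_x=4, capacity_y=3, goal_x=2):
    """BFS over states (x, y): x gallons in the 4-gallon jug, y in the 3-gallon jug."""
    def successors(x, y):
        return {
            (capacity_x, y),                          # fill the x jug
            (x, capacity_y),                          # fill the y jug
            (0, y),                                   # empty the x jug
            (x, 0),                                   # empty the y jug
            # pour x -> y until y is full or x is empty
            (x - min(x, capacity_y - y), y + min(x, capacity_y - y)),
            # pour y -> x until x is full or y is empty
            (x + min(y, capacity_x - x), y - min(y, capacity_x - x)),
        }

    start = (0, 0)
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if x == goal_x:                               # goal: exactly 2 gallons in the 4-gallon jug
            return path
        for state in successors(x, y):
            if state not in visited:
                visited.add(state)
                queue.append(path + [state])
    return None

print(water_jug_bfs())
# BFS returns a shortest action sequence, e.g.
# [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
```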
Uniform Cost Search Algorithm

• Type: Uninformed search algorithm.
• Goal: Finds the least-cost path from the start node to the goal node.
• Mechanism: Expands the node with the lowest path cost first.
• Cost Function: Uses a cost function g(n), which represents the
cumulative cost from the start node to node n.
• Optimality: Guarantees finding the optimal solution if the cost of
each step is non-negative.
• Completeness: Complete, meaning it will find a solution if one
exists.
• Data Structure: Typically uses a priority queue to manage the
frontier (the set of nodes to be explored).
• Applications: Suitable for scenarios where the path cost is
important, such as network routing, robot navigation, and puzzle
solving.

33
Uniform Cost Search Algorithm:Steps
Steps to solve a problem using Uniform Cost Search (UCS):
1. Initialize:
- Start with the initial node.
- Create a priority queue (often called the frontier) and add the
initial node with a path cost of 0.
2. Expand Node:
- Remove the node with the lowest path cost from the priority
queue.
- If this node is the goal, return the solution (the path from the start
node to this node).
3. Generate Successors:
- For the current node, generate all possible successor nodes.
- Calculate the path cost for each successor by adding the cost of
the edge from the current node to the successor to the current
node's path cost.
4. Add Successors to Frontier:
- Add each successor to the priority queue with its computed path cost,
or update its entry if a cheaper path to it has been found.

34
Uniform Cost Search Algorithm
5. Repeat:
- Repeat steps 2-4 until the priority queue is empty or the
goal node is reached.
6. Solution:
- If the goal node is reached, reconstruct the path from
the start node to the goal node using the parent pointers.
- If the priority queue is empty and the goal node has not
been reached, return that no solution exists.

35
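A compact Python sketch of UCS following the steps above, using `heapq` as the priority queue; the weighted graph at the bottom is an assumed example.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: always expand the frontier node with the lowest cumulative path cost g(n)."""
    frontier = [(0, start, [start])]              # priority queue of (g(n), node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest-cost node first
        if node == goal:
            return cost, path                     # first time the goal is popped = optimal
        for child, step_cost in graph.get(node, []):
            new_cost = cost + step_cost           # g(child) = g(node) + edge cost
            if new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

# Assumed example graph: lists of (neighbour, edge cost)
graph = {
    'S': [('A', 1), ('B', 5)],
    'A': [('B', 2), ('G', 9)],
    'B': [('G', 2)],
}
print(uniform_cost_search(graph, 'S', 'G'))       # (5, ['S', 'A', 'B', 'G'])
```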
Uniform Cost Search Algorithm: Problem Example

36
Find the path using Uniform Cost Search algorithm

37
Informed Search
Informed search (guided):
In this type of search, we use additional knowledge or
heuristics to guide the exploration process.
Informed search algorithms, also known as heuristic search
algorithms, use additional information (heuristics) to find
solutions more efficiently.
Here are some of the key informed search algorithms:
• Greedy Best-First Search
• A* Search

38
Heuristic functions
Heuristic functions are the most common form in
which additional knowledge of the problem is passed
to the search algorithm.
It is denoted by h(n).
h(n)=estimated cost of the cheapest path from the
state at node n to a goal state. (for goal state: h(n)=0)
Example: in route planning the estimate of the cost of the
cheapest path might be the straight line distance between two
cities.

39
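The route-planning example above can be written as a tiny heuristic function; the town names and (x, y) coordinates below are assumed illustration values, not data from the slides.

```python
import math

# Assumed (x, y) map coordinates for three towns; purely illustrative values.
coords = {'A': (0, 0), 'B': (3, 4), 'G': (6, 8)}

def h(city, goal='G'):
    """h(n): straight-line distance from `city` to the goal (0 at the goal itself)."""
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(h('A'))   # 10.0: estimate of the cheapest-path cost from A to G
print(h('B'))   # 5.0
print(h('G'))   # 0.0 at the goal state, as required
```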
Informed Search:GBFS
Greedy Best-First Search:
This algorithm expands the node that appears most promising
based on a heuristic function h(n), which estimates the cost
from the node to the goal.
It is efficient but not guaranteed to find the most cost-effective
path.
For example, Greedy Best-First Search is efficient because it
quickly finds a path to the goal by always expanding the most
promising node based on a heuristic.

40
Informed Search: GBFS

• An efficient algorithm performs its tasks quickly and uses
minimal computational resources (like CPU time and memory).
• It can handle large inputs or complex problems within a
reasonable time frame.

41
GBFS Working
• Greedy Best First Search works by evaluating the cost of each
possible path and then expanding the path with the lowest
heuristic cost.
• The algorithm uses a heuristic function to determine which
path is the most promising.
• If the heuristic cost of the current path is lower than the
estimated cost of the remaining paths, then the current path
is chosen. This process is repeated until the goal is reached.

42
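A minimal Python sketch of the behaviour described above, assuming a graph dictionary and a heuristic table `h`; the frontier is ordered purely by h(n), so the actual path cost is ignored.

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Expand the node with the smallest heuristic value h(n); path cost is ignored."""
    frontier = [(h[start], start, [start])]       # priority queue ordered by h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # most promising node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Assumed example: heuristic values estimating the distance to goal G
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G']}
h = {'S': 6, 'A': 2, 'B': 4, 'G': 0}
print(greedy_best_first_search(graph, h, 'S', 'G'))   # ['S', 'A', 'G']
```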
GBFS Problem 1

Solve the problem using GBFS. Start from S.

43


Solution: GBFS Problem 1
Convert the diagram into a tree diagram.

44
45
46
47
48
49
50
GBFS Problem 2

51
Solution: GBFS Problem 2

52
Disadvantages of Greedy Best-First Search
• Not Always Optimal: It doesn’t guarantee finding the
shortest path because it only considers the heuristic value
and ignores the actual path cost.
• Can Get Stuck: The algorithm can get stuck in loops or dead-
end paths if the heuristic is misleading.
• Local Optima: It may get trapped in local optima, where it
finds a solution that appears best locally but is not the best
overall.
• Heuristic Dependency: The performance heavily depends on
the quality of the heuristic function. A poor heuristic can
lead to inefficient searches.
• Incomplete: It is not guaranteed to find a solution if one
exists, especially in cases where the search space is large and
complex.
53
Informed Search: A* Search
• This algorithm combines the cost to reach a node
and the estimated cost from that node to the goal.
• It uses the function f(n)=g(n)+h(n),
• where g(n) is the cost from the start node to node n,
and h(n) is the estimated cost from n to the goal.
• A* is both complete and optimal when the
heuristic is admissible.

54
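A minimal Python sketch of A* using f(n) = g(n) + h(n); the weighted graph and heuristic values below are assumed examples, with h chosen to be admissible.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]      # entries of (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # lowest f(n) first
        if node == goal:
            return g, path                          # optimal when h is admissible
        for child, cost in graph.get(node, []):
            new_g = g + cost                        # g(child) = g(node) + edge cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None

# Assumed example: edge costs plus an admissible heuristic toward G
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h = {'S': 6, 'A': 5, 'B': 3, 'G': 0}
print(a_star(graph, h, 'S', 'G'))                   # (6, ['S', 'A', 'B', 'G'])
```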
A* Problem 1

55
Solution: A* Problem 1

56
Solution: A* Problem 1

57
Solution: A* Problem 1

58
A* Problem 2

59
Disadvantages of A* Search

1. Heuristic Dependency: The performance of A* heavily
relies on the quality of the heuristic function used. If the
heuristic is not well-designed, the algorithm may not perform
optimally.
2. Memory Consumption: A* requires storing all
generated nodes in memory, which can be problematic for
large graphs or complex problems, leading to high memory
usage.
3. Complexity: The algorithm can become computationally
expensive, especially in scenarios with a high branching
factor or when the heuristic is not efficient.
4. Admissibility Challenge: Designing an admissible
heuristic (one that never overestimates the cost to reach the
goal) can be difficult for certain problems, impacting the
algorithm’s performance.

60
Example of Heuristic function for 8 Puzzle
Problem

61
Example of Heuristic function for 8 Puzzle
Problem

62
Hamming Distance

63
Manhattan Distance

64
Concept of Heuristic Admissibility
Admissibility
For the heuristic to be admissible, it must never overestimate the actual cost to reach
the goal. In this case, the Manhattan Distance is admissible because it always provides a
lower or equal estimate compared to the actual shortest path in a grid with only
horizontal and vertical movements.
Example Calculation
•Current Node: (2, 3)
•Goal Node: (4, 4)
Using the Manhattan Distance:
h(2,3)=∣2−4∣+∣3−4∣=2+1=3
The actual shortest path cost might be 3 or more, but it will never be less than 3.
Therefore, the heuristic is admissible because it does not overestimate the cost.

65
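The Hamming and Manhattan heuristics from the slides above, sketched for the 8-puzzle; representing a state as a 9-element tuple read row by row (0 for the blank) is an assumption made for this illustration.

```python
def hamming(state, goal):
    """Hamming distance: number of tiles not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """Manhattan distance: sum over tiles of |row difference| + |column difference|."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = goal.index(tile)
        total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
    return total

start = (1, 2, 3, 4, 0, 6, 7, 5, 8)     # assumed example configuration
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(hamming(start, goal))              # 2: tiles 5 and 8 are out of place
print(manhattan(start, goal))            # 2: each misplaced tile is one move from its goal square
```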
Applying Greedy Best First Search For 8
Puzzle Problem

66
Applying Greedy Best First Search For 8
Puzzle Problem

67
Applying Greedy Best First Search For 8
Puzzle Problem

68
Applying Greedy Best First Search For 8
Puzzle Problem

69
Difference between Informed and Uninformed Search

Uninformed Search (Blind Search) vs Informed Search:
• Knowledge: Uninformed search operates without any domain-specific information or knowledge about the problem space; informed search utilizes domain-specific knowledge or information to find solutions more efficiently.
• Speed: Uninformed search tends to be time-consuming; informed search typically reaches solutions quickly.
• Complexity (Time and Space): Uninformed search may lead to more complex solutions in terms of time and space requirements; informed search often results in less complex solutions in terms of time and space requirements.
• Examples: Uninformed: Depth-First Search (DFS), Breadth-First Search (BFS), etc.; Informed: GBFS, A* search, etc.

70
Local Search Algorithm-Hill Climbing

71
Local Search Algorithm-Hill Climbing

72
Local Search Algorithm-Hill Climbing

73
Local Search Algorithm-Hill Climbing

74
Local Search Algorithm-Hill Climbing

75
Local Search Algorithm-Hill Climbing

76
Local Search Algorithm-Hill Climbing

77
MiniMax Algorithm

78
MiniMax Algorithm

79
MiniMax Algorithm

80
MiniMax Algorithm

81
MiniMax Algorithm

82
MiniMax Algorithm

83
MiniMax Algorithm-Practice Problem

84
Alpha Beta Pruning
• Alpha-beta pruning is a modified version of the minimax
algorithm.
• It is an optimization technique for the minimax
algorithm.
• As we have seen, in the minimax search algorithm the number of
game states it has to examine is exponential in the depth of the
tree. We cannot eliminate the exponent, but we can roughly cut
it in half.
• Hence there is a technique by which, without checking
each node of the game tree, we can compute the
correct minimax decision; this technique is
called pruning.
• It involves two threshold parameters, alpha and beta,
for future expansion, so it is called alpha-beta pruning.
85
Alpha Beta Pruning

Alpha and Beta Values:
1. Alpha: The best value that the maximizer
(player trying to maximize the score) can
guarantee at that level or above. The initial
value of alpha is -∞.
2. Beta: The best value that the minimizer
(player trying to minimize the score) can
guarantee at that level or below. The initial
value of beta is ∞.
Pruning: During the search, if the algorithm finds a move that
is worse than a previously examined move, it stops
evaluating that branch.
This is because further exploration of that branch cannot
influence the final decision.
86
Alpha Beta Pruning

Efficiency:
By pruning these branches, the algorithm significantly
reduces the number of nodes it needs to evaluate,
making the search process faster and more efficient.

87
How Alpha-Beta Pruning Works

The Alpha-Beta pruning algorithm traverses the game tree similarly to Minimax but
prunes branches that do not need to be explored. The steps are discussed below:
Initialization: Start with alpha set to negative infinity and beta set to positive infinity.
Max Node Evaluation:
For each child of a Max node:
• Evaluate the child node using the Minimax algorithm with Alpha-Beta pruning.
• Update alpha: α = max(α, child value).
• If alpha is greater than or equal to beta, prune the remaining children (beta cutoff).
Min Node Evaluation:
For each child of a Min node:
• Evaluate the child node using the Minimax algorithm with Alpha-Beta pruning.
• Update beta: β = min(β, child value).
• If beta is less than or equal to alpha, prune the remaining children (alpha cutoff).

88
How Alpha-Beta Pruning Works

Condition for Alpha-Beta pruning:
The main condition required for alpha-beta pruning is:
α ≥ β

Key points about alpha-beta pruning:
• The Max player will only update the value of alpha.
• The Min player will only update the value of beta.
• While backtracking the tree, the node values will be passed to
upper nodes instead of the values of alpha and beta.
• We will only pass the alpha and beta values down to the child nodes.

90
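The procedure and the α ≥ β cutoff condition above can be sketched recursively in Python; representing the game tree as nested lists with numeric leaves is an assumption, and the leaf values in the branches that end up pruned are placeholders.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a tree of nested lists; leaves are numbers."""
    if not isinstance(node, list):          # leaf: return its static evaluation value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)       # Max only updates alpha
            if alpha >= beta:               # beta cutoff: prune the remaining children
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)         # Min only updates beta
            if alpha >= beta:               # alpha cutoff: prune the remaining children
                break
        return value

# A tree shaped like the worked example (A -> B, C; B -> D, E; C -> F, G);
# leaf values in the pruned branches are placeholders.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, maximizing=True))     # 3: the optimal value for the maximizer
```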
Problem on Alpha-Beta Pruning
• Consider an example of two-player search tree to understand the working of
Alpha-beta pruning.
Step 1: At the first step, the Max player will make the first move from
node A, where α = -∞ and β = +∞. These values of alpha and beta are
passed down to node B, where again α = -∞ and β = +∞, and node B
passes the same values to its child D.

91
Problem on Alpha-Beta Pruning
Step 2: At node D, the value of α will be calculated, as it is Max's turn.
The value of α is compared first with 2 and then with 3, and max(2,
3) = 3 will be the value of α at node D; the node value will also be 3.

Step 3: Now the algorithm backtracks to node B, where the value of β
will change, as this is Min's turn. Now β = +∞ is compared
with the available subsequent node's value, i.e. min(∞, 3) = 3; hence
at node B now α = -∞ and β = 3.

92
Problem on Alpha-Beta Pruning

In the next step, the algorithm traverses the next successor of node B,
which is node E, and the values α = -∞ and β = 3 are also
passed down.
Step 4: At node E, Max will take its turn, and the value of alpha will
change. The current value of alpha will be compared with 5, so max
(-∞, 5) = 5, hence at node E α = 5 and β = 3. Since α ≥ β, the
right successor of E will be pruned, the algorithm will not
traverse it, and the value at node E will be 5.

93
Problem on Alpha-Beta Pruning
Step 5: In the next step, the algorithm again backtracks the tree, from
node B to node A.
At node A, the value of alpha will be changed.
The maximum available value is 3, as max(-∞, 3) = 3, and β = +∞;
these two values are now passed to the right successor of A, which is
node C.
At node C, α = 3 and β = +∞,
and the same values will be
passed on to node F.
Step 6: At node F, the value of
α is again compared with the left child,
which is 0, giving max(3, 0) = 3, and
then with the right child,
which is 1, giving max(3, 1) = 3, so α
remains 3, but the node value of F
becomes 1.

94
Problem on Alpha-Beta Pruning

Step 7: Node F returns the node value 1 to node C.
At C, α = 3 and β = +∞; here the value of beta will be changed.
It is compared with 1, so min(∞, 1) = 1.
Now at C, α = 3 and β = 1, which satisfies the condition α ≥ β,
so the next child of C, which is G, will be pruned, and the
algorithm will not compute the entire sub-tree rooted at G.

95
Problem on Alpha-Beta Pruning

Step 8: C now returns the value 1 to A.
Here the best value for A is max(3, 1) = 3.
The following is the final game tree, showing the nodes
that were computed and the nodes that were never computed.
Hence the optimal value for the maximizer is 3 in this example.

96
Problem on Alpha-Beta Pruning

97
Problem on Alpha-Beta Pruning

98
Problem on Alpha-Beta Pruning

99
Problem on Alpha-Beta Pruning

100
Problem on Alpha-Beta Pruning-Homework
Which nodes will be pruned?

101
Problem on Alpha-Beta Pruning-Homework
Which nodes will be pruned?

102
Problem on Alpha-Beta Pruning-Homework
Which nodes will be pruned?

103
Genetic Algorithm
What is a Genetic Algorithm?
A genetic algorithm (GA) is a search heuristic inspired by Charles Darwin’s theory of
natural evolution.
This algorithm reflects the process of natural selection where the fittest individuals are
selected for reproduction to produce the offspring of the next generation.

Key Concepts
• Population: A set of potential solutions to the problem.
• Chromosomes: A representation of a solution, often encoded as a string of bits.
• Genes: Elements of a chromosome, representing a part of the solution.
• Fitness Function: A function that evaluates how close a given solution is to the
optimum.
• Selection: The process of choosing the fittest individuals for reproduction.
• Crossover (Recombination): Combining two parent solutions to produce offspring.
• Mutation: Randomly altering genes in a chromosome to maintain genetic diversity.

104
Genetic Algorithm
Steps of a Genetic Algorithm
1.Initialization: Generate an initial population of potential solutions
randomly.
2.Evaluation: Calculate the fitness of each individual in the
population.
3.Selection: Select pairs of individuals based on their fitness to
reproduce.
4.Crossover: Combine pairs of individuals to produce offspring.
5.Mutation: Apply random changes to individual genes in the
offspring.
6.Replacement: Form a new population by replacing some or all of the
old population with the new offspring.
7.Termination: Repeat the process until a stopping criterion is met
(e.g., a solution is good enough or a maximum number of generations is reached).

105
Genetic Algorithm: Flow Chart
Introduction to Genetic Algorithms (youtube.com)

Genetic Algorithm How Genetic Algorithm Works Evolutionary Algorithm Machine - YouTube

106
Genetic Algorithm: Example Use
Example: Solving a Search Problem
Let's consider a simple example where we want to find the maximum value of a function
f(x) = x^2 within a given range.

• Representation: Encode the variable x as a binary string.
• Initialization: Generate a random population of binary strings.
• Fitness Function: Evaluate each string by converting it to a number and computing
f(x).
• Selection: Select the top-performing strings based on their fitness values.
• Crossover: Combine pairs of strings to create new strings.
• Mutation: Randomly flip bits in the new strings.
• Replacement: Replace the old population with the new one.
• Termination: Continue until the maximum number of generations is reached or the
fitness values converge.

107
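A minimal GA sketch for the f(x) = x^2 example above: 5-bit chromosomes encode x in [0, 31], and tournament selection, single-point crossover, and bit-flip mutation follow the listed steps. The population size, mutation rate, and generation count are assumed values chosen only for illustration.

```python
import random

random.seed(0)                                  # reproducible run for illustration

def fitness(chromosome):
    """Decode the binary string to x and evaluate f(x) = x^2."""
    x = int(chromosome, 2)
    return x * x

def select(population):
    """Tournament selection: the fitter of two random individuals survives."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    """Single-point crossover of two 5-bit parents."""
    point = random.randint(1, 4)
    return parent1[:point] + parent2[point:]

def mutate(chromosome, rate=0.1):
    """Flip each bit with a small probability to maintain genetic diversity."""
    return ''.join(bit if random.random() > rate else str(1 - int(bit)) for bit in chromosome)

# 1. Initialization: random population of 5-bit strings (x in 0..31)
population = [''.join(random.choice('01') for _ in range(5)) for _ in range(8)]

for generation in range(20):                    # 7. Termination: fixed number of generations
    # 2-6. Evaluation, selection, crossover, mutation, replacement
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(len(population))]

best = max(population, key=fitness)
print(best, int(best, 2), fitness(best))        # expected to approach '11111' -> 31 -> 961
```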
Genetic Algorithm: Applications

Genetic algorithms are widely used in various fields such as:
• Optimization: Finding the best solution among many
possible options.
• Machine Learning: Feature selection, hyper-parameter
tuning.
• Robotics: Path planning, control systems.
• Game Development: AI for game strategies.

By iteratively improving the population of solutions,
genetic algorithms can efficiently search large and complex
spaces to find optimal or near-optimal solutions.

108
Genetic Algorithm: Advantages
• Robustness: Genetic algorithms are highly robust and can handle a wide range of
problem types, including those with complex, non-linear, and multi-modal
landscapes.
• Parallelism: They can explore multiple solutions simultaneously, reducing the risk
of getting stuck in local optima.
• Flexibility: GAs can be applied to various types of optimization problems, including
those with discrete, continuous, or mixed variables.
• No Derivative Information Needed: Unlike gradient-based methods, GAs do not
require derivative information, making them suitable for problems where the
objective function is not differentiable.
• Global Search Capability: They are effective at performing global searches, which
is useful for finding the global optimum in large and complex search spaces.
• Adaptability: GAs can adapt to changes in the problem environment, making them
suitable for dynamic and real-time applications.

109
Genetic Algorithm: Limitations
• Computational Cost: GAs can be computationally expensive, especially for large
populations and many generations.
• Parameter Sensitivity: The performance of GAs can be sensitive to the choice of
parameters such as population size, mutation rate, and crossover rate.
• Premature Convergence: There is a risk of premature convergence to suboptimal
solutions, especially if the diversity in the population is not maintained.
• No Guarantee of Optimality: GAs do not guarantee finding the absolute optimal
solution, but rather a good enough solution within a reasonable time frame.
• Complexity in Implementation: Designing an effective fitness function and
encoding scheme can be challenging and problem-specific.
• Stochastic Nature: The random processes involved in selection, crossover, and
mutation can lead to variability in results, requiring multiple runs to obtain reliable
solutions.
110
Genetic Algorithm

GA to Drive a Car: GitHub - Gary-The-Cat/SelfDriving

Genetic Algorithm How Genetic Algorithm Works Evolutionary Algorithm Machine - YouTube

111
Thank You

112
