
AI_Module 3

The document discusses problem-solving through search strategies in artificial intelligence, detailing the representation of state spaces, problem formulation, and various search methods including uninformed and informed searches. It outlines the properties of search algorithms such as completeness, optimality, time complexity, and space complexity, and describes specific algorithms like Breadth-First Search, Depth-First Search, and Uniform Cost Search. The text emphasizes the importance of planning and the computational processes agents undertake to achieve their goals in various environments.


Module 3

Solving Problems by Searching

By: Prof. Sheetal Kadam

1
Contents
• Definition, State space representation,
• Problem as a state space search, Problem formulation,
• Well-defined problems
• Solving Problems by Searching,
• Performance evaluation of search strategies,
• Time Complexity, Space Complexity, Completeness, Optimality
• Uninformed Search: Depth First Search, Breadth First Search, Depth Limited Search, Iterative Deepening Search, Uniform Cost Search, Bidirectional Search
• Informed Search: Heuristic Function, Admissible Heuristic, Informed Search Technique, Greedy Best First Search, A* Search, Local Search, Hill Climbing Search, Simulated Annealing Search, Optimization, Genetic Algorithm
• Game Playing: Adversarial Search Techniques, Mini-max Search, Alpha-Beta Pruning
2
Solving Problems by Searching
• How an agent can look ahead to find a sequence of actions that will eventually achieve its goal.
• When the correct action to take is not immediately obvious, an agent may need to plan ahead: to consider a sequence of actions that form a path to a goal state.
• Such an agent is called a problem-solving agent, and the computational process it undertakes is called search.
• Agents that use structured representations of states are called planning agents.
3
Problem-Solving Agents
• Imagine an agent enjoying a touring vacation in
Romania.
• The agent wants to take in the sights, improve its
Romanian, enjoy the nightlife, avoid hangovers, and
so on.
• The decision problem is a complex one.

4
• suppose the agent is currently in the city of Arad
• and has a nonrefundable ticket to fly out of Bucharest the following
day.
• The agent observes street signs and sees that there are three roads
leading out of Arad:
• one toward Sibiu,
• one to Timisoara,
• and one to Zerind.
• None of these are the goal, so unless the agent is familiar with the
geography of Romania, it will not know which road to follow.
• If the agent has no additional information—that is, if the
environment is unknown—then the agent can do no better than to
execute one of the actions at random.
5
6
• Goal formulation:
• The agent adopts the goal of reaching Bucharest.
• Problem formulation:
• The agent devises a description of the states and actions
necessary to reach the goal.
• an abstract model of the relevant part of the world
• Search: Before taking any action in the real world, the agent
simulates sequences of actions in its model,
• searches until it finds a sequence of actions that reaches the goal.
Such a sequence is called a solution.
• Execution: The agent can now execute the actions in the solution,
one at a time.
7
Inferences
• In a fully observable, deterministic, known environment, the solution to any problem is a fixed sequence of actions: drive to Sibiu, then Fagaras, then Bucharest.

• If the model is correct, then once the agent has found a solution, it can ignore its percepts while it is executing the actions (closing its eyes, so to speak) because the solution is guaranteed to lead to the goal.
• Control theorists call this an open-loop system: ignoring the percepts breaks the loop between agent and environment.

8
• If the model is incorrect or the environment is not deterministic, the agent must continuously monitor its percepts, i.e., act as a closed-loop system.

• In partially observable or nondeterministic environments, a solution would be a branching strategy that recommends different future actions depending on what percepts arrive.

• For example, the agent might plan to drive from Arad to Sibiu but might need a contingency plan in case it arrives in Zerind by accident or finds a sign saying "Drum Închis" (Road Closed).
9
Search problems and solutions
• A search problem can be defined formally as follows:

• State Space: A set of possible states that the environment can be in.

• Initial state: The initial state that the agent starts in.

• For example: Arad

• Goal States: A set of one or more goal states

• Sometimes there is one goal state (e.g., Bucharest),

• sometimes there is a small set of alternative goal states,

• and sometimes the goal is defined by a property that applies to many states (potentially an infinite number).
10
• The actions available to the agent.
• Given a state s, ACTIONS(s) returns a finite set of actions that can be
executed in s.
• We say that each of these actions is applicable in s.
• example: ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}
• A transition model, which describes what each action does.
• RESULT(s, a) returns the state that results from doing action a in state s. For example, RESULT(Arad, ToZerind) = Zerind.

11
• An action cost function, denoted by ACTION-COST(s, a, s′) when we are programming,
• or c(s, a, s′) when we are doing math, that gives the numeric cost of applying action a in state s to reach state s′.
• A problem-solving agent should use a cost function that reflects its
own performance measure;
• for example, for route-finding agents, the cost of an action might be
the length in miles or it might be the time it takes to complete the
action.
• A sequence of actions forms a path.
• a solution is a path from the initial state to a goal state.
• We assume that action costs are additive; that is, the total cost of a
path is the sum of the individual action costs.
12
• An optimal solution has the lowest path cost among all solutions.
• The state space can be represented as a graph in which the vertices are states and the directed edges between them are actions.

13
14
15
16
State?
Initial State?
Actions?
Transition model
Goal test?
Path Cost?

17
18
19
20
Problem Solving Agents
• In Artificial Intelligence, search techniques are universal problem-solving methods.
• Rational or problem-solving agents, also known as goal-based agents, mostly use these search strategies or algorithms to solve a specific problem and provide the best result.

21
Properties of Search Algorithms:
• Following are the four essential properties of search algorithms to
compare the efficiency of these algorithms:
• Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for any input.
• Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, it is said to be an optimal solution.
• Time Complexity: A measure of the time an algorithm takes to complete its task.
• Space Complexity: The maximum storage space required at any point during the search.

22
23
24
25
Uninformed Search Methods
• Uninformed search means the algorithms have no additional information about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-goal state.
• Uninformed search does not use any domain knowledge, such as the closeness or location of the goal.
• It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
• The search tree is explored without any information about the search space beyond the initial state, the operators, and the goal test, so uninformed search is also called blind search.
• It examines each node of the tree until it reaches the goal node.
Uninformed strategies use only the information available in the problem definition
1) Breadth-first search
2) Depth-first search
3) Depth-limited search
4) Iterative deepening search
26
5)Bidirectional search
27
Note: (Only for Understanding)
1) It is important to understand the distinction between nodes and states.
- A node is a bookkeeping data structure used to represent the search tree.
- A state corresponds to a configuration of the world.

2) We also need to represent the collection of nodes that have been generated but not yet expanded; this collection is called the fringe.

3) In AI, where the graph is represented implicitly by the initial state and successor function and is frequently infinite, complexity is expressed in terms of three quantities:
b: the branching factor, or maximum number of successors of any node;
d: the depth of the shallowest goal node; and
m: the maximum length of any path in the state space.
28
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general graph-search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
• Advantages:
• BFS will provide a solution if any solution exists.
• If there are more than one solutions for a given problem, then BFS
will provide the minimal solution which requires the least number of
steps.
• Disadvantages:
• It requires lots of memory since each level of the tree must be saved
into memory to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
29
30
• Time Complexity: The time complexity of BFS can be obtained from the number of nodes traversed until the shallowest goal node, where d = depth of the shallowest solution and b = branching factor:
• T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
• Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
• Completeness: BFS is complete, which means if the shallowest goal
node is at some finite depth, then BFS will find a solution.
• Optimality: BFS is optimal if path cost is a non-decreasing function of
the depth of the node.

31
• Algorithm:
1. Place the starting node on the queue.
2. If the queue is empty, return failure and stop.
3. If the first element on the queue is a goal node, return success and stop; otherwise:
4. Remove and expand the first element from the queue and place all its children at the end of the queue in any order.
5. Go back to step 2.
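The steps above can be sketched in Python with a FIFO queue; the example graph is made up for illustration.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns the path with the fewest steps, or None."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # take the oldest path first
        for child in graph[path[-1]]:
            if child not in visited:
                if child == goal:
                    return path + [child]
                visited.add(child)
                frontier.append(path + [child])
    return None                          # queue exhausted: failure

# Toy graph: two routes from A to D; BFS finds the shorter one first.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Because whole levels are expanded in order, the first path returned is guaranteed to use the fewest steps.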

• Advantages:
Breadth-first search will never get trapped exploring a useless path forever. If there is a solution, BFS will definitely find it. If there is more than one solution, BFS can find the minimal one that requires the fewest steps.
• Disadvantages:
If the solution is far from the root, breadth-first search will consume a lot of time.

32
Depth First Search (DFS)
Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.
• Algorithm:
1. Push the root node onto a stack.
2. Pop a node from the stack and examine it.
• If the element sought is found in this node, quit the search and return a
result.
• Otherwise push all its successors (child nodes) that have not yet been
discovered onto the stack.
3. If the stack is empty, every node in the tree has been examined – quit the
search and return "not found".
4. If the stack is not empty, repeat from Step 2.
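The stack-based procedure above can be written as a runnable sketch; the example graph is invented.

```python
def dfs(graph, start, goal):
    """Iterative depth-first search using an explicit LIFO stack."""
    stack = [[start]]                    # stack of paths
    visited = set()
    while stack:
        path = stack.pop()               # examine the most recently pushed path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph[node]:        # push undiscovered successors
            if child not in visited:
                stack.append(path + [child])
    return None                          # stack empty: not found

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Note that DFS dives down one branch first, so it may return a longer path than BFS would on the same graph.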
33
DFS
• Advantages:
• If depth-first search finds a solution without exploring much of a path, then the time and space it takes will be very small.

• The memory requirement of depth-first search is only linear in the depth of the search graph. This is in contrast with breadth-first search, which requires much more space.

• Disadvantages:
• Depth-first search is not guaranteed to find the solution.
• It is not a complete algorithm; it may go into infinite loops.

34
35
36
37
38
Depth Limited Search:
• A depth-limited search algorithm is similar to depth-first search with a predetermined limit ℓ. Depth-limited search can overcome the drawback of infinite paths in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
• Depth-limited search can terminate with two kinds of failure:
• Standard failure value: indicates that the problem does not have any solution.
• Cutoff failure value: indicates that there is no solution for the problem within the given depth limit.
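A recursive sketch that distinguishes the two failure values described above; the graph and limits are illustrative, not from the slides.

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited DFS: returns a path, "cutoff", or None (standard failure)."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"                  # depth limit reached: cutoff failure
    cutoff = False
    for child in graph[node]:
        result = dls(graph, child, goal, limit - 1, path)
        if result == "cutoff":
            cutoff = True                # some subtree was truncated
        elif result is not None:
            return result                # found a solution path
    # "cutoff" if any branch hit the limit, else None (no solution at all)
    return "cutoff" if cutoff else None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```

Returning "cutoff" rather than None lets the caller tell "no solution exists" apart from "no solution within this limit", which is exactly what iterative deepening needs.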

39
DLS
• Advantages:
• Depth-limited search is Memory efficient.

• Disadvantages:
• Depth-limited search also has a disadvantage of incompleteness.
• It may not be optimal if the problem has more than one solution.

• Completeness: The DLS algorithm is complete if the solution lies within the depth limit.
• Time Complexity: The time complexity of DLS is O(b^ℓ).
• Space Complexity: The space complexity of DLS is O(b·ℓ).
• Optimality: Depth-limited search can be viewed as a special case of DFS, and it is not optimal even if ℓ > d.

40
Iterative deepening depth-first Search:
• The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.

• The algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit after each iteration until the goal node is found.

• This search algorithm combines breadth-first search's completeness with depth-first search's memory efficiency.

• Iterative deepening is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
41
Iterative Deepening Search:
• Iterative Deepening Search (IDS) is a derivative of DLS and combines the features of depth-first search with those of breadth-first search.
• IDS operates by performing DLS searches with increasing depths until the goal is found.
• The depth begins at one and increases until the goal is found or no further nodes can be enumerated.
• By limiting the depth of the search, we force the algorithm to also search the breadth of the graph.
• If the goal is not found, the depth that the algorithm is permitted to search is increased and the algorithm is started again.

42
Example

1st Iteration: A
2nd Iteration: A, B, C
3rd Iteration: A, B, D, E, C, F, G
4th Iteration: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the goal node.

43
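The iteration pattern above can be reproduced by wrapping a depth-limited DFS in a loop that raises the limit by one each round. The tree edges below are my reconstruction of the slide's example (goal K at depth 3), so treat them as an assumption.

```python
def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ..."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                  # cutoff at this depth
        for child in graph.get(node, []):
            found = dls(child, limit - 1, path + [child])
            if found:
                return found
        return None

    for limit in range(max_depth + 1):   # re-search from scratch at each depth
        found = dls(start, limit, [start])
        if found:
            return found
    return None

# Reconstructed tree from the example; goal K sits at depth 3 under F.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H", "I"], "F": ["K"]}
```

Re-expanding shallow nodes each round looks wasteful, but most nodes of a tree live at the deepest level, so the total work stays O(b^d).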
44
Uniform Cost Search/ Dijkstra’s algorithm
• Uniform-cost search is a searching algorithm used for traversing a weighted
tree or graph.
• This algorithm comes into play when a different cost is available for each
edge.
• The primary goal of the uniform-cost search is to find a path to the goal
node which has the lowest cumulative cost.
• Uniform-cost search expands nodes according to their path costs from the
root node. It can be used to solve any graph/tree where the optimal cost is
in demand.

45
Uniform Cost Search
• A uniform-cost search algorithm is implemented by the priority
queue.
• It gives maximum priority to the lowest cumulative cost.
• Algorithm:
• Insert the root node into the priority queue.

• Remove the element with the highest priority (lowest cumulative cost).

• If the removed node is the goal node, print the total cost and stop the algorithm.

• Else, enqueue all the children of the current node into the priority queue, with their cumulative cost from the root as the priority.

46
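The priority-queue algorithm above can be sketched with Python's heapq; the example graph and edge costs are invented.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: expand the frontier node with the lowest path cost."""
    frontier = [(0, start, [start])]     # priority queue keyed on cumulative cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # first goal popped has optimal cost
        if node in explored:
            continue                     # skip stale duplicate entries
        explored.add(node)
        for child, step in graph[node].items():
            if child not in explored:
                heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

# Toy weighted graph: the cheapest S->G route is S->A->B->G with cost 4,
# even though S->B->G has fewer edges.
graph = {"S": {"A": 1, "B": 4}, "A": {"B": 1, "G": 6}, "B": {"G": 2}, "G": {}}
```

Note that the goal test happens when a node is popped, not when it is generated; testing at generation time could return a suboptimal path.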
Example
Advantages:
• Uniform-cost search is optimal, because at every step the path with the least cost is chosen.
Disadvantages:
• It does not care about the number of steps involved in the search; it is only concerned with path cost.
• Because of this, the algorithm may get stuck in an infinite loop (for example, when zero-cost actions exist).

47
Bidirectional Search
• The bidirectional search algorithm runs two simultaneous searches:
• one from the initial state, called the forward search,
• and the other from the goal node, called the backward search.
• Bidirectional search replaces one single search graph with two small subgraphs: one starts the search from the initial vertex and the other starts from the goal vertex.
• The search stops when the two frontiers intersect.
• Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

48
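A sketch of bidirectional BFS that alternates expanding one level from each side and stops when the two frontiers meet. The chain graph is a toy example, not the slide's 16-node tree.

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """BFS from both ends; reconstruct the path where the frontiers intersect."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        for _ in range(len(frontier)):          # expand exactly one BFS level
            node = frontier.popleft()
            for child in graph[node]:
                if child not in parents:
                    parents[child] = node
                    if child in other_parents:  # frontiers intersect here
                        return child
                    frontier.append(child)
        return None

    while frontier_f and frontier_b:
        meet = (expand(frontier_f, parents_f, parents_b)
                or expand(frontier_b, parents_b, parents_f))
        if meet is not None:
            # Stitch the forward half (reversed) to the backward half.
            path, n = [], meet
            while n is not None:
                path.append(n); n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n); n = parents_b[n]
            return path
    return None

# Undirected chain 1-2-3-4-5 as an adjacency dict; searches meet at node 3.
chain = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
```

Each side explores only about b^(d/2) nodes, which is where the speedup over plain BFS comes from.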
• Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory
• Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.

49
Example:
• In the below search tree, bidirectional search algorithm is applied.
This algorithm divides one graph/tree into two sub-graphs. It starts
traversing from node 1 in the forward direction and starts from goal
node 16 in the backward direction.
• The algorithm terminates at node 9 where two searches meet.

50
Performance measure
• Completeness: Bidirectional search is complete if we use BFS in both searches.
• Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).
• Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
• Optimality: Bidirectional search is optimal.

51
Comparing Search Strategies:

52
Informed Search
• An informed search algorithm uses additional knowledge, such as:
• how far we are from the goal,
• the path cost,
• how to reach the goal node, etc.
This knowledge helps agents explore less of the search space and find the goal node more efficiently.
• Informed search is more useful for large search spaces.
• Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.
• Examples of informed search algorithms include A* search, Best-First search, and Greedy search.

53
• Here are some key features of informed search algorithms in AI:

• Use of Heuristics – informed search algorithms use heuristics, or additional information, to guide the search process and prioritize which nodes to expand.

• More efficient – informed search algorithms are designed to be more efficient than uninformed search algorithms, such as breadth-first search or depth-first search, by avoiding the exploration of unlikely paths and focusing on more promising ones.

• Goal-directed – informed search algorithms are goal-directed, meaning that they are designed to find a solution to a specific problem.
54
• Cost-based – informed search algorithms often use cost-based estimates to evaluate nodes, such as the estimated cost to reach the goal or the cost of a particular path.

• Prioritization – informed search algorithms prioritize which nodes to expand based on the additional information available, often leading to more efficient problem-solving.

• Optimality – informed search algorithms may guarantee an optimal solution if the heuristics used are admissible (never overestimating the actual cost) and consistent (the estimate never decreases by more than the step cost along any edge).

55
• What are admissible heuristics?
• Admissible heuristics are used to estimate the cost of reaching the goal state in a search algorithm.

• Admissible heuristics never overestimate the cost of reaching the goal state.

• The use of admissible heuristics also results in optimal solutions, as they always find the cheapest path solution.

• For a heuristic to be admissible for a search problem, it needs to be lower than or equal to the actual cost of reaching the goal.
56
Heuristic Function
• Heuristic function: A heuristic is a function used in informed search that finds the most promising path.

• It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal.

• The heuristic method might not always give the best solution, but it guarantees finding a good solution in a reasonable time. It is represented by h(n), and it estimates the cost of an optimal path from node n to a goal state. The value of the heuristic function is always non-negative.
57
Informed Search vs. Uninformed Search

• Known as: Informed search is also known as Heuristic Search; uninformed search is also known as Blind Search.
• Using knowledge: Informed search uses knowledge for the searching process; uninformed search does not.
• Performance: Informed search finds a solution more quickly; uninformed search finds a solution more slowly by comparison.
• Completion: Informed search may or may not be complete; uninformed search is always complete.
• Cost factor: For informed search the cost is low; for uninformed search the cost is high.
• Time: Informed search consumes less time because of quick searching; uninformed search consumes moderate time because of slow searching.
• Efficiency: Informed search is more efficient, as efficiency takes both cost and performance into account: the incurred cost is less and solutions are found quickly. Uninformed search is comparatively less efficient, as the incurred cost is more and the speed of finding a solution is slow.
• Computational requirements: Lessened for informed search; comparatively higher for uninformed search.
• Size of search problems: Informed search has a wide scope in handling large search problems; solving a massive search task with uninformed search is challenging.
• Examples of algorithms: Informed: Greedy Search, A* Search, Hill Climbing Algorithm. Uninformed: Depth First Search (DFS), Breadth First Search (BFS), Branch and Bound.
59
60
61
Q-01

62
• Advantages:
• The A* search algorithm performs better than many other search algorithms.
• It can solve very complex problems.
• Disadvantages:
• It does not always produce the shortest path, as it relies on heuristics and approximation.
• A* has some complexity issues.
• The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for many large-scale problems.
63
Performance metric
• Completeness: The A* algorithm is complete as long as the branching factor is finite and every action has a fixed positive cost.
• Optimality: A* is optimal if it satisfies the two conditions below:
• Admissibility: the first condition required for optimality is that h(n) be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
• Consistency: the second condition, consistency, is required only for A* graph search.
• If the heuristic function is admissible, then A* tree search will always find the least-cost path.
64
• Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
• Space Complexity: The space complexity of the A* search algorithm is O(b^d).

65
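The discussion above can be made concrete with a sketch of A* on a small weighted graph, ordering the frontier by f(n) = g(n) + h(n). The graph and heuristic values are hypothetical, with h chosen to be admissible (it never overestimates the true remaining cost).

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph[node].items():
            g2 = g + step
            if g2 < best_g.get(child, float("inf")):   # found a cheaper route
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h[child], g2, child, path + [child]))
    return None

# Hypothetical weighted graph and admissible heuristic values.
graph = {"S": {"A": 1, "B": 4}, "A": {"B": 1, "G": 6}, "B": {"G": 2}, "G": {}}
h = {"S": 3, "A": 3, "B": 2, "G": 0}
```

With h = 0 everywhere this reduces to uniform-cost search; a good admissible h expands far fewer nodes while keeping the same optimal answer.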
Greedy best-first search algorithm
• The greedy best-first search algorithm always selects the path that appears best at that moment.
• It combines aspects of depth-first search and breadth-first search. It uses a heuristic function to guide the search.
• Best-first search allows us to take the advantages of both algorithms. With the help of best-first search, at each step we can choose the most promising node.

66
Best First search
• In the greedy best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.
• f(n) = h(n),
• where h(n) = estimated cost from node n to the goal.
• The greedy best-first algorithm is implemented with a priority queue.
67
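A sketch of greedy best-first search that always expands the open node with the smallest h(n). The graph shape and h values below are my guesses, chosen so the search reproduces the S -> B -> F -> G walkthrough in the solution slide.

```python
import heapq

def greedy_bfs(graph, h, start, goal):
    """Greedy best-first search: priority is the heuristic h(n) alone."""
    frontier = [(h[start], start, [start])]      # ordered by h only, not cost
    closed = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph[node]:
            if child not in closed:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Hypothetical graph and heuristic estimates echoing the slide's example.
graph = {"S": ["A", "B"], "A": [], "B": ["E", "F"], "E": [],
         "F": ["I", "G"], "I": [], "G": []}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
```

Because path cost g(n) is ignored, the returned path can be arbitrarily expensive; that is why greedy best-first search is not optimal.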
Solution

• Expand the nodes of S and put them in the CLOSED list
• Initialization: Open [A, B], Closed [S]
• Iteration 1: Open [A], Closed [S, B]
• Iteration 2: Open [E, F, A], Closed [S, B]
  then: Open [E, A], Closed [S, B, F]
• Iteration 3: Open [I, G, E, A], Closed [S, B, F]
  then: Open [I, E, A], Closed [S, B, F, G]
• Hence the final solution path will be: S----> B----->F----> G
• Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
68
Solve using A star, greedy BFS and UCS
determine optimal path.

69
Performance parameter
• Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.
• Completeness: Greedy best-first search is incomplete, even if the given state space is finite.
• Optimality: The greedy best-first search algorithm is not optimal.

70
Greedy Best First Search,

71
72
Q-02

73
Q-03

74
Solution

75
Genetic Algorithm

76
Hill Climbing Search

77
78
79
80
81
82
Features of Hill
Climbing:
• Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps decide which direction to move in the search space.
• Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack in the search space, as it does not remember previous states.

83
State-space
Diagram for
Hill Climbing:
Different regions in the state-space landscape:

• Local maximum: a state which is better than its neighbor states, but there is another state which is higher than it.
• Global maximum: the best possible state of the state-space landscape. It has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat region of the landscape where all the neighbor states of the current state have the same value.
• Shoulder: a plateau region which has an uphill edge.

85
• Types of Hill Climbing Algorithm:
• Simple hill Climbing:
• Steepest-Ascent hill-climbing:
• Stochastic hill Climbing:

86
87
Algorithm for Simple Hill Climbing:

• Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
• Step 2: Loop until a solution is found or there is no new operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then assign the new state as the current state.
• Else, if it is not better than the current state, then return to step 2.
• Step 5: Exit.

88
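The simple hill-climbing loop above can be sketched on a toy one-dimensional landscape; the objective and neighbor functions here are made up for illustration.

```python
def hill_climbing(objective, neighbors, start):
    """Simple hill climbing: move to an improving neighbor until none exists."""
    current = start
    while True:
        better = [n for n in neighbors(current)
                  if objective(n) > objective(current)]
        if not better:
            return current                # a local (not necessarily global) max
        current = better[0]               # simple variant: take the first improvement

# Toy landscape: maximize f(x) = -(x - 3)^2 over integer states,
# moving one step left or right at a time. The peak is at x = 3.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
```

On this single-peaked landscape the loop always reaches x = 3; on a landscape with several peaks it would stop at whichever local maximum it climbs first, which is exactly the weakness the next slides address.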
Simulated Annealing
Search, Optimization,

89
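As a minimal sketch of the simulated-annealing idea covered in the following slides: occasionally accept a worse move with probability e^(ΔE/T), where the temperature T cools over time, so the search can escape local maxima early on and settles into greedy hill climbing later. Every name, parameter, and the toy landscape below are illustrative assumptions, not the slides' own example.

```python
import math, random

def simulated_annealing(objective, neighbor, start, t0=10.0, cooling=0.95,
                        steps=2000, seed=0):
    """Maximize `objective`, accepting worse moves with probability e^(dE/T)."""
    rng = random.Random(seed)             # fixed seed keeps the sketch repeatable
    current, t = start, t0
    for _ in range(steps):
        nxt = neighbor(current, rng)
        delta = objective(nxt) - objective(current)
        # Always accept improvements; accept worse moves with prob e^(delta/t).
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = nxt
        t = max(t * cooling, 1e-9)        # geometric cooling schedule
    return current

# Toy landscape: maximize f(x) = -(x - 3)^2 with random +/-1 proposal moves.
f = lambda x: -(x - 3) ** 2
move = lambda x, rng: x + rng.choice([-1, 1])
```

As T approaches zero the acceptance probability for worse moves vanishes and the loop behaves like hill climbing with a random neighbor choice.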
90
91
92
93
Genetic Algorithm
• A genetic algorithm is an adaptive heuristic search algorithm inspired by Darwin's theory of evolution in nature.
• It is used to solve optimization problems in machine learning.
• It is an important algorithm, as it helps solve complex problems that would otherwise take a long time to solve.
• Genetic algorithms are widely used in different real-world applications, for example designing electronic circuits, code-breaking, image processing, and artificial creativity.

94
Basic terminologies
• Population: the subset of all possible or probable solutions which can solve the given problem.

• Chromosome: one of the solutions in the population for the given problem; a collection of genes makes up a chromosome.

• Gene: an element of the chromosome; a chromosome is divided into different genes.

• Allele: the value given to a gene within a particular chromosome.

• Fitness function: used to determine an individual's fitness level in the population, i.e., the ability of an individual to compete with other individuals. In every iteration, individuals are evaluated by their fitness function.

95
Genetic operators: In a genetic algorithm, the best individuals mate to produce offspring better than the parents. Genetic operators change the genetic composition of the next generation.

Selection:

After calculating the fitness of every individual in the population, a selection process determines which individuals in the population get to reproduce and create the offspring that will form the next generation.

96
• Mutation
The mutation operator inserts random genes into the offspring (new child) to maintain diversity in the population. It can be done by flipping some bits in the chromosome. Mutation helps solve the problem of premature convergence and enhances diversification. The image below shows the mutation process:

97
Crossover:
• Crossover plays the most significant role in the reproduction phase of the genetic algorithm. A crossover point is selected at random within the genes, and the crossover operator swaps the genetic information of two parents from the current generation to produce a new individual representing the offspring.

98
99
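Putting the operators together, here is a toy genetic algorithm for the OneMax problem (maximize the number of 1-bits in a chromosome). The population size, mutation rate, elitism, and tournament-selection choice are all illustrative assumptions, not prescriptions from the slides.

```python
import random

def genetic_algorithm(fitness, length=12, pop_size=20, generations=60,
                      mutation_rate=0.05, seed=1):
    """Toy GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    # Population: random bit-string chromosomes.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():                        # tournament selection of size 2
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = [max(pop, key=fitness)]    # elitism: carry the best chromosome over
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < mutation_rate else g  # mutation
                     for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1-genes, so `sum` works directly.
best = genetic_algorithm(sum)
```

Keeping the best chromosome each generation (elitism) makes the best fitness monotonically non-decreasing, so the population steadily climbs toward the all-ones chromosome.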
