AI_Module 3
1
Contents
• Definition, State space representation,
• Problem as a state space search, Problem formulation,
• Well-defined problems
• Solving Problems by Searching,
• Performance evaluation of search strategies,
• Time Complexity, Space Complexity, Completeness, Optimality
• Uninformed Search
• Depth First Search, Breadth First Search, Depth Limited Search, Iterative
Deepening Search, Uniform Cost Search, Bidirectional Search
• Informed Search, Heuristic Function, Admissible Heuristic, Informed Search
Technique, Greedy Best First Search, A* Search, Local Search, Hill Climbing
Search, Simulated Annealing Search, Optimization, Genetic Algorithm
• Game Playing, Adversarial Search Techniques, Mini-max Search, Alpha-Beta
Pruning
2
Solving Problems by Searching
• how an agent can look ahead to find a sequence of
actions that will eventually achieve its goal.
• When the correct action to take is not immediately
obvious,
• an agent may need to plan ahead
• to consider a sequence of actions that form a path to a goal
state.
• Such an agent is called a problem-solving agent,
• and the computational process it undertakes is called
search.
• Agents that use structured representations of states are
called planning agents
3
Problem-Solving Agents
• Imagine an agent enjoying a touring vacation in
Romania.
• The agent wants to take in the sights, improve its
Romanian, enjoy the nightlife, avoid hangovers, and
so on.
• The decision problem is a complex one.
4
• Suppose the agent is currently in the city of Arad and has a
nonrefundable ticket to fly out of Bucharest the following day.
• The agent observes street signs and sees that there are three roads
leading out of Arad:
• one toward Sibiu,
• one to Timisoara,
• and one to Zerind.
• None of these are the goal, so unless the agent is familiar with the
geography of Romania, it will not know which road to follow.
• If the agent has no additional information—that is, if the
environment is unknown—then the agent can do no better than to
execute one of the actions at random.
5
Goal formulation:
The agent adopts the goal of reaching Bucharest.
• Problem formulation:
• The agent devises a description of the states and actions
necessary to reach the goal.
• an abstract model of the relevant part of the world
• Search: Before taking any action in the real world, the agent
simulates sequences of actions in its model,
• searches until it finds a sequence of actions that reaches the goal.
Such a sequence is called a solution.
• Execution: The agent can now execute the actions in the solution,
one at a time.
7
Inferences
• In a fully observable, deterministic, known environment:
• the solution to any problem is a fixed sequence of actions: drive to Sibiu, then
Fagaras, then Bucharest.
• If the model is correct, then once the agent has found a solution, it can
ignore its percepts while it is executing the actions.
• For example, the agent might plan to drive from Arad to Sibiu but
might need a contingency plan in case it arrives in Zerind by accident
or finds a sign saying “Drum Închis” (Road Closed).
9
Search problems and solutions
• A search problem can be defined formally as follows:
• State Space: A set of possible states that the environment can be in.
• Initial state: The initial state that the agent starts in.
• Goal states: One or more goal states; sometimes the goal is defined by a
property that applies to many states (potentially an infinite number).
10
• The actions available to the agent.
• Given a state s, ACTIONS(s) returns a finite set of actions that can be
executed in s.
• We say that each of these actions is applicable in s.
• Example: ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}
• A transition model, which describes what each action does:
• RESULT(s, a) returns the state that results from doing action a in
state s. For example, RESULT(Arad, ToZerind) = Zerind.
11
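The components above (state space, ACTIONS, RESULT, and the goal test) can be sketched in a few lines of Python. This is a minimal illustration, not the book's implementation; the road table is an assumed subset of the Romania map.

```python
# Minimal sketch of the formal search-problem components.
# The road table is an assumed subset of the Romania map.
ROADS = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras"],
    "Timisoara": ["Arad"],
    "Zerind": ["Arad"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Bucharest": ["Fagaras"],
}

def actions(state):
    """ACTIONS(s): the finite set of actions applicable in state s."""
    return ["To" + city for city in ROADS[state]]

def result(state, action):
    """RESULT(s, a): the transition model."""
    return action[len("To"):]

def is_goal(state):
    """The goal test."""
    return state == "Bucharest"

print(actions("Arad"))             # ['ToSibiu', 'ToTimisoara', 'ToZerind']
print(result("Arad", "ToZerind"))  # Zerind
print(is_goal("Bucharest"))        # True
```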
• An action cost function, denoted ACTION-COST(s, a, s′) when we are
programming, or c(s, a, s′) when we are doing math, that gives the
numeric cost of applying action a in state s to reach state s′.
• A problem-solving agent should use a cost function that reflects its
own performance measure;
• for example, for route-finding agents, the cost of an action might be
the length in miles or it might be the time it takes to complete the
action.
• A sequence of actions forms a path.
• a solution is a path from the initial state to a goal state.
• We assume that action costs are additive; that is, the total cost of a
path is the sum of the individual action costs.
12
• An optimal solution has the lowest path cost among all solutions.
• The state space can be represented as a graph in which the vertices
are states and the directed edges between them are actions.
13
State?
Initial State?
Actions?
Transition model?
Goal test?
Path Cost?
17
Problem Solving Agents
• In Artificial Intelligence, Search techniques are universal problem-
solving methods.
• Rational agents or problem-solving agents, also known as goal-based
agents in AI, mostly use these search strategies or algorithms to solve
a specific problem and provide the best result.
21
Properties of Search Algorithms:
• Following are the four essential properties used to compare the
efficiency of search algorithms:
• Completeness: A search algorithm is said to be complete if it
guarantees to return a solution whenever at least one solution exists,
for any input.
• Optimality: If the solution found by an algorithm is guaranteed to be
the best solution (lowest path cost) among all solutions, it is said to
be an optimal solution.
• Time Complexity: A measure of the time an algorithm takes to
complete its task.
• Space Complexity: The maximum storage space required at any
point during the search, expressed in terms of the complexity of the problem.
22
Uninformed Search Methods
• Uninformed search strategies have no additional information about states
beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a
non-goal state.
• Uninformed search does not use any domain knowledge, such as the closeness or
location of the goal.
• It operates in a brute-force way, as it only includes information about how to
traverse the tree and how to identify leaf and goal nodes.
• Uninformed search explores the search tree using only the information in the
problem definition (the initial state, the operators, and the goal test), so it
is also called blind search.
• It examines nodes of the tree until it reaches a goal node.
Uninformed strategies use only the information available in the problem definition:
1) Breadth-first search
2) Depth-first search
3) Depth-limited search
4) Iterative deepening search
26
5) Bidirectional search
27
Note: (Only for Understanding)
1) It is important to understand the distinction between nodes and states.
- A node is a bookkeeping data structure used to represent the search tree.
- A state corresponds to a configuration of the world.
2) We also need to represent the collection of nodes that have been generated
but not yet expanded; this collection is called the fringe.
31
• Algorithm:
1. Place the starting node in the queue.
2. If the queue is empty, return failure and stop.
3. If the first element in the queue is a goal node, return success and stop.
4. Otherwise, remove and expand the first element of the queue and place all its children at the end of the queue, in any order.
5. Go back to step 2.
• Advantages:
Breadth-first search will never get trapped exploring a useless path forever. If a solution exists, BFS will definitely
find it. If there is more than one solution, BFS can find the minimal one, i.e., the one that requires the fewest steps.
• Disadvantages:
If the solution is far from the root, breadth-first search consumes a lot of time.
32
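The numbered steps above can be sketched as a short Python function. The queue-of-paths bookkeeping and the example graph are illustrative assumptions, not part of the slides.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand the shallowest unexpanded node first.
    Returns a start-to-goal path, or None if no path exists."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # queue of paths (step 1)
    visited = {start}
    while frontier:                      # empty queue -> failure (step 2)
        path = frontier.popleft()        # remove the first element (step 4)
        for nxt in neighbors(path[-1]):
            if nxt in visited:
                continue
            if nxt == goal:              # goal test (step 3)
                return path + [nxt]
            visited.add(nxt)
            frontier.append(path + [nxt])  # children go to the back of the queue
    return None

# Assumed example tree: goal F is found via the shortest path A -> C -> F.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}
print(bfs("A", "F", graph.get))  # ['A', 'C', 'F']
```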
Depth First Search (DFS)
Depth-first search (DFS) is an algorithm for traversing or searching a tree
or graph. One starts at the root (selecting some node as the root in the
graph case) and explores as far as possible along each branch
before backtracking.
• Algorithm:
1. Push the root node onto a stack.
2. Pop a node from the stack and examine it.
• If the element sought is found in this node, quit the search and return a
result.
• Otherwise push all its successors (child nodes) that have not yet been
discovered onto the stack.
3. If the stack is empty, every node in the tree has been examined – quit the
search and return "not found".
4. If the stack is not empty, repeat from Step 2.
33
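The stack-based procedure above can be sketched as follows; the example tree is an assumption for illustration.

```python
def dfs(root, target, children):
    """Iterative depth-first search with an explicit stack,
    following the numbered steps above."""
    stack = [root]                    # step 1: push the root
    discovered = set()
    while stack:                      # step 3: empty stack -> "not found"
        node = stack.pop()            # step 2: pop and examine
        if node == target:
            return node               # found: quit and return
        if node in discovered:
            continue
        discovered.add(node)
        # Push undiscovered successors; reversed() keeps left-to-right order.
        for child in reversed(children.get(node, [])):
            if child not in discovered:
                stack.append(child)
    return None                       # "not found"

# Assumed example tree.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(dfs("A", "E", tree))  # E
```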
DFS
• Advantages:
• If depth-first search finds a solution without exploring much of the
search space, the time and space it takes will be very small.
• Disadvantages:
• Depth-first search is not guaranteed to find a solution.
• It is not a complete algorithm: it may descend into an infinite path and loop forever.
34
Depth Limited Search:
• A depth-limited search algorithm is a depth-first search with a
predetermined depth limit. Depth-limited search removes the drawback of
infinite paths in depth-first search: in this algorithm, a node at the depth
limit is treated as if it has no further successors.
• Depth-limited search can terminate with two kinds of failure:
• Standard failure value: indicates that the problem has no solution.
• Cutoff failure value: indicates that there is no solution within the given
depth limit.
39
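The two failure values can be made explicit in a recursive sketch; the sentinel values and example tree here are assumptions for illustration.

```python
CUTOFF = "cutoff"   # failure within the depth limit (a solution may still exist deeper)
FAILURE = None      # standard failure: no solution in the explored space

def dls(node, goal, children, limit):
    """Recursive depth-limited search; nodes at depth `limit`
    are treated as having no successors."""
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in children.get(node, []):
        res = dls(child, goal, children, limit - 1)
        if res == CUTOFF:
            cutoff_occurred = True
        elif res is not FAILURE:
            return [node] + res       # success: prepend this node to the path
    return CUTOFF if cutoff_occurred else FAILURE

# Assumed example tree: D sits at depth 2.
tree = {"A": ["B", "C"], "B": ["D"], "C": []}
print(dls("A", "D", tree, 1))  # cutoff  (the goal lies below the limit)
print(dls("A", "D", tree, 2))  # ['A', 'B', 'D']
```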
DLS
• Advantages:
• Depth-limited search is memory efficient.
• Disadvantages:
• Depth-limited search also suffers from incompleteness.
• It may not be optimal, even if the problem has more than one solution.
• Completeness: The DLS algorithm is complete if the solution lies within the
depth limit ℓ.
• Time Complexity: The time complexity of DLS is O(b^ℓ).
• Space Complexity: The space complexity of DLS is O(b·ℓ).
• Optimal: Depth-limited search can be viewed as a special case of DFS, and it is
not optimal even if ℓ > d.
40
Iterative deepening depth-first Search:
• The iterative deepening algorithm combines the benefits of DFS and BFS.
This search algorithm finds the best depth limit by gradually
increasing the limit until a goal is found.
42
Example
1st iteration: A
2nd iteration: A, B, C
3rd iteration: A, B, D, E, C, F, G
4th iteration: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the
goal node.
43
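The iteration trace above can be reproduced by a sketch that wraps a depth-limited DFS in a loop over increasing limits. The tree encoding (goal K as a child of F) is inferred from the trace; the depth cap is an assumed safety bound.

```python
def depth_limited(node, goal, children, limit):
    """DFS that stops expanding below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in children.get(node, []):
        path = depth_limited(child, goal, children, limit - 1)
        if path:
            return [node] + path
    return None

def ids(root, goal, children, max_depth=50):
    """Iterative deepening: run DLS with limit 0, 1, 2, ... until a goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited(root, goal, children, limit)
        if path:
            return path
    return None

# The tree from the iteration trace above (goal K at depth 3, under F).
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H", "I"], "F": ["K"]}
print(ids("A", "K", tree))  # ['A', 'C', 'F', 'K']
```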
Uniform Cost Search/ Dijkstra’s algorithm
• Uniform-cost search is a searching algorithm used for traversing a weighted
tree or graph.
• This algorithm comes into play when a different cost is available for each
edge.
• The primary goal of the uniform-cost search is to find a path to the goal
node which has the lowest cumulative cost.
• Uniform-cost search expands nodes according to their path costs from the
root node. It can be used to solve any graph/tree where the optimal cost is
in demand.
45
Uniform Cost Search
• The uniform-cost search algorithm is implemented using a priority
queue, which gives maximum priority to the lowest cumulative cost.
• Algorithm:
• Insert the root node into the priority queue.
• Remove the node with the lowest cumulative cost from the queue. If it is the
goal node, print the total cost and stop the algorithm.
• Otherwise, enqueue all the children of the current node into the priority queue,
with their cumulative cost from the root node as the priority, and repeat from
the previous step.
46
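The priority-queue procedure above can be sketched with Python's `heapq` module; the example weighted graph is an assumption for illustration.

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Uniform-cost search: always expand the frontier node with the
    lowest cumulative path cost, using a priority queue."""
    frontier = [(0, start, [start])]     # entries are (path_cost, state, path)
    best_cost = {start: 0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)  # lowest cumulative cost first
        if state == goal:
            return cost, path
        for nxt, step in edges.get(state, []):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

# Assumed example graph: each entry lists (neighbor, edge cost) pairs.
edges = {"S": [("A", 1), ("B", 5)], "A": [("B", 2), ("G", 9)],
         "B": [("G", 2)]}
print(uniform_cost_search("S", "G", edges))  # (5, ['S', 'A', 'B', 'G'])
```

Note that the cheapest path S→A→B→G (cost 5) is found even though the direct edge S→B looks shorter in hops.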
Example
Advantages:
• Uniform-cost search is optimal,
because at every step the path with
the least cost is chosen.
Disadvantages:
• It does not care about the number of
steps involved in the search and is
concerned only with path cost.
• Because of this, the algorithm may get
stuck in an infinite loop (for example,
when zero-cost actions exist).
47
Bidirectional Search
• The bidirectional search algorithm runs two simultaneous searches:
• one from the initial state, called the forward search,
• and one from the goal node, called the backward search, to find the goal
node.
• Bidirectional search replaces one large search graph with two smaller
subgraphs: one search starts from the initial vertex and the
other starts from the goal vertex.
• The search stops when the two frontiers intersect.
• Bidirectional search can use search techniques such as BFS, DFS, DLS,
etc.
48
• Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory
• Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
49
Example:
• In the below search tree, bidirectional search algorithm is applied.
This algorithm divides one graph/tree into two sub-graphs. It starts
traversing from node 1 in the forward direction and starts from goal
node 16 in the backward direction.
• The algorithm terminates at node 9 where two searches meet.
50
Performance measure
• Completeness: Bidirectional search is complete if we use BFS in both
directions.
• Time Complexity: The time complexity of bidirectional search using BFS
is O(b^(d/2)).
• Space Complexity: The space complexity of bidirectional search is
O(b^(d/2)), since at least one frontier must be kept in memory.
• Optimal: Bidirectional search is optimal (with BFS in both directions
and uniform step costs).
51
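The meet-in-the-middle idea can be sketched by alternating one BFS level from each end; the undirected example chain is an assumption for illustration.

```python
def bidirectional_bfs(start, goal, neighbors):
    """Alternate one BFS level from the start and one from the goal;
    return the first state reached by both searches."""
    if start == goal:
        return start
    seen_f, frontier_f = {start}, {start}   # forward search bookkeeping
    seen_b, frontier_b = {goal}, {goal}     # backward search bookkeeping
    while frontier_f and frontier_b:
        # Expand the forward frontier by one level.
        frontier_f = {n for s in frontier_f for n in neighbors.get(s, [])} - seen_f
        seen_f |= frontier_f
        if seen_f & seen_b:
            return (seen_f & seen_b).pop()  # the two searches intersect here
        # Expand the backward frontier by one level.
        frontier_b = {n for s in frontier_b for n in neighbors.get(s, [])} - seen_b
        seen_b |= frontier_b
        if seen_f & seen_b:
            return (seen_f & seen_b).pop()
    return None

# Assumed undirected chain 1-2-3-4-5: the searches meet in the middle.
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(bidirectional_bfs(1, 5, graph))  # 3
```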
Comparing Search Strategies:
52
Informed Search
• An informed search algorithm uses problem-specific knowledge, such as:
• how far we are from the goal,
• the path cost,
• how to reach the goal node, etc.
This knowledge helps agents explore less of the search space and find the
goal node more efficiently.
• Informed search is more useful for large search spaces.
• Informed search algorithms use the idea of a heuristic, so informed search
is also called heuristic search.
• Examples of informed search algorithms include A* search, best-first
search, and greedy search.
53
• Here are some key features of informed search algorithms in AI:
55
• What are admissible heuristics?
• A heuristic is used to estimate the cost of reaching the goal state in a
search algorithm; an admissible heuristic never overestimates that cost.
• The heuristic function takes the current state of the agent as input and
produces an estimate of how close the agent is to the goal.
• The heuristic method might not always give the best solution,
but it guarantees finding a good solution in reasonable time. It is
represented by h(n), and it estimates the cost of an optimal path between
a state and the goal. The value of the heuristic function is always non-negative.
57
Informed vs. Uninformed Search
• Time: informed search consumes less time because of quick searching,
while uninformed search consumes moderate time because of slow searching.
A* Search
• Advantages:
• The A* search algorithm performs better than many other search
algorithms.
• It can solve very complex problems.
• Disadvantages:
• It does not always produce the shortest path, as it relies on
heuristics and approximation.
• A* has some complexity issues.
• The main drawback of A* is its memory requirement: it keeps all
generated nodes in memory, so it is not practical for many large-
scale problems.
63
Performance metric
• Complete: The A* algorithm is complete as long as:
• the branching factor is finite, and
• every action cost is a fixed positive constant.
• Optimal: The A* search algorithm is optimal if it satisfies the following two
conditions:
• Admissibility: the first condition required for optimality is that h(n)
be an admissible heuristic for A* tree search. An admissible
heuristic is optimistic in nature (it never overestimates the true cost).
• Consistency: the second condition, consistency, is required only for A*
graph search.
• If the heuristic function is admissible, then A* tree search will always
find the least-cost path.
64
• Time Complexity: The time complexity of the A* search algorithm
depends
• on the heuristic function,
• and in the worst case the number of nodes expanded is exponential in the
depth of the solution d.
• So the time complexity is O(b^d), where b is the branching factor.
• Space Complexity: The space complexity of the A* search algorithm
is O(b^d), since all generated nodes are kept in memory.
65
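A* orders the frontier by f(n) = g(n) + h(n), the path cost so far plus the heuristic estimate to the goal. A minimal sketch follows; the toy graph and the admissible heuristic values are assumptions for illustration.

```python
import heapq

def astar(start, goal, edges, h):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n),
    where h is an admissible heuristic (it never overestimates)."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, step in edges.get(state, []):
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Assumed toy graph with (neighbor, edge cost) pairs and admissible h values
# (each h value is at most the true remaining cost to G).
edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 3)]}
h_vals = {"S": 5, "A": 4, "B": 2, "G": 0}
print(astar("S", "G", edges, h_vals.get))  # (6, ['S', 'A', 'B', 'G'])
```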
Greedy best-first search algorithm
• The greedy best-first search algorithm always selects the path that
appears best at the moment.
• It combines aspects of the depth-first and breadth-first search
algorithms, using a heuristic function to guide the search.
• Best-first search lets us take advantage of both algorithms:
at each step, we can choose the
most promising node.
66
Best First Search
• In the greedy best-first search algorithm, we expand the node that is
closest to the goal node, where closeness is estimated by a heuristic
function, i.e.
• f(n) = h(n),
• where h(n) = estimated cost from node n to the goal.
• The greedy best-first algorithm is implemented using a priority queue.
67
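Greedy best-first search orders the frontier purely by h(n), ignoring path cost. A minimal sketch follows; the toy graph and heuristic values are assumptions for illustration.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the node whose
    heuristic value h(n) is smallest, ignoring path cost g(n)."""
    frontier = [(h(start), start, [start])]   # priority queue ordered by h
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Assumed toy graph and heuristic values.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h_vals = {"S": 5, "A": 2, "B": 3, "G": 0}
print(greedy_best_first("S", "G", graph, h_vals.get))  # ['S', 'A', 'G']
```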
Solution
69
Performance parameter
• Space Complexity: The worst-case space complexity of greedy best-
first search is O(b^m), where m is the maximum depth of the search
space.
• Complete: Greedy best-first search is incomplete in general, even if the
given state space is finite.
• Optimal: The greedy best-first search algorithm is not optimal.
70
Greedy Best First Search
71
Q-02
73
Q-03
74
Solution
75
Genetic Algorithm
76
Hill Climbing Search
77
Features of Hill
Climbing:
• Generate and Test variant: Hill
climbing is a variant of the Generate
and Test method. The Generate and
Test method produces feedback that
helps decide which direction to
move in the search space.
• Greedy approach: Hill-climbing
search moves in the
direction that optimizes the cost.
• No backtracking: It does not
backtrack in the search space, as it does
not remember previous states.
83
State-space
Diagram for
Hill Climbing:
Different regions in the state space landscape:
85
• Types of Hill Climbing Algorithm:
• Simple hill Climbing:
• Steepest-Ascent hill-climbing:
• Stochastic hill Climbing:
86
Algorithm for Simple Hill Climbing:
• Step 1: Evaluate the initial state; if it is the goal state, return success and stop.
• Step 2: Loop until a solution is found or there is no new operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
• If it is the goal state, return success and quit.
• Else, if it is better than the current state, make the new state the current state.
• Else, if it is not better than the current state, go back to step 2.
• Step 5: Exit.
88
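The loop above can be sketched in a few lines. The 1-D objective and the integer neighbor operator are assumptions for illustration; real problems supply their own states, operators, and evaluation function.

```python
def hill_climb(initial, neighbors, value, max_steps=1000):
    """Simple hill climbing: move to the first neighbor that improves the
    objective; stop at a peak (which may only be a local maximum)."""
    current = initial
    for _ in range(max_steps):
        better = [n for n in neighbors(current) if value(n) > value(current)]
        if not better:
            return current          # no operator improves: the loop in step 2 ends
        current = better[0]         # apply the first improving operator (steps 3-4)
    return current

# Assumed 1-D example: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, step, f))  # 3
```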
Simulated Annealing
Search and Optimization
89
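Simulated annealing behaves like hill climbing but occasionally accepts a worse move, with a probability that shrinks as a "temperature" parameter cools, which lets the search escape local maxima. The sketch below uses an assumed 1-D objective, an assumed geometric cooling schedule, and a fixed random seed for repeatability.

```python
import math
import random

def simulated_annealing(initial, neighbor, value, t0=10.0, cooling=0.95, steps=500):
    """Simulated annealing: accept uphill moves always, and downhill moves
    with probability exp(delta / T); T shrinks by the cooling schedule."""
    random.seed(0)                      # fixed seed so the sketch is repeatable
    current = initial
    t = t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt               # accept uphill always, downhill sometimes
        t = max(t * cooling, 1e-6)      # cool down (floor avoids division by zero)
    return current

# Assumed 1-D example: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
move = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, move, f))
```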
Genetic Algorithm
• A genetic algorithm is an adaptive heuristic search algorithm inspired by
Darwin's theory of evolution in nature.
• It is used to solve optimization problems in machine learning.
• It is an important algorithm because it helps solve complex problems that
would otherwise take a long time to solve.
• Genetic algorithms are widely used in real-world
applications, for example, designing electronic circuits, code-breaking,
image processing, and artificial creativity.
94
Basic terminologies
• Population: The population is the subset of all possible or probable solutions that can
solve the given problem.
• Gene: A chromosome is divided into genes, each representing one element of the
solution.
• Chromosome: A chromosome is one of the solutions in the population for the given
problem; a collection of genes makes up a chromosome.
• Allele: An allele is the value given to a gene within a particular chromosome.
• Fitness Function: The fitness function determines an individual's fitness level in the
population, i.e., the ability of an individual to compete with other individuals. In every
iteration, individuals are evaluated by their fitness function.
95
Genetic Operators: In a genetic algorithm, the best individuals mate to produce
offspring better than their parents. Genetic operators change the
genetic composition of the next generation.
Selection:
After calculating the fitness of every individual in the population, a selection process
determines which individuals in the population get to reproduce
and create the offspring that will form the next generation.
96
• Mutation:
The mutation operator inserts random genes into the offspring
(the new child) to maintain diversity in the population. It can
be done by flipping some bits in the chromosome.
Mutation helps solve the problem of premature
convergence and enhances diversification. The image below
shows the mutation process:
97
Crossover:
• Crossover plays the most significant role in the reproduction
phase of a genetic algorithm. In this process, a crossover point
is selected at random within the genes. The crossover
operator then swaps the genetic information of two parents from the
current generation to produce a new individual representing the
offspring.
98
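The selection → crossover → mutation loop described above can be sketched over bit-string chromosomes. The toy "one-max" fitness (count the 1-bits), the truncation selection, and all parameter values are assumptions for illustration, and the random seed is fixed for repeatability.

```python
import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=60,
                      mutation_rate=0.1):
    """A minimal GA over bit-string chromosomes, showing the
    selection -> crossover -> mutation loop."""
    random.seed(1)  # fixed seed so the sketch is repeatable
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            point = random.randrange(1, length)        # random crossover point
            child = p1[:point] + p2[point:]            # single-point crossover
            for i in range(length):                    # mutation: flip bits
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Assumed toy problem ("one-max"): fitness is the number of 1-bits,
# so the optimum is the all-ones chromosome.
best = genetic_algorithm(sum)
print(best, sum(best))
```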