AI_Unit_3_Problem Solving by Searching

Chapter 3 discusses problem-solving in artificial intelligence, emphasizing systematic searches to achieve goals through various methods, including special-purpose and general-purpose techniques. It outlines the components of problem formulation, state space representation, and types of search methods, distinguishing between uninformed and heuristically informed searches. The chapter evaluates search algorithms like Breadth-First Search and Depth-First Search, detailing their performance metrics, advantages, and disadvantages.

1 Artificial Intelligence Chapter 3 Problem Solving by Searching

Unit 3 Problem Solving by Searching

Problem Solving
- Problem solving, particularly in artificial intelligence, may be characterized as a systematic
search through a range of possible actions in order to reach some predefined goal or solution.
- Problem-solving methods divide into special purpose and general purpose.
- A special-purpose method is tailor-made for a particular problem and often exploits very
specific features of the situation in which the problem is embedded.
- In contrast, a general-purpose method is applicable to a wide variety of problems.
- One general-purpose technique used in AI is means-end analysis—a step-by-step, or
incremental, reduction of the difference between the current state and the final goal.

Four general steps in problem solving


Goal formulation
o What are the successful world states?
Problem formulation
o What actions and states should be considered, given the goal?
Search
o Determine the possible sequences of actions that lead to states of known value, then choose the best sequence.
Execute
o Given the solution, perform the actions.

Problem formulation
A problem is defined formally by five components:
– An initial state: the state from which the agent starts.
– Actions: a description of the possible actions available to the agent.
– Successor function: given a state and an action, returns the resulting successor state.
– Goal test: determines whether a given state is a goal state.
– Path cost: the sum of the costs of the individual steps along a path from the initial state to the given state.
A solution is a sequence of actions leading from the initial state to a goal state. An optimal solution has the lowest path cost.
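These five components can be captured in a small Python sketch (the class and attribute names are illustrative, not from the text):

```python
class Problem:
    """A minimal sketch of the five problem components."""
    def __init__(self, initial, successors, goal_test, step_cost):
        self.initial = initial          # initial state
        self.successors = successors    # successor function: state -> [(action, next_state)]
        self.goal_test = goal_test      # goal test: state -> bool
        self.step_cost = step_cost      # (state, action, next_state) -> cost

    def path_cost(self, path):
        """Path cost: sum of the step costs along a [state, action, state, ...] path."""
        return sum(self.step_cost(s, a, s2)
                   for s, a, s2 in zip(path[::2], path[1::2], path[2::2]))
```

A solution returned by a search routine would then be checked with goal_test and ranked by path_cost.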

Prepared By: Ramesh Kumar Chaudhary


2 Artificial Intelligence Chapter 3 Problem Solving by Searching

State Space representation


- The state space is commonly defined as a directed graph in which each node is a state and each
arc represents the application of an operator transforming a state to a successor state.
- A solution is a path from the initial state to a goal state.

State Space representation of Vacuum World Problem

Figure: The state space for the vacuum world. Links denote actions: L=Left, R=Right,
S=Suck.

– States?? Agent location and dirt status of two squares: 2 × 2^2 = 8 states.

– Initial state?? Any state can be the initial state.
– Actions?? {Left, Right, Suck}
– Goal test?? Check whether all squares are clean.
– Path cost?? Number of actions to reach the goal.
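The 8-state count can be verified by enumerating the state space directly (the state encoding is illustrative):

```python
from itertools import product

# A state is (agent location, dirt in square A?, dirt in square B?)
states = list(product(['A', 'B'], [True, False], [True, False]))
print(len(states))  # 2 locations x 2^2 dirt configurations = 8
```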

Search problem
The figure below contains a representation of a map. The nodes represent cities, and the links represent direct road connections between cities. The number associated with a link represents the length of the corresponding road.

The search problem is to find a path from a city S to a city G


Figure: A graph representation of a map (cities S, A, B, C, D, E, F, G with labelled road lengths)

This problem will be used to illustrate some search methods.

Search problems are part of a large number of real world applications:

 VLSI layout
 Path planning
 Robot navigation etc.

Types of Search Methods


There are two broad classes of search methods:

1. Uninformed (or blind) search methods;


2. Heuristically informed search methods.

In the case of the uninformed search methods, the order in which potential solution paths are
considered is arbitrary, using no domain-specific information to judge where the solution is likely
to lie.

In the case of the heuristically informed search methods, one uses domain-dependent (heuristic)
information in order to search the space more efficiently.

Informed Search vs. Uninformed Search

– Informed search uses knowledge during the process of searching; uninformed search does not require any knowledge during the search.
– Informed search finds a solution more quickly; uninformed search is comparatively much slower.
– Informed search can be either complete or incomplete; uninformed methods such as BFS are guaranteed to be complete.
– Because the search is quicker, informed search consumes much less time; uninformed search consumes comparatively more time.
– The expenses of informed search are much lower; those of uninformed search are comparatively higher.
– With informed search the agent gets suggestions regarding how and where to find a solution; with uninformed search it gets no suggestions beyond the information in the problem definition itself.
– Informed search costs less and generates quicker results, so it is comparatively more efficient; uninformed search costs more and generates slower results, so it is comparatively less efficient.
– Implementations of informed search are shorter; implementations of uninformed search are lengthier.
– Examples of informed search include Greedy Best-First Search and A* Search; examples of uninformed search include Breadth-First Search (BFS) and Depth-First Search (DFS).

Measuring problem Solving Performance


We will evaluate the performance of a search algorithm in four ways

 Completeness: An algorithm is said to be complete if it is guaranteed to find a solution whenever one exists.
 Time Complexity: How long (worst or average case) does it take to find a solution? Usually
measured in terms of the number of nodes expanded
 Space Complexity: How much space is used by the algorithm? Usually measured in terms of
the maximum number of nodes in memory at a time
 Optimality/Admissibility: If a solution is found, is it guaranteed to be an optimal one? For
example, is it the one with minimum cost?

Time and space complexity are measured in terms of

b -- maximum branching factor (maximum number of successors of any node) of the search tree

d -- depth of the least-cost solution

m -- maximum length of any path in the space


Uninformed Search Algorithms


- Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way.
- Uninformed search algorithms have no information about states beyond that given in the problem definition (only how to traverse the tree), so this class is also called blind search.
- Following are the various types of uninformed search algorithms:
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

Breadth-first Search
- Breadth-first search is the most common search strategy for traversing a tree or graph. This
algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
- BFS algorithm starts searching from the root node of the tree and expands all successor node
at the current level before moving to nodes of next level.
- The breadth-first search algorithm is an example of a general-graph search algorithm.
- Breadth-first search is implemented using a FIFO (first-in, first-out) queue data structure.

Algorithm
procedure BFS(initialState):
    queue.enqueue(initialState)
    while queue is not empty:
        current = queue.dequeue()
        if current is a goal state:
            return current
        for successor in generateSuccessors(current):
            queue.enqueue(successor)

Example:

In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K. BFS traverses level by level, so it follows the path shown by the dotted arrows, and the traversal order is:

S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
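The pseudocode above can be written as runnable Python. The adjacency dictionary below is a hypothetical tree consistent with the traversal order shown (the original figure is not reproduced here):

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search; returns the path from start to goal, or None."""
    frontier = deque([[start]])        # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()      # shallowest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        for succ in successors(node):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

# Hypothetical tree consistent with the traversal order above
tree = {'S': ['A', 'B', 'C'], 'A': ['D'], 'B': ['G', 'H'], 'C': ['E', 'F'],
        'D': [], 'G': [], 'H': ['I'], 'E': [], 'F': ['K'], 'I': [], 'K': []}
print(bfs('S', 'K', lambda n: tree.get(n, [])))  # ['S', 'C', 'F', 'K']
```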


Performance Evaluation of BFS


Time Complexity: The time complexity of BFS is given by the number of nodes traversed until the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor:

T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

Space Complexity: The space complexity of BFS is given by the memory needed for the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth,
then BFS will find a solution.

Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.

Advantages:
- BFS will provide a solution if any solution exists.
- If there is more than one solution for a given problem, BFS will provide the minimal solution, i.e. the one requiring the smallest number of steps.

Disadvantages:
- It requires lots of memory since each level of the tree must be saved into memory to expand
the next level.


- BFS needs lots of time if the solution is far away from the root node.

Depth-first Search
- Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
- It is called the depth-first search because it starts from the root node and follows each path to
its greatest depth node before moving to the next path.
- DFS uses a stack data structure for its implementation.
- The process of the DFS algorithm is similar to the BFS algorithm.

Algorithm
procedure DFS(initialState):
    stack.push(initialState)
    while stack is not empty:
        current = stack.pop()
        if current is a goal state:
            return current
        for successor in generateSuccessors(current):
            stack.push(successor)

Example:

In the search tree below, we show the flow of depth-first search, which follows the order:

Root node ---> left subtree ---> right subtree.

It starts searching from root node S and traverses A, then B, then D and E. After traversing E it backtracks, as E has no other successors and the goal node has not yet been found. After backtracking it traverses node C and then G, where it terminates because the goal node has been found.
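A runnable Python version of the same idea, using a hypothetical tree matching the S, A, B, D, E, C, G order described above:

```python
def dfs(start, goal, successors):
    """Iterative depth-first search; returns a path from start to goal, or None."""
    stack = [[start]]                  # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()             # deepest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push successors in reverse so the leftmost child is expanded first
        for succ in reversed(successors(node)):
            stack.append(path + [succ])
    return None

# Hypothetical tree matching the order S, A, B, D, E, C, G described above
tree = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'],
        'D': [], 'E': [], 'C': ['G'], 'G': []}
print(dfs('S', 'G', lambda n: tree[n]))  # ['S', 'C', 'G']
```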


Performance Evaluation of DFS


Completeness: DFS search algorithm is complete within finite state space as it will expand every
node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m is the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).

Space Complexity: DFS needs to store only a single path from the root node (plus the unexpanded siblings along it), so its space complexity is equivalent to the size of the fringe, which is O(b×m).

Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.

Advantage:
- DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
- It takes less time than BFS to reach the goal node (if it happens to explore the right path first).

Disadvantage:
- Many states may keep recurring, and there is no guarantee of finding a solution.
- DFS goes deep down into the search tree and may sometimes descend into an infinite loop.


Depth-Limited Search
- Depth-Limited Search (DLS) is a modification of the Depth-First Search (DFS) algorithm in
artificial intelligence, designed to address the issue of infinite loops that may occur in DFS
when dealing with state spaces containing cycles.
- DLS imposes a maximum depth limit on the exploration, preventing the algorithm from going
too deep into the search tree.
- Depth-limited search can terminate with two kinds of failure:
o Standard failure value: indicates that the problem has no solution at all.
o Cutoff failure value: indicates that there is no solution within the given depth limit.

Algorithm
procedure DLS(initialState, depthLimit):
    stack.push((initialState, 0))   # tuple holds the current state and its depth
    while stack is not empty:
        current, depth = stack.pop()
        if current is a goal state:
            return current
        if depth < depthLimit:
            for successor in generateSuccessors(current):
                stack.push((successor, depth + 1))
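A runnable Python sketch of depth-limited search that also distinguishes the two failure values; the tree used for demonstration is made up:

```python
def dls(start, goal, successors, limit):
    """Depth-limited DFS.  Returns a path on success, the string 'cutoff'
    if the depth limit was reached, or None (standard failure)."""
    cutoff = False
    stack = [([start], 0)]
    while stack:
        path, depth = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if depth == limit:
            cutoff = True              # a solution might exist deeper down
            continue
        for succ in reversed(successors(node)):
            stack.append((path + [succ], depth + 1))
    return 'cutoff' if cutoff else None

tree = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}   # made-up tree
```

With limit 1, dls('A', 'D', lambda n: tree[n], 1) returns 'cutoff'; with limit 2 it finds ['A', 'B', 'D'].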

Example

Performance Evaluation of DLS


Completeness: The DLS algorithm is complete if the shallowest solution lies within the depth limit (ℓ ≥ d).


Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: Space complexity of DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal
even if ℓ>d.

Advantages:
- Depth-limited search is Memory efficient.

Disadvantages:
- Depth-limited search also has a disadvantage of incompleteness.
- It may not be optimal if the problem has more than one solution.

Uniform-Cost Search
- Uniform-Cost Search (UCS) is a search algorithm used in artificial intelligence for traversing or searching tree or graph structures.
- UCS is an uninformed search algorithm that explores the search space by always selecting the path with the lowest cumulative cost.
- It is particularly useful in scenarios where the cost of reaching a state is a crucial factor, such as in pathfinding problems.

Algorithm
procedure UCS(initialState):
    priorityQueue.push((initialState, 0))   # tuple holds the state and its cumulative cost
    while priorityQueue is not empty:
        current, cumulativeCost = priorityQueue.pop()   # lowest-cost entry first
        if current is a goal state:
            return current
        for successor, actionCost in generateSuccessors(current):
            priorityQueue.push((successor, cumulativeCost + actionCost))
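In Python the priority queue is conveniently a binary heap (heapq). The road-map graph below is an assumption loosely based on the map figure earlier in the chapter, since the exact edge costs are not legible here:

```python
import heapq

def ucs(start, goal, successors):
    """Uniform-cost search: always expand the frontier node with the lowest
    cumulative path cost.  successors(n) yields (child, step_cost) pairs."""
    frontier = [(0, start, [start])]   # min-heap ordered by cumulative cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step in successors(node):
            if succ not in explored:
                heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

# Assumed road map loosely based on the earlier map figure (costs illustrative)
graph = {'S': [('A', 3), ('D', 4)], 'A': [('B', 4)], 'B': [('C', 4), ('E', 5)],
         'D': [('E', 2)], 'E': [('F', 4)], 'F': [('G', 3)], 'C': []}
print(ucs('S', 'G', lambda n: graph[n]))  # (13, ['S', 'D', 'E', 'F', 'G'])
```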

Example


Performance Evaluation of UCS


Completeness: Uniform-cost search is complete: if a solution exists, UCS will find it (assuming every step cost is at least some small positive ε).

Time Complexity: Let C* be the cost of the optimal solution and ε the smallest step cost. Then the number of steps along the optimal path is at most C*/ε + 1 (the +1 accounts for starting at depth 0). Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Space Complexity: By the same argument, the worst-case space complexity of uniform-cost search is also O(b^(1 + ⌊C*/ε⌋)).

Optimal: Uniform-cost search is always optimal as it only selects a path with the lowest path cost.

Advantages:
- Uniform cost search is optimal because at every state the path with the least cost is chosen.

Disadvantages:
- It does not care about the number of steps involved in the search and is concerned only with path cost. Because of this, the algorithm may get stuck in an infinite loop (e.g. when zero-cost actions form a cycle).


Iterative Deepening Depth-First Search


- The iterative deepening algorithm is a combination of DFS and BFS algorithms. This search
algorithm finds out the best depth limit and does it by gradually increasing the limit until a
goal is found.
- This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing
the depth limit after each iteration until the goal node is found.
- This search algorithm combines the benefits of breadth-first search (finding the shallowest goal) and depth-first search (memory efficiency).
- Iterative deepening is a useful uninformed search strategy when the search space is large and the depth of the goal node is unknown.

Algorithm
procedure IDDFS(initialState):
    depthLimit = 0
    while true:
        result = DLS(initialState, depthLimit)
        if result is a solution:
            return result
        depthLimit = depthLimit + 1
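A compact recursive Python sketch; the example tree is hypothetical but matches the iteration trace shown in the example that follows:

```python
def iddfs(start, goal, successors, max_depth=50):
    """Iterative deepening: depth-limited DFS with limit 0, 1, 2, ..."""
    def dls(path, depth_limit):
        node = path[-1]
        if node == goal:
            return path
        if depth_limit == 0:
            return None
        for succ in successors(node):
            found = dls(path + [succ], depth_limit - 1)
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):
        found = dls([start], limit)
        if found is not None:
            return found
    return None

# Hypothetical tree matching the iteration trace in the example
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I'],
        'E': [], 'F': ['K'], 'G': [], 'H': [], 'I': [], 'K': []}
print(iddfs('A', 'K', lambda n: tree[n]))  # ['A', 'C', 'F', 'K']
```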

Example

The following tree structure shows iterative deepening depth-first search. The IDDFS algorithm performs successive iterations until it finds the goal node. The iterations performed by the algorithm are:


1st iteration -----> A
2nd iteration ----> A, B, C
3rd iteration ------> A, B, D, E, C, F, G
4th iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.

Performance Evaluation of IDDFS


Completeness: This algorithm is complete if the branching factor is finite.

Time Complexity: If b is the branching factor and d the depth of the shallowest solution, then the worst-case time complexity is O(b^d).

Space Complexity: The space complexity of IDDFS is O(b×d).

Optimal: The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.

Advantages:
- It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory
efficiency.

Disadvantages:
- The main drawback of IDDFS is that it repeats all the work of the previous phase.

Bidirectional Search Algorithm


- The bidirectional search algorithm runs two simultaneous searches, one from the initial state (the forward search) and the other from the goal node (the backward search), to find the goal node.
- Bidirectional search replaces one single search graph with two smaller subgraphs: one starts the search from the initial vertex and the other from the goal vertex.
- The search stops when the two graphs intersect each other.
- Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
- In bidirectional search, the goal state must be known in advance.

Example:

In the search tree below, the bidirectional search algorithm is applied. The algorithm divides one graph/tree into two sub-graphs: it starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction.

The algorithm terminates at node 9 where two searches meet.


Performance Evaluation of Bidirectional Search


Completeness: Bidirectional search is complete if we use BFS in both searches.

Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is Optimal.

Advantages:
- Bidirectional search is fast.
- Bidirectional search requires less memory.

Disadvantages:
- Implementation of the bidirectional search tree is difficult.

Informed Search
- Informed Search, also known as heuristic search or guided search, is an approach in artificial
intelligence where additional knowledge about the problem domain is utilized to make more
informed decisions during the search process.
- Unlike uninformed search algorithms, informed search algorithms make use of heuristics or
other domain-specific information to guide the exploration of the search space more efficiently.


- The goal is to prioritize paths that are more likely to lead to a solution, reducing the overall
search effort.
- The informed search algorithm is more useful for large search spaces. Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.

Heuristics function
- A heuristic is a function used in informed search to identify the most promising path.
- It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal.
- The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time.
- The heuristic function estimates how close a state is to the goal.
- It is written h(n), and it estimates the cost of an optimal path from state n to the goal state.
- The value of the heuristic function is always non-negative.

Admissibility of the heuristic function


h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost and h*(n) is the actual cost of an optimal path from n to the goal. An admissible heuristic never overestimates: the estimated cost must be less than or equal to the true cost.
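For instance, with hypothetical values for a toy problem, admissibility is just a pointwise check:

```python
# Heuristic estimates h(n) and true optimal costs h*(n) for a made-up problem
h      = {'S': 7, 'A': 6, 'B': 2, 'G': 0}
h_star = {'S': 9, 'A': 6, 'B': 3, 'G': 0}
admissible = all(h[n] <= h_star[n] for n in h)
print(admissible)  # True
```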

Greedy Best First Search


- Greedy best-first search is an informed search algorithm where the evaluation function is
strictly equal to the heuristic function, disregarding the edge weights in a weighted graph
because only the heuristic value is considered.
- In order to search for a goal node it expands the node that is closest to the goal as determined
by the heuristic function.
- This approach assumes that it is likely to lead to a solution quickly.
- However, the solution from a greedy best-first search may not be optimal since a shorter path
may exist.
- In the best case, the search cost is minimal, since the solution is found without expanding any node that is not on the solution path.
- This algorithm is not complete, since it can lead to a dead end. It is called "greedy" because at each step it tries to get as close to the goal as it can.


Algorithm
1. Put the start node in the open list.
2. If the open list is empty, return failure.
3. Remove the node with the lowest h(n) value from the open list and add it to the closed list.
4. If this node is the target, return success; otherwise, add each successor that appears in neither the open list nor the closed list to the open list, and go to step 2.

Example

Consider finding the path from P to S in the following graph:

In this example, the cost is measured strictly using the heuristic value. In other words, how close
it is to the target.


C has the lowest cost of 6. Therefore, the search will continue like so:

U has the lowest cost compared to M and R, so the search will continue by exploring U. Finally,
S has a heuristic value of 0 since that is the target node:

The total cost for the path (P -> C -> U -> S) evaluates to 11. The potential problem with a greedy
best-first search is revealed by the path (P -> R -> E -> S) having a cost of 10, which is lower than
(P -> C -> U -> S). Greedy best-first search ignored this path because it does not consider the edge
weights.
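The example can be reproduced with a short Python sketch. The adjacency lists and heuristic values below are assumptions (the figure's exact numbers are not reproduced in the text), chosen so that h(C) = 6 beats R, U beats M and R, and the target S has h = 0:

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Expand the open node with the smallest heuristic value h(n);
    edge weights are ignored entirely."""
    frontier = [(h(start), [start])]
    visited = {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for succ in successors(node):
            if succ not in visited:
                visited.add(succ)
                heapq.heappush(frontier, (h(succ), path + [succ]))
    return None

# Assumed adjacency and heuristic values for the P-to-S example
graph = {'P': ['C', 'R'], 'C': ['M', 'R', 'U'], 'R': ['E'],
         'U': ['S'], 'M': [], 'E': ['S'], 'S': []}
h = {'P': 10, 'C': 6, 'R': 8, 'M': 9, 'U': 4, 'E': 3, 'S': 0}
print(greedy_best_first('P', 'S', lambda n: graph[n], lambda n: h[n]))
# ['P', 'C', 'U', 'S']
```

Note that the cheaper path through R and E is never considered, exactly as described above.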


Performance Evaluation of Greedy Best First Search


Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is also O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is incomplete, even if the given state space is finite, since it can get stuck in loops or dead ends.

Optimal: Greedy best first search algorithm is not optimal.

Advantages:
- Best first search can switch between BFS and DFS by gaining the advantages of both the
algorithms.
- This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:
- It can behave as an unguided depth-first search in the worst case scenario.
- It can get stuck in a loop as DFS.
- This algorithm is not optimal.


Hill Climbing Search


Hill climbing can be used to solve problems that have many solutions, some of which are better
than others. It starts with a random (potentially poor) solution, and iteratively makes small changes
to the solution, each time improving it a little. When the algorithm cannot see any improvement
anymore, it terminates. Ideally, at that point the current solution is close to optimal, but it is not
guaranteed that hill climbing will ever come close to the optimal solution.

For example, hill climbing can be applied to the traveling salesman problem. It is easy to find a
solution that visits all the cities but will be very poor compared to the optimal solution. The
algorithm starts with such a solution and makes small improvements to it, such as switching the
order in which two cities are visited. Eventually, a much better route is obtained. In hill climbing
the basic idea is to always head towards a state which is better than the current one. So, if you are
at town A and you can get to town B and town C (and your target is town D) then you should make
a move IF town B or C appear nearer to town D than town A does.

The hill climbing can be described as follows:

1. Start with current-state = initial-state.


2. Until current-state = goal-state OR there is no change in current-state do:
 Get the successors of the current state and use the evaluation function to assign a score to each successor.
 If one of the successors has a better score than the current-state, then set the new current-state to be the successor with the best score.

Hill climbing terminates when there are no successors of the current state which are better than the
current state itself.

Hill climbing is depth-first search with a heuristic measurement that orders choices as nodes
are expanded. It always selects the most promising successor of the node last expanded.

For instance, consider that the most promising successor of a node is the one that has the shortest
straight-line distance to the goal node G. In figure below, the straight line distances between each
city and goal G is indicated in square brackets, i.e. the heuristic.
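The loop above can be sketched as steepest-ascent hill climbing in Python; the objective and neighbourhood below are toy assumptions, not from the text:

```python
def hill_climb(initial, neighbors, score):
    """Steepest-ascent hill climbing: move to the best-scoring neighbour
    until no neighbour improves on the current state."""
    current = initial
    while True:
        best = max(neighbors(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current             # local (not necessarily global) optimum
        current = best

# Toy objective: maximise f(x) = -(x - 7)^2 over the integers, stepping by 1
f = lambda x: -(x - 7) ** 2
print(hill_climb(0, lambda x: [x - 1, x + 1], f))  # 7
```

Because the objective here has a single peak, the climber always reaches the optimum; on multi-peaked objectives it can stop at a local maximum.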

Simulated Annealing:
It is motivated by the physical annealing process in which material is heated and slowly cooled
into a uniform structure. Compared to hill climbing, the main difference is that SA allows downward steps. Simulated annealing also differs from hill climbing in that a move is selected at random, after which the algorithm decides whether to accept it. If the move is better than the current position then


simulated annealing will always take it. If the move is worse (i.e. of lesser quality) then it will be accepted with some probability. A worse state is accepted when

e^(-c/t) > r

where

c = the change in the evaluation function

t = the current temperature

r = a random number between 0 and 1

The probability of accepting a worse state is thus a function of both the current temperature and the change in the cost function. The most common way of implementing an SA algorithm is to implement hill climbing with an accept function and modify it for SA.

By analogy with this physical process, each step of the SA algorithm replaces the current solution
by a random "nearby" solution, chosen with a probability that depends on the difference between
the corresponding function values and on a global parameter T (called the temperature), that is
gradually decreased during the process. The dependency is such that the current solution changes
almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance
for "uphill" moves saves the method from becoming stuck at local optima—which are the bane of
greedier methods.
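A minimal Python sketch of the procedure, minimizing a toy cost function; the geometric cooling schedule and all parameters are illustrative choices, not prescribed by the text:

```python
import math, random

def simulated_annealing(initial, neighbor, cost, t0=10.0, cooling=0.95, steps=2000):
    """Minimise cost(): always accept improving moves; accept a worse move
    with probability exp(-delta/t), where the temperature t slowly decreases."""
    current, t = initial, t0
    best = current
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate        # accept the move
        if cost(current) < cost(best):
            best = current             # remember the best state seen so far
        t *= cooling                   # geometric cooling schedule
    return best

# Toy run: minimise (x - 3)^2 starting from 0 with random unit steps
random.seed(0)
result = simulated_annealing(0.0, lambda x: x + random.uniform(-1, 1),
                             lambda x: (x - 3) ** 2)
```

Early on, when t is large, the acceptance probability is close to 1 even for bad moves; as t shrinks the algorithm behaves like hill climbing.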

Game Search:
Games are a form of multi-agent environment:

– What do other agents do, and how do they affect our success?
– Cooperative vs. competitive multi-agent environments.
– Competitive multi-agent environments give rise to adversarial search, often known as games.

Games with an adversary:

– The solution is a strategy (a strategy specifies a move for every possible opponent reply).
– Time limits force an approximate solution.
– An evaluation function evaluates the "goodness" of a game position.
– Examples: chess, checkers, Othello, backgammon.


Difference between the search space of a game and the search space of a problem: In the first case
it represents the moves of two (or more) players, whereas in the latter case it represents the "moves"
of a single problem-solving agent.

An exemplary game: Tic-tac-toe

There are two players denoted by X and O. They are alternatively writing their letter in one of the
9 cells of a 3 by 3 board. The winner is the one who succeeds in writing three letters in line.

The game begins with an empty board. It ends in a win for one player and a loss for the other, or
possibly in a draw.

A complete tree is a representation of all the possible plays of the game. The root node is the initial
state, in which it is the first player's turn to move (the player X).

The successors of the initial state are the states the player can reach in one move, their successors
are the states resulting from the other player's possible replies, and so on.

Terminal states are those representing a win for X, loss for X, or a draw.

Each path from the root node to a terminal node gives a different complete play of the game. Figure
given below shows the initial search space of Tic-Tac-Toe.


A game can be formally defined as a kind of search problem as below:

 Initial state: It includes the board position and identifies the players to move.
 Successor function: It gives a list of (move, state) pairs each
indicating a legal move and resulting state.
 Terminal test: This determines when the game is over. States where
the game is ended are called terminal states.
 Utility function: It gives a numerical value for the terminal states, e.g. win
(+1), lose (-1) and draw (0). Some games have a wider variety of
possible outcomes, e.g. ranging from +192 to -192.

The Minimax Algorithm:


Let us assign the following values for the game: 1 for win by X, 0 for draw, -1 for loss by X.

Given the values of the terminal nodes (win for X (1), loss for X (-1), or draw (0)), the values
of the non-terminal nodes are computed as follows:

• the value of a node where it is the turn of player X to move is the maximum of the
values of its successors (because X tries to maximize its outcome);
• the value of a node where it is the turn of player O to move is the minimum of the
values of its successors (because O tries to minimize the outcome of X).

Figure below shows how the values of the nodes of the search tree are computed from the values
of the leaves of the tree. The values of the leaves of the tree are given by the rules of the game:

 1 if there are three X in a row, column or diagonal;


 -1 if there are three O in a row, column or diagonal;
 0 otherwise


An Example:

Consider the following game tree (drawn from the point of view of the Maximizing player):

Max                     a
                   /         \
Min              b             c
               /   \         /   \
Max           d     e       f     g
             /|\   / \     / \   /|\
            h i j  k  l   m  n  o p r
            5 3 3  1  4   7  5  9 2 7

Show what moves should be chosen by the two players, assuming that both are using the mini-max procedure.

Solution:


Max                     a = 7
                   /         \
Min            b = 4         c = 7
               /   \         /   \
Max         d = 5  e = 4  f = 7  g = 9
             /|\   / \     / \   /|\
            h i j  k  l   m  n  o p r
            5 3 3  1  4   7  5  9 2 7

Figure 3.16: The mini-max path for the game tree
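The backed-up values in Figure 3.16 can be reproduced with a short recursive sketch. The dict-based node representation below is hypothetical (chosen only for illustration), and the grouping of leaves under d, e, f, g follows the backed-up values in the figure (d = max(5,3,3), e = max(1,4), f = max(7,5), g = max(9,2,7)):

```python
def minimax(node, is_max_turn):
    """Return the minimax value of a game-tree node.
    A node is either a terminal {'value': v} or an inner {'children': [...]}."""
    if "value" in node:                              # terminal state: use its utility
        return node["value"]
    values = [minimax(child, not is_max_turn) for child in node["children"]]
    # Max takes the maximum of its successors, Min takes the minimum
    return max(values) if is_max_turn else min(values)

leaf = lambda v: {"value": v}
tree = {"children": [                                # a (Max to move)
    {"children": [                                   # b (Min to move)
        {"children": [leaf(5), leaf(3), leaf(3)]},   # d -> h, i, j
        {"children": [leaf(1), leaf(4)]},            # e -> k, l
    ]},
    {"children": [                                   # c (Min to move)
        {"children": [leaf(7), leaf(5)]},            # f -> m, n
        {"children": [leaf(9), leaf(2), leaf(7)]},   # g -> o, p, r
    ]},
]}
```

Here `minimax(tree, True)` backs up b = 4 and c = 7, so Max moves toward c for a value of 7, as in the figure.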

Alpha-Beta Pruning:
The problem with minimax search is that the number of game states it has to examine is exponential
in the number of moves. Unfortunately, we cannot eliminate the exponent, but we can effectively
cut it in half. The idea is to compute the correct minimax decision without looking at every node
in the game tree, which is the concept behind pruning: eliminating large parts of the tree from
consideration. The particular pruning technique discussed here is "Alpha-Beta Pruning". When
this approach is applied to a standard minimax tree, it returns the same move as minimax would,
but prunes away branches that cannot possibly influence the final decision. Alpha-beta pruning
can be applied to trees of any depth, and it is often possible to prune entire sub-trees rather than
just leaves.

Alpha-beta pruning is a technique for evaluating nodes of a game tree that eliminates unnecessary
evaluations. It uses two parameters, alpha and beta.

Alpha: is the value of the best (i.e. highest value) choice we have found so far at any choice point
along the path for MAX.

Beta: is the value of the best (i.e. lowest-value) choice we have found so far at any choice point
along the path for MIN.

Alpha-beta search updates the values of alpha and beta as it goes along and prunes the remaining
branches at a node as soon as the value of the current node is known to be worse than the current
alpha or beta for MAX or MIN respectively.

An alpha cutoff:


To apply this technique, one uses a parameter called alpha that represents a lower bound for the
achievement of the Max player at a given node.

Let us consider that the current board situation corresponds to the node A in the following figure.

The minimax method uses a depth-first search strategy in evaluating the descendants of a node. It
will therefore estimate first the value of the node B. Let us suppose that this value has been
evaluated to 15, either by using a static evaluation function, or by backing up from descendants
omitted in the figure. If Max moves to B, then it is guaranteed to achieve 15. Therefore 15 is a
lower bound for the achievement of the Max player (it may still be possible to achieve more,
depending on the values of the other descendants of A). This value, α = 15, will be passed
upward to the node A and will be used for evaluating the other possible moves from A.

To evaluate the node C, its left-most child D has to be evaluated first. Let us assume that the value
of D is 10 (this value has been obtained either by applying a static evaluation function directly to
D, or by backing up values from descendants omitted in the figure). Because this value is less than
α = 15, the value of the node E need not be evaluated. Indeed, if the value of E is greater than 10,
Min will move to D, which has the value 10 for Max. Otherwise, if the value of E is less than 10,
Min will move to E, which has a value less than 10. So, if Max moves to C, the best it can get is
10, which is less than the α = 15 that would be obtained if Max moved to B. Therefore, the best
move for Max is to B, independent of the value of E. The elimination of the node E is an alpha cutoff.

One should notice that E may itself have a huge subtree. Therefore, the elimination of E means, in
fact, the elimination of this subtree.

A beta cutoff:

To apply this technique, one uses a parameter called beta that represents an upper bound for the
achievement of the Max player at a given node.

In the above tree, the Max player moved to the node B. Now it is the turn of the Min player to
decide where to move:

The Min player also evaluates its descendants in a depth-first order.

Let us assume that the value of F has been evaluated to 15. From the point of view of Min, this is
an upper bound for the achievement of Min (it may still be possible to make Min achieve less,
depending on the values of the other descendants of B). Therefore the value β = 15 will be used
for evaluating the other possible moves from B.

To evaluate the node G, its left-most child H is evaluated first. Let us assume that the value of H
is 25 (this value has been obtained either by applying a static evaluation function directly to H, or
by backing up values from descendants omitted in the figure). Because this value is greater than
β = 15, the value of node I need not be evaluated. Indeed, if the value of I is v ≥ 25, then Max
(in G) will move to I. Otherwise, if the value of I is less than 25, Max will move to H. So in both
cases, the value obtained by Max is at least 25, which is greater than the β = 15 that would be
obtained by Max if Min moved to F.

Therefore, the best move for Min is at F, independent of the value of I. The elimination of the node
I is a beta cutoff.

One should notice that by applying alpha and beta cut-off, one obtains the same results as in the
case of mini-max, but (in general) with less effort. This means that, in a given amount of time, one
could search deeper in the game tree than in the case of mini-max.
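The two cutoffs can be sketched by threading the alpha and beta bounds through an otherwise plain minimax recursion. This is a generic illustrative sketch (the dict-based node representation is hypothetical, not from the text), applied to the alpha-cutoff example above, where A is a Max node with children B (value 15) and C, and C is a Min node with children D (value 10) and E:

```python
def alphabeta(node, is_max_turn, alpha=float("-inf"), beta=float("inf")):
    """Minimax value of `node`, pruning branches that cannot
    influence the final decision (alpha-beta pruning)."""
    if "value" in node:                      # terminal: static value
        return node["value"]
    if is_max_turn:
        best = float("-inf")
        for child in node["children"]:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)         # raise Max's lower bound
            if alpha >= beta:                # beta cutoff
                break
        return best
    best = float("inf")
    for child in node["children"]:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)               # lower Min's upper bound
        if alpha >= beta:                    # alpha cutoff
            break
    return best

# A (Max) -> B = 15 and C (Min) -> D = 10, E (value never examined)
leaf = lambda v: {"value": v}
A = {"children": [leaf(15),                               # B
                  {"children": [leaf(10), leaf(999)]}]}   # C -> D, E
```

Here `alphabeta(A, True)` returns 15 without ever looking at E: after B, alpha at A is 15; once D is seen, beta at C drops to 10 ≤ alpha, and the loop breaks, which is exactly the alpha cutoff described above.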

Constraint Satisfaction Problem


A Constraint Satisfaction Problem is characterized by:

• a set of variables {x1, x2, .., xn},

• for each variable xi a domain Di with the possible values for that variable, and
• a set of constraints, i.e. relations, that are assumed to hold between the values of the
variables. [These relations can be given intensionally, i.e. as a formula, or extensionally,
i.e. as a set, or procedurally, i.e. with an appropriate generating or recognizing function.]
We will only consider constraints involving one or two variables.

The constraint satisfaction problem is to find, for each i from 1 to n, a value in Di for xi so that all
constraints are satisfied. That is, we must find a value for each of the variables that satisfies
all of the constraints.

A CS problem can easily be stated as a sentence in first order logic, of the form:

(exist x1)..(exist xn) (D1(x1) & .. & Dn(xn) & C1 & .. & Cm)

A CS problem is usually represented as an undirected graph, called a Constraint Graph, where the
nodes are the variables and the edges are the binary constraints. Unary constraints can be disposed
of by just redefining the domains to contain only the values that satisfy all the unary constraints.
Higher order constraints are represented by hyperarcs. In the following we restrict our attention to
the case of unary and binary constraints.

Constraints
• A constraint is a relation between a local collection of variables.
• The constraint restricts the values that these variables can simultaneously have.
• For example, all-diff(X1, X2, X3). This constraint says that X1, X2, and X3 must take on
different values. Say that {1,2,3} is the set of values for each of these variables then:

X1=1, X2=2, X3=3   OK
X1=1, X2=1, X3=3   NO
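The all-diff constraint above is easy to state directly in code; the helper below is a hypothetical illustration (the name `all_diff` is chosen here, not given by the text):

```python
def all_diff(*values):
    """The all-diff constraint: satisfied iff all values are pairwise different."""
    return len(set(values)) == len(values)
```

For the two assignments above, `all_diff(1, 2, 3)` holds, while `all_diff(1, 1, 3)` fails because X1 and X2 take the same value.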

The constraints are the key component in expressing a problem as a CSP.

• The constraints are determined by how the variables and the set of values are chosen.
• Each constraint consists of:
1. A set of variables it is over.
2. A specification of the sets of assignments to those variables that satisfy the constraint.
• The idea is that we break the problem up into a set of distinct conditions, each of which
has to be satisfied for the problem to be solved.

Example (N-Queens):

Place N queens on an N x N chess board so that no queen can attack any other queen.

• No queen can attack any other queen.


• Given any two queens Qi and Qj they cannot attack each other.
• Now we translate each of these individual conditions into a separate constraint.
o Qi cannot attack Qj (i ≠ j)
 Qi is a queen to be placed in column i, Qj is a queen to be placed in column j.
 The value of Qi and Qj are the rows the queens are to be placed in.
• Note the translation is dependent on the representation we chose.


Queens can attack each other

1. Vertically, if they are in the same column---this is impossible, as Qi and Qj are placed
in different columns.
2. Horizontally, if they are in the same row---we need the constraint Qi ≠ Qj.
3. Along a diagonal, they cannot be the same number of columns apart as they are rows
apart: we need the constraint |i - j| ≠ |Qi - Qj| (| | is absolute value).

Representing the Constraints

1. Between every pair of variables (Qi,Qj) (i ≠j), we have a constraint Cij.


2. For each Cij, an assignment of values to the variables Qi = A and Qj = B satisfies this
constraint if and only if:
A ≠ B
|A - B| ≠ |i - j|

Solutions

• A solution to the N-Queens problem will be any assignment of values to the variables
Q1,…,QN that satisfies all of the constraints.
• Constraints can be over any collection of variables. In N-Queens we only need binary
constraints---constraints over pairs of variables.
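The formulation above can be turned into a small backtracking solver. This is an illustrative sketch (the function names `n_queens`, `consistent`, and `extend` are hypothetical), where `assignment[i]` holds the 0-indexed row of the queen in column i:

```python
def n_queens(n):
    """Find one assignment Q1..Qn satisfying the N-Queens constraints."""
    def consistent(assignment, row):
        col = len(assignment)          # column of the queen being placed
        # Enforce A != B and |A - B| != |i - j| against every earlier column
        return all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(assignment))

    def extend(assignment):
        if len(assignment) == n:       # every variable has a value
            return assignment
        for row in range(n):           # try each value in the domain
            if consistent(assignment, row):
                solution = extend(assignment + [row])
                if solution is not None:
                    return solution
        return None                    # dead end: backtrack

    return extend([])
```

For n = 8 this returns one of the 92 solutions; both binary constraints (different rows, different diagonals) are checked in `consistent` before a value is assigned, so every completed assignment satisfies all constraints.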

Assignment #3
1. Construct a state space with appropriate heuristics and local costs. Show that Greedy Best
First search is not complete for the state space. Also illustrate A* is complete and
guarantees solution for the same state space. (10) [TU 2076]
2. Illustrate with an example how the uniform cost search algorithm can be used for finding a
goal in a state space. (5) [TU 2076]
3. How is informed search different from uninformed search? Given the following state space,
illustrate how depth limited search and iterative deepening search work. Use your own
assumption for the depth limit.


Here, S is start and K is goal. (3+7) [TU 2078]

4. Given the following search space, determine if there exist any alpha and beta cutoffs. (5)
[TU 2078]

5. Define admissible heuristic with an example. Explain the working mechanism and
limitations of hill climbing search. (10) [TU 2079]
6. How do you define a problem? What are the criteria for defining a problem? Compare
Constraint Satisfaction Problems and Real World Problems in detail with appropriate
examples. (10) [TU 2079]
7. What is state space representation? Illustrate with one example. (5) [TU 2079]
8. Define game. Write the benefits and limitations of depth limited search. (5) [TU 2079]
