Chapter 3 Searching and Solving Problems
Furthermore, they have no knowledge of what their actions do or of what they are trying to
achieve
Goal Formulation
what are the successful world states
Problem Formulation
what actions and states to consider, given the goal
Search
determine the possible sequences of actions that lead to states of known value, and then choose the best sequence
Execute
given the solution, perform the actions
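As a rough illustration, these four steps can be combined into a single agent loop. Below is a minimal Python sketch; formulate_goal, formulate_problem and search are hypothetical helpers supplied by the caller, not a specific library API.

```python
# Minimal sketch of the "formulate, search, execute" loop described above.
# The three helper functions are assumptions passed in by the caller.

def simple_problem_solving_agent(state, formulate_goal, formulate_problem, search):
    """Formulate a goal and a problem from the current state, search for a
    solution (an action sequence), and return the actions to execute."""
    goal = formulate_goal(state)               # goal formulation
    problem = formulate_problem(state, goal)   # problem formulation
    plan = search(problem)                     # search: action sequence or None
    return plan if plan is not None else []    # execute: perform these actions
```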
GOAL FORMULATION
To solve a problem, the first step is the goal formulation, based on the current situation
The goal is formulated as a set of world states, in which the goal is satisfied
Goal: the objectives that the agent is trying to achieve and, hence, the actions it needs to consider.
Reaching a goal state from the initial state requires actions.
Actions are the operators
⚫ causing transitions between world states
⚫ Actions should be abstract to a certain degree, instead of very detailed
Problem formulation is the process of deciding what actions and states to consider, given a goal.
An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
Thus, we have a simple "formulate, search, execute" design for the agent
Problem types include multiple-state problems and exploration problems.
A problem is really a collection of information that the agent will use to decide what
to do
Initial state
Actions
Goal Test
Path Cost
State: Description of the state of the world in which the search agent finds itself.
Starting / initial state: The initial state in which the search agent is started.
Goal state: If the agent reaches a goal state, then it terminates and outputs a solution (if
desired).
Actions: All of the agent's allowed actions.
Solution: The path in the search tree from the starting state to the goal state.
Cost function: Assigns a cost value to every action. Necessary for finding a cost-optimal
solution.
State space: Set of all states.
A path in the state space is any sequence of states connected by a sequence of actions.
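As a concrete illustration of these components, here is a minimal Python sketch of a problem description. The field names (initial_state, successors, goal_test, step_cost) are illustrative choices mirroring the list above, not a specific library's interface; the other search sketches in this chapter assume the same interface.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

# Illustrative bundle of the problem components listed above.
@dataclass
class Problem:
    initial_state: Any
    # successor function: state -> iterable of (action, next_state) pairs
    successors: Callable[[Any], Iterable[Tuple[Any, Any]]]
    # goal test: state -> True if the state satisfies the goal
    goal_test: Callable[[Any], bool]
    # cost of taking `action` in `state` and reaching `next_state`
    step_cost: Callable[[Any, Any, Any], float] = lambda s, a, s2: 1.0

def path_cost(problem: Problem, states, actions) -> float:
    """Cost of a path = sum of the individual step costs along it."""
    return sum(problem.step_cost(s, a, s2)
               for s, a, s2 in zip(states, actions, states[1:]))
```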
The term operator is used to denote the description of an action in terms of which state will be reached by carrying out the action in a particular state.
Together, these define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions.
The goal test is applied to the current state to check whether the agent has reached its goal.
Sometimes the goal is described by properties instead of explicitly listing the set of goal states.
Example: Chess
o The agent wins if it can capture the opponent's KING on the next move (checkmate).
The cost of a path is the sum of the costs of the individual actions along the path.
The solution of a problem is then a path from the initial state to a state satisfying the
goal test
An optimal solution is the solution with the lowest path cost among all solutions.
E.g., a solution path is a sequence of actions {a1, a2, ..., am}.
The 8-puzzle
Problem formulation
State: the location of each of the eight tiles in one of the nine squares
Operators: blank space moves left, right, up, or down
Goal test: the state matches the goal configuration (the goal state)
Path cost: each step costs 1, since we are moving one step each time.
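A minimal Python sketch of this 8-puzzle formulation: a state is a tuple of nine entries read row by row, with 0 standing for the blank; the goal configuration shown here is an assumption, since the goal figure is not reproduced above.

```python
# 8-puzzle sketch: state = tuple of 9 tiles, 0 is the blank square.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal configuration

def successors(state):
    """Yield (action, next_state) pairs for sliding the blank."""
    i = state.index(0)                  # position of the blank
    row = i // 3
    moves = {"up": -3, "down": +3, "left": -1, "right": +1}
    for action, delta in moves.items():
        j = i + delta
        # stay on the board; left/right moves must not wrap to another row
        if 0 <= j < 9 and not (abs(delta) == 1 and j // 3 != row):
            tiles = list(state)
            tiles[i], tiles[j] = tiles[j], tiles[i]
            yield action, tuple(tiles)

def goal_test(state):
    return state == GOAL                # each move has path cost 1
```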
EXERCISE:1 RIVER CROSSING PUZZLES
1. Goat, Wolf and Cabbage problem
A farmer returns from the market, where he bought a goat, a cabbage and a wolf. On the way home he must cross a river. His boat is small and unable to transport more than one of his purchases. He cannot leave the goat alone with the cabbage (because the goat would eat it), nor can he leave the goat alone with the wolf (because the goat would be eaten). How can the farmer get everything safely to the other side?
Search Strategies
3. Search
Because there are many ways to achieve the same goal
⚫ those ways can together be expressed as a tree
⚫ the agent can examine the different possible sequences of actions and choose the best
⚫ This process of looking for the best sequence is called search.
Search Algorithm:
Defined as
⚫ taking a problem as input and returning a solution in the form of an action sequence
Design of an agent
⚫ “Formulate, search, execute”
Expanding
⚫ applying the successor function to the current state
Leaf nodes
⚫ the states having no successors
The root of the search tree is a search node corresponding to the initial state.
The leaf nodes of the tree correspond to states that do not have successors in the tree (either because they have not been expanded yet or because they were expanded but generated no successors).
At each step, the search algorithm chooses one leaf node to expand.
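The expansion loop just described can be sketched generically; the Problem interface (initial_state, successors, goal_test) is the illustrative one assumed in the earlier sketch, and no repeated-state checking is done (it is a pure tree search).

```python
# Generic tree search: keep a list of frontier (leaf) nodes, repeatedly choose
# one to expand by applying the successor function, stop at a goal state.
# Which leaf is chosen (pop_index) is what distinguishes the search strategies.

def tree_search(problem, pop_index=0):
    frontier = [(problem.initial_state, [])]   # root node: initial state, empty path
    while frontier:
        state, path = frontier.pop(pop_index)  # choose one leaf node to expand
        if problem.goal_test(state):
            return path                        # solution: a sequence of actions
        for action, child in problem.successors(state):
            frontier.append((child, path + [action]))
    return None                                # no solution found
```

With pop_index=0 the frontier behaves as a FIFO queue (breadth-first order); with pop_index=-1 it behaves as a stack (depth-first order).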
Optimality: does the strategy find the highest-quality solution when there are several
different solutions?
SEARCH METHODS:
Uninformed search strategies have no information about the number of steps or the path cost from the current state to the goal.
Uninformed search
⚫ Breadth first search
⚫ Depth first search
⚫ Uniform cost search
⚫ Depth limited search
⚫ Iterative deepening search
Informed search
⚫ Greedy search
⚫ A*-search
Breadth-first search
The root node is expanded first; then all the nodes generated by the root node are expanded, and so on.
In general, all the nodes at depth d in the search tree are expanded before the nodes
at depth d + 1
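A minimal Python sketch of breadth-first search over the assumed Problem interface; a visited set keeps hashable states from being re-expanded.

```python
from collections import deque

# Breadth-first search: expand the shallowest unexpanded node first, so all
# nodes at depth d are expanded before any node at depth d + 1.

def breadth_first_search(problem):
    frontier = deque([(problem.initial_state, [])])  # FIFO queue of (state, path)
    visited = {problem.initial_state}                # avoid re-expanding states
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path                              # shallowest solution
        for action, child in problem.successors(state):
            if child not in visited:
                visited.add(child)
                frontier.append((child, path + [action]))
    return None
```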
The disadvantage
⚫ if the branching factor of a node is large, then even for small instances (e.g., chess), the number of nodes that must be generated and stored grows very quickly
Exercise 1
(Figure omitted: a search graph with numbered nodes, a marked initial state and a goal state, on which to trace the search.)
Uniform cost search
Uniform cost search expands the node with the lowest path cost g(n) first.
It is easy to see that breadth-first search is just uniform cost search with g(n) = DEPTH(n).
It finds the cheapest solution provided the cost of a path never decreases along the path: any cheaper path to a node would have been expanded earlier, and thus would have been found first.
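A minimal uniform cost search sketch over the same assumed Problem interface (here also using step_cost); it always expands the cheapest frontier node first.

```python
import heapq
from itertools import count

# Uniform cost search: expand the frontier node with the lowest path cost g(n).
# With every step costing 1, g(n) = DEPTH(n) and this reduces to breadth-first
# search, as noted above.

def uniform_cost_search(problem):
    tie = count()                                    # tie-breaker for the heap
    frontier = [(0.0, next(tie), problem.initial_state, [])]
    best_cost = {problem.initial_state: 0.0}
    while frontier:
        g, _, state, path = heapq.heappop(frontier)  # cheapest node first
        if problem.goal_test(state):
            return path, g                           # optimal if step costs >= 0
        for action, child in problem.successors(state):
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_cost.get(child, float("inf")):
                best_cost[child] = g2
                heapq.heappush(frontier, (g2, next(tie), child, path + [action]))
    return None, float("inf")
```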
A route-finding problem: (a) the state space, showing the cost for each operator (figure omitted).
Depth-first search
Not complete
⚫ because a path may be infinite or looping
⚫ the search may then never fail and back up to try another option
Not optimal
⚫ it does not guarantee the best solution
It overcomes
• the time and space complexity problems of breadth-first search, since it needs to store only a single path from the root to a leaf node, together with the remaining unexpanded sibling nodes
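A minimal recursive depth-first search sketch (same assumed Problem interface). It keeps only the current path in memory, which is where the space saving comes from; cycles on the current path are skipped, but infinite paths can still make it run forever.

```python
# Depth-first search (recursive sketch over the assumed Problem interface).

def depth_first_search(problem, state=None, path=None, on_path=None):
    if state is None:                            # first call: start at the root
        state, path, on_path = problem.initial_state, [], frozenset()
    if problem.goal_test(state):
        return path
    on_path = on_path | {state}                  # avoid looping back on this path
    for action, child in problem.successors(state):
        if child not in on_path:
            result = depth_first_search(problem, child, path + [action], on_path)
            if result is not None:
                return result
    return None                                  # dead end: backtrack
```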
ITERATIVE DEEPENING SEARCH
It is a modified version of DFS.
It sidesteps the issue of choosing the best depth limit.
It tries all possible depth limits:
⚫ first 0, then 1, then 2, and so on
⚫ it terminates when the depth limit reaches d, the depth of the shallowest goal.
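A minimal sketch of iterative deepening built on a depth-limited helper (same assumed Problem interface); the helper also illustrates the cutoff vs. standard failure values discussed below.

```python
# Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ... in turn.

def depth_limited_search(problem, state, limit, path=()):
    if problem.goal_test(state):
        return list(path)
    if limit == 0:
        return "cutoff"                          # cutoff failure value
    cutoff = False
    for action, child in problem.successors(state):
        result = depth_limited_search(problem, child, limit - 1, path + (action,))
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None          # standard failure: no solution

def iterative_deepening_search(problem, max_depth=50):
    for limit in range(max_depth + 1):           # first 0, then 1, 2, and so on
        result = depth_limited_search(problem, problem.initial_state, limit)
        if result != "cutoff":
            return result                        # a solution, or None
    return None
```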
Advantages:
It combines the benefits of BFS and DFS in terms of fast search and memory efficiency.
Disadvantages:
The main drawback of IDS is that it repeats all the work of the previous phase.
Depth-limited search
With a sufficiently large depth limit, we are guaranteed to find the solution if it exists, but we are still not guaranteed to find the shortest solution first.
If we choose a depth limit that is too small, then depth-limited search is not even complete.
The time and space complexity of depth-limited search is similar to depth-first search.
It takes O(b^l) time and O(b·l) space, where b is the branching factor and l is the depth limit.
Standard failure value: indicates that the problem has no solution.
Cutoff failure value: indicates that there is no solution to the problem within the given depth limit.
Advantages:
Depth-limited search is memory efficient.
Disadvantages:
It may not be optimal if the problem has more than one solution.
Completeness: the DLS algorithm is complete if the solution is within the depth limit.
Optimal: depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if l > d.
BIDIRECTIONAL SEARCH
The idea is to run two simultaneous searches: one forward from the initial state and another backward from the goal, stopping when the two searches meet.
It can be much faster because the forward and backward searches each have to go only half way: b^(d/2) + b^(d/2) is much less than b^d.
The main question is, what does it mean to search backwards from the goal?
Searching backward means generating predecessors successively starting from the goal
node.
We need to decide what kind of search is going to take place in each half
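A sketch of bidirectional search using breadth-first search in each half; it assumes a single known goal state and a predecessors(state) function for the backward half (both assumptions, as the text above notes), and it returns a path of states rather than actions.

```python
from collections import deque

# Bidirectional BFS sketch: expand one layer forward, one layer backward,
# and stop as soon as the two frontiers meet.

def bidirectional_search(start, goal, successors, predecessors):
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parents, neighbours, other_parents):
        for _ in range(len(frontier)):           # expand one whole layer
            state = frontier.popleft()
            for child in neighbours(state):
                if child not in parents:
                    parents[child] = state
                    if child in other_parents:
                        return child             # the two searches meet here
                    frontier.append(child)
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parent, successors, bwd_parent)
        if meet is None:
            meet = expand(bwd_frontier, bwd_parent, predecessors, fwd_parent)
        if meet is not None:
            forward, s = [], meet                # stitch the two half-paths
            while s is not None:
                forward.append(s)
                s = fwd_parent[s]
            forward.reverse()
            s = bwd_parent[meet]
            while s is not None:
                forward.append(s)
                s = bwd_parent[s]
            return forward                       # states from start to goal
    return None
```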
Advantages:
Bidirectional search is fast.
Bidirectional search requires less memory
Disadvantages:
Implementation of the bidirectional search tree is difficult.
In bidirectional search, one should know the goal state in advance.
INFORMED SEARCH/HEURISTIC SEARCH
An informed search is more efficient than an uninformed search because, in informed search, along with the current state information, some additional information is also present, which makes it easier to reach the goal state.
Blind search – BFS, DFS, uniform cost
⚫ no notion of the “right direction”
Heuristic search
A heuristic function h(n) estimates the cost of the minimal-cost path from node n to a goal state.
h(n) >= 0 for all nodes n
h(n) = 0 implies that n is a goal node
h(n) = ∞ implies that n is a dead end from which a goal cannot be reached.
All domain knowledge used in the search is encoded in the heuristic function h.
Weak method: because of the limited way that domain-specific information is used to solve the problem.
Sort nodes in the node list by increasing values of an evaluation function f(n) that incorporates domain-specific information.
When the nodes are ordered so that the one with the best evaluation is expanded first, the result is called best-first search.
A best-first search that uses h to select the next node to expand is called greedy search.
It uses the evaluation function f(n) = h(n), sorting nodes in the node list by increasing values of f.
It selects for expansion the node that is believed to be closest to a goal node (i.e., the one with the smallest f value).
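A minimal greedy search sketch over the assumed Problem interface, with the heuristic h passed in as a function; nodes are ordered purely by f(n) = h(n).

```python
import heapq
from itertools import count

# Greedy (best-first) search: always expand the node that looks closest to a
# goal, i.e. the one with the smallest heuristic value h(n).

def greedy_best_first_search(problem, h):
    tie = count()
    frontier = [(h(problem.initial_state), next(tie), problem.initial_state, [])]
    visited = {problem.initial_state}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)   # smallest h value first
        if problem.goal_test(state):
            return path
        for action, child in problem.successors(state):
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (h(child), next(tie), child, path + [action]))
    return None
```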
Worked example: a small search graph with start node S and goal G, with heuristic values h(S)=8, h(A)=8, h(B)=4, h(C)=3 and h(G)=0 (the figure with the edge costs is omitted).
Trace of the OPEN list, ordered by h:
Expanded node: (none) | OPEN list: (S:8)
Expanded node: S | OPEN list: (C:3, B:4, A:8)
Expanded node: C | OPEN list: (G:0, B:4, A:8)
G is then selected next and the goal is reached.
Time?
⚫ O(b^m) in the worst case (like depth-first search), where m is the maximum depth of the search space
Optimal? No
Advantages:
Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
It can behave as an unguided depth-first search in the worst case scenario.
It can get stuck in a loop, like DFS.
This algorithm is not optimal.
Example for A* search
(Figure omitted: an initial state and a goal state.)
Advantages:
The A* search algorithm performs better than many other search algorithms.
A* search algorithm is optimal and complete.
This algorithm can solve very complex problems.
Disadvantages:
It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
A* search algorithm has some complexity issues.
The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
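For reference, A* orders nodes by f(n) = g(n) + h(n), the path cost so far plus the heuristic estimate to the goal. A minimal sketch over the assumed Problem interface follows; keeping every generated node in best_g and the frontier is exactly the memory cost mentioned above.

```python
import heapq
from itertools import count

# A* search: expand the frontier node with the smallest f(n) = g(n) + h(n).
# With an admissible heuristic (one that never overestimates), A* is optimal.

def a_star_search(problem, h):
    tie = count()
    start = problem.initial_state
    frontier = [(h(start), next(tie), 0.0, start, [])]   # (f, tie, g, state, path)
    best_g = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)   # smallest f value first
        if problem.goal_test(state):
            return path, g
        for action, child in problem.successors(state):
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), next(tie), g2, child, path + [action]))
    return None, float("inf")
```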
CONSTRAINT SATISFACTION PROBLEMS (CSP)
• A constraint satisfaction problem consists of three components, Variable, Domain, and
Constraints.
• Variables:
The entities to be assigned values.
Example: X,Y,Z
• Domains:
The possible values that each variable can take.
Example: X∈{1,2,3}, Y∈{1,2}, Z ∈{a,b,c}
• Constraints:
Rules that restrict the values the variables can simultaneously take.
Example: X≠Y, X+Z≤5
Cont’d…
• To solve a CSP, we need to define a state space and the notion of a solution.
• Each state in a CSP is defined by an assignment of values to some or all of the variables,
{Xi = vi, Xj = vj, . . .}.
• An assignment that does not violate any constraints is called a consistent or legal
assignment.
• A complete assignment is one in which every variable is assigned, and a solution to a
CSP is a consistent, complete assignment.
• A partial assignment is one that assigns values to only some of the variables.
Cont’d…
Types of CSPs
• Finite CSP:
Variables have finite domains (e.g., scheduling, Sudoku).
• Infinite CSP:
Variables can take values from infinite domains (e.g., real numbers in equations).
• Binary CSP:
Constraints involve pairs of variables (e.g., X<Y).
• Higher-order CSP:
Constraints involve three or more variables (e.g., X+Y+Z=10, A<B<C)
Example of a CSP
• Problem:
• Imagine we have a map with four regions: A, B, C, and D. We want to color each
region such that no two adjacent regions have the same color. We have three colors
available: Red, Green, and Blue.
• Variables: Regions (e.g., A,B,C,D).
• Domains: Colors (e.g., {Red, Green, Blue}).
• Constraints: Adjacent regions cannot have the same color (e.g., A≠B, A≠C, B≠C,
B≠D,C≠D).
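The map-coloring CSP above can be written directly as data; the encoding below (adjacency pairs plus a consistency check over a partial assignment) is one illustrative choice, not the only one.

```python
# Map-coloring CSP: variables, domains, and "not equal" constraints between
# adjacent regions, as listed above.

variables = ["A", "B", "C", "D"]
domains = {v: ["Red", "Green", "Blue"] for v in variables}
constraints = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]

def consistent(var, value, assignment):
    """True if no already-colored adjacent region has the same color."""
    for x, y in constraints:
        if var == x and assignment.get(y) == value:
            return False
        if var == y and assignment.get(x) == value:
            return False
    return True
```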
Solving CSPs
1. Backtracking Search:
• A systematic search algorithm that tries assigning values to variables and backtracks when constraints are violated (a sketch follows this list).
• Improvement Techniques:
• Forward Checking: Eliminate inconsistent values from domains of unassigned
variables.
• Constraint Propagation: Enforce constraints (e.g., Arc Consistency using AC-3
algorithm).
2. Local Search:
• Start with a random solution and iteratively improve by making small changes.
Example: Min-Conflicts heuristic for resolving violated constraints.
Cont’d…
3. Heuristics:
• Most Constrained Variable (MRV): Assign a value to the variable with the fewest
legal values.
• Least Constraining Value (LCV): Choose the value that leaves the most options for
other variables.
4. Consistency Algorithms:
• Node Consistency: Ensure individual variables satisfy unary constraints.
• Arc Consistency (AC-3): Ensure every value of one variable has a consistent value in
another.
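Below is a plain backtracking sketch matching point 1 above; forward checking, MRV, LCV and AC-3 could be layered on top but are left out to keep it short. It is written against the variables/domains/consistent encoding from the map-coloring sketch, passed in as arguments.

```python
# Backtracking search: assign variables one at a time, keep only consistent
# values, and backtrack when a variable has no consistent value left.

def backtracking_search(variables, domains, consistent, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return assignment                        # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):   # check constraints before recursing
            assignment[var] = value
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
            del assignment[var]                  # backtrack: undo the assignment
    return None
```

Called on the map-coloring data from the earlier sketch, backtracking_search(variables, domains, consistent) returns a solution such as {'A': 'Red', 'B': 'Green', 'C': 'Blue', 'D': 'Red'}.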
Applications of CSP
• Scheduling: Allocating resources to tasks subject to constraints.
• Timetabling: Assigning classes to timeslots without conflicts.
• Puzzle Solving: Sudoku, N-Queens problem.
• Configuration Problems: Designing systems subject to compatibility
constraints.
• Logical Reasoning: Modeling and solving logical inference problems.
Example: Tic-Tac-Toe
• State Space: All possible board configurations (up to 3^9 = 19,683 states, considering the 3x3 grid).
• Initial State: Empty board.
• Players: Player 1 (X) and Player 2 (O).
• Actions: Placing X or O in any empty cell.
• Transition Model: Updates the board after a move.
• Terminal States: A row, column, or diagonal is filled with the same symbol
(win), or no empty cells remain (draw).
• Utility Function:
• +1 for X wins, -1 for O wins, 0 for a draw.
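A minimal sketch of this tic-tac-toe model: a board is a tuple of nine cells ('X', 'O' or None) read row by row; the function names are illustrative.

```python
# Tic-tac-toe model: actions, transition model, terminal test and utility.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def actions(board):
    """A player may place a symbol in any empty cell."""
    return [i for i, cell in enumerate(board) if cell is None]

def result(board, move, player):
    """Transition model: the board after `player` plays at index `move`."""
    return board[:move] + (player,) + board[move + 1:]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def terminal(board):
    """A line is filled with one symbol (win) or no empty cells remain (draw)."""
    return winner(board) is not None or all(cell is not None for cell in board)

def utility(board):
    """+1 if X wins, -1 if O wins, 0 for a draw (call on terminal boards)."""
    return {"X": 1, "O": -1}.get(winner(board), 0)
```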
• Real-World Applications
• AI in Board Games: Chess engines like Stockfish, Go AI like AlphaGo.
• Video Games: AI controlling enemies or NPCs in real-time strategy or role-playing games.
• Decision-Making: Autonomous agents navigating adversarial environments.