
College of Computing and Informatics

Department of Software Engineering


Fundamentals of Artificial Intelligence

CHAPTER 3:
Solving Problems by Searching and
Constraint Satisfaction Problem
Compiled by Aliazar D.
BSc & MSc in Software Engineering
[email protected]
What is a problem in the context of AI?
Problem
• It is a collection of information that the agent will use to
decide what to do.
• Problems have the general form:
 Given such-and-such information, find x
• A huge variety of problems are addressed in AI, both:
 Toy problems
• e.g. 8-puzzle, vacuum cleaner world, …
 Real-life problems
• e.g. route finding, the traveling salesperson problem (TSP): "Given a list of
cities and the distances between each pair of cities, what is the shortest
possible route that visits each city exactly once and returns to the origin city?"
• Route finding is the ability to perceive spatial relations between objects and
navigate between said objects, literally or within a written map-like scenario.
Example

1. Vacuum cleaner world problem


 A vacuum-cleaner agent operates in a simple world with only
two locations to be cleaned.
 Each location may or may not contain dirt
 The agent may be in one location or the other
 The agent can sense
 Location: whether it is in room A or B
 Status: whether the room is Clean or Dirty

 The agent is expected to clean all the dirt in the given locations
Solving a problem
• Formalize the problem: Identify the collection of information that the
agent will use to decide what to do.
• Define states.
 States describe distinguishable stages during the problem-solving process.
 Example:- What are the various states in route finding problem?
 The various cities including the location of the agent.
• Define the available operators/rules for getting from one state to the
next.
 Operators cause an action that brings transitions from one state to
another by applying on a current state.
• Suggest a suitable representation for the problem space/state space.
 Graph, table, list, set, … or a combination of them.
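The steps above can be sketched in code. A minimal illustration, using a small hypothetical road map (the city names "A"–"D" and the distances are invented for the example, not taken from any slide):

```python
# A minimal sketch of problem formalization.
# States: cities. Representation: a graph stored as a dict of dicts.
road_map = {
    "A": {"B": 100, "C": 150},
    "B": {"A": 100, "D": 200},
    "C": {"A": 150, "D": 120},
    "D": {"B": 200, "C": 120},
}

initial_state = "A"
goal_state = "D"

def operators(state):
    """Operators: drive(state, neighbor) for each road out of `state`."""
    return list(road_map[state])

print(operators("A"))  # the states reachable from A in one action
```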
State space of the problem
• The state space is the set of all relevant states reachable from
the initial state by any sequence of actions, i.e. through iterative
application of the operators,
• State space of the problem includes the various states
 Initial state: defines where the agent starts in or begins with
 Goal state: describes the situation we want to achieve
 Transition states: other states in between initial and goal states
• AKA search space/problem space and can be represented in a
tree/graph.
 Example – Find the state space for route finding problem
• Think of the states reachable from the initial state.
State Space for Route Finding
[Figure: two road maps with inter-city distances — a Romania map, and an Ethiopia map
covering Aksum, Mekele, Gondar, Lalibela, Bahr Dar, Dessie, Debre Markos, Dire Dawa,
Addis Ababa, Adama, Jima, Gambela, Nekemt and Awasa]
Steps in problem solving

• Goal formulation
 is a step that specifies exactly what the agent is trying to achieve;
 this step narrows down the state space that the agent has to look at.
• Problem formulation
 is a step that puts down the actions and states that the agent has to consider given a goal (avoiding
any redundant states), like:
 The initial state
 The allowable actions etc…
• Search
 is the process of looking for the various sequence of actions that lead to a goal state, evaluating
them and choosing the optimal sequence.
• Execute
 is the final step that the agent executes the chosen sequence of actions to get it to the solution/goal
Example: Route Finding*
 Scenario: On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest.
After that, no seats are available for the next six
weeks; the ticket is non-refundable, the visa is about to expire, …
• Formalize problem:
 Define the possible states involved in solving
the problem
• Formulate goal:
 Be in Bucharest
• Formulate problem:
 Initial state: in Arad
 States: various cities
 Operators/actions: driving between cities
• Find solution:
 Optimal sequence of cities, e.g., Arad, Sibiu,
Fagaras, Bucharest
Activity!
 Scenario: Behailu was in Aksum city and he wanted to go to
Awasa for a holiday.
 Find a short route to drive to Awasa.
[Figure: the Ethiopia road map with inter-city distances]
 Formalize problem:
 Formulate goal:
 Formulate problem:
 Initial state:
 States:
 Operators/actions:
 Find solution:
The 8 puzzle problem*
• Arrange the tiles so that all the tiles are in the
correct positions. You do this by moving tiles.
• You can move a tile up, down, left, or right,
so long as the following conditions are met:
A. There's no other tile blocking you in the
direction of the movement; and
B. You're not trying to move outside of the
boundaries/edges.
Example: Vacuum world problem

 To simplify the problem (rather than the full version), let;


• The world has only two locations
 Each location may or may not contain dirt
 The agent may be in one location or the other
• Eight possible world states (on next slide)
• Three possible actions (Left, Right, Suck)
 The Suck operator cleans the dirt
 The Left and Right operators move the agent from location to location
• Goal: to clean up all the dirt
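The formulation above is small enough to write out. A sketch of the transition model, where a state is the triple (agent location, dirt in A, dirt in B), giving the 2 × 2 × 2 = 8 states:

```python
# Sketch of the two-location vacuum world.
# A state is (agent_location, dirt_in_A, dirt_in_B).

def result(state, action):
    """Return the state reached by applying `action` in `state`."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        if loc == "A":
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    raise ValueError(action)

def is_goal(state):
    """Goal: no dirt in either location."""
    return not state[1] and not state[2]

# Agent in A, both squares dirty; run [Suck, Right, Suck]:
s = ("A", True, True)
for a in ["Suck", "Right", "Suck"]:
    s = result(s, a)
print(s, is_goal(s))  # ('B', False, False) True
```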
Clean House Task
[Figure: the eight possible states of the two-location vacuum world]
Vacuum Cleaner State Space
[Figure: the vacuum world state space with Left/Right/Suck transitions]
Knowledge and types of problems

• There are four types of problems:


 Single state problems (Type 1)
 Multiple state problems (Type 2)
 Exploration problems (Type 3)
 Contingency Problems (Type 4)
• This classification is based on the level of knowledge that an
agent can have concerning its action and the state of the world
• Thus, the first step in formulating a problem for an agent is to
see what knowledge it has concerning
 The effects of its action on the environment
 Accessibility of the state of the world
• This knowledge depends on how it is connected to the
environment via its percepts and actions
Single state problem

• Accessible: The world is accessible to the agent


 It can determine its exact state through its sensors
 The agent’s sensor knows which state it is in
• Deterministic: The agent knows exactly the effect of its actions
 It can then calculate exactly which state it will be in after any sequence of
actions
• Action sequence is completely planned
 Example - Vacuum cleaner world
 What will happen if the agent is initially at state = 5 and formulates the
action sequence [Right, Suck]?
 The agent calculates and knows that it will get to a goal state
• Right  {6}
• Suck  {8}
Multiple state problems
• Deterministic: The agent knows exactly what
each of its actions do
 It can then calculate which state it will
be in after any sequence of actions
• Inaccessible: The agent has limited access to
the world state
 It might not have sensors to get full
access to the environment states or as an
extreme, it can have no sensors at all
(due to lack of percepts)
• If the agent has full knowledge of how its actions change the world, but does
not know the state of the world, it can still solve the task
Example - Vacuum cleaner world
• Agent’s initial state is one of the 8 states: {1,2,3,4,5,6,7,8}
• Action sequence: [Right, Suck, Left, Suck]
• Because the agent knows what its actions do, it can discover and reach the goal state.
Contingency Problems
• Non-deterministic
 The agent is ignorant of the effect of its actions Murphy's law states that “Anything
that can go wrong will go wrong”.
• Inaccessible
 The agent has limited access to the world state
• Sometimes ignorance prevents the agent from
finding a guaranteed solution sequence.
• Suppose the agent is in Murphy’s law world
 The agent has to sense during the execution phase, since things might have changed while it
was carrying out an action. This implies that the agent has to compute a tree of actions, rather than
a linear sequence of action
 Example - Vacuum cleaner world:
• The action ‘Suck’ deposits dirt on the carpet, but only if there is no dirt there
already: it deposits dirt rather than removing it.
Conti..
• Example - Vacuum cleaner world
 What will happen given initial state {1,3}, and action sequence:
[Suck, Right, Suck]?
 {1,3}  {5,7}  {6,8}  {6,8} (failure)
• Is there a way to solve this problem?
 Thus, solving this problem requires local sensing, i.e. sensing during the
execution phase,
 Start from one of the states {1,3}, and take improved action
sequence [Suck, Right, Suck (only if there is dirt there)]
• Many problems in the real world are contingency problems (exact
prediction is impossible)
 For this reason many people keep their eyes open while walking
around or driving.
Exploration problem
• The agent has no knowledge of the environment
 World Inaccessible: No knowledge of states (environment)
 Unknown state space (no map, no sensor)
 Non-deterministic: No knowledge of the effects of its actions
 Problem faced by (intelligent) agents (like, newborn babies)
• This is a kind of problem in the real world rather than in a model,
which may involve significant danger for an ignorant agent. If the
agent survives, it learns about the environment
• The agent must experiment, learn and build the model of the
environment through its results, gradually, discovering
 What sort of states exist and what its actions do
 Then it can use these to solve subsequent (future) problems
 Example: In solving the vacuum cleaner world problem, the agent learns
the state space and the effects of its action sequences, say [Suck, Right]
Well-defined problems and solutions

 To define a problem, we need the following


elements
 The Initial state: is the state that the
agent starts in or begins with.
 Example- the initial state for each of the
following:
 Route finding problem
 Arad (states are the various cities)
 Coloring problem
Operators
• The set of possible actions available to the agent, i.e.
 Which state(s) will be reached by carrying out the action in a
particular state
 A Successor function S(x)
• Is a function that returns the set of states that are reachable from a single
state by any single action/operator
• Given state x, S(x) returns the set of states reachable from x by any single
action.
• Example:
 Route finding problem: drive between cities, e.g. drive(cityX, cityY)
 Coloring problem: paint an object a color, e.g. paint(object, color)
Goal test function
• A test the agent executes to determine whether it has reached the goal state
or not
 Is a function which determines if a single state is a goal state

• Example:
 Route finding problem:
 Reach Bucharest airport on time: IsCity(x, Bucharest)
 Coloring problem:
 All circles green: IsGreen(x, y, z)
Path cost function
• A function that assigns a cost to a path (sequence of
actions).
 Is often denoted by g. Usually, it is the sum of the costs of the
individual actions along the path (from one state to another
state)
 It lets us compare sequences of actions in the state space; for example, we
may prefer paths with fewer or less costly actions
• Example:
 Route finding problem:
• Path cost from initial to goal state
 Coloring problem:
Example problems
 We can group well-defined problems into two:
• Toy Problems
 Are problems that are useful to test and demonstrate methodologies
 Concise and exact description (abstract version) of a real problem
 Can be used by researchers to compare the performance
of different algorithms
 Example:
 The vacuum cleaner world, the 8-puzzle, the 8-queens problem

• Real-World Problems
 More difficult and complex to solve, and there is no single agreed-
upon description
 Have much greater commercial/economic impact if solved
 Example:
The vacuum world
• Problem formulation
 State : one or both of the eight states shown in the Figure
 Operators: move left, move right, suck
 Goal test: no dirt left in any square
 Path cost: each action costs 1

The 8-puzzle
• Problem formulation
 State: the location of each of the eight tiles in one of the nine squares
 Operators: blank space moves left, right, up, or down
 Goal test: state matches the right figure (goal state)
 Path cost: each step costs 1, since we are moving one step each time.

   Initial state      Goal state
   5 . 4              1 2 3
   6 1 8              8 . 4
   7 3 2              7 6 5
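The operators above can be sketched as a successor function: represent a state as a 9-tuple in row-major order, with 0 standing for the blank square (this encoding is an assumption of the sketch, not something fixed by the slides):

```python
# Sketch of the 8-puzzle operators: the blank (0) moves
# left, right, up, or down within the 3x3 board.

def successors(state):
    """state: tuple of 9 ints, row-major, 0 = blank.
    Returns the states reachable by one blank move."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(i - 1)   # blank moves left
    if col < 2: moves.append(i + 1)   # blank moves right
    if row > 0: moves.append(i - 3)   # blank moves up
    if row < 2: moves.append(i + 3)   # blank moves down
    result = []
    for j in moves:
        s = list(state)
        s[i], s[j] = s[j], s[i]       # swap blank with the tile
        result.append(tuple(s))
    return result

goal = (1, 2, 3, 8, 0, 4, 7, 6, 5)    # the goal board from the slide
print(len(successors(goal)))          # 4: a central blank has 4 moves
```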
Activity!

Jealous Husbands Problem
 There are three married couples who must cross a river using a
boat which can carry at most two people, with the
constraint that no woman can be in the presence of
another man unless her husband is also present.
 Under this constraint, there cannot be both women and
men present on a bank with
women outnumbering men, since
if there were, these women would
be without their husbands.
 The boat cannot cross the river by
itself with no people on board
Missionary-and-cannibal problem:
 Three missionaries and three cannibals are on one side of a river that
they wish to cross. There is a boat that can hold one or two people.
 Find an action sequence that brings everyone safely to the opposite
bank (i.e. Cross the river).
 But you must never leave a group of missionaries outnumbered by
cannibals on the same bank (in any place).
 The boat cannot cross the river by itself with no people on board.
And, in some variations, one of the cannibals has only one arm and
cannot row.
1) Identify the set of states and operators
2) Show using suitable representation the state space
of the problem
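As one possible sketch for question 1 (not the only valid formulation): take a state to be (missionaries on the start bank, cannibals on the start bank, boat position), with operators that ferry one or two people. A breadth-first sweep of this state space then enumerates the reachable states for question 2; the shortest solution takes 11 crossings:

```python
from collections import deque

# Sketch: missionaries-and-cannibals state space.
# State = (m, c, boat): missionaries/cannibals on the start bank,
# boat = 0 if the boat is on the start bank, 1 otherwise.

def safe(m, c):
    """No bank may have missionaries outnumbered by cannibals."""
    left_ok = m == 0 or m >= c
    right_ok = (3 - m) == 0 or (3 - m) >= (3 - c)
    return 0 <= m <= 3 and 0 <= c <= 3 and left_ok and right_ok

def successors(state):
    m, c, boat = state
    sign = -1 if boat == 0 else 1    # boat carries people away or back
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        nm, nc = m + sign * dm, c + sign * dc
        if safe(nm, nc):
            yield (nm, nc, 1 - boat)

def solve(start=(3, 3, 0), goal=(0, 0, 1)):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in successors(s):
            if n not in parent:
                parent[n] = s
                frontier.append(n)

path = solve()
print(len(path) - 1)  # 11 river crossings
```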
Monkey and Banana Problem

Problem Statement
 Suppose the problem is as given below:
• A hungry monkey is in a room, near
the door.
• The monkey is on the floor.
• Bananas hang from the
center of the ceiling of the room.
• There is a block (or chair) in
the room near the window.
• The monkey wants the bananas but
cannot reach them.
So how can the monkey get the bananas?
Monkey and Banana Problem Solutions
• If the monkey is clever enough, it can go to the block,
drag the block to the center, climb on it, and get the
bananas. Below are a few observations in this case:
• The monkey can reach the block if both of them are at the
same level. From the image, we can see that both the
monkey and the block are on the floor.
• If the block is not at the center, the monkey can
drag it to the center.
• If the monkey and the block are both on the floor, and the block is
at the center, then the monkey can climb up on the block,
changing the monkey's vertical position.
• When the monkey is on the block and the block is at the
center, the monkey can get the bananas.
More AI Problems and Solutions….
Uninformed Search
Examples of Search Problems

• Route finding : Search through set of paths


 Looking for one which will minimize distance
• Machine learning: Search through a set of concepts
 Looking for a concept which achieves target categorisation
• Chess: search through set of possible moves
 Looking for one which will best improve position
• Theorem proving: Search through sets of reasoning steps
 Looking for a reasoning progression which proves theorem
• Missionaries and Cannibals: Search through the set of possible river
crossings

Problem Spaces
• For the given problem we have to identify:
 A set of states
 Special states: initial state(s), goal state(s)
 Successor function (also called operators)
• In combination, these create problem space or a
graph which can be used for searching
• What else might we know about a Problem?
 Constraints on operators
 Costs incurred by applying operators

Search
What is Search in the context of AI?
• It is about examining different possible sequences of actions and states, and coming up
with a specific sequence of operators/actions that will take you from the initial state
to the goal state
 Given an initial state and a goal, find the sequence of actions leading through a sequence of
states to the final goal state

• During searching, the agent must use:

 All available knowledge about the problem, to avoid unnecessary moves while searching for the
optimal solution
 Sometimes, the agent also uses heuristic knowledge (knowledge that is true from experience)
Where to Search?
• State space
 For a well-defined problem we know states, operators (actions), goal
test function and cost function.
 Then we construct the state space that contains a set of states.
 State space is the environment in which the search takes place.
 To find the optimal solution to our problem, there is a need to search
through the search space.

• Searching State Space

 Choose a node/state in the state space,
 Test the state to know whether we have reached the goal state or not,
 If not, expand the node further to identify its successors.
Search Tree

The searching process is like building the search tree that is super imposed over the state space
 A search tree is a representation in which nodes denote paths and branches connect paths. The node with no
parent is the root node. The nodes with no children are called leaf nodes.
Example: Route finding Problem
Partial search tree for route finding from Sidist Kilo to Stadium.
(a) The initial state: SidistKilo (apply the goal test)
(b) After expanding SidistKilo, generating new states: AratKilo, Giorgis, ShiroMeda
(c) After choosing one option and expanding AratKilo: MeskelSquare, Piassa, Megenagna


Search algorithm
Two functions needed for conducting search
 Generator (or successors) function: Given a state and action, produces its successor states (in a state space)
 Tester (or IsGoal) function: Tells whether given state S is a goal state IsGoal(S)  True/False
 IsGoal and Successors functions depend on problem domain.
Two lists maintained during searching
OPEN list/frontier: stores the nodes we have seen but not explored
CLOSED list/explored list: the nodes we have seen and explored
Generally, search proceeds by examining each node on the OPEN list, performing some expansion operation that
adds its children to the OPEN list, & moving the node to the CLOSED list.
Merge function: Given successor nodes, it appends, prepends, or orders them based on evaluation cost
Path cost: function assigning a numeric cost to each path; either from initial node to current node
and/or from current node to goal node

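The OPEN/CLOSED bookkeeping described above can be written as one generic loop, with the Merge function supplied as a parameter. A sketch; the graph and names are illustrative:

```python
# Generic search skeleton: OPEN holds seen-but-unexplored states,
# CLOSED holds explored ones; `merge` fixes the strategy.

def search(start, is_goal, successors, merge):
    open_list = [start]
    closed = set()
    while open_list:
        state = open_list.pop(0)           # examine the next OPEN node
        if is_goal(state):
            return state
        if state in closed:
            continue
        closed.add(state)
        new = [s for s in successors(state) if s not in closed]
        open_list = merge(open_list, new)  # append / prepend / order
    return None

# Hypothetical toy graph for illustration:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
bfs_merge = lambda open_list, new: open_list + new   # append  -> FIFO
dfs_merge = lambda open_list, new: new + open_list   # prepend -> LIFO
print(search("A", lambda s: s == "D", graph.get, bfs_merge))  # D
```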
Infrastructure for search algorithms
• Search algorithms require a data structure to keep track of the
search tree that is being constructed.
 For each node n of the tree, we have a structure that
contains four components:
 n.STATE: the state in the state space to which the node
corresponds;
 n.PARENT: the node in the search tree that generated this node;
 n.ACTION: the action that was applied to the parent to generate
the node;
 n.PATH-COST: the cost, traditionally denoted by g(n), of the
path from the initial state to the node, as indicated by the parent
pointers.

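A sketch of this four-component node structure, with the usual walk back through the PARENT pointers to recover the solution path (the Arad–Sibiu–Fagaras step costs are used for illustration):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the search-tree node described above.
@dataclass
class Node:
    state: object
    parent: Optional["Node"] = None    # n.PARENT
    action: Optional[str] = None       # n.ACTION
    path_cost: float = 0.0             # n.PATH-COST, i.e. g(n)

def child(node, state, action, step_cost):
    """Build the child node reached from `node` by `action`."""
    return Node(state, node, action, node.path_cost + step_cost)

def solution(node):
    """Follow PARENT pointers back to the root to recover the path."""
    states = []
    while node is not None:
        states.append(node.state)
        node = node.parent
    return states[::-1]

root = Node("Arad")
n1 = child(root, "Sibiu", "drive", 140)
n2 = child(n1, "Fagaras", "drive", 99)
print(solution(n2), n2.path_cost)  # ['Arad', 'Sibiu', 'Fagaras'] 239.0
```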
Conti..

• The frontier needs to be stored in such a way that the search algorithm can easily choose the next
node to expand according to its preferred strategy.
• The operations on a queue are as follows:
 EMPTY?( queue) returns true only if there are no more elements in the queue.
 Pop(queue) removes the first element of the queue and returns it.
 INSERT(element, queue) inserts an element and returns the resulting queue.
 A queue may be
• LIFO (stack): which pops the newest element of the queue
• FIFO: which pops the oldest element of the queue
• PRIORITY: which pops the element of the queue with the highest priority according to some
ordering function.
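The three queue disciplines map directly onto the Python standard library, as a quick illustration:

```python
from collections import deque
import heapq

# FIFO: pop the oldest element (breadth-first behaviour).
fifo = deque()
fifo.append("a"); fifo.append("b"); fifo.append("c")
print(fifo.popleft())   # a

# LIFO (stack): pop the newest element (depth-first behaviour).
lifo = []
lifo.append("a"); lifo.append("b"); lifo.append("c")
print(lifo.pop())       # c

# PRIORITY: pop the element with the best value of the ordering
# function (here: lowest cost first).
pq = []
heapq.heappush(pq, (5, "b")); heapq.heappush(pq, (1, "a"))
heapq.heappush(pq, (3, "c"))
print(heapq.heappop(pq))  # (1, 'a')
```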
Algorithm Evaluation: Completeness and
Optimality

 Does the search strategy claim to find solutions for problems that have none?
 Does it produce incorrect solutions?
 Completeness
 Does the algorithm guarantee finding a solution whenever one exists?
 Think about the density of solutions in the space and evaluate whether the
searching technique is guaranteed to find all solutions or not.
 Optimality
 Does the algorithm find an optimal solution, i.e. the one with minimum
cost?
How good is our solution?
Algorithm Evaluation: Time & Space Tradeoffs

With many computing projects, we worry about:


Speed versus memory
Time complexity: how long does it take to find a solution
Space complexity: how much space is used by the algorithm
Fast programs can be written
 But they use up too much memory
Memory efficient programs can be written
 But they are slow
We consider various search strategies
In terms of their memory/speed tradeoffs
Searching for Solutions

 Example 1: 8-puzzle: Given 8 tiles and a blank space, in a square:


Show the state space diagram that generate new arrangement till
the goal state is reached
 Empty space can move up, down, left, right; Cannot move diagonally
   Initial state      Goal state
   7 2 4              1 2 3
   5 8 6              8 . 4
   . 3 1              7 6 5
Can you try it?
8-puzzle Search Tree
[Figure: the initial state (7 2 4 / 5 8 6 / . 3 1) at the root; level 1 shows the
states reached by one blank move, level 2 their successors]
Searching Strategies: classification
• Search strategy gives the order in which the search space is examined
• Uninformed (= blind) search
 AKA weak search methods; the most general are brute-force, since they have no
domain knowledge to guide them in the right direction towards the goal, and
instead use a trial-and-error approach to systematically guess the solution
 Have no information about the number of steps or the path cost from the current
state to the goal
 Important for problems for which there is no additional information to consider
• Informed (= heuristic) search
 Has problem-specific knowledge (knowledge that is true from experience)
 Has knowledge about how far the various states are from the goal
 Can find solutions more efficiently than uninformed search
Search Methods:
• Uninformed search
 Breadth first
 Depth first
 Uniform cost, …
 Depth limited search
 Iterative deepening
 etc.
• Informed search
 Greedy search
 A*-search
 Iterative improvement,
 Constraint satisfaction
 etc.
Breadth first search
• The Breadth-First Search (BFS) is used to search a tree or graph data
structure for a node that meets a set of criteria.
• It starts at the tree’s root or graph and searches/visits all nodes at the current
depth level before moving on to the nodes at the next depth level.
• Expand shallowest unexpanded node,
 i.e. expand all nodes on a given level of the search tree before moving to the next
level
• Implementation: use queue (enqueue and dequeue) data structure to store
the list:
 Expansion: put successors at the end of queue
 Pop nodes from the front of the queue
• Properties:
 Takes space: keeps every node in memory
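A minimal BFS sketch following the enqueue/dequeue description above; the graph is a hypothetical example, not one taken from the slides:

```python
from collections import deque

# Breadth-first search: expand the shallowest unexpanded node.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # dequeue from the front
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])   # enqueue at the back
    return None

graph = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G"],
         "D": [], "E": [], "F": [], "G": []}
print(bfs(graph, "A", "G"))  # ['A', 'C', 'G']
```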
Breadth-First Search (BFS)
[Figure: an example search tree; root A with children B, C, D; their children E-J;
then K-R; leaves S, T, U. The initial state is A.]
Trace of the search (FIFO fringe | visited):
A | -
B,C,D | A
C,D,E,F | A, B
D,E,F,G,H | A, B, C
E,F,G,H,I,J | A, B, C, D
F,G,H,I,J,K,L | A, B, C, D, E
G,H,I,J,K,L,M | A, B, C, D, E, F
H,I,J,K,L,M,N | A, B, C, D, E, F, G
I,J,K,L,M,N,O | A, B, C, D, E, F, G, H
J,K,L,M,N,O,P,Q | A, B, C, D, E, F, G, H, I
K,L,M,N,O,P,Q,R | A, B, C, D, E, F, G, H, I, J
L,M,N,O,P,Q,R,S | A, B, C, D, E, F, G, H, I, J, K
M,N,O,P,Q,R,S,T | A, B, C, D, E, F, G, H, I, J, K, L
N,O,P,Q,R,S,T | A, B, C, D, E, F, G, H, I, J, K, L, M (goal state achieved)
Activity 1!
The queue evolves like this

A
B C D
C D E F
D E F G H I
E F G H I J
F G H I J K L
G H I J K L M
H I J K L M
I J K L M N O
J K L M N O
K L M N O P Q
L M N O P Q
M N O P Q
N O P Q
O P Q
P Q R S T
Q R S T
R S T U
S T U
T U V
U V
V

Conti..
 Web Crawling: BFS indexing web pages. The algorithm starts
traversing from the source page and follows all the links
associated with the page.
 GPS Navigation systems: BFS is one of the best algorithms used
to find neighboring locations by using the GPS system.
 Find the Shortest Path & Minimum Spanning Tree for an
unweighted graph: BFS can allow this by traversing a minimum
number of nodes starting from the source node.
 Broadcasting: Networks use what we call packets
for communication. These packets follow a traversal method to
reach the various network nodes. BFS is used to
broadcast packets across all the nodes in a network.
 Peer-to-Peer Networking: BFS can be used as a traversal method
to find all the neighboring nodes in a peer-to-peer network. For
example, BitTorrent uses breadth-first search for peer-to-peer
communication.
Uniform Cost Search
• So far in BFS we’ve ignored the issue of costs.
• The goal of this technique is to find the shortest path to the goal in terms of cost.
 It modifies the BFS by always expanding least-cost unexpanded node
• Implementation: nodes in list keep track of total path length from start to that node
 List kept in priority queue ordered by path cost
[Figure: a state space with start S and goal G via intermediate nodes A, B, C;
edge costs S-A = 1, S-B = 5, S-C = 15, A-G = 10, B-G = 5, C-G = 5]
A route-finding problem, (a) The state space, showing the cost for each operator, (b) Progression of the search. Each
node is labeled with g(n). At the next step, the goal node with g = 10 will be selected.
• Properties:
 This strategy finds the cheapest solution provided the cost of a path must never decrease as we
go along the path g(successor(n)) ≥ g(n), for every node n
 Takes space since it keeps every node in memory
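A sketch of uniform-cost search with a priority queue, run on the small S/A/B/C/G state space above:

```python
import heapq

# Uniform-cost search: always expand the frontier node with the
# smallest path cost g(n).
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # (g, state, path), a min-heap
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in graph.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

# The state space from the figure above:
graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)], "C": [("G", 5)]}
print(ucs(graph, "S", "G"))  # (10, ['S', 'B', 'G'])
```

Note that G is first pushed with g = 11 (via A) but popped with g = 10 (via B), matching the figure's caption.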
Conti..

Initialization: { [ S , 0 ] }
Iteration1: { [ S->A , 1 ] , [ S->G , 12 ] }
Iteration2: { [ S->A->C , 2 ] , [ S->A->B , 4 ] , [ S->G , 12] }
Iteration3: { [ S->A->C->D , 3 ] , [ S->A->B , 4 ] , [ S->A->C->G , 4 ] , [ S->G , 12 ] }
Iteration4: { [ S->A->B , 4 ] , [ S->A->C->G , 4 ] , [ S->A->C->D->G , 6 ] , [ S->G , 12 ] }
Iteration5: { [ S->A->C->G , 4 ] , [ S->A->C->D->G , 6 ] , [ S->A->B->D , 7 ] , [ S->G , 12 ] }
Iteration6 gives the final output as S->A->C->G.
Activity!

Depth-first search
• Expand one of the node at the deepest level of the tree.
 Only when the search hits a non-goal dead end does the search
go back and expand nodes at shallower levels
• Implementation: treat the list as stack
 Expansion: push successors at the top of stack
 Pop nodes from the top of the stack
• Properties
 Incomplete and not optimal: fails in infinite-depth spaces and in
spaces with loops.
• Modify to avoid repeated states along the path
 Takes less space (linear): only needs to remember the nodes on the
current path, plus their unexpanded siblings
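A minimal DFS sketch using a stack of paths, per the description above (illustrative graph):

```python
# Depth-first search: expand the deepest node; the frontier is a stack.
def dfs(graph, start, goal):
    frontier = [[start]]                 # stack of paths
    visited = set()
    while frontier:
        path = frontier.pop()            # pop from the top
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        # push successors; reversed so the first successor ends on top
        for nxt in reversed(graph.get(state, [])):
            if nxt not in visited:
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}
print(dfs(graph, "A", "E"))  # ['A', 'B', 'E']
```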
Depth-First Search (DFS)
[Figure: the same example tree, searched depth-first with a LIFO fringe]
Trace of the search (LIFO fringe | visited):
A | -
B,C,D | A
E,F,C,D | A, B
K,L,F,C,D | A, B, E
S,L,F,C,D | A, B, E, K
L,F,C,D | A, B, E, K, S (backtracking)
T,F,C,D | A, B, E, K, S, L
F,C,D | A, B, E, K, S, L, T (backtracking)
M,C,D | A, B, E, K, S, L, T, F
C,D | A, B, E, K, S, L, T, F, M (backtracking)
G,H,D | A, B, E, K, S, L, T, F, M, C
N,H,D | A, B, E, K, S, L, T, F, M, C (goal state achieved; search finished)
Activity!

For the search tree above:

A
B C D
E F C D
K L F C D
L F C D
F C D
M C D
C D
G H I D
H I D
N O I D
O I D
R S T I D
S T I D
V T I D
T I D
I D
D
J
P Q
Q
U
Depth-Limited Strategy
• Depth-first with depth cutoff k (maximal
depth below which nodes are not expanded)

• Three possible outcomes:


Solution
Failure (no solution)
Cutoff (no solution within cutoff)
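The three outcomes can be made explicit in a recursive sketch (solution, failure, cutoff); the CUTOFF sentinel value is an implementation choice of this illustration:

```python
CUTOFF = "cutoff"   # sentinel: no solution found within the limit

# Depth-limited search: depth-first, but nodes below depth k
# are not expanded.
def dls(graph, state, goal, limit):
    if state == goal:
        return [state]                    # outcome 1: solution
    if limit == 0:
        return CUTOFF                     # outcome 3: cutoff
    cutoff_seen = False
    for nxt in graph.get(state, []):
        result = dls(graph, nxt, goal, limit - 1)
        if result == CUTOFF:
            cutoff_seen = True
        elif result is not None:
            return [state] + result
    return CUTOFF if cutoff_seen else None  # None = outcome 2: failure

graph = {"S": ["A", "B"], "A": ["C"], "B": [], "C": ["G"], "G": []}
print(dls(graph, "S", "G", 1))  # 'cutoff' (G lies below depth 1)
print(dls(graph, "S", "G", 3))  # ['S', 'A', 'C', 'G']
```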
Conti..
• It is depth-first search
with a predefined maximum depth
However, it is usually not easy to choose a suitable
maximum depth
too small  no solution can be found
too large  the same problems as plain depth-first search
• Provided the solution lies within the limit, the search is
complete
but still not optimal
Depth-limited Search
[Figure: an example tree with start S, searched depth-first with depth limit = 3;
leaf labels show accumulated path costs]
Example
Iterative Deepening Search(IDS)
• IDS solves the issue of choosing the best depth limit by trying all possible depth limits:
 Perform depth-first search to a bounded depth d, starting at d = 1 and increasing it by 1 at each
iteration.

• This search combines the benefits of DFS and BFS


 DFS is efficient in space, but has no path-length guarantee
 BFS finds min-step path towards the goal, but requires memory space
 IDS performs a sequence of DFS searches with increasing depth-cutoff until goal is found

Limit=0 Limit=1 Limit=2

91
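The "sequence of DFS searches with increasing depth cutoff" can be sketched as follows. The tree and goal are illustrative assumptions:

```python
def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...
    until a solution is found, or until a run reports 'failure'
    (whole tree explored, so deeper limits cannot help)."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return "cutoff"
        cutoff = False
        for child in graph.get(node, []):
            result = dls(child, limit - 1)
            if result == "cutoff":
                cutoff = True
            elif result != "failure":
                return [node] + result
        return "cutoff" if cutoff else "failure"

    for limit in range(max_depth + 1):
        result = dls(start, limit)
        if result == "failure":
            return None          # no goal anywhere in the tree
        if result != "cutoff":
            return result        # shallowest solution found

# hypothetical tree: G is reachable at depth 2 via A and depth 3 via D
tree = {"S": ["A", "D"], "A": ["B", "G"], "D": ["E"], "E": ["G"]}
print(ids(tree, "S", "G"))  # ['S', 'A', 'G'] — the shallowest path wins
```

Because each iteration restarts from scratch, IDS re-expands shallow nodes, but for branching factor b > 1 the repeated work is a small constant factor over a single DFS to the goal depth.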
Example
(figure: iterative deepening trace, not reproduced)
Activity!
(figure: practice tree, not reproduced)
Bidirectional Search
• Simultaneously search both forward from the initial state to the goal and backward from the goal to the initial state, and stop when the two searches meet somewhere in the middle.
- Requires an explicit goal state and invertible operators (or backward chaining).
- Decide what kind of search is going to take place in each half: BFS, DFS, uniform cost search, etc.
(figure: two frontiers growing from Start and Goal until they meet)
Example
Intersection at: 9
*****Path*****
14 → 8 → 9 → 10 → 12 → 16
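A bidirectional BFS of the kind that produced the output above can be sketched as follows. The graph here is an assumed chain of numbered nodes (the slides' actual graphs are not reproduced), so the labels and the printed path are illustrative:

```python
from collections import deque

def bidirectional_bfs(adj, start, goal):
    """BFS from both ends of an undirected graph; stop when the
    frontiers meet, then splice the two half-paths together."""
    if start == goal:
        return [start]
    # parent maps double as visited sets for each direction
    parents_f, parents_b = {start: None}, {goal: None}
    qf, qb = deque([start]), deque([goal])

    def expand(queue, parents, other):
        node = queue.popleft()
        for nb in adj.get(node, []):
            if nb not in parents:
                parents[nb] = node
                queue.append(nb)
                if nb in other:      # frontiers intersect at nb
                    return nb
        return None

    while qf and qb:
        meet = expand(qf, parents_f, parents_b) or expand(qb, parents_b, parents_f)
        if meet is not None:
            # walk back to start, then forward to goal
            path, n = [], meet
            while n is not None:
                path.append(n); n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n); n = parents_b[n]
            return path
    return None

# hypothetical chain graph, not the slide's figure
adj = {0: [4], 4: [0, 6], 6: [4, 7], 7: [6, 8], 8: [7, 10], 10: [8, 14], 14: [10]}
print(bidirectional_bfs(adj, 0, 14))  # [0, 4, 6, 7, 8, 10, 14]
```

Each frontier only needs to reach depth d/2, which is where the O(b^(d/2)) complexity in the comparison table comes from.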
Activity!
Intersection at: 7
*****Path*****
0 → 4 → 6 → 7 → 8 → 10 → 14
…. More
• The advantages, disadvantages, and real-world applications of
- BFS,
- UCS,
- DFS,
- DLS,
- IDS, and
- Bidirectional Search
are a reading assignment!
• Questions from these topics may appear in the exam.
Comparing Uninformed Search Strategies

Strategy                    Complete  Optimal  Time        Space
Breadth-first search        yes       yes      O(b^d)      O(b^d)
Depth-first search          no        no       O(b^m)      O(bm)
Uniform-cost search         yes       yes      O(b^d)      O(b^d)
Iterative deepening search  yes       yes      O(b^d)      O(bd)
Bi-directional search       yes       yes      O(b^(d/2))  O(b^(d/2))

• b is the branching factor,
• d is the depth of the shallowest solution,
• m is the maximum depth of the search tree,
• l is the depth limit
Quiz!
(figure: search graph, not reproduced)
Find the nodes expanded by:
1. Breadth-First Search?
2. Depth-First Search?
3. Uniform-Cost Search?
4. Iterative-Deepening Search?
Solutions!
Find the nodes expanded and the solution path:
1. Breadth-First Search?
2. Depth-First Search?
3. Uniform-Cost Search?
4. Iterative-Deepening Search?
Solution
• Depth-First Search: S, A, D, E, G — solution found: S, A, G
• Breadth-First Search: S, A, B, C, D, E, G — solution found: S, A, G
• Uniform-Cost Search: S, A, D, B, C, E, G — solution found: S, B, G
• Iterative-Deepening Search: S, A, B, C, S, A, D, E, G — solution found: S, A, G
Informed Search
• Search efficiency would improve greatly if there were a way to order the choices so that the most promising are explored first.
- This requires domain knowledge of the problem.
• Add domain-specific knowledge to select the best path along which to continue searching.
• Define a heuristic function, h(n), that estimates the "goodness" of a node n, based on domain-specific information that is computable from the current state description.
• A heuristic is knowledge about the domain that helps to undertake focused search.
- It is a rule of thumb, or a guess, that limits the search space towards the goal.
Example: the 8-puzzle
Heuristic function, h(n)
• It is an estimate of how close we are to a goal state.
• h(n) = estimated cost of the path from state n to a goal state.
• Good heuristic functions for example problems:
- Route finding: straight-line distance
- 8-puzzle: (i) number of mismatched tiles, and (ii) Manhattan or city-block distance (the sum of the distances each tile is from its goal position)
- 8-queens: min-conflict or max-conflict (the number of attacking queen pairs)
• Note that:
- h(n) ≥ 0 for all nodes n
- h(n) = 0 if n is a goal node
- h(n) = infinity if n is a dead end from which a goal can't be reached
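The two 8-puzzle heuristics can be written in a few lines. States are flat tuples with 0 for the blank; the function names and the sample state are illustrative:

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal, width=3):
    """h2: sum of horizontal + vertical distances of each tile
    from its goal position (city-block distance)."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                 # don't count the blank
        j = goal.index(tile)
        dist += abs(i // width - j // width) + abs(i % width - j % width)
    return dist

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)   # tiles 5 and 8 are out of place
print(misplaced_tiles(state, goal))   # 2
print(manhattan(state, goal))         # 2
```

Both satisfy h(n) ≥ 0 and h(goal) = 0, and neither overestimates the true number of moves, so both are admissible; Manhattan distance is the stronger (larger) of the two.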
Best first search
• It is a generic name for the class of informed methods.
• Nodes are ordered in the open list so that the one with the best evaluation (minimum cost) is expanded first.
- The evaluation function f incorporates domain-specific information and determines the desirability of expanding the node.
- It aims to find low-cost solutions towards the goal node.
• There are two measures:
- The path cost g, used to decide which path to extend further (e.g. uniform cost search).
- Some estimate of the cost of the path from the current state to the goal state (the heuristic h).
- The first measure alone may not direct the search towards the goal; it needs to be supplemented by the second measure.
Best First Search Approaches
• Two approaches to find the shortest path:
- Greedy search: minimizes the estimated cost to reach a goal
- A* search: minimizes the total path cost
• When expanding a node n in the search tree, greedy search uses the estimated cost to get from the current state to the goal state, defined as h(n).
- In the route finding problem, h(n) is the straight-line distance.
• We also have the sum of the costs to reach that node from the start state, defined as g(n).
- In the route finding problem, this is the sum of the step costs along the search path.
• For each node in the search tree, an evaluation function f(n) can be defined as the sum of these functions:
f(n) = g(n) + h(n)
Greedy Search
• A best first search that uses a heuristic function h(n) alone to guide the search.
- Selects the node to expand that is closest (hence "greedy") to a goal node.
- The algorithm doesn't take the minimum path cost from the initial node to the current node into account; it just goes ahead optimistically and never looks back.
• Implementation: expand first the node closest to the goal state, i.e. with evaluation function f(n) = h(n)
- h(n) = 0 if node n is the goal state
- otherwise h(n) ≥ 0: an estimated cost of the cheapest path from the state at node n to a goal state
Example 1: the route finding problem
Example
(figure: Romania road map with straight-line distances to Bucharest, not reproduced)
Solution to Route Finding
Greedy search repeatedly expands the node with the smallest h(n):
• Arad (366) → successors: Sibiu (253), Timisoara (329), Zerind (374)
• Expand Sibiu → successors: Arad (366), Fagaras (178), Oradea (380), Rimnicu Vilcea (193)
• Expand Fagaras → successors: Bucharest (0), Sibiu (253)
• Expand Bucharest: goal reached.
Is Arad → Sibiu → Fagaras → Bucharest optimal? Total cost = 450, while the path through Rimnicu Vilcea and Pitesti costs only 418, so greedy search is not optimal here.
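The expansion order above can be reproduced with a priority queue keyed on h(n) alone. The road distances and straight-line estimates below follow the standard Romania example; treat the exact numbers and the `Rimnicu` abbreviation as assumptions:

```python
import heapq

def greedy_best_first(adj, h, start, goal):
    """Expand the frontier node with the smallest h(n); path cost g is ignored."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nb, _cost in adj.get(node, []):   # step costs are unused by greedy
            if nb not in visited:
                heapq.heappush(frontier, (h[nb], nb, path + [nb]))
    return None

# Romania subgraph with road distances (assumed from the standard example)
adj = {"Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
       "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Oradea", 151), ("Rimnicu", 80)],
       "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
       "Rimnicu": [("Sibiu", 80), ("Pitesti", 97), ("Craiova", 146)],
       "Pitesti": [("Rimnicu", 97), ("Bucharest", 101)]}
# straight-line distances to Bucharest, as used on the slides
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374, "Oradea": 380,
     "Fagaras": 178, "Rimnicu": 193, "Pitesti": 98, "Craiova": 160, "Bucharest": 0}

print(greedy_best_first(adj, h, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] — cost 450, not the optimal 418
```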
Properties
• Incomplete:
- It may not reach the goal state because it may start down an infinite path and never return to try other possibilities.
• Not optimal if the heuristic is not admissible.
- (A heuristic function is admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates is never higher than the lowest possible cost from the current point in the path.)
• Takes memory:
- It retains all the nodes in memory.
- Space complexity is exponential: O(b^m), where m is the maximum depth of the search space and b is the branching factor.
• Time complexity:
- It takes exponential time: O(b^m).
• With a good heuristic function, the space and time complexity can be reduced substantially.
• The problem with greedy search is that it doesn't take the cost so far into account.
- We need a search algorithm (like A* search) that takes this cost into consideration, so that it avoids expanding paths that are already expensive.
A* Search Algorithm
• It considers both the estimated cost of getting from n to the goal node, h(n), and the cost of getting from the initial node to node n, g(n).
• Apply three functions over every node:
- g(n): cost of the path found so far from the initial state to n
- h(n): estimated cost of the shortest path from n to the goal
- f(n): estimated total cost of the shortest path from the initial state to the goal via n
- Evaluation function: f(n) = g(n) + h(n)
• Implementation: expand the node for which the evaluation function f(n) is lowest.
- Rank nodes by the f(n) of the path that goes from the start node to the goal node via the given node.
Example
(figure: Romania road map with step costs and straight-line distances, not reproduced)
Solution to Route Finding
A* repeatedly expands the node with the smallest f(n) = g(n) + h(n):
• Arad: f = 0 + 366 = 366
• Expand Arad → Sibiu (140 + 253 = 393), Timisoara (447), Zerind (449)
• Expand Sibiu → Arad (646), Fagaras (415), Oradea (671), Rimnicu Vilcea (220 + 193 = 413)
• Expand Rimnicu Vilcea → Pitesti (317 + 98 = 415), Craiova (573)
• Expand Pitesti → Bucharest (418 + 0 = 418), Craiova (615), Rimnicu (607)
• Expand Bucharest: goal reached via Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest, cost 418.
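A* over the same network is a one-line change from greedy search: the priority becomes f = g + h instead of h alone. The distances and heuristic values below follow the standard Romania example and are assumptions, as is the `Rimnicu` abbreviation:

```python
import heapq

def astar(adj, h, start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g       # first goal pop is optimal if h is admissible
        if node in best_g and best_g[node] <= g:
            continue             # already reached this node more cheaply
        best_g[node] = g
        for nb, cost in adj.get(node, []):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[nb], g2, nb, path + [nb]))
    return None

adj = {"Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
       "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Oradea", 151), ("Rimnicu", 80)],
       "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
       "Rimnicu": [("Sibiu", 80), ("Pitesti", 97), ("Craiova", 146)],
       "Pitesti": [("Rimnicu", 97), ("Bucharest", 101)]}
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374, "Oradea": 380,
     "Fagaras": 178, "Rimnicu": 193, "Pitesti": 98, "Craiova": 160, "Bucharest": 0}

print(astar(adj, h, "Arad", "Bucharest"))
# (['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest'], 418)
```

Unlike greedy search, A* prefers Rimnicu Vilcea (f = 413) over Fagaras and finds the cost-418 route.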
Properties
• A* search is complete, optimal, and optimally efficient for any given admissible heuristic function.
• Complete:
- It is guaranteed to reach the goal state.
• Optimal:
- If the given heuristic function is admissible, then A* search is optimal.
• Takes memory:
- It keeps all generated nodes in memory.
- Both time and space complexity are exponential.
• The time and space complexity of heuristic algorithms depend on the quality of the heuristic function.
- The worst case is exponential.
Local Search
• In many problems, the path to the goal is irrelevant; the goal state itself is the solution.
- Local search is widely used for very big problems.
- It returns good but not necessarily optimal solutions.
- State space = a set of complete configurations; find a configuration satisfying the constraints.
- Examples: n-queens, airline flight schedules.
- In these cases, we can use local search algorithms that keep a single current state and try to improve it.
• Being able to ignore the path offers two main advantages:
- Less memory is used, generally a constant amount.
- A reasonable solution can often be found in infinite search spaces for which systematic search would be unsuited.
Conti..
• Local search algorithms are also very useful for optimization problems, where the goal is to find the best state according to an objective function.
• Many of these problems are not suitable for standard search.
• As an example, nature provides a function of reproductive success that is used to guide Darwinian evolution. (Darwin defined evolution as "descent with modification": the idea that species change over time, give rise to new species, and share a common ancestor.) Evolution tries to optimize this function, but there is no path cost or goal test.
• Example: n-queens. Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
Conti..
Key idea (surprisingly simple):
1) Select a (random) initial state (an initial guess at the solution),
e.g. guess a random placement of the N queens.
2) Make a local modification to improve the current state,
e.g. move a queen under attack to a "less attacked" square.
3) Repeat step 2 until a goal state is found (or we run out of time);
the cycle can be done billions of times.
(Not necessarily found: the method is incomplete.)
Requirements:
- Generate an initial (often random; probably not optimal, or even valid) guess
- Evaluate the quality of the guess
- Move to another state (well-defined neighborhood function)
- . . . and do these operations quickly
- . . . and don't save the paths followed
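The three steps above can be sketched for n-queens with the min-conflicts style of local search: pick a conflicted queen and move it to the least-attacked row in its column. The function names and step limit are illustrative, and since the method is incomplete it may occasionally return None:

```python
import random

def conflicts(cols, row, col):
    """Number of queens attacking square (row, col); one queen per column,
    cols[c] = row of the queen in column c."""
    return sum(1 for c in range(len(cols)) if c != col and
               (cols[c] == row or abs(cols[c] - row) == abs(c - col)))

def min_conflicts(n, max_steps=10_000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]       # step 1: random guess
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(cols, cols[c], c) > 0]
        if not conflicted:
            return cols                               # goal: no attacks
        c = rng.choice(conflicted)                    # a queen under attack
        # step 2: move it to the least-attacked row in its column
        cols[c] = min(range(n), key=lambda r: conflicts(cols, r, c))
    return None                                       # incomplete: may fail

print(min_conflicts(8))   # e.g. a list of 8 row indices, no two queens attacking
```

Note that only the current state is kept, there is no fringe and no saved path, matching the requirements listed above.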
Iterative improvement algorithm (IIA)
• The general idea is to start with a complete configuration and to make modifications to improve its quality.
• It keeps track of only the current state and tries to improve it until the goal state is reached.
- It does not look ahead beyond the immediate neighbors of that state.
• Implementation:
- Consider all states laid out on the surface of a landscape.
- IIA moves around the landscape trying to find the highest peaks, which are the optimal solutions.
• IIA search algorithms:
- Hill climbing
- Simulated annealing
- Local beam search
- Genetic algorithms (genetic programming)
- Tabu search
ADVERSARIAL SEARCH
• Games Playing as Search Problem
• Min-max Algorithm
• Alpha-Beta Pruning
Which one/s are Adversarial Search?
- Adversarial search is search when there is an "enemy" or "opponent" changing the state of the problem at every step in a direction you do not want.
- Simply, it is search in competitive environments.
- You change the state, but then you don't control the next state.
- Examples: chess, business, trading, war.
Tic-Tac-Toe Problem
• Given an n × n matrix, the object of the game is to make three of your symbols in a row, column, or diagonal.
- This is a win state for the game.
• One player is designated as player X and makes the first play by marking an X into any of the n × n open squares of the board.
- Think of a 3 × 3 board, where the player can put an X in any of the 9 open places.
• The second player, "O", then follows suit by marking an O into any of the other open squares that remain.
• This continues back and forth until one player wins the game, or the players fill all squares on the board without establishing a winner. This is a draw.
Partial Game Tree for Tic-Tac-Toe Problem
(figure: the root has 9 possible first moves; the second ply has 8 × 9 positions, the third 7 × 8 × 9, and so on)
How to play a game?
• The main objective in playing is to win the game, which requires selecting the best move carefully:
- The system first generates all possible legal moves and applies them to the current board.
- Evaluate each of the resulting positions/states.
- Determine the best one to move to.
- In a game like tic-tac-toe, this process is repeated for each possible move until the game is won, lost, or drawn.
- Wait for your opponent to move, and repeat.
• For real problems, the search tree is too big to reach the terminal states.
- Examples:
• Chess: 10^120 nodes
• 8-puzzle: 10^5 nodes
• How many nodes must be considered in a Tic-Tac-Toe game with a 3 × 3 board?
- (The game tree for Tic-Tac-Toe has 255,168 leaf nodes, i.e. there are 255,168 possible ways to play the game until it ends.)
• In a normal search problem, the optimal solution is the sequence of actions leading to the goal state.
• How can we come up with an optimal decision in a game?
• E.g. how to win tic-tac-toe?
Adversarial Search
• In AI, it is used in game playing, since one player's attempt at maximizing winning (fitness) in the game is opposed by the other player.
• The search tree in adversarial games like tic-tac-toe consists of alternating levels, where the moving (MAX) player tries to maximize fitness and then the opposing (MIN) player tries to minimize it.
• To simplify, let us consider the typical case of a:
- 2-person game
- Zero-sum game
- Perfect-information game
Typical case
• 2-person game: two players alternate moves, e.g. chess.
• Zero-sum game: one player's loss is the other's gain.
- The zero-sum assumption allows us to use a single evaluation function to describe the goodness of a board with respect to both players:
- f(n) > 0: position n is good for A and bad for B
- f(n) < 0: position n is bad for A and good for B
- f(n) near 0: position n is a neutral position
- f(n) = +infinity: win for A
- f(n) = -infinity: win for B
• Perfect information: both players have access to complete information about the state of the game; no information is hidden from either player.
- The board configuration is known completely to both players at all times.
- Examples of perfect-information games: Tic-Tac-Toe and chess.
- Playing cards is not: the cards held by one player are not known to the others.
How can a game be formally defined?
• A game can be formally defined as a kind of search problem with the following elements:
1. S0: the initial state, which specifies how the game is set up at the start.
2. PLAYER(s): defines which player has the move in a state.
3. ACTIONS(s): returns the set of legal moves in a state.
4. RESULT(s, a): the transition model, which defines the result of a move.
5. TERMINAL-TEST(s): a terminal test, which is true when the game is over and false otherwise. States where the game has ended are called terminal states.
6. UTILITY(s, p): a utility function (also called an objective function or payoff function) that defines the final numeric value for a game that ends in terminal state s for a player p, e.g. -1, 0, 1 in tic-tac-toe.
- The initial state, the ACTIONS function, and the RESULT function define the game tree for the game: a tree where the nodes are game states and the edges are moves.
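The six elements can be spelled out concretely for tic-tac-toe. This is a minimal sketch, not the chapter's code; a state is a pair (board, player-to-move) with the board a 9-tuple of 'X', 'O', or None:

```python
# Winning lines: rows, columns, and the two diagonals
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def initial_state():                       # S0
    return (None,) * 9, "X"

def player(state):                         # PLAYER(s)
    return state[1]

def actions(state):                        # ACTIONS(s): indices of empty cells
    board, _ = state
    return [i for i in range(9) if board[i] is None]

def result(state, a):                      # RESULT(s, a): transition model
    board, p = state
    new_board = board[:a] + (p,) + board[a + 1:]
    return new_board, ("O" if p == "X" else "X")

def winner(board):
    for i, j, k in LINES:
        if board[i] is not None and board[i] == board[j] == board[k]:
            return board[i]
    return None

def terminal_test(state):                  # TERMINAL-TEST(s)
    board, _ = state
    return winner(board) is not None or all(c is not None for c in board)

def utility(state, p="X"):                 # UTILITY(s, p): +1 win, -1 loss, 0 draw
    w = winner(state[0])
    return 0 if w is None else (1 if w == p else -1)

s = initial_state()
for move in (0, 4, 1, 8, 2):               # X takes the top row: cells 0, 1, 2
    s = result(s, move)
print(terminal_test(s), utility(s, "X"))   # True 1
```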
Game Tree Search
- Problem spaces for typical games are represented as trees.
- In a game tree, the nodes are game states and the edges are moves. The game tree represents the possible moves by both players given an initial configuration.
- Each node represents a (board) configuration, e.g. each node is marked by a letter (A, B, etc.).
- The root node represents the initial configuration of the board, at which a decision must be made as to what is the best single move to make next.
- If it is my turn to move, then the root is labeled a "MAX" node indicating it is my turn; otherwise it is labeled a "MIN" node to indicate it is my opponent's turn.
- The children of a node n indicate the possible configurations after the player makes a move from node n, e.g. B, C, D are children of A: they are the possible configurations after the player makes a move from configuration A.
- In the case of alternating moves between the players, alternating levels in the tree have alternating moves: each level of the tree has nodes that are all MAX or all MIN; nodes at level i are of the opposite kind from those at level i+1.
Example… Game tree
• A game between players X and Y. Let us analyze from X's perspective by looking ahead two moves.
• H is a winning state for X (marked by plus infinity). If H is reached, X wins; no further moves.
• E and M are losing states for X (marked by minus infinity). If E or M is reached, X loses; no further moves.
Question: What should X move to from A? Should X move to B, C, or D?
How to go searching?
• For many games, we can judge the progress of the game from the players' positions on the board.
- We can use heuristics to evaluate these board positions and judge how good (what chance of winning) any one of the next moves has.
- Evaluation is done on the leaf nodes:
• +inf stands for a sure win,
• -inf for a sure loss,
• other numbers stand for intermediate values.
• To obtain the values of non-leaf nodes, analyze alternate levels in the game tree as maximizing and minimizing levels.
• Two techniques:
- Min-Max Algorithm
- Alpha-Beta pruning
Min-Max Algorithm
• It is a method in decision theory for minimizing the maximum possible loss.
- Alternatively, it can be thought of as maximizing the minimum gain.
- It originates from two-player zero-sum game theory, considering the case where players take alternate moves.
• The Min-Max algorithm helps find the best move by working backwards from the end of the game.
- At each step it assumes that player A is trying to maximize the chances of A winning,
- while on the next turn player B is trying to minimize the chances of A winning (i.e., to maximize B's own chances of winning).
Min-Max procedure
• Create start node as a MAX node with current board configuration
• Expand nodes down to some depth of look ahead in the game
 The min-max algorithm proceeds with depth-first search down to the terminal
states at the bottom of the tree.
• Apply the evaluation function at each of the leaf nodes
• “Back up” values for each of the non-leaf nodes until a value is computed for the root
node
 At MIN nodes, the backed-up value is the minimum of the values associated with
its children.
 At MAX nodes, the backed up value is the maximum of the values associated with
its children. 134
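The backing-up procedure can be written as a short recursion. The two-ply tree and leaf values below are illustrative (a common textbook-style example), not the slide's figure:

```python
def minimax(node, maximizing, tree, values):
    """Backed-up value of a game-tree node: leaves carry static
    evaluations; MAX levels take the max of their children's values,
    MIN levels take the min."""
    children = tree.get(node)
    if not children:
        return values[node]      # leaf: apply the evaluation function
    child_vals = [minimax(c, not maximizing, tree, values) for c in children]
    return max(child_vals) if maximizing else min(child_vals)

# two-ply example: MAX to move at the root A; B, C, D are MIN nodes
tree = {"A": ["B", "C", "D"], "B": ["b1", "b2", "b3"],
        "C": ["c1", "c2", "c3"], "D": ["d1", "d2", "d3"]}
values = {"b1": 3, "b2": 12, "b3": 8, "c1": 2, "c2": 4, "c3": 6,
          "d1": 14, "d2": 5, "d3": 2}

print(minimax("A", True, tree, values))  # 3: MIN backs up 3, 2, 2; MAX picks 3
```

Here MAX's best move is to B, since the opponent can hold C and D down to 2.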
Example
• Let us consider a perfect two-player game.
• Two players, MAX and MIN, take turns (with MAX playing first).
• The score tells whether a terminal state is a win, loss, or draw (for MAX).
• Perfect knowledge of states, no uncertainty in the successor function.
(figure: min-max worked example over alternating Max and Min levels, not reproduced)
Exercise 1: Use Min-Max to find the optimal play
(figure not reproduced)

Exercise 2: Identify the optimal play path
(figure: a game tree with alternating MAX and MIN levels, not reproduced)
Comments
• The min-max algorithm makes a complete depth-first exploration of the game tree.
- In a tree of maximum depth M with b legal moves at any point, this results in complexity on the order of b^M.
- This exponential complexity means that min-max by itself is not practical for real games.
• Is there a way to improve the performance of the Min-Max algorithm without affecting the result?
Alpha-Beta Pruning
• We can improve on the performance of the min-max algorithm through alpha-beta pruning.
• Basic idea: "If you have an idea that is surely bad, don't take the time to see how truly awful it is."
• Alpha-Beta pruning reduces the complexity by pruning parts of the tree that won't influence the decision.
- With this, we don't have to search the complete tree.
Alpha-Beta Procedure
• Two values are used by this procedure:
- Alpha: the highest value found so far at any choice point along the path for MAX.
- Beta: the lowest value found so far at any choice point along the path for MIN.
• Traverse the search tree in depth-first order and update the alpha and beta values for each search node as the search progresses.
- At each MAX node n, alpha(n) = maximum value found so far.
- At each MIN node n, beta(n) = minimum value found so far.
- The alpha values start at -infinity and only increase, while the beta values start at +infinity and only decrease.
• Prune branches that cannot change the final decision:
- Stop searching below n when alpha(i) ≥ beta(n) for some MAX-node ancestor i of n.
Alpha-Beta Pruning
• We don't need to compute the value at this node:
- no matter what it is, it can't affect the value of the root node.
(figure: a pruned subtree, not reproduced)
Alpha-Beta Pruning Principle
Procedure to perform min-max search with Alpha-Beta pruning:
• If the level is the top level, let alpha be negative infinity and let beta be positive infinity.
• If the limit of search has been reached, compute the value of the current position relative to the appropriate player. Report the result.
• If the level is a minimizing level:
- Until all children are examined with ALPHA-BETA, or until alpha is equal to or greater than beta:
- Use the ALPHA-BETA procedure, with the current alpha and beta values, on a child; note the value reported.
- Compare the value reported with the beta value; if the reported value is smaller, reset beta to the new value.
- Report beta.
• Otherwise, the level is a maximizing level:
- Until all children are examined with ALPHA-BETA, or alpha is equal to or greater than beta:
- Use the ALPHA-BETA procedure, with the current alpha and beta values, on a child; note the value reported.
- Compare the value reported with the alpha value; if the reported value is larger, reset alpha to the new value.
- Report alpha.
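The procedure above translates directly into code. The tree and leaf values are the same illustrative two-ply example used for min-max; the pruning happens when alpha ≥ beta:

```python
import math

def alphabeta(node, maximizing, tree, values, alpha=-math.inf, beta=math.inf):
    """Min-max with alpha-beta pruning: stop examining a node's children
    as soon as alpha >= beta, since the result cannot affect the root."""
    children = tree.get(node)
    if not children:
        return values[node]        # limit of search: static evaluation
    if maximizing:
        best = -math.inf
        for c in children:
            best = max(best, alphabeta(c, False, tree, values, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:
                break              # the MIN ancestor will never allow this line
        return best
    else:
        best = math.inf
        for c in children:
            best = min(best, alphabeta(c, True, tree, values, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break              # the MAX ancestor already has something better
        return best

tree = {"A": ["B", "C", "D"], "B": ["b1", "b2", "b3"],
        "C": ["c1", "c2", "c3"], "D": ["d1", "d2", "d3"]}
values = {"b1": 3, "b2": 12, "b3": 8, "c1": 2, "c2": 4, "c3": 6,
          "d1": 14, "d2": 5, "d3": 2}
print(alphabeta("A", True, tree, values))  # 3 — same value as plain min-max
```

After B backs up 3, the search of C stops at its first leaf (2 ≤ alpha = 3): whatever C's remaining children hold, MIN at C can already force a value MAX will reject.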
(figure: step-by-step alpha-beta pruning on the example tree, alternating Max and Min levels, not reproduced)
Exercise 1
(figure not reproduced)

Exercise 2: Mark the pruned paths with 'X'
(figure: a game tree with alternating MAX and MIN levels, not reproduced)
Effectiveness of alpha-beta
• Alpha-beta is guaranteed to compute the same value for the root node as min-max, with less or equal computation.
• Worst case: no pruning; it examines b^M leaf nodes, where each node has b children and an M-depth search is performed.
• The best case is when each player's best move is the first alternative generated:
- Best case: examine only about b^(M/2) leaf nodes.
- This result shows that you can search twice as deep as min-max in the same time.
• In Deep Blue (the chess program), they found empirically that alpha-beta pruning meant the average branching factor at each node was about 6 instead of about 35!
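The saving can be observed directly by instrumenting both searches with a leaf counter. These are instrumented variants of the earlier sketches on the same illustrative two-ply tree; with this (non-optimal) move ordering, alpha-beta evaluates 7 of the 9 leaves, and a better ordering would prune more:

```python
import math

def minimax(node, maximizing, tree, values, counter):
    children = tree.get(node)
    if not children:
        counter[0] += 1            # count every static evaluation
        return values[node]
    vals = [minimax(c, not maximizing, tree, values, counter) for c in children]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, maximizing, tree, values, counter, a=-math.inf, b=math.inf):
    children = tree.get(node)
    if not children:
        counter[0] += 1
        return values[node]
    best = -math.inf if maximizing else math.inf
    for c in children:
        v = alphabeta(c, not maximizing, tree, values, counter, a, b)
        if maximizing:
            best = max(best, v); a = max(a, best)
        else:
            best = min(best, v); b = min(b, best)
        if a >= b:
            break                  # prune the remaining siblings
    return best

tree = {"A": ["B", "C", "D"], "B": ["b1", "b2", "b3"],
        "C": ["c1", "c2", "c3"], "D": ["d1", "d2", "d3"]}
values = {"b1": 3, "b2": 12, "b3": 8, "c1": 2, "c2": 4, "c3": 6,
          "d1": 14, "d2": 5, "d3": 2}

n1, n2 = [0], [0]
v1 = minimax("A", True, tree, values, n1)
v2 = alphabeta("A", True, tree, values, n2)
print(v1, n1[0])   # 3 9 — min-max evaluates all 9 leaves
print(v2, n2[0])   # 3 7 — alpha-beta gets the same value from 7
```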