AI Module 2
PROBLEM-SOLVING AGENTS
Goal formulation
Problem formulation
Problem formulation is the second issue to be considered in problem solving. It is the process of deciding what actions and states to consider, given a goal.
E.g., driving from Ernakulam to Chennai
The intermediate states and the actions along the way must be defined.
States: various places between Ernakulam and Chennai
Actions: Turn left, Turn right, go straight, accelerate & brake, etc.
Agent will consider actions at the level of driving from one major town to another.
Each state therefore corresponds to being in a particular town.
The problem-solving agent with several immediate options of unknown value can
decide what to do by just examining different possible sequences of actions that lead
to states of known value, and then choosing the best sequence. This process of
looking for such a sequence is called Search.
To examine future actions, we need to be more specific about the properties of the environment:
Observable: the agent always knows the current state.
Discrete: at any given state there are only finitely many actions to choose from.
Known: so the agent knows which states are reached by each action.
Deterministic: so each action has exactly one outcome.
Under these assumptions, the solution to any problem is a fixed sequence of actions.
If the agent knows the initial state and the environment is known and deterministic, it
knows exactly where it will be after the first action and what it will perceive. Since
only one percept is possible after the first action, the solution can specify only one
possible second action, and so on.
Search
After Goal formulation and problem formulation, the agent has to look for a sequence
of actions that reaches the goal.
This process is called Search. A search algorithm takes a problem as input and returns
a sequence of actions as output.
After the search phase, the agent carries out the actions recommended by the search algorithm. This final phase is called the execution phase.
Thus the agent has a formulate, search, and execute design.
Searching Process
Initial State
The first component that describes the problem is the initial state that the agent starts in.
Actions
The possible actions available to the agent. The most common formulation uses a successor function: for any state x, it returns s(x), the set of states reachable from x by a single action.
TRANSITION MODEL: A description of what each action does is known as the transition
model.
Together, the initial state and successor function implicitly define the state space of
the problem – the set of all states reachable from the initial state. The state space
forms a graph in which nodes are states and the links between nodes are actions.
Path
A path in the state space is a sequence of states connected by a sequence of actions.
Goal Test
The goal test determines whether a given state is a goal state or not.
Path cost
A path cost function that assigns a numeric cost to each path. Cost of a path can be
described as the sum of the costs of the individual actions along the path. An optimal
solution has the lowest path cost among all the solutions.
The solution to the problem is an action sequence that leads from the initial state to a goal state.
Solution quality is measured by the path cost function; an optimal solution has the lowest path cost among all solutions.
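The components above (initial state, actions/transition model, goal test, path cost) can be gathered into one object. A minimal Python sketch, with hypothetical method names that are not prescribed by the text:

```python
# A minimal problem skeleton; method names here are illustrative assumptions.
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial   # the initial state the agent starts in
        self.goal = goal         # a goal state

    def actions(self, state):
        """Return the actions available in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` is a goal state."""
        return state == self.goal

    def step_cost(self, state, action, result):
        """Cost of taking `action` in `state`; 1 by default."""
        return 1
```

Concrete problems (the vacuum world, 8-queens, route finding) would subclass this and fill in `actions` and `result`.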
Example
A simplified Road Map of part of Romania
Formulating problems
Abstraction
◦ the process of taking out irrelevant information
◦ leaving only the parts essential to describing the states (removing detail from the representation)
◦ Conclusion: only the most important parts, those that actually contribute to the search, are used.
The choice of a good abstraction thus involves removing as much detail as possible while
retaining validity and ensuring that the abstract actions are easy to carry out.
Problem-Solving Agents
Agents whose task is to solve a particular problem (steps)
◦ goal formulation
◦ what is the goal state
◦ what are important characteristics of the goal state
◦ how does the agent know that it has reached the goal
◦ are there several possible goal states
◦ are they equal or are some more preferable
◦ problem formulation
◦ what are the possible states of the world relevant for solving the problem
◦ what information is accessible to the agent
◦ how can the agent progress from state to state
EXAMPLE PROBLEMS
The problem solving approach has been applied to a vast array of task environments.
A real-world problem is one whose solutions people actually care about, whereas a toy problem is intended to illustrate or exercise problem-solving methods.
TOY PROBLEMS
The two-square vacuum world can be formulated as follows:
o States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2 × 2 = 8 possible world states.
o Initial state: Any state can be designated as initial state.
o Successor function: This generates the legal states that result from trying the three actions (Left, Right, Suck). The complete state space is shown in figure 2.3.
o Goal Test : This tests whether all the squares are clean.
o Path cost: Each step costs one, so the path cost is the number of steps in the path.
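As a sketch, the vacuum-world formulation above can be written down directly; the state encoding (location, dirt on the left square, dirt on the right square) and the function names are illustrative assumptions:

```python
# Vacuum world sketch: state = (loc, dirt_left, dirt_right); loc is 0 or 1.
# The encoding and names are illustrative, not fixed by the text.

def actions(state):
    return ["Left", "Right", "Suck"]

def result(state, action):
    loc, d0, d1 = state
    if action == "Left":
        return (0, d0, d1)
    if action == "Right":
        return (1, d0, d1)
    if action == "Suck":                 # clean the current square
        return (loc, False, d1) if loc == 0 else (loc, d0, False)
    return state

def goal_test(state):
    _, d0, d1 = state
    return not d0 and not d1             # all squares clean

# 2 locations x 2 dirt flags x 2 dirt flags = 8 world states in total.
```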
8-queens problem:
The aim of this problem is to place eight queens on a chessboard so that no queen attacks any other. A queen can attack other queens either diagonally or in the same row or column.
From the following figure, we can understand the problem as well as its correct
solution.
It can be seen from the figure that each queen is placed on the chessboard so that no other queen shares its diagonal, row, or column. Therefore, it is one correct solution to the 8-queens problem.
Path cost: There is no need for a path cost, because only the final state counts.
Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around.
States: Arrangements of all 8 queens, one per column, with no queen attacking another.
Actions: Move a queen to a location where it is safe from attack.
This formulation is better than the incremental formulation, as it reduces the state space.
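The goal test for the 8-queens problem (no two queens sharing a row, column, or diagonal) can be sketched as follows; the row-per-column representation is an assumption, not fixed by the text:

```python
# 8-queens sketch: a candidate is a list of 8 row numbers, one per column,
# so no two queens can ever share a column by construction.
def no_attacks(rows):
    """Return True if no two queens share a row or a diagonal."""
    n = len(rows)
    for i in range(n):
        for j in range(i + 1, n):
            if rows[i] == rows[j]:                 # same row
                return False
            if abs(rows[i] - rows[j]) == j - i:    # same diagonal
                return False
    return True
```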
Cell layout: Here, the primitive components of the circuit are grouped into cells, each
performing its specific function. Each cell has a fixed shape and size. The task is to
place the cells on the chip without overlapping each other.
Channel routing: It finds a specific route for each wire through the gaps between the
cells.
Protein Design: The objective is to find a sequence of amino acids that will fold into a three-dimensional protein with properties that might cure some disease.
Searching for solutions
The agent needs to search for a solution: a route that starts from Arad and reaches Bucharest.
Since Arad is not the goal state, the current state is expanded.
Expanding a state means applying the legal actions to the current state, thereby generating a new set of states.
But reaching the state Arad again produces a repeated state (a loop); such repeated states can make the search tree infinite.
We continue choosing, testing, and expanding until either a solution is found or there
are no more states to expand.
The looping problem can be solved using a graph-search algorithm.
Each state appears in the graph only once, though it may appear in the tree multiple times; graph search keeps at most one copy of each state.
Don't add a node if its state has already been expanded or if a node pointing to the same state is already in the frontier.
Every path from the initial state to an unexplored state has to pass through a state in the frontier.
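The rules above (an explored set, at most one copy of each state, no duplicates on the frontier) can be sketched as a generic graph search; the `successors` interface is a hypothetical stand-in for the problem's successor function:

```python
from collections import deque

# Generic graph-search sketch (BFS order). Names are illustrative.
def graph_search(initial, goal_test, successors):
    frontier = deque([(initial, [initial])])   # (state, path) pairs
    explored = set()
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        explored.add(state)
        for nxt in successors(state):
            # skip states already expanded or already on the frontier
            if nxt not in explored and all(nxt != s for s, _ in frontier):
                frontier.append((nxt, path + [nxt]))
    return None                                 # no solution exists
```

Because each state is expanded at most once, the loop through repeated states (such as returning to Arad) can no longer make the search run forever.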
Measuring problem-solving performance
We can evaluate an algorithm’s performance in four ways:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Complexity is expressed in terms of three quantities: b, the branching factor (the maximum number of successors of any node); d, the depth of the shallowest goal node; and m, the maximum length of any path in the state space.
Uninformed (blind) search strategies use no information about states beyond what is given in the problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state.
They do not take into account the location of the goal; these algorithms ignore where they are going until they find a goal and report success.
Important uninformed search strategies are
o Breadth-first search
o Uniform cost search
o Depth-first search
o Depth-limited search
o Iterative deepening search
Breadth-first search
The root node is expanded first, then all the successors of the root node, then their successors, and so on.
All the nodes are expanded at a given depth in the search tree before any nodes at
the next level are expanded.
Breadth-first search is an instance of the general graph-search algorithm
In BFS the shallowest unexpanded node is chosen for expansion. This is achieved very
simply by using a FIFO queue for the frontier.
The newly generated nodes always go to the back of the queue, while the older nodes
get expanded first.
The goal test is applied to each node when it is generated rather than when it is
selected for expansion.
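A BFS sketch with a FIFO frontier, applying the goal test at generation time as described above; the function names and interface are illustrative:

```python
from collections import deque

# BFS sketch: shallowest nodes are expanded first via a FIFO queue, and
# the goal test is applied when a node is generated, not when expanded.
def breadth_first_search(initial, goal_test, successors):
    if goal_test(initial):
        return [initial]
    frontier = deque([(initial, [initial])])
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()      # oldest (shallowest) node
        for nxt in successors(state):
            if nxt not in explored:
                if goal_test(nxt):            # test at generation time
                    return path + [nxt]
                explored.add(nxt)
                frontier.append((nxt, path + [nxt]))  # back of the queue
    return None
```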
Breadth-first search tree
In the above figure, the nodes are expanded level by level, starting from the root node A down to the last node I in the tree. Therefore, the BFS sequence followed is: A->B->C->D->E->F->G->I.
Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of memory. Here, b is the branching factor and d denotes the depth of the shallowest goal node.
Time Complexity: The time complexity of BFS is likewise O(b^d), so it consumes much time to reach the goal node on large instances.
Disadvantages of BFS
The biggest disadvantage of BFS is that it requires a lot of memory space; it is a memory-bound strategy.
BFS is also time-consuming, because it expands the nodes breadthwise.
Note: BFS expands the nodes level by level, i.e., breadthwise, therefore it is also known as
a Level search technique.
Uniform-cost search
Unlike BFS, this uninformed search explores nodes based on their path cost from the
root node.
It expands a node n having the lowest path cost g(n), where g(n) is the total cost
from a root node to node n.
Uniform-cost search is significantly different from the breadth-first search because
of the following two reasons:
First, the goal test is applied to a node only when it is selected for expansion, not when it is first generated, because the first goal node that is generated may lie on a suboptimal path.
Second, a test is added in case a better/optimal path is found to a node already on the frontier, in which case the older, costlier node is replaced.
Thus, uniform-cost search expands nodes in order of their optimal path cost: before expanding any node, it has already found the optimal path to that node.
Example
Problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu Vilcea
and Fagaras, with costs 80 and 99, respectively.
The least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80 +
97 = 177.
The least-cost node is now Fagaras, so it is expanded, adding Bucharest with cost 99
+ 211 = 310.
Now a goal node has been generated, but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second path to Bucharest with cost 80 + 97 + 101 = 278.
Now the algorithm checks to see if this new path is better than the old one; it is, so the old
one is discarded. Bucharest, now with g-cost 278, is selected for expansion and the solution
is returned.
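The worked example can be reproduced with a small uniform-cost search sketch over the road fragment given above; the dictionary-based graph interface is an assumption:

```python
import heapq

# Uniform-cost search sketch: the goal test is applied only when a node is
# selected for expansion, and a cheaper path to a state supersedes a
# costlier one (handled here by lazy deletion of stale heap entries).
def uniform_cost_search(start, goal, graph):
    frontier = [(0, start, [start])]          # (g-cost, state, path)
    best = {}                                 # cheapest g-cost expanded so far
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in best and best[state] <= cost:
            continue                          # stale, costlier entry
        best[state] = cost
        for nxt, step in graph.get(state, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# The Romania fragment from the example above.
romania = {
    "Sibiu": [("Rimnicu Vilcea", 80), ("Fagaras", 99)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Fagaras": [("Bucharest", 211)],
    "Pitesti": [("Bucharest", 101)],
}
```

Running `uniform_cost_search("Sibiu", "Bucharest", romania)` selects the path through Rimnicu Vilcea and Pitesti with g-cost 278, just as in the trace above.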
Completeness: It is guaranteed to reach the goal state, provided every step cost exceeds some small positive constant ε.
Optimality: It yields the optimal (lowest-cost) path to the goal.
Time Complexity: the number of nodes with g ≤ the cost of the optimal solution, O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε the minimum step cost.
Space Complexity: likewise the number of nodes with g ≤ the cost of the optimal solution, O(b^(1+⌊C*/ε⌋)).
Disadvantages of Uniform-cost search
It does not care about the number of steps a path takes to reach the goal state.
It may get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions.
It works hard, as it examines each node in search of the lowest-cost path.
Depth-first search
Depth-first search always expands the deepest node in the current frontier of the
search tree.
The search proceeds immediately to the deepest level of the search tree, where the
nodes have no successors.
As those nodes are expanded, they are dropped from the frontier, so then the
search “backs up” to the next deepest node that still has unexplored successors.
Depth-first search uses a LIFO queue (a stack).
A LIFO queue means that the most recently generated node is chosen for expansion.
This must be the deepest unexpanded node because it is one deeper than its
parent—which, in turn, was the deepest unexpanded node when it was selected.
In the figure shown below, DFS starts from the initial node A (the root node), traverses one branch deeply down to node I, and then backtracks to B, and so on. Therefore, the sequence is A->B->D->I->E->C->F->G.
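A DFS sketch using an explicit LIFO stack, as described above; the visited set (to avoid loops) and the `successors` interface are illustrative additions:

```python
# DFS sketch: an explicit stack means the most recently generated node,
# i.e. the deepest one, is always expanded first.
def depth_first_search(initial, goal_test, successors):
    stack = [(initial, [initial])]
    visited = set()
    while stack:
        state, path = stack.pop()             # deepest node first
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        # push in reverse so the leftmost successor is expanded first
        for nxt in reversed(successors(state)):
            if nxt not in visited:
                stack.append((nxt, path + [nxt]))
    return None
```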
The performance measure of DFS
Completeness: DFS is not complete in general; it may descend an infinite branch and never return (graph search in a finite state space is complete).
Optimality: It is not optimal; the first solution found need not be the cheapest.
Time Complexity: O(b^m), where m is the maximum depth of any node in the state space.
Space Complexity: O(bm) for tree-like search, i.e., only linear in the maximum depth.
Disadvantages of DFS
It may go deep down a long, fruitless (even infinite) branch when the goal lies elsewhere, and the solution it returns may be far from optimal.
Note: DFS uses the concept of backtracking to explore each node in a search tree.
Depth-limited search (DLS) is a form of depth-first search with a predetermined depth limit l: nodes at depth l are treated as if they have no successors.
The depth bound can sometimes be chosen based on knowledge of the problem.
In the above figure, the depth limit is 1, so only levels 0 and 1 are expanded, giving the sequence A->B->C. The result is unsatisfactory because the goal node I is never reached.
Completeness: Depth-limited search is not guaranteed to reach the goal node; it is incomplete whenever the limit l is smaller than the goal depth d.
Optimality: It does not give an optimal solution, as it only expands the nodes up to the depth limit.
Space Complexity: The space complexity of depth-limited search is O(bl).
Time Complexity: The time complexity of depth-limited search is O(b^l).
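A recursive depth-limited search sketch, distinguishing failure (no solution below this node) from cutoff (the limit was hit); the return conventions are illustrative assumptions:

```python
# Depth-limited search sketch: returns a path to a goal, the string
# "cutoff" if the depth limit was reached, or None on outright failure.
def depth_limited_search(state, goal_test, successors, limit):
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"                       # depth limit reached
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result           # goal found below `nxt`
    return "cutoff" if cutoff_occurred else None
```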
Disadvantages of Depth-limited search
If the chosen limit l is smaller than the goal depth d, the search fails; if l is much larger than d, unnecessary work is done and the solution found need not be optimal.
Iterative deepening search
This search is a combination of BFS and DFS: BFS guarantees to reach the goal node, while DFS occupies less memory space.
Iterative deepening search combines these two advantages of BFS and DFS to reach the goal node.
It gradually increases the depth limit (0, 1, 2, and so on) until the goal node is found.
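The idea can be sketched by running a depth-limited DFS with limits 0, 1, 2, ... until a solution appears; the helper names and the simplified cutoff handling are illustrative:

```python
# Iterative deepening sketch: repeat a depth-limited DFS with an
# increasing limit. For simplicity, cutoff is folded into failure here.
def dls(state, goal_test, successors, limit):
    if goal_test(state):
        return [state]
    if limit == 0:
        return None                           # treat cutoff as failure
    for nxt in successors(state):
        result = dls(nxt, goal_test, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(initial, goal_test, successors, max_depth=50):
    for limit in range(max_depth + 1):        # limit = 0, 1, 2, ...
        result = dls(initial, goal_test, successors, limit)
        if result is not None:
            return result
    return None
```

Shallow levels are re-expanded on every iteration, but since most nodes of a tree live at the deepest level, the repeated work changes only the constant factor, not the O(b^d) growth.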
In the above figure, the goal node is H and the initial depth limit is 1, so levels 0 and 1 are expanded and the search terminates with the sequence A->B->C. Increasing the depth limit to 3, the search again expands the nodes from level 0 down to level 3 and terminates with the sequence A->B->D->F->E->H, where H is the desired goal node.
Completeness: Iterative deepening search is complete whenever the branching factor is finite.
Optimality: It is optimal when the path cost is a nondecreasing function of node depth (for example, when all step costs are equal).
Space Complexity: Like DFS, its space complexity is modest: O(bd).
Time Complexity: Its time complexity is O(b^d), asymptotically the same as BFS.