AI Module 2

Module – 2 (Problem Solving)

Solving problems by searching: Problem-solving agents, Example problems, Searching for solutions, Uninformed search strategies, Informed search strategies, Heuristic functions.

Search in AI is the process of navigating from a starting state to a goal state by transitioning through intermediate states.

PROBLEM-SOLVING AGENTS

 A problem-solving agent is a goal-based agent that decides what to do by finding sequences of actions that lead to desirable states.
 Intelligent agents are supposed to act in such a way that the environment goes
through a sequence of states that maximizes the performance measure.
 The task is simplified if the agent can adopt a goal and aim to satisfy it.

Goal formulation

 First step in problem solving is goal formulation.


 It is based on the current situation and the agent’s performance measure.
 The goal is formulated as a set of world states – exactly those states in which the goal is satisfied.
 To reach a goal state from the initial state, actions are required.
 Actions are the operators causing transitions between world states.
 Actions should be described at a suitable level of abstraction, rather than in great detail.

Problem formulation
 Problem formulation is the second issue to be considered in problem solving: the process of deciding what actions and states to consider.
 E.g., driving from Ernakulam to Chennai
 the intermediate states and actions must be defined
 States: Some places in Ernakulam and Chennai
 Actions: Turn left, Turn right, go straight, accelerate & brake, etc.
 Agent will consider actions at the level of driving from one major town to another.
 Each state therefore corresponds to being in a particular town.
 The problem-solving agent with several immediate options of unknown value can
decide what to do by just examining different possible sequences of actions that lead
to states of known value, and then choosing the best sequence. This process of
looking for such a sequence is called Search.

Examining future actions requires being more specific about the properties of the environment:

 Observable: the agent always knows the current state.
 Discrete: at any given state there are only finitely many actions to choose from.
 Known: so the agent knows which states are reached by each action.
 Deterministic: so each action has exactly one outcome.
 Under these assumptions, the solution to any problem is a fixed sequence of actions.
 If the agent knows the initial state and the environment is known and deterministic, it knows exactly where it will be after the first action and what it will perceive. Since only one percept is possible after the first action, the solution can specify only one possible second action, and so on.

Search

 After Goal formulation and problem formulation, the agent has to look for a sequence
of actions that reaches the goal.
 This process is called Search. A search algorithm takes a problem as input and returns
a sequence of actions as output.
 After the search phase, the agent has to carry out the actions that are recommended
by the search algorithm. This final phase is called execution phase.

Formulate — Search — Execute

Thus the agent has a formulate, search and execute design to it.

Searching Process

1. Formulate a goal and a problem to solve.
2. The agent calls a search procedure to solve it.
3. The agent uses the solution to guide its actions.
4. Do whatever the solution recommends.
5. Remove that step from the sequence.
6. Once the solution has been executed, the agent will formulate a new goal.
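The six steps above can be sketched as a simple agent loop. This is a minimal illustration, not a full agent: `search` is a placeholder supplied by the caller, and the action names in the usage below are made up for the example.

```python
def simple_problem_solving_agent(initial_state, goal_test, search):
    """Formulate a problem, call a search procedure, then execute:
    yield each recommended action and remove it from the sequence."""
    problem = (initial_state, goal_test)   # a minimal problem formulation
    seq = search(problem)                  # search returns a list of actions
    while seq:
        yield seq.pop(0)                   # do whatever the solution recommends
```

Once the yielded actions are exhausted, the agent would formulate a new goal.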

Well-defined problems and solutions


A problem can be defined formally by four components:
1] Initial state
2] Actions
3] Goal test
4] Path cost

Initial State
The first component that describes the problem is the initial state that the agent starts in.

Actions

The second component that describes the problem is a description of the possible actions available to the agent. The most common formulation uses a successor function: for any state x it returns s(x), the set of states reachable from x with one action.

TRANSITION MODEL: A description of what each action does is known as the transition

model.

Together, the initial state and successor function implicitly define the state space of the problem – the set of all states reachable from the initial state. The state space forms a graph in which the nodes are states and the links between nodes are actions.

Path

A path in the state space is a sequence of states connected by a sequence of actions.

Goal Test

The goal test determines whether a given state is a goal state or not.

Path cost
A path cost function that assigns a numeric cost to each path. Cost of a path can be
described as the sum of the costs of the individual actions along the path. An optimal
solution has the lowest path cost among all the solutions.

A solution to the problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function; an optimal solution has the lowest path cost among all solutions.
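The four components can be collected into a small Python class. This is a sketch for illustration; the names follow the text, with the successor function standing in for the actions and transition model, and unit step costs assumed by default.

```python
class Problem:
    """A formally defined problem: initial state, successor function,
    goal test, and step costs from which path cost is computed."""

    def __init__(self, initial, successors, goal_test,
                 step_cost=lambda s, a, s2: 1):
        self.initial = initial
        self.successors = successors   # state -> list of (action, next_state)
        self.goal_test = goal_test
        self.step_cost = step_cost     # cost of one action

    def path_cost(self, steps):
        """Sum of the costs of the individual actions along a path,
        given as (state, action, next_state) triples."""
        return sum(self.step_cost(s, a, s2) for (s, a, s2) in steps)
```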

Example
A simplified Road Map of part of Romania

Formulating problems
Abstraction
◦ the process of removing irrelevant information
◦ leaving only the parts essential to the description of the states (removing detail from the representation)
◦ Conclusion: only the parts that contribute to the search are used.
The choice of a good abstraction thus involves removing as much detail as possible while
retaining validity and ensuring that the abstract actions are easy to carry out.

Problem-Solving Agents
Agents whose task is to solve a particular problem (steps)

◦ goal formulation
◦ what is the goal state
◦ what are important characteristics of the goal state
◦ how does the agent know that it has reached the goal
◦ are there several possible goal states
◦ are they equal or are some more preferable
◦ problem formulation
◦ what are the possible states of the world relevant for solving the problem
◦ what information is accessible to the agent
◦ how can the agent progress from state to state
EXAMPLE PROBLEMS

 The problem solving approach has been applied to a vast array of task environments.

Some best known problems are summarized below.

 They are distinguished as toy or real-world problems


A Toy problem is intended to illustrate various problem solving
methods. It can be easily used by different researchers to
compare the performance of algorithms.

A Real world problem is one whose solutions people actually care
about.

TOY PROBLEMS

Vacuum World Example

o States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2^2 = 8 possible world states.
o Initial state: Any state can be designated as the initial state.
o Successor function: This generates the legal states that result from trying the three actions (Left, Right, Suck). The complete state space is shown in figure 2.3.
o Goal test: This tests whether all the squares are clean.
o Path cost: Each step costs one, so the path cost is the number of steps in the path.

A larger environment with n locations has n*2^n states.

Arcs denote actions: L=Left, R=Right, S=Suck
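The two-location vacuum world can be encoded directly: a state is the pair (agent location, set of dirty locations), giving 2 × 2^2 = 8 states. A sketch, with the two locations arbitrarily named 0 and 1:

```python
# State: (agent_location, frozenset_of_dirty_locations).
def vacuum_successors(state):
    """The three actions L, R, S applied in a two-location world."""
    loc, dirt = state
    return {
        "L": (0, dirt),            # move left (no-op if already at 0)
        "R": (1, dirt),            # move right
        "S": (loc, dirt - {loc}),  # suck: clean the current square
    }

# Enumerate the full state space: 2 locations x 4 dirt configurations.
ALL_STATES = [(loc, frozenset(d))
              for loc in (0, 1)
              for d in ([], [0], [1], [0, 1])]
```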

8-queens problem:

 The aim of this problem is to place eight queens on a chessboard so that no queen may attack another. A queen can attack other queens diagonally or in the same row or column.
From the following figure, we can understand the problem as well as its correct
solution.

It is noticed from the above figure that each queen is placed on the chessboard in a position where no other queen shares its diagonal, row, or column. Therefore, it is one correct solution to the 8-queens problem.

For this problem, there are two main kinds of formulation:

 Incremental formulation: It starts from an empty state, and the operator adds a queen at each step. This means each action adds a queen to the state.
Following steps are involved in this formulation:

 States: Arrangement of any 0 to 8 queens on the chessboard.

 Initial State: An empty chessboard


 Actions: Add a queen to any empty box.
 Transition model: Returns the chess board with the queen added in a box.
 Goal test: Checks whether 8-queens are placed on the chessboard without any
attack.

 Path cost: There is no need for a path cost because only final states are counted. In this formulation, there are approximately 1.8 × 10^14 possible sequences to investigate.

 Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around to escape attacks.

Following steps are involved in this formulation

 States: Arrangement of all 8 queens, one per column, with no queen attacking another.
 Actions: Move a queen to a location where it is safe from attack.
This formulation is better than the incremental formulation, as it reduces the state space from 1.8 × 10^14 to 2,057, and it is easier to find the solutions.
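The goal test used by either formulation checks that no pair of queens attacks. With one queen per column, a state can be a tuple of row indices, and the check reduces to rows and diagonals. A sketch:

```python
def attacks(c1, r1, c2, r2):
    """Queens at (column, row) pairs attack if they share a row or a
    diagonal; columns are distinct by construction (one per column)."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def queens_goal_test(state):
    """True if `state` places 8 mutually non-attacking queens;
    state[c] is the row of the queen in column c."""
    return len(state) == 8 and not any(
        attacks(c1, r1, c2, r2)
        for c1, r1 in enumerate(state)
        for c2, r2 in enumerate(state) if c1 < c2)
```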

Some Real-world problems

 Traveling salesperson problem (TSP): A touring problem in which the salesman must visit each city exactly once. The objective is to find the shortest tour.
 VLSI layout problem: In this problem, millions of components and connections are positioned on a chip so as to minimize area, circuit delays, and stray capacitances, and to maximize the manufacturing yield.

The layout problem is split into two parts:

 Cell layout: Here, the primitive components of the circuit are grouped into cells, each
performing its specific function. Each cell has a fixed shape and size. The task is to
place the cells on the chip without overlapping each other.
 Channel routing: It finds a specific route for each wire through the gaps between the
cells.
 Protein design: The objective is to find a sequence of amino acids that will fold into a three-dimensional protein with properties that help cure some disease.

Searching for solutions

A simplified Road Map of part of Romania

It is needed to search for a solution to start from Arad and reach Bucharest.

 Since it is not the goal state, expand the current state.
 Expanding the state means, applying some legal actions to the current state and
there by generating new set of states.

 There are three leaf nodes now.


 The set of all leaf nodes available for expansion at any given point is called the frontier.
 Next it is necessary to decide which node to expand. This depends upon the search strategy.

 But here, starting from the state Arad again leads to a repeated state (a loop). Such repeated states may make the search tree infinite.
 We continue choosing, testing, and expanding until either a solution is found or there
are no more states to expand.

 The looping problem here can be solved using a graph-search algorithm.
 Each state appears in the graph only once, although it may appear in the tree multiple times; the search tree then contains at most one copy of each state.
 Don’t add a node if its state has already been expanded or a node pointing to the same state is already in the frontier.
 Every path from the initial state to an unexplored state has to pass through a state in the frontier.
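These rules amount to keeping an explored set alongside the frontier. A minimal graph-search sketch, shown here with a FIFO frontier (the frontier discipline is the only thing that differs between strategies); the city names in the usage test below are from the Romania map:

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """Generic graph search: each state is added at most once, so a
    looping state space still yields a finite search."""
    frontier = deque([[start]])          # frontier of paths
    seen = {start}                       # states expanded or on the frontier
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in seen:            # skip repeated states
                seen.add(s)
                frontier.append(path + [s])
    return None                          # no more states to expand
```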

Infrastructure for search algorithms


Search algorithms require a data structure to keep track of the search tree that is being
constructed. For each node n of the tree, we have a structure that contains four
components:
 n.STATE: the state in the state space to which the node corresponds;
 n.PARENT: the node in the search tree that generated this node;
 n.ACTION: the action that was applied to the parent to generate the node;
 n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial
state to the node, as indicated by the parent pointers.
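A direct rendering of this node structure, plus the usual helper that follows parent pointers back to the root to recover the action sequence (a sketch; the node values in the usage test are illustrative):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None   # node that generated this one
    action: Any = None                # action applied to the parent
    path_cost: float = 0.0            # g(n), accumulated from the root

def solution(node):
    """Walk back through parent pointers and return the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return actions[::-1]              # reverse: root-to-node order
```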

Measuring problem-solving performance
We can evaluate an algorithm’s performance in four ways:
• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
Complexity is expressed in terms of three quantities: b, the branching factor or maximum number of successors of any node; d, the depth of the shallowest goal node; and m, the maximum length of any path in the state space.

Uninformed Search Strategies

 No additional information about states beyond that provided in the problem


definition

 All they can do is generate successors and distinguish a goal state from a non-goal state.
 They do not take into account the location of the goal: these algorithms ignore where they are going until they find a goal and report success.
 Important uninformed search strategies are

o Breadth-first search
o Uniform cost search
o Depth-first search
o Depth-limited search
o Iterative deepening search

Breadth-first search (BFS)

 Root node is expanded first, then all the successors of the root node are expanded
next, then their successors, and so on.
 All the nodes are expanded at a given depth in the search tree before any nodes at
the next level are expanded.
 Breadth-first search is an instance of the general graph-search algorithm
 In BFS the shallowest unexpanded node is chosen for expansion. This is achieved very
simply by using a FIFO queue for the frontier.
 The newly generated nodes always go to the back of the queue, while the older nodes
get expanded first.
 The goal test is applied to each node when it is generated rather than when it is
selected for expansion.
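A minimal BFS sketch following these rules: a FIFO frontier, an explored set, and the goal test applied when a node is generated rather than when it is selected for expansion.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """BFS on a graph: shallowest nodes are expanded first because
    newly generated nodes join the back of a FIFO queue."""
    if goal_test(start):
        return [start]
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded node
        for s in successors(path[-1]):
            if s not in explored:
                if goal_test(s):         # test on generation, not expansion
                    return path + [s]
                explored.add(s)
                frontier.append(path + [s])
    return None
```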

Breadth-first search tree

In the above figure, it is seen that the nodes are expanded level by level starting from the
root node A till the last node I in the tree. Therefore, the BFS sequence followed is: A->B->C-
>D->E->F->G->I.

The performance measure of BFS is as follows:

 Completeness: BFS is complete; it is guaranteed to find the goal state, provided the branching factor is finite and the goal lies at a finite depth.
 Optimality: It gives an optimal solution if all step costs are equal.

 Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of memory. Here, b is the branching factor and d denotes the depth of the shallowest goal node.
 Time Complexity: The time complexity is likewise O(b^d), so BFS can consume much time to reach the goal node on large instances.

Disadvantages of BFS

 The biggest disadvantage of BFS is that it requires a lot of memory space; it is a memory-bounded strategy.
 BFS is a time-consuming search strategy because it expands the nodes breadth-wise.

Note: BFS expands the nodes level by level, i.e., breadthwise, therefore it is also known as
a Level search technique.

Uniform-cost search

 Unlike BFS, this uninformed search explores nodes based on their path cost from the
root node.
 It expands a node n having the lowest path cost g(n), where g(n) is the total cost
from a root node to node n.
 Uniform-cost search is significantly different from the breadth-first search because
of the following two reasons:

 First, the goal test is applied to a node only when it is selected for expansion, not when it is first generated, because the first goal node that is generated may be on a suboptimal path.
 Secondly, a check is added: if a better path is found to a state already on the frontier, the frontier node is replaced by the better one.

Thus, uniform-cost search expands nodes in order of their optimal path cost.

Example

 Problem is to get from Sibiu to Bucharest. The successors of Sibiu are Rimnicu Vilcea
and Fagaras, with costs 80 and 99, respectively.
 The least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80 +
97 = 177.
 The least-cost node is now Fagaras, so it is expanded, adding Bucharest with cost 99
+ 211 = 310.
 Now a goal node has been generated, but uniform-cost search keeps going, choosing
Pitesti for expansion and adding a second path to Bucharest with cost 80+ 97+ 101 =
278.
Now the algorithm checks to see if this new path is better than the old one; it is, so the old
one is discarded. Bucharest, now with g-cost 278, is selected for expansion and the solution
is returned.
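The Sibiu-to-Bucharest trace above can be reproduced with a priority-queue sketch of uniform-cost search; the edge costs in the usage test are the ones from the map fragment.

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """UCS: always expand the frontier node with the lowest g(n);
    the goal test is applied only when a node is selected for expansion."""
    frontier = [(0, start, [start])]    # (g, state, path) min-heap
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return g, path
        for cost, s in successors(state):
            g2 = g + cost
            if s not in best_g or g2 < best_g[s]:  # keep only the better path
                best_g[s] = g2
                heapq.heappush(frontier, (g2, s, path + [s]))
    return None
```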

The performance measure of Uniform-cost search

 Completeness: It is guaranteed to reach the goal state, provided every step cost is at least some positive ε.
 Optimality: It gives the optimal (lowest path cost) solution.
 Time Complexity: proportional to the number of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε is the minimum step cost.
 Space Complexity: likewise the number of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉).

Disadvantages of Uniform-cost search

 It does not care about the number of steps a path takes to reach the goal state.
 It may get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions.
 It works hard, as it examines each node in search of the lowest-cost path.

Depth-first search
 Depth-first search always expands the deepest node in the current frontier of the
search tree.
 The search proceeds immediately to the deepest level of the search tree, where the
nodes have no successors.
 As those nodes are expanded, they are dropped from the frontier, so then the
search “backs up” to the next deepest node that still has unexplored successors.
 Depth-first search uses a LIFO queue
 A LIFO queue means that the most recently generated node is chosen for expansion.
This must be the deepest unexpanded node because it is one deeper than its
parent—which, in turn, was the deepest unexpanded node when it was selected.

DFS search tree

In the figure shown below, DFS starts from the initial node A (the root node), traverses deeply in one direction down to node I, then backtracks to B, and so on. Therefore, the sequence will be A->B->D->I->E->C->F->G.
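A stack-based DFS sketch: the LIFO frontier holds whole paths, so the current path can also be used to avoid looping back through states already on it. Successors are pushed in reverse so the leftmost child is tried first, matching the figure's traversal order.

```python
def depth_first_search(start, goal_test, successors):
    """DFS: the most recently generated node is expanded next,
    because the frontier is a LIFO stack."""
    frontier = [[start]]                 # LIFO stack of paths
    while frontier:
        path = frontier.pop()            # deepest unexpanded node
        state = path[-1]
        if goal_test(state):
            return path
        for s in reversed(successors(state)):  # leftmost child on top
            if s not in path:            # avoid cycling along the current path
                frontier.append(path + [s])
    return None
```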

The performance measure of DFS

 Completeness: DFS does not guarantee to reach the goal state.


 Optimality: It does not give an optimal solution as it expands nodes in one direction
deeply.
 Space complexity: It needs to store only a single path from the root node to a leaf node. Therefore, DFS has O(bm) space complexity, where b is the branching factor (i.e., the number of child nodes a parent node has) and m is the maximum length of any path.
 Time complexity: DFS has O(b^m) time complexity.

Disadvantages of DFS

 It may get trapped in an infinite loop.


 It is also possible that it may not reach the goal state.
 DFS does not give an optimal solution.

Note: DFS uses the concept of backtracking to explore each node in a search tree.

Depth limited search

 Depth limited search (DLS) is a form of depth-first search.

 It expands the search tree depth-first up to a maximum depth l

 The nodes at depth l are treated as if they had no successors.

 The depth limit solves the infinite-path problem.

 Depth-first search can be viewed as a special case of DLS with l = ∞

 The depth bound can sometimes be chosen based on knowledge of the problem

Depth-limited search on a binary tree

In the above figure, the depth limit is 1, so only levels 0 and 1 get expanded, giving the DFS sequence A->B->C. The result is not satisfactory because the goal node I is never reached.
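A recursive sketch of depth-limited search: when the limit reaches 0, the node is treated as if it had no successors, so the search returns failure instead of descending further.

```python
def depth_limited_search(state, goal_test, successors, limit):
    """DFS cut off at depth `limit`; returns a path, or None on
    cutoff or failure."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None                      # treat this node as a leaf
    for s in successors(state):
        found = depth_limited_search(s, goal_test, successors, limit - 1)
        if found is not None:
            return [state] + found
    return None
```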

The performance measure of Depth-limited search

 Completeness: Depth-limited search does not guarantee to reach the goal node.
 Optimality: It does not give an optimal solution as it expands the nodes till the depth-
limit.
 Space Complexity: The space complexity of depth-limited search is O(bl).
 Time Complexity: The time complexity of depth-limited search is O(b^l).

Disadvantages of Depth-limited search

 This search strategy is not complete.


 It does not provide an optimal solution.

Iterative deepening depth-first search

 This search is a combination of BFS and DFS, as BFS guarantees to reach the goal
node and DFS occupies less memory space.
 Therefore, iterative deepening search combines these two advantages of BFS and
DFS to reach the goal node.
 It gradually increases the depth limit (0, 1, 2, and so on) until the goal node is reached.
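Iterative deepening simply reruns a depth-limited search with limits 0, 1, 2, … A self-contained sketch; the `max_depth` cap is an assumption added here only to keep the loop finite on goal-free state spaces.

```python
def iterative_deepening_search(start, goal_test, successors, max_depth=25):
    """IDS: DFS-like linear memory with BFS-like completeness,
    by raising the depth limit one level at a time."""
    def dls(state, limit):
        if goal_test(state):
            return [state]
        if limit == 0:
            return None                  # cutoff: treat as leaf
        for s in successors(state):
            found = dls(s, limit - 1)
            if found is not None:
                return [state] + found
        return None

    for limit in range(max_depth + 1):   # limits 0, 1, 2, ...
        result = dls(start, limit)
        if result is not None:
            return result
    return None
```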

In the above figure, the goal node is H and the initial depth limit is 1, so the search expands levels 0 and 1 and terminates with the sequence A->B->C. When the depth limit is raised to 3, it expands the nodes from level 0 to level 3 and terminates with the sequence A->B->D->F->E->H, where H is the desired goal node.

The performance measure of Iterative deepening search

 Completeness: Iterative deepening search is complete when the branching factor is finite, like BFS.
 Optimality: It gives an optimal solution when all step costs are identical.
 Space Complexity: It has the same linear space complexity as DFS along a single path, i.e., O(bd).
 Time Complexity: It has O(b^d) time complexity.

Disadvantages of Iterative deepening search

 The drawback of iterative deepening search is that it seems wasteful, because it generates the states near the root multiple times.
