AI Unit 2 Notes


UNIT-II:

Problem solving: state-space search and control strategies: Introduction, general problem solving, characteristics of problem, exhaustive searches, heuristic search techniques, iterative-deepening A*, constraint satisfaction. Problem reduction and game playing: Introduction, problem reduction, game playing, alpha-beta pruning, two-player perfect information games.

A problem-solving system that uses either forward or backward reasoning, and in which each operator produces a single new object or state in the database, is said to represent problems in a state space. From a programming perspective, AI includes the study of symbolic programming, problem solving, and search.

A plan can then be seen as a sequence of operations that transform the initial state into
the goal state, i.e. the problem solution. Typically we will use some kind of search algorithm
to find a good plan.

Search and Control Strategies:

Problem solving is an important aspect of Artificial Intelligence. A problem can be considered to consist of a goal and a set of actions that can be taken to lead to the goal. At any
given time, we consider the state of the search space to represent where we have reached as a
result of the actions we have applied so far. For example, consider the problem of looking for
a contact lens on a football field. The initial state is how we start out, which is to say we know
that the lens is somewhere on the field, but we don’t know where. If we use the representation
where we examine the field in units of one square foot, then our first action might be to examine
the square in the top-left corner of the field. If we do not find the lens there, we could consider
the state now to be that we have examined the top-left square and have not found the lens. After
a number of actions, the state might be that we have examined 500 squares, and we have now
just found the lens in the last square we examined. This is a goal state because it satisfies the
goal that we had of finding a contact lens.

Search is a method that can be used by computers to examine a problem space like this
in order to find a goal. Often, we want to find the goal as quickly as possible or without using
too many resources. A problem space can also be considered to be a search space because in
order to solve the problem, we will search the space for a goal state. We will continue to use
the term search space to describe this concept. In this chapter, we will look at a number of
methods for examining a search space. These methods are called search methods.

The Importance of Search in AI

• It has already become clear that many of the tasks underlying AI can be phrased in terms of a search for the solution to the problem at hand.
• Many goal-based agents are essentially problem-solving agents which must decide what to do by searching for a sequence of actions that lead to their solutions.
• For production systems, we have seen the need to search for a sequence of rule applications that lead to the required fact or action.
• For neural network systems, we need to search for the set of connection weights that will result in the required input-to-output mapping.
• Which search algorithm one should use will generally depend on the problem domain. There are four important factors to consider:
• Completeness – Is a solution guaranteed to be found if at least one solution exists?
• Optimality – Is the solution found guaranteed to be the best (or lowest cost) solution if there exists more than one solution?
• Time Complexity – The upper bound on the time required to find a solution, as a function of the complexity of the problem.
• Space Complexity – The upper bound on the storage space (memory) required at any point during the search, as a function of the complexity of the problem.

State space search


• Formulate a problem as a state space search by showing the legal problem states, the legal operators, and the initial and goal states.
• A state is defined by the specification of the values of all attributes of interest in the world.
• An operator changes one state into another; it has a precondition, which is the value of certain attributes prior to the application of the operator, and a set of effects, which are the attributes altered by the operator.
• The initial state is where you start.
• The goal state is the partial description of the solution.
The search problem is to find a sequence of actions which transforms the agent from the initial state to a goal state g ∈ G. A search problem is represented by a 4-tuple {S, s0, A, G}:
S : the set of states
s0 ∈ S : the initial state
A : S → S, operators/actions that transform one state into another
G : the goal, a set of states with G ⊆ S
This sequence of actions is called a solution plan. It is a path from the initial state to a goal state. A plan P is a sequence of actions P = {a0, a1, …, aN} which leads to traversing the sequence of states {s0, s1, …, sN+1}, where sN+1 ∈ G.
A sequence of states is called a path. The cost of a path is a positive number. In many
cases the path cost is computed by taking the sum of the costs of each action.
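As an illustration, the 4-tuple {S, s0, A, G} and a path-cost computation can be written down directly in code. Below is a minimal Python sketch; the states, successor links, and costs are invented purely for illustration and are not taken from these notes.

S  = {'s0', 's1', 's2', 's3'}          # S: set of states
s0 = 's0'                              # s0: initial state
A  = {                                 # A: maps a state to (successor, cost) pairs
    's0': [('s1', 1), ('s2', 2)],
    's1': [('s3', 1)],
    's2': [('s3', 1)],
}
G  = {'s3'}                            # G: goal states, a subset of S

def path_cost(path):
    # Path cost as the sum of the costs of each action along the path.
    return sum(cost for u, v in zip(path, path[1:])
               for succ, cost in A[u] if succ == v)

print(path_cost(['s0', 's1', 's3']))   # 2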
An agent acts in an environment. An agent perceives its environment through sensors. The complete set of inputs at a given time is called a percept. The current percept, or a sequence of percepts, can influence the actions of an agent. The agent can change the environment through actuators or effectors. An operation involving an effector is called an action. Actions can be grouped into action sequences. The agent can have goals which it tries to achieve.
Thus, an agent can be looked upon as a system that implements a mapping from percept sequences to actions.
A performance measure has to be used in order to evaluate an agent.
Robots are agents. Robots may have cameras, sonar, infrared, bumpers, etc. for sensors. They can have grippers, wheels, lights, speakers, etc. for actuators. Some examples of robots are Xavier from CMU and COG from MIT. Then we have the AIBO entertainment robot from SONY.

An Intelligent Agent must sense, must act, and must be autonomous (to some extent). It also must be rational.

AI is about building rational agents. An agent is something that perceives and acts.
A rational agent always does the right thing.
1. What are the functionalities (goals)?
2. What are the components?
3. How do we build them?
Intelligence is often defined in terms of what we understand as intelligence in humans.
Allen Newell defines intelligence as the ability to bring all the knowledge a system has at its
disposal to bear in the solution of a problem.
A more practical definition that has been used in the context of building artificial systems with intelligence is the ability to perform well on tasks that humans currently do better.
Artificial Intelligence is the study of building agents that act rationally. Most of the time,
these agents perform some kind of search algorithm in the background in order to achieve
their tasks.
• A search problem consists of:
• A State Space: the set of all possible states where you can be.
• A Start State: the state from where the search begins.
• A Goal Test: a function that looks at the current state and returns whether or not it is the goal state.
• The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state to the goal state.
• This plan is found using search algorithms.
Types of Search Algorithms
• There are many powerful search algorithms. This section discusses six of the fundamental ones, divided into two categories: uninformed search and informed search.

The search algorithms in this section have no additional information on the goal node other than that provided in the problem definition. The plans to reach the goal state from the start state differ only by the order and/or length of actions. Uninformed search is also called blind search.
The following uninformed search algorithms are discussed in this section.

1. Depth First Search


2. Breadth First Search
3. Uniform Cost Search
Each of these algorithms will have:

• A problem graph, containing the start node S and the goal node G.
• A strategy, describing the manner in which the graph will be traversed to get to G.
• A fringe, which is a data structure used to store all the possible states (nodes) that you can reach from the current states.
• A tree that results while traversing to the goal node.
• A solution plan, which is the sequence of nodes from S to G.

Depth First Search


Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures. The algorithm starts at the root node (selecting some arbitrary node as the root node
in the case of a graph) and explores as far as possible along each branch before backtracking.
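The following is a minimal Python sketch of DFS over an explicit graph, using a stack as the fringe; the graph and node names are illustrative assumptions, not taken from these notes.

def dfs(graph, start, goal):
    stack = [(start, [start])]             # fringe: (node, path so far)
    visited = set()
    while stack:
        node, path = stack.pop()           # LIFO: deepest node expanded first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                stack.append((neighbor, path + [neighbor]))
    return None                            # no path exists

graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['E'], 'B': ['G'], 'E': ['G']}
print(dfs(graph, 'S', 'G'))                # ['S', 'D', 'E', 'G']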
Breadth First Search
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data
structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as
a ‘search key’), and explores all of the neighbor nodes at the present depth prior to moving on
to the nodes at the next depth level.
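A minimal Python sketch of BFS follows, identical in shape to the DFS sketch above except that the fringe is a FIFO queue (the same illustrative graph is assumed):

from collections import deque

def bfs(graph, start, goal):
    queue = deque([(start, [start])])      # fringe: (node, path so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()       # FIFO: shallowest node expanded first
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None                            # no path exists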

Uniform Cost Search

UCS is different from BFS and DFS because here the costs come into play. In other
words, traversing via different edges might not have the same cost. The goal is to find a path
where the cumulative sum of costs is least.
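Below is a minimal Python sketch of UCS using a priority queue keyed on the cumulative path cost g; the weighted graph is an illustrative assumption (its edge costs match the A* example later in these notes):

import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]       # (cumulative cost g, node, path)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)   # cheapest path first
        if node == goal:
            return path, g
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (g + step_cost, neighbor, path + [neighbor]))
    return None, float('inf')

graph = {'S': [('A', 3), ('D', 2)], 'A': [('B', 2)], 'D': [('B', 1), ('E', 4)],
         'B': [('C', 2), ('E', 1)], 'C': [('G', 4)], 'E': [('G', 3)]}
print(ucs(graph, 'S', 'G'))                # (['S', 'D', 'B', 'E', 'G'], 7)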

Informed Search Algorithms

Here, the algorithms have information on the goal state, which helps in more efficient
searching. This information is obtained by something called a heuristic.
In this section, we will discuss the following search algorithms.

1. Greedy Search
2. A* Tree Search
3. A* Graph Search
Search Heuristics:
In an informed search, a heuristic is a function that estimates how close a state is to the goal state. Examples include Manhattan distance and Euclidean distance. (The lesser the distance, the closer the goal.)
Different heuristics are used in different informed algorithms discussed below.
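For instance, the two distance heuristics just mentioned can be computed as follows (a small illustrative sketch for points on a plane):

import math

def manhattan(p, q):
    # |x1 - x2| + |y1 - y2|: grid distance with only horizontal/vertical moves
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    # Straight-line distance between the two points
    return math.dist(p, q)

print(manhattan((0, 0), (3, 4)))           # 7
print(euclidean((0, 0), (3, 4)))           # 5.0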

Greedy Search
In greedy search, we expand the node closest to the goal node. The “closeness” is estimated by
a heuristic h(x).
Heuristic: A heuristic h is defined as
h(x) = estimate of the distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
Strategy: Expand the node closest to the goal state, i.e. the node with the lowest h value.
Question. Find the path from S to G using greedy search. The heuristic value h of each node is shown below the name of the node.

Solution. Starting from S, we can traverse to A (h=9) or D (h=5). We choose D, as it has the lower heuristic cost. Now from D, we can move to B (h=4) or E (h=3). We choose E, with the lower heuristic cost. Finally, from E, we go to G (h=0). This gives the traversal below.

Path: S -> D -> E -> G


Advantage: Works well with informed search problems, with fewer steps to reach a goal.
Disadvantage: Can turn into unguided DFS in the worst case.
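A minimal Python sketch of greedy search follows, ordering the fringe by h(x) alone; the edges are reconstructed from the examples in these notes and the h-values are as given above:

import heapq

def greedy_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]       # (h value, node, path)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # lowest h(x) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None

graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['B', 'E'],
         'B': ['C', 'E'], 'C': ['G'], 'E': ['G']}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'C': 2, 'E': 3, 'G': 0}
print(greedy_search(graph, h, 'S', 'G'))   # ['S', 'D', 'E', 'G']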

A* Tree Search

A* Tree Search, or simply A* Search, combines the strengths of uniform-cost search and greedy search. In this search, the evaluation function is the sum of the cost used in UCS, denoted by g(x), and the heuristic used in greedy search, denoted by h(x). The summed cost is denoted by f(x).
Heuristic: The following points should be noted with respect to heuristics in A* search: f(x) = g(x) + h(x).
• Here, h(x) is called the forward cost, and is an estimate of the distance of the current node from the goal node.
• And, g(x) is called the backward cost, and is the cumulative cost of a node from the root node.
• A* search is optimal only when, for all nodes, the forward cost h(x) underestimates the actual cost h*(x) to reach the goal. This property of the A* heuristic is called admissibility.
Admissibility: 0 ≤ h(x) ≤ h*(x)
Strategy: Choose the node with lowest f(x) value.
Example:
Question. Find the path to reach from S to G using A* search.

Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at each
step, choosing the node with the lowest sum. The entire working is shown in the table below.
Note that in the fourth set of iterations, we get two paths with equal summed cost f(x), so we expand them both in the next set. The path with the lower cost on further expansion is the chosen path.
PATH                     H(X)   G(X)      F(X)
S                        7      0         7
S -> A                   9      3         12
S -> D                   5      2         7
S -> D -> B              4      2+1=3     7
S -> D -> E              3      2+4=6     9
S -> D -> B -> C         2      3+2=5     7
S -> D -> B -> E         3      3+1=4     7
S -> D -> B -> C -> G    0      5+4=9     9
S -> D -> B -> E -> G    0      4+3=7     7
Path: S->D->B->E->G
Cost: 7
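The computation in the table can be reproduced with a minimal A* tree-search sketch in Python; the edge costs below are reconstructed from the g(x) column of the table, and note that tree search keeps no record of already-expanded nodes:

import heapq

def astar(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]       # (f = g + h, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # lowest f(x) first
        if node == goal:
            return path, g
        for neighbor, step_cost in graph.get(node, []):
            g2 = g + step_cost
            heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float('inf')

graph = {'S': [('A', 3), ('D', 2)], 'A': [('B', 2)], 'D': [('B', 1), ('E', 4)],
         'B': [('C', 2), ('E', 1)], 'C': [('G', 4)], 'E': [('G', 3)]}
h = {'S': 7, 'A': 9, 'D': 5, 'B': 4, 'C': 2, 'E': 3, 'G': 0}
print(astar(graph, h, 'S', 'G'))           # (['S', 'D', 'B', 'E', 'G'], 7)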

A* Graph Search
• A* tree search works well, except that it takes time re-exploring branches it has already explored. In other words, if the same node is expanded twice in different branches of the search tree, A* search might explore both of those branches, thus wasting time.
• A* Graph Search, or simply Graph Search, removes this limitation by adding this rule: do not expand the same node more than once.
• Heuristic: Graph search is optimal only when the drop in heuristic between two successive nodes A and B, given by h(A) - h(B), is less than or equal to the actual edge cost between those two nodes, g(A -> B). This property of the graph search heuristic is called consistency.

Consistency: h(A) - h(B) ≤ g(A -> B), i.e. h(A) ≤ g(A -> B) + h(B)

Example
Question. Use graph search to find path from S to G in the following graph.

Solution. We solve this question in much the same way as the previous one, but in this case we keep track of the nodes explored so that we do not re-explore them.

Path: S -> D -> B -> C -> E -> G


Cost: 7
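The graph-search rule amounts to one extra bookkeeping step in the A* sketch shown earlier: an explored set that prevents any node from being expanded twice (safe when the heuristic is consistent, since the first expansion of a node is then already optimal). A minimal sketch:

import heapq

def astar_graph(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f = g + h, g, node, path)
    explored = set()                             # nodes already expanded
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in explored:
            continue                             # never expand the same node twice
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                g2 = g + step_cost
                heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float('inf')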

Iterative Deepening A* Algorithm:
