AI Lecture SEVEN

The document discusses problem-solving agents in artificial intelligence, focusing on search problems and strategies for finding solutions. It outlines the formulation of problems, the components involved, and distinguishes between toy and real-world problems, providing examples such as the vacuum world cleaner and the 8-puzzle. Additionally, it covers uninformed search strategies, including breadth-first and depth-first search, and evaluates their performance based on completeness, optimality, time, and space complexity.


Artificial Intelligence and Expert System

Lecture Seven
Metropolitan
International University

Twaha Kateete
[email protected]
Today’s outline
 Search Problems
 Problem-Solving Agents
 Searching
 Formulating Problems
 Searching for Solutions
 Uninformed Search Strategies
 Simple Assignment
PROBLEM FORMULATION & SOLVING BY SEARCHING
Search problems
Intro.
 Here, we see how an agent can find a sequence of actions that achieves its goals when no single action will do.

➢ We are to describe one type of Goal-Based agent known as the Problem-Solving Agent.
Problem Solving Agents
 Intelligent agents are supposed to maximize their performance measure. Achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it.
Problem solving is associated with:
 Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.
 Problem formulation is the process of deciding what actions and states to consider, given a goal.
 In general, an agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value.
For an Agent to Solve Problems,
 We assume its environment is:
 Observable, so the agent always knows the current state.
 Discrete, so at any given state there are only finitely many actions to choose from.
 Known, so the agent knows which states are reached by each action it takes.
 Deterministic, so each action has exactly one outcome when performed.
 Under these assumptions, the solution to any problem is a fixed sequence of actions.
Searching
 The process of looking for a sequence of
actions that reaches the goal is called
search.
 A search algorithm takes a problem as
input and returns a solution in the form
of an action sequence.
 Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
Well-defined problems and solutions
 Five components of problem definition;
o The initial state that the agent starts in.
o A description of the possible actions available to the agent (from the current state, what can be done next).
o Transition model-description of what each action
does.
o The goal test, which determines whether a given
state is a goal state or not. Sometimes there is an
explicit set of possible goal states.
o A path cost function that assigns a numeric cost to each path, e.g., time, kilometers, or fares.
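The five components above can be expressed as a minimal Python skeleton; the class and method names here are illustrative choices, not something specified in the lecture:

```python
# A minimal sketch of the five-component problem definition described
# above. Method names are illustrative; step_cost charges 1 per action
# by default, standing in for time, kilometers, or fares.
class Problem:
    def __init__(self, initial, goals):
        self.initial = initial        # the initial state
        self.goals = goals            # explicit set of goal states

    def actions(self, state):
        """Possible actions available in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state produced by applying `action`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Is `state` a goal state?"""
        return state in self.goals

    def step_cost(self, state, action):
        """Cost of taking `action` in `state`."""
        return 1
```

Concrete problems would subclass this and fill in `actions` and `result`.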
Formulating Problems
 We propose formulation of problems in terms of the initial
state, actions, transition model, goal test, and path cost. This
formulation seems reasonable, but it is still a model, not
something real.
 Compare the simple state description of an actual cross-
country trip, where the state of the world includes so many
things: the traveling companions, the current radio program,
the scenery out of the window, the proximity of law
enforcement officers, the distance to the next rest stop, the
condition of the road, the weather, and so on.
 All these considerations are left out of our state descriptions
because they are irrelevant to the problem of finding a route to
our destination. The process of removing detail from a
representation is called abstraction.
Formulating Problems cont’d
 In addition to abstracting the state description, we must
abstract the actions themselves.
 A driving action has many effects. Besides changing the
location of the vehicle and its occupants, it takes up time,
consumes fuel, generates pollution, and changes the agent
(as they say, travel is broadening). Our formulation takes
into account only the change in location.
 Also, there are many actions that we omit altogether:
turning on the radio, looking out of the window, slowing
down for law enforcement officers, and so on.
 And of course, we don’t specify actions at the level of “turn
steering wheel to the left by one degree.”
Toy and Real-World Problems
 The problem-solving approach has been applied to a vast array
of task environments. We list some of the best known here,
distinguishing between toy and real-world problems.

 A toy problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description and hence is usable by different researchers to compare the performance of algorithms.

 A real-world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description, but we can give the general flavor of their formulations.
Examples of Toy Problems
 The vacuum world.
Vacuum World
 • States: The state is determined by both the agent location
and the dirt locations. The agent is in one of two locations, each
of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
 • Initial state: Any state can be designated as the initial state.
 • Actions: In this simple environment, each state has just three
actions: Left, Right, and Suck. Larger environments might also
include Up and Down.
 • Transition model: The actions have their expected effects,
except that moving Left in the leftmost square, moving Right in
the rightmost square, and Sucking in a clean square have no
effect.
 • Goal test: This checks whether all the squares are clean.
 • Path cost: Each step costs 1, so the path cost is the number
of steps in the path.
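The vacuum-world transition model above can be sketched in a few lines of Python. The state encoding here is an assumption for illustration: square 0 is the left location, square 1 the right, and a state is a pair (agent location, frozenset of dirty squares):

```python
from itertools import product

# A sketch of the vacuum-world transition model described above.
# State encoding (an assumption): (agent_location, frozenset of dirty
# squares), with 0 = left square and 1 = right square.
def result(state, action):
    loc, dirt = state
    if action == 'Left':
        return (max(loc - 1, 0), dirt)    # no effect in leftmost square
    if action == 'Right':
        return (min(loc + 1, 1), dirt)    # no effect in rightmost square
    if action == 'Suck':
        return (loc, dirt - {loc})        # no effect in a clean square
    raise ValueError(action)

# 2 locations x 2^2 dirt configurations = 8 world states
states = [(loc, frozenset(d))
          for loc, d in product((0, 1), (set(), {0}, {1}, {0, 1}))]
assert len(states) == 8
```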
8 – Puzzle Problem
 Typical Instance of the 8-puzzle.
 States: A state description specifies the location of each of
the eight tiles and the blank in one of the nine squares.
 • Initial state: Any state can be designated as the initial
state. Note that any given goal can be reached from exactly
half of the possible initial states.
 • Actions: The simplest formulation defines the actions as
movements of the blank space Left, Right, Up, or Down.
Different subsets of these are possible depending on where
the blank is.
 • Transition model: Given a state and action, this returns
the resulting state; for example, if we apply Left to the
start state in above figure, the resulting state has the 5 and
the blank switched.
 • Goal test: This checks whether the state matches the
goal configuration shown in the goal state. (Other goal
configurations are possible.)
 • Path cost: Each step costs 1, so the path cost is the
number of steps in the path.
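The 8-puzzle formulation above can be sketched as follows. The flat-tuple state encoding (nine entries read row by row, 0 for the blank) and the helper names are assumptions for illustration:

```python
# A sketch of the 8-puzzle formulation above. A state is a tuple of 9
# entries (0 = blank) read row by row; actions move the blank space.
MOVES = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}

def actions(state):
    """The subset of blank moves that is legal in this state."""
    i = state.index(0)
    acts = []
    if i % 3 > 0: acts.append('Left')
    if i % 3 < 2: acts.append('Right')
    if i >= 3:    acts.append('Up')
    if i <= 5:    acts.append('Down')
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # one possible goal configuration

def goal_test(state):
    return state == GOAL
```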
8-Queens Problem/Solution
 This is almost a solution to the 8-queens problem.
Formulations of the 8-queens prob.
 There are two main kinds of formulation.
 An incremental formulation involves operators
that augment the state description, starting with
an empty state; for the 8-queens problem, this
means that each action adds a queen to the
state.
 A complete-state formulation starts with all 8
queens on the board and moves them around. In
either case, the path cost is of no interest
because only the final state counts.
Incremental Formulation
 • States: Any arrangement of 0 to 8
queens on the board is a state.
 • Initial state: No queens on the board.
 • Actions: Add a queen to any empty
square.
 • Transition model: Returns the board
with a queen added to the specified
square.
 • Goal test: 8 queens are on the board,
but none is attacked.
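The incremental goal test above can be sketched in Python. The encoding is an assumption: a state is a tuple of column numbers, one per queen placed in successive rows, which already rules out same-row attacks:

```python
# A sketch of the incremental 8-queens goal test above. State encoding
# (an assumption): a tuple of columns, one per queen in successive rows.
def attacked(state):
    """True if any pair of queens shares a column or a diagonal."""
    for r1, c1 in enumerate(state):
        for r2, c2 in enumerate(state[r1 + 1:], start=r1 + 1):
            if c1 == c2 or abs(c1 - c2) == r2 - r1:
                return True
    return False

def goal_test(state):
    """8 queens are on the board, and none is attacked."""
    return len(state) == 8 and not attacked(state)
```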
Real World Problems
 Route-finding problem is defined in terms of
specified locations and transitions along links
between them.

 Route-finding algorithms are used in a variety of applications. Some, such as Web sites and in-car systems that provide driving directions, are relatively straightforward extensions of the Romania example. Others, such as routing video streams in computer networks, military operations planning, and airline travel-planning systems, involve much more complex specifications.
Airline Travel Problems
 Consider the airline travel problem that must be solved by a travel-planning Web site.
 • States: Each state obviously includes a location
(e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight
segment) may depend on previous segments, their
fare bases, and their status as domestic or
international, the state must record extra information
about these “historical” aspects.
 • Initial state: This is specified by the user’s query.
 • Actions: Take any flight from the current location,
in any seat class, leaving after the current time,
leaving enough time for within-airport transfer if
needed.
Airline Travel problem cont’d
 Transition model: The state resulting
from taking a flight will have the flight’s
destination as the current location and the
flight’s arrival time as the current time.
 Goal test: Are we at the final destination
specified by the user?
 • Path cost: This depends on monetary
cost, waiting time, flight time, customs
and immigration procedures, seat quality,
time of day, type of airplane, frequent-flyer mileage awards, and so on.
SEARCHING FOR SOLUTIONS
 Having formulated some problems, we
now need to solve them.
 A solution is an action sequence, so
search algorithms work by considering
various possible action sequences.
 The possible action sequences starting at
the initial state form a search tree with
the initial state at the root; the branches
are actions and the nodes correspond to
states in the state space of the problem.
Search Tree
 The search tree expansion includes:
 The parent nodes
 The child nodes
 The leaf nodes
Infrastructure for search Algorithms
 Search algorithms require a data structure to keep track of
the search tree that is being constructed.
 For each node n of the tree, we have a structure that
contains four components:
 • n.STATE: the state in the state space to which the node
corresponds;
 • n.PARENT: the node in the search tree that generated
this node;
 • n.ACTION: the action that was applied to the parent to
generate the node;
 • n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.
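The four-component node structure above maps directly onto a small Python class; the `solution` helper, which recovers the action sequence by following parent pointers, is an illustrative addition:

```python
# A sketch of the four-component node structure described above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, i.e. g(n)

def solution(node):
    """Follow parent pointers back to the root, collecting actions."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```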
Queues
 Now that we have nodes, we need somewhere to put them.
The frontier needs to be stored in such a way that the
search algorithm can easily choose the next node to expand
according to its preferred strategy.
 The appropriate data structure for this is a queue.
 The operations on a queue are as follows:
 • EMPTY?(queue) returns true only if there are no more
elements in the queue.
 • POP(queue) removes the first element of the queue and
returns it.
 • INSERT(element, queue) inserts an element and
returns the resulting queue.

 Read more about queues from the Text Book


Measuring Problem Solving Performances
 Before we get into the design of specific search algorithms,
we need to consider the criteria that might be used to
choose among them. We can evaluate an algorithm’s
performance in four ways:

• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?

 Time and space complexity are always considered with respect to some measure of the problem difficulty.
b, d and m
 In AI, the graph is often represented implicitly by the initial
state, actions, and transition model and is frequently infinite.
 For these reasons, complexity is expressed in terms of three
quantities:

 b, the branching factor or maximum number of successors of any node;
 d, the depth of the shallowest goal node (i.e., the number of steps along the path from the root); and
 m, the maximum length of any path in the state space.

 Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory.
UNINFORMED SEARCH STRATEGIES
 Uninformed search (also called blind search) means that the strategies have no additional information about states beyond that provided in the problem definition.
 All they can do is generate successors and distinguish a goal state from a non-goal state.
 All search strategies are distinguished by the order in which nodes are expanded.
UNINFORMED SEARCH STRATEGIES
 Breadth-First Search
 Uniform-cost Search
 Depth-First Search
 Iterative Deepening Depth-First Search
 Depth-limited Search
Breadth First Search
Breadth-first search is a simple strategy in
which the root node is expanded first, then
all the successors of the root node are
expanded next, then their successors, and so
on.
 In general, all the nodes are expanded at a
given depth in the search tree before any
nodes at the next level are expanded.
Breadth First Search
 Expansion proceeds level by level.
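The level-by-level expansion described above can be sketched with a FIFO frontier. The graph encoding (a dict mapping each state to its successors) is an assumption for illustration:

```python
from collections import deque

# A sketch of breadth-first search as described above: a FIFO frontier
# expands all nodes at one depth before any node at the next.
# `graph` maps each state to its list of successors (an assumption).
def breadth_first_search(graph, start, goal):
    if start == goal:
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest path first
        for succ in graph[path[-1]]:
            if succ in explored:
                continue
            if succ == goal:             # goal test on generation
                return path + [succ]
            explored.add(succ)
            frontier.append(path + [succ])
    return None                          # no solution exists
```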
Depth First Search
 Depth-first search always expands the
deepest node in the current frontier of the
search tree.
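Swapping the FIFO queue for a LIFO stack gives depth-first search, which always expands the deepest frontier node first. The same illustrative graph encoding is assumed:

```python
# A sketch of depth-first search as described above: a LIFO frontier
# always expands the deepest node first. `graph` maps each state to
# its list of successors (an assumption for illustration).
def depth_first_search(graph, start, goal):
    frontier = [[start]]                 # stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()            # deepest path first
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for succ in graph[state]:
            if succ not in explored:
                frontier.append(path + [succ])
    return None                          # no solution exists
```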
Uniform Cost Search
 Uniform-cost search expands the node with the lowest path cost g(n); its frontier is a priority queue ordered by g.
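A sketch of uniform-cost search with a priority queue ordered by g(n). The graph encoding here, mapping each state to (successor, step cost) pairs, is an assumption for illustration:

```python
import heapq

# A sketch of uniform-cost search: the frontier is a priority queue
# ordered by path cost g(n). `graph` maps each state to a list of
# (successor, step_cost) pairs (an assumption for illustration).
def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]     # entries: (g, state, path)
    best_g = {start: 0}                  # cheapest known cost per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                # goal test on expansion
            return g, path               # first pop is the cheapest path
        for succ, cost in graph[state]:
            new_g = g + cost
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None                          # no solution exists
```

Note that the goal test is applied when a node is popped, not when it is generated; a cheaper path to the goal might still be waiting in the queue.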
Iterative Deepening Search
 Iterative deepening search repeatedly runs depth-limited depth-first search with increasing depth limits 0, 1, 2, ... until a goal is found.
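Iterative deepening can be sketched as depth-limited depth-first search rerun with increasing limits; the graph encoding and function names are assumptions for illustration:

```python
# A sketch of iterative deepening search: depth-limited DFS rerun with
# limits 0, 1, 2, ... until the goal is reached. `graph` maps each
# state to its list of successors (an assumption for illustration).
def depth_limited(graph, state, goal, limit, path):
    if state == goal:
        return path
    if limit == 0:
        return None                      # cutoff reached
    for succ in graph[state]:
        if succ not in path:             # avoid cycles along this path
            found = depth_limited(graph, succ, goal, limit - 1,
                                  path + [succ])
            if found:
                return found
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        found = depth_limited(graph, start, goal, limit, [start])
        if found:
            return found
    return None
```

Like breadth-first search, this finds a shallowest goal, but it only stores one path at a time, combining BFS-like completeness with DFS-like memory use.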
Comparing uninformed search strategies
 Evaluation of tree-search strategies. b is the branching factor; d is the depth of the shallowest solution; m is the maximum depth of the search tree; l is the depth limit. Superscript caveats are as follows: (a) complete if b is finite; (b) complete if step costs ≥ ε for some positive ε; (c) optimal if step costs are all identical; (d) if both directions use breadth-first search.
Simple Individual Assignment
 By drawing search trees, work out the order in which states are expanded, as well as the path returned, for breadth-first search, depth-first search, and uniform-cost search.
