Problem-solving agents

Imagine an agent enjoying a touring vacation in Romania. The agent wants to take in the sights, improve its Romanian, enjoy the nightlife, avoid hangovers, and so on. The decision problem is a complex one. Now suppose the agent is currently in the city of Arad and has a nonrefundable ticket to fly out of Bucharest the following day.

The agent observes street signs and sees that there are three roads leading out of
Arad: one toward Sibiu, one to Timisoara, and one to Zerind. None of these are
the goal, so unless the agent is familiar with the geography of Romania, it will
not know which road to follow.

If the agent has no additional information—that is, if the environment is unknown—then the agent can do no better than to execute one of the actions at random. If, however, the agent has information about the world, such as the map of Romania in Figure 3.1, it can follow this four-phase problem-solving process:

• Goal formulation: The agent adopts the goal of reaching Bucharest. Goals organize behavior by limiting the objectives and hence the actions to be considered.

• Problem formulation: The agent devises a description of the states and actions necessary to reach the goal—an abstract model of the relevant part of the world. For our agent, one good model is to consider the actions of traveling from one city to an adjacent city; the only fact about the state of the world that changes as a result of an action is then the current city.

• Search: Before taking any action in the real world, the agent simulates sequences of actions in its model, searching until it finds a sequence of actions that reaches the goal. Such a sequence is called a solution. The agent might have to simulate multiple sequences that do not reach the goal, but eventually it will find a solution (such as going from Arad to Sibiu to Fagaras to Bucharest), or it will find that no solution is possible.

• Execution: The agent can now execute the actions in the solution, one at a time. It is an important property that in a fully observable, deterministic, known environment, the solution to any problem is a fixed sequence of actions: drive to Sibiu, then Fagaras, then Bucharest.
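
To make the control flow of these phases concrete, here is a minimal Python sketch of the formulate-search-execute loop; the names formulate_problem, search, and execute are hypothetical placeholders, not anything defined in the text.

```python
# A minimal sketch of the four-phase control flow in an open-loop setting.
# `formulate_problem`, `search`, and `execute` are hypothetical placeholders.

def problem_solving_agent(formulate_problem, search, execute):
    problem = formulate_problem()   # goal and problem formulation
    solution = search(problem)      # simulate action sequences in the model
    if solution is None:
        return False                # no sequence of actions reaches the goal
    for action in solution:         # execute the solution one action at a
        execute(action)             # time, ignoring percepts (open loop)
    return True
```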

If the model is correct, then once the agent has found a solution, it can ignore its percepts while it is executing the actions—closing its eyes, so to speak—because the solution is guaranteed to lead to the goal. Control theorists call this an open-loop system: ignoring the percepts breaks the loop between agent and environment. If there is a chance that the model is incorrect, or the environment is nondeterministic, then the agent would be safer using a closed-loop approach that monitors the percepts (see Section 4.4).

In partially observable or nondeterministic environments, a solution would be a branching strategy that recommends different future actions depending on what percepts arrive. For example, the agent might plan to drive from Arad to Sibiu but might need a contingency plan in case it arrives in Zerind by accident or finds a sign saying "Drum Închis" (Road Closed).

Search problems and solutions


A search problem can be defined formally as follows:

• A set of possible states that the environment can be in. We call this the state space.

• The initial state that the agent starts in. For example: Arad.

• A set of one or more goal states. Sometimes there is one goal state (e.g., Bucharest), sometimes there is a small set of alternative goal states, and sometimes the goal is defined by a property that applies to many states (potentially an infinite number). For example, in a vacuum-cleaner world, the goal might be to have no dirt in any location, regardless of any other facts about the state. We can account for all three of these possibilities by specifying an IS-GOAL method for a problem. In this chapter we will sometimes say "the goal" for simplicity, but what we say also applies to "any one of the possible goal states."

• The actions available to the agent. Given a state s, ACTIONS(s) returns a finite set of actions that can be executed in s. We say that each of these actions is applicable in s. An example:

ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}

• A transition model, which describes what each action does. RESULT(s, a) returns the state that results from doing action a in state s. For example,

RESULT(Arad, ToZerind) = Zerind

• An action cost function, denoted by ACTION-COST(s, a, s′) when we are programming or c(s, a, s′) when we are doing math, that gives the numeric cost of applying action a in state s to reach state s′. A problem-solving agent should use a cost function that reflects its own performance measure; for example, for route-finding agents, the cost of an action might be the length in miles (as seen in Figure 3.1), or it might be the time it takes to complete the action.

A sequence of actions forms a path, and a solution is a path from the initial state to a goal state. We assume that action costs are additive; that is, the total cost of a path is the sum of the individual action costs. An optimal solution has the lowest path cost among all solutions.
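
As a worked illustration of these definitions, the sketch below encodes the Romania route-finding problem in Python. The class name RouteProblem, the uniform_cost_search routine, and the map fragment are illustrative choices, not part of the text; the distances follow the values given in Figure 3.1.

```python
import heapq

# Illustrative fragment of the Romania road map (distances per Figure 3.1).
ROMANIA = {
    "Arad":          {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":         {"Arad": 140, "Fagaras": 99, "RimnicuVilcea": 80},
    "Fagaras":       {"Sibiu": 99, "Bucharest": 211},
    "RimnicuVilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":       {"RimnicuVilcea": 97, "Bucharest": 101},
    "Timisoara":     {"Arad": 118},
    "Zerind":        {"Arad": 75},
    "Bucharest":     {"Fagaras": 211, "Pitesti": 101},
}

class RouteProblem:
    """The formal components: states, initial state, goal states,
    ACTIONS, RESULT (transition model), and ACTION-COST."""

    def __init__(self, initial, goal, road_map):
        self.initial, self.goal, self.road_map = initial, goal, road_map

    def is_goal(self, state):              # IS-GOAL
        return state == self.goal

    def actions(self, state):              # ACTIONS(s): applicable actions
        return [f"To{city}" for city in self.road_map[state]]

    def result(self, state, action):       # RESULT(s, a): "ToSibiu" -> "Sibiu"
        return action[2:]

    def action_cost(self, s, action, s2):  # ACTION-COST(s, a, s'): miles
        return self.road_map[s][s2]

def uniform_cost_search(problem):
    """Return a lowest-cost path of actions, using additive path costs."""
    frontier = [(0, problem.initial, [])]   # (path cost, state, actions so far)
    best = {problem.initial: 0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path
        for action in problem.actions(state):
            s2 = problem.result(state, action)
            c2 = cost + problem.action_cost(state, action, s2)
            if c2 < best.get(s2, float("inf")):
                best[s2] = c2
                heapq.heappush(frontier, (c2, s2, path + [action]))
    return None                             # no solution exists

problem = RouteProblem("Arad", "Bucharest", ROMANIA)
print(uniform_cost_search(problem))
# ['ToSibiu', 'ToRimnicuVilcea', 'ToPitesti', 'ToBucharest']
```

Note that the optimal solution found here goes through Rimnicu Vilcea and Pitesti (path cost 140 + 80 + 97 + 101 = 418), beating the Arad to Sibiu to Fagaras to Bucharest route (path cost 450): the lowest-cost path is not always the one with the fewest actions.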

Formulating problems
Our formulation of the problem of getting to Bucharest is a model—an abstract mathematical description—and not the real thing. Compare the simple atomic state description Arad to an actual cross-country trip, where the state of the world includes so many things: the traveling companions, the current radio program, the scenery out of the window, the proximity of law enforcement officers, and so on.
