Agent Types
A nondeterministic environment is one in which actions are characterized by their possible outcomes, but no probabilities are attached to them.
In sequential environments, on the other hand, the current decision could affect all future
decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have
long-term consequences.
Static vs. dynamic: If the environment can change while an agent is deliberating,
then we say the environment is dynamic for that agent; otherwise, it is static. Static
environments are easy to deal with because the agent need not keep looking at the world while
it is deciding on an action, nor need it worry about the passage of time. Dynamic environments,
on the other hand, are continuously asking the agent what it wants to do. Taxi driving is clearly
dynamic. Crossword puzzles are static.
Discrete vs. continuous: The discrete/continuous distinction applies to the state of the
environment, to the way time is handled, and to the percepts and actions of
the agent. For example, the chess environment has a finite number of distinct states (excluding
the clock). Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-
state and continuous-time problem.
The job of AI is to design an agent program that implements the agent function, the mapping
from percepts to actions. We assume this program will run on some sort of
computing device with physical sensors and actuators; we call this the architecture:
agent = architecture + program
The agent program takes just the current percept as input because nothing more is available from the environment.
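To make the decomposition concrete, the sketch below shows an architecture driving an agent program in a sense-act loop. It is a minimal illustration, not the book's code: the class and function names are hypothetical, and the vacuum-world-style percepts are used purely as an example.

    class DummyArchitecture:
        """Stand-in computing device: sensors yield percepts, actuators just print actions."""
        def __init__(self, percepts):
            self._percepts = iter(percepts)
        def sense(self):
            return next(self._percepts)
        def act(self, action):
            print("executing:", action)

    def run_agent(architecture, agent_program, steps):
        """agent = architecture + program: the architecture feeds each percept
        to the program and carries out the action the program returns."""
        for _ in range(steps):
            percept = architecture.sense()
            action = agent_program(percept)
            architecture.act(action)

    # A trivial agent program that looks only at the current percept.
    run_agent(DummyArchitecture([("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]),
              agent_program=lambda percept: "Suck" if percept[1] == "Dirty" else "Right",
              steps=3)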
A more general and flexible approach is first to build a general-purpose interpreter for
condition-action rules and then to create rule sets for specific task environments. Figure 2.9
gives the structure of this general program in schematic form, showing how the condition-action
rules allow the agent to make the connection from percept to action.
The agent program, which is also very simple, is shown in Figure 2.10. The INTERPRET-INPUT
function generates an abstracted description of the current state from the percept, and
the RULE-MATCH function returns the first rule in the set of rules that matches the given state
description. The agent in Figure 2.10 will work only if the correct decision can be made on the
basis of only the current percept, that is, only if the environment is fully observable.
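As a concrete counterpart to the schematic, here is a minimal Python sketch of such a program. The rule representation and the vacuum-world percepts are simplified placeholders, and interpret_input and rule_match merely echo the roles of INTERPRET-INPUT and RULE-MATCH rather than reproduce the book's pseudocode.

    # Simple reflex agent (illustrative sketch): condition-action rules are matched
    # against a state description derived from the current percept alone.

    RULES = [
        # (condition on the state description, action) -- toy rules for a vacuum world
        (lambda s: s["status"] == "Dirty", "Suck"),
        (lambda s: s["location"] == "A", "Right"),
        (lambda s: s["location"] == "B", "Left"),
    ]

    def interpret_input(percept):
        """Generate an abstracted description of the current state from the percept."""
        location, status = percept
        return {"location": location, "status": status}

    def rule_match(state, rules):
        """Return the action of the first rule that matches the given state description."""
        for condition, action in rules:
            if condition(state):
                return action
        return None

    def simple_reflex_agent(percept):
        return rule_match(interpret_input(percept), RULES)

    print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
    print(simple_reflex_agent(("B", "Clean")))   # -> Left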
Figure 2.11 gives the structure of the model-based reflex agent with internal state, showing
how the current percept is combined with the old internal state to generate the updated
description of the current state, based on the agent's model of how the world works. The agent
program is shown in Figure 2.12. The interesting part is the function UPDATE-STATE, which
is responsible for creating the new internal state description. The details of how models and
states are represented vary widely depending on the type of environment and the particular
technology used in the agent design.
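A hedged sketch of the same structure in Python: the agent program below carries internal state between calls, and update_state stands in for UPDATE-STATE with a deliberately toy model of the world. None of these names or policies come from the text.

    # Model-based reflex agent (illustrative sketch). The closure keeps internal
    # state between percepts; update_state plays the role of UPDATE-STATE.

    def make_model_based_agent(update_state, choose_action):
        state = {"cleaned": {}}
        last_action = None

        def program(percept):
            nonlocal state, last_action
            state = update_state(state, last_action, percept)   # fold percept into the model
            action = choose_action(state)                       # condition-action step
            last_action = action
            return action

        return program

    def update_state(state, last_action, percept):
        """Toy world model: remember which squares have been observed clean."""
        location, status = percept
        cleaned = dict(state["cleaned"])
        cleaned[location] = (status == "Clean")
        return {"location": location, "status": status, "cleaned": cleaned}

    def choose_action(state):
        if state["status"] == "Dirty":
            return "Suck"
        # Head for a square not yet known to be clean (placeholder policy).
        return "Right" if not state["cleaned"].get("B", False) else "Left"

    agent = make_model_based_agent(update_state, choose_action)
    print(agent(("A", "Dirty")))   # -> Suck
    print(agent(("A", "Clean")))   # -> Right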
Goal-based agents:
Knowing something about the current state of the environment is not always enough to decide what to
do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct
decision depends on where the taxi is trying to get to. In other words, as well as a current state
description, the agent needs some sort of goal information that describes situations that are
desirable, for example, being at the passenger's destination. The agent program can combine this with
the model (the same information as was used in the model-based reflex agent) to choose actions that
achieve the goal. Figure 2.13 shows the goal-based agent's structure.
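The following Python sketch shows the idea: the agent uses its model to predict where each action leads and picks one that satisfies the goal test. The junction, destinations, and function names are invented for illustration only.

    # Goal-based agent (illustrative sketch): combine the current state, a model of
    # what each action leads to, and an explicit goal test to choose an action.

    def goal_based_agent(state, actions, result, goal_test):
        """Return an action whose predicted outcome satisfies the goal, if one exists."""
        for action in actions:
            if goal_test(result(state, action)):   # the model predicts the successor state
                return action
        return None  # no single action reaches the goal; search or planning would be needed

    # Toy junction example: the taxi is at a junction and the goal is the airport.
    result = lambda state, action: {"left": "downtown",
                                    "right": "airport",
                                    "straight": "suburbs"}[action]
    print(goal_based_agent("junction",
                           actions=["left", "right", "straight"],
                           result=result,
                           goal_test=lambda s: s == "airport"))   # -> right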
Utility-based agents:
Goals alone are not enough to generate high-quality behavior in most environments. For example, many
action sequences will get the taxi to its destination (thereby achieving the goal) but some are quicker,
safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between
"happy" and "unhappy" states. A more general performance measure should allow a comparison of
different world states according to exactly how happy they would make the agent.
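To contrast with the binary goal test above, here is a minimal sketch in which a utility function grades predicted outcomes on a continuous scale, so the agent can trade off speed, safety, and cost rather than treat every route to the goal as equally good. The outcomes and utility weights are made-up numbers for illustration.

    # Utility-based agent (illustrative sketch): instead of a binary goal test, score
    # each predicted outcome with a utility function and pick the highest-scoring action.

    def utility_based_agent(state, actions, result, utility):
        return max(actions, key=lambda action: utility(result(state, action)))

    # Made-up outcomes trading off travel time against risk.
    outcomes = {"motorway":   {"time": 20, "risk": 0.3},
                "back_roads": {"time": 35, "risk": 0.1}}
    result = lambda state, action: outcomes[action]
    utility = lambda o: -o["time"] - 100 * o["risk"]   # prefer fast and safe routes
    print(utility_based_agent("junction", list(outcomes), result, utility))  # -> back_roads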