AI Unit 1
• “The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
• “AI ... is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)
Turing Test
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.
• To pass the test, the computer would need to possess the following capabilities:
• Natural Language Processing: to enable it to communicate successfully in English;
• Knowledge Representation: to store what it knows or hears;
• Automated Reasoning: to use the stored information to answer questions and to draw new conclusions;
• Machine Learning: to adapt to new circumstances and to detect and extrapolate patterns.
Intelligent Agents
Agents and Environments
• An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
• A human agent has eyes, ears, and other organs for sensors and hands,
legs, vocal tract, and so on for actuators.
• A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets
as sensory inputs and acts on the environment by displaying on the
screen, writing files, and sending network packets.
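• To make the percept-to-action mapping concrete, here is a minimal Python sketch of an agent skeleton; the Agent class and program parameter are illustrative names, not from any particular library:

    class Agent:
        """Minimal agent skeleton: maps a percept (sensor reading)
        to an action (actuator command)."""
        def __init__(self, program):
            self.program = program  # function: percept -> action

        def act(self, percept):
            # The agent's behavior is fully determined by its program
            return self.program(percept)

    # Example: a trivial software agent that echoes keystrokes to the screen.
    echo_agent = Agent(lambda keystroke: "display:" + keystroke)
    print(echo_agent.act("h"))  # -> display:h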
Percept and Percept Sequence
• An agent’s percept refers to its perceptual inputs at any given instant; its percept sequence is the complete history of everything the agent has ever perceived.
• In general, an agent’s choice of action at any given instant can depend on the entire percept sequence observed to date.
• PEAS description of the task environment for an automated taxi:
• Agent Type: Taxi driver
• Performance Measure: Safe, fast, legal, comfortable trip; maximize profits
• Environment: Roads, other traffic, pedestrians, customers
• Actuators: Steering, accelerator, brake, signal, horn, display
• Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
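• As an illustrative sketch (the TaskEnvironment name and field layout below are assumptions, not textbook code), a PEAS description can be captured as a simple record:

    from dataclasses import dataclass

    @dataclass
    class TaskEnvironment:
        """PEAS record: Performance measure, Environment, Actuators, Sensors."""
        agent_type: str
        performance: list
        environment: list
        actuators: list
        sensors: list

    taxi = TaskEnvironment(
        agent_type="Taxi driver",
        performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
                 "accelerometer", "engine sensors", "keyboard"],
    )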
THE STRUCTURE OF AGENTS
Simple reflex agents
• The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
• The agent program for a simple reflex agent in the two-state vacuum environment is sketched below.
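• A minimal Python sketch of that agent program, assuming a two-square world (locations A and B) and percepts of the form (location, status):

    def reflex_vacuum_agent(percept):
        """Simple reflex agent: acts on the current percept only,
        ignoring the rest of the percept history."""
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"  # move to the other square
        elif location == "B":
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck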
Model-based reflex agents
• The most effective way to handle partial observability is for the agent to keep track of the part of the
world it can’t see now.
• That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. For the braking problem, the internal state is not too extensive: just the previous frame from the camera, allowing the agent to detect when two red lights at the edge of the vehicle go on or off simultaneously.
• For other driving tasks such as changing lanes, the agent needs to keep track of where the other cars
are if it can’t see them all at once. And for any driving to be possible at all, the agent needs to keep
track of where its keys are.
• This knowledge about “how the world works”—whether implemented in simple Boolean circuits or
in complete scientific theories—is called a model of the world. An agent that uses such a model is
called a model-based agent.
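• A minimal sketch of this structure; the update_state hook stands in for the model of “how the world works”, and the rule format is an assumption for illustration:

    class ModelBasedReflexAgent:
        """Keeps internal state that summarizes the percept history,
        updated through a model of how the world works."""
        def __init__(self, update_state, rules):
            self.state = None                 # internal picture of the world
            self.last_action = None
            self.update_state = update_state  # model: (state, action, percept) -> state
            self.rules = rules                # list of (condition, action) pairs

        def act(self, percept):
            # Fold the new percept into the internal state via the world model
            self.state = self.update_state(self.state, self.last_action, percept)
            for condition, action in self.rules:
                if condition(self.state):
                    self.last_action = action
                    return action
            self.last_action = None
            return None  # no rule matched

    # Toy usage: brake when the previous and current frames both show brake lights.
    agent = ModelBasedReflexAgent(
        update_state=lambda s, a, p: (s[1] if s else None, p),  # remember last two percepts
        rules=[(lambda s: s == ("brake_lights", "brake_lights"), "Brake")],
    )
    for frame in ["clear", "brake_lights", "brake_lights"]:
        print(agent.act(frame))  # -> None, None, Brake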
Utility-based agents
• Goals alone are not enough to generate high-quality behavior in most environments. For
example, many action sequences will get the taxi to its destination (thereby achieving the
goal) but some are quicker, safer, more reliable, or cheaper than others. Goals just provide
a crude binary distinction between “happy” and “unhappy” states.
• A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. Because “happy” does not sound very scientific, economists and computer scientists use the term utility instead.
• An agent’s utility function is essentially an internalization of the performance measure. If
the internal utility function and the external performance measure are in agreement, then
an agent that chooses actions to maximize its utility will be rational according to the
external performance measure.
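• A minimal sketch of action selection by utility maximization; the result (world-model) and utility hooks below are assumed stand-ins, not textbook code:

    def utility_based_action(state, actions, result, utility):
        """Choose the action that maximizes the utility of the predicted
        next state. `result` models the world; `utility` internalizes
        the performance measure."""
        return max(actions, key=lambda action: utility(result(state, action)))

    # Toy example: pick the route with the shortest predicted trip time.
    predicted_time = {"highway": 20, "back_roads": 35}
    best = utility_based_action(
        state=None,
        actions=["highway", "back_roads"],
        result=lambda s, a: predicted_time[a],  # toy "world model"
        utility=lambda minutes: -minutes,       # faster trips are happier
    )
    print(best)  # -> highway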
Problem-solving agents
• The simplest agents discussed were the reflex agents, which base their actions on a
direct mapping from states to actions. Such agents cannot operate well in environments
for which this mapping would be too large to store and would take too long to learn.
• Goal-based agents, on the other hand, consider future actions and the desirability of
their outcomes.
• Here we describe one kind of goal-based agent called a problem-solving agent.
• Problem-solving agents use atomic representations; that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
• Goal-based agents that use more advanced factored or structured representations are
usually called planning agents.
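• To illustrate the atomic view, the sketch below treats each state as an opaque label that a breadth-first search never looks inside; the graph and labels are hypothetical:

    from collections import deque

    # Atomic representation: each state is an indivisible label; the search
    # algorithm sees no internal structure.
    graph = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "Goal"],
        "Goal": [],
    }

    def breadth_first_search(start, goal):
        """Return a path of atomic states from start to goal, or None."""
        frontier = deque([[start]])
        explored = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for successor in graph[path[-1]]:
                if successor not in explored:
                    explored.add(successor)
                    frontier.append(path + [successor])
        return None

    print(breadth_first_search("A", "Goal"))  # -> ['A', 'B', 'D', 'Goal']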