
Subject: Artificial Intelligence
TE COMPUTER (2019 Pattern)

Unit I: Introduction (07 Hours)
CO1: Identify and apply suitable intelligent agents for various AI applications

• Introduction to Artificial Intelligence
• Foundations of Artificial Intelligence
• History of Artificial Intelligence
• State of the Art, Risks and Benefits of AI
• Intelligent Agents
• Agents and Environments
• Good Behavior: The Concept of Rationality
• Nature of Environments
• Structure of Agents
What is Artificial Intelligence?

• Artificial intelligence (AI), sometimes called machine intelligence, is
intelligence demonstrated by machines, in contrast to the natural
intelligence displayed by humans and other animals.
• Computer science defines AI research as the study of “intelligent
agents”: any device that perceives its environment and takes actions
that maximize its chance of successfully achieving its goals.
• Some researchers define AI as “a system’s ability to correctly interpret
external data, to learn from such data, and to use those learnings to
achieve specific goals and tasks through flexible adaptation.”
Artificial Intelligence

• Some definitions of artificial intelligence, organized into four categories:

Thinking Humanly
“The exciting new effort to make computers think . . . machines with minds, in the full and literal sense.” (Haugeland, 1985)
“[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978)

Thinking Rationally
“The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
“The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)

Acting Humanly
“The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)

Acting Rationally
“Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
“AI . . . is concerned with intelligent behavior in artifacts.” (Nilsson, 1998)
Turing Test

• The Turing Test, proposed by Alan Turing (1950), was designed to
provide a satisfactory operational definition of intelligence.
• The computer would need to possess the following capabilities:
• Natural Language Processing: to enable it to communicate successfully in
English;
• Knowledge Representation: to store what it knows or hears;
• Automated Reasoning: to use the stored information to answer questions
and to draw new conclusions;
• Machine Learning: to adapt to new circumstances and to detect and extrapolate
patterns.
Intelligent Agents

Agents and Environments
• An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
• A human agent has eyes, ears, and other organs for sensors and hands,
legs, vocal tract, and so on for actuators.
• A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets
as sensory inputs and acts on the environment by displaying on the
screen, writing files, and sending network packets.
Percept and Percept Sequence

• We use the term percept to refer to the agent’s perceptual inputs at
any given instant. An agent’s percept sequence is the complete
history of everything the agent has ever perceived.
• In general, an agent’s choice of action at any given instant can depend
on the entire percept sequence observed to date, but not on anything it
hasn’t perceived.
Agent Function and Agent Program
• Mathematically speaking, we say that an agent’s behaviour is
described by the agent function that maps any given percept
sequence to an action.
• Internally, the agent function for an artificial agent will be
implemented by an agent program.
• It is important to keep these two ideas distinct. The agent function is
an abstract mathematical description; the agent program is a
concrete implementation, running within some physical system.
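• As a concrete illustration (a sketch, not from the slides), part of the agent function for a simple two-square vacuum world (squares A and B, each either Clean or Dirty) can be tabulated explicitly. The (location, status) percept format is an assumption for illustration; an agent program implementing such a table appears under “Agent programs” below.

# Agent FUNCTION: an abstract mapping from percept sequences to actions
# (only the length-one percept sequences are shown).
VACUUM_AGENT_FUNCTION = {
    (("A", "Clean"),): "Right",   # square A is clean: move right
    (("A", "Dirty"),): "Suck",    # square A is dirty: suck
    (("B", "Clean"),): "Left",    # square B is clean: move left
    (("B", "Dirty"),): "Suck",    # square B is dirty: suck
}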
Good Behavior: The Concept of Rationality

• A rational agent is one that does the right thing—conceptually
speaking, every entry in the table for the agent function is filled out
correctly. Obviously, doing the right thing is better than doing the
wrong thing, but what does it mean to do the right thing?
• When an agent is placed in an environment, its actions cause the
environment to go through a sequence of states. If that sequence is
desirable, then the agent has performed well. This notion of
desirability is captured by a performance measure that evaluates any
given sequence of environment states.
Rationality

• What is rational at any given time depends on four things:
1. The performance measure that defines the criterion of success.
2. The agent’s prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent’s percept sequence to date.
• This leads to a definition of a rational agent:
• For each possible percept sequence, a rational agent should select an action
that is expected to maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in knowledge the agent
has.
Omniscience, Learning, and Autonomy
• We need to be careful to distinguish between rationality and omniscience. An
omniscient agent knows the actual outcome of its actions and can act accordingly; but
omniscience is impossible in reality.
• Doing actions in order to modify future percepts—sometimes called information
gathering—is an important part of rationality.
• Our definition requires a rational agent not only to gather information but also to learn
as much as possible from what it perceives. The agent’s initial configuration could reflect
some prior knowledge of the environment, but as the agent gains experience this may be
modified and augmented.
• To the extent that an agent relies on the prior knowledge of its designer rather than on its
own percepts, we say that the agent lacks autonomy. A rational agent should be
autonomous—it should learn what it can to compensate for partial or incorrect prior
knowledge.
Task Environments
• Now that we have a definition of rationality, we are almost ready to
think about building rational agents. First, however, we must think
about task environments, which are essentially the “problems” to
which rational agents are the “solutions.”
Specifying the task environment
• In our discussion of the rationality of the simple vacuum-cleaner agent, we had
to specify the performance measure, the environment, and the agent’s
actuators and sensors.
• We group all these under the heading of the task environment. For
the acronym, we call this the PEAS (Performance, Environment,
Actuators, Sensors) description.
• In designing an agent, the first step must always be to specify the task
environment as fully as possible.
Specifying the task environment

• PEAS description of the task environment for an automated taxi:

Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip; maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
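• A hedged sketch: the same PEAS description captured as a simple data structure. The PEAS class is illustrative, not part of any standard library; the field values are taken from the taxi example above.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)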
THE STRUCTURE OF AGENTS

• So far we have talked about agents by describing behavior—the action
that is performed after any given sequence of percepts.
• Now we talk about how the insides work. The job of AI is to design an
agent program that implements the agent function—the mapping
from percepts to actions.
• We assume this program will run on some sort of computing device
with physical sensors and actuators—we call this the architecture:
• agent = architecture + program.
Architecture

• The program we choose has to be one that is appropriate for the
architecture.
• If the program is going to recommend actions like Walk, the
architecture had better have legs.
• The architecture might be just an ordinary PC, or it might be a robotic
car with several onboard computers, cameras, and other sensors.
• In general, the architecture makes the percepts from the sensors
available to the program, runs the program, and feeds the program’s
action choices to the actuators as they are generated.
Agent programs

• The agent programs that we design all have the same skeleton: they take the
current percept as input from the sensors and return an action to the
actuators. A minimal sketch of this skeleton is shown below.
• Notice the difference between the agent program, which takes the current
percept as input, and the agent function, which takes the entire percept
history.
• The agent program takes just the current percept as input because nothing
more is available from the environment; if the agent’s actions need to depend
on the entire percept sequence, the agent will have to remember the
percepts.
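• A minimal sketch (assumed names, not a definitive implementation) of the shared skeleton: the architecture repeatedly hands the current percept to the agent program and passes the returned action to the actuators. The environment interface is hypothetical.

def run(environment, program, steps=1000):
    """Hypothetical driver loop supplied by the architecture."""
    for _ in range(steps):
        percept = environment.percept()   # read the sensors
        action = program(percept)         # agent program: percept -> action
        environment.execute(action)       # drive the actuators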
Agent programs

• The TABLE-DRIVEN-AGENT program is invoked for each new percept and
returns an action each time. It retains the complete percept sequence in
memory. A Python rendering follows.
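• A sketch in Python following the AIMA TABLE-DRIVEN-AGENT pseudocode; the table argument is a fully specified mapping from percept sequences to actions, such as VACUUM_AGENT_FUNCTION above.

def make_table_driven_agent(table):
    percepts = []                          # complete percept sequence, kept forever
    def program(percept):
        percepts.append(percept)           # append percept to the end of percepts
        return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)
    return program

• The approach is doomed in practice: the table needs one entry for every possible percept sequence, so its size grows astronomically with the length of the agent’s lifetime.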
Simple reflex agents

• The simplest kind of agent is the simple reflex agent. These agents select actions on the
basis of the current percept, ignoring the rest of the percept history. For example, the
vacuum agent whose agent function is tabulated above is a simple reflex agent, because its
decision is based only on the current location and on whether that location contains
dirt.
• The agent program for a simple reflex agent in the two-state vacuum environment is
sketched below.
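• A sketch following the AIMA REFLEX-VACUUM-AGENT pseudocode; the (location, status) percept format is the same assumption as before.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":        # dirty square: clean it
        return "Suck"
    elif location == "A":        # clean and at A: move right
        return "Right"
    elif location == "B":        # clean and at B: move left
        return "Left"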
Simple reflex agents

• In other words, some processing is done on
the visual input to establish the condition
we call “The car in front is braking.” Then,
this triggers some established connection in
the agent program to the action “initiate
braking.” We call such a connection a
condition–action rule, written as
• if car-in-front-is-braking then initiate-braking.
• A simple reflex agent acts according to a
rule whose condition matches the current
state, as defined by the percept. A generic
sketch is given below.
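• A sketch of the generic simple reflex agent (after the AIMA SIMPLE-REFLEX-AGENT pseudocode); interpret_input and the (condition, action) rule format are assumptions for illustration.

def make_simple_reflex_agent(rules, interpret_input):
    def program(percept):
        state = interpret_input(percept)    # e.g. "car-in-front-is-braking"
        for condition, action in rules:     # rule <- RULE-MATCH(state, rules)
            if condition(state):
                return action               # e.g. "initiate-braking"
    return program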
Model-based reflex agents

• The most effective way to handle partial observability is for the agent to keep track of the part of the
world it can’t see now.
• That is, the agent should maintain some sort of internal state that depends on the percept history
and thereby reflects at least some of the unobserved aspects of the current state. For the braking
problem, the internal state is not too extensive—just the previous frame from the camera, allowing
the agent to detect when two red lights at the edge of the vehicle go on or off simultaneously.
• For other driving tasks such as changing lanes, the agent needs to keep track of where the other cars
are if it can’t see them all at once. And for any driving to be possible at all, the agent needs to keep
track of where its keys are.
• This knowledge about “how the world works”—whether implemented in simple Boolean circuits or
in complete scientific theories—is called a model of the world. An agent that uses such a model is
called a model-based agent.
Model-based reflex agents

• A model-based reflex agent keeps track
of the current state of the world, using an
internal model. It then chooses an action
in the same way as the reflex agent. A
sketch is given below.
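• A sketch after the AIMA MODEL-BASED-REFLEX-AGENT pseudocode; update_state (the model of “how the world works”) and the rule format are assumptions for illustration.

def make_model_based_reflex_agent(rules, update_state, initial_state):
    state, last_action = initial_state, None
    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # fold percept into state
        for condition, action in rules:
            if condition(state):
                last_action = action        # remember action for the next update
                return action
    return program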
Goal-based agents

• Knowing something about the current state of the
environment is not always enough to decide what to do.
For example, at a road junction, the taxi can turn left,
turn right, or go straight. The correct decision depends
on where the taxi is trying to get to.
• In other words, as well as a current state description,
the agent needs some sort of goal information that
describes situations that are desirable—for example,
being at the passenger’s destination. The agent
program can combine this with the model (the same
information as was used in the model-based reflex
agent) to choose actions that achieve the goal.
• A model-based, goal-based agent keeps track of the
world state as well as a set of goals it is trying to
achieve and chooses an action that will (eventually)
lead to the achievement of its goals. A sketch of
one-step goal-based action selection is given below.
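• A hedged sketch of one-step goal-based action selection: simulate each candidate action with the model and pick one whose predicted result satisfies the goal. predict(state, action) and goal_test are hypothetical helpers; goals that need multi-step action sequences require search or planning instead.

def goal_based_choice(state, actions, predict, goal_test):
    for action in actions:
        next_state = predict(state, action)   # "what will happen if I do this?"
        if goal_test(next_state):             # "will that achieve my goal?"
            return action
    return None                               # no single action achieves the goal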
Utility-based agents

• Goals alone are not enough to generate high-quality behavior in most environments. For
example, many action sequences will get the taxi to its destination (thereby achieving the
goal) but some are quicker, safer, more reliable, or cheaper than others. Goals just provide
a crude binary distinction between “happy” and “unhappy” states.
• A more general performance measure should allow a comparison of different world
states according to exactly how happy they would make the agent. Because “happy” does
not sound very scientific, economists and computer scientists use the term utility instead.
• An agent’s utility function is essentially an internalization of the performance measure. If
the internal utility function and the external performance measure are in agreement, then
an agent that chooses actions to maximize its utility will be rational according to the
external performance measure.
Utility-based agents

• A model-based, utility-based agent uses a model
of the world, along with a utility function that
measures its preferences among states of the world.
Then it chooses the action that leads to the best
expected utility, where expected utility is computed
by averaging over all possible outcome states,
weighted by the probability of the outcome. A
sketch is given below.
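• A hedged sketch of expected-utility maximization as just described; outcomes(state, action) is a hypothetical model returning (probability, next_state) pairs.

def utility_based_choice(state, actions, outcomes, utility):
    def expected_utility(action):
        # average utility over possible outcome states, weighted by probability
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions, key=expected_utility)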
Learning agents

• A learning agent can be divided into four
conceptual components, sketched in code below.
• The most important distinction is between the
learning element, which is responsible for
making improvements, and the performance
element, which is responsible for selecting
external actions.
• The performance element is what we have
previously considered to be the entire agent: it
takes in percepts and decides on actions.
• The learning element uses feedback from the
critic on how the agent is doing and determines
how the performance element should be modified
to do better in the future.
• The last component of the learning agent is the
problem generator. It is responsible for suggesting
actions that will lead to new and informative
experiences.
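• An illustrative sketch wiring the four components together; the component interfaces are assumptions, not a standard design.

class LearningAgent:
    def __init__(self, performance, critic, learner, problem_generator):
        self.performance = performance              # selects external actions
        self.critic = critic                        # judges behavior against a fixed standard
        self.learner = learner                      # improves the performance element
        self.problem_generator = problem_generator  # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)             # how is the agent doing?
        self.learner(self.performance, feedback)    # modify performance element to do better
        exploratory = self.problem_generator(percept)
        return exploratory if exploratory else self.performance(percept)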
How do the components of agent programs work?

• Roughly speaking, we can place the representations along an axis of
increasing complexity and expressive power—atomic, factored, and
structured. Small illustrative examples are given below.
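• Illustrative examples (assumed values, not from the slides) of the three kinds of state representation:

atomic = "state_42"                        # a state is an indivisible label

factored = {"gps": (18.52, 73.85),         # a state is a vector of
            "fuel": 0.7,                   # attribute values
            "oil_light": False}

structured = {"objects": ["truck1", "cow1"],                      # a state contains objects
              "relations": [("in_front_of", "cow1", "truck1")]}   # and relations among them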
Conclusion

• The simplest agents discussed were the reflex agents, which base their actions on a
direct mapping from states to actions. Such agents cannot operate well in environments
for which this mapping would be too large to store and would take too long to learn.
• Goal-based agents, on the other hand, consider future actions and the desirability of
their outcomes.
• One kind of goal-based agent is called a problem-solving agent.
• Problem-solving agents use atomic representations—that is, states of the world are
considered as wholes, with no internal structure visible to the problem-solving
algorithms.
• Goal-based agents that use more advanced factored or structured representations are
usually called planning agents.
