AI Agent PDF

An agent is anything that perceives its environment and acts upon it. There are three main types of agents: human agents, robotic agents, and software agents. An intelligent agent is an autonomous entity that acts to achieve goals using sensors and actuators. For an agent to be rational, it must perceive the environment, make decisions based on observations, take actions as a result of decisions, and take rational actions. The rationality of an agent depends on its performance measures, percept sequence, prior knowledge, and possible actions. A rational agent is one that acts to maximize its performance measure given its percepts and knowledge.


What is an Agent?

An agent can be anything that perceives its environment through sensors and acts upon that
environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.
An agent can be:

● Human Agent: A human agent has eyes, ears, and other organs that serve as sensors, while
hands, legs, and the vocal tract serve as actuators.
● Robotic Agent: A robotic agent can have cameras and infrared range finders as sensors,
and various motors as actuators.
● Software Agent: A software agent receives keystrokes and file contents as sensory input
and acts by displaying output on the screen.
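The perceive–think–act cycle above can be sketched as a small loop. The environment and the names used here (`CounterEnvironment`, `agent_program`, `run`) are hypothetical placeholders, chosen only to make the sketch runnable:

```python
class CounterEnvironment:
    """A toy environment whose state is an integer the agent drives to 0.
    (Entirely hypothetical; it exists only to make the loop runnable.)"""
    def __init__(self, state=3):
        self.state = state

    def percept(self):
        return self.state

    def execute(self, action):
        if action == "decrement":
            self.state -= 1

def agent_program(percept):
    """Decide an action from the current percept."""
    return "decrement" if percept > 0 else "wait"

def run(program, env, steps=10):
    """The perceive-think-act cycle: sense, decide, act."""
    for _ in range(steps):
        percept = env.percept()      # perceive via sensors
        action = program(percept)    # think: map percept to action
        env.execute(action)          # act via actuators

env = CounterEnvironment()
run(agent_program, env)
print(env.state)  # -> 0
```

The loop itself is agent-agnostic: swapping in a different `agent_program` changes the behavior without touching the cycle.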

Intelligent Agents:

An intelligent agent is an autonomous entity that acts upon an environment, using sensors and
actuators, to achieve goals. An intelligent agent may learn from the environment to achieve
its goals. A thermostat is an example of an intelligent agent.

Following are the four main rules for an AI agent:

● Rule 1: An AI agent must have the ability to perceive the environment.
● Rule 2: The observations must be used to make decisions.
● Rule 3: Decisions should result in actions.
● Rule 4: The actions taken by an AI agent must be rational.
Agent Terminology:

● Performance Measure of Agent − the criteria that determine how successful an agent is.
● Behavior of Agent − the action that the agent performs after any given sequence of
percepts.
● Percept − the agent's perceptual input at a given instant.
● Percept Sequence − the history of everything the agent has perceived to date.
● Agent Function − a map from the percept sequence to an action.
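The agent function, a mapping from percept sequences to actions, can be illustrated with a table-driven sketch. The percepts and actions below (a two-location vacuum world with locations "A" and "B") are a standard toy example, but the exact table entries are our own:

```python
def agent_function(percept_sequence):
    """Map the full percept history to an action.

    This table-driven version is only illustrative: real agent programs
    compute the action rather than look it up, because the table would
    grow without bound as the percept history lengthens.
    """
    table = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("A", "Dirty"), ("A", "Clean")): "Right",
    }
    return table.get(tuple(percept_sequence), "NoOp")

print(agent_function([("A", "Dirty")]))  # -> Suck
print(agent_function([("A", "Dirty"), ("A", "Clean")]))  # -> Right
```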

Rationality:

The rationality of an agent is measured by its performance measure. Rationality can be judged
on the basis of the following points:

● The performance measure, which defines the success criterion.
● The agent's prior knowledge of its environment.
● The best possible actions that the agent can perform.
● The sequence of percepts.

Rational Agent:

A rational agent is an agent that has clear preferences, models uncertainty, and acts in a way
that maximizes its performance measure over all possible actions.

A rational agent is said to do the right thing. AI is about creating rational agents, drawing on
game theory and decision theory for various real-world scenarios.

For an AI agent, rational action is most important because in reinforcement learning the agent
receives a positive reward for each good action and a negative reward for each wrong action.
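The reward idea can be sketched as a toy value update. This is a simplified running estimate, not a full reinforcement-learning algorithm, and all names are our own:

```python
def update_value(value, reward, alpha=0.1):
    """Exponential-average value update: positive rewards raise the
    estimated worth of acting this way, negative rewards lower it."""
    return value + alpha * (reward - value)

v = 0.0
for r in [1, 1, -1, 1]:  # +1 for good actions, -1 for a wrong one
    v = update_value(v, r)
print(round(v, 3))  # -> 0.164
```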

The rationality of an agent depends on the following:

● The performance measures, which determine the degree of success.
● The agent's percept sequence so far.
● The agent's prior knowledge about the environment.
● The actions that the agent can carry out.

A rational agent always performs the right action, where the right action is the one that
makes the agent most successful given the percept sequence. The problem the agent
solves is characterized by its Performance measure, Environment, Actuators, and
Sensors (PEAS).

Example of Agents with their PEAS representation

| Agent | Performance measure | Environment | Actuators | Sensors |
|---|---|---|---|---|
| 1. Medical Diagnose | Healthy patient, Minimized cost | Patient, Hospital, Staff | Tests, Treatments | Keyboard (entry of symptoms) |
| 2. Vacuum Cleaner | Cleanness, Efficiency, Battery life, Security | Room, Table, Wood floor, Carpet, Various obstacles | Wheels, Brushes, Vacuum extractor | Camera, Dirt detection sensor, Cliff sensor, Bump sensor, Infrared wall sensor |
| 3. Part-picking Robot | Percentage of parts in correct bins | Conveyor belt with parts, Bins | Jointed arms, Hand | Camera, Joint angle sensors |
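A PEAS description can be captured as a small data structure. The vacuum-cleaner row from the table is used as the example; the class and field names are our own:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task description: Performance measure, Environment,
    Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

vacuum = PEAS(
    performance=["Cleanness", "Efficiency", "Battery life", "Security"],
    environment=["Room", "Table", "Wood floor", "Carpet", "Obstacles"],
    actuators=["Wheels", "Brushes", "Vacuum extractor"],
    sensors=["Camera", "Dirt sensor", "Cliff sensor", "Bump sensor"],
)
print(vacuum.performance[0])  # -> Cleanness
```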

The Structure of Intelligent Agents

An agent's structure can be viewed as:

● Agent = Architecture + Agent Program
● Architecture = the machinery that the agent executes on.
● Agent Program = an implementation of the agent function.
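The Agent = Architecture + Agent Program split can be sketched as follows: the architecture supplies sensors and actuators, while the agent program only maps percepts to actions. All names here are hypothetical:

```python
class Architecture:
    """The machinery the agent program runs on: it delivers percepts
    from sensors and sends actions to actuators. (Hypothetical sketch;
    percepts are pre-scripted to keep the example self-contained.)"""
    def __init__(self, percepts):
        self.percepts = iter(percepts)
        self.actions_taken = []

    def sense(self):
        return next(self.percepts)

    def actuate(self, action):
        self.actions_taken.append(action)

def agent_program(percept):
    """An implementation of the agent function for single percepts."""
    return "Suck" if percept == "Dirty" else "Move"

# the same program could run unchanged on a different architecture
arch = Architecture(["Dirty", "Clean"])
for _ in range(2):
    arch.actuate(agent_program(arch.sense()))
print(arch.actions_taken)  # -> ['Suck', 'Move']
```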

Types of Intelligent Agents:

1. Simple Reflex Agents

● They choose actions based only on the current percept.
● They are rational only if a correct decision can be made on the basis of the current
percept alone.
● They require the environment to be fully observable.
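A simple reflex agent can be sketched with condition–action rules applied to the current percept only. The rules below (again the two-location vacuum world) are hypothetical:

```python
def simple_reflex_agent(percept):
    """Choose an action from the current percept alone, via
    condition-action rules; no percept history is kept."""
    rules = [
        (lambda p: p["status"] == "Dirty", "Suck"),
        (lambda p: p["location"] == "A", "Right"),
        (lambda p: p["location"] == "B", "Left"),
    ]
    for condition, action in rules:
        if condition(percept):  # first matching rule fires
            return action
    return "NoOp"

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # -> Suck
print(simple_reflex_agent({"location": "B", "status": "Clean"}))  # -> Left
```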

2. Model-Based Reflex Agents

This type of agent considers the percept history along with the current percept to
decide on an action. The environment here is not fully observable.
These agents use a model of the world to choose their actions and maintain an internal state.

Model − knowledge about "how things happen in the world".

Internal State − a representation of the unobserved aspects of the current state,
based on the percept history.
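The internal state can be sketched as a record updated from the percept history; the model then fills in what the current percept does not show. The class and location names are our own:

```python
class ModelBasedAgent:
    """Keeps an internal state (which locations are known clean),
    updated from the percept history. A hypothetical sketch."""
    def __init__(self):
        self.world = {}  # internal state: location -> last known status

    def act(self, percept):
        location, status = percept
        self.world[location] = status  # update the model from the percept
        if status == "Dirty":
            return "Suck"
        # use the model: visit a location not yet known to be clean
        for loc in ("A", "B"):
            if self.world.get(loc) != "Clean":
                return f"GoTo {loc}"
        return "NoOp"

agent = ModelBasedAgent()
print(agent.act(("A", "Dirty")))  # -> Suck
print(agent.act(("A", "Clean")))  # -> GoTo B
```

Unlike the simple reflex agent, the same percept ("A", "Clean") can lead to different actions depending on what the agent has seen before.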

3. Goal-Based Agents

They choose their actions in order to achieve goals. A goal-based approach is more
flexible than a reflex agent, since the knowledge supporting a decision is explicitly
modeled and can therefore be modified.

Goal − a description of desirable situations.
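Goal-based choice can be sketched as a one-step lookahead: predict the result of each action and pick the one that reaches the goal. The numeric state and transition model here are hypothetical:

```python
def goal_based_agent(state, goal, actions):
    """Pick the action whose predicted result matches the goal.
    A one-step lookahead sketch; real goal-based agents may search
    over many steps."""
    def result(state, action):  # simple hypothetical transition model
        return state + action
    for action in actions:
        if result(state, action) == goal:
            return action
    return None  # no single action achieves the goal

# reach goal state 5 from state 3 using numeric 'moves'
print(goal_based_agent(3, 5, actions=[-1, 1, 2]))  # -> 2
```

Because the goal is explicit data rather than hard-wired rules, changing the goal changes the behavior without rewriting the agent.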


4. Utility-Based Agents

They choose actions based on a preference (utility) for each state.

Goals alone are inadequate when:

● There are conflicting goals, of which only a few can be achieved.
● Goals have some uncertainty of being achieved, and you need to weigh the likelihood
of success against the importance of each goal.
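Utility-based choice replaces the binary goal test with a numeric preference over outcomes. In this hypothetical sketch, utility peaks at state 0, so the agent prefers the move that lands closest to it:

```python
def utility_based_agent(state, actions, utility):
    """Choose the action leading to the highest-utility outcome.
    The transition and utility functions are hypothetical."""
    def result(state, action):  # simple transition model
        return state + action
    return max(actions, key=lambda a: utility(result(state, a)))

utility = lambda s: -abs(s)  # utility peaks at state 0
print(utility_based_agent(3, actions=[-2, -1, 1], utility=utility))  # -> -2
```

Unlike a goal test, the utility function still ranks the options even when no action fully achieves the goal, which is exactly the situation described in the bullets above.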

Environments:
Some programs operate in an entirely artificial environment confined to keyboard input,
databases, computer file systems, and character output on a screen. The most famous
artificial environment is the Turing Test environment, in which one real and one
artificial agent are tested on equal ground. This is a very challenging environment, as
it is highly difficult for a software agent to perform as well as a human.

Properties of Environment:

1. Fully observable vs Partially Observable:

● If an agent's sensors can access the complete state of the environment at each point
in time, the environment is fully observable; otherwise it is partially observable.
● A fully observable environment is easier to handle, as there is no need to maintain an
internal state to keep track of the history of the world.
● If an agent has no sensors at all, the environment is called unobservable.

2. Deterministic vs Stochastic:

● If the agent's current state and selected action completely determine the next state
of the environment, the environment is called deterministic.
● A stochastic environment is random in nature and cannot be completely determined
by the agent.
● In a deterministic, fully observable environment, the agent does not need to worry
about uncertainty.

4. Single-agent vs Multi-agent

● If only one agent is involved in an environment and operates by itself, the
environment is called a single-agent environment.
● If multiple agents operate in the environment, it is called a multi-agent
environment.
● Agent design problems in a multi-agent environment differ from those in a
single-agent environment.
5. Static vs Dynamic:

● If the environment can change while the agent is deliberating, it is called a dynamic
environment; otherwise it is static.
● Static environments are easier to deal with, because the agent does not need to keep
observing the world while deciding on an action.
● In a dynamic environment, however, the agent must keep observing the world before
each action.
● Taxi driving is an example of a dynamic environment, whereas a crossword puzzle
is an example of a static environment.

6. Discrete vs Continuous:

● If an environment has a finite number of possible percepts and actions, it is called a
discrete environment; otherwise it is continuous.
● A chess game is a discrete environment, as there is a finite number of possible
moves.
● A self-driving car operates in a continuous environment.

7. Known vs Unknown

● Known and unknown are not actually features of the environment but of the agent's
state of knowledge about it.
● In a known environment, the results of all actions are known to the agent, while in an
unknown environment the agent must learn how the environment works in order to
act.
● It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.

8. Accessible vs Inaccessible

● If an agent can obtain complete and accurate information about the environment's
state, the environment is called accessible; otherwise it is inaccessible.
● An empty room whose state is fully defined by its temperature is an example of an
accessible environment.
● Information about an arbitrary event on Earth is an example of an inaccessible
environment.
