AI Agents


Agents in Artificial Intelligence

An AI system can be studied in terms of a rational agent and its environment. Agents sense the environment through sensors and act on their environment through actuators. An AI agent can have mental properties such as knowledge, beliefs, and intentions.

What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:

o Human Agent: A human agent has eyes, ears, and other organs that serve as sensors, and hands, legs, and a vocal tract that serve as actuators.
o Robotic Agent: A robotic agent may have cameras and infrared range finders as sensors and various motors as actuators.
o Software Agent: A software agent receives keystrokes and file contents as sensory input, acts on those inputs, and displays output on the screen.

Hence the world around us is full of agents, such as thermostats, cellphones, and cameras; we ourselves are agents too.
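
The perceive-think-act cycle described above can be sketched in code. The following is a minimal illustration using a hypothetical thermostat agent; the sensor readings and the 22-degree setpoint are assumptions made for the example, not part of the original text.

```python
def perceive(readings, step):
    """Sensor: read the current room temperature."""
    return readings[step]

def think(temperature, setpoint=22.0):
    """Decide: compare the percept against the goal temperature."""
    if temperature < setpoint - 1:
        return "heat_on"
    elif temperature > setpoint + 1:
        return "heat_off"
    return "idle"

def act(action):
    """Actuator: in a real agent this would drive a relay or motor."""
    return action

# Run the agent through three perceive-think-act cycles.
readings = [19.5, 21.8, 23.4]
actions = [act(think(perceive(readings, t))) for t in range(len(readings))]
print(actions)  # ['heat_on', 'idle', 'heat_off']
```

Each iteration is one full cycle: a percept comes in through the sensor, a decision is made, and an action goes out through the actuator.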

Sensor: A sensor is a device that detects changes in the environment and sends this information to other electronic devices. An agent observes its environment through its sensors.

Actuators: Actuators are the components of a machine that convert energy into motion. The actuators are responsible for moving and controlling a system. An actuator can be an electric motor, a gear, a rail, etc.

Effectors: Effectors are the devices that affect the environment, such as legs, wheels, arms, fingers, wings, fins, and a display screen.
Intelligent Agents:
An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from its environment to achieve those goals. A thermostat is a simple example of an intelligent agent.

The following are the four main rules for an AI agent:

o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observations must be used to make decisions.
o Rule 3: A decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.

Rational Agent:
A rational agent is an agent that has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

A rational agent is said to do the right thing. AI is about creating rational agents, using ideas from game theory and decision theory, for various real-world scenarios.

For an AI agent, rational action is most important because in reinforcement learning, the agent receives a positive reward for each best possible action and a negative reward for each wrong action.
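
As a toy illustration of this reward signal (not a full reinforcement learning algorithm), the agent below tallies +1 for each correct action and -1 for each wrong one; the states, actions, and "correct" answers are invented for the example.

```python
# Hypothetical state -> best-action mapping (the reward oracle).
correct = {"s1": "a", "s2": "b", "s3": "a"}
# Actions the agent actually took in each state.
taken = {"s1": "a", "s2": "a", "s3": "a"}

# +1 reward for a best possible action, -1 for a wrong one.
total_reward = sum(+1 if taken[s] == correct[s] else -1 for s in correct)
print(total_reward)  # +1 - 1 + 1 = 1
```

A learning agent would use this accumulated reward to prefer the actions that earned positive feedback.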

Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:

o The performance measure, which defines the success criterion.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The percept sequence to date.

Structure of an AI Agent
The task of AI is to design an agent program that implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent program  

The following three terms are involved in the structure of an AI agent:

Architecture: The architecture is the machinery that the AI agent executes on.

Agent function: The agent function maps a percept sequence to an action:

f : P* → A

Agent program: The agent program is an implementation of the agent function. It executes on the physical architecture to produce the function f.
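
One way to sketch the mapping f : P* → A is a table-driven agent program: a lookup from percept sequences to actions. The vacuum-world percepts and actions below are illustrative assumptions, not from the original text.

```python
# A partial table mapping percept sequences (tuples) to actions.
table = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
}

percepts = []  # P*: the percept sequence observed so far

def agent_program(percept):
    """Implements f by recording the new percept and looking up an action."""
    percepts.append(percept)
    return table.get(tuple(percepts), "no_op")

print(agent_program("clean"))  # move
print(agent_program("dirty"))  # suck
```

Real agent programs avoid an explicit table (it grows exponentially with the percept sequence length), but the table makes the distinction between the abstract function f and its concrete implementation easy to see.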

PEAS Representation
PEAS is a model on which an AI agent is defined. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms:

o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors

Here, the performance measure is the objective for the success of an agent's behavior.

PEAS for a self-driving car:

For a self-driving car, the PEAS representation will be:

Performance: Safety, time, legal drive, comfort

Environment: Roads, other vehicles, road signs, pedestrians

Actuators: Steering, accelerator, brake, signal, horn

Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
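
The PEAS description above can be captured as a simple record. This dataclass is an illustrative structure (not a standard API), populated with the self-driving-car entries from the text.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Groups an agent's properties under the four PEAS terms."""
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)

print(self_driving_car.sensors[0])  # camera
```

Writing the PEAS description down first, in whatever form, is the usual starting point before choosing an agent design.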


Agent Environment in AI
An environment is everything in the world that surrounds the agent but is not a part of the agent itself. An environment can be described as the situation in which an agent is present.

The environment is where the agent lives and operates; it provides the agent with something to sense and act upon. An environment is usually non-deterministic.

Features of Environment
As per Russell and Norvig, an environment can have various features from the point of view
of an agent:

1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible

1. Fully observable vs Partially Observable:

o If an agent's sensors can access the complete state of the environment at each point in time, the environment is fully observable; otherwise it is partially observable.
o A fully observable environment is easy to deal with, as there is no need to maintain internal state to keep track of the history of the world.
o If an agent has no sensors at all, the environment is called unobservable.

2. Deterministic vs Stochastic:

o If the agent's current state and selected action completely determine the next state of the environment, the environment is called deterministic.
o A stochastic environment is random in nature and cannot be completely determined by the agent.
o In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.

3. Episodic vs Sequential:

o In an episodic environment, there is a series of one-shot actions, and only the current percept is required to choose an action.
o However, in a sequential environment, an agent requires memory of past actions to determine the next best action.

4. Single-agent vs Multi-agent

o If only one agent is involved in an environment and operates by itself, the environment is called a single-agent environment.
o However, if multiple agents are operating in an environment, it is called a multi-agent environment.
o The agent design problems in a multi-agent environment are different from those in a single-agent environment.

5. Static vs Dynamic:

o If the environment can change while an agent is deliberating, it is called a dynamic environment; otherwise it is called a static environment.
o Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
o However, in a dynamic environment, the agent needs to keep looking at the world before each action.
o Taxi driving is an example of a dynamic environment, whereas a crossword puzzle is an example of a static environment.

6. Discrete vs Continuous:

o If an environment has a finite number of percepts and actions that can be performed within it, it is called a discrete environment; otherwise it is called a continuous environment.
o A chess game comes under a discrete environment, as there is a finite number of moves that can be performed.
o A self-driving car is an example of a continuous environment.

7. Known vs Unknown

o Known and unknown are not actually features of an environment; rather, they describe an agent's state of knowledge about it.
o In a known environment, the results of all actions are known to the agent, while in an unknown environment, the agent needs to learn how it works in order to act.
o It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.

8. Accessible vs Inaccessible

o If an agent can obtain complete and accurate information about the environment's state, the environment is called accessible; otherwise it is called inaccessible.
o An empty room whose state can be defined by its temperature alone is an example of an accessible environment.
o Information about an event anywhere on Earth is an example of an inaccessible environment.
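
The features above can be used to classify concrete task environments. The sketch below restates two examples from the text (chess is static and discrete; taxi driving is dynamic and continuous); the remaining entries are common textbook classifications, included here as assumptions.

```python
# Feature profiles for two example task environments.
environments = {
    "chess": {
        "observable": "fully", "deterministic": True,
        "episodic": False, "static": True,
        "discrete": True, "agents": "multi",
    },
    "taxi driving": {
        "observable": "partially", "deterministic": False,
        "episodic": False, "static": False,
        "discrete": False, "agents": "multi",
    },
}

for name, features in environments.items():
    print(f"{name}: {features['observable']} observable, "
          f"{'static' if features['static'] else 'dynamic'}")
```

Classifying the environment along these axes is usually the first step in choosing an appropriate agent design.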
Characteristics of an intelligent agent
• Rationality: Perfect rationality assumes that the rational agent knows everything and will take the action that maximizes its utility. A rational action is the action that maximizes the expected value of the performance measure given the percept sequence to date.
• Bounded Rationality: The property of an agent that behaves in a manner as nearly optimal with respect to its goals as its resources allow; an intelligent agent is expected to act optimally to the best of its abilities and within its resource constraints under the agent environment.
• Agent Environment : Environments in which agents operate can be defined in
different ways.
• Observability: In terms of observability, an environment can be characterized as fully observable or partially observable.
• In a fully observable environment, everything relevant to the action being considered is observable, and the agent does not need to keep track of changes in the environment. Example: a chess-playing system.
• In a partially observable environment, the relevant features of the environment are only partially observable. Example: a bridge-playing program.
• Determinism: In deterministic environments, the next state of the environment is completely determined by the current state and the agent's action. Example: image analysis systems, where the processed image is determined completely by the current image and the processing operations.
• Episodicity : An episodic environment means that subsequent episodes do not depend
on what actions occurred in previous episodes. In a sequential environment, the agent
engages in a series of connected episodes.
• Dynamism:
• Static Environment: does not change from one state to the next while the agent is
considering its course of action. The only changes to the environment are those
caused by the agent itself.
• A static environment does not change while the agent is thinking
• The passage of time as an agent deliberates is irrelevant.
• The agent doesn’t need to observe the world during deliberation.
• A Dynamic Environment changes over time independently of the actions of the agent; thus, if an agent does not respond in a timely manner, this counts as a choice to do nothing.
• Continuity: If the number of distinct percepts and actions is limited, the environment is discrete; otherwise it is continuous.
