Unit 2: Intelligent Agents
An intelligent agent (IA) is an entity that makes decisions, enabling artificial
intelligence to be put into action. It can also be described as a software entity
that performs operations on behalf of users or programs after sensing its
environment, using actuators to initiate actions in that environment.
An intelligent agent has some level of autonomy that allows it to perform specific,
predictable, and repetitive tasks for users or applications.
It is termed 'intelligent' because of its ability to learn while performing
tasks.
The two main functions of intelligent agents are perception and action.
Perception is done through sensors, while actions are initiated through actuators.
Higher-level and lower-level agents together form a complete system that can
solve difficult problems through intelligent behaviours or responses.
Examples of Agents:
A software agent has keystrokes, file contents, and received network packets
acting as sensors, and displays on the screen, files, and sent network packets
acting as actuators.
A human agent has eyes, ears, and other organs acting as sensors, and
hands, legs, mouth, and other body parts acting as actuators.
A robotic agent has cameras and infrared range finders acting as
sensors and various motors acting as actuators.
Characteristics of intelligent agents:
They have some level of autonomy that allows them to perform certain
tasks on their own.
They have a learning ability that enables them to learn even as tasks are
carried out.
They can interact with other entities such as agents, humans, and systems.
They can accommodate new rules incrementally.
They exhibit goal-oriented behaviour.
They are knowledge-based: they use knowledge regarding
communications, processes, and entities.
The IA structure consists of three main parts: architecture, agent function, and
agent program.
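To make these three parts concrete, here is a minimal Python sketch, borrowing the classic two-square vacuum world for illustration (the percept and action names are invented): the agent function is the mapping from percepts to actions, the agent program is the code implementing that mapping, and the architecture is whatever runs the loop.

```python
# A minimal sketch of the agent-function idea: the agent program
# implements a mapping from what the agent perceives to what it does.
# The percept/action vocabulary here is invented for illustration.

def agent_function(percept):
    """Map a percept to an action via condition-action rules."""
    location, status = percept
    if status == "dirty":
        return "suck"           # act on the current percept
    elif location == "A":
        return "move_right"     # otherwise, move to the other square
    else:
        return "move_left"

# The "architecture" runs the program in a sense-act loop:
for percept in [("A", "dirty"), ("A", "clean"), ("B", "dirty")]:
    print(percept, "->", agent_function(percept))
```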
Structure of an IA
An intelligent agent typically combines the following modules (a combined code sketch follows this list):
1. Sensors:
o Perception Module: The sensors allow the agent to perceive its
environment by collecting data. This could include cameras,
microphones, GPS, temperature sensors, or any other type of input
device relevant to the agent's environment.
o Data Preprocessing: Raw sensor data is often noisy or incomplete.
Preprocessing includes filtering, normalization, and extraction of
relevant features to prepare the data for further processing.
2. Effectors:
o Actuation Module: Effectors are the components that allow the
agent to take actions in the environment. These could include
motors, displays, speakers, or any other output device that the
agent can control to affect the environment.
3. Knowledge Base:
o World Model: A representation of the environment, including
static and dynamic information. This model helps the agent
understand the context in which it operates.
o Domain Knowledge: Specific information about the tasks the
agent is designed to perform, including rules, constraints, and
historical data.
4. Decision-Making Module:
o Goal Management: Defines the objectives the agent aims to
achieve. This includes both short-term and long-term goals.
o Planning: Generates a sequence of actions to achieve the agent’s
goals. Planning algorithms can range from simple rule-based
systems to complex methods like A* or probabilistic planners.
o Reasoning: Involves logical inference and decision-making
processes that help the agent determine the best course of action
based on its knowledge base and current perceptions.
5. Learning Module:
o Machine Learning Algorithms: Techniques such as supervised
learning, unsupervised learning, reinforcement learning, and neural
networks that allow the agent to learn from experience and adapt
its behavior over time.
o Training Data: Historical data and experiences that the agent uses
to improve its models and decision-making processes.
6. Communication Module:
o Inter-Agent Communication: Protocols and interfaces for
communicating with other agents, either to share information or to
coordinate actions.
o Human-Agent Interaction: Interfaces for interacting with human
users, including natural language processing for understanding and
generating human language.
7. Control Architecture:
o Reactive Layer: Handles immediate responses to environmental
changes. This layer often includes simple, rule-based systems for
rapid reaction.
o Deliberative Layer: Manages long-term planning and decision-
making, using more complex reasoning and planning algorithms.
o Hybrid Systems: Combines reactive and deliberative approaches
to balance quick responses with thoughtful planning.
8. Monitoring and Evaluation:
o Performance Metrics: Criteria for evaluating the agent’s success
in achieving its goals and performing tasks efficiently.
o Feedback Loop: Mechanisms for receiving feedback from the
environment and adjusting behavior accordingly.
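As referenced above, the hypothetical Python sketch below shows how the sensor, decision-making, effector, and learning modules might fit together in one sense-decide-act-learn loop; all class and method names are invented for illustration.

```python
# Hypothetical skeleton tying the modules above into one loop.
# All names (SimpleAgent, perceive, etc.) are invented for illustration.

class SimpleAgent:
    def __init__(self):
        self.knowledge = {}          # knowledge base / world model

    def perceive(self, raw_input):
        """Sensor + preprocessing: clean raw data into a percept."""
        return raw_input.strip().lower()

    def decide(self, percept):
        """Decision-making: pick an action from knowledge + percept."""
        return self.knowledge.get(percept, "explore")

    def act(self, action):
        """Effector: carry out the chosen action in the environment."""
        print("executing:", action)

    def learn(self, percept, action, reward):
        """Learning + feedback loop: remember actions that worked well."""
        if reward > 0:
            self.knowledge[percept] = action

agent = SimpleAgent()
percept = agent.perceive("  Obstacle Ahead ")
agent.act(agent.decide(percept))
agent.learn(percept, "turn_left", reward=1)   # feedback from environment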
PEAS Description of an IA
1. Performance Measure
The performance measure defines the criteria for evaluating the success of the
agent’s actions. These criteria are specific to the agent’s goals and can include
various metrics depending on the application.
Examples: safety, journey time, and passenger comfort for a self-driving car; accuracy of responses and response time for a virtual assistant.
2. Environment
The environment is the context within which the agent operates. It includes
everything the agent can interact with, and it can vary significantly based on the
type of agent.
Examples: roads, traffic, and pedestrians for a self-driving car; a messaging platform and the internet for a virtual assistant.
3. Actuators
Actuators are the components that allow the agent to take actions and affect the
environment. These vary based on whether the agent is a physical entity or a
software system.
Examples: steering, accelerator, and brakes for a self-driving car; a display screen and speakers for a virtual assistant.
4. Sensors
Sensors are the components that allow the agent to perceive its environment.
They provide the necessary input data for the agent to understand and react to
its surroundings.
Examples: cameras, LIDAR, and GPS for a self-driving car; typed or spoken user input for a virtual assistant.
Example PEAS description: Self-Driving Car
Performance Measure:
Safe, efficient navigation
Journey time
Passenger comfort
Environment:
Roads and traffic
Pedestrians
Road signs and signals
Actuators:
Steering mechanism
Accelerators and brakes
Indicator lights
Horn
Sensors:
Cameras
LIDAR and RADAR systems
Ultrasonic sensors
GPS and inertial measurement units

Example PEAS description: Virtual Assistant
Performance Measure:
Accuracy of responses
User satisfaction and engagement
Task completion rate (e.g., setting reminders, answering queries)
Response time
Environment:
Messaging platform, internet, website
Actuators:
Display screen
Speakers
Network interface (for sending emails, fetching information)
Sensors:
Typed or spoken user input (interpreted with NLP)
Using the PEAS framework helps in designing and understanding the specific
requirements and capabilities of an intelligent agent, ensuring that all aspects of
its operation are considered and integrated effectively.
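As a small illustration, the hypothetical Python sketch below records a PEAS description as a simple data structure; the field values are taken from the self-driving car example above.

```python
# A tiny, hypothetical way to record PEAS descriptions as data.
from dataclasses import dataclass

@dataclass
class PEAS:
    agent: str
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    agent="Self-driving car",
    performance=["safety", "efficient navigation", "comfort"],
    environment=["roads", "traffic", "pedestrians"],
    actuators=["steering", "accelerator", "brakes", "horn"],
    sensors=["cameras", "LIDAR", "GPS", "speedometer"],
)
print(self_driving_car)
```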
Simple reflex agents
Simple reflex agents ignore the rest of the percept history and act only on the
basis of the current percept. Percept history is the history of all that an agent
has perceived to date. The agent function is based on the condition-action
rule: a rule that maps a state (i.e., a condition) to an action. If the condition
is true, the action is taken; otherwise, it is not. This agent function only
succeeds when the environment is fully observable. For simple reflex agents
operating in partially observable environments, infinite loops are often
unavoidable; it may be possible to escape from them if the agent can
randomize its actions.
Example: A thermostat that turns on the heating when the temperature drops
below a certain threshold.
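A condition-action rule like the thermostat's can be written directly, as in this minimal Python sketch (the threshold and action names are invented for illustration):

```python
# Simple reflex agent: acts only on the current percept,
# using a condition-action rule (no percept history).

THRESHOLD = 18.0  # illustrative temperature threshold in Celsius

def thermostat_agent(current_temperature):
    if current_temperature < THRESHOLD:   # condition
        return "heating_on"               # action
    return "heating_off"

print(thermostat_agent(15.2))  # -> heating_on
print(thermostat_agent(21.0))  # -> heating_off
```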
Problems with simple reflex agents:
They have very limited intelligence.
They have no knowledge of the non-perceptual parts of the current state.
Their rule sets can become too large to generate and store.
They cannot adapt to changes in the environment.
Goal-based agents
These agents take decisions based on how far they currently are from
their goal (a description of desirable situations). Every action is intended to
reduce the distance from the goal. This gives the agent a way to choose
among multiple possibilities, selecting the one that reaches a goal state. The
knowledge that supports its decisions is represented explicitly and can be
modified, which makes these agents more flexible. They usually require search
and planning. A goal-based agent's behaviour can easily be changed.
Example: A navigation system that finds the shortest path to a destination.
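Goal-based behaviour usually involves search; the hypothetical sketch below uses breadth-first search to find a shortest route on a toy road map (the map itself is invented for illustration):

```python
# Goal-based agent via search: find a shortest path to the goal state.
# The road map below is invented for illustration.
from collections import deque

roads = {
    "home":     ["junction", "park"],
    "junction": ["home", "mall", "office"],
    "park":     ["home", "office"],
    "mall":     ["junction"],
    "office":   ["junction", "park"],
}

def shortest_path(start, goal):
    """Breadth-first search: returns a shortest list of stops."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in roads[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(shortest_path("home", "office"))  # -> ['home', 'junction', 'office']
```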
Utility-based agents
When there are multiple possible alternatives, utility-based agents are used
to decide which one is best. They choose actions based on a preference
(utility) for each state. Sometimes achieving the desired goal is not enough:
we may look for a quicker, safer, or cheaper trip to reach a destination.
Agent happiness should be taken into consideration, and utility describes how
"happy" the agent is. Because of the uncertainty in the world, a utility agent
chooses the action that maximizes the expected utility. A utility function maps
a state onto a real number that describes the associated degree of happiness.
Example: An autonomous trading agent that makes investment decisions to
maximize profit.
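The hypothetical sketch below picks the action with the highest expected utility, given invented outcome probabilities and utilities for each action:

```python
# Utility-based agent: choose the action maximizing expected utility.
# Probabilities and utilities below are invented for illustration.

actions = {
    # action: list of (probability, utility) outcome pairs
    "highway":  [(0.8, 10), (0.2, -5)],   # fast, but risk of jams
    "backroad": [(1.0, 6)],               # slower but certain
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> highway 7.0
```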
Learning Agent:
A learning agent in AI is an agent that can learn from its past
experiences; that is, it has learning capabilities. It starts acting with basic
knowledge and then adapts automatically through learning.
A learning agent has mainly four conceptual components:
Learning element: responsible for making improvements by learning from the environment.
Critic: provides feedback on how well the agent is doing with respect to a fixed performance standard.
Performance element: responsible for selecting external actions.
Problem generator: suggests actions that will lead to new and informative experiences.
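Here is a minimal, hypothetical Python sketch of these four components (all names invented for illustration): the critic scores actions, the learning element updates value estimates, the performance element exploits what has been learned, and the problem generator explores.

```python
import random

# Hypothetical learning agent: critic scores each action, the learning
# element updates value estimates, the performance element picks the
# best-known action, and the problem generator explores new ones.

values = {"left": 0.0, "right": 0.0}   # learned action values

def critic(action):
    return 1.0 if action == "right" else -1.0   # feedback signal

def performance_element():
    return max(values, key=values.get)          # exploit knowledge

def problem_generator():
    return random.choice(list(values))          # explore

for step in range(20):
    action = problem_generator() if step < 10 else performance_element()
    feedback = critic(action)
    # learning element: move the estimate toward the feedback
    values[action] += 0.5 * (feedback - values[action])

print(values)  # after learning, 'right' should have the higher value
```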
Agent Environment in AI
The environment is where the agent lives and operates; it provides the agent
with something to sense and act upon. An environment is mostly said to be
non-deterministic.
Features of Environment
An environment can have various features from the point of view of an agent:
1. Fully Observable vs Partially Observable:
o If an agent's sensors give it access to the complete state of the
environment at each point in time, then the environment is fully
observable; otherwise, it is partially observable.
o Chess is fully observable, whereas driving is partially observable.
2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine
the next state of the environment, then such an environment is called a
deterministic environment.
o A stochastic environment is random in nature and cannot be determined
completely by an agent.
o In a deterministic, fully observable environment, the agent does not need
to worry about uncertainty.
o Example:
Chess – there are only a limited number of possible moves for a piece in
the current state, and these moves can be determined.
Self-driving cars – the actions of a self-driving car are not unique; they
vary from time to time.
3. Competitive vs Collaborative:
o An agent is said to be in a competitive environment when it competes
against another agent to optimize the output.
o The game of chess is competitive as the agents compete with each other
to win the game which is the output.
o An agent is said to be in a collaborative environment when multiple
agents cooperate to produce the desired output.
o When multiple self-driving cars are found on the roads, they cooperate
with each other to avoid collisions and reach their destination which is
the output desired.
4. Single-agent vs Multi-agent:
o If only one agent is involved in an environment and operates by itself,
then it is called a single-agent environment.
o However, if multiple agents are operating in an environment, then it is
called a multi-agent environment.
o Agent design problems in a multi-agent environment are different from
those in a single-agent environment.
o The game of football is multi-agent as it involves 11 players in each
team.
5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating, then
such an environment is called a dynamic environment; otherwise, it is
called a static environment.
o Static environments are easy to deal with because an agent does not need
to keep looking at the world while deciding on an action.
o However, in a dynamic environment, agents need to keep looking at the
world before each action.
o Taxi driving is an example of a dynamic environment, whereas crossword
puzzles are an example of a static environment.
6. Discrete vs Continuous:
o If there are a finite number of percepts and actions that can be performed
within an environment, then such an environment is called a discrete
environment; otherwise, it is called a continuous environment.
o A chess game comes under a discrete environment, as there is a finite
number of moves that can be performed.
o A self-driving car is an example of a continuous environment, since its
perceptions and actions (positions, speeds, steering angles) cannot be
enumerated.
7. Known vs Unknown:
o Known and unknown are not actually features of an environment; they
describe an agent's state of knowledge for performing an action.
o In a known environment, the results of all actions are known to the
agent, while in an unknown environment, the agent needs to learn how it
works in order to perform an action.
o It is quite possible for a known environment to be partially observable
and for an unknown environment to be fully observable.
8. Accessible vs Inaccessible:
o If an agent can obtain complete and accurate information about the
environment's state, then such an environment is called an accessible
environment; otherwise, it is called inaccessible.
o An empty room whose state can be defined by its temperature is an
example of an accessible environment.
o Information about every event happening on Earth is an example of an
inaccessible environment.
9. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only
the current percept is required for the action.
o However, in a sequential environment, an agent requires memory of past
actions to determine the next best action.
PEAS stands for:
P = Performance Measure
E = Environment
A = Actuators
S = Sensors
It is a valuable method for creating and evaluating intelligent systems. Now, let
us see how the PEAS representation can be implemented in self-driving cars.
Performance Measure:
A self-driving car is evaluated on criteria such as safety, efficient
navigation, journey time, and passenger comfort.
Environment:
A self-driving car's environment refers to the exterior circumstances in
which it operates. It must be capable of driving on a variety of terrains
(hilly roads) or road conditions (wet surfaces). Traffic signals, road
signs (speed limits, exits, bends, etc.), pedestrians, and other vehicles on
the road also influence driving conditions.
Actuators:
Actuators enable self-driving cars to interact with the environment. The
steering wheel, accelerator, brake pedals, indicators, and horn
are examples of actuators. The steering wheel is an important actuator as
it enables the car to move and change directions. The accelerator and
brake pedals are also significant since they allow control of the car's
speed. Moreover, indicators and the horn are important as they will
enable the car to communicate lane changes to drivers or pedestrians.
Sensors:
Sensors are essential for self-driving cars to sense their
environment. Cameras, GPS, speedometers, accelerometers, and
sonars are examples of sensors. Cameras are especially important as they
allow the car to detect objects such as other vehicles, pedestrians, and
traffic signs. Another essential sensor is the GPS, which assists the car in
determining its location and planning its route. The speedometer also
monitors the vehicle's speed, while the accelerometer measures its
acceleration. Moreover, the sonar detects objects in the vicinity of the car
using sound waves, allowing the car to drive safely and quickly,
particularly by identifying objects outside the camera's range.
Examples of Agents
Driverless cars
Medical diagnosis AI
Playing soccer
Knitting a sweater
PEAS descriptions for several example agents:

Agent: Hospital Management System
Performance Measure: Patient's health, admission process, payment
Environment: Hospital, doctors, patients, nurses, staff
Actuators: Prescription, diagnosis, scan report
Sensors: Symptoms, patient's response

Agent: Automated Car Drive
Performance Measure: Comfortable trip, safety, maximum distance
Environment: Roads, traffic, vehicles
Actuators: Steering wheel, accelerator, brake, mirror
Sensors: Camera, GPS, odometer

Agent: Subject Tutoring
Performance Measure: Maximize scores, improvement in students
Environment: Classroom, desk, chair, board, staff, students
Actuators: Smart displays, corrections
Sensors: Eyes, ears, notebooks

Agent: Part-Picking Robot
Performance Measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts; bins
Actuators: Jointed arms and hand
Sensors: Camera, joint angle sensors

Agent: Satellite Image Analysis System
Performance Measure: Correct image categorization
Environment: Downlink from orbiting satellite
Actuators: Display of scene categorization
Sensors: Color pixel arrays

Agent: Vacuum Cleaner
Performance Measure: Cleanliness, security, battery life
Environment: Room, table, carpet, floors
Actuators: Wheels, brushes
Sensors: Camera, dirt detection sensor

Agent: Chatbot System
Performance Measure: Helpful, accurate responses
Environment: Messaging platform, internet, website
Actuators: Sender mechanism, typer
Sensors: NLP algorithms

Agent: Autonomous Vehicle
Performance Measure: Efficient navigation, safety, time, comfort
Environment: Roads, traffic, pedestrians, road signs
Actuators: Brake, accelerator, steering, horn
Sensors: Cameras, GPS, speedometer