03 - Agents

An intelligent agent is a software entity that uses sensors to perceive its environment and actuators to take actions. It can learn from its experiences to better accomplish tasks. Intelligent agents work through a cycle of perception, decision-making, and action using sensors to gather information and actuators to perform tasks in the real world. They are designed to be rational so their actions are logical given their perceptions and goals.


Intelligent Agent

❑ An intelligent agent is a software entity that enables artificial intelligence to take action.

❑ An intelligent agent senses the environment and uses actuators to initiate actions, conducting operations on behalf of users.

❑ Simply put, an Intelligent Agent (IA) is an entity that makes decisions.
What is an Agent?
Anything that recognizes the environment through sensors and acts upon the environment through actuators is called an AGENT. An agent performs the tasks of recognition, thinking, and acting cyclically. An agent can be:

▪ Human Agent: eyes, ears, and other organs as sensors; hands, legs, and vocal tract as actuators.

▪ Robotic Agent: cameras, infrared range finders, and other sensors; various motors as actuators.

▪ Software Agent: a set of programs designed for particular tasks, like checking the contents of received emails and grouping them as junk, important, or very important.
What is an Intelligent Agent?
An intelligent agent is an agent that can perform specific, predictable, and repetitive tasks for applications with some level of autonomy. These agents can learn while performing tasks and exhibit some human mental properties such as knowledge, belief, and intention. A thermostat, Alexa, and Siri are examples of intelligent agents.

The main functions of intelligent agents are:

▪ Perception: Done through sensors

▪ Actions: Initiated through actuators.


4 Rules for an AI Agent

❑ Rule 1: Must have the ability to recognize the environment.

❑ Rule 2: Decisions are made from observations.

❑ Rule 3: Decision should result in actions.

❑ Rule 4: The action must be rational.


How do Intelligent Agents Work?
Sensors, actuators, and effectors are the three main components through which intelligent agents work:

▪ Sensors: Devices that detect environmental changes and send the information to other devices. An agent observes the environment through its sensors. E.g.: camera, GPS, radar.

▪ Actuators: Machine components that convert energy into motion. Actuators are responsible for moving and controlling a system. E.g.: electric motors, gears, rails.

▪ Effectors: Devices that affect the environment. E.g.: wheels, display screen.
How do Intelligent Agents Work?
▪ Percepts, or inputs from the environment, are received by the intelligent agent through its sensors.

▪ Using these observations, the agent applies artificial intelligence to make decisions.

▪ Actuators then trigger actions.

▪ The percept history and past actions influence future decisions.
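The cycle above can be sketched as a tiny perceive-decide-act loop. The `Thermostat` class, its method names, and its temperature thresholds are illustrative assumptions for this sketch, not a real API:

```python
class Thermostat:
    """A toy intelligent agent: keeps room temperature near a target."""

    def __init__(self, target=21.0):
        self.target = target
        self.percept_history = []          # past percepts can influence decisions

    def perceive(self, temperature):
        """Sensor input: record the current temperature."""
        self.percept_history.append(temperature)
        return temperature

    def decide(self, temperature):
        """Use the observation to pick an action."""
        if temperature < self.target - 0.5:
            return "heat_on"
        if temperature > self.target + 0.5:
            return "heat_off"
        return "idle"

    def act(self, temperature):
        """One full cycle: sense, decide, trigger the actuator."""
        return self.decide(self.perceive(temperature))

agent = Thermostat(target=21.0)
print(agent.act(18.0))  # heat_on
print(agent.act(23.0))  # heat_off
```

The 0.5-degree dead band is only there so the toy agent does not oscillate around the target; any real controller would tune this.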
Characteristics of Intelligent Agents
❑ Intelligent agents have some level of autonomy that allows them to perform certain tasks on their own.

❑ An IA can learn even as tasks are carried out.

❑ They can interact with other entities such as agents, humans, and systems.

❑ New rules can be accommodated.

❑ They exhibit goal-oriented behavior.
Structure of an AI Agent
An IA's structure consists of three main parts:

▪ Architecture: The machinery that the agent executes on, or the devices consisting of actuators and sensors. A PC or a camera is an example.

▪ Agent function: Maps a percept sequence to an action. The percept sequence is the history of everything the intelligent agent has perceived so far.

▪ Agent program: An implementation of the agent function. Executing the agent program on the physical architecture produces the agent function.
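A hedged sketch of the distinction: the agent function is the abstract mapping from percept sequences to actions, and the agent program is code that implements it. The table-driven construction below is a common textbook illustration; the `table_driven_agent_program` name and the vacuum-world table entries are assumptions made for this example:

```python
def table_driven_agent_program(table):
    """Return an agent program backed by an explicit percept-sequence table."""
    percepts = []                        # the percept sequence seen so far

    def program(percept):
        percepts.append(percept)
        # The agent function: look up the full percept sequence.
        return table.get(tuple(percepts), "noop")

    return program

# Hypothetical table for a two-location vacuum world.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move_right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

program = table_driven_agent_program(table)
print(program(("A", "clean")))   # move_right
print(program(("B", "dirty")))   # suck
```

The table grows exponentially with the length of the percept sequence, which is why practical agent programs compute the mapping rather than store it.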
Rational Agent
For artificial intelligence, rational (logic-based) actions are important because the agent receives a positive reward for each best possible action and a negative reward for each wrong action.

An ideal rational agent is one that performs the best possible action and maximizes the performance measure. Actions are selected from the alternatives based on:
▪ Percept sequence
▪ Built-in knowledge base

The actions of a rational agent make it most successful for the given percept sequence. The highest-performing agents are rational agents.
Rationality
Rationality is the quality of being reasonable, sensible, and having good judgment. It is concerned with the actions and results expected given what the agent has perceived. Rationality is measured based on the following:

➢ Performance measure

➢ Prior knowledge of the environment

➢ Best possible actions the agent can perform

➢ Percept sequence
PEAS Representation in AI

PEAS is a model on which an AI agent works; it is used to group similar agents. The environment, actuators, and sensors of the respective agent are considered when defining the performance measure in PEAS.

PEAS stands for Performance Measure, Environment, Actuators, and Sensors.

(1) Performance Measure: The performance of each agent varies based on its percepts, and the success of an agent is described using the performance measure.
PEAS Representation in AI
(2) Environment: The surroundings of the agent at every instant. The environment changes with time if the respective agent is set in motion.
Environments are of 5 major types:
o Fully observable & Partially observable
o Episodic & Sequential
o Static & Dynamic
o Discrete & Continuous
o Deterministic & Stochastic

(3) Actuator: Part of the agent which initiates the action and delivers the
output of action to the environment.

(4) Sensors: Part of the agent which takes inputs for the agent.
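The four PEAS components can be recorded as a simple data structure when comparing agents. The sketch below uses a Python `dataclass`; the field values are taken from the vacuum cleaner example:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """One PEAS description: what the agent is scored on and what it works with."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

vacuum = PEAS(
    performance_measure=["cleanness", "efficiency", "battery life", "security"],
    environment=["room", "table", "wood floor", "carpet", "obstacles"],
    actuators=["wheels", "brushes", "vacuum extractor"],
    sensors=["camera", "dirt detection sensor", "cliff sensor", "bump sensor"],
)
print(vacuum.sensors[0])  # camera
```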
Examples of PEAS
Agent: Vacuum Cleaner
▪ Performance Measure: Cleanness, Efficiency, Battery life, Security
▪ Environment: Room, Table, Wood floor, Carpet, Various obstacles
▪ Actuators: Wheels, Brushes, Vacuum extractor
▪ Sensors: Camera, Dirt detection sensor, Cliff sensor, Bump sensor, Infrared wall sensor

Agent: Automated Car Drive
▪ Performance Measure: Comfortable trip, Safety, Maximum distance
▪ Environment: Roads, Traffic, Vehicles
▪ Actuators: Steering wheel, Accelerator, Brake, Mirror
▪ Sensors: Camera, GPS, Odometer

Agent: Hospital Management System
▪ Performance Measure: Patient's health, Admission process, Payment
▪ Environment: Hospital, Doctors, Patients
▪ Actuators: Prescription, Diagnosis, Scan report
▪ Sensors: Symptoms, Patient's response
Agent Types
Based on their capabilities and level of perceived intelligence, intelligent agents can be grouped into five main categories.

❑ Simple Reflex Agents

❑ Model-Based Reflex Agents

❑ Goal-Based Agents

❑ Utility-Based Agents

❑ Learning Agent
Simple Reflex Agents
These agents act on the current percept rather than the percept history. The agent function is based on the condition-action rule: a rule that maps a condition to an action (e.g., a room-cleaner agent works only if there is dirt in the room). A fully observable environment is ideal for the success of the agent function. The challenges of the simple reflex agent design approach are:

❑ Very limited intelligence.

❑ No knowledge of unperceived parts of the current state.

❑ The rule table can grow too large to store.

❑ They do not adapt to environmental changes.
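A minimal sketch of a simple reflex agent for the room-cleaner example, assuming a toy two-location vacuum world (locations A and B); the rule set and action names are illustrative:

```python
def simple_reflex_vacuum(percept):
    """Condition-action rules mapping the current percept directly to an action."""
    location, status = percept
    if status == "dirty":        # condition: dirt present -> action: suck
        return "suck"
    if location == "A":          # condition: clean and at A -> move to B
        return "move_right"
    return "move_left"           # condition: clean and at B -> move to A

print(simple_reflex_vacuum(("A", "dirty")))  # suck
print(simple_reflex_vacuum(("B", "clean")))  # move_left
```

Note that the function takes only the current percept; there is no memory at all, which is exactly the limitation the model-based variant addresses.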


Model-Based Reflex Agents
Model-based reflex agents consider the percept history in their actions. These agents can work well even in environments that are not fully observable. They use a model of the world to choose their actions and maintain an internal state.

❑ Model: Knowledge of how things happen in the world; this is why the agent is called model-based.

❑ Internal State: A representation of the unobserved aspects of the current state, built from the percept history.

Updating the agent state requires information about:

▪ How the world evolves.
▪ How the world is affected by the agent's actions.
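The internal-state idea can be sketched as follows, again assuming the toy two-location vacuum world; `ModelBasedVacuum` and its "unknown" marker for unobserved squares are illustrative assumptions:

```python
class ModelBasedVacuum:
    def __init__(self):
        # Internal state: believed status of each location. "unknown" stands
        # for the parts of the environment the agent has not yet observed.
        self.world = {"A": "unknown", "B": "unknown"}

    def update_state(self, percept):
        """Fold the new percept into the internal state."""
        location, status = percept
        self.world[location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "dirty":
            return "suck"
        other = "B" if location == "A" else "A"
        # Use the internal state: travel only if the other square may be dirty.
        if self.world[other] in ("dirty", "unknown"):
            return "move_right" if location == "A" else "move_left"
        return "idle"

agent = ModelBasedVacuum()
print(agent.choose_action(("A", "clean")))  # move_right
```

Unlike the simple reflex agent, this one can stop moving once its state says both squares are clean, even though no single percept tells it that.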
Goal-Based Agents
❑ Goal-based agents use goal information to describe desirable situations. They have higher capabilities than model-based reflex agents because the knowledge supporting a decision is explicitly modeled, which allows it to be modified.

❑ For these agents, knowledge about the current state of the environment is not sufficient to decide what to do. The agent needs to know its goal, a description of the desirable situations, and chooses actions to achieve it.

❑ These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved.
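One way a goal-based agent can consider sequences of actions is to search for a plan that reaches a state satisfying the goal. The sketch below uses breadth-first search over a toy one-dimensional world; the `plan` and `successors` names and the world itself are assumptions made for illustration:

```python
from collections import deque

def plan(start, goal, successors):
    """Return a shortest action sequence from start to the goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:                   # goal: the desirable situation
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                             # goal unreachable

# Toy 1-D world: positions 0..4, actions move left or right.
def successors(s):
    moves = []
    if s > 0:
        moves.append(("left", s - 1))
    if s < 4:
        moves.append(("right", s + 1))
    return moves

print(plan(0, 3, successors))  # ['right', 'right', 'right']
```

The returned list is exactly the "long sequence of possible actions" the agent weighs before committing to anything.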
Utility-Based Agents

❑ These agents make choices based on utility. The extra component of utility measurement makes them more advanced than goal-based agents: they act based not only on goals but also on the best way to achieve them.

❑ Utility-based agents are useful when the agent must choose the best action from multiple alternatives. The efficiency of each action in achieving the goal is evaluated by mapping each state to a real number (its utility).
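The state-to-real-number mapping can be sketched directly: evaluate the state each action leads to and pick the action whose state has the highest utility. The route-choosing example and its travel times are invented for illustration:

```python
def choose_by_utility(actions, result, utility):
    """Pick the action whose resulting state has maximal utility."""
    return max(actions, key=lambda a: utility(result(a)))

# Toy example: a delivery agent choosing among routes.
routes = ["highway", "back_roads", "toll_road"]
travel_time = {"highway": 30, "back_roads": 55, "toll_road": 25}

def result(route):
    return route                     # here the "state" is just the chosen route

def utility(state):
    return -travel_time[state]       # shorter trips score higher

print(choose_by_utility(routes, result, utility))  # toll_road
```

All three routes reach the goal; the utility function is what distinguishes the best way of getting there.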
Learning Agents
Agents that can learn from their previous experience are learning agents. They start acting with basic knowledge and then, through learning, can act and adapt automatically. Learning agents can learn, analyze their performance, and improve it. Learning agents have the following conceptual components:

▪ Learning element: Enables learning from previous experience.

▪ Critic: Provides feedback on how well the agent is doing with respect to a fixed performance standard.

▪ Performance element: Selects the actions to be performed.

▪ Problem generator: Suggests exploratory actions that lead to new and informative experiences.
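The four components can be sketched as a minimal feedback loop in which the critic's feedback drives the learning element's estimates. All class and function names, and the two-action toy task, are assumptions for illustration:

```python
import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # learned value estimates
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        """Select the action currently believed to perform best."""
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        """Suggest an exploratory action to gain new experience."""
        return random.choice(list(self.values))

    def learning_element(self, action, feedback):
        """Improve the estimate for `action` using the critic's feedback."""
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (feedback - self.values[action]) / n

def critic(action):
    """Fixed performance standard: action 'b' is the rewarded one."""
    return 1.0 if action == "b" else 0.0

random.seed(0)                       # make the exploratory run reproducible
agent = LearningAgent(["a", "b"])
for _ in range(20):
    action = agent.problem_generator()
    agent.learning_element(action, critic(action))
print(agent.performance_element())
```

The running-average update used here is one simple choice for the learning element; any estimator driven by the critic's feedback would fit the same structure.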
Multi-Agent Systems
❑ In a multi-agent system (MAS), an agent interacts with neighboring agents.

❑ A complex task is divided into multiple smaller tasks, each of which is assigned to a distinct agent.

❑ Applications:
o Modeling complex systems
o Smart grids
o Computer networks
o Cloud computing, social networking, security, routing
Properties of a MAS
▪ Coordination: managing agents to collaboratively reach their goals
o Consensus - achieving a global agreement
o Controllability - using certain regulations to transmit a state
o Synchronization - aligning each agent in time with other agents
o Connection - connecting to each other
o Formation - organizing in a structure

▪ Communication: communicating among agents

▪ Fault detection and isolation (FDI): detecting and isolating faulty agents, since a faulty agent may affect the agents it collaborates with

▪ Task allocation: allocation of tasks to agents considering the associated cost and
time

▪ Localization: each agent has limited view (only its neighbors)


Agent Organization
The way agents communicate and connect:

▪ Flat: all agents are regarded as equals

▪ Hierarchy: agents have tree-like relations

▪ Holon: agents are organized into multiple groups, known as holons, based on particular features (e.g., heterogeneity); holons are then arranged in multiple layers

▪ Coalition: agents are temporarily grouped based on their goal

▪ Team: agents create a team and define a team goal, which differs from their individual goals

▪ Matrix: each agent is managed by at least two head agents

▪ Congregation: agents in a location form a congregation to achieve requirements that they cannot achieve alone
Applications: Robotic Logistics and Planning
▪ Coordinating a large swarm of robots requires advanced planning algorithms, and this is a fundamental cornerstone of MAS research. Although the Amazon warehouse robots act behind the scenes, they have a significant impact on delivery efficiency, helping Amazon distribute its vast number of sales (over 5 billion in 2017).

▪ Included within this coordination system are task allocation, scheduling, and path-finding algorithms, which are all pertinent and active research topics for MAS.

▪ Agent logistics and planning is not constrained to indoor, well-defined environments or robot-only teams; it can also be used for human-agent teaming in complex environments, for example in disaster response.

https://medium.com/swlh/whats-hot-in-multi-agent-systems-4b0f348e68bd
Applications: Autonomous Vehicles
▪ The application of AI to vehicles has come into its own in recent years, with many
exciting developments and experiments happening all the time. Driving on the road
is inherently a multi-agent system; the road consists of other drivers, pedestrians,
cyclists etc., and now (or in the near future) self-driving cars. The development of
self-driving cars makes use of MAS research through simulation of agents’ (human
or otherwise) actions on the road and could even be utilized to enable traffic
negotiation between autonomous vehicles.

▪ Autonomous vehicle research represents an interesting crossover of AI technologies. Complex deep learning and computer vision methods are used to make sense of the world around the car, but this information is still fed into an agent planning system, relying heavily on existing MAS research. Without the collaboration between the two technologies, it would not be possible to put self-driving cars on the road in the near future.
Applications: Games
▪ Due to their entirely digital format and potentially complex environments, video games
are a terrific resource for experimenting with MAS research. They can provide both
cooperative and competitive environments, and sometimes both at the same time.

▪ One line of research used a multiplayer capture-the-flag game, where two teams of two agents competed in a complex 3D environment, requiring cooperation within a competitive situation. This led to the development of new techniques for approaching complex MAS problems.
