03 - Agents
▪ Human Agent: eyes, ears, and other organs serve as sensors, while hands, legs, and the vocal tract serve as actuators.
▪ Robotic Agent: cameras and infrared range finders serve as sensors, and various motors serve as actuators.
▪ Software Agent: a set of programs designed for particular tasks, such as checking the contents of received emails and grouping them as junk, important, or very important.
What is an Intelligent Agent?
An intelligent agent is an agent that can perform specific, predictable, and repetitive tasks for applications with some degree of autonomy. These agents can learn while performing their tasks and are attributed human-like mental properties such as knowledge, belief, and intention. A thermostat, Alexa, and Siri are examples of intelligent agents.
▪ Effectors: devices that affect the environment, e.g., wheels and a display screen.
How do Intelligent Agents Work?
▪ Percepts, or inputs from the environment, are received by the intelligent agent through its sensors.
▪ Using these acquired observations, the agent applies artificial intelligence to make decisions (a minimal sketch follows this list).
▪ The agent's decisions are guided by goal-oriented habits, i.e., behavior directed toward achieving its goals.
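As a concrete illustration of this cycle, the sketch below runs a toy single-room agent through three sense-decide-act iterations. The Environment and IntelligentAgent classes and their method names are illustrative assumptions, not part of the original material.

```python
# Minimal sketch of the sense-decide-act cycle of an intelligent agent.
# The environment model and all names here are illustrative assumptions.

class Environment:
    """Toy single-room environment that is either dirty or clean."""
    def __init__(self):
        self.dirty = True

    def percept(self):
        # Sensor reading delivered to the agent.
        return {"dirty": self.dirty}

    def apply(self, action):
        # Actuator output changes the environment.
        if action == "clean":
            self.dirty = False


class IntelligentAgent:
    def decide(self, percept):
        # Use the acquired observation to choose an action (goal: a clean room).
        return "clean" if percept["dirty"] else "do_nothing"


env = Environment()
agent = IntelligentAgent()
for _ in range(3):
    p = env.percept()           # 1. percepts are received through sensors
    action = agent.decide(p)    # 2. the agent decides using the percept
    env.apply(action)           # 3. the action is applied through actuators
    print(p, "->", action)
```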
Structure of an AI Agent
The structure of an AI agent consists of three main parts:
▪ Architecture: the machinery (sensors and actuators) on which the agent executes.
▪ Agent function: a mapping from percept sequences to actions.
▪ Agent program: an implementation of the agent function that runs on the architecture.
Rational Agent
An ideal rational agent is an agent that performs the best possible action and maximizes the performance measure. The action is selected from the alternatives based on:
▪ Percept sequence
▪ Built-in knowledge base
A rational agent chooses the actions that make it most successful for the given percept sequence. The highest-performing agents are rational agents.
Rationality
Rationality is the quality of being reasonable, sensible, and having good judgment. It concerns the actions and results expected, given what the agent has perceived. Rationality is measured based on the following:
➢ Performance measure
➢ Percept sequence
PEAS Representation in AI
PEAS describes an agent in terms of its Performance measure, Environment, Actuators, and Sensors:
(1) Performance Measure: the criterion used to judge the success of the agent's behavior.
(2) Environment: the surroundings in which the agent operates.
(3) Actuators: the parts of the agent that carry out actions and deliver their output to the environment.
(4) Sensors: the parts of the agent that take inputs from the environment.
Examples of PEAS
Agent: Automated taxi driver
▪ Performance Measure: safe, fast, legal, comfortable trip; maximize profits
▪ Environment: roads, other traffic, pedestrians, customers
▪ Actuators: steering wheel, accelerator, brake, signal, horn
▪ Sensors: cameras, sonar, speedometer, GPS, odometer, keyboard
Types of AI Agents
❑ Simple Reflex Agents
❑ Model-Based Reflex Agents
❑ Goal-Based Agents
❑ Utility-Based Agents
❑ Learning Agents
Simple Reflex Agents
These agents act on the current percept rather than on the percept history. The agent function is based on the condition-action rule, a rule that maps a condition to an action (e.g., a room-cleaner agent cleans only if there is dirt in the room); a minimal sketch appears after this list. The agent function succeeds only when the environment is fully observable. The challenges of the simple reflex agent design approach are:
▪ Very limited intelligence
▪ No knowledge of the non-perceptual parts of the current state
▪ Not adaptive to changes in the environment
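The sketch below illustrates the condition-action rule idea with the classic two-location vacuum world; the location names and the rule table are illustrative assumptions.

```python
# A minimal sketch of a simple reflex agent for a two-location vacuum world.
# The locations and the rule table are illustrative assumptions.

CONDITION_ACTION_RULES = {
    # (location, status) -> action
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def simple_reflex_agent(percept):
    """Act on the current percept only; no percept history is kept."""
    location, status = percept
    return CONDITION_ACTION_RULES[(location, status)]

print(simple_reflex_agent(("A", "dirty")))   # -> suck
print(simple_reflex_agent(("A", "clean")))   # -> move_right
```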
Model-Based Reflex Agents
❑ Model: knowledge about how things happen in the world, which is why it is called a model-based agent.
❑ Internal State: a representation of the unobserved aspects of the current state, maintained from the percept history (a minimal sketch follows this list).
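A minimal sketch of a model-based reflex agent, assuming a two-room vacuum world: the internal state records beliefs about the room the current percept cannot see.

```python
# A minimal sketch of a model-based reflex agent: the internal state tracks
# features of the world that the current percept alone cannot reveal.
# The two-room world model used here is an illustrative assumption.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: belief about each room, updated from the percept history.
        self.believed_status = {"A": "unknown", "B": "unknown"}

    def act(self, percept):
        location, status = percept
        self.believed_status[location] = status      # update the model
        if status == "dirty":
            return "suck"
        # Use the model: visit the other room only if it might still be dirty.
        other = "B" if location == "A" else "A"
        if self.believed_status[other] != "clean":
            return "move_right" if location == "A" else "move_left"
        return "do_nothing"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "dirty")))   # -> suck
print(agent.act(("A", "clean")))   # -> move_right (room B is still unknown)
```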
Goal-Based Agents
❑ For these agents, knowledge about the current state of the environment is not sufficient to decide what to do. A goal, i.e., a description of a desirable situation, must also be specified. The agent needs to know this goal and chooses its actions so as to achieve it.
❑ Before deciding whether the goal is achieved, these agents may have to consider a long sequence of possible actions (see the search sketch after this list).
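A minimal sketch of the goal-based idea, assuming a small grid world: the agent searches over sequences of actions until one of them reaches the goal state. Breadth-first search, the grid size, and the goal location are illustrative assumptions.

```python
# A minimal sketch of a goal-based agent: it searches over sequences of
# actions until it finds one that reaches the goal state.
from collections import deque

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def plan_to_goal(start, goal, grid_size=4):
    """Breadth-first search for an action sequence that achieves the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:                 # goal: description of the desirable situation
            return plan
        for name, (dx, dy) in ACTIONS.items():
            nxt = (state[0] + dx, state[1] + dy)
            if 0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

print(plan_to_goal(start=(0, 0), goal=(2, 1)))   # -> ['up', 'right', 'right'] (one shortest plan)
```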
Utility-Based Agents
❑ These agents make their choices based on utility. An additional utility-measurement component makes them more advanced than goal-based agents: they act based not only on the goal but also on the best way to achieve it.
❑ Utility-based agents are useful when the agent has to choose the best action from multiple alternatives. How well each action achieves the goal is evaluated by mapping each resulting state to a real number (see the sketch below).
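A minimal sketch of utility-based action selection: each candidate action's resulting state is mapped to a real number, and the action with the highest-utility outcome is chosen. The transition model and the utility function are illustrative assumptions.

```python
# A minimal sketch of a utility-based agent. The candidate actions and the
# utility function are illustrative assumptions.

def utility(state):
    """Map a state to a real number; higher means better."""
    # Example: prefer states close to a target value of 10.
    return -abs(state - 10)

def utility_based_agent(current_state, actions):
    """Choose the action leading to the highest-utility successor state."""
    def outcome(action):
        return current_state + action        # simple deterministic transition model
    return max(actions, key=lambda a: utility(outcome(a)))

print(utility_based_agent(current_state=7, actions=[-1, 0, 1, 2]))   # -> 2 (moves closest to 10)
```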
Learning Agents
Agents capable of learning from their previous experience are learning agents. They start acting with their basic knowledge and then, through learning, adapt and improve automatically. Learning agents can learn, analyze their performance, and improve it. A learning agent has the following conceptual components (a wiring sketch follows this list):
▪ Learning element: responsible for making improvements, using feedback from the critic.
▪ Performance element: responsible for selecting external actions.
▪ Critic: provides feedback on how well the agent is doing with respect to a fixed performance standard.
▪ Problem generator: acts as a feedback component that makes suggestions leading to new and informative experiences.
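The sketch below wires these components together for a toy two-action problem; the reward values, the exploration rate, and the action names are illustrative assumptions.

```python
# A minimal sketch of a learning agent: a performance element selects actions,
# a critic scores them against a performance standard, and a learning element
# updates the agent's estimates from that feedback. All numbers are illustrative.
import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # learned estimate per action
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        # Mostly exploit what has been learned; sometimes explore (problem generator).
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learning_element(self, action, feedback):
        # Update the estimate from the critic's feedback (incremental average).
        self.counts[action] += 1
        self.values[action] += (feedback - self.values[action]) / self.counts[action]

def critic(action):
    # Performance standard: action "b" is genuinely better on average.
    return {"a": 0.2, "b": 1.0}[action] + random.uniform(-0.1, 0.1)

agent = LearningAgent(actions=["a", "b"])
for _ in range(200):
    act = agent.performance_element()
    agent.learning_element(act, critic(act))
print(max(agent.values, key=agent.values.get))   # -> usually "b"
```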
Multi-Agent Systems
❑ In a multi-agent system (MAS), each agent interacts with its neighboring agents.
❑ Applications:
o Modeling complex systems
o Smart grids
o Computer networks
o Cloud computing, social networking, security, routing
Properties of a MAS
▪ Coordination: managing the agents so that they collaboratively reach their goals
o Consensus - achieving a global agreement (a minimal sketch follows this list)
o Controllability - steering the agents toward a desired state through appropriate regulations
o Synchronization - aligning each agent in time with the other agents
o Connection - maintaining connections between the agents
o Formation - organizing the agents into a structure
▪ Fault detection and isolation (FDI): detecting and isolating faulty agents, since a faulty agent may infect the agents it collaborates with
▪ Task allocation: allocation of tasks to agents considering the associated cost and
time
▪ Holon: agents are organized into multiple groups, known as holons, based on particular features (e.g., heterogeneity); holons are then arranged in multiple layers
▪ Team: agents form a team and define a team goal, which differs from their individual goals
▪ Congregation: agents in a location form a congregation to achieve requirements that they cannot achieve alone
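As an illustration of the consensus property, the sketch below has four agents on a ring repeatedly average their values with their neighbours' values until they agree; the topology and initial values are illustrative assumptions.

```python
# A minimal sketch of consensus in a MAS: each agent repeatedly averages its
# value with its neighbours' values. The ring topology and initial values
# are illustrative assumptions.

def consensus_step(values, neighbours):
    """One round of local averaging over the given neighbourhood graph."""
    return [
        sum([values[i]] + [values[j] for j in neighbours[i]]) / (1 + len(neighbours[i]))
        for i in range(len(values))
    ]

values = [1.0, 5.0, 9.0, 13.0]                              # each agent starts with its own value
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring of four agents
for _ in range(50):
    values = consensus_step(values, neighbours)
print([round(v, 3) for v in values])                        # -> all values close to the average 7.0
```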
Applications: Robotic Logistics and Planning
▪ Coordinating a large swarm of robots requires advanced planning algorithms, and this is a fundamental cornerstone of MAS research. Although the Amazon warehouse robots act behind the scenes, they have a significant impact on delivery efficiency, helping Amazon distribute its vast number of sales (over 5 billion in 2017).
▪ Included within this coordination system are task allocation, scheduling, and path-finding algorithms, which are all pertinent and active research topics for MAS (a greedy task-allocation sketch follows).
https://medium.com/swlh/whats-hot-in-multi-agent-systems-4b0f348e68bd
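As an illustration of the task-allocation problem mentioned above, the sketch below assigns each task to the robot that can currently take it on most cheaply. The greedy strategy, cost numbers, and robot/task names are illustrative assumptions, not Amazon's actual algorithm.

```python
# A minimal sketch of greedy task allocation: each task goes to the robot
# whose current load plus the task's cost is smallest. All values are
# illustrative assumptions.

def greedy_allocation(costs):
    """costs[robot][task] = cost for that robot to perform that task."""
    assignment = {}
    load = {robot: 0.0 for robot in costs}
    for task in next(iter(costs.values())):
        # Pick the robot with the smallest load after taking on this task.
        robot = min(costs, key=lambda r: load[r] + costs[r][task])
        assignment[task] = robot
        load[robot] += costs[robot][task]
    return assignment

costs = {
    "robot1": {"shelfA": 2.0, "shelfB": 5.0, "shelfC": 4.0},
    "robot2": {"shelfA": 3.0, "shelfB": 1.0, "shelfC": 6.0},
}
print(greedy_allocation(costs))   # -> {'shelfA': 'robot1', 'shelfB': 'robot2', 'shelfC': 'robot1'}
```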
Applications: Autonomous Vehicles
▪ The application of AI to vehicles has come into its own in recent years, with many
exciting developments and experiments happening all the time. Driving on the road
is inherently a multi-agent system: the road is shared by other drivers, pedestrians, cyclists, and so on, and now (or in the near future) self-driving cars. The development of
self-driving cars makes use of MAS research through simulation of agents’ (human
or otherwise) actions on the road and could even be utilized to enable traffic
negotiation between autonomous vehicles.
▪ Research based on a multiplayer capture-the-flag game, in which two teams of two agents competed in a complex 3D environment and had to cooperate within a competitive situation, led to the development of new techniques for approaching complex MAS problems.