What is an agent?
Definition: In artificial intelligence, an agent is a computer program or system that is designed to perceive its environment, make decisions, and take actions to achieve a specific goal or set of goals. The agent operates autonomously, meaning it is not directly controlled by a human operator.
In other words, an agent is anything that can be viewed as:
Perceiving its environment through sensors, and
Acting upon that environment through actuators.
Structure of an AI agent
To understand the structure of intelligent agents, we should be familiar with architecture and agent programs. Architecture is the machinery that the agent executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC. An agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of all that an agent has perceived to date) to an action.
In other words, Agent = Architecture + Agent Program
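To make the idea of an agent function concrete, here is a minimal sketch in Python (not from the original notes): a table-driven agent program that maps the percept sequence seen so far to an action. The lookup table and the percept format are hypothetical.

# Minimal sketch of an agent program: the agent function maps the
# percept sequence (everything perceived so far) to an action.
# The lookup table below is a hypothetical illustration.

def make_table_driven_agent(table):
    percept_sequence = []          # history of all percepts so far

    def agent_program(percept):
        percept_sequence.append(percept)
        # Look up the action for the full percept history.
        return table.get(tuple(percept_sequence), "NoOp")

    return agent_program

# Hypothetical table for a two-square vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Dirty")))   # -> Suck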
[Figure: an agent interacting with its environment; its goals, preferences, prior knowledge, observations, and past experiences inform the actions it takes on the environment]
Examples: Siri and online chess playing
Types of agents
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agents
Multi-Agent Systems
Hierarchical Agents
Simple Reflex Agent
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. Percept history is the history of all that an agent has perceived to date. The agent function is based on the condition-action rule: a rule that maps a state (i.e., a condition) to an action. If the condition is true, the action is taken; otherwise it is not. This agent function only succeeds when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable. It may be possible to escape from infinite loops if the agent can randomize its actions.
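As a rough illustration (assuming a simple two-location vacuum world with percepts of the form (location, status), which is not specified in these notes), a simple reflex agent can be sketched as condition-action rules applied to the current percept only:

# Sketch of a simple reflex agent: it looks only at the current percept
# and fires the first condition-action rule that matches.
# Percepts of the form (location, status) are an assumed encoding.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules: if the condition is true, take the action.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"
    return "NoOp"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left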
[Figure: a simple reflex agent; sensors pass the current percept to condition-action rules, which select the action sent to the actuators acting on the environment]
Example: Robotic vacuum cleaner
Model-Based Reflex Agent
A model-based reflex agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by using a model of the world. The agent has to keep track of an internal state which is adjusted by each percept and which depends on the percept history. The current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen.
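A minimal sketch of this idea, with a hypothetical state-update function and rule list supplied by the caller (these names are illustrative, not from the notes), could look like this:

# Sketch of a model-based reflex agent: it maintains an internal state
# (a model of the unobserved parts of the world) that is updated with
# every percept, and then applies condition-action rules to that state.

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                  # internal model of the world
        self.update_state = update_state # function: (state, action, percept) -> new state
        self.rules = rules               # list of (condition, action) pairs
        self.last_action = None

    def program(self, percept):
        # Revise the internal state using the model, the last action, and the percept.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"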
[Figure: a model-based reflex agent; sensors and an internal world model feed condition-action rules, which drive the actuators acting on the environment]
Example: Self-driving car vision
Goal-Based Agent
These kinds of agents take decisions based on how far they currently are from their goal (a description of desirable situations). Their every action is intended to reduce the distance from the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning. The goal-based agent's behavior can easily be changed.
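For example, a goal-based agent can search a transition model for a sequence of actions that reaches a goal state. The sketch below is illustrative only; the tiny state space and the function names are assumptions, not part of the notes.

from collections import deque

# Sketch of goal-based action selection: search the state space for a
# sequence of actions that reaches a state satisfying the goal test.
# 'transitions' maps a state to a list of (action, next_state) pairs.

def plan_to_goal(start, is_goal, transitions):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan                     # sequence of actions to the goal
        for action, nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                             # no plan reaches the goal

# Hypothetical tiny state space.
transitions = {"S": [("go-A", "A")], "A": [("go-G", "G")]}
print(plan_to_goal("S", lambda s: s == "G", transitions))   # -> ['go-A', 'go-G']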
Example: Searching robot
Utility-Based Agent
Agents developed with their end uses as building blocks are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough: we may look for a quicker, safer, or cheaper trip to reach a destination. Agent "happiness" should be taken into consideration, and utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number which describes the associated degree of happiness.
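A hedged sketch of expected-utility maximization, with a made-up outcome model and utility function for a route choice (none of these numbers come from the notes), might look like this:

# Sketch of utility-based action selection: each action may lead to
# several outcomes with different probabilities, and the agent picks
# the action with the highest expected utility.

def expected_utility(action, outcomes, utility):
    # outcomes[action] is a list of (probability, resulting_state) pairs.
    return sum(p * utility(state) for p, state in outcomes[action])

def choose_action(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical route choice: utility rewards shorter travel time.
utility = lambda state: -state["minutes"]
outcomes = {
    "highway": [(0.8, {"minutes": 20}), (0.2, {"minutes": 60})],  # may jam
    "backroad": [(1.0, {"minutes": 35})],
}
print(choose_action(["highway", "backroad"], outcomes, utility))  # -> highway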
Example: Route recommendation systems
Learning Agent
A learning agent in AI is the type of agent that can learn from its past experiences, i.e., it has learning capabilities. It starts acting with basic knowledge and then adapts automatically through learning. A learning agent has four main conceptual components:
Learning element: it is responsible for making improvements by learning from the environment.
Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
Performance element: it is responsible for selecting external actions.
Problem generator: this component is responsible for suggesting actions that will lead to new and informative experiences.
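As an illustrative sketch only (all class and method names here are hypothetical, not from the notes), the four components can be wired together like this:

# Sketch of how a learning agent's four components interact.

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the agent
        self.critic = critic                            # scores behaviour vs a standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)        # how well is the agent doing?
        self.learning_element(feedback)        # use the feedback to improve
        suggestion = self.problem_generator()  # maybe try something new and informative
        return suggestion or self.performance_element(percept)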
Example: Google's self-learning AI AlphaZero
Uses of agents
Robotics: Agents can be used to control robots and automate tasks in manufacturing, transportation, and other
industries.
Smart homes and buildings: Agents can be used to control heating, lighting, and other systems in smart homes
and buildings, optimizing energy use and improving comfort.
Transportation systems: Agents can be used to manage traffic flow, optimize routes for autonomous vehicles, and
improve logistics and supply chain management.
Healthcare: Agents can be used to monitor patients, provide personalized treatment plans, and optimize
healthcare resource allocation.
Finance: Agents can be used for automated trading, fraud detection, and risk management in the financial
industry.
Games: Agents can be used to create intelligent opponents in games and simulations, providing a more
challenging and realistic experience for players.
Natural language processing: Agents can be used for language translation, question answering, and chatbots that
can communicate with users in natural language.
Cybersecurity: Agents can be used for intrusion detection, malware analysis, and network security.
Environmental monitoring: Agents can be used to monitor and manage natural resources, track climate change,
and improve environmental sustainability.
Social media: Agents can be used to analyze social media data, identify trends and patterns, and provide
personalized recommendations to users.
Search algorithms in AI
Types of search algorithms
Each of these algorithms will have:
A problem graph, containing the start node S and the goal node G.
A strategy, describing the manner in which the graph will be traversed to get to G.
A fringe, which is a data structure used to store all the possible states (nodes) that you can go to from the current states.
A tree, which results from traversing to the goal node.
A solution plan, which is the sequence of nodes from S to G.
Depth First Search
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. It uses a last-in, first-out strategy and hence it is implemented using a stack.
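A minimal DFS sketch using an explicit stack is shown below. The small graph is a hypothetical example (the original figure is not reproduced here), chosen so that DFS from S reaches G along the path quoted next.

# Sketch of depth-first search with an explicit stack (last in, first out).

def dfs(graph, start, goal):
    stack = [(start, [start])]          # each entry: (node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()        # LIFO: expand the most recently added node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push neighbours in reverse so the first listed neighbour is expanded first.
        for neighbour in reversed(graph.get(node, [])):
            stack.append((neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"]}
print(dfs(graph, "S", "G"))             # -> ['S', 'A', 'B', 'C', 'G']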
Path followed: S -> A -> B -> C -> G
Breadth First Search
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a 'search key') and explores all of the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level. It is implemented using a queue.
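A matching BFS sketch using a queue, on the same hypothetical graph as the DFS sketch above, finds the shallower route quoted next.

from collections import deque

# Sketch of breadth-first search using a queue (first in, first out).

def bfs(graph, start, goal):
    queue = deque([(start, [start])])   # each entry: (node, path taken so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()    # FIFO: expand the oldest node first
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "D"], "A": ["B"], "B": ["C"], "C": ["G"], "D": ["G"]}
print(bfs(graph, "S", "G"))             # -> ['S', 'D', 'G']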
Path followed: S -> D -> G
What is an Agent in AI
Structure of an AI agent
Agents — Example of Agents
Type of agents
Heuristic Search Techniques
BFS Algo
Depth First Search