AI - Intelligent Agent 4-1

Intelligent Agent

Lubna Yasmin Pinky


Assistant Professor
Dept. of CSE, MBSTU,
Santosh, Tangail.
Instructional Objectives
• Define an agent
• Define an intelligent agent
• Define a rational agent
• Discuss different types of environment
• Explain classes of intelligent agents
• Discuss applications of intelligent agents
Agents in AI

• In artificial intelligence, an agent is a computer program or system
that is designed to perceive its environment, make decisions, and
take actions to achieve a specific goal or set of goals.
• The agent operates autonomously, meaning it is not directly
controlled by a human operator.
• Agents can be classified into different types based on their
characteristics, such as whether they are reactive or proactive,
whether they operate in a fixed or dynamic environment, and whether
they are single- or multi-agent systems.
Agents in AI
• Reactive agents are those that respond to immediate stimuli from
their environment and take actions based on those stimuli.
• Proactive agents, on the other hand, take initiative and plan ahead to
achieve their goals.
• The environment in which an agent operates can also be fixed or
dynamic. Fixed environments have a static set of rules that do not
change, while dynamic environments are constantly changing and
require agents to adapt to new situations.

• Multi-agent systems involve multiple agents working together to
achieve a common goal. These agents may have to coordinate their
actions and communicate with each other to achieve their objectives.
• Agents are used in a variety of applications, including robotics,
gaming, and intelligent systems.
Classification of Agents
Agent System
• An AI system is composed of an agent and its environment. The
agents act in their environment. The environment may contain other
agents.
• An agent is anything that can perceive its environment
through sensors and acts upon that environment through effectors.
• A human agent has sensory organs such as eyes, ears, nose, tongue and skin
parallel to the sensors, and other organs such as hands, legs, mouth, for
effectors.
• A robotic agent has cameras and infrared range finders for sensors,
and various motors and actuators for effectors.
• A software agent has encoded bit strings as its programs and actions.
Agent Terminology
• Performance Measure of Agent − The criteria that determine how
successful an agent is.
• Behavior of Agent − The action that an agent performs after any
given sequence of percepts.
• Percept − The agent's perceptual input at a given instance.
• Percept Sequence − The history of everything the agent has
perceived to date.
• Agent Function − A map from the percept sequence to an action.
Structure of Agent System
Agent’s structure can be viewed as −
•Agent = Architecture + Agent Program
•Architecture = the machinery that an agent executes on.
•Agent Program = an implementation of an agent function.
Example of Agent: Vacuum-cleaner world

• Percepts: location and state of the environment/contents, e.g.,
[A, Dirty]
• Actions: Left, Right, Suck, NoOp
• Agent's function → look-up table
• For many agents this is a very large table
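The look-up-table idea can be sketched in Python. This is a toy sketch: the names (`TABLE`, `table_driven_agent`) are illustrative, and since this two-square world only has four possible percepts, the current percept alone is used as the key rather than the full percept sequence.

```python
# Toy sketch of the vacuum-world agent function as a look-up table.
# Percepts are (location, status) pairs, e.g. ("A", "Dirty").
TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_driven_agent(percept):
    """Look the current percept up in the table and return the action."""
    return TABLE[percept]

print(table_driven_agent(("A", "Dirty")))  # Suck
```

For agents with richer percepts (or keys that are whole percept sequences), this table quickly becomes far too large to enumerate, which is exactly the point the slide makes.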

Artificial Intelligence a modern approach


Rational Agent

• Artificial intelligence is defined as the study of rational agents.
• A rational agent could be anything that makes decisions, such as a
person, firm, machine, or software.
• It carries out the action with the best outcome after considering past and
current percepts (the agent's perceptual inputs at a given instance).
• Rationality is the status of being reasonable, sensible, and
having good judgment.
• A rational agent always performs the right action, where the right action
is the one that causes the agent to be most successful for the
given percept sequence. The problem the agent solves is characterized
by its Performance Measure, Environment, Actuators, and Sensors (PEAS).
Rationality
Perfect Rationality:
Assumes that the rational agent knows everything and will take the
action that maximizes its utility.
Human beings do not satisfy this definition of rationality.
Task Environment
Before we design an intelligent agent, we must specify its task
environment.

PEAS:
P: Performance Measure
E: Environment
A: Actuators
S: Sensors

These settings must be specified before an intelligent agent can be designed.


PEAS
Ex: Consider the task of designing an automated taxi driver.
Performance Measure: safe, fast, legal, comfortable trip,
maximize profits.
Environment: Roads, Other traffic, pedestrians, customers.
Actuators: Steering wheel, accelerator, brake, signal, horn.
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard.
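One way to make such a specification concrete is to record it as plain data. A minimal sketch, with the values copied from the taxi-driver example above (the variable name `taxi_peas` is illustrative):

```python
# PEAS specification for the automated taxi driver, recorded as a dict.
taxi_peas = {
    "performance": ["safe", "fast", "legal", "comfortable trip",
                    "maximize profits"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators":   ["steering wheel", "accelerator", "brake", "signal", "horn"],
    "sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "engine sensors", "keyboard"],
}

print(sorted(taxi_peas))  # the four PEAS components
```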
Agent Environment
Environments in which agents operate can be defined in different ways.
It is helpful to view the following definitions as referring to the
way the environment appears from the point of view of the agent itself.
Environment Types
Environment: Observability

• Fully Observable: All of the environment relevant to the action
being considered is observable. Such environments are convenient,
since the agent is freed from the task of keeping track of changes
in the environment.
• Partially Observable: The relevant features of the environment are
only partially observable.

Example: Fully observable: chess; partially observable: poker.


Environment: Determinism

• Deterministic: The next state of the environment is completely
determined by the current state and the agent's action, e.g., image
analysis.
• Stochastic: If an element of interference or uncertainty occurs,
then the environment is stochastic. A partially observable
environment will appear to be stochastic to the agent, e.g., Ludo.
• Strategic: An environment whose state is wholly determined by the
preceding state and the actions of multiple agents is called
strategic, e.g., chess.
Environment: Episodicity
• Episodic: Subsequent episodes do not depend on what actions
occurred in previous episodes.
• Sequential: The agent engages in a series of connected episodes.
Environment: Dynamism
• Static Environment: Does not change from one state to the next
while the agent is considering its course of action. The only
changes to the environment are those caused by the agent itself.
• Dynamic Environment: Changes over time independently of the actions
of the agent; thus, if an agent does not respond in a timely
manner, this counts as a choice to do nothing.
Environment: Continuity

• Discrete/Continuous: If the number of distinct percepts and actions
is limited, the environment is discrete; otherwise it is continuous.
Single agent vs. Multi agent

• If the environment contains other intelligent agents, the agent
needs to be concerned about strategic, game-theoretic aspects of the
environment (for either cooperative or competitive agents).
• Most engineering environments do not have multi-agent properties,
whereas most social and economic systems get their complexity from
the interactions of (more or less) rational agents.
Example of Task Environment
Classes of Intelligent Agents
Intelligent agents are grouped into five classes based on their
degree of perceived intelligence and capability:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
Condition-Action Rule
• Condition-Action Rule − A rule that maps a state (condition) to an
action.
Simple Reflex agent:
• The simple reflex agents are the simplest agents.
• These agents take decisions on the basis of the current percept and
ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept
history during its decision and action process.
• The simple reflex agent works on condition-action rules, which means
it maps the current state to an action. For example, a room-cleaner
agent works only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They have no knowledge of non-perceptual parts of the current state.
• The rule table is usually too big to generate and to store.
• They are not adaptive to changes in the environment.
Simple Reflex agent:
Toy example: Vacuum world.

• Percepts: the robot senses its location and "cleanliness," i.e.,
location and contents, e.g., [A, Dirty], [B, Clean].
With 2 locations, we get 4 different possible sensor inputs.
• Actions: Left, Right, Suck, NoOp
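The condition-action rules for this toy world can be written directly as code. A minimal sketch (the function name is illustrative); note that the decision uses only the current percept, never the percept history:

```python
def simple_reflex_vacuum_agent(percept):
    """Decide from the current percept only -- no percept history."""
    location, status = percept
    if status == "Dirty":    # condition-action rule: dirt -> Suck
        return "Suck"
    if location == "A":      # clean square: move to the other square
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```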
Model-based reflex agent
• The Model-based agent can work in a partially observable environment,
and track the situation.
• A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it
is called a Model-based agent.
• Internal State: It is a representation of the current state based on
percept history.
• These agents have the model, i.e., knowledge of the world, and
perform actions based on that model.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
Model-based reflex agents
• Know how the world evolves
• e.g., an overtaking car gets closer from behind
• Know how the agent's actions affect the world
• e.g., turning the wheel clockwise takes you right
• Model-based agents update their state
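A sketch of how an internal state turns the reflex vacuum agent into a model-based one. All names and the very simple world model here are illustrative assumptions, not code from the slides:

```python
class ModelBasedVacuumAgent:
    """Reflex agent that also tracks what it knows about each square."""

    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status       # how the world evolves: record percept
        if status == "Dirty":
            self.model[location] = "Clean"  # how actions affect the world: Suck cleans
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                   # model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # Suck
print(agent.act(("A", "Clean")))  # Right (B's status is still unknown)
```

Unlike the simple reflex agent, this one can issue NoOp once its model says both squares are clean, something impossible to decide from the current percept alone.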



Goal based agents
• The knowledge of the current state of the environment is not always
sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable
situations.
• Goal-based agents expand the capabilities of the model-based agent
by having the "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible
actions before deciding whether the goal is achieved or not. Such
consideration of different scenarios is called searching and
planning, which makes an agent proactive.
Goal-based agents

• A reflex agent brakes when it sees brake lights. A goal-based agent
reasons:
– Brake light → the car in front is stopping → I should stop → I should apply the brake
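The "searching" a goal-based agent does can be sketched with breadth-first search over the vacuum world, where a state is (location, dirt at A, dirt at B). This is an illustrative sketch; the helper names (`successors`, `plan`) are invented for the example:

```python
from collections import deque

def successors(state):
    """Yield (action, next_state) pairs for the 2-square vacuum world."""
    loc, dirt_a, dirt_b = state
    yield "Left", ("A", dirt_a, dirt_b)
    yield "Right", ("B", dirt_a, dirt_b)
    if loc == "A":
        yield "Suck", ("A", False, dirt_b)
    else:
        yield "Suck", ("B", dirt_a, False)

def plan(start, goal_test):
    """Breadth-first search for an action sequence reaching the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

# From A with both squares dirty, plan until no dirt remains.
print(plan(("A", True, True), lambda s: not s[1] and not s[2]))
# ['Suck', 'Right', 'Suck']
```

The agent commits to the whole sequence only after verifying, in its model, that the sequence reaches the goal, which is exactly what distinguishes it from a reflex agent.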



Utility based agents
• These agents are similar to the goal-based agent but add an extra
component of utility measurement, which makes them different by
providing a measure of success at a given state.
• Utility-based agents act based not only on goals but also on the
best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible
alternatives and an agent has to choose the best action to perform.
• The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
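Choosing the action whose resulting state has the highest utility can be sketched as follows; the toy `utility` and `result` functions are invented for illustration (states are just numbers, and utility is higher the closer a state is to a target of 10):

```python
def choose_action(state, actions, result, utility):
    """Pick the action leading to the highest-utility next state."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy world: states are numbers; actions add to the state; the
# utility function prefers states close to a target value of 10.
utility = lambda s: -abs(s - 10)
result = lambda s, a: s + a

print(choose_action(7, [-1, 1, 2], result, utility))  # 2, since 7+2=9 is closest to 10
```

A goal-based agent can only say whether a state satisfies the goal; the real-valued utility lets this agent rank several goal-satisfying (or partially satisfying) alternatives against each other.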
Learning agents
• A learning agent in AI is a type of agent that can learn from its
past experiences; that is, it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and
adapt automatically through learning.
• A learning agent has mainly four conceptual components:
• Learning element: It is responsible for making improvements by
learning from the environment.
• Critic: The learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
• Performance element: It is responsible for selecting external actions.
• Problem generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze performance, and
look for new ways to improve that performance.
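The four components can be wired together in a minimal skeleton. The score-table update rule below is an invented illustration of the learning loop, not a specific algorithm from the slides:

```python
class LearningAgent:
    """Skeleton wiring the four conceptual components together."""

    def __init__(self, actions):
        # The performance element's learned knowledge: a score per action.
        self.scores = {a: 0.0 for a in actions}
        self.last_action = None

    def performance_element(self, percept):
        """Select the external action with the best learned score."""
        self.last_action = max(self.scores, key=self.scores.get)
        return self.last_action

    def critic(self, reward):
        """Feedback w.r.t. a fixed performance standard (here: a reward)."""
        return reward

    def learning_element(self, feedback):
        """Improve the performance element using the critic's feedback."""
        if self.last_action is not None:
            self.scores[self.last_action] += feedback

    def problem_generator(self):
        """Suggest an exploratory action for a new, informative experience."""
        return min(self.scores, key=self.scores.get)

agent = LearningAgent(["left", "right"])
action = agent.performance_element(percept=None)   # "left" (ties broken by order)
agent.learning_element(agent.critic(reward=-1.0))  # critic: that went badly
print(agent.performance_element(percept=None))     # "right"
```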
Learning agents (Taxi driver)

• Performance element
• How it currently drives
• The taxi driver makes a quick left turn across 3 lanes
• The critic observes the shocking language of the passenger and other drivers and signals a bad action
• The learning element tries to modify the performance element for the future
• The problem generator suggests experimenting with the brakes on different road conditions
• Exploration vs. Exploitation
• Learning experience can be costly in the short run:
• shocking language from other drivers
• less tip
• fewer passengers



The Big Picture: AI for Model-Based Agents

[Diagram relating the model-based agent to AI subfields: Planning, Action, Decision Theory, Reinforcement Learning, Game Theory, Knowledge, Learning, Logic, Machine Learning, Probability, Statistics, Heuristics, Inference.]



Applications of Intelligent Agents

• Intelligent personal assistants: Siri, Alexa, and Google Assistant.
• Autonomous robots: the Roomba vacuum cleaner and the Amazon
delivery robot.
• Gaming agents: chess-playing agents and poker-playing agents.
• Fraud detection agents
• Traffic management agents
End of Class
