Lecture 3

This lecture provides an overview of artificial intelligence, focusing on intelligent agents and their rationality, types, and environments. It discusses various agent types, including simple reflex, model-based, goal-based, utility-based, and learning agents, along with their characteristics and applications. It also covers problem solving through search algorithms and the formulation of problems in the context of agent behavior.


Artificial Intelligence

Intelligent Agents
An agent is just something that acts
Rationality

■ What is rational at a given time depends on four things:
- The performance measure that defines the criterion for success
- The agent's prior knowledge of the environment
- The actions that the agent can perform
- The agent's percept sequence to date
Rational Agent

■ For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
■ Vacuum-cleaner agent?
Agents and Environments
An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that
environment through actuators.
A human agent has eyes, ears, and other organs for sensors
and hands, legs, vocal tract, and so on for actuators.

Examples: human, robot, and software agents


Vacuum-Cleaner Agent
■ The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps.
■ The "geography" of the environment is known a priori, but the dirt distribution and the initial location of the agent are not.
■ The only available actions are Left, Right, and Suck.
■ The agent correctly perceives its location and whether that location contains dirt.
Omniscience

■ An omniscient agent knows the actual outcomes of its actions and can act accordingly.
■ Omniscience is impossible in reality.
Omniscience, Perfection

■ Rationality is NOT the same as perfection.
■ Rationality maximizes expected performance.
■ Perfection maximizes actual performance.
Exploration, Learning

■ Doing actions in order to modify future percepts, sometimes called information gathering, is an important part of rationality.
■ An agent performs such actions to improve its future percepts.
■ This is also called exploration
- e.g., as undertaken by the vacuum-cleaner agent
■ A rational agent not only gathers information but also learns as much as possible from what it perceives.
Agent Autonomy

■ The capacity to compensate for partial or incorrect prior knowledge by learning.
■ An agent is called autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).
■ A truly autonomous agent should be able to operate successfully in a wide variety of environments.
Task Environment
■ PEAS
- P: Performance measure
- E: Environment
- A: Actuators
- S: Sensors
■ The first step in designing an agent must be to define the task environment.
PEAS - Example
■ Automated Taxi Driver Agent
- Performance measure: safe, correct destination, minimum fuel consumption, minimum wear and tear, fast, legal, comfortable trip, maximized profit
- Environment: roads, other traffic, pedestrians, customers, stray animals, police cars, signals, potholes
- Actuators: steering wheel, accelerator, brake, signal, horn
- Sensors: TV cameras, sonar, speedometer, accelerometer, GPS, odometer, engine sensors, keyboard, microphone
Agent Type and PEAS

■ Medical diagnosis system
- Performance measure: healthy patients, minimized costs
- Environment: patients, hospital, staff
- Actuators: display of questions, tests, diagnoses, treatments, referrals
- Sensors: keyboard entry of symptoms, findings, patients' answers
■ Satellite image analysis system
- Performance measure: correct image characterization
- Environment: downlink from orbiting satellite
- Actuators: display of scene categorization
- Sensors: color pixel arrays
■ Part-picking robot
- Performance measure: percentage of parts in correct bins
- Environment: conveyor belt with parts, bins
- Actuators: jointed arm and hand
- Sensors: cameras, joint angle sensors
■ Refinery controller
- Performance measure: maximized purity, yield, safety
- Environment: refinery, operators
- Actuators: valves, pumps, heaters, displays
- Sensors: temperature, pressure, chemical sensors
Environment Types
■ Fully observable vs. partially observable:
- Fully observable: the agent's sensors give it access to the complete state of the environment at each point in time.
- An environment may be partially observable because of noisy or inaccurate sensors.
■ Deterministic vs. stochastic:
- Deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent.
- If the environment is partially observable, it may appear to be stochastic.
■ Episodic vs. sequential:
- Episodic: the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action).
- The choice of action in each episode depends only on the episode itself.
Environment Types
■ Static vs. dynamic:
- Static: the environment is unchanged while the agent is deliberating.
- The environment is semi-dynamic if it does not change with the passage of time but the agent's performance score does.
■ Discrete vs. continuous:
- Discrete: a limited number of distinct states, with clearly defined percepts and actions.
■ Single agent vs. multiagent:
- Single agent: an agent operating by itself in an environment.
- Chess is a competitive multiagent environment.
- Taxi driving is a partially cooperative multiagent environment.
Structure of Agents
■ Agent program
- Implements the agent function: the mapping from percepts to actions.
- Runs on some sort of computing device, which we may call the architecture.
- The architecture may be a plain computer or may include special hardware.
Architecture

■ The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.
■ The relationship:
- Agent = architecture + program
Types of Agents
■ 5 types:
■ Simple reflex agents
- respond directly to percepts
■ Model-based reflex agents
- maintain internal state to track aspects of the world that are not evident in the current percept
■ Goal-based agents
- act to achieve their goals
■ Utility-based agents
- try to maximize their own expected "happiness", or utility
- defined by a utility function
■ Learning agents
- improve their performance through learning
Simple Reflex Agents

[Figure: simple reflex agent and its environment]
Simple Reflex Agents

• Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept and ignore the rest of the percept history.

• These agents succeed only in fully observable environments.

• A simple reflex agent does not consider any part of the percept history during its decision and action process.

• A simple reflex agent works on condition-action rules, which map the current state to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
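
To make the condition-action idea concrete, here is a minimal Python sketch for a two-square vacuum world (the locations A and B and the percept encoding are illustrative assumptions, not part of the slides):

```python
# A simple reflex vacuum agent: the action depends only on the
# current percept (location, status); percept history is ignored.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules: current percept -> action
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))  # -> Suck
print(simple_reflex_vacuum_agent(('A', 'Clean')))  # -> Right
```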
Model-based Reflex Agents

[Figure: model-based reflex agent and its environment]
Model-based Reflex Agents

A model-based agent can work in a partially observable environment and track the situation.

A model-based agent has two important components:

Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.

Internal state: a representation of the current state based on the percept history.

These agents hold a model, their knowledge of the world, and perform actions based on it.
Model-based Reflex Agents

Updating the agent's state requires information about:

• How the world evolves
• How the agent's actions affect the world
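
A minimal sketch of a model-based agent for the same two-square vacuum world (the class layout and encoding are illustrative assumptions): the internal state is updated from the percept history, and a simple model records how Suck affects the world:

```python
# A model-based reflex vacuum agent: unlike the simple reflex agent,
# it keeps an internal state built from the percept history and a
# model of how its actions change the world.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: believed status of each square (unknown at start)
        self.state = {'A': None, 'B': None}
        self.location = None

    def update_state(self, percept):
        """Fold the latest percept into the internal state."""
        location, status = percept
        self.location = location
        self.state[location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        if self.state[self.location] == 'Dirty':
            self.state[self.location] = 'Clean'  # model: Suck cleans the square
            return 'Suck'
        other = 'B' if self.location == 'A' else 'A'
        if self.state[other] != 'Clean':         # unknown or dirty: go look
            return 'Right' if other == 'B' else 'Left'
        return 'NoOp'                            # model says everything is clean

agent = ModelBasedVacuumAgent()
print(agent.choose_action(('A', 'Dirty')))  # -> Suck
print(agent.choose_action(('A', 'Clean')))  # -> Right (status of B unknown)
```

Because the agent remembers what it has already seen, it can act sensibly even when it cannot currently observe the other square.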
Goal-based Agents

[Figure: goal-based agent and its environment]
Goal-based Agents

Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.

The agent needs to know its goal, which describes desirable situations.

Goal-based agents expand the capabilities of the model-based agent by adding "goal" information.
Goal-based Agents

They choose actions so that they can achieve the goal.

These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved.

Such consideration of different scenarios is called searching and planning, and it makes an agent proactive.
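
A minimal sketch of goal-based action selection (the road data and names are illustrative assumptions): the agent predicts each action's outcome with a transition model and prefers an action whose result passes the goal test; when no single action suffices, it must search over longer sequences:

```python
# Goal-based choice via one-step lookahead: pick an action whose
# predicted result satisfies the goal test.

def goal_based_choice(state, actions, result, goal_test):
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal: search/planning needed

# Hypothetical one-step example: from Fagaras, driving to Bucharest wins.
roads = {('Fagaras', 'GoSibiu'): 'Sibiu',
         ('Fagaras', 'GoBucharest'): 'Bucharest'}
action = goal_based_choice('Fagaras',
                           actions=['GoSibiu', 'GoBucharest'],
                           result=lambda s, a: roads[(s, a)],
                           goal_test=lambda s: s == 'Bucharest')
print(action)  # -> GoBucharest
```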
Utility-based Agents

[Figure: utility-based agent and its environment]
Utility-based Agents
These agents are similar to goal-based agents but add an extra component of utility measurement, which distinguishes them by providing a measure of success in a given state.

A utility-based agent acts based not only on goals but also on the best way to achieve them.

A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
Utility-based Agents
The utility function maps each state to a real number that measures how efficiently each action achieves the goals.
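
A minimal sketch of utility-based selection in the vacuum world (the utility and transition functions are illustrative assumptions): the agent ranks actions by the utility of their predicted results:

```python
# Utility-based choice: pick the action whose predicted resulting
# state has the highest utility.

def utility(state):
    """Hypothetical utility: one point per clean square."""
    return sum(1 for status in state.values() if status == 'Clean')

def result(state, action, location):
    """Hypothetical transition model for the vacuum world."""
    new_state = dict(state)
    if action == 'Suck':
        new_state[location] = 'Clean'
    return new_state

def choose_best_action(state, location, actions):
    return max(actions, key=lambda a: utility(result(state, a, location)))

state = {'A': 'Dirty', 'B': 'Clean'}
print(choose_best_action(state, 'A', ['Suck', 'Right']))  # -> Suck
```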
Learning Agents
[Figure: learning agent and its environment; a fixed performance standard feeds the critic]
Learning Agents
A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.

It starts acting with basic knowledge and then becomes able to act and adapt automatically through learning.

A learning agent has four main conceptual components:
Learning Agents

Learning element: responsible for making improvements by learning from the environment.

Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

Performance element: responsible for selecting external actions.

Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
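
A schematic sketch of how the four components might fit together (every name and number here is an illustrative assumption, reducing each component to a small function just to show the flow of feedback):

```python
# Schematic learning-agent loop: critic -> learning element ->
# performance element, with a problem generator for exploration.

def critic(percept, performance_standard):
    """Feedback: how well is the agent doing relative to the standard?"""
    return percept['score'] - performance_standard

def learning_element(rules, feedback):
    """Improve the performance element's rules using the critic's feedback."""
    if feedback < 0:
        rules['explore_rate'] = min(1.0, rules['explore_rate'] + 0.1)
    return rules

def performance_element(percept, rules):
    """Select an external action using the current rules."""
    return 'Explore' if rules['explore_rate'] > 0.5 else 'Exploit'

def problem_generator(rules):
    """Suggest an action that should yield a new, informative experience."""
    return 'TryNewRoute'

rules = {'explore_rate': 0.2}
percept = {'score': 40}
rules = learning_element(rules, critic(percept, performance_standard=50))
print(performance_element(percept, rules))  # -> Exploit (explore_rate now 0.3)
```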
Solving Problems by Searching
Problem Solving Agent
■ A problem-solving agent decides what to do by finding a sequence of actions that leads to desirable states and hence to a solution.
■ 2 types of search algorithms:
- Uninformed search algorithms: given no information about the problem other than its definition.
- Informed search algorithms: can do quite well given some guidance on where to look for solutions.
■ Intelligent agents are supposed to maximize their performance measure.
■ Achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it.
Example
■ On holiday in Romania; currently in Arad.
■ Flight leaves tomorrow to Bucharest.
■ Formulate goal:
- be in Bucharest
■ Formulate problem:
- states: various cities
- actions: drive between cities
■ Find solution:
- sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Goal & Problem Formulation
■ Goals help organize behavior by limiting the objectives that the agent is trying to achieve and hence the actions it needs to consider.
■ Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
■ Problem formulation is the process of deciding what actions and states to consider, given a goal.
■ In general, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.
Example
One possible route:
Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest
Search and Solution

■ The process of looking for a sequence of actions to arrive at a goal is called search.
■ A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
■ Once a solution is found, the recommended actions can be carried out. This is called execution.
■ Formulate, search, execute: the design of a problem-solving agent.
Problem Definition
■ A problem can be defined by the following 5 components:
- Initial state: defines the start state.
- Actions(s): a description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s.
- Transition model, Result(s, a): returns the state that results from doing action a in state s.
Problem Definition (cont’d)
- Goal test(s): a function that, given a state, returns True or False according to whether it is a goal state.
- Path cost: an additive function which assigns a numeric cost to each path. This function also reflects the agent's own performance measure.
• A path with more than one step has a cost for each step, denoted c(s, a, s'), where a is the action and s and s' are the current and new states respectively.
■ A path, or solution, in the state space is a sequence of states connected by a sequence of actions.
■ Together, the initial state, actions, and transition model implicitly define the state space of the problem.
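
A sketch of the five components as a Python class (the layout is an illustrative assumption; the method names follow the slide's notation):

```python
# Abstract problem definition: initial state, actions, transition
# model, goal test, and additive path cost.

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state       # component 1: initial state
        self.goal_state = goal_state

    def actions(self, s):
        """Component 2, ACTIONS(s): actions executable in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """Component 3, RESULT(s, a): the transition model."""
        raise NotImplementedError

    def goal_test(self, s):
        """Component 4: is s a goal state?"""
        return s == self.goal_state

    def step_cost(self, s, a, s2):
        """c(s, a, s'): cost of one step; assumed positive."""
        return 1

    def path_cost(self, steps):
        """Component 5: additive cost of a path, a list of (s, a, s') steps."""
        return sum(self.step_cost(s, a, s2) for (s, a, s2) in steps)
```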
Find a Route - Arad to Bucharest
Problem Formulation - Example
■ Problem description: find an optimal path from Arad to Bucharest
■ Initial state = In(Arad)
■ Actions(s) = set of possible actions the agent can take:
{Go(Zerind), Go(Sibiu), Go(Timisoara)}
■ Result(s, a): Result(In(Arad), Go(Zerind)) = In(Zerind)
■ Goal test: determine whether the agent is at the goal
- can be explicit, e.g., In(Bucharest)
■ Path cost: the costs of the individual steps added together
- e.g., sum of distances, number of actions executed, etc.
- The step cost is assumed to be > 0
■ A solution is a sequence of actions leading from the initial state to a goal state; solution quality is measured by path cost.
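
A worked sketch of this formulation (the road map is abridged to the relevant cities, and breadth-first search stands in for any uninformed search algorithm):

```python
# Breadth-first search over the abridged Romania road map.

from collections import deque

roads = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Zerind': ['Arad', 'Oradea'],
    'Timisoara': ['Arad'],
    'Oradea': ['Zerind', 'Sibiu'],
    'Bucharest': ['Fagaras', 'Pitesti'],
}

def bfs(initial, goal):
    frontier = deque([[initial]])      # frontier holds paths, not just states
    explored = {initial}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:           # goal test
            return path
        for city in roads[path[-1]]:   # actions: drive to a neighboring city
            if city not in explored:
                explored.add(city)
                frontier.append(path + [city])
    return None

print(bfs('Arad', 'Bucharest'))  # -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Breadth-first search returns the solution with the fewest actions; finding the cheapest route by distance would require road costs and a cost-sensitive search.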
Vacuum World State Space Graph

■ States: the agent is in one of two locations, each of which may or may not contain dirt: 2*2*2 = 8 possible world states
■ Initial state: any state can be designated as the initial state
■ Actions(s): {Left, Right, Suck, NoOp}
■ Result(s, a): the resulting world state, e.g., <right, clean> or <left, clean>
■ Goal test: no dirt at any location
■ Path cost: 1 per action
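
A quick enumeration confirming the 2*2*2 = 8 world states (the encoding is an illustrative assumption):

```python
# Enumerate the vacuum world's state space:
# agent location x status of square A x status of square B.

from itertools import product

states = list(product(['A', 'B'],            # agent location
                      ['Clean', 'Dirty'],    # status of square A
                      ['Clean', 'Dirty']))   # status of square B
print(len(states))  # -> 8
```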
Example: The 8-puzzle

■ States: locations of each tile and the blank; 9!/2 = 181,440 reachable world states
■ Initial state: any state can be designated
■ Actions(s): {Left, Right, Up, Down}
■ Result(s, a): the new state after taking any of the above actions
■ Goal test: does the state match the given goal state?
■ Path cost: 1 per move
Example: 8-Queen Problem

■ States: any arrangement of 0 to 8 queens on the board
■ Initial state: no queens on the board
■ Actions: add a queen at any square
■ Result: the new board state with the queen added
■ Goal test: 8 queens on the board, none attacked
■ Path cost: 1 per move
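
A sketch of this formulation's action, result, and goal test (the (row, col) board encoding is an illustrative assumption):

```python
# Incremental 8-queens formulation: a board is a list of queen
# positions (row, col); the action adds a queen to the board.

def attacked(q1, q2):
    """True if two queens attack each other (row, column, or diagonal)."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def add_queen(board, square):
    """Result of the 'add a queen at square' action: a new board state."""
    return board + [square]

def goal_test(board):
    """8 queens on the board, none attacked."""
    return len(board) == 8 and not any(
        attacked(board[i], board[j])
        for i in range(len(board)) for j in range(i + 1, len(board)))

board = add_queen([], (0, 0))
print(goal_test(board))          # -> False (only 1 queen so far)
print(attacked((0, 0), (1, 1)))  # -> True (same diagonal)
```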
