CSE 359 (CEN 307) & CSE 360 (CEN 308)

Artificial Intelligence

By
Md. Palash Uddin
Assistant Professor
Dept. of CSE
Hajee Mohammad Danesh Science and Technology University, Dinajpur
Intelligent Agent
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.

• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
Agents and environments
• The agent function maps from percept histories to actions:

  f: P* → A

• The agent program runs on the physical architecture to produce f:

  agent = architecture + program
Vacuum-cleaner world
• Percepts: location and state of the environment, e.g., [A, Dirty], [A, Clean], [B, Dirty]
• Actions: Left, Right, Suck, NoOp
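To make the agent function concrete, here is a minimal Python sketch of one possible agent for this world. The percept encoding (location, status) follows the slide; the function body is one illustrative choice, not the only rational agent.

def reflex_vacuum_agent(percept):
    # Percept is (location, status), e.g. ("A", "Dirty").
    location, status = percept
    if status == "Dirty":
        return "Suck"      # clean the current square first
    if location == "A":
        return "Right"     # otherwise move to the other square
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("A", "Clean")))   # Right
print(reflex_vacuum_agent(("B", "Clean")))   # Left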
Rational (Intelligent) Agent
• Characteristics of a rational agent:
  – tries to maximize the expected value of its performance measure
    • performance measure = degree of success
  – acts on the basis of evidence obtained from its percept sequence
  – uses its built-in prior world knowledge
  – maximizes expected future performance, given only historical data
• Rational agent: an agent that does the right thing.
• Rational ≠ omniscient, clairvoyant, or guaranteed successful.
Rationality
• Rationality ⇒ information gathering, learning, autonomy
• Information gathering
  – Acting so as to modify future percepts
  – Exploration of an unknown environment
• Learning
  – Modifying prior knowledge with experience
• Autonomy
  – Learning to compensate for partial or incorrect prior knowledge
  – Becoming independent of prior knowledge
  – Being successful in a variety of environments
• Hence the importance of learning.
Performance Measure, Environment, Actuators, Sensors (PEAS)
• To design a rational agent, we must specify the task environment, which consists of PEAS (Performance measure, Environment, Actuators, Sensors).
• Taxi driver
  – Performance measure: safe, fast, legal, comfortable trip, maximize profits
  – Environment: roads, other traffic, pedestrians, customers
  – Actuators: steering, accelerator, brake, signal, horn, display
  – Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard or microphone to accept destination
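As an illustrative aside (not from the slides), a PEAS description can be written down as a simple record in Python; the field values below restate the taxi-driver example.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard/microphone"],
)
print(taxi_driver.environment)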
PEAS
• Example: Agent = robot driver
  – Performance measure:
    • Time to complete course
  – Environment:
    • Roads, other traffic, obstacles
  – Actuators:
    • Steering wheel, accelerator, brake, signal, horn
  – Sensors:
    • Optical cameras, lasers, sonar, accelerometer, speedometer, GPS, odometer, engine sensors
PEAS
• Agent: Interactive English tutor
• Performance measure:
Maximize student's score on test
• Environment:
Set of students
• Actuators:
Screen display (exercises, suggestions, corrections)
• Sensors:
Keyboard
PEAS
• Agent: Medical diagnosis system
• Performance measure:
Healthy patient, minimize costs, lawsuits
• Environment:
Patient, hospital, staff
• Actuators:
Screen display (questions, tests, diagnoses, treatments, referrals)
• Sensors:
Keyboard (entry of symptoms, findings, patient's answers)
Internet Shopping Agent
• Performance Measure :
• Environment :
• Actuators :
• Sensors :
Environment Types
• Fully observable (vs. partially observable):
  – The agent's sensors give it access to the complete state of the environment at each point in time.
• Deterministic (vs. stochastic):
  – The next state of the environment is completely determined by the current state and the action executed by the agent.
  – If the environment is deterministic except for the actions of other agents, then the environment is strategic.
  – Deterministic environments can appear stochastic to an agent (e.g., when only partially observable).
• Episodic (vs. sequential):
  – The agent's experience is divided into atomic episodes. Decisions do not depend on previous decisions/actions.
Environment Types
• Static (vs. dynamic):
  – The environment is unchanged while an agent is deliberating.
  – The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.
• Discrete (vs. continuous):
  – A discrete set of distinct, clearly defined percepts and actions.
  – Depends on how we represent, abstract, or model the world.
• Single agent (vs. multi-agent):
  – An agent operating by itself in an environment. Does another agent interfere with my performance measure?
Environment Types

task environment     | observable | deterministic/ | episodic/  | static/ | discrete/  | agents
                     |            | stochastic     | sequential | dynamic | continuous |
crossword puzzle     | fully      | deterministic  | sequential | static  | discrete   | single
chess with clock     | fully      | strategic      | sequential | semi    | discrete   | multi
taxi driving         | partial    | stochastic     | sequential | dynamic | continuous | multi
image analysis       | fully      | deterministic  | episodic   | semi    | continuous | single
part-picking robot   | partial    | stochastic     | episodic   | dynamic | continuous | single
refinery controller  | partial    | stochastic     | sequential | dynamic | continuous | single
interactive tutor    | partial    | stochastic     | sequential | dynamic | discrete   | multi
Task Environment Types

                | 8-puzzle | Backgammon | Internet Shopping | Medical diagnosis | Poker
Observable?     |    ?     |     ?      |         ?         |         ?         |   ?
Deterministic?  |    ?     |     ?      |         ?         |         ?         |   ?
Episodic?       |    ?     |     ?      |         ?         |         ?         |   ?
Static?         |    ?     |     ?      |         ?         |         ?         |   ?
Discrete?       |    ?     |     ?      |         ?         |         ?         |   ?
Single-agent?   |    ?     |     ?      |         ?         |         ?         |   ?
Structure of Intelligent Agents
(Figure: percepts flow in from the environment, actions flow out; the "?" in between is the agent program.)

Agent = architecture + program

Types of agents
• Five basic types, in order of increasing generality:
  – Table-lookup agent
  – Simple reflex agent
  – Reflex agent with state
    • keeps track of the world
    • also called a model-based reflex agent
  – Goal-based agent
  – Utility-based agent
• All of these can be turned into learning agents.
Table-lookup agent
• Uses a percept sequence/action table in memory to find the next action; implemented by a (large) lookup table.
• Drawbacks:
  – Huge table
  – Takes a long time to build the table
  – No autonomy
  – Even with learning, it needs a long time to learn the table entries
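A minimal sketch of the table-driven idea, assuming the table is a Python dict keyed by the entire percept sequence seen so far. The table entries here are invented vacuum-world examples; note how even this tiny world needs one row per possible percept history, which is the "huge table" drawback above.

def make_table_driven_agent(table):
    percepts = []                          # the ever-growing percept sequence
    def agent(percept):
        percepts.append(percept)
        # Look up the action for the whole history, not just the last percept.
        return table.get(tuple(percepts), "NoOp")
    return agent

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))    # Suck
print(agent(("A", "Clean")))    # Right (history is now two percepts long)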
Simple Reflex Agent
• Characteristics
– no plan, no goal
– do not know what they want to achieve
– do not know what they are doing

• Condition-action rule
– If condition then action
Simple reflex agent (Architecture)
Simple Reflex Agent (Program)
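A minimal Python sketch of the simple-reflex idea: interpret the current percept, then fire the first matching condition-action rule. The rule representation and helper names are assumptions for illustration, not the original program.

def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)   # what the world looks like right now
        for condition, action in rules:    # first matching rule wins
            if condition(state):
                return action
        return "NoOp"
    return agent

# Condition-action rules for the vacuum world.
rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = make_simple_reflex_agent(
    rules, lambda p: {"location": p[0], "status": p[1]})
print(agent(("B", "Dirty")))    # Suck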
Model-based Reflex Agents
• Characteristics
  – Reflex agent with internal state
  – Sensors do not provide the complete state of the world.
• Updating the internal state requires two kinds of knowledge, which together are called the model:
  • How the world evolves
  • How the agent's actions affect the world
A model-based Agent
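A sketch of the same idea in code: the internal state is updated from the old state, the last action, and the new percept, using the two kinds of model knowledge listed above. All names below are illustrative assumptions.

def make_model_based_agent(update_state, rules, initial_state):
    state, last_action = initial_state, None
    def agent(percept):
        nonlocal state, last_action
        # Model step: fold the new percept into the internal state.
        state = update_state(state, last_action, percept)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"
        return last_action
    return agent

# Vacuum-world model: remember the last seen status of each square.
def update_state(state, last_action, percept):
    location, status = percept
    new = dict(state)
    new[location] = status
    new["location"], new["status"] = location, status
    return new

rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    # Stop once both squares are known to be clean.
    (lambda s: s.get("A") == "Clean" and s.get("B") == "Clean", "NoOp"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = make_model_based_agent(update_state, rules, {})
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right (A now known clean, B still unknown)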
Goal-based agents
• Characteristics
  – Action depends on the GOAL (consideration of the future)
    • A goal is a desirable situation
  – Chooses action sequences to achieve the goal
    • Needs decision making
    • Fundamentally different from the condition-action rule
  – Search and planning
    • Appears less efficient, but is more flexible
      – because knowledge can be provided explicitly and modified
A model-based, Goal-based Agent
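Since choosing an action sequence to reach a goal is a search problem, here is a minimal sketch, assuming a deterministic successor model and breadth-first search. The toy problem and all names are invented for illustration.

from collections import deque

def plan_to_goal(successors, goal_test, start):
    # Breadth-first search for an action sequence that reaches the goal.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None    # no action sequence reaches the goal

# Toy problem: move along a number line from 0 to 3.
successors = lambda s: [("inc", s + 1), ("dec", s - 1)] if -5 <= s <= 5 else []
print(plan_to_goal(successors, lambda s: s == 3, 0))   # ['inc', 'inc', 'inc']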
Utility-based agents
• Utility function
  – Degree of happiness
  – Quality of usefulness
  – Maps the internal states to a real number
    • (e.g., game playing)
• Characteristics
  – Generates high-quality behavior
  – Rational decisions are made
  – Looks for a higher utility value
    • Expected utility maximizer
  – Can weigh several goals against each other
A Model-based, Utility-based Agent
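A minimal sketch of expected-utility maximization, assuming each action yields a probability distribution over outcomes; the numbers below are invented.

def best_action(actions, outcomes, utility):
    # Pick the action whose outcomes have the highest expected utility.
    expected = lambda a: sum(p * utility(s) for p, s in outcomes(a))
    return max(actions, key=expected)

outcomes = {
    "safe":  [(1.0, 50)],               # certain, modest payoff
    "risky": [(0.6, 100), (0.4, 0)],    # gamble with expected value 60
}
utility = lambda payoff: payoff         # here utility is just the payoff
print(best_action(["safe", "risky"], lambda a: outcomes[a], utility))   # risky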
Learning Agents
• Improve performance based on percepts
• Four components
  – Learning element
    • Makes improvements
  – Performance element
    • Selects external actions
  – Critic
    • Tells how well the agent is doing, based on a fixed performance standard
  – Problem generator
    • Suggests exploratory actions
General Model of Learning Agents
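A skeleton of how the four components might be wired together in code. This is purely illustrative; the component interfaces are assumptions, not the original figure.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic,
                 problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic              # scores behavior vs. a fixed standard
        self.problem_generator = problem_generator      # suggests exploration

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(feedback, self.performance_element)
        exploratory = self.problem_generator(percept)
        if exploratory is not None:
            return exploratory            # try something informative
        return self.performance_element(percept)

# Trivial components, just to show the wiring:
agent = LearningAgent(
    performance_element=lambda p: "NoOp",
    learning_element=lambda feedback, pe: None,
    critic=lambda p: 0.0,
    problem_generator=lambda p: None,
)
print(agent.step(("A", "Clean")))    # NoOp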
