
Introduction to AI

Outline
• What is Artificial Intelligence?
• Some Achievements
• AI’s Genesis
• Overview of Problems and Techniques
• The major sub-areas of AI
• Summary
• Exercise
Introduction to Artificial Intelligence
“There are three great events in history. One, the creation of the universe. Two, the appearance of life. The third one, which I think is equal in importance, is the appearance of artificial intelligence.” (Edward Fredkin)
• What is intelligence?
• Is it possible for a machine to be intelligent?

[Diagram: intelligence divides into natural (human, animal, other worlds?) and artificial.]
Introduction

Big questions
• Can machines think? (A. M. Turing, "Computing Machinery and Intelligence", 1950)
• And if so, how?
• And if not, why not?
• And what does this say about human beings?
• And what does this say about the mind?
• Is it possible for machines to be intelligent?
What is Intelligence?
• Webster’s Dictionary
The faculty of acquiring and applying knowledge
• More scientific
Intelligence is a measure of the success/performance of an
entity in achieving its objectives/goals by interaction with
its environment.
- The level of intelligence should be judged by considering the
difficulty of the goals, and the success in achieving them
• Intelligence quotient (IQ) is measured by tests involving a variety of skills and knowledge
- e.g. do all antonyms of the following words start with ‘c’? Open, courage, etc.
What is Intelligence?
The definition mentioned in the previous slide has
some important consequences:
• Intelligent behaviour can only be observed in the
presence of an environment
• To measure intelligence there must be goals and a
scale to determine how well these are achieved.
• The ability to “express” intelligence depends on the
“richness” of interaction with the environment, and
on the subtlety (delicacy) of the goals, as well as
internal mechanisms
• This definition clearly allows the possibility of intelligent machines.
Aspects/Characteristics of Intelligence
• An intelligent entity should have the ability to:
- interact with its environment;
- learn from information and experience;
- make rational decisions;
- deduce new facts from given ones
- make sensible deductions when insufficient facts
are available
This implies some form of getting input (i.e. senses), a way to produce output, and an ability to process the inputs so that the output has some relevance.
[Diagram: an entity senses its environment (see, hear, touch, taste, smell), processes these inputs internally (has knowledge, has understanding/intentionality, can reason), and produces outputs, exhibiting intelligent behaviour.]
Scale of Intelligence
• To be able to measure intelligence, an
environment and set of goals must clearly
be defined
• For example, if intelligence is measured only by an entity’s ability to avoid obstacles, then mice would prove to be more intelligent than a person who is blindfolded
[Diagram: Components of Intelligence: an entity interacts with its environment in pursuit of a set of goals; evaluating this interaction yields a measure of success/performance.]
Signs of Intelligence
• Learn or understand from experience
• Make sense out of ambiguous or contradictory
messages
• Respond quickly and successfully to new situations
• Use reasoning to solve problems
• Understand and infer in ordinary, rational ways
• Apply knowledge to manipulate the environment
• Think and reason
• Recognize the relative importance of different elements in a situation
Intelligent Machines: The Behaviorist View
• Is it possible for machines to be intelligent?
- Machines can exhibit intelligent behaviour but how
intelligent are they really and is this the same as “human
intelligence”?
- Many scientists believe that only things that can be directly
observed are “scientific” (behaviourism)
- Therefore, if a machine behaves “as if it were intelligent”, it
is meaningless to argue that this is an illusion
- This view can be summarised as: “If it walks like a duck, quacks like a duck and looks like a duck, it is a duck”
- Turing was of this opinion and proposed the “Turing Test”
More on AI

What is artificial intelligence?


• There is no clear consensus on the definition of AI
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent
machines, especially intelligent computer programs. It is
related to the similar task of using computers to understand
human intelligence, but AI does not have to confine itself to
methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve
goals in the world. Varying kinds and degrees of intelligence
occur in people, many animals and some machines.
Other possible AI definitions
• AI is a collection of hard problems which can be solved by humans and other living things, but for which we don’t have good algorithms
– e.g., understanding spoken natural language, medical diagnosis, learning, self-adaptation, reasoning, chess playing, proving mathematical theorems, etc.
• Definitions from the R & N book fall into four categories:
  Systems that think like humans | Systems that think rationally
  Systems that act like humans   | Systems that act rationally
  The human-centred approaches must be an empirical science, involving hypotheses and experimental confirmation; the rationalist approaches involve a combination of mathematics and engineering.
Acting Humanly: The Turing Test
Turing (1950): intelligent behaviour as the ability to achieve human-level performance in all cognitive tasks.
• The interrogator questions the system via a teletype, without physical interaction.
• Systems need to possess: knowledge, reasoning, language understanding and learning.
The Turing Machine and the Turing Test
• The Turing machine (Turing, 1936) is a universal theoretical model of computation
• The Turing Test (Turing, 1950) was designed to provide a satisfactory operational definition of intelligence
• Turing defined intelligent behaviour as the ability to achieve human-level performance in all cognitive tasks
• The test deliberately avoids direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence
The Turing Test (Acting humanly)
• A judge has to find out which of two hidden
agents is a human and which is a machine.
• To pass the total Turing Test, the computer also needs
- computer vision to perceive objects
- robotics to move the objects
Thinking Humanly: Cognitive Process
1960s “cognitive revolution”: information-processing psychology replaced behaviourism.
• Understanding how humans think requires scientific theories of the internal activities of the brain
1. At what level of abstraction? ‘Knowledge’ or ‘circuits’?
2. How to validate? Requires
   1. predicting and testing the behaviour of human subjects, or
   2. identification from neurological data
• Cognitive science and cognitive neuroscience are distinct from AI but share one principal direction.
Rules of thought
• Logic: Aristotle developed the first formal approach to reasoning in his syllogisms.
- All Greeks are mortal
- Socrates is a Greek
- ∴ Socrates is mortal

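As a minimal illustration (my own sketch, not from the slides), this syllogism can be mechanised as forward chaining over facts and “all X are Y” rules in Python:

def forward_chain(facts, rules):
    # facts: set of (predicate, individual) pairs; rules: list of (P, Q)
    # pairs meaning "for all x: P(x) implies Q(x)". Apply until fixpoint.
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, individual in list(facts):
                if pred == premise and (conclusion, individual) not in facts:
                    facts.add((conclusion, individual))
                    changed = True
    return facts

facts = {("greek", "socrates")}   # Socrates is a Greek
rules = [("greek", "mortal")]     # All Greeks are mortal
print(("mortal", "socrates") in forward_chain(facts, rules))   # True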
Acting Rationally
• Acting rationally = acting so as to achieve one’s goals, given one’s beliefs.
• Difference between acting rationally and thinking rationally: correct inference is not all of rationality.
• Need the ability to represent knowledge and reason with it, in order to reach good decisions in a wide variety of situations.
• AI: the study and construction of rational agents
Rational Agents
• An agent is an entity that perceives and acts.
• For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance.
• Computational limitations make perfect rationality unachievable; instead, design the best program for the given machine resources.
• This class is about constructing rational agents.
AI Objectives
• The objective of AI is to understand and
model cognition, intelligence, and
intelligent behaviour.
• Write the names of three skills.

Think: what is required to do these?
• Counting
• Calculating
• Doing Mathematics
• Painting a picture
• Recognising a face of a friend
• Understanding a story or a fairy tale
• Reading newspapers
• Making decisions
• Finding the shortest tour to visit a number of places
• Writing a program
• Chatting at a party with other guests


Some of AI’s achievements
• In 1997, the chess program Deep Blue of
IBM defeated the chess world champion
Garry Kasparov in a match under regular
tournament conditions
• Expert systems are routinely employed in a
variety of industries
• Machines manage to understand (recognise) spoken language
• Write ten others (assignment)
AI’s Genesis
• From philosophy - theories of reasoning and
learning have emerged
• From Mathematics - formal theories of logic,
probability, decision-making and computation
• From Psychology - tools to investigate the human mind
• From Linguistics - theory of the structure and
meaning of language
• From Computer Science - tools with which to make
AI a reality
AI’s Genesis
• People have long wondered how the human mind works, how to judge properly, how to act wisely, etc.
• The ancient Greek philosophers were already quite
advanced in their analysis of the foundations of our human
intelligence
• They were concerned with the substance and composition of the world, as well as with how we humans perceive and understand the world around us and how to live a good life
• Plato and Aristotle (about 400 BC) laid the foundations of two opposing views of how humans come to the concepts by which we understand the world around us
Rationalism
• Rationalism (Plato) is the idea that our concepts come from the ratio (reason), i.e. from our mind, and we just apply our natural concepts to our sense experience.
• Plato spoke of remembering the “natural/innate” concepts of the world system when we perceive objects as belonging to a certain kind (Object Model).

Concepts --(we just apply)--> Sense Experience
Empiricism
• Empiricism (Aristotle) is the idea that the concepts by which we classify things are derived from our sense experiences, i.e. we shape our concepts by perceiving the world system around us and gradually learn to cope with the world

Concepts <--(derived from)-- Sense Experience
The ‘critical turn’
• The German philosopher Kant (18th century) presented a foundational analysis in his “Critique of Pure Reason”, which essentially rejects both Rationalism and Empiricism, and shows that a synthesis of both is necessary to account for the processes underlying human intelligence
AI’s Genesis
• A logical reasoning calculus was conceived
and a calculating machine built by Leibniz
in the 17th century
• Leibniz wanted to resolve intellectual
arguments by calculation
• 1847: Boole developed “Boolean logic”
• 1879: Frege developed today’s predicate logic in his Begriffsschrift
• 1931: Gödel proved his incompleteness theorem
History of AI
• AI has roots in a number of scientific disciplines
– computer science and engineering (hardware and software)
– philosophy (rules of reasoning)
– mathematics (logic, algorithms, optimization)
– cognitive science and psychology (modeling high level
human/animal thinking)
– neural science (model low level human/animal brain activity)
– Linguistics, economics, etc.
• The birth of AI (1943 – 1956)
– Pitts and McCulloch (1943): simplified mathematical model of
neurons (resting/firing states) can realize all propositional
logic primitives (can compute all Turing computable functions)
– Alan Turing: Turing machine (1936) and Turing test (1950)
– Claude Shannon: information theory; possibility of chess
playing computers
– Tracing back to Boole, Aristotle, Euclid (logics, syllogisms)
• Early enthusiasm (1952 – 1969)
– 1956 Dartmouth conference
John McCarthy (Lisp);
Marvin Minsky (first neural network machine);
Allen Newell and Herbert Simon (GPS);
– Emphasis on intelligent general problem solving
GPS (means-ends analysis);
Lisp (AI programming language);
Resolution by John Robinson (basis for automatic theorem
proving);
heuristic search (A*, AO*, game tree search)
• Emphasis on knowledge (1966 – 1974)
– Domain specific knowledge is the key to overcome existing
difficulties
– Knowledge representation (KR) paradigms

• Knowledge-based systems (1969 – 1979)
– DENDRAL: the first knowledge intensive system (determining
3D structures of complex chemical compounds)
– MYCIN: first rule-based expert system (containing 450 rules
for diagnosing blood infectious diseases)
EMYCIN: an ES shell
– PROSPECTOR: first knowledge-based system that made
significant profit (geological ES for mineral deposits)
• AI became an industry (1980 – 1989)
– wide applications in various domains
– commercially available tools
• Current trends (1990 – present)
– more realistic goals
– more practical (application oriented)
– distributed AI and intelligent software agents
– resurgence of neural networks and emergence of genetic algorithms
Overview: Problems and Techniques
• Early AI researchers’ lack of understanding of the problems they tackled
• Lack of knowledge in the machine
• Difficulty in formalizing both everyday knowledge and expert knowledge
State of the Art

• Which of the following can be done at present?
1. Play a decent game of bridge.
2. Understand human speech.
3. Discover and prove a new mathematical theorem.
4. Write a funny story.
5. Translate spoken English into spoken Swedish in real
time.
6. Perform a complex surgical operation.
7. Drive along a curving mountain road.
Summary
• The objective of AI is to understand and
model cognition, intelligence and intelligent
behaviour.
• This is not a homogeneous objective.
Different researchers have different
interests which overlap and have different
foci in the field of AI
• Important aspects of this overall objective
have been studied in Computing,
Cybernetics, Linguistics, Mathematics,
Philosophy and Psychology
Summary
• AI has considerably matured over the past
decades and is now becoming increasingly
important from a commercial point of view
• So far, AI has had a number of impressive
success stories
• Some techniques passed more or less unacknowledged into mainstream Computer Science, such as Object-Oriented Programming
• But the real impact still lies ahead of us.
Assignment
• Write ten definitions of AI
• Write ten achievements of AI
• Solve the following exercises from R&N
– 1.1,1.2,1.3,1.5, 1.6, 1.7, 1.9 and 1.10

Intelligent Agents
Dr. Mohammad Shahadat Hossain
Professor
Department of Computer Science &
Engineering
University of Chittagong
Lecture Outlines
• Define Intelligent Agents

Intelligent Agents
• Definition: An intelligent agent is an entity that perceives its environment via sensors and acts rationally upon that environment with its effectors.
- Examples: human, robotic, and software agents
• Hence, an agent gets percepts one at a time, and maps this percept sequence to actions.
• Properties
– Autonomous
– Interacts with other agents
plus the environment
– Reactive to the environment
– Pro-active (goal- directed)

What do you mean,
sensors/percepts and effectors/actions?

• Humans
– Sensors: eyes (vision), ears (hearing), skin (touch), tongue (gustation/taste), nose (olfaction/smell), neuromuscular system (proprioception: an automatic sensitivity mechanism in the body that sends messages through the central nervous system, which then relays information to the rest of the body about how to react and with what amount of tension)

– Effectors: limbs, digits, eyes, tongue, mouth …

– Actions: lift a finger, turn left, walk, run, carry an object, …

• The point: percepts and actions need to be carefully defined, possibly at different levels of abstraction
How Agents Should Act
• A rational agent is one that does the right
thing.
• That is, the rational agent will act such that
it will be successful.
• A performance measure determines how
successful an agent is.

Performance Measures
• Subjective vs objective
• An outside measure may be more objective
• To create a sensible performance measure is often rather difficult
Examples of Performance Measures
• Vacuum cleaner: how much dirt removed?
• Call centre: how many calls handled?
• Teaching: how many students passed?
Rationality versus Omniscience
• An omniscient agent knows everything, including the outcome of its actions
• A rational agent works with “reasonable expectations” based on what has been perceived
• Example: seeing an old friend across the street
What is rational is determined by
• The performance measure that defines
degree of success
• The perceptions of the agent so far (the percept history/percept sequence)
• The agent’s knowledge of the environment
• The actions the agent can perform

An Ideal Rational Agent
• An ideal rational agent should, for each possible percept sequence,
do whatever actions that will maximize its performance measure
based on
(1) the percept sequence, and
(2) its built-in and acquired knowledge.
• This includes information gathering as a rational activity
• Examples
•Crossing the road
•Clock
Rationality: maximizes expected performance. Doesn’t mean
success every time.
Rationality ≠ perfection Rationality ≠ Success always.
• Types of performance measures: payoffs, false-alarm and false-dismissal rates, speed, resources required, effect on environment, etc.
Simple example: Vacuum-cleaner world
• Percepts: room and its cleanliness, e.g. [B, clean]
• Actions: Go Left, Go Right, Vacuum, Do Nothing
In this simple world, the function f (the agent function) is
simple. We can tabulate all function values in many ways.
For example:
Percept Sequence Action
[A, clean] (Go Right)
[A, dirty] (Vacuum)
[B, clean] (Go Left)
[B, dirty] (Vacuum)
[A, clean], [A, clean] (Go Right)
[A, clean], [A, dirty] (Vacuum)

Our task: fill out the table in a correct way so that the vacuum
cleaner looks intelligent.
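A minimal Python sketch (my own, not from the slides) of the tabulated agent function above, keyed on a single percept:

AGENT_TABLE = {
    ("A", "clean"): "Go Right",
    ("A", "dirty"): "Vacuum",
    ("B", "clean"): "Go Left",
    ("B", "dirty"): "Vacuum",
}

def agent_function(percept):
    # percept is a (room, status) pair such as ("A", "dirty")
    return AGENT_TABLE[percept]

print(agent_function(("A", "dirty")))   # Vacuum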
Performance measures (keeping score)
• Criterion for success of an agent’s behaviour. Keeping score:
- One point per ounce of dirt cleaned.
- One point per square cleaned up in time T.
- One point per clean square per time step, minus one per move.
• What you ask for is what you get … if you really know what to ask for.
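To make the third scoring rule concrete, here is a small sketch (my own illustration, not from the slides):

def score(history):
    # history: one (clean_square_count, moved) pair per time step;
    # award one point per clean square, minus one point per move.
    return sum(clean - (1 if moved else 0) for clean, moved in history)

print(score([(1, True), (2, False), (2, True)]))   # (1-1) + 2 + (2-1) = 3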
Ideal Mapping from Percept
Sequence to Actions
• Square root function example
• lookup table

Mapping Percept Sequences to
Actions - Lookup Table
Percept x Action z
1.0 1.000
1.1 1.048
1.2 1.095
1.3 1.140
1.4 1.183
1.5 1.224
1.6 1.264
1.7 1.303
1.8 1.341
1.9 1.378

Mapping Percept Sequences to Actions
The same mapping can be produced by a program instead of a lookup table:

function SQRT(x)
  z := 1.0
  repeat until |z*z - x| < 0.001
    z := z - (z*z - x)/(2*z)
  return z
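A direct Python transcription (my own) of the SQRT pseudocode above; this is Newton’s method, and it replaces an unbounded lookup table with a few lines of code:

def sqrt(x, tolerance=0.001):
    z = 1.0
    while abs(z * z - x) >= tolerance:
        z = z - (z * z - x) / (2 * z)   # Newton step for z*z - x = 0
    return z

print(round(sqrt(1.5), 3))   # ~1.225, matching the table entry 1.224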
Autonomy
• A system is autonomous to the extent that its own behavior is
determined by its own experience and knowledge.
• Therefore, a system is not autonomous if it is guided by its
designer according to a priori decisions.
• To survive agents must have:
– Enough built- in knowledge to survive.
– Ability to learn.

So far we have discussed agents by describing their external behaviour: the action that is performed after any given sequence of percepts.
Structure of Intelligent Agents
• The job of AI is to design the agent program
- the program is a function that implements the agent mapping from percepts to actions
- the program will run on some sort of computing device, which we will call the architecture
- the architecture might be a plain computer; it might also include software that provides a degree of insulation between the raw computer and the agent program
AGENT = ARCHITECTURE + PROGRAM
The architecture makes the percepts received from the sensors available to the agent program, runs the program, and feeds the program’s action choices to the effectors.
Building a rational agent: the first step
• Specify the task environment: performance measure, the environment, and the agent’s actuators and sensors.
Building a rational robot professor:
• Performance measure/goals: student evaluations(?), average class grade(?), average final-exam grade(?)
• Environment: hostile students(?), friendly students(?), well-equipped classroom(?), outdoor classroom(?), no TA(?)
• Actuators: speech mechanism(?), projectors(?), …
• Sensors: cameras(?), voice recorders(?), …
Examples of Agent Types and their Descriptions
[Table figure omitted: agent types with their percepts, actions, goals and environments]
AGENT PROGRAM

Function SKELETON-AGENT(percept) returns action   // single percept
  static: memory   // the agent’s memory of the world
  memory := UPDATE-MEMORY(memory, percept)
  action := CHOOSE-BEST-ACTION(memory)
  memory := UPDATE-MEMORY(memory, action)   // the action is also stored
  return action

The performance measure is not part of this program; it is applied externally to judge the behaviour of the agent.
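A minimal Python rendering (my own sketch) of SKELETON-AGENT; UPDATE-MEMORY and CHOOSE-BEST-ACTION are left abstract, to be filled in per domain:

class SkeletonAgent:
    def __init__(self):
        self.memory = []                      # the agent's memory of the world

    def update_memory(self, item):
        self.memory.append(item)              # record percepts and actions

    def choose_best_action(self):
        raise NotImplementedError             # domain-specific decision logic

    def step(self, percept):
        self.update_memory(percept)
        action = self.choose_best_action()    # decide using memory only
        self.update_memory(action)            # the action is also stored
        return action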
Some Agent Types
• Table-driven agents
– use a percept sequence/ action table in memory to find the next action.
They are implemented by a (large) lookup table.
• Simple reflex agents
– are based on condition-action rules and implemented with an
appropriate production (rule-based) system. They are stateless devices
which do not have memory of past world states.
• Agents with memory
– have internal state which is used to keep track of past states of the
world.
• Agents with goals
– are agents which in addition to state information have a kind of goal
information which describes desirable situations. Agents of this kind
take future events into consideration.
• Utility-based agents
– base their decisions on classical axiomatic utility theory in order to act rationally.
A more specific example: Automated taxi driving system
• Percepts: video, sonar, speedometer, odometer, engine sensors, keyboard input, microphone, GPS, …
• Actions: Steer, accelerate, brake, horn, speak/display, …
• Goals: Maintain safety, reach destination, maximize profits
(fuel, tire wear), obey laws, provide passenger comfort, …
• Environment: urban streets, freeways, traffic, pedestrians,
weather, customers, …
• Different aspects of driving may require different types
of agent programs!

Table Driven Agent

Function TABLE-DRIVEN-AGENT(percept) returns action
  static: percepts   // a sequence, initially empty
          table      // a table indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action := LOOKUP(percepts, table)
  return action

Limitations
- the table needed for something as simple as an agent that can play chess would have about 35^100 entries
- it would take quite a long time for the designer to build the table
- no autonomy
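A short Python sketch (my own) of TABLE-DRIVEN-AGENT. Note that the table is indexed by the entire percept sequence, which is exactly why it grows explosively:

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept-sequence tuples to actions
        self.percepts = []      # the percept sequence, initially empty

    def step(self, percept):
        self.percepts.append(percept)
        return self.table[tuple(self.percepts)]   # LOOKUP(percepts, table)

# Usage with (hypothetical) vacuum-world entries indexed by sequence:
agent = TableDrivenAgent({
    (("A", "clean"),): "Go Right",
    (("A", "clean"), ("B", "dirty")): "Vacuum",
})
print(agent.step(("A", "clean")))   # Go Right
print(agent.step(("B", "dirty")))   # Vacuum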
Simple Reflex Agent
• Use condition-action rules to summarize portions of the table, e.g.:
  If car-in-front-is-braking (condition)
  Then initiate-braking (action)

Function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules   // a set of condition-action rules
  state := INTERPRET-INPUT(percept)   // an abstracted description of the current state, generated from the percept
  rule := RULE-MATCH(state, rules)
  action := RULE-ACTION[rule]
  return action
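A compact Python sketch (my own) of SIMPLE-REFLEX-AGENT, with rules represented as (condition-predicate, action) pairs:

def simple_reflex_agent(percept, rules, interpret_input):
    state = interpret_input(percept)   # abstract the raw percept
    for condition, action in rules:
        if condition(state):           # RULE-MATCH
            return action              # RULE-ACTION
    return "Do Nothing"                # no rule fired

rules = [(lambda s: s.get("car_in_front_is_braking"), "initiate braking")]
print(simple_reflex_agent({"car_in_front_is_braking": True},
                          rules, interpret_input=lambda p: p))
# -> initiate braking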
A Simple Reflex Agent: Schema
[Diagram: sensors read the environment to determine “what the world is like now”; condition-action rules select “what action I should do now”; effectors act on the environment.]
• Acts on the current percept, ignoring percept history
Reflex Agent with Internal State
• Encode "internal state" of the world to remember the past
as contained in earlier percepts
• Needed because sensors do not usually give the entire
state of the world at each input, so perception of the
environment is captured over time. "State" used to
encode different "world states" that generate the same
immediate percept.
• Requires ability to represent change in the world; one
possibility is to represent just the latest state, but then
can't reason about hypothetical courses of action

Function REFLEX-AGENT(percept) returns action
  static: state   // a description of the current world state
          rules   // a set of condition-action rules
  state := UPDATE-STATE(state, percept)
  rule := RULE-MATCH(state, rules)
  action := RULE-ACTION(rule)
  state := UPDATE-STATE(state, action)
  return action
Agents that Keep Track of the World
• Use internal state (or models) to deal with a world that is only partially observable.
Goal- Based Agent
• Choose actions so as to achieve a (given or computed) goal.
• A goal is a description of a desirable situation
• Keeping track of the current state is often not enough -- need
to add goals to decide which situations are good
• Deliberative instead of reactive
• May have to consider long sequences of possible actions
before deciding if goal is achieved -- involves consideration of
the future, “what will happen if I do...?”

Agents with Explicit Goals
• Knowing the world and responding appropriately is not the whole story
• Anticipating the future involves planning and search
Goal-Based Agent

function AGENT-WITH-EXPLICIT-GOAL(percept, goal) returns action
  static: state   // a description of the current world state
          rules   // a set of condition-action rules
  state := UPDATE-STATE(state, percept)
  rule := RULE-MATCH(state, rules, goal)
  action := RULE-ACTION(rule, goal)
  state := UPDATE-STATE(state, action)
  return action
Utility- Based Agent
• When there are multiple possible alternatives, how to decide
which one is best?
• A goal specifies a crude distinction between a happy and
unhappy state, but often need a more general performance
measure that describes "degree of happiness"
• Utility function U: States --> Reals indicating a measure of
success or happiness when at a given state
• Allows decisions comparing choice between conflicting goals,
and choice between likelihood of success and importance of
goal (if achievement is uncertain)

Utility-based agents
• A utility function maps a state onto a real number.
• Improve upon goal-based agents by producing high-quality behaviour in most environments.
Utility-Based Agent

function UTILITY-BASED-AGENT(percept, utility-function) returns action
  static: state   // a description of the current world state
          rules   // a set of condition-action rules
  state := UPDATE-STATE(state, percept)
  action := UTILITY-MAXIMISER(state, rules, utility-function)
  state := UPDATE-STATE(state, action)
  return action
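A small Python sketch (my own) of what UTILITY-MAXIMISER might do: predict the state each available action leads to, score it with the utility function, and pick the best. The world model result() is an assumption of this sketch:

def utility_maximiser(state, actions, result, utility):
    # result(state, action) predicts the successor state (world model);
    # utility(state) maps a state to a real number.
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy usage: states are numbers, the agent prefers states near 10.
actions = ["inc", "dec"]
result = lambda s, a: s + 1 if a == "inc" else s - 1
utility = lambda s: -abs(s - 10)
print(utility_maximiser(7, actions, result, utility))   # inc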
Properties of Environments
• Accessible/ Inaccessible.
– If an agent's sensors give it access to the complete state of the
environment needed to choose an action, the environment is
accessible.
– Such environments are convenient, since the agent is freed from the
task of keeping track of the changes in the environment.
• Deterministic/ Non-deterministic.
– An environment is deterministic if the next state of the environment
is completely determined by the current state of the environment and
the action of the agent.
– In an accessible and deterministic environment the agent need not
deal with uncertainty.
• Episodic/ Nonepisodic.
– An episodic environment means that subsequent episodes do not
depend on what actions occurred in previous episodes.
– Such environments do not require the agent to plan ahead.
Properties of Environments
• Static/ Dynamic.
– A static environment does not change while the agent is thinking.
– In a static environment the agent need not worry about the passage of time while it is thinking, nor does it have to observe the world while it is thinking.
– In static environments the time it takes to compute a good strategy does
not matter.
• Discrete/ Continuous.
– If the number of distinct percepts and actions is limited the environment
is discrete, otherwise it is continuous.
• With/ Without rational adversaries.
– If an environment does not contain other rationally thinking, adversary
(opponent) agents, the agent need not worry about strategic, game
theoretic aspects of the environment
– Most engineering environments are without rational adversaries,
whereas most social and economic systems get their complexity from
the interactions of (more or less) rational agents.

Characteristics of environments

Environment        Accessible  Deterministic  Episodic  Static  Discrete
Solitaire          No          Yes            Yes       Yes     Yes
Backgammon         Yes         No             No        Yes     Yes
Taxi driving       No          No             No        No      No
Internet shopping  No          No             No        No      No
Medical diagnosis  No          No             No        No      No

→ Lots of real-world domains fall into the hardest case!


Environment                Accessible  Deterministic  Episodic  Static  Discrete
Chess with a clock         Yes         Yes            No        Semi    Yes
Chess without a clock      Yes         Yes            No        Yes     Yes
Poker                      No          No             No        Yes     Yes
Backgammon                 Yes         No             No        Yes     Yes
Taxi driving               No          No             No        No      No
Medical diagnosis system   No          No             No        No      No
Image-analysis system      Yes         Yes            Yes       Semi    No
Part-picking robot         No          No             Yes       No      No
Refinery controller        No          No             No        No      No
Interactive English tutor  No          No             No        No      Yes
Environment Programs
• Read from R&N (pages 47 – 49)

Summary
• An agent perceives and acts in an environment, has an architecture
and is implemented by an agent program.
• An ideal agent always chooses the action which maximizes its expected performance, given the percept sequence received so far.
• An autonomous agent uses its own experience rather than the built-in knowledge of the environment supplied by the designer.
• An agent program maps from percepts to actions and updates its internal state.
– Reflex agents respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s).
– Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful agent design.
• Some environments are more difficult for agents than others. The
most challenging environments are inaccessible, non-deterministic,
non-episodic, dynamic, and continuous.
Assignment
• What is the difference between a performance measure and
a utility function?
• For each of the environments in the tables above, determine what type of agent architecture is most appropriate (table lookup, simple reflex, goal-based or utility-based)
• Choose a domain that you are familiar with, and write a PAGE (Percepts, Actions, Goals, Environment) description of an agent for the environment. Characterize the environment as being accessible, deterministic, episodic, static and continuous or not. What agent architecture is best for this domain?
• Exercise 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 2.10, 2.11
