Ch01 02
Outline
• What is Artificial Intelligence?
• Some Achievements
• AI’s Genesis
• Overview of Problems and Techniques
• The major sub-areas of AI
• Summary
• Exercise
Introduction to Artificial Intelligence
“There are three great events in history. One, the
creation of the universe. Two, the appearance
of life. The third one, which I think is equal
in importance, is the appearance of artificial
intelligence.” (Edward Fredkin)
• What is intelligence?
• Is it possible for machines to be intelligent?
[Diagram: intelligence divides into natural and artificial.]
Big questions
• Can machines think? (A. M. Turing, “Computing Machinery and Intelligence”, 1950)
• And if so, how?
• And if not, why not?
• And what does this say about human beings?
• And what does this say about the mind?
• Is it possible for machines to be intelligent?
What is Intelligence?
• Webster’s Dictionary
The faculty of acquiring and applying knowledge
• More scientific
Intelligence is a measure of the success/performance of an
entity in achieving its objectives/goals by interaction with
its environment.
- The level of intelligence should be judged by considering the
difficulty of the goals, and the success in achieving them
• Intelligence quotient (IQ) is measured by tests
involving a variety of skills and knowledge
- Example: do all antonyms of the following words start with ‘c’?
(open, courage, etc.)
What is Intelligence?
The definition mentioned in the previous slide has
some important consequences:
• Intelligent behaviour can only be observed in the
presence of an environment
• To measure intelligence there must be goals and a
scale to determine how well these are achieved.
• The ability to “express” intelligence depends on the
“richness” of interaction with the environment, and
on the subtlety (delicacy) of the goals, as well as
internal mechanisms
• This definition clearly allows the possibility of
intelligent machines.
Aspects/Characteristics of Intelligence
• An intelligent entity should have the ability to:
- interact with its environment;
- learn from information and experience;
- make rational decisions;
- deduce new facts from given ones
- make sensible deductions when insufficient facts
are available
This implies some form of getting input (i.e.
senses), a way to produce output, and an ability to
process the inputs so that the output has some
relevance
[Figure: an entity and its environment. The entity senses the environment (see, hear, touch, taste, smell), processes the inputs internally (has knowledge, has understanding/intentionality, can reason), and produces outputs. Such an entity exhibits intelligent behaviour.]
Scale of Intelligence
• To be able to measure intelligence, an
environment and a set of goals must be clearly
defined
• For example, if intelligence is measured only by
an entity’s ability to avoid obstacles, then mice
would prove to be more intelligent than a person
who is blindfolded
Components of Intelligence
Signs of Intelligence
• Learn or understand from experience
• Make sense out of ambiguous or contradictory
messages
• Respond quickly and successfully to new situations
• Use reasoning to solve problems
• Understand and infer in ordinary, rational ways
• Apply knowledge to manipulate the environment
• Think and reason
• Recognize the relative importance of different
elements in a situation
Intelligent Machines: The Behaviorist View
• Is it possible for machines to be intelligent?
- Machines can exhibit intelligent behaviour but how
intelligent are they really and is this the same as “human
intelligence”?
- Many scientists believe that only things that can be directly
observed are “scientific” (behaviourism)
- Therefore, if a machine behaves “as if it were intelligent”, it
is meaningless to argue that this is an illusion
- This view can be summarised as: “If it walks like a duck,
quacks like a duck and looks like a duck, it is a duck”
- Turing was of this opinion and proposed the “Turing Test”
More on AI
Thinking Humanly: Cognitive Process
1960s “cognitive revolution”: information-processing psychology
replaced behaviorism.
Acting Rationally
Acting rationally = acting so as to achieve one’s goals, given one’s beliefs.
AI Objectives
• The objective of AI is to understand and
model cognition, intelligence, and
intelligent behaviour.
• Write the names of three skills.
Think about what is required to do these:
• Counting
• Calculating
• Doing Mathematics
• Painting a picture
• Recognising a face of a friend
• Understanding a story or a fairy tale.
• Reading newspapers
• Making decisions
• Finding the shortest tour to visit a number of places
• Writing a program
Empiricism
• Empiricism (Aristotle) is the idea that the
concepts we use to classify things are derived from
our sense experiences, i.e. we shape our
concepts by perceiving the world around us
and gradually learn to cope with the world

sense experience → concepts (derived)
The ‘critical turn’
• The German philosopher Kant (18th
century) presented a foundational analysis
in his “Critique of Pure Reason”, which
essentially rejects both Rationalism and
Empiricism, and shows that a synthesis of
both is necessary to account for the processes
underlying human intelligence
AI’s Genesis
• A logical reasoning calculus was conceived
and a calculating machine built by Leibniz
in the 17th century
• Leibniz wanted to resolve intellectual
arguments by calculation
• 1847: Boole developed “Boolean logic”
• 1879: Frege developed today’s predicate
logic in his Begriffsschrift.
• 1931: Gödel proved his incompleteness
theorem
History of AI
• AI has roots in a number of scientific disciplines
– computer science and engineering (hardware and software)
– philosophy (rules of reasoning)
– mathematics (logic, algorithms, optimization)
– cognitive science and psychology (modeling high level
human/animal thinking)
– neural science (model low level human/animal brain activity)
– Linguistics, economics, etc.
• The birth of AI (1943 – 1956)
– Pitts and McCulloch (1943): simplified mathematical model of
neurons (resting/firing states) can realize all propositional
logic primitives (can compute all Turing computable functions)
– Alan Turing: Turing machine (1936) and Turing test (1950)
– Claude Shannon: information theory; possibility of chess
playing computers
– Tracing back to Boole, Aristotle, Euclid (logics, syllogisms)
• Early enthusiasm (1952 – 1969)
– 1956 Dartmouth conference
John McCarthy (Lisp);
Marvin Minsky (first neural network machine);
Allen Newell and Herbert Simon (GPS);
– Emphasis on intelligent general problem solving
GPS (means-ends analysis);
Lisp (AI programming language);
Resolution by John Robinson (basis for automatic theorem
proving);
heuristic search (A*, AO*, game tree search)
• Emphasis on knowledge (1966 – 1974)
– Domain specific knowledge is the key to overcome existing
difficulties
– Knowledge representation (KR) paradigms
• Knowledge-based systems (1969 – 1979)
– DENDRAL: the first knowledge intensive system (determining
3D structures of complex chemical compounds)
– MYCIN: first rule-based expert system (containing 450 rules
for diagnosing infectious blood diseases)
EMYCIN: an ES shell
– PROSPECTOR: first knowledge-based system that made
significant profit (geological ES for mineral deposits)
• AI became an industry (1980 – 1989)
– wide applications in various domains
– commercially available tools
• Current trends (1990 – present)
– more realistic goals
– more practical (application oriented)
– distributed AI and intelligent software agents
– resurgence of neural networks and emergence of genetic
algorithms
Overview: Problems and
Techniques
• Early AI researchers’ lack of understanding
of the problems they tackled
• Lack of knowledge in the machine
• Difficulty in formalizing both everyday
knowledge and expert knowledge
State of the Art
Intelligent Agents
Dr. Mohammad Shahadat Hossain
Professor
Department of Computer Science &
Engineering
University of Chittagong
Lecture Outline
• Define Intelligent Agents
Intelligent Agents
• Definition: An intelligent agent is an entity that perceives its environment
via sensors and acts rationally upon that environment with its
effectors.
- Examples: human, robotic, and software agents
• Hence, an agent gets percepts one at a time, and maps this percept
sequence to actions.
• Properties
– Autonomous
– Interacts with other agents
plus the environment
– Reactive to the environment
– Pro-active (goal-directed)
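To make the percept-to-action mapping concrete, here is a minimal Python sketch; the names (Agent, step, choose_action) are illustrative, not from the slides:

from abc import ABC, abstractmethod

class Agent(ABC):
    """Skeleton agent: perceives via sensors, acts via effectors."""

    def __init__(self):
        self.percepts = []  # the percept sequence received so far

    def step(self, percept):
        """Receive one percept, then choose the next action."""
        self.percepts.append(percept)
        return self.choose_action()

    @abstractmethod
    def choose_action(self):
        """Map the percept sequence seen so far to an action."""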
What do we mean by
sensors/percepts and effectors/actions?
• Humans
– Sensors: eyes (vision), ears (hearing), skin (touch),
tongue (gustation/taste), nose (olfaction/smell),
neuromuscular system (proprioception: an automatic sensing
mechanism that sends messages through the central nervous system (CNS);
the CNS then relays information to the rest of the body about how to react
and with what amount of tension)
Performance Measures
• Subjective vs objective
Examples of Performance
Measures
• Vacuum cleaner: how much dirt removed
Rationality versus Omniscience
• An omniscient agent knows everything,
including the actual outcome of its actions
What is rational is determined by
• The performance measure that defines
degree of success
• The agent’s percept sequence so far (its
perceptual history)
• The agent’s knowledge on the environment
• The actions the agent can perform
An Ideal Rational Agent
• An ideal rational agent should, for each possible percept sequence,
take whatever action is expected to maximize its performance measure
based on
(1) the percept sequence, and
(2) its built-in and acquired knowledge.
• This includes information gathering as a rational activity
• Examples
- Crossing the road
- A clock
Rationality maximizes expected performance; it does not mean
success every time.
Rationality ≠ perfection; rationality ≠ guaranteed success.
• Types of performance measures: payoffs, false-alarm and
false-dismissal rates, speed, resources required, effect on the
environment, etc.
Simple example: Vacuum-cleaner world
Our task: fill out the table so that the vacuum
cleaner behaves intelligently (one possible filling is sketched below).
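A minimal sketch of one sensible way to fill the table in, assuming the standard two-square world (locations "A" and "B", status "Clean" or "Dirty"):

# Percept -> action table for the assumed two-square vacuum world.
table = {
    ("A", "Clean"): "Right",  # nothing to do here, move to the other square
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(location, status):
    """Look the current percept up in the table."""
    return table[(location, status)]

print(vacuum_agent("A", "Dirty"))  # Suck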
Performance measures (keeping score)
What you ask for is what you get… if you really know what
to ask for.
Ideal Mapping from Percept
Sequence to Actions
• Square root function example
• lookup table
Mapping Percept Sequences to
Actions - Lookup Table
Percept x Action z
1.0 1.000
1.1 1.048
1.2 1.095
1.3 1.140
1.4 1.183
1.5 1.224
1.6 1.264
1.7 1.303
1.8 1.341
1.9 1.378
Mapping Percept Sequences to Actions
The same mapping can be computed by a function instead of
being stored as a lookup table (a runnable version follows):

function SQRT(x)
  z := 1.0
  repeat until |z*z - x| < 0.001
    z := z - (z*z - x)/(2*z)
  end
  return z
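A runnable Python version of the same Newton-iteration pseudocode, for comparison with the lookup table:

def newton_sqrt(x, tolerance=0.001):
    """SQRT as above: improve the guess z until z*z is within tolerance of x."""
    z = 1.0
    while abs(z * z - x) >= tolerance:
        z = z - (z * z - x) / (2 * z)
    return z

print(newton_sqrt(1.5))  # 1.225, close to the table entry 1.224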
Autonomy
• A system is autonomous to the extent that its own behavior is
determined by its own experience and knowledge.
• Therefore, a system is not autonomous if it is guided by its
designer according to a priori decisions.
• To survive, agents must have:
– Enough built-in knowledge to survive.
– Ability to learn.
We have discussed agents by describing their
external behaviour: the action that is performed
after any given sequence of percepts.
Structure of Intelligent Agents
• The job of AI is to design the agent program
- the program is a function that implements the agent’s mapping
from percepts to actions
- the program will run on some sort of computing device,
which we will call the architecture
- the architecture might be a plain computer; it might also
include software that provides a degree of insulation between
the raw computer and the agent program

AGENT = ARCHITECTURE + PROGRAM

The architecture makes the percepts received from the sensors
available to the agent program, runs the program, and feeds the
program’s actions to the effectors.
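A minimal sketch of the architecture’s side of this equation; the function and parameter names (run, sensor, program, effector) are hypothetical:

def run(program, sensor, effector, steps=10):
    """The architecture: make percepts available to the agent program,
    run the program, and feed its actions to the effectors."""
    for _ in range(steps):
        percept = sensor()         # read the next percept from the sensors
        action = program(percept)  # the agent program chooses an action
        effector(action)           # carry the action out in the environment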
Building a rational agent… the first step.
AGENT PROGRAM
Table Driven Agent

function TABLE-DRIVEN-AGENT(percept) returns action
  static: percepts // a sequence, initially empty
          table // a table indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action := LOOKUP(percepts, table)
  return action

Limitations
- the table needed for something as simple as an agent that can play
chess would have about 35^100 entries
- it would take quite a long time for the designer to build the table
- no autonomy
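A direct Python transcription of the pseudocode, assuming the table is a dict keyed by complete percept sequences:

percepts = []  # the percept sequence, initially empty

def table_driven_agent(percept, table):
    """Append the new percept, then look up the whole sequence."""
    percepts.append(percept)
    return table[tuple(percepts)]  # LOOKUP(percepts, table)

The limitation is visible in the key: the table needs an entry for every possible percept sequence, not just every percept.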
Simple Reflex Agent
• Use condition-action rules to summarize portions of
the table
[Figure: simple reflex agent. Sensors tell the agent what the world is like now; condition-action rules determine what action it should do now; effectors act on the environment.]
function REFLEX-AGENT(percept) returns action
  static: state // a description of the current world state
          rules // a set of condition-action rules
  state := INTERPRET-INPUT(percept)
  rule := RULE-MATCH(state, rules)
  return RULE-ACTION(rule)
Agents that Keep Track of the World
Agents with Explicit Goals
Knowing the world and responding appropriately is not the whole story.
Anticipating the future… this involves planning and search.
Goal-Based Agent
function AGENT-WITH-EXPLICIT-GOAL(percept, goal) returns action
  static: state // a description of the current world state
          rules // a set of condition-action rules
  state := UPDATE-STATE(state, percept)
  rule := RULE-MATCH(state, rules, goal)
  action := RULE-ACTION(rule, goal)
  state := UPDATE-STATE(state, action)
  return action
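A Python transcription of the pseudocode; the domain-specific helpers (update_state, rule_match, rule_action) are passed in as parameters so the sketch stays self-contained:

def goal_based_agent(percept, goal, state, rules,
                     update_state, rule_match, rule_action):
    """Update the state, pick a rule that serves the goal, act,
    and predict the effect of the action on the state."""
    state = update_state(state, percept)   # fold the percept into the state
    rule = rule_match(state, rules, goal)  # choose a rule that serves the goal
    action = rule_action(rule, goal)       # extract the rule's action
    state = update_state(state, action)    # predict the action's effect
    return action, state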
Utility-Based Agent
• When there are multiple possible alternatives, how to decide
which one is best?
• A goal specifies a crude distinction between a happy and
an unhappy state, but we often need a more general performance
measure that describes a “degree of happiness”
• Utility function U: States → Reals, indicating a measure of
success or happiness at a given state
• Allows decisions that compare and trade off conflicting goals,
and that weigh likelihood of success against the importance of a
goal (when achievement is uncertain)
Utility-Based Agent
function UTILITY-BASED-AGENT(percept, utility-function) returns action
  static: state // a description of the current world state
          rules // a set of condition-action rules
  state := UPDATE-STATE(state, percept)
  action := UTILITY-MAXIMISER(state, rules, utility-function)
  state := UPDATE-STATE(state, action)
  return action
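A sketch of what UTILITY-MAXIMISER might look like in Python; result (a state-prediction function) and utility are domain-specific placeholders:

def utility_maximiser(state, actions, result, utility):
    """Choose the action whose predicted successor state scores highest."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy usage: states are integers, utility prefers states near 10.
best = utility_maximiser(
    7, ["inc", "dec"],
    result=lambda s, a: s + 1 if a == "inc" else s - 1,
    utility=lambda s: -abs(s - 10),
)
print(best)  # inc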
Properties of Environments
• Accessible/ Inaccessible.
– If an agent's sensors give it access to the complete state of the
environment needed to choose an action, the environment is
accessible.
– Such environments are convenient, since the agent is freed from the
task of keeping track of the changes in the environment.
• Deterministic/ Non-deterministic.
– An environment is deterministic if the next state of the environment
is completely determined by the current state of the environment and
the action of the agent.
– In an accessible and deterministic environment the agent need not
deal with uncertainty.
• Episodic/ Nonepisodic.
– An episodic environment means that subsequent episodes do not
depend on what actions occurred in previous episodes.
– Such environments do not require the agent to plan ahead.
Properties of Environments
• Static/ Dynamic.
– A static environment does not change while the agent is thinking.
– In a static environment the agent need not worry about the passage of
time while it is thinking, nor does it have to observe the world while
thinking.
– In static environments the time it takes to compute a good strategy does
not matter.
• Discrete/ Continuous.
– If the number of distinct percepts and actions is limited the environment
is discrete, otherwise it is continuous.
• With/ Without rational adversaries.
– If an environment does not contain other rationally thinking adversary
(opponent) agents, the agent need not worry about the strategic, game-
theoretic aspects of the environment
– Most engineering environments are without rational adversaries,
whereas most social and economic systems get their complexity from
the interactions of (more or less) rational agents.
Characteristics of environments

Environment          Accessible  Deterministic  Episodic  Static  Discrete
Solitaire            –           –              –         –       –
Backgammon           –           –              –         –       –
Taxi driving         No          No             No        No      No
Internet shopping    No          No             No        No      No
Medical diagnosis    No          No             No        No      No
Refinery controller  No          No             No        No      No
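One way to encode these properties in Python (a sketch; the field names follow the “Properties of Environments” slides):

from typing import NamedTuple

class EnvProperties(NamedTuple):
    """The five tabulated properties (rational adversaries omitted)."""
    accessible: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

# Taxi driving is "No" on every dimension, which is what makes it hard.
taxi_driving = EnvProperties(False, False, False, False, False)
print(taxi_driving.static)  # False: the world changes while the agent thinks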
Environment Programs
• Read from R&N (pages 47 – 49)
Summary
• An agent perceives and acts in an environment, has an architecture
and is implemented by an agent program.
• An ideal agent always chooses the action which maximizes its
expected performance, given the percept sequence received so far.
• An autonomous agent uses its own experience rather than knowledge
of the environment built in by the designer.
• An agent program maps from percepts to actions and updates its
internal state.
– Reflex agents respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s).
– Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful agent design.
• Some environments are more difficult for agents than others. The
most challenging environments are inaccessible, non-deterministic,
non-episodic, dynamic, and continuous.
Assignment
• What is the difference between a performance measure and
a utility function?
• For each of the environments in the “Characteristics of
environments” table, determine what type of agent architecture is
most appropriate (table lookup, simple reflex, goal-based or utility-based)
• Choose a domain that you are familiar with, and write a
PAGE (Percepts, Actions, Goals, Environment) description of an
agent for the environment. Characterize the environment as being
accessible, deterministic, episodic, static and continuous or not. What
agent architecture is best for this domain?
• Exercise 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 2.10, 2.11