FOUNDATIONS OF
ARTIFICIAL INTELLIGENCE
Chapter 1: Introduction
Lecturer: Dr. Ir. Gunadi Widi Nurcahyo, MSc.
Outline
• Course overview
• What is AI?
• A brief history
• The state of the art
Course overview
• Introduction and Agents
• Search
• Logic
• Planning
• Uncertainty
• Learning
• Natural Language Processing
What is AI?
Views of AI fall into four categories:
• Thinking humanly
• Thinking rationally
• Acting humanly
• Acting rationally
Acting humanly: Turing Test
• Turing (1950), "Computing Machinery and Intelligence":
• "Can machines think?" → "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game
Thinking rationally: "laws of thought"
• Aristotle: what are correct arguments/thought processes?
• Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts; may or may not have proceeded to the idea of mechanization
• Direct line through mathematics and philosophy to modern AI
• Problems:
1. Not all intelligent behavior is mediated by logical deliberation
2. What is the purpose of thinking? What thoughts should I have?
Rational agents
• An agent is an entity that perceives and acts
• This course is about designing rational agents
• Abstractly, an agent is a function from percept histories to actions:
f : P* → A
• For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance
• Caveat: computational limitations make perfect rationality unachievable → design the best program for the given machine resources
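To make the abstract definition concrete, here is a minimal Python sketch of an agent as a function from percept histories to actions (the type aliases and the trivial agent are illustrative, not from the slides):

# An agent is a function f : P* -> A from percept histories to actions.
# All names here are illustrative.
from typing import Callable, Sequence

Percept = str
Action = str
Agent = Callable[[Sequence[Percept]], Action]

def noop_agent(percepts: Sequence[Percept]) -> Action:
    """A trivial (and irrational) agent: ignores its percept history."""
    return "NoOp"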
AI prehistory
• Philosophy: logic, methods of reasoning; mind as physical system; foundations of learning, language, rationality
• Mathematics: formal representation and proof; algorithms, computation, (un)decidability, (in)tractability, probability
• Economics: utility, decision theory
• Neuroscience: physical substrate for mental activity
• Psychology: phenomena of perception and motor control; experimental techniques
• Computer engineering: building fast computers
• Control theory: design systems that maximize an objective function over time
• Linguistics: knowledge representation, grammar
Abridged history of AI
• 1943 McCulloch & Pitts: Boolean circuit model of brain
• 1950 Turing's "Computing Machinery and Intelligence"
• 1956 Dartmouth meeting: "Artificial Intelligence" adopted
• 1952–69 "Look, Ma, no hands!"
• 1950s Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
• 1965 Robinson's complete algorithm for logical reasoning
• 1966–73 AI discovers computational complexity; neural network research almost disappears
• 1969–79 Early development of knowledge-based systems
• 1980– AI becomes an industry
• 1986– Neural networks return to popularity
• 1987– AI becomes a science
• 1995– The emergence of intelligent agents
Intelligent Agents
Chapter 2
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
Vacuum-cleaner world
A vacuum-cleaner agent
• Partial tabulation of the agent function (percept sequence → action):

Percept sequence        Action
[A, Clean]              Right
[A, Dirty]              Suck
[B, Clean]              Left
[B, Dirty]              Suck
…                       …
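A minimal Python sketch of this agent function as a lookup table (location names A/B and the action strings follow the table above):

# The vacuum agent function as a percept -> action lookup table.
vacuum_table = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(percept):
    """percept = (location, status); return the tabulated action."""
    return vacuum_table[percept]

print(vacuum_agent(("A", "Dirty")))  # -> Suck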
Rational agents
• An agent should strive to "do the right thing",
based on what it can perceive and the actions it
can perform. The right action is the one that will
cause the agent to be most successful
• Performance measure: An objective criterion for
success of an agent's behavior
• E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up,
amount of time taken, amount of electricity
consumed, amount of noise generated, etc.
Rational agents
• Rational Agent: For each possible percept
sequence, a rational agent should select
an action that is expected to maximize its
performance measure, given the evidence
provided by the percept sequence and
whatever built-in knowledge the agent
has.
Rational agents
• Rationality is distinct from omniscience
(all-knowing with infinite knowledge)
• Agents can perform actions in order to
modify future percepts so as to obtain
useful information (information gathering,
exploration)
• An agent is autonomous if its behavior is
determined by its own experience (with
ability to learn and adapt)
PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable trip, maximize profits
– Environment: Roads, other traffic, pedestrians, customers
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions,
tests, diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of
parts in correct bins
• Environment: Conveyor belt with parts,
bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard
Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current
state and the action executed by the agent. (If the
environment is deterministic except for the actions of
other agents, then the environment is strategic)
• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes" (each episode consists of
the agent perceiving and then performing a single
action), and the choice of action in each episode
depends only on the episode itself.
Environment types
• Static (vs. dynamic): The environment is
unchanged while an agent is deliberating. (The
environment is semidynamic if the environment
itself does not change with the passage of time
but the agent's performance score does)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent
operating by itself in an environment.
Environment types
                   Chess with    Chess without    Taxi driving
                   a clock       a clock
Fully observable   Yes           Yes              No
Deterministic      Strategic     Strategic        No
Episodic           No            No               No
Static             Semi          Yes              No
Discrete           Yes           Yes              No
Single agent       No            No               No
Table-lookup agent
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
• Drawbacks:
– Huge table
– Takes a long time to build the table
– No autonomy
– Even with learning, it would take a long time to learn the table entries
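A minimal Python sketch of the same idea (the class is illustrative); indexing by the entire percept sequence is what makes the table astronomically large:

class TableDrivenAgent:
    """Appends each percept to a history and looks the history up in a table."""
    def __init__(self, table):
        self.table = table    # maps tuples of percepts to actions
        self.percepts = []    # percept history, initially empty

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")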
Agent types
• Four basic types in order of increasing
generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents
• A simple reflex agent selects its action on the basis of the current percept only, using condition–action rules
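A minimal sketch of a simple reflex agent for the vacuum world, using the standard condition–action rules (the Python rendering is mine):

def reflex_vacuum_agent(percept):
    """Decide from the current percept only, via condition-action rules."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"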
Model-based reflex agents
• A model-based reflex agent maintains internal state that tracks aspects of the world it cannot currently perceive, updated via a model of how the world evolves
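A minimal sketch of the state-update loop; the model and rules are caller-supplied, and all names are illustrative:

class ModelBasedReflexAgent:
    """Keeps internal state so it can act sensibly under partial observability."""
    def __init__(self, model, rules, initial_state):
        self.model = model   # predicts the new state from old state, action, percept
        self.rules = rules   # list of (condition, action) pairs
        self.state = initial_state
        self.last_action = None

    def __call__(self, percept):
        # Fold the latest percept and last action into the internal state.
        self.state = self.model(self.state, self.last_action, percept)
        # Fire the first matching condition-action rule.
        self.last_action = next(
            (a for cond, a in self.rules if cond(self.state)), "NoOp")
        return self.last_action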
Goal-based agents
Utility-based agents
Learning agents
Solving problems by searching
Chapter 3
Outline
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms
Problem-solving agents
Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
Problem types
• Deterministic, fully observable → single-state problem
– Agent knows exactly which state it will be in; solution is a sequence
• Non-observable → sensorless problem (conformant problem)
– Agent may have no idea where it is; solution is a sequence
• Nondeterministic and/or partially observable → contingency problem
– percepts provide new information about the current state
– often interleave search and execution
• Unknown state space → exploration problem
Example: vacuum world
• Single-state, start in #5.
Solution? [Right, Suck]
Example: vacuum world
• Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}
Solution? [Right, Suck, Left, Suck]
• Contingency
– Nondeterministic: Suck may dirty a clean carpet
– Partially observable: location, dirt at current location
– Percept: [L, Clean], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. goal test, which can be explicit (e.g., x = "at Bucharest") or implicit (e.g., Checkmate(x))
4. path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the initial state to a goal state
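A minimal Python sketch of this four-item interface (class and method names are illustrative; the search sketches later in the chapter are written against it):

class Problem:
    """The four items above, as an abstract class."""
    def __init__(self, initial, goal=None):
        self.initial = initial        # initial state
        self.goal = goal

    def actions(self, state):         # successor function: legal actions in state
        raise NotImplementedError

    def result(self, state, action):  # state reached by doing action in state
        raise NotImplementedError

    def goal_test(self, state):       # explicit goal test by default
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return 1                      # additive path cost; each step costs >= 0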
Vacuum world state space graph
• states? dirt and robot locations
• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action
Example: The 8-puzzle
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move
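A minimal sketch of the 8-puzzle successor function; representing a state as a 9-tuple in row-major order with 0 for the blank is my choice, not the slides':

def successors(state):
    """Yield (action, next_state) pairs; state is a 9-tuple, 0 = blank."""
    i = state.index(0)                   # index of the blank
    row, col = divmod(i, 3)
    moves = [("Up", -3, row > 0), ("Down", 3, row < 2),
             ("Left", -1, col > 0), ("Right", 1, col < 2)]
    for action, delta, legal in moves:
        if legal:
            j = i + delta
            s = list(state)
            s[i], s[j] = s[j], s[i]      # slide the neighboring tile
            yield action, tuple(s)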
Example: robotic assembly
Tree search example
Implementation: states vs. nodes
• A state is a (representation of) a physical configuration
• A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth
• The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states
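A minimal sketch of the node data structure and the expansion step, reusing the Problem interface sketched earlier (field names follow the slide; the code itself is illustrative):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A search-tree node wrapping a state with bookkeeping fields."""
    state: Any
    parent: Optional["Node"] = None
    action: Any = None
    path_cost: float = 0.0               # g(x)
    depth: int = 0

def child_node(problem, parent, action):
    """Create the child node reached from parent via action."""
    s = problem.result(parent.state, action)
    g = parent.path_cost + problem.step_cost(parent.state, action, s)
    return Node(s, parent, action, g, parent.depth + 1)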
Search strategies
• A search strategy is defined by picking the order of node
expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
Uninformed search strategies
• Uninformed search strategies use only the
information available in the problem
definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e., new successors go at end
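A minimal sketch of breadth-first tree search built on the Node/Problem sketches above (illustrative, not the book's exact pseudocode):

from collections import deque

def breadth_first_search(problem):
    """FIFO fringe: the shallowest unexpanded node is expanded first."""
    fringe = deque([Node(problem.initial)])
    while fringe:
        node = fringe.popleft()          # shallowest node
        if problem.goal_test(node.state):
            return node
        for action in problem.actions(node.state):
            fringe.append(child_node(problem, node, action))  # go at end
    return None                          # no solution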
Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first search if step costs are all equal
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
• Optimal? Yes: nodes are expanded in increasing order of g(n)
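A minimal sketch using heapq as the priority queue ordered by path cost g; the counter breaks ties between equal-cost entries (illustrative):

import heapq
import itertools

def uniform_cost_search(problem):
    """Expand the least-cost unexpanded node first."""
    tie = itertools.count()              # tie-breaker for equal costs
    fringe = [(0.0, next(tie), Node(problem.initial))]
    while fringe:
        g, _, node = heapq.heappop(fringe)
        if problem.goal_test(node.state):  # test at expansion, so result is optimal
            return node
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            heapq.heappush(fringe, (child.path_cost, next(tie), child))
    return None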
Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front
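The only change from breadth-first search is the fringe discipline; a minimal sketch using a Python list as the LIFO stack:

def depth_first_search(problem):
    """LIFO fringe: the deepest unexpanded node is expanded first."""
    fringe = [Node(problem.initial)]     # list used as a stack
    while fringe:
        node = fringe.pop()              # deepest node
        if problem.goal_test(node.state):
            return node
        for action in problem.actions(node.state):
            fringe.append(child_node(problem, node, action))
    return None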
Depth-limited search
= depth-first search with depth limit l, i.e., nodes at depth l have no successors
• Recursive implementation:
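A minimal recursive sketch; returning the string "cutoff" to distinguish "hit the depth limit" from "no solution below this node" is an illustrative convention:

def depth_limited_search(problem, limit):
    """Depth-first search that treats nodes at depth == limit as leaves."""
    def recurse(node):
        if problem.goal_test(node.state):
            return node
        if node.depth == limit:
            return "cutoff"
        cutoff = False
        for action in problem.actions(node.state):
            result = recurse(child_node(problem, node, action))
            if result == "cutoff":
                cutoff = True
            elif result is not None:
                return result
        return "cutoff" if cutoff else None
    return recurse(Node(problem.initial))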
Iterative deepening search, l = 0
Iterative deepening search, l = 2
Iterative deepening search
• Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
• Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
• For b = 10, d = 5:
– N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
– N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Properties of iterative deepening search
• Complete? Yes
• Time? (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
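Iterative deepening is just depth-limited search inside a loop over increasing limits; a minimal sketch using the depth_limited_search above:

import itertools

def iterative_deepening_search(problem):
    """Try limits 0, 1, 2, ... until the result is not a cutoff."""
    for limit in itertools.count():
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result                # a goal node, or None (no solution)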
Summary of algorithms

Criterion    Breadth-first  Uniform-cost   Depth-first  Depth-limited  Iterative deepening
Complete?    Yes            Yes            No           No             Yes
Time         O(b^(d+1))     O(b^⌈C*/ε⌉)    O(b^m)       O(b^l)         O(b^d)
Space        O(b^(d+1))     O(b^⌈C*/ε⌉)    O(bm)        O(bl)          O(bd)
Optimal?     Yes            Yes            No           No             Yes

(Completeness assumes b finite; breadth-first and iterative deepening are optimal only for identical step costs; uniform-cost completeness assumes step cost ≥ ε.)
Repeated states
• Failure to detect repeated states can turn
a linear problem into an exponential one!
Graph search
• Augment tree search with a closed set of already-expanded states; states seen before are not expanded again
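A minimal sketch of graph search: tree search plus a closed set of already-expanded states (states must be hashable; pass a list for a depth-first order, or swap in another fringe discipline):

def graph_search(problem, fringe):
    """Like tree search, but never expands the same state twice."""
    closed = set()
    fringe.append(Node(problem.initial))
    while fringe:
        node = fringe.pop()
        if problem.goal_test(node.state):
            return node
        if node.state not in closed:
            closed.add(node.state)
            for action in problem.actions(node.state):
                fringe.append(child_node(problem, node, action))
    return None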
Summary
• Problem formulation usually requires abstracting away
real-world details to define a state space that can
feasibly be explored