Artificial Intelligence 1
[Diagram: AI as an attempt to reproduce the intelligent behavior of humans on a computer]
Jan 2, 2014 11
Can Machines Act/Think
Intelligently?
Maybe yes, maybe not, if intelligence is not
separated from the rest of “being human”
Yes, if intelligence is narrowly defined as
information processing
Behind the success
The system derived its playing strength mainly from
brute-force computing power.
It was a massively parallel system with 30 nodes, each
node containing a 120 MHz processor.
Its chess playing program was written in C and ran
under the AIX operating system.
It was capable of evaluating 200 million positions per
second, twice as fast as the 1996 version.
The Deep Blue chess computer that defeated Kasparov
in 1997 would typically search to a depth of six to
eight moves, extending to twenty or even more moves
in some situations.
Some Achievements
Computers have beaten world champions in
several games, including Checkers and Chess, but
still do not do well in Go
AI techniques are used in many systems: formal
calculus, video games, route planning, logistics
planning, pharmaceutical drug design, medical
diagnosis, hardware and software trouble-shooting,
speech recognition, traffic monitoring, facial
recognition, medical image analysis, part inspection,
etc...
Stanford’s robotic car, Stanley, autonomously
traversed 132 miles of desert.
Some industries (automobile, electronics) are highly
robotized, while other robots perform brain and
heart surgery, are rolling on Mars, fly autonomously,
…, but home robots still remain a thing of the future
Some Big Open Questions
AI (especially the “rational agent” approach) assumes that
intelligent behavior is based only on information processing.
Is this a valid assumption?
If yes, can the human brain machinery solve problems that are
inherently intractable for computers?
In a human being, where is the interface between “intelligence”
and the rest of “human nature”, e.g.:
• How does intelligence relate to emotions felt?
• What does it mean for a human to “feel” that he/she understands
something?
Is this interface critical to intelligence? Can there exist a general
theory of intelligence independent of human beings? What is the
role of the human body?
[Diagram: the main AI topics arranged around the Agent concept]
• Search, especially heuristic search (puzzles, games)
• Constraint satisfaction
• Planning
• Reasoning under uncertainty, including probabilistic reasoning
• Learning
• Knowledge representation
• Agent architectures
• Robotics and perception
• Natural language processing
• Expert systems
Dec 24, 2015 18
Bits of History
1956: The name “Artificial Intelligence” is coined
60’s: Search and games, formal logic and theorem
proving
70’s: Robotics, perception, knowledge representation,
expert systems
80’s: More expert systems, AI becomes an industry
90’s: Rational agents, probabilistic reasoning,
machine learning
00’s: Systems integrating many AI methods, machine
learning, reasoning under uncertainty, robotics again
The human brain: Perhaps the most complex
information processing machine in nature
Forebrain (Cerebral Cortex):
Language, maths, sensation,
movement, cognition, emotion
Midbrain: Information Routing;
involuntary controls
Cerebellum: Motor
Control
Hindbrain: Control of breathing,
heartbeat, blood circulation
ABILITY TO EXIST: TO BE AUTONOMOUS, REACTIVE, GOAL-ORIENTED, ETC.
Agents
An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon that
environment through actuators
[Diagram: the agent receives percepts from the environment through sensors and acts on it through effectors (actions)]
Percept refers to the agent's perceptual inputs at any given
instant.
An agent's percept sequence is the complete history of
everything the agent has ever perceived. In general, an agent's
choice of action at any given instant can depend on the entire
percept sequence observed to date.
An agent's behavior is described by the agent function that
maps any given percept sequence to an action. It can be given in tabular form.
Vacuum-cleaner world
• Percepts: location and
contents, e.g., [A,Dirty]
• Actions: Left, Right, Suck, NoOp
Internally, the agent function
for an artificial agent will be
implemented by an agent
program.
The agent function is an
abstract mathematical
description; the agent program
is a concrete implementation,
running on the agent architecture.
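As an illustration of the agent-function/agent-program distinction, the vacuum world's agent program fits in a few lines. This is a minimal sketch; the state names and action strings are assumptions, not from the slides:

```python
# Reflex agent program for the two-square vacuum world.
# The percept is a (location, status) pair, e.g. ("A", "Dirty").

def reflex_vacuum_agent(percept):
    """Map the current percept directly to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note that this program implements only one column of the (infinite) agent-function table: it ignores the percept history entirely.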
RATIONAL
AGENTS
Ideal Rational Agent: For each possible percept sequence,
such an agent does whatever action is expected to maximize
its performance measure, on the basis of the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
Performance measure
An objective criterion for success of an agent's behavior.
E.g., performance measure of a vacuum-cleaner agent could be
amount of dirt cleaned up, amount of time taken, amount of electricity
consumed, amount of noise generated, etc.
Rationality
What is rational at any given time depends on
four things:
The performance measure that defines the criterion of
success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
1. The initial state that the agent starts in. E.g., the initial state
for the agent in Romania might be described as In(Arad).
E.g. Water Jug Problem
We give you two jugs, with maximum capacities
of 4 litres and 3 litres, and a pump to fill
each of the jugs. Neither has any measuring
markers on it. Now your job is to get exactly 2
litres of water into the 4-litre jug.
How will you do that? How will you define the state space?
The state space can be described as the set of ordered pairs of
integers (x, y), where x = 0, 1, 2, 3, or 4 and y = 0, 1, 2, or 3;
x and y represent the quantity of water in the 4-litre jug and
3-litre jug respectively.
The start state is (0, 0)
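The formulation above can be made runnable. This sketch searches the (x, y) state space breadth-first; the choice of breadth-first search is an assumption for illustration, since the slides only define the state space:

```python
from collections import deque

def successors(x, y):
    """All states reachable from (x, y) by one fill/empty/pour action."""
    return {
        (4, y),                                  # fill the 4-litre jug
        (x, 3),                                  # fill the 3-litre jug
        (0, y),                                  # empty the 4-litre jug
        (x, 0),                                  # empty the 3-litre jug
        (min(4, x + y), y - (min(4, x + y) - x)),  # pour 3-litre into 4-litre
        (x - (min(3, x + y) - y), min(3, x + y)),  # pour 4-litre into 3-litre
    }

def solve(start=(0, 0), goal_x=2):
    """Breadth-first search for a state with exactly 2 litres in the 4-litre jug."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:
            path = []
            while state is not None:             # rebuild the path to the goal
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parents:               # avoid repeated states
                parents[nxt] = state
                frontier.append(nxt)

print(solve())
```

Breadth-first search guarantees a shortest action sequence; the classic 6-move solution, e.g. (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0), is one such answer.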
Find a sequence of rules to solve the water jug
problem
Requires a control structure that
loops through a simple cycle in
which some rule whose left side
matches the current state is
chosen, the appropriate change to
the state is made as described in
the corresponding right side, and
the resulting state is checked to
see if it corresponds to a goal state.
One solution to the water jug
problem
The length of the shortest such sequence will
have an impact on the choice of an appropriate
mechanism to guide the search for a
solution.
Introduced in 1878 by Sam Loyd.
 1  2  3  4        1  2  3  4
 5  6  7  8    ?   5  6  7  8
 9 10 11 12        9 10 11 12
13 14 15 __       13 15 14 __
....
[A scrambled example board]
8-puzzle:  9! = 362,880 states
15-puzzle: 16! ≈ 2.09 × 10^13 states
24-puzzle: 25! ≈ 10^25 states
A tile j appears after a tile i if either j appears on the same row
as i to the right of i, or on another row below the row of i.
For every i = 1, 2, ..., 15, let ni be the number of tiles j < i that
appear after tile i (permutation inversions)
N = n2 + n3 + ... + n15 + row number of empty tile
Find N for the following state:
 1  2  3  4
 5 10  7  8
 9  6 11 12
13 14 15 __
n2 = 0   n3 = 0   n4 = 0
n5 = 0   n6 = 0   n7 = 1
n8 = 1   n9 = 1   n10 = 4
n11 = 0  n12 = 0  n13 = 0
n14 = 0  n15 = 0
N = 7 + 4 = 11
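The quantity N above is easy to compute mechanically. A sketch (boards as 16 numbers in row-major order, with 0 denoting the blank):

```python
def puzzle_N(board):
    """N = (number of permutation inversions among the tiles)
         + (1-based row of the empty tile)."""
    tiles = [t for t in board if t != 0]
    inversions = sum(
        1
        for i, tile in enumerate(tiles)
        for later in tiles[i + 1:]
        if later < tile                      # a smaller tile appears after this one
    )
    empty_row = board.index(0) // 4 + 1      # 1-based row of the blank
    return inversions + empty_row

# The board from the slides: 1 2 3 4 / 5 10 7 8 / 9 6 11 12 / 13 14 15 _
state = [1, 2, 3, 4, 5, 10, 7, 8, 9, 6, 11, 12, 13, 14, 15, 0]
print(puzzle_N(state))  # 7 + 4 = 11
```

The same function reproduces the parity argument used later: the goal board gives N = 4 while Loyd's 14-15-swapped board gives N = 5, so the two differ in parity and one is unreachable from the other.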
Proposition: (N mod 2) is invariant under any legal move
of the empty tile
Proof:
Any horizontal move of the empty tile leaves N unchanged
A vertical move of the empty tile changes N by an even
increment
(±1 ±1 ±1 ±1)
For a goal state g to be reachable from a state s, a necessary &
sufficient condition is that N(g) and N(s) have the same parity
The state graph consists of two connected components of equal
size
s =   1  2  3  4      s' =   1  2  3  4
      5  6 __  7             5  6 11  7
      9 10 11  8             9 10 __  8
     13 14 15 12            13 14 15 12
N(s') = N(s) + 3 + 1
15-Puzzle
Sam Loyd offered $1,000 of his own money to
the first person who would solve the following
problem:
 1  2  3  4        1  2  3  4
 5  6  7  8    ?   5  6  7  8
 9 10 11 12        9 10 11 12
13 14 15 __       13 15 14 __
N = 4              N = 5
So, the second state is not
reachable from the first, and
Sam Loyd took no risk with
his money ...
What is the Actual State Space?
a) The set of all states?
[e.g., a set of 16! states for the 15-puzzle]
etc.
17 boards farthest away from goal state (80 moves)
[Figure: the 17 boards that are farthest from the goal state; each requires 80 moves to reach, out of over 10 trillion reachable states.]
Intriguing similarities: each number has its own few locations.
Interesting machine learning task:
learn to recognize the hardest boards!
(Extremal Combinatorics, e.g., LeBras, Gomes, and Selman, AAAI-12)
These search problems are extremely complex!
VLSI layout requires positioning millions of components
& connections on a chip to minimize area, circuit delays
& stray capacitances, and to maximize manufacturing yield.
The layout problem comes after the logical design phase, and is
usually split into two parts:
Cell layout: the primitive components of the circuit are grouped
into cells, each of which performs some recognized function. In cell
layout, the aim is to place the cells on the chip so that they do not
overlap & there is room for the connecting wires to be placed between
the cells.
Channel routing: finds a specific route for each wire through the
gaps between the cells.
Having formulated some problems, we now need to
solve them. This is done by a search through the state
space.
These search techniques use an explicit search tree
that is generated by the initial state and the
successor function.
The essence of search is following up one option &
putting the others aside for later, in case the first
choice does not lead to a solution.
The choice of which state to expand is determined by
the search strategy.
In general, we may have a search graph
instead of a search tree, when the same
state can be reached from multiple paths.
A node is a data structure with five components:
STATE: the state in the state space to which the node
corresponds;
PARENT-NODE: the node in the search tree that generated this
node;
ACTION: the action that was applied to the parent to generate the
node;
PATH-COST: the cost, traditionally denoted by g ( n ) , of the path
from the initial state to the node,
as indicated by the parent pointers;
DEPTH: the number of steps
along the path from the initial state.
If a state is too large, it may be
preferable to only represent the initial
state and (re-)generate the other
states on demand.
Fringe
• Set of search nodes that have not been
expanded yet
• Implemented as a queue FRINGE
– INSERT(node,FRINGE)
– REMOVE(FRINGE)
• The ordering of the nodes in FRINGE defines
the search strategy
The general tree-search algorithm.
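The general scheme can be sketched in a few lines; the fringe is a queue, and the order in which REMOVE picks nodes defines the strategy (FIFO gives breadth-first, LIFO gives depth-first). The toy problem below is an assumption for illustration:

```python
from collections import deque

def tree_search(initial, successors, is_goal, fifo=True):
    """Generic tree search: fringe holds (state, path-so-far) pairs."""
    fringe = deque([(initial, [initial])])
    while fringe:
        # REMOVE: FIFO (breadth-first) or LIFO (depth-first)
        state, path = fringe.popleft() if fifo else fringe.pop()
        if is_goal(state):
            return path
        # INSERT every successor back into the fringe
        for nxt in successors(state):
            fringe.append((nxt, path + [nxt]))
    return None

# Toy example: count up from 0 to 5 by +1 or +2 steps.
path = tree_search(0,
                   lambda s: [s + 1, s + 2] if s < 5 else [],
                   lambda s: s == 5)
print(path)
```

Because this is a pure tree search, it never checks for repeated states; breadth-first order still guarantees a path with the fewest steps (here, three steps from 0 to 5).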
The following table lists the time and memory required for a
breadth-first search with
branching factor b = 10, for various values of the solution depth
d.
The table assumes that 10,000 nodes can be generated per
second
A node requires 1000 bytes of storage.
Many search problems fit roughly within these assumptions (give
or take a factor of 100) when run on a modern personal computer.
Time and memory requirements for breadth-first search.
The numbers shown assume branching factor b = 10;
10,000 nodes/second; 1000 bytes/node.
Completeness &
Optimality?
Time & Space Complexity
o Uniform-cost search is guided by path costs rather than depths, so
its complexity cannot easily be characterized in terms of b and d.
o Instead, let C* be the cost of the optimal solution, & assume that
every action costs at least ε.
o Then the algorithm's worst-case time and space complexity is
O(b^(1+⌊C*/ε⌋)), which can be much greater than b^d.
o This is because uniform-cost search can, and often does, explore
large trees of small steps before exploring paths involving large
and perhaps useful steps.
o When all step costs are equal, b^(1+⌊C*/ε⌋) is just b^(d+1).
BFS
DFS
The problem of unbounded trees can be alleviated by
supplying depth-first search with a predetermined depth
limit l.
Nodes at depth l are treated as if they have no successors.
It introduces an additional source of incompleteness if we
choose l < d, that is, the shallowest goal is beyond the
depth limit.
Depth-limited search will also be nonoptimal if we choose l
> d.
Its time complexity is O(b^l) and its space complexity is
O(bl).
Depth-first search can be viewed as a special case of
depth-limited search with l = ∞.
function ID-DFS(problem) returns solution/fail
for depth = 0 to ∞ do
result ← DLS(problem,depth)
if result ≠ cutoff then return result
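The ID-DFS pseudocode above can be made runnable. A sketch, with a toy successor function chosen for illustration (not from the slides):

```python
CUTOFF = object()  # sentinel: the depth limit was hit somewhere below

def dls(state, successors, is_goal, limit, path=()):
    """Depth-limited search, returning a path, CUTOFF, or None (failure)."""
    if is_goal(state):
        return list(path) + [state]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for nxt in successors(state):
        result = dls(nxt, successors, is_goal, limit - 1, tuple(path) + (state,))
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result
    return CUTOFF if cutoff_occurred else None

def id_dfs(initial, successors, is_goal, max_depth=50):
    """Run DLS with limits 0, 1, 2, ... until a result is found."""
    for depth in range(max_depth + 1):
        result = dls(initial, successors, is_goal, depth)
        if result is not CUTOFF:
            return result

path = id_dfs(0, lambda s: [s + 1, s + 2], lambda s: s == 5)
print(path)  # [0, 1, 3, 5] — a shallowest solution
```

The CUTOFF sentinel matters: it distinguishes "no solution within this limit" (try a deeper limit) from "no solution at all" (stop).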
Unfortunately, like all rules of discovery and invention,
heuristics are fallible.
A heuristic is only an informed guess of the next step to be
taken in solving a problem.
It is often based on experience or intuition.
Because heuristics use limited information, such as
knowledge of the present situation or descriptions of states
currently on the open list, they are not always able to
predict the exact behavior of the state space farther along in
the search.
A heuristic can lead a search algorithm to a suboptimal solution
or fail to find any solution at all.
This is an inherent limitation of heuristic search. It cannot be
eliminated by “better” heuristics or more efficient search
algorithms.
h(n) = estimated cost of the cheapest path
from node n to a goal node.
Some heuristics are better than others, and the better (more
informed) the heuristic is, the fewer nodes it needs to examine in
the search tree to find a solution.
An algorithm in which a node is selected for expansion
based on an evaluation function f(n)
Traditionally the node with the lowest evaluation function is
selected
Not an accurate name…expanding the best node first would
be a straight march to the goal.
Choose the node that appears to be the best
There is a whole family of BEST-FIRST-SEARCH algorithms with
different evaluation functions.
5. Expand: For each successor, m, of n:
   If m ∉ OPEN ∪ CLOSED   // expanding m for the first time
      Set g(m) = g(n) + C(n, m)
      Set f(m) = g(m) + h(m)
      Insert m in OPEN
   If m ∈ OPEN ∪ CLOSED
      Set g(m) = min{g(m), g(n) + C(n, m)}
      Set f(m) = g(m) + h(m)
      If f(m) has decreased & m ∈ CLOSED, move m to OPEN
6. Loop: Go to Step 2.
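A compact runnable sketch of this OPEN/CLOSED scheme, using a heap for OPEN; the example graph and heuristic values are assumptions for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph maps node -> {neighbour: edge cost}; h maps node -> estimate."""
    open_heap = [(h(start), start)]       # OPEN as a priority queue on f
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        f, n = heapq.heappop(open_heap)
        if n in closed:
            continue                      # stale entry; a cheaper copy was expanded
        if n == goal:
            path = []
            while n is not None:          # rebuild the path via parent pointers
                path.append(n)
                n = parent[n]
            return path[::-1], g[goal]
        closed.add(n)
        for m, cost in graph[n].items():
            new_g = g[n] + cost
            if m not in g or new_g < g[m]:   # first visit, or g(m) decreased
                g[m] = new_g
                parent[m] = n
                heapq.heappush(open_heap, (new_g + h(m), m))  # f = g + h
    return None, float("inf")

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12}, "B": {"G": 3}, "G": {}}
h = {"S": 5, "A": 4, "B": 2, "G": 0}.get   # admissible estimates
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'A', 'B', 'G'] 6
```

Instead of physically moving a node from CLOSED back to OPEN when f decreases, this version pushes a fresh heap entry and discards stale ones on pop, which has the same effect.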
[Figure: an example graph with edge costs and an estimated heuristic value at each node; node 12 is the goal state, so its heuristic value is 0.]
[Figure: the same graph after running A*.]
CLOSED: 1(12), 2(12), 6(12), 5(13), 10(13), 11(11), 12(13)
[Figure: exercise graph with edge costs and heuristic values; apply the A* algorithm. Node 6 is the goal with heuristic value 0.]
How Heuristics are Developed
Consider the 8-puzzle problem. The start state of
the puzzle is a random configuration, and the
goal state is as shown.
If the blank is in the middle of the grid, the branching factor is 4;
if it is on an edge, the branching factor is 3;
if it is in a corner, the branching factor is 2.
So, an exhaustive search of the search tree would
need to examine around 3^20 states, which is around
3.5 billion.
In all, how many states are possible in the 8-puzzle problem?
Because there are only 9! or 362,880 possible states,
the search tree could clearly be cut down
significantly by avoiding repeated states.
It is useful to find ways to reduce the search tree
further, in order to devise a way to solve the problem
efficiently.
A heuristic would help us to do this, by telling us
approximately how many moves a given state is from
the goal state.
GOAL:  1 2 3        1 2 3
       8 _ 4        7 8 4
       7 6 5        _ 6 5
[Figure: the successors of the right-hand board generated by moving the blank left, up, and right.]
h2(node) = 3 + 3 + 2 + 2 + 1 + 2 + 2 + 3 = 18
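The two classic 8-puzzle heuristics can be sketched in code: h1 counts misplaced tiles, h2 sums city-block (Manhattan) distances. The goal layout follows the slides; the start state below is an assumed example:

```python
GOAL = [1, 2, 3, 8, 0, 4, 7, 6, 5]   # goal layout from the slides; 0 is the blank

def h1(board):
    """Number of tiles not in their goal position (blank excluded)."""
    return sum(1 for tile, goal in zip(board, GOAL) if tile != 0 and tile != goal)

def h2(board):
    """Sum of Manhattan distances of each tile from its goal position."""
    dist = 0
    for pos, tile in enumerate(board):
        if tile == 0:
            continue
        goal_pos = GOAL.index(tile)
        dist += abs(pos // 3 - goal_pos // 3) + abs(pos % 3 - goal_pos % 3)
    return dist

state = [2, 8, 3, 1, 6, 4, 7, 0, 5]  # an example start state (an assumption)
print(h1(state), h2(state))  # 4 5
```

Both are admissible: h1 because every misplaced tile needs at least one move, h2 because a tile can decrease its Manhattan distance by at most 1 per move; h2 is the more informed of the two.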
[Figure: hill climbing on the 8-puzzle with the evaluation below; a start state with h = -3 and a neighbor with h = -1; successor values include -5, -5, -2, -3, -4.]
f(n) = -(number of tiles out of place)
Hill-climbing does not look ahead beyond the immediate
neighbors of the current state.
"Like climbing Everest in thick fog with amnesia”
[Figure: a hill-climbing trace on the 8-puzzle from a start state scoring -4, through neighbors scoring -3 and -4, to the goal scoring 0.]
Steepest-Ascent Hill Climbing
A variation on simple hill climbing.
Instead of moving to the first state that is better, move
to the best possible state that is one move away.
Consider all the moves from the current state and select
the best one as the next state.
In steepest-ascent hill climbing you will always
make your next state the best successor of your
current state, and will only make a move if that
successor is better than your current state.
The order of operators does not matter.
Not just climbing to a better state, but climbing up the
steepest gradient.
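The rule above can be sketched directly; the toy objective below is an assumption for illustration:

```python
def steepest_ascent(state, successors, value):
    """Move to the best successor, but only while it improves on the
    current state; stop at a local maximum (or plateau)."""
    while True:
        best = max(successors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state                 # no successor is strictly better
        state = best

# Toy example: maximize f(x) = -(x - 7)^2 by +/- 1 steps on the integers.
f = lambda x: -(x - 7) ** 2
print(steepest_ascent(0, lambda x: [x - 1, x + 1], f))  # 7
```

Note the strict inequality: on a plateau (a successor exactly as good as the current state) this version stops rather than wandering, one of several reasonable design choices.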
Algorithm: Steepest-Ascent Hill
Climbing
Blocks World
Start:        Goal:
  A             D
  D             C
  C             B
  B             A
[Figure: heuristic scores for the start state, the goal state, and the successor states obtained by moving blocks, e.g. 0, 4, 2, -6, 6. What could the successors be? What are their scores?]
Hill Climbing: Conclusion
• Can be very inefficient in a large, rough
problem space.
Simulated Annealing
Generate a new neighbor from current state.
◦ If it’s better take it.
◦ If it’s worse then take it with some probability
proportional to the temperature and the delta
(difference) between the new and old states.
[Figure: cost function C vs. number of iterations, comparing hill climbing with simulated annealing at the final temperature.]
Upward moves may occur early on, but as the process
progresses, only relatively small upward moves are
allowed, until finally the process converges to a
local minimum.
• Algorithm: Simulated Annealing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue
with the initial state as the current state.
2. Initialize BEST-SO-FAR to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the
current state.
(a) Select an operator that has not yet been applied to the current state & apply it to
produce a new state.
(b) Evaluate the new state. Compute ∆E = (value of current) - (value of new state)
• If the new state is a goal state, then return it and quit.
• If it is not a goal state but is better than the current state, then make it the current
state. Also set
BEST-SO-FAR to this new state.
• If it is not better than the current state, then make it the current state with probability p'
as defined. This step is usually implemented by invoking a random number generator to
produce a number in the range [0,1]. If that number is less than p', then the move is
accepted. Otherwise, do nothing.
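A runnable sketch of this loop, minimizing an objective: a worse neighbor is accepted with probability e^(-ΔE/T). The geometric cooling schedule and the toy objective are assumptions, not from the slides:

```python
import math
import random

def simulated_annealing(state, neighbour, energy,
                        t0=10.0, cooling=0.995, t_final=1e-3, seed=0):
    """Minimize `energy`, accepting uphill moves with probability exp(-dE/T)."""
    rng = random.Random(seed)
    best = state
    t = t0
    while t > t_final:
        nxt = neighbour(state, rng)
        delta = energy(nxt) - energy(state)
        # Always accept improvements; accept worse moves probabilistically.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            state = nxt
            if energy(state) < energy(best):
                best = state             # BEST-SO-FAR, as in the algorithm above
        t *= cooling                     # one simple annealing schedule
    return best

# Toy example: minimize a bumpy 1-D function over the integers.
energy = lambda x: (x - 3) ** 2 + 4 * math.cos(2 * x)
move = lambda x, rng: x + rng.choice([-1, 1])
result = simulated_annealing(0, move, energy)
print(result, energy(result))
```

Keeping BEST-SO-FAR separately matters: the walk may drift uphill late in the run, but the answer returned is never worse than the best state ever visited.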
Annealing Schedule
It has three/four components:
Examples of sentences
The moon is made of paneer
If A is true then B is true
A is false
All humans are mortal
Knowledge Base
Knowledge Base: set of sentences represented in a
knowledge representation language and represents
assertions about the world.
Inference rule: when one ASKs questions of the KB, the
answer should follow from what has been TELLed to the KB
previously.
The agent maintains a knowledge base, KB, which may initially
contain some background knowledge.
Each time the agent program is called, it does three things.
1.It TELLS the knowledge base what it perceives.
2.It ASKS the knowledge base what action it should perform.
3.The agent records its choice with TELL and executes the action.
The second TELL is necessary to let the knowledge base know that
the hypothetical action has actually been executed.
• Performance measure
– gold +1000, death -1000
– -1 per step, -10 for using the arrow
• Environment
– Squares adjacent to wumpus are smelly
– Squares adjacent to pit are breezy
– Glitter iff gold is in the same square
– Shooting kills wumpus if you are facing it
– Shooting uses up the only arrow
– Grabbing picks up gold if in same square
– Releasing drops the gold in same square
• Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
• Sensors: Stench, Breeze, Glitter, Bump, Scream
Wumpus world characterization
• Fully Observable? No – only local perception
• Deterministic? Yes – outcomes exactly specified
• Static? Yes – Wumpus and Pits do not move
• Discrete? Yes
• Episodic? No – sequential at the level of actions
• Single-agent? Yes – The wumpus itself is essentially a
natural feature, not another agent
A typical Wumpus world
• The agent always
starts in the field
[1,1].
• The task of the
agent is to find the
gold, return to the
field [1,1] and
climb out of the
cave.
Agent in a Wumpus world: Percepts
• The agent perceives
– a stench in the square containing the wumpus and in the
adjacent squares (not diagonally)
– a breeze in the squares adjacent to a pit
– a glitter in the square where the gold is
– a bump, if it walks into a wall
– a woeful scream everywhere in the cave, if the wumpus is killed
• The percepts will be given as a five-symbol list:
– If there is a stench, and a breeze, but no glitter, no
bump, and no scream, the percept is
[Stench, Breeze, None, None, None]
• The agent can not perceive its own location.
Exploring a Wumpus world
Directly observed:
S: stench
B: breeze
G: glitter
A: agent
Inferred (mostly):
OK: safe square
P: pit
W: wumpus
Exploring a wumpus world
The first step taken
by the agent in the
wumpus world.
(a) The initial situation, after percept
[None, None, None, None, None].
(b) After one move, with percept
[None, Breeze, None, None, None].
In 1,1 we don’t get B or S, so we know 1,2 and 2,1 are safe.
Move to 2,1.
In 2,1 we feel a breeze.
So we know there is a pit in 3,1 or 2,2.
Exploring a wumpus world
So go back to 1,1
then to 1,2 where
we smell a stench.
Percept?
[Stench, None, None,
None, None]
Stench in 1,2, so
the wumpus is in
1,3 or 2,2.
We don't smell a stench in 2,1, so 2,2 can't be the wumpus, so 1,3 must be
the wumpus.
We don't feel a breeze in 1,2, so 2,2 can't be a pit, so 3,1 must be a pit.
2,2 has neither pit nor wumpus and is therefore okay.
We move to 2,2. We don’t get any sensory input.
So we know that 2,3 and 3,2 are ok.
Move to 3,2, where we observe stench, breeze and glitter!
We have found the gold and won.
• Can represent general knowledge about an environment by a set
of rules and facts
Models
• Models are formal definitions of possible states of the
world
• We say m is a model of a sentence α if α is true in m
• M(α) is the set of all models of α
• Then KB ╞ α if and only if M(KB) ⊆ M(α)
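The definition KB ╞ α iff M(KB) ⊆ M(α) suggests a brute-force decision procedure: enumerate every assignment and check that no model of KB falsifies α. A tiny sketch, with KB and α written as Python predicates over a model dictionary (the encoding is an assumption for illustration):

```python
from itertools import product

def entails(symbols, kb, alpha):
    """Model checking: KB entails alpha iff every model of KB satisfies alpha."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False                  # found a model of KB where alpha is false
    return True

# Wumpus-style example: B11 <=> (P12 or P21), plus the observation ~B11.
symbols = ["B11", "P12", "P21"]
kb = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
alpha = lambda m: not m["P12"]            # "there is no pit in [1,2]"
print(entails(symbols, kb, alpha))  # True
```

The loop runs over 2^n assignments, so this is exponential in the number of symbols, but it is a direct, sound and complete implementation of the set-inclusion definition.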
Entailment in the Wumpus World
• Situation after detecting
nothing in [1,1], moving right,
breeze in [2,1]
• What are the possible models? – assume the only
possibilities are pit or no pit in each square.
[Diagram: the agent after moving right, with breeze in [2,1] and the unvisited squares marked ?]
Wumpus Models
[Figure: the eight possible models for pits in the three unvisited squares, given the breeze in [2,1]; the models of KB are compared with the models of the query.]
Logic in general
• Logics are formal languages for representing information
such that conclusions can be drawn
• Syntax defines the sentences in the language
• Semantics define the "meaning" of sentences;
– i.e., define truth of a sentence in a world
• E.g., the language of arithmetic
• x+2 ≥ y is a sentence; x2+y > {} is not a sentence
– x+2 ≥ y is true iff the number x+2 is no less than the number
y.
– x+2 ≥ y is true in a world where x = 7, y = 1
– x+2 ≥ y is false in a world where x = 0, y = 6
Syntax of Propositional Logic
TRUE and FALSE are sentences
The propositional variables P, Q, R, … are sentences
Parentheses around a sentence forms a sentence
Combining sentences with the following logical connectives
forms a sentence
Symbol   Example   Name            Sentence Name
∧        P ∧ Q     and             Conjunction
∨        P ∨ Q     or              Disjunction
¬        ¬P        not             Negation
⇒        P ⇒ Q     implies         Implication
⇔        P ⇔ Q     is equivalent   Equivalence (biconditional)
Idempotent:   P ∨ P ≡ P;   P ∧ P ≡ P
Associative:  (P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R);   (P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R)
Commutative:  P ∨ Q ≡ Q ∨ P;   P ∧ Q ≡ Q ∧ P;   P ⇔ Q ≡ Q ⇔ P
De Morgan:    ¬(P ∨ Q) ≡ ¬P ∧ ¬Q;   ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
Implication elimination:        P ⇒ Q ≡ ¬P ∨ Q
Double implication elimination: P ⇔ Q ≡ (P ⇒ Q) ∧ (Q ⇒ P)
• Definition of Argument: a sequence of statements (hypotheses) ending in a conclusion; the argument is valid if the conclusion must be true whenever all the hypotheses are true.
Rule of inference          Tautology                          Name
p, p ⇒ q  ∴ q              [p ∧ (p ⇒ q)] ⇒ q                  Modus ponens
¬q, p ⇒ q  ∴ ¬p            [¬q ∧ (p ⇒ q)] ⇒ ¬p                Modus tollens
p ⇒ q, q ⇒ r  ∴ p ⇒ r      [(p ⇒ q) ∧ (q ⇒ r)] ⇒ (p ⇒ r)      Hypothetical syllogism
p ∨ q, ¬p  ∴ q             [(p ∨ q) ∧ ¬p] ⇒ q                 Disjunctive syllogism
p  ∴ p ∨ q                 p ⇒ (p ∨ q)                        Addition
p ∧ q  ∴ p                 (p ∧ q) ⇒ p                        Simplification
p, q  ∴ p ∧ q              [(p) ∧ (q)] ⇒ (p ∧ q)              Conjunction
p ∨ q, ¬p ∨ r  ∴ q ∨ r     [(p ∨ q) ∧ (¬p ∨ r)] ⇒ (q ∨ r)     Resolution
An example
Using the rules of inference to build arguments
1. It is not sunny this afternoon and it is colder than yesterday.
2. If we go swimming it is sunny.
3. If we do not go swimming then we will take a canoe trip.
4. If we take a canoe trip then we will be home by sunset.
5. We will be home by sunset.
Propositions:
p  It is sunny this afternoon
q  It is colder than yesterday
r  We go swimming
s  We will take a canoe trip
t  We will be home by sunset (the conclusion)
Hypotheses:
1. ¬p ∧ q
2. r ⇒ p
3. ¬r ⇒ s
4. s ⇒ t
5. ∴ t
Step           Reason
1. ¬p ∧ q      Hypothesis
2. ¬p          Simplification using (1)
3. r ⇒ p       Hypothesis
4. ¬r          Modus tollens using (2) and (3)
5. ¬r ⇒ s      Hypothesis
6. s           Modus ponens using (4) and (5)
7. s ⇒ t       Hypothesis
8. t           Modus ponens using (6) and (7)
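As a sanity check, the validity of this argument can be confirmed by brute force over all truth assignments (a sketch, not from the slides): the conclusion t must hold in every assignment that satisfies all four hypotheses.

```python
from itertools import product

def implies(a, b):
    """Material implication a -> b."""
    return (not a) or b

# Valid iff t is true whenever ~p&q, r->p, ~r->s, and s->t all hold.
valid = all(
    t
    for p, q, r, s, t in product([True, False], repeat=5)
    if (not p and q)          # 1. ~p AND q
    and implies(r, p)         # 2. r -> p
    and implies(not r, s)     # 3. ~r -> s
    and implies(s, t)         # 4. s -> t
)
print(valid)  # True
```

The check agrees with the eight-step proof: the hypotheses force p false, hence r false, hence s true, hence t true.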
Agents have no independent access to
the world
• The reasoning agent often gets its knowledge about the facts of
the world as a sequence of logical sentences.
• It must draw conclusions only from them , without independent
access to the world.
• Thus it is very important that the agent’s reasoning is sound!
Wumpus world sentences
• Let Pi,j be true if there is a pit in [i, j].
• Let Bi,j be true if there is a breeze in [i, j].
• We have
– ¬ P1,1
– ¬B1,1
– B2,1
• "Pits cause breezes in adjacent squares"
– B1,1 ⇔ (P1,2 ∨ P2,1)
– B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• Proposition Symbols for each i,j:
– Let Pi,j be true if there is a pit in square i,j
– Let Bi,j be true if there is a breeze in square i,j
• Sentences in KB
– “There is no pit in square 1,1”
R1: ¬P1,1
What is the difference between a predicate and a function?
A function will return a value; a predicate will return T or F.
Components of First-Order Logic
• Sentence → AtomicSentence | ¬Sentence
           | Sentence Connective Sentence
           | Quantifier Variable, … Sentence
           | (Sentence)
• AtomicSentence → Predicate(Term, …) | Term = Term
• Term → Function(Term, …) | Constant | Variable
• Connective → ∧ | ∨ | ⇒ | ⇔
• Quantifier → ∀ | ∃
Can one quantifier be replaced by the other?
o First show one student failed in Maths; then show he was the only
one.
The best score in Maths is better than the best score in Arts.
One way of showing it is that for every student x who has taken Arts,
there is a student y who has taken Maths and his score is better
than the score of x in Arts.
Predicates needed to translate these statements?
oGaul(x)
oHostile(z)
oPotion(y)
oCriminal(x)
oSells(x, y, z) x sells y to z
oOwns(x, y) x owns y
Druid is a Gaul.
Gaul(Druid)
1. ∀x ∀y ∀z [Gaul(x) ∧ Potion(y) ∧ Hostile(z) ∧ Sells(x, y, z) ⇒ Criminal(x)]
2. Hostile(Rome)
3. ∃y Potion(y) ∧ Owns(Rome, y)
4. ∀y Potion(y) ∧ Owns(Rome, y) ⇒ Sells(Druid, y, Rome)
5. Gaul(Druid)
6. Goal ???
– Criminal(Druid)
Forward Chaining
Start with the formulas & reach the Goal.
3. ∃y Potion(y) ∧ Owns(Rome, y)
   Potion(P) ∧ Owns(Rome, P)                               // Existential Instantiation
4. ∀y Potion(y) ∧ Owns(Rome, y) ⇒ Sells(Druid, y, Rome)
   Potion(P) ∧ Owns(Rome, P) ⇒ Sells(Druid, P, Rome)       // Universal Instantiation, {y / P}
   Sells(Druid, P, Rome)
From rule 1 with {x / Druid, y / P, z / Rome}, using Hostile(Rome), Gaul(Druid), Potion(P), Sells(Druid, P, Rome):
Criminal(Druid)
Backward Chaining
Start with the Goal & try to deduce whether it is true or not.
Goal: Criminal(Druid)
The goal matches the RHS of rule 1, where x has been instantiated with Druid.
As soon as we instantiate y with P, every other occurrence of y also becomes P: Potion(P) ∧ Owns(Rome, P).
As soon as we instantiate z with Rome, every other occurrence of z also becomes Rome.
Criminal(Druid)
Goal of unification: finding σ
Unification
P Q σ
Student(x) Student(Ram) {x/Ram}
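A simple unification sketch for this kind of table, with terms represented as strings (lowercase strings are variables, capitalized strings are constants) and compound terms as tuples; the representation is an assumption, and the occurs-check is omitted for brevity:

```python
def is_variable(t):
    """Lowercase strings stand for variables (x, y, ...)."""
    return isinstance(t, str) and t[0].islower()

def substitute(t, sigma):
    """Apply the substitution sigma to a term."""
    while is_variable(t) and t in sigma:
        t = sigma[t]
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, sigma) for a in t[1:])
    return t

def unify(p, q, sigma=None):
    """Return a substitution unifying p and q, or None on failure."""
    sigma = dict(sigma or {})
    p, q = substitute(p, sigma), substitute(q, sigma)
    if p == q:
        return sigma
    if is_variable(p):
        sigma[p] = q
        return sigma
    if is_variable(q):
        sigma[q] = p
        return sigma
    if isinstance(p, tuple) and isinstance(q, tuple) \
            and p[0] == q[0] and len(p) == len(q):
        for a, b in zip(p[1:], q[1:]):   # unify arguments left to right
            sigma = unify(a, b, sigma)
            if sigma is None:
                return None
        return sigma
    return None                           # clash: different functors/constants

# Student(x) unifies with Student(Ram) under {x/Ram}:
print(unify(("Student", "x"), ("Student", "Ram")))  # {'x': 'Ram'}
```

Two ground atoms with different constants, e.g. Student(Ram) and Student(Shyam), correctly fail to unify.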