2025 Lecture05 P1 PL BasedLogicalAgents
Knowledge-based agents
Problem-solving agents
• These agents know things in a very limited,
inflexible sense.
• E.g., an 8-puzzle agent cannot deduce pairs
of unsolvable states from their parities.
Model for reasoning: An example
• A simple model for reasoning: the agent perceives facts (e.g., A, Not C) that are added to the KB; inference with a rule such as A ⇒ (B or C) then derives B, which is also added to the KB (now A, Not C, B).
The Wumpus world
PEAS Description
• Environment
• 4×4 grid of rooms; the agent starts in square [1,1], facing right
• The locations of Gold and Wumpus are random
• Each square can be a pit, with probability 0.2
• Performance measure
• +1000 for climbing out of the cave with the gold, -1000 for death
• -1 per step, -10 for using the arrow
• The game ends when the agent dies or climbs out of the cave
Exploring a Wumpus world
An agent in the Wumpus world
• A Wumpus-world agent using propositional logic will have a
KB of 64 distinct proposition symbols, 155 sentences.
¬P1,1
¬W1,1
Bx,y ⇔ (Px,y+1 ∨ Px,y−1 ∨ Px+1,y ∨ Px−1,y)
Sx,y ⇔ (Wx,y+1 ∨ Wx,y−1 ∨ Wx+1,y ∨ Wx−1,y)
W1,1 ∨ W1,2 ∨ … ∨ W4,4 (at least one Wumpus)
¬W1,1 ∨ ¬W1,2 (at most one Wumpus)
¬W1,1 ∨ ¬W1,3
…
Propositional logic
Propositional logic: Syntax
Logics in general
• Models (or possible worlds) are mathematical abstractions
that fix the truth or falsehood of every relevant sentence.
• E.g., all possible assignments of real numbers to 𝑥 and 𝑦
• 𝑚 satisfies (or is a model of) 𝛼 if 𝛼 is true in model 𝑚
• 𝑀(𝛼) = the set of all models of 𝛼
Propositional logic: Semantics
• Each model specifies true/false for each proposition symbol.
• E.g., 𝑚1 = {𝑃1,2 = 𝑓𝑎𝑙𝑠𝑒, 𝑃2,2 = 𝑓𝑎𝑙𝑠𝑒, 𝑃3,1 = 𝑡𝑟𝑢𝑒}, 8 possible models
Entailment in logic
• A sentence follows logically from another sentence: 𝜶 ⊨ 𝜷
• 𝜶 ⊨ 𝜷 if and only if, in every model in which 𝜶 is true, 𝜷 is also true,
i.e., 𝑀(𝛼) ⊆ 𝑀(𝛽)
• For example,
• 𝑥 = 0 entails 𝑥𝑦 = 0
• The KB containing “Apple is red” and “Tomato is red” entails “Either
the apple or the tomato is red”
Logical inference
• 𝐾𝐵 ⊢𝑖 𝛼 means 𝛼 can be derived from 𝐾𝐵 by procedure 𝑖
• Soundness: 𝑖 is sound if whenever 𝐾𝐵 ⊢𝑖 𝛼, it is also true
that 𝐾𝐵 ⊨ 𝛼
• Completeness: 𝑖 is complete if whenever 𝐾𝐵 ⊨ 𝛼, it is also
true that 𝐾𝐵 ⊢𝑖 𝛼
• That is, a complete procedure will answer any question whose
answer follows from what is known by the KB.
World and representation
A simple knowledge base
• Symbols for each position [𝑖, 𝑗]
• 𝑃𝑖,𝑗: there is a pit in [𝑖, 𝑗]
• 𝐵𝑖,𝑗: there is a breeze in [𝑖, 𝑗]
R1: ¬P1,1
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1
A simple inference procedure
• Given: a set of sentences, 𝑲𝑩, and sentence 𝜶
• Goal: answer 𝑲𝑩 ⊨ 𝜶? = “Does 𝑲𝑩 semantically entail 𝜶?”
• In all interpretations in which 𝐾𝐵’s sentences are true, is 𝛼 also true?
• E.g., in the Wumpus world, 𝐾𝐵 ⊨ 𝑃1,2 ? = “Is there a pit in [1,2]?”
Model-checking approach
• Check if 𝛼 is true in every model in which 𝐾𝐵 is true.
• E.g., the Wumpus KB has 7 symbols → 2^7 = 128 models
• Draw a truth table to check “no pit in [1,2]”, i.e., ¬P1,2
Inference by (depth-first) enumeration
function TT-ENTAILS?(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  symbols ← a list of the proposition symbols in KB and α
  return TT-CHECK-ALL(KB, α, symbols, { })

function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
  if EMPTY?(symbols) then
    if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
    else return true // when KB is false, always return true
  else
    P ← FIRST(symbols)
    rest ← REST(symbols)
    return (TT-CHECK-ALL(KB, α, rest, model ∪ {P = true})
        and TT-CHECK-ALL(KB, α, rest, model ∪ {P = false}))

• Sound and complete; time complexity 𝑂(2^𝑛), space complexity 𝑂(𝑛)
Quiz 01: Model-checking approach
• Given a KB containing the following rules and facts
R1: IF hot AND smoky THEN fire
R2: IF alarm_beeps THEN smoky
R3: IF fire THEN sprinklers_on
F1: alarm_beeps
F2: hot
• Represent the KB in propositional logic with given symbols
• H = hot, S = smoky, F = fire, A = alarm_beeps, R = sprinklers_on
• Answer the question “Sprinklers_on?” by using the model-
checking approach.
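The enumeration procedure above is compact enough to sketch directly in Python. The sketch below is one possible encoding, not the slides' own: KB and query are boolean functions over a model dictionary, applied here to the Quiz 01 KB (H ∧ S ⇒ F, A ⇒ S, F ⇒ R, A, H).

```python
def tt_check_all(kb, alpha, symbols, model):
    # Depth-first enumeration of all assignments, as in TT-CHECK-ALL
    if not symbols:
        if kb(model):
            return alpha(model)
        return True  # when KB is false, the check holds vacuously
    p, rest = symbols[0], symbols[1:]
    return (tt_check_all(kb, alpha, rest, {**model, p: True}) and
            tt_check_all(kb, alpha, rest, {**model, p: False}))

def tt_entails(kb, alpha, symbols):
    return tt_check_all(kb, alpha, list(symbols), {})

# Quiz 01 KB: (H ∧ S ⇒ F) ∧ (A ⇒ S) ∧ (F ⇒ R) ∧ A ∧ H
def kb(m):
    return ((not (m['H'] and m['S'])) or m['F']) and \
           ((not m['A']) or m['S']) and \
           ((not m['F']) or m['R']) and m['A'] and m['H']

print(tt_entails(kb, lambda m: m['R'], ['H', 'S', 'F', 'A', 'R']))  # True
```

The query R (sprinklers_on) is entailed: A forces S, H ∧ S force F, and F forces R in every model satisfying the KB.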
Propositional theorem proving
• Proof by Resolution
• Forward and Backward Chaining
Inference rules approach
• Theorem proving: Apply rules of inference directly to the
sentences in KB to construct a proof of the desired sentence
without consulting models
• More efficient than model checking when the number of
models is large but the length of the proof is short
Logical equivalence
• Two sentences, 𝛼 and 𝛽, are logically equivalent if they are
true in the same set of models.
𝜶 ≡ 𝜷 𝒊𝒇𝒇 𝜶 ⊨ 𝜷 𝒂𝒏𝒅 𝜷 ⊨ 𝜶
Validity
• A sentence is valid if it is true in all models.
• E.g., 𝑃 ∨ ¬𝑃, 𝑃 ⇒ 𝑃, (𝑃 ∧ (𝑃 ⇒ 𝑄)) ⇒ 𝑄
• Valid sentences are also known as tautologies.
• Validity is connected to inference via the Deduction Theorem
𝛼 ⊨ 𝛽 𝑖𝑓𝑓 𝛼 ⇒ 𝛽 𝑖𝑠 𝑣𝑎𝑙𝑖𝑑
Satisfiability
• A sentence is satisfiable if it is true in some model.
• E.g., 𝑃 ∨ 𝑄, 𝑃
• A sentence is unsatisfiable if it is true in no models.
• E.g., 𝑃 ∧ ¬𝑃
• Satisfiability is connected to inference via the following
𝛼 ⊨ 𝛽 𝑖𝑓𝑓 𝛼 ∧ ¬𝛽 𝑖𝑠 𝑢𝑛𝑠𝑎𝑡𝑖𝑠𝑓𝑖𝑎𝑏𝑙𝑒
→ Refutation or proof by contradiction
• The SAT problem determines the satisfiability of sentences
in propositional logic (NP-complete)
• E.g., in CSPs, the constraints are satisfiable by some assignment.
Quiz 02: Validity and Satisfiability
• Check the validity and satisfiability of the below sentences
using the truth table
1. (𝐴 ∨ 𝐵) ⇒ (𝐴 ∧ 𝐶)
2. (𝐴 ∧ 𝐵) ⇒ (𝐴 ∨ 𝐶)
3. ((𝐴 ∨ 𝐵) ∧ (¬𝐵 ∨ 𝐶)) ⇒ (𝐴 ∨ 𝐶)
4. (𝐴 ∨ ¬𝐵) ⇒ (𝐴 ∧ 𝐵)
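One way to mechanize the truth-table check, sketched in Python (the lambda encoding of sentences is my own choice): enumerate all models, then validity is "true in all" and satisfiability is "true in some". Sentences 1 and 2 from the quiz are used as examples.

```python
from itertools import product

def truth_values(symbols, sentence):
    # Evaluate `sentence` (a function model -> bool) in every model
    return [sentence(dict(zip(symbols, vals)))
            for vals in product([True, False], repeat=len(symbols))]

def is_valid(symbols, sentence):
    return all(truth_values(symbols, sentence))    # true in all models

def is_satisfiable(symbols, sentence):
    return any(truth_values(symbols, sentence))    # true in some model

# Sentence 1: (A ∨ B) ⇒ (A ∧ C)
s1 = lambda m: (not (m['A'] or m['B'])) or (m['A'] and m['C'])
# Sentence 2: (A ∧ B) ⇒ (A ∨ C)
s2 = lambda m: (not (m['A'] and m['B'])) or (m['A'] or m['C'])

print(is_valid(['A', 'B', 'C'], s1))  # False (e.g., A=true, C=false)
print(is_valid(['A', 'B', 'C'], s2))  # True
```

Sentence 1 is satisfiable but not valid; sentence 2 is valid, since A ∧ B already makes A ∨ C true.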
Inference and Proofs
• Proof: a chain of conclusions that leads to the desired goal
• Example sound rules of inference
• Modus Ponens: from α ⇒ β and α, infer β
• Modus Tollens: from α ⇒ β and ¬β, infer ¬α
• And-Introduction: from α and β, infer α ∧ β
• And-Elimination: from α ∧ β, infer α
Inference rules: An example
KB No. Sentences Explanation
𝑃∧𝑄 1 𝑃∧𝑄 From KB
4 𝑃 1 And-Elim
6 𝑄 1 And-Elim
35
Inference rules in Wumpus world
𝑅1: 𝑃1,1 Proof: 𝑃1,2
𝑅2: 𝐵1,1 (𝑃1,2 𝑃2,1 )
𝑅3: 𝐵2,1 (𝑃1,1 𝑃2,2 𝑃3,1 )
𝑅4: 𝐵1,1
𝑅5: 𝐵2,1
37
Monotonicity
• The set of entailed sentences only increases as information
is added to the knowledge base.
𝑖𝑓 𝐾𝐵 ⊨ 𝛼 𝑡ℎ𝑒𝑛 𝐾𝐵 ∧ 𝛽 ⊨ 𝛼
• Additional conclusions can be drawn without invalidating any
conclusion 𝛼 already inferred.
Proof by Resolution
• Proof by inference rules: sound but not complete
• If the rules are inadequate, then the goal may not be reachable.
• Resolution: a single inference rule that is sound and complete
• Yields a complete inference algorithm when coupled with any complete
search algorithm
• Unit resolution rule: given 𝑙1 ∨ ⋯ ∨ 𝑙𝑘 and 𝑚, where 𝑙𝑖 and 𝑚 are
complementary literals, infer
𝑙1 ∨ ⋯ ∨ 𝑙𝑖−1 ∨ 𝑙𝑖+1 ∨ ⋯ ∨ 𝑙𝑘
• Full resolution rule: given 𝑙1 ∨ ⋯ ∨ 𝑙𝑘 and 𝑚1 ∨ ⋯ ∨ 𝑚𝑛, where 𝑙𝑖 and
𝑚𝑗 are complementary literals, infer
𝑙1 ∨ ⋯ ∨ 𝑙𝑖−1 ∨ 𝑙𝑖+1 ∨ ⋯ ∨ 𝑙𝑘 ∨ 𝑚1 ∨ ⋯ ∨ 𝑚𝑗−1 ∨ 𝑚𝑗+1 ∨ ⋯ ∨ 𝑚𝑛
Inference rules in Wumpus world
R1: ¬P1,1
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1) [biconditional elimination, R2]
R7: (P1,2 ∨ P2,1) ⇒ B1,1 [And-Elimination, R6]
R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1) [contraposition, R7]
R9: ¬(P1,2 ∨ P2,1) [Modus Ponens, R8 and R4]
R10: ¬P1,2 ∧ ¬P2,1 [De Morgan, R9]
Inference rules in Wumpus world
R1: ¬P1,1
…
R11: ¬B1,2
R12: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3)
R13: ¬P2,2
R14: ¬P1,3
R15: P1,1 ∨ P2,2 ∨ P3,1
R16: P1,1 ∨ P3,1 [¬P2,2 in R13 resolves with P2,2 in R15]
R17: P3,1 [¬P1,1 in R1 resolves with P1,1 in R16]
Proof by Resolution
• Factoring: the resulting clause should contain only one copy
of each literal.
• E.g., resolving (𝐴 ∨ 𝐵) with (𝐴 ∨ ¬𝐵) obtains (𝐴 ∨ 𝐴) → reduced to 𝐴
Conjunctive Normal Form (CNF)
• Resolution applies only to clauses, i.e., disjunctions of literals
→ Convert all sentences in KB into clauses (CNF form)
• For example, convert 𝐵1,1 ⇔ (𝑃1,2 ∨ 𝑃2,1) into CNF
(¬𝐵1,1 ∨ 𝑃1,2 ∨ 𝑃2,1) ∧ (¬𝑃1,2 ∨ 𝐵1,1) ∧ (¬𝑃2,1 ∨ 𝐵1,1)
→ A conjunction of 3 clauses
Conversion to CNF
1. Eliminate ⇔: 𝛼 ⇔ 𝛽 ≡ (𝛼 ⇒ 𝛽) ∧ (𝛽 ⇒ 𝛼)
2. Eliminate ⇒: 𝛼 ⇒ 𝛽 ≡ ¬𝛼 ∨ 𝛽
3. The operator ¬ appears only in literals: “move ¬ inwards”
¬¬𝛼 ≡ 𝛼 (double-negation elimination)
¬(𝛼 ∧ 𝛽) ≡ ¬𝛼 ∨ ¬𝛽 (De Morgan)
¬(𝛼 ∨ 𝛽) ≡ ¬𝛼 ∧ ¬𝛽 (De Morgan)
4. Apply the distributivity law to distribute ∨ over ∧
(𝛼 ∧ 𝛽) ∨ 𝛾 ≡ (𝛼 ∨ 𝛾) ∧ (𝛽 ∨ 𝛾)
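The four steps can be sketched as a small recursive converter. The tuple-based sentence representation below is my own assumption for illustration, not the slides' notation: ('sym', name), ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g), ('iff', f, g).

```python
def elim(f):
    # Steps 1-2: eliminate biconditionals and implications
    op = f[0]
    if op == 'sym':
        return f
    if op == 'not':
        return ('not', elim(f[1]))
    if op == 'iff':
        a, b = elim(f[1]), elim(f[2])
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    if op == 'imp':
        return ('or', ('not', elim(f[1])), elim(f[2]))
    return (op, elim(f[1]), elim(f[2]))

def nnf(f):
    # Step 3: move ¬ inwards (assumes elim has already run)
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'sym':
            return f
        if g[0] == 'not':
            return nnf(g[1])                      # double negation
        if g[0] == 'and':                         # De Morgan
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':                          # De Morgan
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
    if f[0] == 'sym':
        return f
    return (f[0], nnf(f[1]), nnf(f[2]))

def dist(f):
    # Step 4: distribute ∨ over ∧
    if f[0] == 'or':
        a, b = dist(f[1]), dist(f[2])
        if a[0] == 'and':
            return ('and', dist(('or', a[1], b)), dist(('or', a[2], b)))
        if b[0] == 'and':
            return ('and', dist(('or', a, b[1])), dist(('or', a, b[2])))
        return ('or', a, b)
    if f[0] == 'and':
        return ('and', dist(f[1]), dist(f[2]))
    return f

def to_cnf(f):
    return dist(nnf(elim(f)))
```

For example, `to_cnf(('imp', ('sym', 'A'), ('sym', 'B')))` yields the clause ¬A ∨ B as `('or', ('not', ('sym', 'A')), ('sym', 'B'))`.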
Quiz 03: Conversion to CNF
• Convert the following sentences into CNF
1. (𝐴 ∧ 𝐵) ⇒ (𝐶 ⇒ 𝐷)
2. ((𝑃 ∨ 𝑄) ⇒ 𝑅) ∧ (¬𝑄 ⇒ 𝑃)
Resolution algorithm
• Proof by contradiction (resolution refutation): To show that 𝐾𝐵 ⊨ 𝛼, prove
𝐾𝐵 ∧ ¬𝛼 is unsatisfiable
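The refutation loop can be sketched in Python. The representation is an assumption for illustration: literals are strings ('R' or '-R'), clauses are frozensets. The example at the end uses the Quiz 04 KB in CNF.

```python
from itertools import combinations

def negate(lit):
    # 'R' <-> '-R'
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolvents(c1, c2):
    # All clauses obtained by resolving c1 with c2 on a complementary pair
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def pl_resolution(clauses):
    # True iff the clause set is unsatisfiable (empty clause derivable)
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True            # derived the empty clause
                new.add(r)
        if new <= clauses:
            return False                   # fixed point reached: satisfiable
        clauses |= new

# Quiz 04 KB in CNF, plus the negated conclusion W:
kb = {frozenset({'-R', 'U'}), frozenset({'-U', '-W'}), frozenset({'R', '-W'})}
print(pl_resolution(kb | {frozenset({'W'})}))  # True, so KB ⊨ ¬W
```

Termination follows because only finitely many clauses exist over a finite set of symbols, so the loop must reach a fixed point if the empty clause never appears.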
Quiz 04: The resolution algorithm
• Given the following hypotheses
• If it rains, Joe brings his umbrella.
• If Joe brings his umbrella, Joe does not get wet.
• If it does not rain, Joe does not get wet.
• Prove that Joe does not get wet.
48
Quiz 04: Resolution algorithm
• The KB contains the hypotheses: 𝑅 ⇒ 𝑈, 𝑈 ⇒ ¬𝑊, ¬𝑅 ⇒ ¬𝑊
• Check whether the sentence ¬𝑊 is entailed by the KB
No. Sentence Explanation
1 ¬𝑅 ∨ 𝑈 From KB
2 ¬𝑈 ∨ ¬𝑊 From KB
3 𝑅 ∨ ¬𝑊 From KB
4 𝑊 Negated conclusion
5 ¬𝑅 ∨ ¬𝑊 1 and 2
6 ¬𝑊 3 and 5
7 (empty clause) 4 and 6
→ KB ∧ 𝑊 is unsatisfiable, so KB ⊨ ¬𝑊
Horn clauses and Definite clauses
• Definite clause: a disjunction of literals of which exactly one
is positive.
• E.g., ¬𝑃 ∨ ¬𝑄 ∨ 𝑅 is a definite clause, whereas ¬𝑃 ∨ 𝑄 ∨ 𝑅 is not.
• Horn clause: a disjunction of literals of which at most one is
positive.
• All definite clauses are Horn clauses
• Goal clause: clauses with no positive literals
• Horn clauses are closed under resolution
• Resolving two Horn clauses will get back a Horn clause.
Propositional sentences and clauses
KB of definite clauses
• KBs containing only definite clauses are of particular interest.
• Every definite clause can be written as an implication.
• Premise (body) is a conjunction of positive literals and Conclusion
(head) is a single positive literal (fact) → easier to understand
• E.g., ¬𝑃 ∨ ¬𝑄 ∨ 𝑅 ≡ 𝑃 ∧ 𝑄 ⇒ 𝑅
• Inference can be done with forward-chaining and backward-
chaining algorithms
• This type of inference is the basis for logic programming.
• Deciding entailment can be done in linear time.
KB: Horn clauses vs. CNF clauses
• CNF clauses: disjunctions of literals (𝑙1 ∨ 𝑙2 ∨ ⋯ ∨ 𝑙𝑚)
• Horn clauses: a restricted form with at most one positive literal
Forward chaining
• Key idea: Fire any rule whose premises are satisfied in the
KB, add its conclusion to the KB, until the query is found.
• The inference process can be visualized as an AND-OR graph.
The forward chaining algorithm
function PL-FC-ENTAILS?(KB, q) returns true or false
  inputs: KB, the knowledge base, a set of propositional definite clauses
          q, the query, a proposition symbol
  count ← a table, where count[c] is the number of symbols in c's premise
  inferred ← a table, where inferred[s] is initially false for all symbols
  agenda ← a queue of symbols, initially symbols known to be true in KB
  while agenda is not empty do
    p ← POP(agenda)
    if p = q then return true
    if inferred[p] = false then
      inferred[p] ← true
      for each clause c in KB where p is in c.PREMISE do
        decrement count[c]
        if count[c] = 0 then add c.CONCLUSION to agenda
  return false

• Sound and complete
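The algorithm maps almost line-for-line to Python. This is a minimal sketch with my own data representation (rules as (premise set, conclusion) pairs); the example KB assumes the rules A ∧ B ⇒ C and C ∧ D ⇒ E from the "Another example" slide.

```python
from collections import deque

def pl_fc_entails(rules, facts, q):
    # rules: list of (premise_set, conclusion); facts: symbols known true
    count = [len(prem) for prem, _ in rules]     # unsatisfied premises per rule
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, concl) in enumerate(rules):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:            # all premises satisfied: fire
                        agenda.append(concl)
    return False

rules = [(frozenset({'A', 'B'}), 'C'),   # A ∧ B ⇒ C
         (frozenset({'C', 'D'}), 'E')]   # C ∧ D ⇒ E
print(pl_fc_entails(rules, ['A', 'B', 'D'], 'E'))  # True
```

Each symbol is processed at most once and each rule's counter is decremented at most once per premise symbol, which gives the linear-time bound mentioned for definite-clause KBs.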
Forward chaining: An example
Forward chaining: Another example
• KB facts: A, B, D; query: E?
No. Sentence Explanation
4 A From KB
5 B From KB
6 D From KB
7 C 1, 4 and 5
8 E 2, 6 and 7
Backward chaining
• Key idea: Work backwards from the query 𝒒
• Check if 𝒒 is known already, or
• Recursively prove by BC all premises of some rule concluding 𝒒
Backward chaining: An example
• Q? Use P ⇒ Q
• P? Use L ∧ M ⇒ P
• L? Use A ∧ B ⇒ L
• A? ✓, B? ✓ (known facts), so L? ✓
• M? Use L ∧ B ⇒ M, with L? ✓ and B? ✓, so M? ✓
• L? ✓ and M? ✓, so P? ✓, and hence Q? ✓
Backward chaining: Another example
KB: 𝐴 ∧ 𝐵 ⇒ 𝐶, 𝐶 ∧ 𝐷 ⇒ 𝐸, 𝐶 ∧ 𝐹 ⇒ 𝐺, 𝐴, 𝐵, 𝐷
Query: 𝑬?
• E? Use 𝐶 ∧ 𝐷 ⇒ 𝐸
• C? Use 𝐴 ∧ 𝐵 ⇒ 𝐶
• A? ✓
• B? ✓
• D? ✓
• A, B and D are given → all needed rules are satisfied → the goal is proven.
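The recursive goal-driven procedure can be sketched as follows; the (premise set, conclusion) rule representation is my own assumption, and the example KB is the one from the slide above (A ∧ B ⇒ C, C ∧ D ⇒ E, C ∧ F ⇒ G, with facts A, B, D).

```python
def bc_entails(rules, facts, q, stack=frozenset()):
    # Goal-driven: prove q from facts by recursively proving rule premises
    if q in facts:
        return True
    if q in stack:                 # goal already on the current path: avoid loops
        return False
    for prem, concl in rules:
        if concl == q and all(bc_entails(rules, facts, p, stack | {q})
                              for p in prem):
            return True
    return False

rules = [(frozenset({'A', 'B'}), 'C'),   # A ∧ B ⇒ C
         (frozenset({'C', 'D'}), 'E'),   # C ∧ D ⇒ E
         (frozenset({'C', 'F'}), 'G')]   # C ∧ F ⇒ G
facts = {'A', 'B', 'D'}
print(bc_entails(rules, facts, 'E'))  # True
print(bc_entails(rules, facts, 'G'))  # False: F cannot be proven
```

Note that only rules relevant to the goal are touched: proving E never examines C ∧ F ⇒ G, which illustrates why backward chaining can run in much less than linear time in the size of the KB.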
Forward vs. Backward chaining
• Forward chaining: data-driven, automatic, unconscious
processing
• E.g., object recognition, routine decisions
• May do lots of work that is irrelevant to the goal
• Backward chaining: goal-driven, good for problem-solving
• E.g., Where are my keys? How do I get into a PhD program?
• Complexity can be much less than linear in size of KB
Quiz 05: Forward vs. Backward chaining
DPLL algorithm
• The Davis-Putnam-Logemann-Loveland algorithm (1962), often
called the Davis-Putnam algorithm after the related 1960 paper
• Determine whether an input propositional logic sentence (in
CNF) is satisfiable.
• A recursive, depth-first enumeration of possible models.
• Improvements over truth table enumeration
1. Early termination
2. Pure symbol heuristic
3. Unit clause heuristic
Improvements in DPLL
• Early termination: A clause is true if any literal is true, and a sentence is
false if any clause is false.
• Avoids examining entire subtrees of the search space
• E.g., (𝐴 ∨ 𝐵) ∧ (𝐴 ∨ 𝐶) is true if 𝐴 is true, regardless of 𝐵 and 𝐶
• Pure symbol heuristic: A pure symbol appears with the same
"sign" in all clauses.
• E.g., in (𝐴 ∨ ¬𝐵), (¬𝐵 ∨ ¬𝐶), (𝐴 ∨ 𝐶): 𝐴 and 𝐵 are pure, 𝐶 is impure.
• Assign pure symbols so that their literals are true → doing so never
makes a clause false
• Unit clause heuristic: A unit clause has only one (unassigned) literal,
so this literal must be true.
• Unit propagation: if the model contains 𝐵 = 𝑡𝑟𝑢𝑒, then ¬𝐵 ∨ ¬𝐶 simplifies
to the unit clause ¬𝐶 → 𝐶 must be false (so that ¬𝐶 is true) → 𝐴 must be true
(so that 𝐴 ∨ 𝐶 is true)
DPLL procedure
function DPLL-SATISFIABLE?(s) returns true or false
inputs: s, a sentence in propositional logic
clauses ← the set of clauses in the CNF representation of s
symbols ← a list of the proposition symbols in s
return DPLL(clauses, symbols,{ })
function tautology()
and
84
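A runnable DPLL sketch with all three improvements. The representation (literals as 'A'/'-A' strings, clauses as frozensets) is my own assumption; the example at the end is the Quiz 06 KB in CNF.

```python
def value(lit, model):
    # Truth value of a literal in a partial model, or None if unassigned
    sym = lit.lstrip('-')
    if sym not in model:
        return None
    return (not model[sym]) if lit.startswith('-') else model[sym]

def clause_status(clause, model):
    # True = satisfied, False = falsified, None = undetermined
    unknown = False
    for lit in clause:
        v = value(lit, model)
        if v is True:
            return True
        if v is None:
            unknown = True
    return None if unknown else False

def dpll(clauses, symbols, model):
    status = [clause_status(c, model) for c in clauses]
    if all(s is True for s in status):
        return True                      # early termination: all satisfied
    if any(s is False for s in status):
        return False                     # early termination: a clause falsified
    # Pure symbol heuristic: same sign in all not-yet-satisfied clauses
    signs = {}
    for c, s in zip(clauses, status):
        if s is not True:
            for lit in c:
                sym = lit.lstrip('-')
                if sym not in model:
                    signs.setdefault(sym, set()).add(lit.startswith('-'))
    for sym, sgn in signs.items():
        if len(sgn) == 1:
            return dpll(clauses, symbols - {sym}, {**model, sym: not sgn.pop()})
    # Unit clause heuristic: exactly one unassigned literal left
    for c, s in zip(clauses, status):
        if s is None:
            unknown = [l for l in c if value(l, model) is None]
            if len(unknown) == 1:
                sym = unknown[0].lstrip('-')
                return dpll(clauses, symbols - {sym},
                            {**model, sym: not unknown[0].startswith('-')})
    # Branch on an arbitrary remaining symbol
    p = next(iter(symbols))
    rest = symbols - {p}
    return (dpll(clauses, rest, {**model, p: True}) or
            dpll(clauses, rest, {**model, p: False}))

def dpll_satisfiable(clauses):
    symbols = {l.lstrip('-') for c in clauses for l in c}
    return dpll(clauses, symbols, {})

# Quiz 06 KB: A⇒B∨C, A⇒D, C∧D⇒¬F, B⇒F, A, in CNF:
quiz06 = [frozenset({'-A', 'B', 'C'}), frozenset({'-A', 'D'}),
          frozenset({'-C', '-D', '-F'}), frozenset({'-B', 'F'}),
          frozenset({'A'})]
print(dpll_satisfiable(quiz06))  # True
```

On the quiz KB the unit clause A fires first, then A ⇒ D propagates, exactly as unit propagation is described above.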
DPLL procedure vs. DP procedure
• DP can cause a quadratic expansion every time it is applied.
• This can easily exhaust space on large problems.
• DPLL attacks the problem by sequentially solving smaller
problems.
• Basic idea: Choose a literal. Assume true, simplify clause set, and
try to show satisfiable. Repeat for the negation of the literal.
• Good because we do not cross multiply the clause set
DPLL procedure vs. DP procedure
Reference: http://logic.stanford.edu/classes/cs157/2011/lectures/lecture04.pdf
WalkSAT algorithm
• Incomplete, local search algorithm
• Evaluation function: min-conflict heuristic, to minimize the
number of unsatisfied clauses
• Balance between greediness and randomness
WalkSAT algorithm
• The algorithm returns a model → satisfiable
• The algorithm returns false → unsatisfiable OR more time is
needed for searching
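A minimal WalkSAT sketch, balancing a random-walk step (probability p) against a greedy min-conflict step. The clause/literal representation and parameter defaults are my own assumptions for illustration.

```python
import random

def walksat(clauses, p=0.5, max_flips=10000, seed=0):
    # clauses: list of frozensets of literals ('A' or '-A')
    rng = random.Random(seed)
    symbols = sorted({l.lstrip('-') for c in clauses for l in c})
    model = {s: rng.choice([True, False]) for s in symbols}

    def true_in_model(lit):
        v = model[lit.lstrip('-')]
        return (not v) if lit.startswith('-') else v

    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not any(true_in_model(l) for l in c)]
        if not unsatisfied:
            return model                        # found a model: satisfiable
        clause = rng.choice(unsatisfied)
        if rng.random() < p:                    # random walk step
            sym = rng.choice(sorted(clause)).lstrip('-')
        else:                                   # greedy min-conflict step
            def conflicts(s):
                model[s] = not model[s]
                n = sum(not any(true_in_model(l) for l in c) for c in clauses)
                model[s] = not model[s]
                return n
            sym = min(sorted(l.lstrip('-') for l in clause), key=conflicts)
        model[sym] = not model[sym]
    return False                                # failure: unsat OR needs more flips
```

For example, on (A ∨ B) ∧ (¬A ∨ B) any returned model must have B true, since the two clauses resolve to B.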
Quiz 06: DPLL and DP
• Given the KB shown below
𝐴 ⇒𝐵∨C
𝐴⇒𝐷
C ∧ D ⇒ ¬𝐹
𝐵⇒𝐹
𝐴