
LOGICAL AGENTS

Nguyễn Ngọc Thảo – Nguyễn Hải Minh


{nnthao, nhminh}@fit.hcmus.edu.vn
Outline
• Knowledge-based agents
• The Wumpus world
• Inference with Propositional logic
• Propositional theorem proving
• Effective propositional model checking

2
Knowledge-based agents
Problem-solving agents
• These agents know things only in a very limited, inflexible sense.
• E.g., an 8-puzzle agent cannot deduce from the states' parities that a pair of states is mutually unreachable (unsolvable).

• CSP enables some parts of the agent to work domain-independently.
• State = an assignment of values to variables
• Allows for more efficient algorithms
4
Knowledge-based agents
• Supported by logic – a general class of representations
• Combine and recombine information to suit myriad purposes
• Accept new tasks in the form of explicitly described goals
• Achieve competence by learning new knowledge of the environment
• Adapt to changes by updating the relevant knowledge

Offline and online decomposition of an agent (Image credit: artint.info)
5
Knowledge-based agents
• Knowledge base (KB): A set of sentences or facts
• Each sentence represents some assertion about the world.
• Axiom = sentence that is not derived from other sentences

• Inference: Derive (infer) new sentences from old ones


• Add new sentences to the knowledge base and
query what is known

6
Model for reasoning: An example
• A simple model for reasoning
• The agent perceives A and Not C, which are added to the KB.
• The KB contains the rule A ⇒ (B or C).
• Inference derives B, which is added to the KB: A, Not C, B.
7
The Wumpus world
PEAS Description
• Environment
• 4×4 grid of rooms, agent starts in the square [1,1], facing to the right
• The locations of Gold and Wumpus are random
• Each square can be a pit, with probability 0.2
• Performance measure
• +1000 for climbing out of the cave with gold, -1000 for death
• -1 per step, -10 for using the arrow
• The game ends when agent dies or climbs out of the cave

• Actuators: Forward, TurnLeft/TurnRight by 90°, Grab, Shoot, Climb


• Sensors: 𝑆𝑡𝑒𝑛𝑐ℎ, 𝐵𝑟𝑒𝑒𝑧𝑒, 𝐺𝑙𝑖𝑡𝑡𝑒𝑟, 𝐵𝑢𝑚𝑝, 𝑆𝑐𝑟𝑒𝑎𝑚
• Percept: [𝑆𝑡𝑒𝑛𝑐ℎ, 𝐵𝑟𝑒𝑒𝑧𝑒, 𝑁𝑜𝑛𝑒, 𝑁𝑜𝑛𝑒, 𝑁𝑜𝑛𝑒]

9
Exploring a Wumpus world

10
Exploring a Wumpus world

11
Exploring a Wumpus world

12
Exploring a Wumpus world

13
An agent in the Wumpus world
• A Wumpus-world agent using propositional logic will have a
KB of 64 distinct proposition symbols, 155 sentences.
¬P1,1
¬W1,1
Bx,y ⇔ (Px,y+1 ∨ Px,y-1 ∨ Px+1,y ∨ Px-1,y)
Sx,y ⇔ (Wx,y+1 ∨ Wx,y-1 ∨ Wx+1,y ∨ Wx-1,y)
W1,1 ∨ W1,2 ∨ … ∨ W4,4
¬W1,1 ∨ ¬W1,2
¬W1,1 ∨ ¬W1,3

14
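A minimal Python sketch of how sentences like the breeze biconditionals above can be generated mechanically for every square; the string encoding ("v" for ∨, "<=>" for ⇔) is an illustrative assumption, not the slide's notation.

# Minimal sketch: generate one breeze biconditional per square of a 4x4 Wumpus world.
# The string encoding (e.g. "B1,1 <=> (P1,2 v P2,1)") is illustrative only.
def adjacent(x, y, size=4):
    """In-bounds neighbours of square (x, y)."""
    candidates = [(x, y + 1), (x, y - 1), (x + 1, y), (x - 1, y)]
    return [(i, j) for (i, j) in candidates if 1 <= i <= size and 1 <= j <= size]

def breeze_sentences(size=4):
    """B_{x,y} <=> (disjunction of P over adjacent squares), for every square."""
    sentences = []
    for x in range(1, size + 1):
        for y in range(1, size + 1):
            pits = " v ".join(f"P{i},{j}" for (i, j) in adjacent(x, y, size))
            sentences.append(f"B{x},{y} <=> ({pits})")
    return sentences

for s in breeze_sentences():
    print(s)        # 16 sentences, e.g. "B1,1 <=> (P1,2 v P2,1)"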
Propositional logic
Propositional logic: Syntax

Literal: atomic sentence (P) or negated atomic sentence (¬P)

16
Logics in general
• Models (or possible worlds) are mathematical abstractions
that fix the truth or falsehood of every relevant sentence.
• E.g., all possible assignments of real numbers to 𝑥 and 𝑦
• 𝑚 satisfies (or is a model of) 𝛼 if 𝛼 is true in model 𝑚
• M(α) = the set of all models of α

17
Propositional logic: Semantics
• Each model specifies true/false for each proposition symbol.
• E.g., 𝑚1 = {𝑃1,2 = 𝑓𝑎𝑙𝑠𝑒, 𝑃2,2 = 𝑓𝑎𝑙𝑠𝑒, 𝑃3,1 = 𝑡𝑟𝑢𝑒}, 8 possible models

• Rules for evaluating truth with respect to a model 𝑚

• Simple recursive process evaluates an arbitrary sentence.


• E.g., ¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (false ∨ true) = true ∧ true = true

18
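A minimal Python sketch of this recursive evaluation; the nested-tuple sentence encoding ("not"/"and"/"or" tags and symbol strings) is an assumption made here for brevity.

# Sketch: recursively evaluate a propositional sentence in a model.
# A sentence is a symbol name, or ('not', s), ('and', s1, s2), ('or', s1, s2).
def pl_true(sentence, model):
    if isinstance(sentence, str):              # proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return all(pl_true(a, model) for a in args)
    if op == "or":
        return any(pl_true(a, model) for a in args)
    raise ValueError(f"unknown operator: {op}")

# The slide's example: ¬P1,2 ∧ (P2,2 ∨ P3,1) in m1 = {P1,2=false, P2,2=false, P3,1=true}
m1 = {"P12": False, "P22": False, "P31": True}
print(pl_true(("and", ("not", "P12"), ("or", "P22", "P31")), m1))   # True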
Entailment in logic
• A sentence follows logically from another sentence: 𝜶 ⊨ 𝜷
• 𝜶 ⊨ 𝜷 if and only if, in every model
in which 𝜶 is true, 𝜷 is also true,
i.e., M(α) ⊆ M(β)

• For example,
• 𝑥 = 0 entails 𝑥𝑦 = 0
• The KB containing “Apple is red” and “Tomato is red” entails “Either
the apple or the tomato is red”

• Entailment is a relationship between sentences (i.e., syntax)


that is based on semantics.
19
Entailment in logic: Wumpus world
• Consider two possible conclusions 𝛼1 and 𝛼2

“There is no pit in [1,2].” “There is no pit in [2,2].”

𝑲𝑩 ⊨ 𝜶𝟏 𝑲𝑩 ⊭ 𝜶𝟐

20
Logical inference
• 𝐾𝐵 ⊨𝑖 𝛼 means 𝛼 can be derived from 𝐾𝐵 by procedure 𝑖
• Soundness: 𝑖 is sound if whenever 𝐾𝐵 ⊨𝑖 𝛼, it is also true
that 𝐾𝐵 ⊨ 𝛼
• Completeness: 𝑖 is complete if whenever 𝐾𝐵 ⊨ 𝛼, it is also
true that 𝐾𝐵 ⊨𝑖 𝛼
• That is, the procedure will answer any question whose
answer follows from what is known by the KB.

21
World and representation

Socrates is a man. All men are mortal. Therefore, Socrates is mortal.
(Socrates, 470 – 399 BC)

22
A simple knowledge base
• Symbols for each position [𝑖, 𝑗]
• Pi,j : there is a pit in [i, j]
• Bi,j : there is a breeze in [i, j]
• Wi,j : there is a Wumpus in [i, j]
• Si,j : there is a stench in [i, j]

• Sentences in Wumpus world’s 𝐾𝐵

R1: ¬P1,1
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1
23
A simple inference procedure
• Given: a set of sentences, 𝑲𝑩, and sentence 𝜶
• Goal: answer 𝑲𝑩 ⊨ 𝜶? = “Does 𝑲𝑩 semantically entail 𝜶?”
• In all interpretations in which 𝐾𝐵’s sentences are true, is 𝛼 also true?
• E.g., in the Wumpus world, 𝐾𝐵 ⊨ 𝑃1,2 ? = “Is there a pit in [1,2]?”

Model-checking approach (Inference by enumeration)

Inference rules

Conversion to the complementary SAT problem, i.e., checking that KB ∧ ¬α is unsatisfiable (Resolution refutation)

24
Model-checking approach
• Check if 𝛼 is true in every model in which 𝐾𝐵 is true.
• E.g., the Wumpus's KB has 7 symbols → 2^7 = 128 models
• Draw a truth table for checking No pit in [1,2]

25
Inference by (depth-first) enumeration
function TT-ENTAILS?(KB,α) returns true or false
inputs: KB, the knowledge base, a sentence in propositional logic
α, the query, a sentence in propositional logic
symbols ← a list of the proposition symbols in KB and α
return TT-CHECK-ALL(KB,α,symbols,{ })
function TT-CHECK-ALL(KB,α,symbols,model) returns true or false
if EMPTY?(symbols) then
if PL-TRUE?(KB,model) then return PL-TRUE?(α,model)
else return true // when KB is false, always return true
else do
P ← FIRST(symbols)
rest ← REST(symbols)
return (TT-CHECK-ALL(KB,α,rest,model ∪ {P = true})
and TT-CHECK-ALL(KB,α,rest,model ∪ {P = false}))
// Sound and complete; time complexity O(2^n), space complexity O(n)
26
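A Python transcription sketch of TT-ENTAILS?/TT-CHECK-ALL above, reusing the nested-tuple sentence encoding assumed in the earlier evaluation sketch (the encoding and helper are assumptions, not the slide's code).

# Sketch of truth-table entailment: KB |= alpha iff alpha is true in every model of KB.
def pl_true(s, model):
    """Evaluate a nested-tuple sentence (symbols, 'not', 'and', 'or') in a model."""
    if isinstance(s, str):
        return model[s]
    op, *args = s
    if op == "not":
        return not pl_true(args[0], model)
    return (all if op == "and" else any)(pl_true(a, model) for a in args)

def tt_entails(kb, alpha, symbols):
    return tt_check_all(kb, alpha, list(symbols), {})

def tt_check_all(kb, alpha, symbols, model):
    if not symbols:
        # When KB is false in this model, the model is irrelevant: return true.
        return pl_true(alpha, model) if pl_true(kb, model) else True
    p, rest = symbols[0], symbols[1:]
    return (tt_check_all(kb, alpha, rest, {**model, p: True}) and
            tt_check_all(kb, alpha, rest, {**model, p: False}))

# Example: KB = (A => B) & A, encoded as (~A | B) & A; query B.
kb = ("and", ("or", ("not", "A"), "B"), "A")
print(tt_entails(kb, "B", ["A", "B"]))      # True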
Quiz 01: Model-checking approach
• Given a KB containing the following rules and facts
R1: IF hot AND smoky THEN fire
R2: IF alarm_beeps THEN smoky
R3: IF fire THEN sprinklers_on
F1: alarm_beeps
F2: hot
• Represent the KB in propositional logic with given symbols
• H = hot, S = smoky, F = fire, A = alarm_beeps, R = sprinklers_on
• Answer the question “Sprinklers_on?” by using the model-
checking approach.
27
Propositional theorem proving

• Proof by Resolution
• Forward and Backward Chaining
Inference rules approach
• Theorem proving: Apply rules of inference directly to the
sentences in KB to construct a proof of the desired sentence
without consulting models
• More efficient than model checking when the number of
models is large, yet the length of the proof is short

29
Logical equivalence
• Two sentences, 𝛼 and 𝛽, are logically equivalent if they are
true in the same set of models.
𝜶 ≡ 𝜷 𝒊𝒇𝒇 𝜶 ⊨ 𝜷 𝒂𝒏𝒅 𝜷 ⊨ 𝜶

30
Validity
• A sentence is valid if it is true in all models.
• E.g., P ∨ ¬P, P ⇒ P, (P ∧ (P ⇒ Q)) ⇒ Q
• Valid sentences are also known as tautologies.
• Validity is connected to inference via the Deduction Theorem
𝛼 ⊨ 𝛽 𝑖𝑓𝑓 𝛼 ⇒ 𝛽 𝑖𝑠 𝑣𝑎𝑙𝑖𝑑

31
Satisfiability
• A sentence is satisfiable if it is true in some model.
• E.g., 𝑃 ∨ 𝑄, 𝑃
• A sentence is unsatisfiable if it is true in no models.
• E.g., 𝑃 ∧ ¬𝑃
• Satisfiability is connected to inference via the following
𝛼 ⊨ 𝛽 𝑖𝑓𝑓 𝛼 ∧ ¬𝛽 𝑖𝑠 𝑢𝑛𝑠𝑎𝑡𝑖𝑠𝑓𝑖𝑎𝑏𝑙𝑒
→ Refutation or proof by contradiction
• The SAT problem determines the satisfiability of sentences
in propositional logic (NP-complete)
• E.g., in CSPs, the constraints are satisfiable by some assignment.

32
Quiz 02: Validity and Satisfiability
• Check the validity and satisfiability of the below sentences
using the truth table
1. (A ∨ B) ⇒ (A ∧ C)
2. (A ∧ B) ⇒ (A ∨ C)
3. (A ∨ B) ∧ (¬B ∨ C) ⇒ (A ∨ C)
4. (A ∨ ¬B) ⇒ (A ∧ B)

33
Inference and Proofs
• Proof: a chain of conclusions that leads to the desired goal
• Example sound rules of inference:
• Modus Ponens: from α ⇒ β and α, infer β
• Modus Tollens: from α ⇒ β and ¬β, infer ¬α
• AND-Introduction: from α and β, infer α ∧ β
• AND-Elimination: from α ∧ β, infer α

34
Inference rules: An example
KB: P ∧ Q;  P ⇒ R;  (Q ∧ R) ⇒ S.   Query: S?

No.  Sentence         Explanation
1    P ∧ Q            From KB
2    P ⇒ R            From KB
3    (Q ∧ R) ⇒ S      From KB
4    P                1, And-Elim
5    R                4, 2, Modus Ponens
6    Q                1, And-Elim
7    Q ∧ R            5, 6, And-Intro
8    S                3, 7, Modus Ponens

35
Inference rules in Wumpus world
R1: ¬P1,1                Proof goal: ¬P1,2
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1

• Biconditional elimination on R2: R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
• And-Elimination on R6: R7: (P1,2 ∨ P2,1) ⇒ B1,1
• Logical equivalence for contrapositives: R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
• Modus Ponens with R8 and the percept R4: R9: ¬(P1,2 ∨ P2,1)
• De Morgan's rule: R10: ¬P1,2 ∧ ¬P2,1
36
Proving by search
• Search algorithms can be applied to find a sequence of
steps that constitutes a proof.
• INITIAL STATE: the initial knowledge base
• ACTIONS: apply all inference rules to all the sentences that match
the top half of the inference rule
• RESULT: add the sentence in the bottom half of the inference rule
• GOAL: a state that contains the sentence to be proved
• The proof can ignore irrelevant propositions, no matter how
many of them there are → more efficient
• E.g., in the Wumpus world, 𝐵2,1 , 𝑃1,1 , 𝑃2,2 𝑎𝑛𝑑 𝑃3,1 are not mentioned.

37
Monotonicity
• The set of entailed sentences only increases as information
is added to the knowledge base.
𝑖𝑓 𝐾𝐵 ⊨ 𝛼 𝑡ℎ𝑒𝑛 𝐾𝐵 ∧ 𝛽 ⊨ 𝛼
• Additional conclusions can be drawn without invalidating any
conclusion 𝛼 already inferred.

38
Proof by Resolution
• Proof by Inference Rules: sound but not complete
• If the rules are inadequate, then the goal is not reachable.
• Resolution: sound and complete, a single inference rule
• A complete inference algorithm when coupled with any complete search algorithm
• Unit resolution inference rule: from l1 ∨ ⋯ ∨ lk and m, infer
  l1 ∨ ⋯ ∨ li−1 ∨ li+1 ∨ ⋯ ∨ lk, where li and m are complementary literals
• Full resolution rule: from l1 ∨ ⋯ ∨ lk and m1 ∨ ⋯ ∨ mn, infer
  l1 ∨ ⋯ ∨ li−1 ∨ li+1 ∨ ⋯ ∨ lk ∨ m1 ∨ ⋯ ∨ mj−1 ∨ mj+1 ∨ ⋯ ∨ mn,
  where li and mj are complementary literals
39
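A minimal sketch of the resolution rule on two clauses encoded as frozensets of string literals ("P" positive, "~P" negative); the encoding is an assumption, and factoring falls out of the set representation.

# Sketch: resolve two clauses represented as frozensets of literals.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolve(ci, cj):
    """All clauses obtained by resolving ci and cj on one complementary pair."""
    resolvents = []
    for lit in ci:
        if negate(lit) in cj:
            resolvents.append(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return resolvents

# E.g. resolving (P1,1 | P2,2 | P3,1) with ~P2,2 yields (P1,1 | P3,1), as on the next slide.
print(pl_resolve(frozenset({"P11", "P22", "P31"}), frozenset({"~P22"})))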
Inference rules in Wumpus world
R1: ¬P1,1
R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4: ¬B1,1
R5: B2,1
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
R7: (P1,2 ∨ P2,1) ⇒ B1,1
R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
R9: ¬(P1,2 ∨ P2,1)
R10: ¬P1,2 ∧ ¬P2,1

40
Inference rules in Wumpus world
R1: ¬P1,1

R11: ¬B1,2
R12: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3)
R13: ¬P2,2
R14: ¬P1,3
R15: P1,1 ∨ P2,2 ∨ P3,1
R16: P1,1 ∨ P3,1          (¬P2,2 resolves with P2,2)
R17: P3,1                 (¬P1,1 resolves with P1,1)

41
Proof by Resolution
• Factoring: the resulting clause should contain only one copy
of each literal.
• E.g., resolving (𝐴 ∨ 𝐵) with (𝐴 ∨ ¬𝐵) obtains (𝐴 ∨ 𝐴) → reduced to 𝐴

• For any pair of sentences, 𝛼 and 𝛽, in propositional logic, a


resolution-based theorem prover can decide whether 𝛼 ⊨ 𝛽.

42
Conjunctive Normal Form (CNF)
• Resolution applies only to clauses, i.e., disjunctions of literals
→ Convert all sentences in KB into clauses (CNF form)
• For example, convert B1,1 ⇔ (P1,2 ∨ P2,1) into CNF
(¬𝐵1,1 ∨ 𝑃1,2 ∨ 𝑃2,1 ) ∧ (¬𝑃1,2 ∨ 𝐵1,1 ) ∧ (¬𝑃2,1 ∨ 𝐵1,1 )
→ A conjunction of 3 clauses

43
Conversion to CNF
1. Eliminate ⇔: α ⇔ β ≡ (α ⇒ β) ∧ (β ⇒ α)
2. Eliminate ⇒: α ⇒ β ≡ ¬α ∨ β
3. The operator ¬ appears only in literals: “move ¬ inwards”
¬¬𝛼 ≡ 𝛼 (double-negation elimination)
¬(𝛼 ∧ 𝛽) ≡ ¬𝛼 ∨ ¬𝛽 (De Morgan)
¬(𝛼 ∨ 𝛽) ≡ ¬𝛼 ∧ ¬𝛽 (De Morgan)
4. Apply the distributivity law to distribute ∨ over ∧
(𝛼 ∧ 𝛽) ∨ 𝛾 ≡ (𝛼 ∨ 𝛾) ∧ (𝛽 ∨ 𝛾)

44
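If a machine check of the steps above is wanted, sympy's boolean algebra module can do the conversion (assuming sympy is installed; the variable names below are illustrative).

# Sketch: check a CNF conversion with sympy (assumes sympy is available).
from sympy import symbols
from sympy.logic.boolalg import Equivalent, to_cnf

B11, P12, P21 = symbols("B11 P12 P21")
sentence = Equivalent(B11, P12 | P21)     # B1,1 <=> (P1,2 | P2,1)
print(to_cnf(sentence))
# e.g. (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11)  (clause order may differ)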
Quiz 03: Conversion to CNF
• Convert the following sentences into CNF
1. (𝐴 ∧ 𝐵) ⇒ (𝐶 ⇒ 𝐷)
2. 𝑃 ∨ 𝑄  𝑅 ∧ ¬𝑄 ⇒ 𝑃

45
Resolution algorithm
• Proof by contradiction (resolution refutation): To show that 𝐾𝐵 ⊨ 𝛼, prove
𝐾𝐵 ∧ ¬𝛼 is unsatisfiable

function PL-RESOLUTION(KB,α) returns true or false


inputs: KB, the knowledge base, a sentence in propositional logic
α, the query, a sentence in propositional logic
clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
new ← { }
loop do
for each pair of clauses Ci , Cj in clauses do
resolvents ← PL-RESOLVE(Ci , Cj)
if resolvents contains the empty clause then return true
new ← new ∪ resolvents
if new ⊆ clauses then return false
clauses ← clauses ∪ new
46
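A Python sketch of PL-RESOLUTION above; clauses are frozensets of string literals as in the earlier resolution sketch, and the conversion of KB ∧ ¬α to CNF is assumed to have been done already.

# Sketch of resolution refutation: KB |= alpha iff the clauses of KB & ~alpha are unsatisfiable.
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolve(ci, cj):
    return [frozenset((ci - {lit}) | (cj - {negate(lit)}))
            for lit in ci if negate(lit) in cj]

def pl_resolution(clauses):
    """True if the clause set is unsatisfiable (the empty clause is derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for resolvent in pl_resolve(ci, cj):
                if not resolvent:          # empty clause: contradiction reached
                    return True
                new.add(resolvent)
        if new <= clauses:                 # nothing new can be derived
            return False
        clauses |= new

# Quiz 04 style usage: KB = {~R|U, ~U|~W, R|~W}; to test KB |= ~W, add the negation W.
clauses = [frozenset({"~R", "U"}), frozenset({"~U", "~W"}),
           frozenset({"R", "~W"}), frozenset({"W"})]
print(pl_resolution(clauses))              # True, so KB entails ~W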
Resolution algorithm

• Many resolution steps are pointless.


• Clauses with two complementary literals can be discarded.
• E.g., B1,1 ∨ ¬B1,1 ∨ P2,1 ≡ True ∨ P2,1 ≡ True

47
Quiz 04: The resolution algorithm
• Given the following hypotheses
• If it rains, Joe brings his umbrella.
• If Joe brings his umbrella, Joe does not get wet.
• If it does not rain, Joe does not get wet.
• Prove that Joe does not get wet.

48
Quiz 04: Resolution algorithm
• The KB contains facts and hypotheses:
  R ⇒ U
  U ⇒ ¬W
  ¬R ⇒ ¬W
• Check if the sentence ¬W is entailed by the KB.

No.  Sentence         Explanation
1    ¬R ∨ U           From KB
2    ¬U ∨ ¬W          From KB
3    R ∨ ¬W           From KB
4    W                Negated conclusion
5    ¬R ∨ ¬W          1 and 2
6    ¬W               3 and 5
7    empty clause     4 and 6
49
Horn clauses and Definite clauses
• Definite clause: a disjunction of literals of which exactly one
is positive.
• E.g., ¬𝑃 ∨ ¬𝑄 ∨ 𝑅 is a definite clause, whereas ¬𝑃 ∨ 𝑄 ∨ 𝑅 is not.
• Horn clause: a disjunction of literals of which at most one is
positive.
• All definite clauses are Horn clauses
• Goal clause: clauses with no positive literals
• Horn clauses are closed under resolution
• Resolving two Horn clauses will get back a Horn clause.

50
Propositional sentences and clauses

51
KB of definite clauses
• KBs containing only definite clauses are interesting.
• Every definite clause can be written as an implication.
• Premise (body) is a conjunction of positive literals and Conclusion
(head) is a single positive literal (fact) → easier to understand
• E.g., ¬𝑃 ∨ ¬𝑄 ∨ 𝑅 ≡ 𝑃 ∧ 𝑄 ⇒ 𝑅
• Inference can be done with forward-chaining and backward-
chaining algorithms
• This type of inference is the basis for logic programming.
• Deciding entailment can be done in linear time.

52
KB: Horn clauses vs. CNF clauses

• CNF clauses: disjunctions of literals, (l1 ∨ l2 ∨ ⋯ ∨ lm)
  • A CNF sentence = Clause 1 ∧ Clause 2 ∧ … ∧ Clause n
• Horn clauses: disjunctions of literals of which at most one is positive,
  e.g. (¬l1 ∨ ¬l2 ∨ ⋯ ∨ lm)
  • A restricted form of CNF clauses
53
Forward chaining
• Key idea: Fire any rule whose premises are satisfied in the
KB, add its conclusion to the KB, until the query is found.


54
The forward chaining algorithm
function PL-FC-ENTAILS?(KB, q) returns true or false
inputs: KB, the knowledge base, a set of propositional definite clauses
q, the query, a proposition symbol
count ← a table, where count[c] is the number of symbols in c’s premise
inferred ← a table, where inferred[s] is initially false for all symbols
agenda ← a queue of symbols, initially symbols known to be true in KB
while agenda is not empty do
p ← POP(agenda)
if p = q then return true
if inferred[p] = false then
inferred[p] ← true
for each clause c in KB where p is in c.PREMISE do
decrement count[c]
if count[c] = 0 then add c.CONCLUSION to agenda
return false
// Sound and complete
55
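A Python transcription sketch of PL-FC-ENTAILS? above; definite clauses are encoded as (premise set, conclusion) pairs and facts as a separate set, which is an encoding assumption.

# Sketch of forward chaining over a definite-clause KB.
def pl_fc_entails(rules, facts, query):
    """rules: list of (premise_symbols, conclusion_symbol); facts: symbols known true."""
    count = {i: len(premises) for i, (premises, _) in enumerate(rules)}
    inferred = set()
    agenda = list(facts)                   # symbols known true but not yet processed
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (premises, conclusion) in enumerate(rules):
                if p in premises:
                    count[i] -= 1          # one fewer premise left to satisfy
                    if count[i] == 0:
                        agenda.append(conclusion)
    return False

# The KB from the "Forward chaining: Another example" slide: A&B=>C, C&D=>E, C&F=>G; facts A, B, D.
rules = [({"A", "B"}, "C"), ({"C", "D"}, "E"), ({"C", "F"}, "G")]
print(pl_fc_entails(rules, {"A", "B", "D"}, "E"))   # True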
Forward chaining: An example

56
Forward chaining: An example

57
Forward chaining: An example

58
Forward chaining: An example

59
Forward chaining: An example

60
Forward chaining: An example

61
Forward chaining: An example

62
Forward chaining: An example

63
Forward chaining: Another example

KB: A ∧ B ⇒ C;  C ∧ D ⇒ E;  C ∧ F ⇒ G;  A;  B;  D.   Query: E?

No.  Sentence         Explanation
1    A ∧ B ⇒ C        From KB
2    C ∧ D ⇒ E        From KB
3    C ∧ F ⇒ G        From KB
4    A                From KB
5    B                From KB
6    D                From KB
7    C                1, 4 and 5
8    E                2, 6, and 7

64
Backward chaining
• Key idea: Work backwards from the query 𝒒
• Check if 𝒒 is known already, or
• Recursively prove by BC all premises of some rule concluding 𝒒

• Avoid loops: check whether a new subgoal is already on the goal stack.

• Avoid repeated work: check whether a new subgoal has already been
proved true, or has already failed.

65
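A minimal recursive backward-chaining sketch; it uses a goal stack for the loop check mentioned above (but, as a simplification, does not cache already-proved subgoals), with the same (premises, conclusion) rule encoding assumed in the forward-chaining sketch.

# Sketch of backward chaining over a definite-clause KB.
def pl_bc_entails(rules, facts, query, stack=frozenset()):
    if query in facts:
        return True
    if query in stack:                     # subgoal already on the goal stack: avoid looping
        return False
    for premises, conclusion in rules:
        if conclusion == query:            # try every rule that concludes the query
            if all(pl_bc_entails(rules, facts, p, stack | {query}) for p in premises):
                return True
    return False

# Same KB as the forward-chaining example: query E with facts A, B, D.
rules = [({"A", "B"}, "C"), ({"C", "D"}, "E"), ({"C", "F"}, "G")]
print(pl_bc_entails(rules, {"A", "B", "D"}, "E"))   # True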
Backward chaining: An example

66
Backward chaining: An example
Q? P ⇒ Q
P?

67
Backward chaining: An example
Q? P ⇒ Q
P? L ∧ M ⇒ P
L?

68
Backward chaining: An example
Q? P ⇒ Q
P? L ∧ M ⇒ P
L? A ∧ B ⇒ L
A? ✓

69
Backward chaining: An example
Q? P ⇒ Q
P? L ∧ M ⇒ P
L? A ∧ B ⇒ L
A? ✓
B? ✓

70
Backward chaining: An example
Q? P ⇒ Q
P? L ∧ M ⇒ P
L? ✓
A? ✓
B? ✓

71
Backward chaining: An example
Q? P ⇒ Q
P? L ∧ M ⇒ P
L? ✓
A? ✓
B? ✓
M? L ∧ B ⇒ M
L?
B?

72
Backward chaining: An example
Q? P ⇒ Q
P? L ∧ M ⇒ P
L? ✓
A? ✓
B? ✓
M? ✓
L? ✓
B? ✓

73
Backward chaining: An example
Q? ✓
P? ✓
L? ✓
A? ✓
B? ✓
M? ✓
L? ✓
B? ✓

74
Backward chaining: An example
Q? ✓
P? ✓
L? ✓
A? ✓
B? ✓
M? ✓
L? ✓
B? ✓

75
Backward chaining: Another example

KB: A ∧ B ⇒ C;  C ∧ D ⇒ E;  C ∧ F ⇒ G;  A;  B;  D.   Query: E?

• E?  via C ∧ D ⇒ E
• C?  via A ∧ B ⇒ C
• A?
• B?
• D?
• A, B and D are given → All needed rules are satisfied → The goal is proven.

76
Forward vs. Backward chaining
• Forward chaining: data-driven, automatic, unconscious
processing
• E.g., object recognition, routine decisions
• May do lots of work that is irrelevant to the goal
• Backward chaining: goal-driven, good for problem-solving
• E.g., Where are my keys? How do I get into a PhD program?
• Complexity can be much less than linear in size of KB

77
Quiz 05: Forward vs. Backward chaining

• Given a KB containing the following rules and facts


R1: IF hot AND smoky THEN fire
R2: IF alarm_beeps THEN smoky
R3: IF fire THEN sprinklers_on
F1: alarm_beeps
F2: hot
• Represent the KB in propositional logic with given symbols
• H = hot, S = smoky, F = fire, A = alarm_beeps, R = sprinklers_on
• Answer the question “Sprinklers_on?” by using the forward
chaining and backward chaining approaches
78
Effective model checking

• A complete backtracking algorithm


• Local search algorithms
Efficient propositional inference
• There are two families of efficient algorithms for general
propositional inference based on model checking

1. Complete backtracking search algorithms


• DPLL algorithm (Davis, Putnam, Logemann, Loveland)

2. Incomplete local search algorithms (hill-climbing)


• WalkSAT algorithm

80
DPLL algorithm
• Often called the Davis-Putnam algorithm (1960)
• Determine whether an input propositional logic sentence (in
CNF) is satisfiable.
• A recursive, depth-first enumeration of possible models.
• Improvements over truth table enumeration
1. Early termination
2. Pure symbol heuristic
3. Unit clause heuristic

81
Improvements in DPLL
• Early termination: A clause is true if any literal is true, and a sentence is
false if any clause is false.
• Avoid examination of entire subtrees in the search space
• E.g., (A ∨ B) ∧ (A ∨ C) is true if A is true, regardless of B and C

• Pure symbol heuristic: A pure symbol always appears with the same
"sign" in all clauses.
• E.g., in (A ∨ ¬B), (¬B ∨ ¬C), (A ∨ C): A and B are pure, C is impure.
• Assign each pure symbol so that its literals are true → doing so never makes a clause false

• Unit clause heuristic: a clause with only one literal left (all its other literals
already false in the model) forces that literal to be true
• Unit propagation: if the model contains 𝐵 = 𝑡𝑟𝑢𝑒 then ¬𝐵 ∨ ¬𝐶 simplifies
to a unit clause ¬𝐶 → 𝐶 must be false (so that ¬𝐶 is true) → 𝐴 must be true
(so that 𝐴 ∨ 𝐶 is true)

82
DPLL procedure
function DPLL-SATISFIABLE?(s) returns true or false
inputs: s, a sentence in propositional logic
clauses ← the set of clauses in the CNF representation of s
symbols ← a list of the proposition symbols in s
return DPLL(clauses, symbols,{ })

function DPLL(clauses, symbols, model) returns true or false


if every clause in clauses is true in model then return true      // 1. Early termination
if some clause in clauses is false in model then return false
P, value ← FIND-PURE-SYMBOL(symbols, clauses, model)               // 2. Pure symbol heuristic
if P is non-null then return DPLL(clauses, symbols – P, model ∪ {P=value})
P, value ← FIND-UNIT-CLAUSE(clauses, model)                        // 3. Unit clause heuristic
if P is non-null then return DPLL(clauses, symbols – P, model ∪ {P=value})
P ← FIRST(symbols); rest ← REST(symbols)
return DPLL(clauses, rest, model ∪ {P=true}) or
DPLL(clauses, rest, model ∪ {P=false})
83
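A Python sketch of DPLL as shown above, with early termination, the pure-symbol heuristic, and the unit-clause heuristic; clauses are lists of string literals ("P" / "~P"), an encoding assumed only for this sketch.

# Sketch of DPLL over clauses given as lists of literals ("P" or "~P").
def lit_true(lit, model):
    """True/False if the literal is decided by the model, None if its symbol is unassigned."""
    sym = lit.lstrip("~")
    if sym not in model:
        return None
    return model[sym] if not lit.startswith("~") else not model[sym]

def dpll(clauses, symbols, model):
    values = [[lit_true(l, model) for l in c] for c in clauses]
    if all(any(v is True for v in vs) for vs in values):
        return True                        # 1. early termination: every clause satisfied
    if any(all(v is False for v in vs) for vs in values):
        return False                       # 1. early termination: some clause falsified
    # 2. pure symbol heuristic: a symbol with only one sign among not-yet-satisfied clauses
    literals = {l for c, vs in zip(clauses, values)
                if not any(v is True for v in vs) for l in c}
    for s in symbols:
        if (s in literals) != (("~" + s) in literals):
            return dpll(clauses, [x for x in symbols if x != s], {**model, s: s in literals})
    # 3. unit clause heuristic: exactly one unassigned literal left and no true literal
    for c, vs in zip(clauses, values):
        unassigned = [l for l, v in zip(c, vs) if v is None]
        if len(unassigned) == 1 and not any(v is True for v in vs):
            sym = unassigned[0].lstrip("~")
            return dpll(clauses, [x for x in symbols if x != sym],
                        {**model, sym: not unassigned[0].startswith("~")})
    p, rest = symbols[0], symbols[1:]      # otherwise branch on the first remaining symbol
    return (dpll(clauses, rest, {**model, p: True}) or
            dpll(clauses, rest, {**model, p: False}))

# A small satisfiable clause set: (~A | B | C) & (~A | D) & A.
print(dpll([["~A", "B", "C"], ["~A", "D"], ["A"]], ["A", "B", "C", "D"], {}))   # True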
Davis-Putnam procedure
function DP(Δ)
  for φ in vocabulary(Δ) do
    var Δ' ← { };
    for δ1 in Δ, δ2 in Δ such that φ ∈ δ1 and ¬φ ∈ δ2 do
      var δ' ← (δ1 – {φ}) ∪ (δ2 – {¬φ});
      if not tautology(δ') then Δ' ← Δ' ∪ {δ'};
    Δ ← (Δ – {δ ∈ Δ | φ ∈ δ or ¬φ ∈ δ}) ∪ Δ';
  return {if { } ∈ Δ then unsatisfiable else satisfiable};

function tautology(δ)
  ψ ∈ δ and ¬ψ ∈ δ for some symbol ψ
84
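A minimal Python sketch of one DP elimination step as reconstructed above, to make the quadratic blow-up discussed on the next slide concrete; the "~"-prefixed literal encoding is an assumption.

# Sketch: eliminate one symbol phi from a clause set, DP-style.
def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def eliminate(clauses, phi):
    pos = [c for c in clauses if phi in c]
    neg = [c for c in clauses if "~" + phi in c]
    resolvents = set()
    for c1 in pos:
        for c2 in neg:                     # up to |pos| * |neg| resolvents: the quadratic growth
            r = frozenset((c1 - {phi}) | (c2 - {"~" + phi}))
            if not any(complement(l) in r for l in r):
                resolvents.add(r)          # keep r only if it is not a tautology
    rest = {c for c in clauses if phi not in c and "~" + phi not in c}
    return rest | resolvents

clauses = {frozenset({"~A", "B"}), frozenset({"A", "C"}), frozenset({"A", "~D"})}
print(eliminate(clauses, "A"))             # e.g. {frozenset({'B', 'C'}), frozenset({'B', '~D'})}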
DPLL procedure vs. DP procedure
• DP can cause a quadratic expansion every time it is applied.
• This can easily exhaust space on large problems.
• DPLL attacks the problem by sequentially solving smaller
problems.
• Basic idea: Choose a literal. Assume true, simplify clause set, and
try to show satisfiable. Repeat for the negation of the literal.
• Good because we do not cross multiply the clause set

85
DPLL procedure vs. DP procedure

Reference: http://logic.stanford.edu/classes/cs157/2011/lectures/lecture04.pdf
86
WalkSAT algorithm
• Incomplete, local search algorithm
• Evaluation function: min-conflict heuristic, to minimize the
number of unsatisfied clauses
• Balance between greediness and randomness

87
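A Python sketch of WalkSAT as characterized above: with probability p flip a random symbol from a randomly chosen unsatisfied clause, otherwise flip whichever symbol in that clause minimizes the number of unsatisfied clauses; the clause encoding matches the earlier sketches (an assumption).

import random

# Sketch of WalkSAT: incomplete local search for a satisfying assignment.
def walksat(clauses, p=0.5, max_flips=10_000):
    symbols = {lit.lstrip("~") for clause in clauses for lit in clause}
    model = {s: random.choice([True, False]) for s in symbols}

    def satisfied(clause):
        return any(model[l.lstrip("~")] != l.startswith("~") for l in clause)

    def num_unsatisfied():
        return sum(not satisfied(c) for c in clauses)

    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not satisfied(c)]
        if not unsatisfied:
            return model                   # a model was found: the clauses are satisfiable
        clause = random.choice(unsatisfied)
        if random.random() < p:            # random-walk step
            sym = random.choice(list(clause)).lstrip("~")
        else:                              # greedy (min-conflict) step
            def unsat_if_flipped(s):
                model[s] = not model[s]
                n = num_unsatisfied()
                model[s] = not model[s]
                return n
            sym = min({l.lstrip("~") for l in clause}, key=unsat_if_flipped)
        model[sym] = not model[sym]
    return False                           # failure: unsatisfiable OR more time is needed

# Usage: clauses as lists of literals, here (A | ~B) & (B | C).
print(walksat([["A", "~B"], ["B", "C"]]))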
WalkSAT algorithm
• The algorithm returns a model → satisfiable
• The algorithm returns false → unsatisfiable OR more time is
needed for searching

• WalkSAT cannot always detect unsatisfiability


• It is most useful when a solution is expected to exist.
• For example,
• An agent cannot reliably use WALKSAT to prove that a square is
safe in the Wumpus world.
• Instead, it can say, “I thought about it for an hour and couldn’t come
up with a possible world in which the square isn’t safe.”

88
Quiz 06: DPLL and DP
• Given a KB as shown aside KB
A ⇒ B ∨ C
A ⇒ D
C ∧ D ⇒ ¬F
B ⇒ F
A

• Using either DPLL or DP to check whether KB entails each


of the following sentences
• 𝑪
• 𝑩 ⇒ ¬𝑪
89
