
PROPOSITIONAL THEOREM PROVING
• Applying rules of inference directly to the sentences in our knowledge
base to construct a proof of the desired sentence without consulting
models
• If the number of models is large but the length of the proof is short,
then theorem proving can be more efficient than model checking.
Equivalence
• Two sentences α and β are logically equivalent if they are true in the same set of
models: α ≡ β if and only if α |= β and β |= α
Deduction theorem
• For any sentences α and β, α |= β if and only if the sentence (α ⇒ β) is valid
• Conversely, every valid implication sentence describes a legitimate inference
Satisfiability
• A sentence is satisfiable if it is true under some interpretation (i.e., it has a model);
otherwise the sentence is unsatisfiable
• The problem of determining the satisfiability of sentences in propositional logic—
the SAT problem—was the first problem proved to be NP-complete
• Validity and satisfiability are connected:
• α is valid iff ¬α is unsatisfiable; contrapositively, α is satisfiable iff ¬α is not valid
• α |= β if and only if the sentence (α ∧ ¬β) is unsatisfiable (proof by refutation); a
sketch of this check appears below
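The following is a minimal sketch (not part of the original slides) of this connection using the sympy library; the symbols P and Q and the tiny knowledge base are illustrative assumptions only.

# Check entailment by refutation: KB |= alpha iff (KB AND NOT alpha) is unsatisfiable.
from sympy import symbols, Implies
from sympy.logic.inference import satisfiable

P, Q = symbols("P Q")

kb = P & Implies(P, Q)     # knowledge base: P and (P => Q)
alpha = Q                  # query

entails = not satisfiable(kb & ~alpha)   # unsatisfiable means KB entails alpha
print(entails)             # True: Q follows from P and (P => Q)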
Inference and proofs
• Inference rules can be applied to derive a proof—a chain of conclusions that
leads to the desired goal
• Modus Ponens (Latin for mode that affirms) is written: from α ⇒ β and α, infer β
• And-Elimination: from a conjunction, any of the conjuncts can be inferred: from α ∧ β, infer α
• All of the logical equivalences can be used as inference rules.
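As a toy illustration (the representation and names are assumptions, not from the slides), the two rules above can be applied mechanically to sentences encoded as Python tuples:

# Sentences as tuples: ("=>", p, q) is an implication, ("and", p, q) a conjunction;
# proposition symbols are plain strings.
def modus_ponens(known, sentence):
    # If sentence is (p => q) and p is already known, infer q.
    if sentence[0] == "=>" and sentence[1] in known:
        return sentence[2]
    return None

def and_elimination(sentence):
    # From (p and q), either conjunct can be inferred.
    if sentence[0] == "and":
        return [sentence[1], sentence[2]]
    return []

print(modus_ponens({"L11"}, ("=>", "L11", "B11")))   # B11
print(and_elimination(("and", "L11", "Breeze")))     # ['L11', 'Breeze']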


A simple knowledge base
Exploring the wumpus world
• [Figure: wumpus-world exploration; the agent's percept shown is
[None, Breeze, None, None, None], i.e. a breeze is perceived]
• finding a proof can be more efficient because the proof can ignore
irrelevant propositions, no matter how many of them there are
• Monotonicity: the set of entailed sentences can only increase as information is
added to the knowledge base
Resolution

• The unit resolution rule takes a clause (a disjunction of literals) and a literal that is the
complement of one of the clause's literals, and produces a new clause with that literal removed
• Full resolution takes two clauses and produces a new clause containing all the literals of the
two original clauses except the two complementary literals
• The resulting clause should contain only one copy of each literal
• Removal of multiple copies of literals is called factoring
• For example, if we resolve (A ∨ B) with (A ∨ ¬B) we obtain (A ∨ A), which is reduced to just A
Conjunctive normal form
• A sentence expressed as a conjunction of disjunctions of literals is said to be in
conjunctive normal form or CNF
• A sentence in k-CNF has exactly k literals per clause

• To convert a sentence to CNF: (1) eliminate biconditionals by replacing α ⇔ β with
(α ⇒ β) ∧ (β ⇒ α); (2) eliminate implications by replacing α ⇒ β with ¬α ∨ β;
(3) move ¬ inwards using double-negation elimination and De Morgan's laws;
(4) distribute ∨ over ∧. A sketch of this conversion appears below.
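As a quick check of these steps (a sketch assuming the sympy library is available; the exact output formatting may differ), the wumpus-world biconditional B1,1 ⇔ (P1,2 ∨ P2,1) can be converted mechanically:

# Convert B11 <=> (P12 | P21) to conjunctive normal form with sympy.
from sympy import symbols, Equivalent
from sympy.logic.boolalg import to_cnf

B11, P12, P21 = symbols("B11 P12 P21")
sentence = Equivalent(B11, P12 | P21)
print(to_cnf(sentence))
# A conjunction of three clauses, e.g. (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11)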
Resolution algorithm
• Inference procedures based on resolution work by using the principle of proof by
contradiction
• to show that KB |= α, we show that (KB ∧ ¬α) is unsatisfiable
• First, (KB ∧ ¬α) is converted into CNF.
• Then, the resolution rule is applied to the resulting clauses. Each pair that
contains complementary literals is resolved to produce a new clause, which is
added to the set if it is not already present. The process continues until one of
two things happens:
• there are no new clauses that can be added, in which case KB does not entail α; or,
• two clauses resolve to yield the empty clause, in which case KB entails α.
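A compact sketch of this procedure (an illustrative implementation, not the textbook pseudocode): clauses are represented as frozensets of string literals, with negation written as a leading '~', so factoring happens automatically.

from itertools import combinations

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(ci, cj):
    # All clauses obtained by resolving ci and cj on one pair of complementary literals.
    resolvents = []
    for lit in ci:
        if complement(lit) in cj:
            resolvents.append(frozenset((ci - {lit}) | (cj - {complement(lit)})))
    return resolvents

def pl_resolution(kb_clauses, negated_query_clauses):
    # KB |= alpha iff (KB AND NOT alpha) is unsatisfiable.
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for resolvent in resolve(ci, cj):
                if not resolvent:          # empty clause: contradiction, KB entails alpha
                    return True
                new.add(resolvent)
        if new <= clauses:                 # no new clauses: KB does not entail alpha
            return False
        clauses |= new

# Example: KB = {(~P21 | B11), ~B11}; query alpha = ~P21, so we add P21 and refute.
kb = [frozenset({"~P21", "B11"}), frozenset({"~B11"})]
print(pl_resolution(kb, [frozenset({"P21"})]))    # True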
Horn clauses and definite clauses
• Definite clause - disjunction of literals of which exactly one is positive
• For example, (¬L1,1 ∨ ¬Breeze ∨ B1,1) is a definite clause, whereas (¬B1,1 ∨ P1,2 ∨ P2,1) is not
• Horn clause - disjunction of literals of which at most one is positive
• All definite clauses are Horn clauses
• Clauses with no positive literals are called goal clauses
• Horn clauses are closed under resolution: if you resolve two Horn clauses, you get
back a Horn clause
• Knowledge bases containing only definite clauses are interesting for three reasons:
• Every definite clause can be written as an implication whose premise is a
conjunction of positive literals and whose conclusion is a single positive
literal. The premise is called the body and the conclusion is called the head
• For example, (¬L1,1 ∨ ¬Breeze ∨ B1,1) can be written as the implication (L1,1 ∧ Breeze) ⇒ B1,1
• Inference with Horn clauses can be done through the forward-chaining and
backward-chaining algorithms. This type of inference is the basis for logic
programming.
• Deciding entailment with Horn clauses can be done in time that is linear in
the size of the knowledge base
Forward chaining
• Determines if a single proposition symbol q—the query—is entailed by a KB of
definite clauses
• It begins from known facts in the knowledge base. If all the premises of an
implication are known, then its conclusion is added to the set of known facts.
• if L1,1 and Breeze are known and (L1,1 ∧ Breeze) ⇒ B1,1 is in KB, then B1,1 can be added
• This process continues until the query q is added or until no further inferences can
be made
• Sound and complete; an efficient implementation runs in linear time
• Data-driven reasoning—reasoning in which the focus of attention starts with the
known data. It can be used within an agent to derive conclusions from incoming
percepts, often without a specific query.
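A small sketch of this idea (an illustrative implementation; the rule and fact encoding is an assumption): each definite clause is a pair (set of premise symbols, conclusion symbol), and the known facts seed the agenda.

from collections import deque

def forward_chaining(rules, facts, query):
    count = {i: len(premises) for i, (premises, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:          # all premises known: add the conclusion
                    agenda.append(conclusion)
    return False

# Example from the slides: (L11 AND Breeze) => B11, with L11 and Breeze known.
rules = [({"L11", "Breeze"}, "B11")]
print(forward_chaining(rules, ["L11", "Breeze"], "B11"))   # True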
Backward chaining
• Works backward from the query; Goal-directed reasoning
• If the query q is known to be true, then no work is needed.
• Otherwise, the algorithm finds those implications in the knowledge base whose
conclusion is q.
• If all the premises of one of those implications can be proved true (by backward
chaining), then q is true.
• An efficient implementation runs in linear time; in practice, the cost of backward
chaining is often much less than linear in the size of the KB, because the process
touches only relevant facts (see the sketch below)
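A matching sketch of backward chaining over the same kind of rule encoding (an illustrative recursive version, with a visited set to avoid looping on cyclic rules):

def backward_chaining(rules, facts, query, visited=frozenset()):
    if query in facts:
        return True                        # the query is a known fact
    if query in visited:
        return False                       # avoid infinite regress
    for premises, conclusion in rules:
        if conclusion == query and all(
            backward_chaining(rules, facts, p, visited | {query}) for p in premises
        ):
            return True                    # every premise of some rule was proved
    return False

rules = [({"L11", "Breeze"}, "B11")]
print(backward_chaining(rules, {"L11", "Breeze"}, "B11"))   # True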
DPLL algorithm
• For checking satisfiability: the SAT problem
• Davis–Putnam algorithm (1960) - seminal paper by Martin Davis and Hilary
Putnam; which is modified by Davis, Logemann, and Loveland (1962), so we will
call it DPLL after the initials of all four authors
• DPLL takes as input a sentence in CNF
• A recursive, depth-first enumeration of possible models.
• 3 improvements over the simple scheme of TT-ENTAILS:
• Early termination
• Pure symbol heuristic
• Unit clause heuristic
• Early termination: The algorithm detects whether the sentence must be true or
false, even with a partially completed model.
• A clause is true if any literal is true, even if the other literals do not yet have truth
values; hence, the sentence as a whole could be judged true even before the
model is complete.
• For example, the sentence (A ∨ B) ∧ (A ∨ C) is true if A is true, regardless of the values of B
and C.
• Similarly, a sentence is false if any clause is false, which occurs when each of its
literals is false. Again, this can occur long before the model is complete.
• Early termination avoids examination of entire subtrees in the search space.
• Pure symbol heuristic: A pure symbol is a symbol that always appears with the
same “sign” in all clauses.
• For example, in the three clauses (A ∨ ¬B), (¬B ∨ ¬C), and (C ∨ A), the symbol A is pure because
only the positive literal appears, B is pure because only the negative literal appears, and C is
impure.
• It is easy to see that if a sentence has a model, then it has a model with the pure
symbols assigned so as to make their literals true, because doing so can never make
a clause false.
• In determining the purity of a symbol, the algorithm can ignore clauses that are
already known to be true in the model constructed so far.
• For example, if the model contains B =false, then the clause (¬B ∨ ¬C) is already
true, and in the remaining clauses C appears only as a positive literal; therefore C
becomes pure.
• Unit clause heuristic: A unit clause was defined earlier as a clause with just one
literal. In the context of DPLL, it also means clauses in which all literals but one
are already assigned false by the model.
• For example, if the model contains B =true, then (¬B ∨ ¬C) simplifies to ¬C, which is a unit
clause. Obviously, for this clause to be true, C must be set to false.
• The unit clause heuristic assigns all such symbols before branching on the
remainder.
• Assigning one unit clause can create another unit clause—for example, when C is
set to false, (C ∨ A) becomes a unit clause, causing true to be assigned to A. This
“cascade” of forced assignments is called unit propagation.
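A compact sketch of DPLL with the three improvements above (illustrative code, not the textbook pseudocode; clauses are frozensets of string literals with '~' for negation):

def dpll(clauses, model=None):
    model = model or {}
    # Early termination: simplify the clauses under the partial model.
    simplified = []
    for clause in clauses:
        if any(model.get(l.lstrip("~")) is (not l.startswith("~")) for l in clause):
            continue                       # clause already true under the model
        remaining = frozenset(l for l in clause if l.lstrip("~") not in model)
        if not remaining:
            return None                    # clause false under the model: backtrack
        simplified.append(remaining)
    if not simplified:
        return model                       # every clause satisfied
    symbols = {l.lstrip("~") for c in simplified for l in c}
    # Pure symbol heuristic: a symbol appearing with only one sign.
    for s in symbols:
        signs = {l.startswith("~") for c in simplified for l in c if l.lstrip("~") == s}
        if len(signs) == 1:
            return dpll(clauses, {**model, s: not signs.pop()})
    # Unit clause heuristic: a clause with a single unassigned literal.
    for c in simplified:
        if len(c) == 1:
            (l,) = c
            return dpll(clauses, {**model, l.lstrip("~"): not l.startswith("~")})
    # Otherwise branch on an arbitrary symbol.
    s = next(iter(symbols))
    return dpll(clauses, {**model, s: True}) or dpll(clauses, {**model, s: False})

# The pure-symbol example above: (A | ~B) & (~B | ~C) & (C | A) is satisfiable.
clauses = [frozenset({"A", "~B"}), frozenset({"~B", "~C"}), frozenset({"C", "A"})]
print(dpll(clauses))    # e.g. {'A': True, 'B': False} -- some satisfying partial assignment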
WALKSAT algorithm
• On every iteration, the algorithm picks an unsatisfied clause and picks a symbol in
the clause to flip.
• It chooses randomly between two ways to pick which symbol to flip:
• (1) a “min-conflicts” step that minimizes the number of unsatisfied clauses in the new state
and
• (2) a “random walk” step that picks the symbol randomly.
• When WALKSAT returns a model, the input sentence is indeed satisfiable, but
when it returns failure, there are two possible causes: either the sentence is
unsatisfiable or we need to give the algorithm more time.
• WALKSAT is most useful when we expect a solution to exist. It cannot always
detect unsatisfiability, which is required for deciding entailment.
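A sketch of WALKSAT as described above (illustrative code; the clause representation and the parameter values are assumptions):

import random

def walksat(clauses, p=0.5, max_flips=10_000):
    symbols = {l.lstrip("~") for c in clauses for l in c}
    model = {s: random.choice([True, False]) for s in symbols}

    def clause_true(clause):
        return any(model[l.lstrip("~")] != l.startswith("~") for l in clause)

    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not clause_true(c)]
        if not unsatisfied:
            return model                   # a satisfying assignment was found
        clause = random.choice(unsatisfied)
        if random.random() < p:
            sym = random.choice(list(clause)).lstrip("~")    # random walk step
        else:
            # Min-conflicts step: flip the symbol leaving the fewest unsatisfied clauses.
            def cost(s):
                model[s] = not model[s]
                bad = sum(not clause_true(c) for c in clauses)
                model[s] = not model[s]
                return bad
            sym = min((l.lstrip("~") for l in clause), key=cost)
        model[sym] = not model[sym]
    return None                            # failure: unsatisfiable OR needs more time

clauses = [frozenset({"A", "~B"}), frozenset({"~B", "~C"}), frozenset({"C", "A"})]
print(walksat(clauses))                    # prints some satisfying assignment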
The landscape of random SAT problems
Agents Based On Propositional Logic
• The current state of the world
• new assertion would not contradict existing KB
• a percept asserts something only about the current time
• A fluent refers to an aspect of the world that changes over time
• Symbols associated with permanent aspects of the world do not need a time
superscript and are called atemporal variables
• To express the transition model of the world as a set of logical sentences, we need
proposition symbols for the occurrences of actions
• To describe how the world changes, we can write effect axioms that specify the
outcome of an action at the next time step, for example:
L1,1^0 ∧ FacingEast^0 ∧ Forward^0 ⇒ (L2,1^1 ∧ ¬L1,1^1)
• Effect axioms alone lead to the frame problem; a naive solution is to add frame axioms
explicitly asserting all the propositions that remain the same, but this is inefficient
• In a world with m different actions and n fluents, the set of frame axioms will be
of size O(mn). This specific manifestation of the frame problem is called the
representational frame problem, which in turn leads to the inferential frame problem.
• Solution to this involves changing one’s focus from writing axioms about actions
to writing axioms about fluents.
• For each fluent F, we will have an axiom that defines the truth value of F^(t+1) in
terms of fluents (including F itself) at time t and the actions that may have
occurred at time t.
• The truth value of F^(t+1) can be set in one of two ways: either the action at time t
causes F to be true at t+1, or F was already true at time t and the action at time t
does not cause it to be false.
• An axiom of this form is called a successor-state axiom and has this schema:
F^(t+1) ⇔ ActionCausesF^t ∨ (F^t ∧ ¬ActionCausesNotF^t)
• Successor-state axiom of HaveArrow:
HaveArrow^(t+1) ⇔ (HaveArrow^t ∧ ¬Shoot^t)
• There is no complete solution within logic; system designers have to use good
judgment in deciding how detailed they want to be in specifying their model, and
what details they want to leave out.
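As a small worked example (a sketch assuming the sympy library; the HaveArrow axiom is the standard wumpus-world successor-state axiom), one time step of a successor-state axiom can be checked mechanically:

# HaveArrow^(t+1) <=> (HaveArrow^t AND NOT Shoot^t), instantiated for t = 0.
from sympy import symbols, Equivalent
from sympy.logic.inference import satisfiable

HaveArrow0, HaveArrow1, Shoot0 = symbols("HaveArrow0 HaveArrow1 Shoot0")
axiom = Equivalent(HaveArrow1, HaveArrow0 & ~Shoot0)

# If the agent had the arrow and shot at time 0, it cannot still have it at time 1.
kb = axiom & HaveArrow0 & Shoot0
print(not satisfiable(kb & HaveArrow1))   # True: KB entails ~HaveArrow1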
