
COSC2129

Semester 3, 2024

Artificial Intelligence

Propositional Logic
Road Map for Today
 Revision of adversarial search

 Knowledge representation and reasoning


 Logic in general
 Models and entailment
 Soundness and completeness

 Propositional logic
 Syntax and semantics
 Model checking
 Inference rules and theorem proving
Search & Adversarial Search

 Uninformed search

 Informed search
 Games and adversarial search
- Minimax search
- Alpha-beta pruning search
- Cutting off search
- Heuristic evaluation function
Knowledge and Reasoning
 Humans know things, and what they know (knowledge) helps them do
things (reasoning).

 Human intelligence is achieved by processes of reasoning that
operate on internal representations of knowledge.

 It is also important for an artificial being, which we call an agent, to achieve
“high-level” intelligence.

A good chess program can defeat human masters, but it does not know that a
chess board can also be used for playing checkers, and it cannot decide when to
play a chess game.

The transition model for 8-puzzle problems, i.e., knowledge of what the actions
do, can be used to predict the outcomes of actions but not to deduce that two
tiles cannot occupy the same space.
Knowledge and Reasoning (cont’d)
 Knowledge and reasoning
 enable agents to cope with complex environments.
[E.g., how to schedule an around-the-world trip? Need knowledge about “the world”.]

 play a crucial role in dealing with partially observable environments.


[ Part of states could be hidden. E.g., how to avoid collisions in a busy shopping mall?]

 enable agents to handle complicated tasks, e.g. understanding natural


language.
[ E.g., “Mike opened the door, found a chair and sat on it”. Does “it” refer to the door or the chair?]

 provide more flexibility in problem solving.


[ Decision based on updated knowledge. E.g., how to choose a good text book for a
course? ]
Knowledge and Reasoning (cont’d)
To be intelligent, an agent needs a knowledge base to store knowledge and an
inference engine to do the reasoning.

An agent must be able to:


- represent knowledge of the world, e.g., states, actions, etc.
- incorporate new percepts.
- update internal representations of the world.
- deduce hidden properties of the world.
- deduce appropriate actions.

How to represent knowledge and perform inference?


Knowledge Representation
 Desirable properties of a knowledge representation (KR) scheme:
 Expressive: must be able to represent as much as possible about the world.

 Precise: must be clear and unambiguous.


E.g., the man saw the boy in the park with the telescope. Whose telescope?

 Regular: there must be a clear mapping between the knowledge and its representation.

 Adaptable: must be able to add new information or delete invalid information.

 Suitable for reasoning: new knowledge can be inferred.

 Computationally efficient: the scheme can be implemented with reasonable efficiency.

 Qualitative: it can represent qualitative knowledge.


E.g., how to represent “you can only put block A on block B if B is clear”?

 Meta-level reasoning: can represent knowledge about knowledge.

 …
Wumpus World
 Environment: a cave containing a 4x4 grid of rooms
-- A wumpus lives in one of these rooms.
-- Some rooms contain bottomless pits; each room may have a pit with prob. of 0.2.
-- One room has a heap of gold.
-- Locations of the wumpus and the gold are random.
-- An agent starts from room [1,1], facing right.
-- Room [1,1] is both the entrance and the exit; it is free of wumpus, pit and gold.

 Rules of the environment
-- Rooms adjacent to the wumpus are smelly.
-- Rooms adjacent to a pit are breezy.
-- The room containing the gold is glittering.
-- A bump is perceived when hitting a wall.
-- A scream is heard everywhere if the wumpus is killed.
-- Shooting kills the wumpus if you are facing it.
-- Shooting uses up the only arrow.
-- Grabbing picks up the gold if in the same square.

 Percepts (sensors): Stench, Breeze, Glitter, Bump, Scream

 Actions: Left turn, Right turn, Forward, Grab, Shoot, Climb

 Partially observable

 Main challenge: initial ignorance of the environmental configuration
Wumpus World (cont’d)
Only partially observable, so reasoning is needed!

How does the agent reason?


 Represent the states of the world

 Determine the rules of the world, e.g., rooms next to the


wumpus stink.

Then repeatedly:
 Incorporate percepts as inputs from sensors

 Update knowledge of the world

 Deduce “hidden” properties of the world

 Determine actions
Wumpus World (cont’d)

An agent is often considered as a knowledge base plus an inference engine.

Knowledge base (domain-dependent)


“I am in room [3,2].”
“If the wumpus is in any adjacent room, I will sense a
stench.”

Inference engine (domain-independent)


“If A and A => B, then B.”
Exploring a Wumpus World
[A sequence of figures (omitted here) steps through the agent's exploration: sensing, updating its knowledge, and choosing safe moves.]
Logic in General
 Logic is a set of formal languages for representing information
such that conclusions can be drawn.

 Syntax specifies the well-formed sentences in the language.

 Semantics define the “meaning” of sentences, i.e., the truth of


a sentence with respect to each possible world.

 Examples of the language of arithmetic:

 x+2 ≥ y is a sentence; x2+y > {} is not a sentence


 x+2 ≥ y is true iff the number x+2 is no less than the number y
 x+2 ≥ y is true in a world where x = 7, y = 1
 x+2 ≥ y is false in a world where x = 0, y = 6
Entailment
 Logical entailment means that one follows logically from
another:
KB ╞ α

 Knowledge base KB entails sentence α if and only if α is true


in all worlds where KB is true.
 KB containing “the Giants won” and “the Reds won” entails
“Either the Giants won or the Reds won”.
 “x+y = 4” entails “4 = x+y”.

 Entailment is a relationship between sentences (syntax), which


is based on semantics.
Models
 Logicians typically think in terms of models, which are formally
structured worlds with respect to which truth can be evaluated.
In other words: Assignment of (T/F) values to each of the symbols.

 We say m is a model of (or satisfies) a sentence α if α is true in m.

 M(α) is the set of all models of α.

 KB ╞ α iff M(KB) ⊆ M(α)

 E.g., KB = “the Giants won and the Reds won” and α = “the Giants won”: every model of KB is also a model of α, so KB ╞ α.
Entailment in the Wumpus World

Situation after detecting nothing


in [1,1], moving to [2, 1],
and detecting breeze in [2,1]

Consider possible models for ?s


assuming only pits

3 Boolean choices ⇒ 8 possible models
Wumpus Models
Wumpus Models (cont’d)

 KB = the rules of the wumpus world + percepts

 α1 = “[1, 2] is safe”, KB ╞ α1, proved by model checking

 α2 = “[2,2] is safe”, KB ╞ α2
Inference
 KB ├i α denotes α is derived from KB by an inference algorithm i.
-- Consequences of KB are a haystack; α is a needle.
-- Entailment = needle in haystack; inference = finding it.

 Soundness: i is sound if
whenever KB ├i α, it is also true that KB╞ α

 Completeness: i is complete if
whenever KB╞ α, it is also true that KB ├i α

 Preview: we will define a logic (first-order logic) which is


expressive enough to say almost anything of interest, and for which
there exists a sound and complete inference procedure.

 The procedure will answer any question whose answer follows


from what is known by the KB.
Propositional Logic: Syntax
Propositional logic is the simplest logic!

Basic elements:
-- Propositional symbols (atomic sentences)
-- P, Q, R, … (start with an uppercase letter)
-- T (always true), F (always false)
-- Connectives: ¬, ∧, ∨, ⇒, ⇔

Complex sentences are constructed from simpler
ones using connectives and parentheses.

P3,1 ∨ P3,2 : “There is a pit in [3,1] or [3,2]”
P3,1 ∧ P3,2 : “There is a pit in [3,1] and [3,2]”
Propositional Logic: Syntax (cont’d)
T and F are atomic sentences
P, Q, R, … are atomic sentences
If P is a sentence, so are (P) and [P]
If P is a sentence, so is ¬P (negation)
If P and Q are sentences, so are
P ∧ Q (conjunction)
P ∨ Q (disjunction)
P ⇒ Q (implication), a.k.a. a rule or if-then statement
P ⇔ Q (biconditional)

A literal is an atomic sentence, either positive (P) or negative (¬P).


Propositional Logic: Syntax (cont’d)
Wff (well-formed formula, i.e., syntactically ok):
P
P ∧ Q
P ∨ Q
(P ∧ Q) ⇒ R
((P ∧ Q) ⇒ (¬R ∨ (S ∧ ¬P)))
Note: parentheses must be used to avoid ambiguity.

Non-wff (“syntactic junk”):
P¬
∧P
P ∧ ∨ Q
⇒P
Propositional Logic: Semantics
 A model specifies the truth value (true/false) for every proposition symbol,
e.g., P1,2 = false, P2,2 = true, P3,1 = false.
With 3 symbols, 8 possible models can be enumerated automatically.

 Recursive evaluation: atomic sentences and sentences formed with the five connectives.

 Rules for computing the truth value with respect to a model m:
-- T and F are true and false in every model, respectively
-- Other atomic sentences must be directly specified in m
-- ¬S is true iff S is false
-- S1 ∧ S2 is true iff S1 is true and S2 is true
-- S1 ∨ S2 is true iff S1 is true or S2 is true
-- S1 ⇒ S2 is true iff S1 is false or S2 is true
           is false iff S1 is true and S2 is false
-- S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true

 A simple recursive process evaluates an arbitrary sentence, e.g.,
¬P1,2 ∧ (P2,2 ∨ P3,1) = true ∧ (true ∨ false) = true ∧ true = true
Truth Tables
 Rules can be expressed with truth tables, specifying the truth
values of sentences for each assignment of truth values to their
components.

 Truth tables for the 5 connectives:

  P     Q   |  ¬P   | P ∧ Q | P ∨ Q | P ⇒ Q | P ⇔ Q
false false | true  | false | false | true  | true
false true  | true  | false | true  | true  | false
true  false | false | false | true  | false | false
true  true  | false | true  | true  | true  | true
Implication
The truth value of implication “⇒” is somewhat counter-intuitive.

Let P refer to “He studied hard.” and Q refer to “He passed the exam.”.

So P ⇒ Q means “He studied hard implies he passed the exam.”.

-- True when P true, Q true: he studied hard, so he passed the exam.
-- True when P false, Q false: he didn’t study hard, so he didn’t pass the exam.
-- False when P true, Q false: he studied hard, but he didn’t pass the exam.
-- True when P false, Q true: he didn’t study hard, but he did pass the exam.

The last one seems bizarre. It says that if he passed the exam, then the implication is true even if
the premise/antecedent is false.

“Dogs love cats implies cats hate dogs” sounds odd, but it is logically true whenever “cats hate dogs” is true.

Question: what is the truth value of P ⇒ P?


Propositional Logic in Wumpus World
Let Pi,j be true if there is a pit in [i, j].
Let Bi,j be true if it is breezy in [i, j].

R1: ¬P1,1                          There is no pit in [1, 1].

R2: B1,1 ⇔ (P1,2 ∨ P2,1)           Pits cause breezes in adjacent squares.

R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)    Pits cause breezes in adjacent squares.

R4: ¬B1,1                          There is no breeze in [1, 1].

R5: B2,1                           There is a breeze in [2, 1].

From this information, we can infer α: there is no pit in [1, 2].
Truth Tables for Inference

Enumerate rows by assigning truth values to the symbols in all possible ways.

If KB is true for a row, check whether α is also true for that row.


Model-Checking

 Inference by enumerating all models (via truth


tables)

 Sound and complete

 For n proposition symbols, the time complexity is O(2^n)
and the space complexity is O(n)
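As a sketch, inference by model enumeration can be written as a brute-force loop over all 2^n models. The encoding below of the wumpus rules R1–R5 (the symbol names and the lambda-as-sentence convention) is my own illustration:

```python
# Truth-table entailment: KB |= alpha iff alpha is true in every model
# in which the KB is true. Enumerates all 2^n models (n = 7 here).

from itertools import product

SYMBOLS = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

def kb_true(m):
    """The wumpus-world KB R1-R5 evaluated in model m."""
    r1 = not m["P11"]                                    # R1: no pit in [1,1]
    r2 = m["B11"] == (m["P12"] or m["P21"])              # R2: B11 <=> (P12 v P21)
    r3 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # R3
    r4 = not m["B11"]                                    # R4: no breeze in [1,1]
    r5 = m["B21"]                                        # R5: breeze in [2,1]
    return r1 and r2 and r3 and r4 and r5

def tt_entails(alpha):
    """Sound and complete: check alpha in every model of the KB."""
    for values in product([False, True], repeat=len(SYMBOLS)):
        m = dict(zip(SYMBOLS, values))
        if kb_true(m) and not alpha(m):
            return False                                 # counterexample model
    return True

print(tt_entails(lambda m: not m["P12"]))  # True: [1,2] is pit-free
print(tt_entails(lambda m: m["P31"]))      # False: a pit in [3,1] is not entailed
```

Note that neither P3,1 nor ¬P3,1 is entailed: the KB has models with the pit in [2,2] and models with the pit in [3,1].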
Equivalence
Two sentences are logically equivalent if and only if (iff) they are true in the same models (they have
the same truth value for the same inputs). It is expressed as:
α ≡ β iff α ╞ β and β ╞ α

Some standard logical equivalences:

[Table of standard equivalences (commutativity, associativity, De Morgan’s laws, etc.) omitted.]

E.g., P ⇒ Q is logically equivalent to ¬P ∨ Q.
Validity
A sentence is valid if it is true in all models.

The following sentences are valid:


True
P ∨ ¬P
P ⇒ P
(P ∧ (P ⇒ Q)) ⇒ Q

Valid sentences are also known as tautologies (necessarily true).

Validity can be used for inference - the Deduction Theorem:

KB ╞ α if and only if (KB ⇒ α) is valid


Satisfiability
A sentence is satisfiable if it is true in some models, e.g.,
P ∨ Q,
P

A sentence is unsatisfiable if it is true in no models, e.g.,

P ∧ ¬P

Satisfiability can be used for inference as:

KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable

It is actually the standard mathematical proof technique of reductio ad absurdum


(literally, “reduction to an absurd thing”).
It is also called proof by refutation or proof by contradiction.
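The refutation idea above can be sketched with a brute-force satisfiability test. The helper name and the tiny two-symbol KB are my own illustration:

```python
# Entailment by refutation: KB |= alpha iff (KB AND NOT alpha) is
# unsatisfiable. Satisfiability is tested by enumerating assignments.

from itertools import product

def satisfiable(f, symbols):
    """True iff f holds in at least one truth assignment to symbols."""
    return any(f(dict(zip(symbols, v)))
               for v in product([False, True], repeat=len(symbols)))

# KB = (P => Q) and P;  alpha = Q
kb    = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]

# KB and not-alpha has no model, so KB entails alpha
entails = not satisfiable(lambda m: kb(m) and not alpha(m), ["P", "Q"])
print(entails)  # True

print(satisfiable(lambda m: m["P"] and not m["P"], ["P"]))  # False: P AND NOT P
```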
Inference
The aim of logical inference is to decide whether an entailment KB ╞ α holds.

To prove it, there are two main approaches:

-- by truth-table-based enumeration (the model-checking approach)

The complexity of this approach is exponential.

-- by applying inference rules (the theorem-proving approach)

An inference rule has the general form

α1, α2, …, αn
─────────────
      β

meaning “if we have a proof that each αi is true, then this extends
to a proof that β is true”.

A chained inference (i.e., possibly using more than one rule) from α to β (α infers β) is written:
α ┣ β
Inference Rules
One well-known inference rule is called Modus Ponens:

α ⇒ β,  α
─────────
    β

It means that if α ⇒ β and α are given, then β can be inferred.
E.g., if (Hungry ∧ FoodisAvailable) ⇒ Eat and Hungry ∧ FoodisAvailable are given,
then Eat can be inferred.

Another useful inference rule is And-Elimination:

α ∧ β
─────
  α

It means that from a conjunction, any of the conjuncts can be inferred.

For example, from (Hungry ∧ FoodisAvailable), Hungry can be inferred.
Inference Rules (cont’d)
Logical equivalences can be used as inference rules; e.g., biconditional elimination yields
two rules:

      α ⇔ β                  (α ⇒ β) ∧ (β ⇒ α)
─────────────────            ─────────────────
(α ⇒ β) ∧ (β ⇒ α)                 α ⇔ β

By using these rules, we can infer that there is no pit in [1, 2] (the wumpus world).
We already know that:
R1: ¬P1,1; R2: B1,1 ⇔ (P1,2 ∨ P2,1); R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1); R4: ¬B1,1; R5: B2,1

Applying biconditional elimination to sentence R2, we can infer:

R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
Inference Rules (cont’d)
Applying And-Elimination to sentence R6, we can infer:

R7: (P1,2 ∨ P2,1) ⇒ B1,1

Applying contraposition to sentence R7, we get:

R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)

Applying Modus Ponens to sentence R8 (with sentence R4: ¬B1,1), we get:

R9: ¬(P1,2 ∨ P2,1)

Applying de Morgan’s rule to sentence R9, we can infer:

R10: ¬P1,2 ∧ ¬P2,1

So there is no pit in either [1,2] or [2,1].

Monotonicity
The previous inference process did not use sentences R1, R3, R5
R1: ¬P1,1   R2: B1,1 ⇔ (P1,2 ∨ P2,1)   R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)   R4: ¬B1,1   R5: B2,1
because they contain no relevant propositions.

Finding a proof can be highly efficient in practice because it can ignore irrelevant
propositions, no matter how many of them there are.

This follows from a fundamental property of logical systems: monotonicity.

Monotonicity means that for any sentences α and β,

if KB ╞ α then KB ∧ β ╞ α

Additional information β cannot invalidate any conclusion α already inferred.

Conclusions follow from inference rules regardless of what else is in the knowledge base.

Resolution
 Resolution inference rule:

l1 ∨ … ∨ lk,    m1 ∨ … ∨ mn
─────────────────────────────────────────────────────────
l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn

where li and mj are complementary literals.

e.g.,  P1,3 ∨ P2,2,    ¬P2,2
       ─────────────────────
               P1,3

 Resolution is sound and complete for propositional logic


Conjunctive Normal Form

Resolution can work for all propositional logical sentences.

Trick: first convert to conjunctive normal form (CNF),
a “conjunction of disjunctions of literals”.

CNF: Clause1 ∧ … ∧ Clausen
Clause: Literal1 ∨ … ∨ Literalm
Literal: P | ¬P
Conversion to CNF

1. Replace P ⇔ Q with (P ⇒ Q) ∧ (Q ⇒ P)
2. Replace P ⇒ Q with ¬P ∨ Q
3. Push ¬ innermost
 Replace ¬¬P with P
 Replace ¬(P ∧ Q) with ¬P ∨ ¬Q
 Replace ¬(P ∨ Q) with ¬P ∧ ¬Q

4. Distribute ∨ over ∧
 Replace P ∨ (Q ∧ R) with (P ∨ Q) ∧ (P ∨ R)
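The four conversion steps can be sketched as recursive rewrites on tuple-encoded sentences. The encoding (("iff", a, b), ("implies", a, b), ("not", a), binary ("and", a, b) / ("or", a, b)) and the function names are my own assumptions for illustration:

```python
# A sketch of the four CNF-conversion steps as three recursive passes.

def to_cnf(s):
    return distribute(push_not(eliminate(s)))

def eliminate(s):
    """Steps 1-2: remove <=> and =>."""
    if isinstance(s, str):
        return s
    op, args = s[0], [eliminate(a) for a in s[1:]]
    if op == "iff":
        a, b = args
        return ("and", ("or", ("not", a), b), ("or", ("not", b), a))
    if op == "implies":
        a, b = args
        return ("or", ("not", a), b)
    return (op, *args)

def push_not(s):
    """Step 3: move negation inward (double negation, De Morgan)."""
    if isinstance(s, str):
        return s
    if s[0] == "not":
        a = s[1]
        if isinstance(a, str):
            return s                              # negated literal: done
        if a[0] == "not":
            return push_not(a[1])                 # ~~P  ->  P
        flip = "and" if a[0] == "or" else "or"    # De Morgan
        return (flip, *[push_not(("not", x)) for x in a[1:]])
    return (s[0], *[push_not(a) for a in s[1:]])

def distribute(s):
    """Step 4: distribute OR over AND."""
    if isinstance(s, str) or s[0] == "not":
        return s
    a, b = distribute(s[1]), distribute(s[2])
    if s[0] == "or":
        if not isinstance(a, str) and a[0] == "and":
            return ("and", distribute(("or", a[1], b)), distribute(("or", a[2], b)))
        if not isinstance(b, str) and b[0] == "and":
            return ("and", distribute(("or", a, b[1])), distribute(("or", a, b[2])))
    return (s[0], a, b)

print(to_cnf(("implies", "P", ("and", "Q", "R"))))
# ('and', ('or', ('not', 'P'), 'Q'), ('or', ('not', 'P'), 'R'))
```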
Resolution example

KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1,    α = ¬P1,2

[Resolution refutation tree omitted.]
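This example can be sketched as a resolution refutation in Python, with the KB converted to CNF by hand first. The clause representation (frozensets of literal strings, with "~" marking negation) is my own assumption:

```python
# A sketch of resolution refutation for the wumpus example:
# KB = (B11 <=> (P12 v P21)) and ~B11, query alpha = ~P12.

from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def resolution_entails(kb_clauses, alpha_lit):
    """KB |= alpha iff resolution derives the empty clause from KB and ~alpha."""
    clauses = set(kb_clauses) | {frozenset([negate(alpha_lit)])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:                    # empty clause: contradiction
                    return True
                new.add(r)
        if new <= clauses:                   # saturated without contradiction
            return False
        clauses |= new

# KB in CNF (biconditional eliminated by hand):
kb = [frozenset(["~B11", "P12", "P21"]),     # B11 => (P12 v P21)
      frozenset(["~P12", "B11"]),            # P12 => B11
      frozenset(["~P21", "B11"]),            # P21 => B11
      frozenset(["~B11"])]                   # ~B11

print(resolution_entails(kb, "~P12"))  # True: KB entails no pit in [1,2]
```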


Horn and Definite Clauses
Horn clauses: at most one positive literal
Definite clauses: exactly one positive literal

Resolution with two Horn clauses results in another Horn clause.

Deciding entailment with Horn clauses has linear time complexity.

Forward and backward chaining are natural, linear-time inference algorithms for definite-clause KBs.
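Forward chaining over definite clauses can be sketched as follows, in the standard style of counting unsatisfied premises per clause. The example rule set is my own illustration, not from the slides:

```python
# A sketch of forward chaining for definite clauses (premises => conclusion).
# Each fired conclusion goes on the agenda; a clause fires when all of its
# premises have been proved. Runs in time linear in the size of the KB.

from collections import deque

def fc_entails(clauses, facts, query):
    """clauses: list of (premise_list, conclusion); facts: known atoms."""
    count = [len(prem) for prem, _ in clauses]   # unsatisfied premises left
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:                # all premises proved: fire
                    agenda.append(concl)
    return False

# Example KB: P=>Q, (L^M)=>P, (B^L)=>M, (A^P)=>L, (A^B)=>L
clauses = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
           (["A", "P"], "L"), (["A", "B"], "L")]
print(fc_entails(clauses, ["A", "B"], "Q"))  # True
print(fc_entails(clauses, ["A"], "Q"))       # False
```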


Conclusions
 Revision of adversarial search

 Knowledge representation and reasoning

 Logic in general
 Models and entailment
 Soundness and completeness

 Propositional logic
 Syntax and semantics
 Model checking
 Inference rules and theorem proving
