Unit - III
Using Predicate logic: Representing simple facts in logic - Representing Instance and Isa
relationships - Computable functions and predicates - Resolution - Natural deduction.
Representing knowledge using rules: Procedural Vs Declarative knowledge - Logic
programming - Forward Vs Backward reasoning - Matching - Control knowledge.
Quantifiers
Universal quantification
● (∀x)P(x) means that P holds for all values of x in the domain associated with that
variable
● E.g., (∀x) dolphin(x) → mammal(x)
Existential quantification
● (∃ x)P(x) means that P holds for some value of x in the domain associated with that
variable
● E.g., (∃ x) mammal(x) ∧ lays-eggs(x)
Consider the following example, which shows the use of predicate logic as a way of
representing knowledge.
1. Marcus was a man.
● man(Marcus)
2. Marcus was a Pompeian.
● Pompeian(Marcus)
3. All Pompeians were Romans.
● ∀x: Pompeian(x) → Roman(x)
4. Caesar was a ruler.
● ruler(Caesar)
5. All Romans were either loyal to Caesar or hated him.
● inclusive-or
● ∀x: Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
● exclusive-or
● ∀x: Roman(x) → (loyalto(x, Caesar) ∧ ¬hate(x, Caesar)) ∨ (¬loyalto(x, Caesar) ∧ hate(x, Caesar))
6. Everyone is loyal to someone.
● ∀x: ∃y: loyalto(x, y)
7. People only try to assassinate rulers they are not loyal to.
● ∀x: ∀y: person(x) ∧ ruler(y) ∧ tryassassinate(x, y) → ¬loyalto(x, y)
8. Marcus tried to assassinate Caesar.
● tryassassinate(Marcus, Caesar)
Now suppose we want to use these statements to answer the question: Was Marcus loyal to
Caesar?
Let’s try to produce a formal proof, reasoning backward from the desired goal:
¬loyalto(Marcus, Caesar)
In order to prove the goal, we need to use the rules of inference to transform it into another goal
(or possibly a set of goals) that can, in turn, be transformed, and so on, until there are no
unsatisfied goals remaining.
Figure: An attempt to prove ¬loyalto(Marcus, Caesar).
● The problem is that, although we know that Marcus was a man, we have no way to
conclude from that that Marcus was a person. We need to add the representation
of another fact to our system, namely: ∀x: man(x) → person(x)
● Now we can satisfy the last goal and produce a proof that Marcus was not loyal to
Caesar.
● From this simple example, we see that three important issues must be
addressed in the process of converting English sentences into logical statements and then
using those statements to deduce new ones:
1. Many English sentences are ambiguous (for example, 5, 6, and 7 above). Choosing
the correct interpretation may be difficult.
2. There is often a choice of how to represent the knowledge. Simple
representations are desirable, but they may exclude certain kinds of reasoning.
3. Even in very simple situations, a set of sentences is unlikely to contain
all the information necessary to reason about the topic at hand. In order to use a set
of statements effectively, it is usually necessary to have access to another set of
statements that represent facts that people consider too obvious to mention.
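To make the backward reasoning above concrete, here is a minimal Python sketch (ours, not from the text): since every statement here is already grounded on Marcus and Caesar, atoms are represented as plain strings, and the rule encoding and the name `prove` are illustrative.

```python
# Facts from statements 1, 2, 4, and 8; the atom strings are illustrative.
facts = {"man(Marcus)", "Pompeian(Marcus)", "ruler(Caesar)",
         "tryassassinate(Marcus,Caesar)"}

# Rules as (conclusion, [premises]); the first encodes statement 7,
# the second the added fact "all men are people".
rules = [
    ("not_loyalto(Marcus,Caesar)",
     ["person(Marcus)", "ruler(Caesar)", "tryassassinate(Marcus,Caesar)"]),
    ("person(Marcus)", ["man(Marcus)"]),
]

def prove(goal):
    """Backward reasoning: a goal holds if it is a known fact, or if some
    rule concludes it and all of that rule's premises can be proved."""
    if goal in facts:
        return True
    return any(concl == goal and all(prove(p) for p in premises)
               for concl, premises in rules)

print(prove("not_loyalto(Marcus,Caesar)"))  # → True
```

Without the added rule person(Marcus) ← man(Marcus), the first rule's premise person(Marcus) could not be satisfied and the proof would fail, exactly as in the figure.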
Representing Instance and Isa Relationships
● The predicate instance is a binary one, whose first argument is an object and whose
second argument is a class to which the object belongs.
● But these representations do not use an explicit isa predicate.
● Instead, subclass relationships, such as that between Pompeians and Romans, are
described as shown in sentence 3.
● The implication rule states that if an object is an instance of the subclass Pompeian then it
is an instance of the superclass Roman.
● Note that this rule is equivalent to the standard set-theoretic definition of the subclass
superclass relationship.
● The third part contains representations that use both the instance and isa predicates
explicitly.
● The use of the isa predicate simplifies the representation of sentence 3, but it requires that
one additional axiom (shown here as number 6) be provided.
Computable Functions and Predicates
● To express simple facts, such as the following greater-than and less-than relationships:
gt(1, 0)   lt(0, 1)
gt(2, 1)   lt(1, 2)
gt(3, 2)   lt(2, 3)
● It is often also useful to have computable functions as well as computable predicates.
Thus we might want to be able to evaluate the truth of gt(2 + 3,1)
● To do so requires that we first compute the value of the plus function given the arguments
2 and 3, and then send the arguments 5 and 1 to gt.
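As a small illustration (ours, not the book's notation), computable functions and predicates can be modeled directly as executable code:

```python
# plus is a computable function; gt is a computable predicate: their
# values are computed on demand rather than stored as explicit facts.
def plus(a, b):
    return a + b

def gt(a, b):
    return a > b

# Evaluating gt(2 + 3, 1): first compute plus(2, 3) = 5,
# then send the arguments 5 and 1 to gt.
print(gt(plus(2, 3), 1))  # → True
```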
Consider the following set of facts, again involving Marcus:
1) Marcus was a man.
man(Marcus)
2) Marcus was a Pompeian.
Pompeian(Marcus)
3) Marcus was born in 40 A.D.
born(Marcus, 40)
4) All men are mortal.
∀x: man(x) → mortal(x)
5) All Pompeians died when the volcano erupted in 79 A.D.
erupted(volcano, 79) ∧ ∀ x : [Pompeian(x) → died(x, 79)]
6) No mortal lives longer than 150 years.
∀x: ∀t1: ∀t2: mortal(x) ∧ born(x, t1) ∧ gt(t2 − t1, 150) → dead(x, t2)
7) It is now 1991.
now = 1991
The example above shows how these ideas of computable functions and predicates can be useful.
It also makes use of the notion of equality and allows equal objects to be substituted for each
other whenever it appears helpful to do so during a proof.
● Now suppose we want to answer the question “Is Marcus alive?”
● From the statements given here, there may be two ways of deducing an answer.
● Either we can show that Marcus is dead because he was killed by the volcano, or we can
show that he must be dead because he would otherwise be more than 150 years old,
which we know is not possible. As soon as we attempt to follow either of those
paths rigorously, however, we discover, just as we did in the last example, that we need
some additional knowledge. For example, our statements talk about dying, but they say
nothing that relates to being alive, which is what the question is asking.
So we add the following facts:
8) Alive means not dead.
∀x: ∀t: [alive(x, t) → ¬dead(x, t)] ∧ [¬dead(x, t) → alive(x, t)]
9) If someone dies, then he is dead at all later times.
∀x: ∀t1: ∀t2: died(x, t1) ∧ gt(t2, t1) → dead(x, t2)
Now let’s attempt to answer the question “Is Marcus alive?” by proving: ¬alive(Marcus, now)
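A quick arithmetic check of the second line of reasoning (the variable names below are ours):

```python
# Statement 3: Marcus was born in 40 A.D.; statement 7: it is now 1991.
born_marcus = 40
now = 1991

age = now - born_marcus       # 1951 years
mortal = True                 # from man(Marcus) and "all men are mortal"

dead_now = mortal and age > 150   # statement 6, with t2 = now
alive_now = not dead_now          # statement 8: alive means not dead
print(alive_now)  # → False
```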
Resolution
Propositional Resolution
1. Convert all the propositions of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in
Step 1.
3. Repeat until either a contradiction is found or no progress can be made:
1. Select two clauses. Call these the parent clauses.
2. Resolve them together. The resulting clause, called the resolvent, will be the
disjunction of all of the literals of both of the parent clauses with the following
exception: If there are any pairs of literals L and ¬ L such that one of the parent
clauses contains L and the other contains ¬L, then select one such pair and
eliminate both L and ¬ L from the resolvent.
3. If the resolvent is the empty clause, then a contradiction has been found. If it is
not, then add it to the set of clauses available to the procedure.
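The loop above can be sketched in Python; here a clause is a set of literal strings, '~' marks negation, and the helper names (`resolve`, `refutes`) are our own:

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (clauses are frozensets of
    literals; a negated literal is prefixed with '~')."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def refutes(clauses, negated_goal):
    """Resolution by refutation: add the negated goal as a clause and
    search for the empty clause."""
    clauses = {frozenset(c) for c in clauses} | {frozenset(negated_goal)}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new <= clauses:           # no progress can be made
            return False
        clauses |= new

# Prove 'cold' from {winter ∨ summer, ¬winter ∨ cold, ¬summer ∨ cold}
# by refuting its negation ~cold:
kb = [{"winter", "summer"}, {"~winter", "cold"}, {"~summer", "cold"}]
print(refutes(kb, {"~cold"}))  # → True
```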
The Unification Algorithm
● In propositional logic, it is easy to determine that two literals cannot both be true at the
same time.
● Simply look for L and ¬L in predicate logic, this matching process is more complicated
since the arguments of the predicates must be considered.
● For example, man(John) and ¬man(John) is a contradiction, while the man(John) and
¬man(Spot) is not.
● Thus, in order to determine contradictions, we need a matching procedure that compares
two literals and discovers whether there exists a set of substitutions that makes them
identical.
● There is a straightforward recursive procedure, called the unification algorithm, that does
it.
Algorithm: Unify(L1, L2)
1. If L1 or L2 are both variables or constants, then:
1. If L1 and L2 are identical, then return NIL.
2. Else if L1 is a variable, then if L1 occurs in L2 then return {FAIL}, else return
(L2/L1).
3. Else if L2 is a variable, then if L2 occurs in L1 then return {FAIL}, else
return (L1/L2).
4. Else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the
substitutions used to unify L1 and L2.)
5. For I ← 1 to the number of arguments in L1 :
1. Call Unify with the ith argument of L1 and the ith argument of L2, putting the
result in S.
2. If S contains FAIL then return {FAIL}.
3. If S is not equal to NIL then:
1. Apply S to the remainder of both L1 and L2.
2. SUBST := APPEND(S, SUBST).
6. Return SUBST.
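A simplified Python rendering of Unify (ours): literals are tuples (predicate, arg1, ..., argn), lowercase strings stand for variables, capitalized strings for constants; the substitution is returned as a dict, and the occurs check is omitted in this sketch.

```python
def unify(l1, l2, subst=None):
    """Unify two terms; return a substitution dict, or None on failure."""
    if subst is None:
        subst = {}
    if l1 == l2:
        return subst
    if isinstance(l1, str) and l1.islower():        # l1 is a variable
        return unify_var(l1, l2, subst)
    if isinstance(l2, str) and l2.islower():        # l2 is a variable
        return unify_var(l2, l1, subst)
    if isinstance(l1, tuple) and isinstance(l2, tuple):
        if l1[0] != l2[0] or len(l1) != len(l2):    # predicate / arity check
            return None
        for a1, a2 in zip(l1[1:], l2[1:]):          # unify argument by argument
            subst = unify(a1, a2, subst)
            if subst is None:
                return None
        return subst
    return None                                     # two distinct constants

def unify_var(var, term, subst):
    if var in subst:                                # variable already bound:
        return unify(subst[var], term, subst)       # unify its binding instead
    subst[var] = term
    return subst

# hate(x, y) unifies with hate(Marcus, Caesar) under {x: Marcus, y: Caesar}:
print(unify(("hate", "x", "y"), ("hate", "Marcus", "Caesar")))
# → {'x': 'Marcus', 'y': 'Caesar'}
```

As in the text, man(John) and man(Spot) fail to unify because the constants differ.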
Resolution in Predicate Logic
We can now state the resolution algorithm for predicate logic as follows, assuming a set of given
statements F and a statement to be proved P:
Algorithm: Resolution
1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in
1.
3. Repeat until a contradiction is found, no progress can be made, or a predetermined
amount of effort has been expended:
1. Select two clauses. Call these the parent clauses.
2. Resolve them together. The resolvent will be the disjunction of all the literals of
both parent clauses with appropriate substitutions performed and with the
following exception: if there is a pair of literals T1 and ¬T2 such that one of the
parent clauses contains T1 and the other contains ¬T2, and if T1 and T2 are
unifiable, then neither T1 nor ¬T2 should appear in the resolvent. We call T1 and
¬T2 complementary literals. Use the substitution produced by the unification to
create the resolvent. If there is more than one pair of complementary literals, only
one pair should be omitted from the resolvent.
3. If the resolvent is the empty clause, then a contradiction has been found. If
it is not, then add it to the set of clauses available to the procedure.
Resolution Procedure
● Resolution is a procedure, which gains its efficiency from the fact that it operates on
statements that have been converted to a very convenient standard form.
● Resolution produces proof by refutation.
● In other words, to prove a statement (i.e., to show that it is valid), resolution attempts
to show that the negation of the statement produces a contradiction with the known
statements (i.e., that it is unsatisfiable).
● The resolution procedure is a simple iterative process: at each step, two clauses, called
the parent clauses, are compared (resolved), yielding a new clause that has been inferred
from them. The new clause represents ways that the two parent clauses interact with each
other. Suppose that there are two clauses in the system:
winter V summer
¬ winter V cold
● Now we observe that precisely one of winter and ¬ winter will be true at any point.
● If winter is true, then cold must be true to guarantee the truth of the second clause. If ¬
winter is true, then summer must be true to guarantee the truth of the first clause.
● Thus we see that from these two clauses we can deduce summer V cold
● This is the deduction that the resolution procedure will make.
● Resolution operates by taking two clauses that each contain the same literal, in this
example, winter.
● The literal must occur in positive form in one clause and in negative form in the other.
The resolvent is obtained by combining all of the literals of the two parent clauses
except the ones that cancel.
● If the clause that is produced is the empty clause, then a contradiction has been found.
For example, the two clauses
winter
¬ winter
will produce the empty clause.
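In code, a single resolution step looks like this (a sketch; sets of strings stand for clauses, and `resolvent` is our name):

```python
def resolvent(c1, c2, literal):
    """Resolve c1 (which contains literal) with c2 (which contains its
    negation, written with a '~' prefix): cancel the pair, disjoin the rest."""
    return (c1 - {literal}) | (c2 - {"~" + literal})

# winter ∨ summer and ¬winter ∨ cold resolve to summer ∨ cold:
print(sorted(resolvent({"winter", "summer"}, {"~winter", "cold"}, "winter")))
# → ['cold', 'summer']

# winter and ¬winter resolve to the empty clause:
print(resolvent({"winter"}, {"~winter"}, "winter"))  # → set()
```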
Natural Deduction is a formal system for logical reasoning that allows you to derive
conclusions from premises using a set of rules. The system uses basic inference rules that
describe how to manipulate logical formulas.
Structure of a Proof
Conjunction Introduction (∧I)
● If you have two propositions P and Q, you can combine them into the conjunction P ∧ Q.
● Form:
P
Q
∴ P ∧ Q
● Example:
If you know P (it’s raining) and Q (it’s cold), you can conclude P ∧ Q (it’s raining and
it’s cold).
Disjunction Introduction (∨I)
● If you know P, you can conclude P ∨ Q, where Q is any formula.
● Form:
P
∴ P ∨ Q
● Example:
If you know P (I will study), you can conclude P ∨ Q (I will study or I will play video
games).
Implication Introduction (→I)
● If assuming P leads you to conclude Q, you can conclude P → Q (if P then Q).
● Form:
P ⊢ Q
∴ P → Q
● Example:
If assuming "It rains" leads to "I take an umbrella," you can conclude "If it rains, I will
take an umbrella."
Negation Introduction (¬I)
● If assuming P leads to a contradiction, you can conclude ¬P (not P).
● Form:
P ⊢ ⊥ (assuming P leads to a contradiction)
∴ ¬P
● Example:
If assuming "It’s sunny" leads to "It’s both raining and not raining," you can conclude
"It’s not sunny."
Conjunction Elimination (∧E)
● If you have P ∧ Q, you can extract either P or Q.
● Form:
P ∧ Q
∴ P
or
P ∧ Q
∴ Q
● Example:
If you know "It’s raining and it’s cold" (i.e., P ∧ Q), you can conclude "It’s raining" or
"It’s cold."
Disjunction Elimination (∨E)
● If you have P ∨ Q (P or Q) and can derive the same conclusion R from both P and Q,
then you can conclude R.
● Form:
P ∨ Q
P → R
Q → R
∴ R
● Example:
If you know "I will either go to the park or the mall" (i.e., P ∨ Q), and you know:
○ If I go to the park, I will see my friend (i.e., P → R).
○ If I go to the mall, I will see my friend (i.e., Q → R).
● Then you can conclude that I will see my friend (i.e., R).
Implication Elimination (Modus Ponens, →E)
● If you have P → Q and P, you can conclude Q.
● Form:
P → Q
P
∴ Q
● Example:
If you know "If it rains, I’ll carry an umbrella" (i.e., P → Q) and you know "It rains"
(i.e., P), you can conclude "I’ll carry an umbrella" (i.e., Q).
Let’s prove P → (Q → P), which means "If P is true, then Q → P is true." Assume P; under
that assumption, assume Q. P still holds, so implication introduction gives Q → P. Discharging
the first assumption gives P → (Q → P). Thus, we’ve completed the proof of P → (Q → P).
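The same argument can be laid out as a Fitch-style derivation (indentation marks the scope of each assumption):

```latex
\begin{array}{lll}
1. & P                 & \text{assumption} \\
2. & \quad Q           & \text{assumption} \\
3. & \quad P           & \text{reiteration of line 1} \\
4. & Q \to P           & \text{implication introduction, lines 2--3} \\
5. & P \to (Q \to P)   & \text{implication introduction, lines 1--4}
\end{array}
```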
Procedural Vs Declarative Knowledge
● If we were asked "Is Person(y)?", the knowledge base might justify multiple answers:
○ y = Marcus
○ y = Caesar
○ y = Cleopatra
● The answer depends on how the assertions are examined. Procedural knowledge is
crucial here because the sequence of checks can impact which result is returned first.
Definition: Declarative knowledge refers to what is known, typically in the form of facts, rules,
and definitions. It represents knowledge about the world and specifies information that can be
used for reasoning, without prescribing how to use that information.
● Example: Laws of physics, definitions, and people’s names are examples of declarative
knowledge. They provide facts but do not define how to use them.
● Key Features:
○ Knowledge is represented in a fact-based or assertion-based manner.
○ Does not specify how the knowledge should be applied.
○ The control for using the knowledge must be provided by an external system,
program, or reasoning mechanism.
● Declarative knowledge only specifies what is true but does not dictate how to use that
truth. For example, logical assertions in a knowledge base may suggest multiple valid
conclusions, but it is up to the reasoning system to determine how to compute these
conclusions.
● No embedded control: The system needs additional mechanisms to search through facts
and apply rules.
Logic Programming is an example of a paradigm that blends both declarative and procedural
knowledge. It allows you to represent knowledge using logical assertions (declarative) while also
providing a controlled procedure to apply those assertions to derive answers (procedural).
PROLOG Example:
Logical Representation (Declarative Knowledge):
∀x: pet(x) ∧ small(x) → apartmentpet(x)
∀x: cat(x) → pet(x)
∀x: dog(x) → pet(x)
∀x: poodle(x) → dog(x) ∧ small(x)
poodle(fluffy)
This logical representation simply states the relationships between pets, dogs, small
pets, and apartment pets. It doesn’t specify how the computer should search through the
rules to answer queries.
PROLOG Representation (Procedural Knowledge):
apartmentpet(X) :- pet(X), small(X).
pet(X) :- cat(X).
pet(X) :- dog(X).
dog(X) :- poodle(X).
small(X) :- poodle(X).
poodle(fluffy).
(In PROLOG, variables are written with capital letters, so x becomes X.)
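To see the procedural reading, here is a toy Python backward chainer over the same knowledge (the encoding and the `holds` helper are ours; real PROLOG is considerably more general):

```python
# Each predicate maps to a list of rule bodies; a body is the list of
# subgoals that must all hold for the same individual.
rules = {
    "apartmentpet": [["pet", "small"]],
    "pet":          [["cat"], ["dog"]],
    "dog":          [["poodle"]],
    "small":        [["poodle"]],
}
facts = {("poodle", "fluffy")}

def holds(pred, who):
    """Backward chaining: match the goal against the facts, then against
    the head of each rule, recursively proving the rule's body."""
    if (pred, who) in facts:
        return True
    return any(all(holds(g, who) for g in body)
               for body in rules.get(pred, []))

print(holds("apartmentpet", "fluffy"))  # → True
```

The order in which rule bodies are listed fixes the order in which they are tried, which is exactly the procedural control that the purely logical version leaves unspecified.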
Forward Vs Backward Reasoning
Both strategies involve searching through a space of possible states, but they start from different
ends and follow different approaches.
Forward Reasoning
Forward reasoning (also known as forward chaining) starts at the initial state and moves
toward the goal state. It involves generating new states based on the current state using rules and
applying those rules until the goal state is reached.
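A minimal forward-chaining loop in Python (an illustrative encoding, reusing the pet example): new facts are generated from the current ones until nothing changes.

```python
# Each rule pairs a set of premises with a conclusion.
rules = [
    ({"poodle"}, "dog"),
    ({"poodle"}, "small"),
    ({"dog"}, "pet"),
    ({"pet", "small"}, "apartmentpet"),
]
facts = {"poodle"}   # the initial state

changed = True
while changed:       # keep applying rules until no new fact is generated
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # → ['apartmentpet', 'dog', 'pet', 'poodle', 'small']
```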
Backward Reasoning
Backward reasoning (also known as backward chaining) works backwards from the goal state.
Instead of starting at the initial state, you start at the goal state and work your way backward to
see which actions would lead you to the goal.
● Direction of Search:
○ Forward reasoning starts from the initial state and generates new states.
○ Backward reasoning starts from the goal state and looks for the previous states
that could lead to the goal.
● Efficiency:
○ In general, forward reasoning is useful when there are fewer possible start states
than goal states. It can be more efficient if the initial state is well-defined and the
goal state is less constrained.
○ Backward reasoning is often more efficient when there are fewer goal states or
when the goal state is more tightly defined. For example, backward reasoning is
useful when solving problems like proofs or diagnosis.
● Flexibility:
○ Forward reasoning is more intuitive for some types of problems because we
usually know how to begin a task, and then apply rules to work toward a solution.
○ Backward reasoning can sometimes be more targeted and efficient, particularly
when the problem involves searching for specific facts or conditions.
Several factors can influence whether forward or backward reasoning is more appropriate. For example:
○ PROLOG uses backward chaining to answer queries. Given a set of rules and
facts, PROLOG attempts to match the goal (query) with the head of the rules,
working backward from the goal to find which facts satisfy it.
In some cases, you might want to combine forward and backward reasoning. One such
strategy is Bi-directional Search:
● Bi-directional Search:
○ This method starts from both the initial state and the goal state simultaneously
and tries to meet in the middle. This approach can reduce the search space
significantly.
○ However, bi-directional search can be inefficient if the search spaces do not
meet as expected, or if the problem space is poorly structured.
● Challenges:
○ The two searches may pass each other, requiring additional work to connect the
two paths.
○ If the rules can be applied symmetrically in both forward and backward
reasoning, the combined approach is feasible. However, if the rules are
asymmetric, it may be difficult to combine both strategies.