Module-3 Knowledge Representation
Module-III:
Knowledge Representation using
Predicate Logic and Rules
A implies B: (A → B) ≡ (¬A ∨ B)
A if and only if B: (A ↔ B) ≡ (¬A ∨ B) ∧ (¬B ∨ A)
• Natural-language connectives may carry slightly different
meanings than their logical counterparts.
Example:
• A ∧ B and B ∧ A should always have the same meaning.
But the sentences below do not:
• She became sick and she went to the doctor.
• She went to the doctor and she became sick.
1. ∀x: bird(x) → fly(x).
2. ∀x: man(x) → respects(x, parent).
3. ∃x: boys(x) → play(x, cricket).
4. ¬∀x: [student(x) → like(x, Mathematics) ∧ like(x, Science)].
5. ∃x: [student(x) → failed(x, Mathematics) ∧ ∀y: [¬(x = y) ∧
student(y) → ¬failed(y, Mathematics)]].
Consider the following example that shows the use of predicate
logic as a way of representing knowledge.
1. Marcus was a man.
2. Marcus was a Pompeian.
3. All Pompeians were Romans.
4. Caesar was a ruler.
5. All Romans were either loyal to Caesar or hated him.
6. Everyone is loyal to someone.
7. People only try to assassinate rulers they are not loyal to.
8. Marcus tried to assassinate Caesar.
The facts described by these sentences can be represented as
a set of well-formed formulas (wffs) as follows:
• Marcus was a man.
– man(Marcus)
• Marcus was a Pompeian.
– Pompeian(Marcus)
• All Pompeians were Romans.
– ∀x: Pompeian(x) → Roman(x)
• Caesar was a ruler.
– ruler(Caesar)
• All Romans were either loyal to Caesar or hated him.
As inclusive-or:
– ∀x: Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
As exclusive-or:
– ∀x: Roman(x) → (loyalto(x, Caesar) ∧ ¬hate(x, Caesar)) ∨
(¬loyalto(x, Caesar) ∧ hate(x, Caesar))
• Everyone is loyal to someone.
– ∀x: ∃y: loyalto(x, y)
• People only try to assassinate rulers they are not loyal to.
– ∀x: ∀y: person(x) ∧ ruler(y) ∧ tryassassinate(x, y) →¬loyalto(x, y)
• Marcus tried to assassinate Caesar.
– tryassassinate(Marcus, Caesar)
• Now suppose we want to use these statements to answer
the question: Was Marcus loyal to Caesar?
An attempt to prove ¬loyalto(Marcus, Caesar) by backward chaining:
¬loyalto(Marcus, Caesar)
← person(Marcus) ∧ ruler(Caesar) ∧ tryassassinate(Marcus, Caesar) (by 7)
ruler(Caesar) and tryassassinate(Marcus, Caesar) are given facts (4 and 8),
but the attempt to prove person(Marcus) ends in NIL: we only know man(Marcus).
• The problem is that, although we know that Marcus was a
man, we do not have any way to conclude from that that
Marcus was a person.
• So we need to add the representation of another fact to our
system, namely: all men are people.
– ∀x: man(x) → person(x)
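The effect of the added axiom can be seen in a small sketch. The following Python fragment is mine, not from the text: predicate names follow the slides, but the string encoding of atoms and the prove helper are illustrative assumptions. A naive backward chainer can close the proof of ¬loyalto(Marcus, Caesar) only once the man-to-person rule is present.

```python
# Ground facts from the Marcus example (encoded as plain strings for brevity).
facts = {"man(Marcus)", "ruler(Caesar)", "tryassassinate(Marcus, Caesar)"}

# Each rule: (set of premises, conclusion). All premises must hold.
rules = [
    # The newly added axiom, instantiated for Marcus: all men are people.
    ({"man(Marcus)"}, "person(Marcus)"),
    # Sentence 7, instantiated: people only try to assassinate rulers
    # they are not loyal to.
    ({"person(Marcus)", "ruler(Caesar)",
      "tryassassinate(Marcus, Caesar)"}, "not loyalto(Marcus, Caesar)"),
]

def prove(goal):
    """True if goal is a fact, or the conclusion of a rule whose
    premises are all provable (simple backward chaining)."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(prove(p) for p in premises)
               for premises, conclusion in rules)

print(prove("not loyalto(Marcus, Caesar)"))  # True
```

Without the first rule, prove("person(Marcus)") fails and the proof is blocked exactly as described above.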
Representing Instance and ISA Relationships
• Specific attributes instance and isa play an important role
particularly in a useful form of reasoning called property
inheritance.
• The predicates instance and isa explicitly capture the
relationships they are used to express, namely class membership and
class inclusion.
• The first part of the figure contains the representations we have
already discussed. In these representations, class membership is
represented with unary predicates (such as Roman), each of which
corresponds to a class.
• Asserting that P(x) is true is equivalent to asserting that x is an
instance (or element) of P.
• The second part of the figure contains representations that use the
instance predicate explicitly.
Three ways of representing class membership: ISA Relationships
• The predicate instance is a binary one, whose first argument is an object
and whose second argument is a class to which the object belongs.
• But these representations do not use an explicit isa predicate.
• Instead, subclass relationships, such as that between Pompeians and
Romans, are described as in sentence 3.
• The implication rule states that if an object is an instance of the subclass
Pompeian then it is an instance of the superclass Roman.
• Note that this rule is equivalent to the standard set-theoretic definition of
the subclass/superclass relationship.
• The third part contains representations that use both the instance and isa
predicates explicitly.
• The use of the isa predicate simplifies the representation of sentence 3,
but it requires that one additional axiom (shown here as number 6) be
provided.
Computable Functions and Predicates
• To express simple facts, such as the following greater-than
and less-than relationships:
– gt(1, 0), lt(0, 1), gt(2, 1), lt(1, 2), gt(3, 2), lt(2, 3)
• It is often also useful to have computable functions as well as
computable predicates.
– Thus we might want to be able to evaluate the truth of
gt(2 + 3,1)
– To do so requires that we first compute the value of the
plus function given the arguments 2 and 3, and then send
the arguments 5 and 1 to gt.
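This evaluate-then-test behaviour is exactly what ordinary functions give us. A minimal Python sketch (illustrative; gt and lt are the slides' predicates rendered as functions):

```python
# Computable predicates: instead of storing every gt/lt fact explicitly,
# evaluate them with ordinary functions.
def gt(a, b):
    return a > b

def lt(a, b):
    return a < b

# gt(2 + 3, 1): the plus function is evaluated first, giving 5,
# and then the arguments 5 and 1 are sent to gt.
print(gt(2 + 3, 1))  # True
print(lt(0, 1))      # True
```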
Consider the following set of facts, again involving Marcus:
1. Marcus was a man.
man(Marcus)
2. Marcus was a Pompeian.
Pompeian(Marcus)
3. Marcus was born in 1740
born(Marcus, 1740)
4. All men are mortal.
∀x: man(x) → mortal(x)
5. All Pompeians died when the volcano erupted in 1779.
erupted(volcano, 1779) ∧ ∀ x : [Pompeian(x) → died(x, 1779)]
6. No mortal lives longer than 150 years.
∀x: ∀t1: ∀t2: mortal(x) ∧ born(x, t1) ∧ gt(t2 – t1,150) → died(x, t2)
7. It is now 2024.
now = 2024
• Now suppose we want to answer the question “Is Marcus alive
now?”
• From the statements given here, there are two ways of
deducing an answer.
• Either we can show that Marcus is dead because he was killed by
the volcano or we can show that he must be dead because he
would otherwise be more than 150 years old, which we know is not
possible.
• As soon as we attempt to follow either of those paths
rigorously, however, we discover, just as we did in the last
example, that we need some additional knowledge. For example,
our statements talk about dying, but they say nothing that relates to
being alive, which is what the question is asking.
• Modus ponens: If there is an axiom E → F and an axiom E, then
F logically follows.
• Modus tollens: If there is an axiom E → F and an axiom ¬F, then
¬E logically follows.
So we add the following facts:
• Alive means not dead.
– ∀x:∀t: [alive(x, t) → ¬ died(x, t)] ∧ [¬ died(x, t) → alive(x, t)]
– ∀x:∀t: [alive(x, t) ↔ ¬ died(x, t)]
• If someone dies, then he is dead at all later times.
– ∀x: ∀t1: ∀t2: died(x, t1) ∧ gt(t2, t1) → died(x, t2)
Now let's attempt to answer the question "Is Marcus alive now?"
by proving:
¬alive(Marcus, now)
Prove: Is Marcus alive now? i.e. ¬alive(Marcus, now)
1. Pompeian(Marcus) (fact 2)
2. gt(2024, 1779) (computed)
3. gt(now, 1779) (substituting now = 2024, fact 7)
4. Pompeian(Marcus) ∧ gt(now, 1779) (combining steps 1 and 3)
5. died(Marcus, 1779) ∧ gt(now, 1779) (by rule 5, with Marcus/x)
6. died(Marcus, now) (by rule 9, with Marcus/x, 1779/t1, now/t2)
7. ¬alive(Marcus, now) (by rule 8)
1. man(Marcus)
2. Pompeian(Marcus)
3. born(Marcus, 1740)
4. ∀x: man(x) → mortal(x)
5. erupted(volcano, 1779) ∧ ∀ x : [Pompeian(x) → died(x, 1779)]
6. ∀x: ∀t1: ∀t2: mortal(x) ∧ born(x, t1) ∧ gt(t2 – t1,150) → died(x, t2)
7. now = 2024
8. ∀x:∀t: [alive(x, t) → ¬ died(x, t)] ∧ [¬ died(x, t) → alive(x, t)]
9. ∀x: ∀t1: ∀t2: died(x, t1) ∧ gt(t2, t1) → died(x, t2)
Prove: Did Marcus die when the volcano erupted in 1779? i.e. died(Marcus, 1779)
1. Pompeian(Marcus) (fact 2)
2. erupted(volcano, 1779) (fact 5)
3. erupted(volcano, 1779) ∧ Pompeian(Marcus) (combining 1 and 2)
4. died(Marcus, 1779) (by rule 5, with Marcus/x)
Resolution in propositional logic
1. Convert all the propositions of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of
clauses obtained in step 1.
3. Repeat until either a contradiction is found or no progress can be
made:
1. Select two clauses. Call these the parent clauses.
2. Resolve them together. The resulting clause, called the
resolvent, will be the disjunction of all of the literals of both of
the parent clauses with the following exception: If there are any
pairs of literals L and ¬ L such that one of the parent clauses
contains L and the other contains ¬L, then select one such pair
and eliminate both L and ¬ L from the resolvent.
3. If the resolvent is the empty clause, then a contradiction has
been found. If it is not, then add it to the set of clauses
available to the procedure.
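The loop above can be sketched directly in code. The following Python fragment is an illustrative sketch, not the text's procedure verbatim: clauses are sets of string literals with `~` marking negation, and resolution repeats until the empty clause appears or no new clauses can be added.

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses. A resolvent is the
    disjunction of the literals of both parents, minus one
    complementary pair L / ~L."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def resolution_refutation(clauses):
    """True if the clause set is unsatisfiable, i.e. the empty
    clause is derivable (a contradiction was found)."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # empty clause: contradiction
                    return True
                new.add(frozenset(r))
        if new <= clauses:           # no progress can be made
            return False
        clauses |= new

# Prove R from: P v Q, P -> R (~P v R), Q -> R (~Q v R).
# Step 2 of the procedure: add the negated goal ~R and refute.
kb = [{"P", "Q"}, {"~P", "R"}, {"~Q", "R"}, {"~R"}]
print(resolution_refutation(kb))   # True: R follows from the axioms
```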
The Unification Algorithm
• In propositional logic, it is easy to determine that two literals
cannot both be true at the same time: simply look for L and ¬L.
• In predicate logic, this matching process is more complicated,
since the arguments of the predicates must be considered.
• For example, man(John) and ¬man(John) form a contradiction,
while man(John) and ¬man(Spot) do not.
• Thus, in order to determine contradictions, we need a
matching procedure that compares two literals and discovers
whether there exists a set of substitutions that makes them
identical.
• There is a straightforward recursive procedure, called the
unification algorithm, that does it.
• Propositional Resolution
α ∨ β
¬β ∨ γ
---------------
α ∨ γ
• Resolution refutation:
– Convert all sentences to Conjunctive Normal Form (CNF)
– Negate the desired conclusion (also converted to CNF)
– Apply the resolution rule until either:
» we derive false (a contradiction), or
» no more resolutions can be applied
• Resolution refutation is sound and complete:
• If we derive a contradiction, then the conclusion follows from the
axioms.
• If we can't apply any more resolutions, then the conclusion cannot
be proved from the axioms.
The Resolution proof procedure is as follows:
• Negate the theorem to be proved
• Turn the theorem and the axioms into clause form.
• Until the empty clause is produced or there are no
resolvable clauses, find pairs of resolvable clauses, resolve
them (including Unification), and add them to the list of
clauses.
• If the empty clause was produced, the negated theorem
contradicted the axioms, and the (unnegated) theorem is
TRUE, w.r.t. the axioms. If there were no more resolvable
clauses, the theorem is FALSE w.r.t. the axioms.
Propositional Resolution Example
Prove R from: 1. P ∨ Q, 2. P → R, 3. Q → R.
Sl. No. | Formula | Derivation
1 | P ∨ Q | Given
2 | ¬P ∨ R | Given
3 | ¬Q ∨ R | Given
4 | ¬R | Negated conclusion
5 | ¬P | Resolve 2, 4
6 | ¬Q | Resolve 3, 4
7 | Q | Resolve 1, 5
8 | R | Resolve 3, 7
And finally, resolving away R in lines 4 and 8, we get the empty clause,
which is false. We'll often draw a little black box to indicate that we've
reached the desired contradiction.
What is Unification?
• Unification is the process of making two different logical atomic
expressions identical by finding a substitution. Unification depends on
the substitution process.
• It takes two literals as input and makes them identical using substitution.
• Let L1 and L2 be two atomic sentences and θ be a unifier such
that L1θ = L2θ; then θ is written UNIFY(L1, L2).
• Example: Find the Most General Unifier (MGU) for Unify{King(x),
King(John)}
• Let L1 = King(x), L2 = King(John)
• The substitution θ = {John/x} is a unifier for these atoms: applying it
makes both expressions identical.
• Example: Let's say there are two different expressions,
P(x, y) and P(a, f(z)).
• Substitute x with a and y with f(z) in the first expression, written
a/x and f(z)/y.
• With both substitutions, the first expression becomes identical to the
second, and the substitution set is {a/x, f(z)/y}.
Conditions for Unification:
Following are some basic conditions for unification:
• The predicate symbols must be the same; atoms with different
predicate symbols can never be unified.
• The number of arguments in both expressions must be identical.
• Unification fails if a variable would have to be bound to a term
containing that same variable (the occurs check).
• Find the MGU of {p(f(a), g(Y)) and p(X, X)}
Sol:
S0 => Here, L1 = p(f(a), g(Y)) and L2 = p(X, X)
SUBST θ = {f(a)/X}
S1 => L1 = p(f(a), g(Y)) and L2 = p(f(a), f(a))
Now g(Y) must unify with f(a), but the function symbols g and f
differ, so unification fails.
Find the MGU of {p(b, X, f(g(Z))) and p(Z, f(Y), f(Y))}
• Here, L1 = p(b, X, f(g(Z))) and L2 = p(Z, f(Y), f(Y))
S0 => {p(b, X, f(g(Z))); p(Z, f(Y), f(Y))}
SUBST θ = {b/Z}
• S1 => {p(b, X, f(g(b))); p(b, f(Y), f(Y))}
SUBST θ = {f(Y)/X}
• S2 => {p(b, f(Y), f(g(b))); p(b, f(Y), f(Y))}
SUBST θ = {g(b)/Y}
• S3 => {p(b, f(g(b)), f(g(b))); p(b, f(g(b)), f(g(b)))}
Unified successfully, and the unifier = {b/Z, f(Y)/X, g(b)/Y}.
Question: Find the MGU of {p (X, X), and p (Z, f(Z))}
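A recursive unifier along these lines can be sketched in Python. This is an illustrative implementation, not the slides' algorithm verbatim: compound terms are encoded as tuples whose first element is the functor, and (by my own convention) variables are strings beginning with `?`.

```python
def is_var(t):
    # Convention (assumption): variables are strings starting with '?'.
    return isinstance(t, str) and t.startswith("?")

def walk(t, theta):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def occurs(v, t, theta):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, theta)
    if v == t:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, ti, theta) for ti in t)
    return False

def unify(x, y, theta=None):
    """Return the most general unifier of x and y, or None on failure."""
    if theta is None:
        theta = {}
    x, y = walk(x, theta), walk(y, theta)
    if x == y:
        return theta
    if is_var(x):
        return None if occurs(x, y, theta) else {**theta, x: y}
    if is_var(y):
        return None if occurs(y, x, theta) else {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):        # functor and arity must match
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None                         # different predicate/function symbols

# p(f(a), g(Y)) vs p(X, X): fails, X cannot be both f(a) and g(Y).
print(unify(("p", ("f", "a"), ("g", "?y")), ("p", "?x", "?x")))   # None
# King(x) vs King(John): succeeds with {John/x}.
print(unify(("King", "?x"), ("King", "John")))                    # {'?x': 'John'}
```

For the question above, `unify(("p", "?x", "?x"), ("p", "?z", ("f", "?z")))` returns None: after binding ?x to ?z, the occurs check rejects binding ?z to f(?z).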
[Figure: a resolution proof graph in which the clause
¬person(x) ∨ ¬ruler(y) ∨ ¬tryassassinate(x, y) ∨ ¬loyalto(x, y)
is resolved against facts such as Pompeian(Marcus) under substitutions
like Marcus/x.]
Procedural Vs Declarative Knowledge
We have discussed various search techniques in previous units.
Now we consider a set of rules that represent:
• knowledge about relationships in the world, and
• knowledge about how to solve the problem using the content
of the rules.
Procedural Knowledge
• A representation in which the control information necessary to
use the knowledge is embedded in the knowledge itself, e.g.
computer programs, directions, and recipes; these indicate a
specific use or implementation.
• The real difference between the declarative and procedural views
of knowledge lies in where the control information resides.
For example, consider the following
• Man (Marcus)
• Man (Caesar)
• Person (Cleopatra)
• ∀x: Man(x) → Person(x)
Now, try to answer the question. ∃y: Person(y)
• The knowledge base justifies any of the following answers.
– y=Marcus
– y=Caesar
– y=Cleopatra
• We get more than one value that satisfies the predicate.
• If only one value is needed, then the answer to the question will
depend on the order in which the assertions are examined during
the search for a response.
• If the assertions are declarative, then they do not themselves say
anything about how they will be examined. In a procedural
representation, they do.
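The order dependence can be made concrete with a small sketch. In this Python fragment (mine, not from the text; the string encoding of facts and the helper names are illustrative assumptions), the same knowledge base yields different witnesses for ∃y: Person(y) depending on the order in which the assertions are scanned:

```python
# The knowledge base from the example, as an ordered list of assertions.
facts = ["Man(Marcus)", "Man(Caesar)", "Person(Cleopatra)"]

def is_person(fact):
    # Man(x) -> Person(x), plus explicit Person(...) facts.
    return fact.startswith("Man(") or fact.startswith("Person(")

def first_person(assertions):
    """Return the first y satisfying Person(y), scanning in assertion order."""
    for fact in assertions:
        if is_person(fact):
            return fact[fact.index("(") + 1:-1]   # extract the argument
    return None

print(first_person(facts))                  # Marcus
print(first_person(list(reversed(facts))))  # Cleopatra
```

Scanned forward, the first witness is Marcus; scanned in reverse, it is Cleopatra, even though the declarative content is identical.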
Declarative Knowledge
• A statement in which knowledge is specified, but the use to which
that knowledge is to be put is not given.
• For example, laws and people's names; these are facts which can
stand alone, not dependent on other knowledge.
• So to use a declarative representation, we must have a program that
explains what to do with the knowledge and how.
• For example, a set of logical assertions can be combined with a
resolution theorem prover to give a complete program for solving
problems; but in some cases, the logical assertions can be viewed as
a program rather than as data to a program.
• Here the implication statements define the legitimate reasoning
paths and the atomic assertions provide the starting points of those
paths.
• These paths define execution paths, similar to "if-then-else" in
traditional programming.
• So logical assertions can be viewed as a procedural representation of
knowledge.
For example, consider the following
• Man (Marcus)
• Man (Caesar)
• Person (Cleopatra)
• ∀x: Man(x) → Person(x)
– With y = Cleopatra:
• Person(Cleopatra)
Logic Programming with PROLOG
• Logic programming is a programming paradigm in which
logical assertions are viewed as programs.
• There are several logic programming systems; PROLOG is
one of them.
• A PROLOG program consists of several logical
assertions, where each is a Horn clause, i.e. a clause with
at most one positive literal.
• Examples: P, ¬P ∨ Q (equivalently P → Q)
• Facts are represented as Horn clauses for two reasons:
– Because of the uniform representation, a simple and efficient interpreter
can be written.
– The logic of Horn clauses is decidable.
• PROLOG programs are actually sets of Horn clauses that
have been transformed as follows:
– If the Horn clause contains no negative literals, then leave it
as it is.
– Otherwise, rewrite the Horn clause as an
implication, combining all of the negative literals into the
antecedent of the implication and the single positive
literal into the consequent.
For example, the PROLOG clause P(x) :- Q(x, y) is equivalent to the
logical expression ∀x: ∃y: Q(x, y) → P(x).
∀x: ∀t1: ∀t2: died(x, t1) ∧ gt(t2, t1) → died(x, t2)
PROLOG clause: died(X, T2) :- died(X, T1), gt(T2, T1).
• Given database in PROLOG
grandson(X,Y) :- son(X,Z), parent(Y,Z)
son(charles,elizabeth)
parent(george,elizabeth)
• In query mode we might ask
?- grandson(elizabeth,charles).
• Ans: no
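Why the query fails can be seen in a toy version of the chaining. The following Python sketch (not real PROLOG; the tuple encoding of facts is an illustrative assumption) encodes the database and the grandson rule directly:

```python
# son(charles, elizabeth): Charles is the son of Elizabeth.
# parent(george, elizabeth): George is a parent of Elizabeth.
facts = {("son", "charles", "elizabeth"), ("parent", "george", "elizabeth")}

def grandson(x, y):
    """grandson(X, Y) :- son(X, Z), parent(Y, Z)."""
    return any(("son", x, z) in facts and ("parent", y, z) in facts
               for (_, _, z) in facts)   # try each candidate Z in the database

print(grandson("elizabeth", "charles"))  # False: matches the 'no' answer
print(grandson("charles", "george"))     # True
```

The query grandson(elizabeth, charles) fails because there is no son(elizabeth, Z) fact, which is exactly why PROLOG answers no; swapping the arguments succeeds.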
Question
We'll seek to answer the question:
• Is John the tallest boy in class?
Result
Now, to apply backward chaining, we start from the goal and assume that
John is the tallest boy in class. From there, we go backward through the
knowledge base comparing that assumption to each known fact to determine
whether it is true that John is the tallest boy in class or not.
Our goal:
• John is the tallest boy in the class
Which means:
Height(John) > Height(anyone else in the class)
From Rule-4: John and Kim are both in the same class.
From Rule-5: everyone else in the class other than John is shorter than
Kim, i.e. Height(Kim) > Height(anyone in the class except John).
So it is enough to show Height(John) > Height(Kim).
[Figure: a matching network illustrating shared condition tests (α-β
pruning). α nodes test individual conditions: age>60, age<5,
income<36000. β nodes combine them: "age>60 or age<5" and "age>60 or
age<5 or income<36000". The terminal node carries the action "price will
be charged 50%".]
Conflict resolutions
• The result of the matching process is a list of rules whose
left sides have matched the current state description, along
with whatever variable bindings were generated by the
matching process.
• It is the job of the search method to decide on the order in which
rules will be applied.
• Sometimes the matching process is incorporated along with the
decision making to decide on a rule; that phase is called
conflict resolution.
• There are three approaches to the problem of conflict resolution
in a production system:
– Preference based on the rule that matched
– Preference based on the objects that matched
– Preference based on the action that matched
Preference based on the rule that matched
• There are two common ways of assigning a preference based on the rules.
• The first and simplest form:
– Consider the rules in the particular order in which they are presented.
– Priority is given to the rules in the order in which they appear.
– This scheme is used in PROLOG.
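This first-match scheme can be sketched in a few lines of Python. The rules and conditions below are invented for illustration (they echo the predicates from the matching-network figure); the point is only that the earliest listed rule wins when several match.

```python
# Rules in written order: (condition, action). Earlier = higher priority.
rules = [
    ("age>60", "senior discount"),
    ("age<5", "free ticket"),
    ("income<36000", "50% price"),
]

def fire_first(state):
    """Conflict resolution by rule order: among all rules whose
    condition matches the state, fire the first one listed."""
    for condition, action in rules:
        if condition in state:
            return action
    return None

# Both rule 1 and rule 3 match, but rule 1 wins because it appears first,
# as in PROLOG's clause-order strategy.
print(fire_first({"age>60", "income<36000"}))  # senior discount
```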