Unit 3 (Part 2)
The general framework of concepts is called an upper ontology because of the convention of
drawing graphs with the general concepts at the top and the more specific concepts below
them, as in Figure
For example, a shopper would normally have the goal of buying a basketball, rather
than a particular basketball such as BB9. There are two choices for representing categories in
first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or
we can reify the category as an object, Basketballs.
We could then say Member(b, Basketballs), which we will abbreviate as b ∈ Basketballs, and
we say that Basketballs is a subcategory of Balls. Categories serve to organize and simplify the knowledge base through
inheritance. If we say that all instances of the category Food are edible, and if we assert that
Fruit is a subclass of Food and Apples is a subclass of Fruit, then we can infer that every
apple is edible. We say that the individual apples inherit the property of edibility, in this case
from their membership in the Food category. First-order logic makes it easy to state facts
about categories, either by relating objects to categories or by quantifying over their
members. Here are some types of facts, with examples of each:
• An object is a member of a category: BB9 ∈ Basketballs
Notice that because Dogs is a category and is a member of Domesticated Species, the
latter must be a category of categories. Categories can also be defined by providing necessary
and sufficient conditions for membership. For example, a bachelor is an unmarried adult
male:
x ∈ Bachelors ⇔ Unmarried(x) ∧ x ∈ Adults ∧ x ∈ Males
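Returning to the inheritance example above, the edibility chain can be written as first-order sentences in the same style (these are standard illustrative axioms for that example):
∀x  x ∈ Food ⇒ Edible(x)
Fruit ⊂ Food
Apples ⊂ Fruit
Because every member of a subcategory is also a member of the category, it follows that ∀x x ∈ Apples ⇒ Edible(x); that is, every apple is edible.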
Physical Composition
We use the general PartOf relation to say that one thing is part of another. Objects can
be grouped into PartOf hierarchies, reminiscent of the Subset hierarchy. A composite object
built from the members of a set s can be written BunchOf(s), defined as the smallest object
satisfying this condition. In other words, BunchOf(s) must be part of any object that has all
the elements of s as parts:
∀x [∀y  y ∈ s ⇒ PartOf(y, x)] ⇒ PartOf(BunchOf(s), x)
In both scientific and commonsense theories of the world, objects have height, mass,
cost, and so on. The values that we assign for these properties are called measures.
Length(L1)=Inches(1.5)=Centimeters(3.81)
Similar axioms can be written for pounds and kilograms, seconds and days, and
dollars and cents. Measures can be used to describe objects as follows:
Diameter(Basketball12) = Inches(9.5)
d∈ Days ⇒ Duration(d)=Hours(24)
ListPrice(Basketball12)=$(19)
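As a rough illustration (not part of the text), unit functions such as Inches and Centimeters can be modeled by converting every measure to a single canonical unit, so that equal lengths compare equal; the conversion constants are standard, but the function names below are only illustrative:

# Illustrative sketch: measures as values in a canonical unit (metres).
# Inches(1.5) and Centimeters(3.81) then denote the same length.

def Inches(x):
    return x * 0.0254          # 1 inch = 0.0254 m

def Centimeters(x):
    return x * 0.01            # 1 cm = 0.01 m

# Length(L1) = Inches(1.5) = Centimeters(3.81)
assert abs(Inches(1.5) - Centimeters(3.81)) < 1e-9

# Diameter(Basketball12) = Inches(9.5)
diameter_basketball12 = Inches(9.5)
print(diameter_basketball12)   # about 0.2413 metres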
Time Intervals
Event calculus opens up the possibility of talking about time and time intervals.
We will consider two kinds of time intervals: moments and extended intervals. The
distinction is that only moments have zero duration:
Partition({Moments, ExtendedIntervals}, Intervals)
i ∈ Moments ⇔ Duration(i) = Seconds(0)
The functions Begin and End pick out the earliest and latest moments in an interval,
and the function Time delivers the point on the time scale for a moment.
The function Duration gives the difference between the end time and the start time.
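Written as an axiom, one standard way to state this is:
Interval(i) ⇒ Duration(i) = Time(End(i)) − Time(Begin(i))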
EVENTS
Event calculus reifies fluents and events. The fluent At(Shankar, Berkeley) is an object that
refers to the fact of Shankar being in Berkeley, but does not by itself say anything about
whether it is true. To assert that a fluent is actually true at some point in time we use the
predicate T, as in T(At(Shankar, Berkeley), t). Events are described as instances of event
categories. For example, for the event E1 of Shankar flying from San Francisco to
Washington, DC, we can define a three-argument version of the category of flying events
and say E1 ∈ Flyings(Shankar, SF, DC). We then use Happens(E1, i) to say that the event E1
took place over the time interval i, and we say the same thing in functional form with
Extent(E1) = i. We represent time intervals by a (start, end) pair of times; that is, i = (t1, t2) is
the time interval that starts at t1 and ends at t2. The complete set of predicates for one
version of the event calculus is:
T(f, t): Fluent f is true at time t
Happens(e, i): Event e happens over the time interval i
Initiates(e, f, t): Event e causes fluent f to start to hold at time t
Terminates(e, f, t): Event e causes fluent f to cease to hold at time t
Clipped(f, i): Fluent f ceases to be true at some point during time interval i
Restored(f, i): Fluent f becomes true sometime during time interval i
We assume a distinguished event, Start, that describes the initial state by saying which
fluents are initiated or terminated at the start time. We define T
by saying that a fluent holds at a point in time if the fluent was initiated by an event at some
time in the past and was not made false (clipped) by an intervening event. A fluent does not
hold if it was terminated by an event and not made true (restored) by another event. Formally,
the axioms are:
Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored(f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)
where Clipped and Restored are defined by
Clipped(f, (t1, t2)) ⇔ ∃ e, t, t3  Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃ e, t, t3  Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)
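Read operationally, these axioms suggest a simple query procedure. The following minimal Python sketch (illustrative only; the event names and data structures are assumptions, not from the text) checks T(f, t) for ground facts by looking for an initiating event that is not clipped before t:

# Minimal event-calculus sketch. Events are given as ground facts:
#   happens:    list of (event, (t1, t2)) occurrences
#   initiates:  set of (event, fluent) pairs (event starts the fluent)
#   terminates: set of (event, fluent) pairs (event ends the fluent)

happens = [("Start", (0, 0)), ("E1", (2, 5))]
initiates = {("Start", "At(Shankar,Berkeley)")}
terminates = {("E1", "At(Shankar,Berkeley)")}

def clipped(fluent, t1, t2):
    """Clipped(f, (t1, t2)): some event terminating f starts in [t1, t2)."""
    return any((e, fluent) in terminates and t1 <= s < t2
               for e, (s, _) in happens)

def T(fluent, t):
    """T(f, t): f was initiated at some s < t and not clipped in (s, t)."""
    return any((e, fluent) in initiates and s < t and not clipped(fluent, s, t)
               for e, (s, _) in happens)

print(T("At(Shankar,Berkeley)", 1))   # True: initiated by Start, not yet clipped
print(T("At(Shankar,Berkeley)", 4))   # False: E1 terminates it starting at t = 2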
What we need is a model of the mental objects that are in someone’s head (or something’s
knowledge base) and of the mental processes that manipulate those mental objects. The
model does not have to be detailed. We do not have to be able to predict how many
milliseconds it will take for a particular agent to make a deduction. We will be happy just to
be able to conclude that mother knows whether or not she is sitting.
We begin with the propositional attitudes that an agent can have toward mental objects:
attitudes such as Believes, Knows, Wants, Intends, and Informs. The difficulty is that these
attitudes do not behave like “normal” predicates.
For example, suppose we try to assert that Lois knows that Superman can fly: Knows (Lois,
CanFly(Superman)) One minor issue with this is that we normally think of
CanFly(Superman) as a sentence, but here it appears as a term. That issue can be patched up
just by reifying CanFly(Superman), making it a fluent. A more serious problem is that, if it
is true that Superman is Clark, then we must conclude that Lois knows that Clark can fly,
Knows(Lois, CanFly(Clark)), even though she does not. Modal logic is designed to address
this problem. Regular logic is concerned with a
single modality, the modality of truth, allowing us to express “P is true.” Modal logic
includes special modal operators that take sentences (rather than terms) as arguments.
For example, “A knows P” is represented with the notation K_A P, where K is the modal
operator for knowledge. It takes two arguments, an agent (written as the subscript) and a
sentence. The syntax of modal logic is the same as first-order logic, except that sentences
can also be formed with modal operators.
In first-order logic, a model contains a set of objects and an interpretation that maps each
name to the appropriate object, relation, or function. In modal logic we want to be able to
consider both the possibility that Superman’s secret identity is Clark and that it isn’t.
Therefore, we will need a more complicated model, one that consists of a collection of
possible worlds rather than just one true world. The worlds are connected in a graph by
accessibility relations, one relation for each modal operator. We say that world w1 is
accessible from world w0 with respect to the modal operator K_A if everything in w1 is
consistent with what A knows in w0, and we write this as Acc(K_A, w0, w1). In general, a
knowledge atom K_A P is true in world w if and only if P is true in every world accessible
from w. The truth of more complex sentences is derived by recursive application of this rule
and the normal rules of first-order logic. That means that modal logic can be used to
reason about nested knowledge sentences: what one agent knows about another agent’s
knowledge. For example, we can say that, even though Lois doesn’t know whether
Superman’s secret identity is Clark Kent, she does know that Clark knows:
K_Lois [K_Clark Identity(Superman, Clark) ∨ K_Clark ¬Identity(Superman, Clark)]
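As a rough sketch (purely illustrative, not from the text), this possible-worlds semantics can be simulated by listing which sentences hold in each world and which worlds are accessible to an agent; K_A P then holds exactly when P is true in every accessible world. The world and sentence names below are assumptions chosen to match the Lois/Superman example:

# Possible-worlds sketch for the Lois/Superman example (illustrative data).
# Two worlds: w1, where Superman's identity is Clark, and w2, where it is not.
facts = {
    "w1": {"CanFly(Superman)", "Identity(Superman,Clark)", "CanFly(Clark)"},
    "w2": {"CanFly(Superman)"},
}
# Both worlds are consistent with what Lois knows in the real world w1.
accessible = {("Lois", "w1"): ["w1", "w2"]}

def knows(agent, sentence, world):
    """K_agent sentence holds in `world` iff sentence is true in every accessible world."""
    return all(sentence in facts[w] for w in accessible[(agent, world)])

print(knows("Lois", "CanFly(Superman)", "w1"))   # True: holds in w1 and w2
print(knows("Lois", "CanFly(Clark)", "w1"))      # False: fails in w2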
REASONING SYSTEMS FOR CATEGORIES
This section describes systems specially designed for organizing and reasoning with
categories. There are two closely related families of systems: semantic networks provide
graphical aids for visualizing a knowledge base and efficient algorithms for inferring
properties of an object on the basis of its category membership; and description logics
provide a formal language for constructing and combining category definitions and efficient
algorithms for deciding subset and superset relationships between categories.
SEMANTIC NETWORKS
Semantic networks are an alternative to predicate logic for knowledge representation. In semantic
networks, knowledge is represented in the form of graphical networks. A network consists of nodes
representing objects and arcs describing the relationships between those objects. Two types of
relation are commonly used: the IS-A relationship (instance/inheritance) and the Kind-Of relationship
(subclass).
Advantages
Semantic networks are a natural representation of knowledge.
They convey meaning transparently.
These networks are simple and easy to understand.
Disadvantages
Semantic networks take more computational time at runtime.
These are inadequate in expressive power, as they have no equivalent of logical quantifiers.
These networks are not intelligent and depend on the creator of the system.
Example: Following are some statements which we need to represent in the form of
nodes and arcs.
Statements:
Jerry is a cat.
Jerry is a mammal.
Jerry is owned by Priya.
Jerry is brown colored.
All mammals are animals.
In the diagram for these statements, the different types of knowledge are represented in the form of
nodes and arcs. Each object is connected with another object by some relation.
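As a rough sketch (not part of the text), such a network can be held in a program as a set of labeled arcs, one triple per link; the relation names below are illustrative:

# Semantic network for the example statements, stored as labeled arcs (triples).
network = [
    ("Jerry",  "is_a",       "Cat"),      # Jerry is a cat (instance link)
    ("Cat",    "kind_of",    "Mammal"),   # cats are mammals, so Jerry is a mammal
    ("Mammal", "kind_of",    "Animal"),   # all mammals are animals
    ("Jerry",  "owned_by",   "Priya"),    # Jerry is owned by Priya
    ("Jerry",  "has_colour", "Brown"),    # Jerry is brown colored
]

def arcs_from(node):
    """All (relation, target) arcs leaving a node."""
    return [(r, t) for (s, r, t) in network if s == node]

print(arcs_from("Jerry"))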
There are many variants of semantic networks, but all are capable of representing individual
objects, categories of objects, and relations among objects. A typical graphical notation
displays object or category names in ovals or boxes, and connects them with labeled links.
For example, Figure 12.5 has a MemberOf link between Mary and FemalePersons,
corresponding to the logical assertion Mary ∈ FemalePersons; similarly, the SisterOf link
between Mary and John corresponds to the assertion SisterOf(Mary, John). We can connect
categories using SubsetOf links, and so on. We know that persons have female persons as
mothers, so can we draw a HasMother link from Persons to FemalePersons? The answer is
no, because HasMother is a relation between a person and his or her mother, and categories
do not have mothers. For this reason, we have used a special notation, the double-boxed link,
in Figure 12.5. This link asserts that
∀x  x ∈ Persons ⇒ [∀y  HasMother(x, y) ⇒ y ∈ FemalePersons].
We might also want to assert that persons have two legs, that is,
∀x  x ∈ Persons ⇒ Legs(x, 2).
The semantic network notation makes it convenient to perform inheritance reasoning. For
example, by virtue of being a person, Mary inherits the property of having two legs. Thus, to
find out how many legs Mary has, the inheritance algorithm follows the
MemberOf link from Mary to the category she belongs to, and then follows SubsetOf links up
the hierarchy until it finds a category for which there is a boxed Legs link—in this case, the
Persons category.
Figure 3.16 A semantic network with four objects (John, Mary, 1,
and 2) and four categories. Relations are denoted by labeled links.
Inheritance becomes complicated when an object can belong to more than one category or
when a category can be a subset of more than one other category; this is called multiple
inheritance. One drawback of semantic network notation, compared to first-order logic, is
that links between bubbles represent only binary relations. For example, the sentence
Fly(Shankar, NewYork, NewDelhi, Yesterday) cannot be asserted directly in a semantic
network. Nonetheless, we can obtain the effect of n-ary assertions by reifying the proposition
itself as an event belonging to an appropriate event category. Figure 12.6 shows the semantic
network structure for this particular event. Notice that the restriction to binary relations forces
the creation of a rich ontology of reified concepts.
One of the most important aspects of semantic networks is their ability to represent
default values for categories. Examining Figure 3.6 carefully, one notices that John has one
leg, despite the fact that he is a person and all persons have two legs. In a strictly logical KB,
this would be a contradiction, but in a semantic network, the assertion that all persons have
two legs has only default status; that is, a person is assumed to have two legs unless this is
contradicted by more specific information.
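A minimal sketch of this inheritance lookup (illustrative only; the link structure follows the figure's description and the attribute names are assumptions): look for a boxed value on the object itself, then follow its MemberOf link and SubsetOf links upward until a value is found. Because the most specific value is found first, a value stored directly on John overrides the default stored on Persons.

# Illustrative inheritance lookup over MemberOf/SubsetOf links.
member_of = {"Mary": "FemalePersons", "John": "MalePersons"}
subset_of = {"FemalePersons": "Persons", "MalePersons": "Persons"}

# Boxed attribute values; Persons carries the default Legs = 2,
# while John has the more specific (overriding) value Legs = 1.
legs = {"Persons": 2, "John": 1}

def inherited_legs(obj):
    """Return the Legs value found on obj or on the nearest enclosing category."""
    node = obj
    while node is not None:
        if node in legs:
            return node, legs[node]
        # the first hop uses MemberOf, later hops use SubsetOf
        node = member_of.get(node) or subset_of.get(node)
    return None, None

print(inherited_legs("Mary"))   # ('Persons', 2): inherited default
print(inherited_legs("John"))   # ('John', 1): specific value overrides the default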
EXAMPLES
We also include the negated goal ¬Criminal(West). The resolution proof is shown in Figure 3.18.
Figure 3.18 A resolution proof that West is a criminal. At each step,
the literals that unify are in bold.
Notice the structure: a single “spine” beginning with the goal clause, resolving against
clauses from the knowledge base until the empty clause is generated. This is characteristic of
resolution on Horn clause knowledge bases. In fact, the clauses along the main spine
correspond exactly to the consecutive values of the goals variable in the backward-chaining
algorithm of Figure. This is because we always choose to resolve with a clause whose
positive literal unifies with the leftmost literal of the “current” clause on the spine; this is
exactly what happens in backward chaining. Thus, backward chaining is just a special case of
resolution with a particular control strategy to decide which resolution to perform next.
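As a rough illustration (not from the text), the correspondence can be seen in a tiny backward chainer over propositional Horn clauses; the proposition names are simplified stand-ins for the literals of the crime example, and the goals list plays the role of the spine's current clause:

# Illustrative backward-chaining sketch over propositional Horn clauses.
# Each rule is (conclusion, [premises]); facts are rules with no premises.
rules = [
    ("criminal", ["american", "weapon", "sells", "hostile"]),
    ("american", []),
    ("weapon",   []),
    ("sells",    []),
    ("hostile",  []),
]

def backward_chain(goals):
    """Prove the goals left to right, mirroring the resolution 'spine'."""
    if not goals:
        return True                     # empty clause reached: proof complete
    first, rest = goals[0], goals[1:]
    for head, body in rules:
        if head == first and backward_chain(body + rest):
            return True
    return False

print(backward_chain(["criminal"]))     # True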
EXAMPLE 2
Our second example makes use of Skolemization and involves clauses that are not
definite clauses. This results in a somewhat more complex proof structure. In English, the
problem is as follows: everyone who loves all animals is loved by someone; anyone who
kills an animal is loved by no one; Jack loves all animals; either Jack or Curiosity killed the
cat, who is named Tuna. Did Curiosity kill the cat?
The resolution proof that Curiosity killed the cat is given in Figure. In English, the
proof could be paraphrased as follows:
Suppose Curiosity did not kill Tuna. We know that either Jack or Curiosity did; thus
Jack must have. Now, Tuna is a cat and cats are animals, so Tuna is an animal. Because
anyone who kills an animal is loved by no one, we know that no one loves Jack. On the other
hand, Jack loves all animals, so someone loves him; so we have a contradiction. Therefore
Curiosity killed the cat.
Figure 3.19 A resolution proof that Curiosity killed the cat. Notice the use
of factoring in the derivation of the clause Loves(G(Jack), Jack). Notice also
that, in the upper right, the unification of Loves(x, F(x)) and Loves(Jack, x) can
only succeed after the variables have been standardized apart.
The proof answers the question “Did Curiosity kill the cat?” but often we want to
pose more general questions, such as “Who killed the cat?” Resolution can do this, but it
takes a little more work to obtain the answer. The goal is ∃w Kills(w, Tuna), which, when
negated, becomes ¬Kills(w, Tuna) in CNF. Repeating the proof in Figure with the new negated
goal, we obtain a similar proof tree, but with the substitution {w/Curiosity} in one of the
steps. So, in this case, finding out who killed the cat is just a matter of keeping track of the
bindings for the query variables in the proof.
EXAMPLE 3
Problem: all students who are graduating are happy; all happy people smile; someone is
graduating. Prove that someone smiles.
(v) Eliminate ∃ (Skolemization, introducing the constant name1):
1. ∀x ¬graduating(x) ∨ happy(x)
2. ∀y ¬happy(y) ∨ smile(y)
3. graduating(name1)
4. ∀w ¬smile(w) (negated goal)
(vi) Eliminate ∀:
1. ¬graduating(x) ∨ happy(x)
2. ¬happy(y) ∨ smile(y)
3. graduating(name1)
4. ¬smile(w)
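For completeness, the refutation proceeds as follows: resolving clause 3 with clause 1 under
{x/name1} gives happy(name1); resolving that with clause 2 under {y/name1} gives
smile(name1); resolving with the negated goal 4 under {w/name1} yields the empty clause, so
someone (namely name1) smiles.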
EXAMPLE 4
Explain the unification algorithm used for reasoning under predicate logic with an
example. Consider the following facts
a. Team India
b. Team Australia
c. Final match between India and Australia
d. India scored 350 runs, Australia scored 350 runs, India lost 5 wickets,
Australia lost 7 wickets.
e. The team which scores the maximum runs wins the match.
f. If the scores are the same, the team which lost the minimum wickets wins the match.
Represent the facts in predicate, convert to clause form and prove by resolution “India
wins the match”.
Solution
(v) Eliminate ∃:
(a) team(India)
(b) team(Australia)
(c) ¬team(India) ∨ ¬team(Australia) ∨ final_match(India, Australia)
(d) score(India, 350) ∧ score(Australia, 350) ∧ wicket(India, 5) ∧ wicket(Australia, 7)
(e) ¬team(x) ∨ wins(x) ∨ ¬score(x, max_runs)
(f) ¬score(x, equal(y)) ∨ ¬wicket(x, min_wicket) ∨ ¬final_match(x, y) ∨ win(x)
(vi) Eliminate ∀.
(vii) Convert to conjunct of disjuncts form.
EXAMPLE 5
Problem 3
Solution
(v) Eliminate ∃:
1. company(ABC) ∧ employee(500, ABC)
2. ¬company(ABC) ∨ ¬employee(x, ABC) ∨ ¬earns(x, 5000) ∨ pays(x, tax)
3. manager(John, ABC)
4. ¬manager(x, ABC) ∨ earns(x, 10000)
(vi) Eliminate ∀.
Problem 4
If a perfect square is divisible by a prime p then it is also divisible by square of p.
Every perfect square is divisible by some prime.
36 is a perfect square.
Problem 5
Example
Trace the operation of the unification algorithm on each of the following pairs of literals:
In propositional logic it is easy to determine that two literals can not both be true at
the same time. Simply look for L and ~L. In predicate logic, this matching process is more
complicated, since bindings of variables must be considered.
For example, man(john) and ¬man(john) is a contradiction, while man(john) and
¬man(Himalayas) is not. Thus, in order to determine contradictions, we need a matching
procedure that compares two literals and discovers whether there exists a set of substitutions
that makes them identical. There is a recursive procedure that does this matching; it is called
the unification algorithm.
In the unification algorithm, each literal is represented as a list, where the first element is the
name of a predicate and the remaining elements are arguments. An argument may be a single
element (atom) or may be another list. For example, the literal man(john) is represented as the
list (man john).
To unify two literals, first check whether their first elements are the same. If so, proceed;
otherwise they cannot be unified. For example, literals with different predicate symbols, such
as man(john) and dog(john), cannot be unified. The unification algorithm recursively matches
pairs of elements, one pair at a time. The matching rules are:
i) Identical predicates, constants, or functions match; different ones cannot.
ii) A variable can match another variable, any constant, or a function or predicate
expression, subject to the condition that the function or predicate expression
must not contain any instance of the variable being matched (otherwise it will
lead to infinite recursion).
iii) The substitution must be consistent. Substituting y for x now and then z for x
later is inconsistent (a substitution of y for x is written as y/x).
The Unification algorithm is listed below as a procedure UNIFY (L1, L2). It returns a
list representing the composition of the substitutions that were performed during the match.
An empty list NIL indicates that a match was found without any substitutions. If the list
contains a single value F, it indicates that the unification procedure failed.
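For concreteness, here is a minimal Python sketch of a unifier in this spirit (this is not the UNIFY(L1, L2) procedure described above: it works on nested tuples instead of lists and returns a substitution dictionary, with None signalling failure; an empty dictionary plays the role of NIL):

# Minimal unification sketch. A term is a variable (lowercase string),
# a constant (capitalised string), or a compound: (functor, arg1, arg2, ...).

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside term t under substitution s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    """Return a substitution unifying a and b (extending s), or None on failure."""
    if s is None:
        s = {}
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return None if occurs(b, a, s) else {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                         # e.g. man(john) vs dog(john): different predicates

# hate(x, y) unifies with hate(Marcus, z) under {x: Marcus, y: z}
print(unify(("hate", "x", "y"), ("hate", "Marcus", "z")))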
The knowledge base does not entail ∀x P(x). To show this, we must give a model in which
P(a) and P(b) hold but ∀x P(x) is false. Consider any model with three domain elements,
where a and b refer to the first two elements and the relation referred to by P holds only for
those two elements.
What is ontological commitment (what exists in the world) of first order logic?
Represent the sentence “Brothers are siblings” in first order logic?
Ontological commitment means what assumptions the language makes about the nature of
reality. The representation of “Brothers are siblings” in first-order logic is
∀x, y [Brother(x, y) ⇒ Sibling(x, y)].
Following are the comparative differences between first-order logic and propositional logic:
Propositional logic assumes the world contains facts; its sentences are built only from atomic
propositions, and it cannot talk about objects or quantify over them.
First-order logic assumes the world contains objects, relations, and functions, and it provides
variables and quantifiers so that general statements such as “all kings are persons” can be
expressed directly.
Illustrate the use of first order logic to represent knowledge.
The best way to see the usage of first-order logic is through examples. The examples can be
taken from some simple domains. In knowledge representation, a domain is just some part of
the world about which we wish to express some knowledge.
Assertions and queries in first-order logic: Sentences are added to a knowledge base using
TELL, exactly as in propositional logic. Such sentences are called assertions. For example,
we can assert that John is a king and that kings are persons:
TELL(KB, King(John))
TELL(KB, ∀x King(x) ⇒ Person(x))
where KB is the knowledge base. We can ask questions of the knowledge base using ASK;
questions asked with ASK are called queries or goals. For example, ASK(KB, King(John))
asks whether John is a king, and ASK(KB, Person(John)) will return true. We can also pose a
quantified query such as ASK(KB, ∃x Person(x)).
The kinship domain: The first example we consider is the domain of family relationships, or
kinship. This domain includes facts such as "Elizabeth is the mother of Charles" and "Charles
is the father of William" and rules such as "One's grandmother is the mother of one's parent."
Clearly, the objects in our domain are people. We will have two unary predicates, Male and
Female. Kinship relations (parenthood, brotherhood, marriage, and so on) will be represented
by binary predicates: Parent, Sibling, Brother, Sister, Child, Daughter, Son, Spouse, Husband,
Grandparent, Grandchild, Cousin, Aunt, and Uncle. We will use functions for Mother and
Father.
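For example, some of the kinship definitions can then be written as axioms (these are standard illustrative definitions for this domain):
∀m, c  Mother(c) = m ⇔ Female(m) ∧ Parent(m, c)
∀w, h  Husband(h, w) ⇔ Male(h) ∧ Spouse(h, w)
∀x, y  Sibling(x, y) ⇔ x ≠ y ∧ ∃p  Parent(p, x) ∧ Parent(p, y)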