Unit 3 (Part 2)

Ontological engineering involves organizing concepts into categories, facilitating knowledge representation through upper ontologies and first-order logic. It emphasizes the importance of categorization for reasoning, inheritance, and the representation of events and mental processes. Additionally, it discusses semantic networks and description logics as systems for organizing and reasoning with categories, highlighting their advantages and disadvantages.

KNOWLEDGE REPRESENTATION: ONTOLOGICAL ENGINEERING

Ontology refers to organizing everything in the world into a hierarchy of categories.


Representing the abstract concepts such as Actions, Time, Physical Objects and Beliefs is
called Ontological Engineering.

The general framework of concepts is called an upper ontology because of the convention of
drawing graphs with the general concepts at the top and the more specific concepts below
them, as in Figure

Categories and Objects

The organization of objects into categories is a vital part of knowledge


representation. Although interaction with the world takes place at the level of individual
objects, much reasoning takes place at the level of categories.

For example, a shopper would normally have the goal of buying a basketball, rather than a particular basketball such as BB9. There are two choices for representing categories in first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or we can reify the category as an object, Basketballs. We could then say Member(b, Basketballs), which we will abbreviate as b ∈ Basketballs, to say that b is a member of the category of basketballs. We say Subset(Basketballs, Balls), abbreviated as Basketballs ⊂ Balls, to say that Basketballs is a subcategory of Balls.

Categories serve to organize and simplify the knowledge base through inheritance. If we say that all instances of the category Food are edible, and if we assert that Fruit is a subclass of Food and Apples is a subclass of Fruit, then we can infer that every apple is edible. We say that the individual apples inherit the property of edibility, in this case from their membership in the Food category.

First-order logic makes it easy to state facts about categories, either by relating objects to categories or by quantifying over their members. Here are some types of facts, with examples of each:

• An object is a member of a category: BB9 ∈ Basketballs
• A category is a subclass of another category: Basketballs ⊂ Balls
• All members of a category have some properties: (x ∈ Basketballs) ⇒ Spherical(x)
• Members of a category can be recognized by some properties: Orange(x) ∧ Round(x) ∧ Diameter(x) = 9.5 ∧ x ∈ Balls ⇒ x ∈ Basketballs
• A category as a whole has some properties: Dogs ∈ DomesticatedSpecies

Notice that because Dogs is a category and is a member of DomesticatedSpecies, the latter must be a category of categories. Categories can also be defined by providing necessary and sufficient conditions for membership. For example, a bachelor is an unmarried adult male:

x ∈ Bachelors ⇔ Unmarried(x) ∧ x ∈ Adults ∧ x ∈ Males
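These category axioms can be prototyped directly. The following Python sketch is illustrative only: the dictionaries, category names, and property table are assumptions made for this example, not part of the text. It shows membership, subclass links, and inheritance of Edible from Food down to an individual apple.

# A toy category hierarchy: subclass links between categories, membership links
# from objects to categories, and properties asserted at the category level.
subclass_of = {"Apples": ["Fruit"], "Fruit": ["Food"], "Basketballs": ["Balls"]}
member_of = {"Apple1": ["Apples"], "BB9": ["Basketballs"]}
properties = {"Food": {"Edible": True}, "Basketballs": {"Spherical": True}}

def categories_of(obj):
    """All categories an object belongs to, directly or via SubsetOf links."""
    frontier = list(member_of.get(obj, []))
    seen = set()
    while frontier:
        cat = frontier.pop()
        if cat not in seen:
            seen.add(cat)
            frontier.extend(subclass_of.get(cat, []))
    return seen

def lookup(obj, prop):
    """Inherit a property from any category the object belongs to."""
    for cat in categories_of(obj):
        if prop in properties.get(cat, {}):
            return properties[cat][prop]
    return None

print(lookup("Apple1", "Edible"))   # True: inherited from Food via Fruit and Apples
print(lookup("BB9", "Spherical"))   # True: inherited from Basketballs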

Physical Composition

We use the general PartOf relation to say that one thing is part of another. Objects can
be grouped into part of hierarchies, reminiscent of the Subset hierarchy:

PartOf(Bucharest, Romania)
PartOf(Romania, EasternEurope)
PartOf(EasternEurope, Europe)
PartOf(Europe, Earth)

The PartOf relation is transitive and reflexive; that is,

PartOf(x, y) ∧ PartOf(y, z) ⇒ PartOf(x, z)
PartOf(x, x)

Therefore, we can conclude PartOf(Bucharest, Earth).
For example, if the apples are Apple1, Apple2, and Apple3, then

BunchOf({Apple1, Apple2, Apple3})

denotes the composite object with the three apples as parts (not elements). We can define BunchOf in terms of the PartOf relation. Obviously, each element of s is part of BunchOf(s):

∀x x ∈ s ⇒ PartOf(x, BunchOf(s))

Furthermore, BunchOf(s) is the smallest object satisfying this condition. In other words, BunchOf(s) must be part of any object that has all the elements of s as parts:

∀y [∀x x ∈ s ⇒ PartOf(x, y)] ⇒ PartOf(BunchOf(s), y)
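As a rough illustration of how transitivity and reflexivity license the conclusion PartOf(Bucharest, Earth), here is a minimal Python sketch that uses only the four assertions above; storing the relation as a set of pairs is an encoding chosen for this example.

# The asserted PartOf facts.
part_of = {("Bucharest", "Romania"), ("Romania", "EasternEurope"),
           ("EasternEurope", "Europe"), ("Europe", "Earth")}

def is_part_of(x, y):
    """True if x is part of y under the reflexive, transitive closure."""
    if x == y:                                        # PartOf(x, x)
        return True
    # Follow PartOf links outward from x until y is reached or no link remains.
    return any(a == x and is_part_of(b, y) for (a, b) in part_of)

print(is_part_of("Bucharest", "Earth"))   # True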


Measurements

In both scientific and commonsense theories of the world, objects have height, mass,
cost, and so on. The values that we assign for these properties are called measures.
Length(L1)=Inches(1.5)=Centimeters(3.81)

Conversion between units is done by equating multiples of one unit to another:


Centimeters(2.54 ×d)=Inches(d)

Similar axioms can be written for pounds and kilograms, seconds and days, and
dollars and cents. Measures can be used to describe objects as follows:

Diameter(Basketball12) = Inches(9.5)
d ∈ Days ⇒ Duration(d) = Hours(24)
ListPrice(Basketball12) = $(19)
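A minimal Python sketch of how such measures can be handled, assuming centimetres as the canonical internal unit (that choice, and the function names, are assumptions of the sketch):

# The conversion axiom Centimeters(2.54 * d) = Inches(d) becomes a multiplication.
def inches(d):
    return 2.54 * d      # an Inches measure expressed in centimetres

def centimeters(d):
    return d

L1 = inches(1.5)
print(L1)                                   # 3.81
print(abs(L1 - centimeters(3.81)) < 1e-9)   # True: Inches(1.5) = Centimeters(3.81)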

Time Intervals

Event calculus opens us up to the possibility of talking about time, and time intervals.
We will consider two kinds of time intervals: moments and extended intervals. The
distinction is that only moments have zero duration:

Partition({Moments, ExtendedIntervals}, Intervals)
i ∈ Moments ⇔ Duration(i) = Seconds(0)

The functions Begin and End pick out the earliest and latest moments in an interval,
and the function Time delivers the point on the time scale for a moment.

The function Duration gives the difference between the end time and the start time.

Interval(i) ⇒ Duration(i) = (Time(End(i)) − Time(Begin(i)))
Time(Begin(AD1900)) = Seconds(0)
Time(Begin(AD2001)) = Seconds(3187324800)
Time(End(AD2001)) = Seconds(3218860800)
Duration(AD2001) = Seconds(31536000)
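Using the time points above, the duration axiom is just a subtraction. A small Python sketch follows; the dictionaries are an assumed encoding of the Begin and End functions.

# Time points are absolute seconds on one scale, with Seconds(0) at the start of AD 1900.
begin = {"AD1900": 0, "AD2001": 3187324800}
end = {"AD2001": 3218860800}

def duration(interval):
    """Duration(i) = Time(End(i)) - Time(Begin(i))."""
    return end[interval] - begin[interval]

print(duration("AD2001"))   # 31536000 seconds, i.e. 365 days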

EVENTS

Event calculus reifies fluents and events. The fluent At(Shankar, Berkeley) is an object that refers to the fact of Shankar being in Berkeley, but does not by itself say anything about whether it is true. To assert that a fluent is actually true at some point in time we use the predicate T, as in T(At(Shankar, Berkeley), t).

Events are described as instances of event categories. The event E1 of Shankar flying from San Francisco to Washington, D.C. is described as

E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC)

Alternatively, we can define a three-argument version of the category of flying events and say E1 ∈ Flyings(Shankar, SF, DC). We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we say the same thing in functional form with Extent(E1) = i. We represent time intervals by a (start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2.
The complete set of predicates for one version of the event calculus is:

T(f, t): Fluent f is true at time t
Happens(e, i): Event e happens over the time interval i
Initiates(e, f, t): Event e causes fluent f to start to hold at time t
Terminates(e, f, t): Event e causes fluent f to cease to hold at time t
Clipped(f, i): Fluent f ceases to be true at some point during time interval i
Restored(f, i): Fluent f becomes true sometime during time interval i

We assume a distinguished event, Start, that describes the initial state by saying which fluents are initiated or terminated at the start time. We define T by saying that a fluent holds at a point in time if the fluent was initiated by an event at some time in the past and was not made false (clipped) by an intervening event. A fluent does not hold if it was terminated by an event and not made true (restored) by another event. Formally, the axioms are:

Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored(f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)

where Clipped and Restored are defined by

Clipped(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)
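A minimal Python sketch of the T axiom, assuming a tiny set of ground Happens, Initiates, and Terminates facts for Shankar's flight; the time points 0, 10, and 14 are invented for the illustration.

# Ground facts: each Happens entry is (event, start, end); Initiates/Terminates
# record which fluents an event starts or ends at the event's start time.
happens = [("Start", 0, 0), ("E1", 10, 14)]                 # E1 is Shankar's flight
initiates = {("Start", "At(Shankar, SF)", 0),
             ("E1", "At(Shankar, DC)", 10)}
terminates = {("E1", "At(Shankar, SF)", 10)}

def clipped(fluent, t1, t2):
    """The fluent ceases to hold at some point during (t1, t2)."""
    return any((e, fluent, ts) in terminates and t1 <= ts < t2
               for (e, ts, te) in happens)

def T(fluent, t):
    """The fluent holds at t if it was initiated earlier and not clipped since."""
    return any((e, fluent, ts) in initiates and ts < t and not clipped(fluent, ts, t)
               for (e, ts, te) in happens)

print(T("At(Shankar, SF)", 5))    # True: initiated by Start and not yet clipped
print(T("At(Shankar, SF)", 12))   # False: terminated by E1 at time 10
print(T("At(Shankar, DC)", 20))   # True: initiated by E1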

MENTAL EVENTS AND MENTAL OBJECTS


A mental event is any event that happens within the mind of a conscious
individual. Examples include thoughts, feelings, decisions, dreams, and
realizations. Some believe that mental events are not limited to human
thought but can be associated with animals and artificial intelligence as well.

What we need is a model of the mental objects that are in someone’s head (or something’s
knowledge base) and of the mental processes that manipulate those mental objects. The
model does not have to be detailed. We do not have to be able to predict how many
milliseconds it will take for a particular agent to make a deduction. We will be happy just to
be able to conclude that mother knows whether or not she is sitting.

We begin with the propositional attitudes that an agent can have toward mental objects:
attitudes such as Believes, Knows, Wants, Intends, and Informs. The difficulty is that these
attitudes do not behave like “normal” predicates.

For example, suppose we try to assert that Lois knows that Superman can fly:

Knows(Lois, CanFly(Superman))

One minor issue with this is that we normally think of CanFly(Superman) as a sentence, but here it appears as a term. That issue can be patched up just by reifying CanFly(Superman), making it a fluent. A more serious problem is that, if it is true that Superman is Clark Kent, then we must conclude that Lois knows that Clark can fly:

(Superman = Clark) ∧ Knows(Lois, CanFly(Superman)) |= Knows(Lois, CanFly(Clark))

Modal logic is designed to address this problem. Regular logic is concerned with a
single modality, the modality of truth, allowing us to express “P is true.” Modal logic
includes special modal operators that take sentences (rather than terms) as arguments.

For example, “A knows P” is represented with the notation KAP, where K is the modal
operator for knowledge. It takes two arguments, an agent (written as the subscript) and a
sentence. The syntax of modal logic is the same as first-order logic, except that sentences
can also be formed with modal operators.

In first-order logic, a model contains a set of objects and an interpretation that maps each
name to the appropriate object, relation, or function. In modal logic we want to be able to
consider both the possibility that Superman’s secret identity is Clark and that it isn’t.
Therefore, we will need a more complicated model, one that consists of a collection of
possible worlds rather than just one true world. The worlds are connected in a graph by
accessibility relations, one relation for each modal operator. We say that world w1 is
accessible from world w0 with respect to the modal operator KA if everything in w1 is
consistent with what A knows in w0, and we write this as Acc(KA,w0,w1). In general, a
knowledge atom KAP is true in world w if and only if P is true in every world accessible
from w. The truth of more complex sentences is derived by recursive application of this rule
and the normal rules of first-order logic. That means that modal logic can be used to
reason about nested knowledge sentences: what one agent knows about another agent’s
knowledge. For example, we can say that, even though Lois doesn't know whether Superman's secret identity is Clark Kent, she does know that Clark knows:

KLois[KClark Identity(Superman, Clark) ∨ KClark ¬Identity(Superman, Clark)]
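A tiny Python sketch of this possible-worlds semantics; the two-world model, the accessibility table, and the proposition strings are assumptions made for the illustration.

# Two possible worlds: in w1 Superman's identity is Clark, in w2 it is not.
# Lois cannot distinguish them, so both are accessible from either world for K_Lois.
facts = {"w1": {"CanFly(Superman)", "Identity(Superman, Clark)"},
         "w2": {"CanFly(Superman)"}}
acc = {("K_Lois", "w1"): {"w1", "w2"},
       ("K_Lois", "w2"): {"w1", "w2"}}

def knows(agent_op, world, proposition):
    """K_A P is true in w iff P holds in every world accessible from w."""
    return all(proposition in facts[w] for w in acc[(agent_op, world)])

print(knows("K_Lois", "w1", "CanFly(Superman)"))            # True
print(knows("K_Lois", "w1", "Identity(Superman, Clark)"))   # False: w2 disagrees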
REASONING SYSTEMS FOR CATEGORIES
This section describes systems specially designed for organizing and reasoning with
categories. There are two closely related families of systems: semantic networks provide
graphical aids for visualizing a knowledge base and efficient algorithms for inferring
properties of an object on the basis of its category membership; and description logics
provide a formal language for constructing and combining category definitions and efficient
algorithms for deciding subset and superset relationships between categories.

SEMANTIC NETWORKS

Semantic networks work as an alternative to predicate logic for knowledge representation. In semantic
networks, the user can represent their knowledge in the form of graphical networks. This network
consists of nodes representing objects and arcs which describe the relationship between those objects.
This representation uses two main types of relations: the IS-A relationship (inheritance) and the Kind-Of relationship.

Advantages
• Semantic networks are a natural representation of knowledge.
• They convey meaning transparently.
• These networks are simple and easy to understand.
Disadvantages
• Semantic networks take more computational time at runtime.
• They are inadequate because they have no equivalent of quantifiers.
• These networks are not intelligent and depend on the creator of the system.

Example: Following are some statements which we need to represent in the form of
nodes and arcs.

Statements:
Jerry is a cat.
Jerry is a mammal
Jerry is owned by Priya.
Jerry is brown colored.
All mammals are animals.
In the diagram for this example, the different types of knowledge are represented in the form of nodes and arcs. Each object is connected with another object by some relation.

There are many variants of semantic networks, but all are capable of representing individual
objects, categories of objects, and relations among objects. A typical graphical notation
displays object or category names in ovals or boxes, and connects them with labeled links.

For example, Figure 3.16 has a MemberOf link between Mary and FemalePersons, corresponding to the logical assertion Mary ∈ FemalePersons; similarly, the SisterOf link between Mary and John corresponds to the assertion SisterOf(Mary, John). We can connect categories using SubsetOf links, and so on. We know that persons have female persons as mothers, so can we draw a HasMother link from Persons to FemalePersons? The answer is no, because HasMother is a relation between a person and his or her mother, and categories do not have mothers. For this reason, we have used a special notation, the double-boxed link, in Figure 3.16. This link asserts that

∀x x ∈ Persons ⇒ [∀y HasMother(x, y) ⇒ y ∈ FemalePersons]

We might also want to assert that persons have two legs, that is,

∀x x ∈ Persons ⇒ Legs(x, 2)

The semantic network notation makes it convenient to perform inheritance reasoning. For example, by virtue of being a person, Mary inherits the property of having two legs. Thus, to find out how many legs Mary has, the inheritance algorithm follows the MemberOf link from Mary to the category she belongs to, and then follows SubsetOf links up the hierarchy until it finds a category for which there is a boxed Legs link, in this case the Persons category.
Figure 3.16 A semantic network with four objects (John, Mary, 1,
and 2) and four categories. Relations are denoted by labeled links.
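A short Python sketch of the inheritance algorithm just described, encoding the MemberOf, SubsetOf, and boxed Legs links of Figure 3.16; the dictionaries are an assumed encoding chosen for this illustration, not a standard API.

# MemberOf and SubsetOf links, plus boxed (category-level) and object-level Legs values.
member_of = {"Mary": "FemalePersons", "John": "Persons"}
subset_of = {"FemalePersons": "Persons", "MalePersons": "Persons"}
boxed_legs = {"Persons": 2}       # default: all persons have two legs
object_legs = {"John": 1}         # a more specific, object-level assertion

def legs(obj):
    """Follow the MemberOf link, then SubsetOf links upward, until a Legs link is found."""
    if obj in object_legs:        # the object-level value overrides the category default
        return object_legs[obj]
    cat = member_of.get(obj)
    while cat is not None:
        if cat in boxed_legs:
            return boxed_legs[cat]
        cat = subset_of.get(cat)
    return None

print(legs("Mary"))   # 2: inherited from Persons via FemalePersons
print(legs("John"))   # 1: the more specific value wins, as discussed below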

Inheritance becomes complicated when an object can belong to more than one category or when a category can be a subset of more than one other category; this is called multiple inheritance. The drawback of semantic network notation, compared to first-order logic, is that links between bubbles represent only binary relations. For example, the sentence Fly(Shankar, NewYork, NewDelhi, Yesterday) cannot be asserted directly in a semantic network. Nonetheless, we can obtain the effect of n-ary assertions by reifying the proposition itself as an event belonging to an appropriate event category. Figure 3.17 shows the semantic network structure for this particular event. Notice that the restriction to binary relations forces the creation of a rich ontology of reified concepts.

Figure 3.17 A fragment of a semantic network showing the


representation of the logical assertion Fly(Shankar,
NewYork, NewDelhi, Yesterday)

One of the most important aspects of semantic networks is their ability to represent default values for categories. Examining Figure 3.16 carefully, one notices that John has one leg, despite the fact that he is a person and all persons have two legs. In a strictly logical KB, this would be a contradiction, but in a semantic network, the assertion that all persons have two legs has only default status; that is, a person is assumed to have two legs unless this is contradicted by more specific information.

Reasoning with Default Information


Non-monotonic Reasoning
In non-monotonic reasoning, some conclusions may be invalidated when more information is added to the knowledge base.
A logic is said to be non-monotonic if conclusions can be invalidated by adding more knowledge to the knowledge base.
Non-monotonic reasoning deals with incomplete and uncertain models.
Human perception of various things in daily life is a general example of non-monotonic reasoning.
Example: Suppose the knowledge base contains the following knowledge:
o Birds can fly
o Penguins cannot fly
o Pitty is a bird
From these sentences, we can conclude that Pitty can fly. However, if we add the sentence "Pitty is a penguin" to the knowledge base, we conclude "Pitty cannot fly", which invalidates the earlier conclusion (a minimal code sketch of this behaviour follows the lists below).
Advantages of Non-monotonic reasoning:
o It can be used for real-world systems such as robot navigation.
o We can choose probabilistic facts or make assumptions.
Disadvantages of Non-monotonic reasoning:
o Old facts may be invalidated by adding new sentences.
o It cannot be used for theorem proving.
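A minimal Python sketch of this behaviour; the hand-coded default rule and the fact strings are assumptions of the sketch, not a general non-monotonic inference engine.

def can_fly(kb):
    """Default rule: a bird flies unless the more specific penguin fact is present."""
    return "bird(Pitty)" in kb and "penguin(Pitty)" not in kb

kb = {"bird(Pitty)"}
print(can_fly(kb))        # True: with only the bird fact we conclude Pitty can fly

kb.add("penguin(Pitty)")  # new knowledge arrives
print(can_fly(kb))        # False: the earlier conclusion is withdrawn (non-monotonicity)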

EXAMPLES

We also include the negated goal ¬Criminal(West). The resolution proof is shown in Figure 3.18.
Figure 3.18 A resolution proof that West is a criminal. At each step, the literals that unify are shown in bold.

Notice the structure: a single "spine" begins with the goal clause and resolves against clauses from the knowledge base until the empty clause is generated. This is characteristic of resolution on Horn clause knowledge bases. In fact, the clauses along the main spine correspond exactly to the consecutive values of the goals variable in the backward-chaining algorithm. This is because we always choose to resolve with a clause whose positive literal unifies with the leftmost literal of the "current" clause on the spine; this is exactly what happens in backward chaining. Thus, backward chaining is just a special case of resolution with a particular control strategy to decide which resolution to perform next.

EXAMPLE 2

Our second example makes use of Skolemization and involves clauses that are not definite clauses. This results in a somewhat more complex proof structure. In English, the problem is as follows:

Everyone who loves all animals is loved by someone.
Anyone who kills an animal is loved by no one.
Jack loves all animals.
Either Jack or Curiosity killed the cat, who is named Tuna.
Did Curiosity kill the cat?

First, we express the original sentences, some background knowledge, and the negated goal ¬G in first-order logic:

A. ∀x [∀y Animal(y) ⇒ Loves(x, y)] ⇒ [∃y Loves(y, x)]
B. ∀x [∃z Animal(z) ∧ Kills(x, z)] ⇒ [∀y ¬Loves(y, x)]
C. ∀x Animal(x) ⇒ Loves(Jack, x)
D. Kills(Jack, Tuna) ∨ Kills(Curiosity, Tuna)
E. Cat(Tuna)
F. ∀x Cat(x) ⇒ Animal(x)
¬G. ¬Kills(Curiosity, Tuna)
Now we apply the conversion procedure to convert each sentence to CNF:
A1. Animal(F(x)) ∨ Loves(G(x), x)
A2. ¬Loves(x, F(x)) ∨ Loves(G(x), x)
B. ¬Loves(y, x) ∨ ¬Animal(z) ∨ ¬Kills(x, z)
C. ¬Animal(x) ∨ Loves(Jack, x)
D. Kills(Jack, Tuna) ∨ Kills(Curiosity, Tuna)
E. Cat(Tuna)
F. ¬Cat(x) ∨ Animal(x)
¬G. ¬Kills(Curiosity, Tuna)

The resolution proof that Curiosity killed the cat is given in Figure 3.19. In English, the proof could be paraphrased as follows:

Suppose Curiosity did not kill Tuna. We know that either Jack or Curiosity did; thus
Jack must have. Now, Tuna is a cat and cats are animals, so Tuna is an animal. Because
anyone who kills an animal is loved by no one, we know that no one loves Jack. On the other
hand, Jack loves all animals, so someone loves him; so we have a contradiction. Therefore
Curiosity killed the cat.

Figure 3.19 A resolution proof that Curiosity killed the cat. Notice the use of factoring in the derivation of the clause Loves(G(Jack), Jack). Notice also, in the upper right, that the unification of Loves(x, F(x)) and Loves(Jack, x) can only succeed after the variables have been standardized apart.

The proof answers the question "Did Curiosity kill the cat?", but often we want to pose more general questions, such as "Who killed the cat?" Resolution can do this, but it takes a little more work to obtain the answer. The goal is ∃w Kills(w, Tuna), which, when negated, becomes ¬Kills(w, Tuna) in CNF. Repeating the proof with the new negated goal, we obtain a similar proof tree, but with the substitution {w/Curiosity} in one of the steps. So, in this case, finding out who killed the cat is just a matter of keeping track of the bindings for the query variables in the proof.
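To make the refutation concrete, the following sketch grounds the CNF clauses of Example 2 at the substitutions used in the proof (x = Jack, z = Tuna, y = G(Jack)) and runs a plain propositional resolution loop until the empty clause appears. The string atoms and the saturation strategy are choices made for this illustration; they are not the textbook's first-order procedure.

from itertools import combinations

# Ground instances of clauses A1, A2, B, C, D, E, F and the negated goal.
# Literals are strings; a leading '-' marks negation.
clauses = [
    frozenset({"Animal(F(Jack))", "Loves(G(Jack),Jack)"}),                      # A1
    frozenset({"-Loves(Jack,F(Jack))", "Loves(G(Jack),Jack)"}),                 # A2
    frozenset({"-Loves(G(Jack),Jack)", "-Animal(Tuna)", "-Kills(Jack,Tuna)"}),  # B
    frozenset({"-Animal(F(Jack))", "Loves(Jack,F(Jack))"}),                     # C
    frozenset({"Kills(Jack,Tuna)", "Kills(Curiosity,Tuna)"}),                   # D
    frozenset({"Cat(Tuna)"}),                                                   # E
    frozenset({"-Cat(Tuna)", "Animal(Tuna)"}),                                  # F
    frozenset({"-Kills(Curiosity,Tuna)"}),                                      # negated goal
]

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (the set union also performs factoring)."""
    return [frozenset((c1 - {lit}) | (c2 - {negate(lit)}))
            for lit in c1 if negate(lit) in c2]

def refutes(clause_set):
    """Saturate under resolution; report whether the empty clause is derived."""
    known = set(clause_set)
    while True:
        new = set()
        for c1, c2 in combinations(known, 2):
            for r in resolve(c1, c2):
                if not r:           # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= known:            # no new clauses: saturation without contradiction
            return False
        known |= new

print(refutes(clauses))   # True, so Kills(Curiosity, Tuna) is entailed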

EXAMPLE 3

1. All people who are graduating are happy.


2. All happy people smile.
3. Someone is graduating.
4. Conclusion: Is someone smiling?
Solution

Convert the sentences into predicate Logic


1. ∀x graduating(x) → happy(x)
2. ∀x happy(x) → smile(x)
3. ∃x graduating(x)
4. ¬∃x smile(x) (the negation of the conclusion, added for proof by resolution)

Convert to clausal form


(i) Eliminate the → sign
1. ∀x ¬graduating(x) ∨ happy(x)
2. ∀x ¬happy(x) ∨ smile(x)
3. ∃x graduating(x)
4. ¬∃x smile(x)

(ii) Reduce the scope of negation


1. ∀x ¬graduating(x) ∨ happy(x)
2. ∀x ¬happy(x) ∨ smile(x)
3. ∃x graduating(x)
4. ∀x ¬smile(x)

(iii) Standardize variables apart

1. ∀x ¬graduating(x) ∨ happy(x)
2. ∀y ¬happy(y) ∨ smile(y)
3. ∃z graduating(z)
4. ∀w ¬smile(w)

(iv) Move all quantifiers to the left


1. ∀x ¬graduating(x) ∨ happy(x)
2. ∀y ¬happy(y) ∨ smile(y)
3. ∃z graduating(z)
4. ∀w ¬smile(w)

(v) Eliminate ∃
1. ∀x ¬graduating(x) ∨ happy(x)
2. ∀y ¬happy(y) ∨ smile(y)
3. graduating(name1)
4. ∀w ¬smile(w)

(vi) Eliminate ∀
1. ¬graduating(x) ∨ happy(x)
2. ¬happy(y) ∨ smile(y)
3. graduating(name1)
4. ¬smile(w)

(vii) Convert to conjunct of disjuncts form

(viii) Make each conjunct a separate clause.

(ix) Standardize variables apart again.

Figure 3.20 Standardize variables

Resolving ¬smile(w) with ¬happy(y) ∨ smile(y) gives ¬happy(w); resolving that with ¬graduating(x) ∨ happy(x) gives ¬graduating(w); and resolving with graduating(name1) under the substitution {w/name1} yields the empty clause. Thus, we have proved that someone is smiling.

EXAMPLE 4

Explain the unification algorithm used for reasoning under predicate logic with an
example. Consider the following facts

a. Team India
b. Team Australia
c. Final match between India and Australia
d. India scored 350 runs, Australia scored 350 runs, India lost 5 wickets,
Australia lost 7 wickets.
f. If the scores are the same, the team which lost the minimum number of wickets wins the match.

Represent the facts in predicate, convert to clause form and prove by resolution “India
wins the match”.
Solution

Convert into predicate Logic


(a) team(India)
(b) team(Australia)
(c) team(India) ∧ team(Australia) → final_match(India, Australia)
(d) score(India, 350) ∧ score(Australia, 350) ∧ wicket(India, 5) ∧ wicket(Australia, 7)
(e) ∀x team(x) ∧ wins(x) → score(x, max_runs)
(f) ∀x ∀y score(x, equal(y)) ∧ wicket(x, min_wicket) ∧ final_match(x, y) → win(x)

Convert to clausal form


(i) Eliminate the → sign
(a) team(India)
(b) team(Australia)
(c) ¬(team(India) ∧ team(Australia)) ∨ final_match(India, Australia)
(d) score(India, 350) ∧ score(Australia, 350) ∧ wicket(India, 5) ∧ wicket(Australia, 7)
(e) ∀x ¬(team(x) ∧ wins(x)) ∨ score(x, max_runs)
(f) ∀x ∀y ¬(score(x, equal(y)) ∧ wicket(x, min_wicket) ∧ final_match(x, y)) ∨ win(x)

(ii) Reduce the scope of negation

(a) team(India)
(b) team(Australia)
(c) ¬team(India) ∨ ¬team(Australia) ∨ final_match(India, Australia)
(d) score(India, 350) ∧ score(Australia, 350) ∧ wicket(India, 5) ∧ wicket(Australia, 7)
(e) ∀x ¬team(x) ∨ ¬wins(x) ∨ score(x, max_runs)
(f) ∀x ∀y ¬score(x, equal(y)) ∨ ¬wicket(x, min_wicket) ∨ ¬final_match(x, y) ∨ win(x)

(iii) Standardize variables apart

(iv) Move all quantifiers to the left

(v) Eliminate ∃ (there are no existential quantifiers in these clauses)
(a) team(India)
(b) team(Australia)
(c) ¬team(India) ∨ ¬team(Australia) ∨ final_match(India, Australia)
(d) score(India, 350) ∧ score(Australia, 350) ∧ wicket(India, 5) ∧ wicket(Australia, 7)
(e) ¬team(x) ∨ ¬wins(x) ∨ score(x, max_runs)
(f) ¬score(x, equal(y)) ∨ ¬wicket(x, min_wicket) ∨ ¬final_match(x, y) ∨ win(x)

(vi) Eliminate ∀
(vii) Convert to conjunct of disjuncts form.

(viii) Make each conjunct a separate clause.


(a) team(India)
(b) team(Australia)
(c) ¬team(India) ∨ ¬team(Australia) ∨ final_match(India, Australia)
(d) score(India, 350)
    score(Australia, 350)
    wicket(India, 5)
    wicket(Australia, 7)
(e) ¬team(x) ∨ ¬wins(x) ∨ score(x, max_runs)
(f) ¬score(x, equal(y)) ∨ ¬wicket(x, min_wicket) ∨ ¬final_match(x, y) ∨ win(x)

(ix) Standardize variables apart


again.

To prove: win(India)
Add the negated goal: ¬win(India)

Figure 3.21 Standardize variables

Thus, it is proved that India wins the match.

EXAMPLE 5

Problem 3

Consider the following facts and represent them in predicate

form:

F1. There are 500 employees in ABC company.
F2. Employees earning more than Rs. 5000 pay tax.
F3. John is a manager in ABC company.
F4. Managers earn Rs. 10,000.
Convert the facts in predicate form to clauses and then prove by resolution: “John pays
tax”.

Solution

Convert into predicate Logic


1. company(ABC) ^employee(500,ABC)
2. x company(ABC) ^employee(x,ABC) ^ earns(x,5000)→pays(x,tax)
3. manager(John,ABC)
4. x manager(x, ABC)→earns(x,10000)
Convert to clausal form

(i) Eliminate the → sign


1. company(ABC) ∧ employee(500, ABC)
2. ∀x ¬(company(ABC) ∧ employee(x, ABC) ∧ earns(x, 5000)) ∨ pays(x, tax)
3. manager(John, ABC)
4. ∀x ¬manager(x, ABC) ∨ earns(x, 10000)

(ii) Reduce the scope of negation


1. company(ABC) ∧ employee(500, ABC)
2. ∀x ¬company(ABC) ∨ ¬employee(x, ABC) ∨ ¬earns(x, 5000) ∨ pays(x, tax)
3. manager(John, ABC)
4. ∀x ¬manager(x, ABC) ∨ earns(x, 10000)

(iii) Standardize variables apart


1. company(ABC) ∧ employee(500, ABC)
2. ∀x ¬company(ABC) ∨ ¬employee(x, ABC) ∨ ¬earns(x, 5000) ∨ pays(x, tax)
3. manager(John, ABC)
4. ∀x ¬manager(x, ABC) ∨ earns(x, 10000)

(iv) Move all quantifiers to the left

(v) Eliminate ∃ (there are no existential quantifiers here)

1. company(ABC) ∧ employee(500, ABC)
2. ¬company(ABC) ∨ ¬employee(x, ABC) ∨ ¬earns(x, 5000) ∨ pays(x, tax)
3. manager(John, ABC)
4. ¬manager(x, ABC) ∨ earns(x, 10000)

(vi) Eliminate ∀

(vii) Convert to conjunct of disjuncts form


(viii) Make each conjunct a separate clause.
1. (a) company(ABC)
   (b) employee(500, ABC)
2. ¬company(ABC) ∨ ¬employee(x, ABC) ∨ ¬earns(x, 5000) ∨ pays(x, tax)
3. manager(John, ABC)
4. ¬manager(x, ABC) ∨ earns(x, 10000)

(ix) Standardize variables apart again.


To prove: pays(John, tax)
Add the negated goal: ¬pays(John, tax)

Figure 3.22 Standardize variables

Thus, it is proved that John pays tax.

EXAMPLE 6 & EXAMPLE 7

Problem 4
If a perfect square is divisible by a prime p, then it is also divisible by the square of p.
Every perfect square is divisible by some prime.
36 is a perfect square.

Convert into predicate Logic

1. ∀x ∀y perfect_sq(x) ∧ prime(y) ∧ divides(x, y) → divides(x, square(y))
2. ∀x ∃y perfect_sq(x) → prime(y) ∧ divides(x, y)
3. perfect_sq(36)

Problem 5

1. Marcus was a man.
   man(Marcus)
2. Marcus was a Pompeian.
   Pompeian(Marcus)
3. All Pompeians were Romans.
   ∀x (Pompeian(x) → Roman(x))
4. Caesar was a ruler.
   ruler(Caesar)
5. All Romans were either loyal to Caesar or hated him.
   ∀x (Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar))
6. Everyone is loyal to someone.
   ∀x ∃y (person(x) → person(y) ∧ loyalto(x, y))
7. People only try to assassinate rulers they are not loyal to.
   ∀x ∀y (person(x) ∧ ruler(y) ∧ tryassassinate(x, y) → ¬loyalto(x, y))
8. Marcus tried to assassinate Caesar.
   tryassassinate(Marcus, Caesar)
9. All men are persons.
   ∀x (man(x) → person(x))

Example

Trace the operation of the unification algorithm on each of the following pairs of literals:

i) f(Marcus) and f(Caesar)


ii) f(x) and f(g(y))
iii) f(Marcus, g(x, y)) and f(x, g(Caesar, Marcus))

In propositional logic it is easy to determine that two literals cannot both be true at the same time: simply look for L and ~L. In predicate logic, this matching process is more complicated, since the bindings of variables must be considered.

For example, man(john) and ~man(john) is a contradiction, while man(john) and ~man(Himalayas) is not. Thus, in order to determine contradictions, we need a matching procedure that compares two literals and discovers whether there exists a set of substitutions that makes them identical. There is a recursive procedure that does this matching; it is called the unification algorithm.
In the unification algorithm each literal is represented as a list, where the first element is the name of a predicate and the remaining elements are arguments. An argument may be a single element (atom) or may be another list. For example, we can have literals such as

(tryassassinate Marcus Caesar)


(tryassassinate Marcus (ruler of Rome))

To unify two literals, first check whether their first elements are the same. If so, proceed; otherwise they cannot be unified. For example, the literals

(tryassassinate Marcus Caesar)
(hate Marcus Caesar)

cannot be unified. The unification algorithm recursively matches pairs of elements, one pair at a time. The matching rules are:

i) Different constants, functions, or predicates cannot match, whereas identical ones can.

ii) A variable can match another variable, any constant, or a function or predicate expression, subject to the condition that the function or predicate expression must not contain any instance of the variable being matched (otherwise it will lead to infinite recursion).

iii) The substitution must be consistent. Substituting y for x now and then z for x later is inconsistent (a substitution of y for x is written as y/x).

The Unification algorithm is listed below as a procedure UNIFY (L1, L2). It returns a
list representing the composition of the substitutions that were performed during the match.
An empty list NIL indicates that a match was found without any substitutions. If the list
contains a single value F, it indicates that the unification procedure failed.

UNIFY (L1, L2)

1. If L1 or L2 is an atom (a variable or a constant), then do:
   (a) if L1 and L2 are identical, then return NIL
   (b) else if L1 is a variable, then do
       (i) if L1 occurs in L2 then return F, else return (L2/L1)
   (c) else if L2 is a variable, then do
       (i) if L2 occurs in L1 then return F, else return (L1/L2)
   (d) else return F.
2. If length(L1) is not equal to length(L2), then return F.
3. Set SUBST to NIL
   (at the end of this procedure, SUBST will contain all the substitutions used to unify L1 and L2).
4. For i = 1 to the number of elements in L1 do:
   (i) call UNIFY with the i-th element of L1 and the i-th element of L2, putting the result in S
   (ii) if S = F then return F
   (iii) if S is not equal to NIL then do
        (A) apply S to the remainder of both L1 and L2
        (B) SUBST := APPEND(S, SUBST)
5. Return SUBST.
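The listing above can be turned into running code. Below is a compact Python sketch of the same recursive matching; it represents literals as tuples whose first element is the predicate or function name, treats lowercase strings as variables and capitalized strings as constants, and returns a substitution dictionary (or None on failure) instead of the NIL/F conventions used above. These representation choices are assumptions of the sketch.

def is_variable(t):
    """Variables are lowercase strings such as 'x' or 'y'; constants are capitalized."""
    return isinstance(t, str) and t[0].islower()

def substitute(t, subst):
    """Apply a substitution to a term, following chains of bindings."""
    if is_variable(t):
        return substitute(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):
        return tuple(substitute(a, subst) for a in t)
    return t

def occurs(var, t, subst):
    """Occurs check: a variable must not appear inside the term it is bound to."""
    t = substitute(t, subst)
    if t == var:
        return True
    return isinstance(t, tuple) and any(occurs(var, a, subst) for a in t)

def unify(l1, l2, subst=None):
    """Return a substitution unifying l1 and l2, or None if they cannot match."""
    if subst is None:
        subst = {}
    l1, l2 = substitute(l1, subst), substitute(l2, subst)
    if l1 == l2:
        return subst
    if is_variable(l1):
        return None if occurs(l1, l2, subst) else {**subst, l1: l2}
    if is_variable(l2):
        return None if occurs(l2, l1, subst) else {**subst, l2: l1}
    if isinstance(l1, tuple) and isinstance(l2, tuple):
        if l1[0] != l2[0] or len(l1) != len(l2):   # predicate name and arity must match
            return None
        for a, b in zip(l1[1:], l2[1:]):           # match the arguments pairwise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                    # different constants cannot match

# The three pairs from the exercise above:
print(unify(("f", "Marcus"), ("f", "Caesar")))          # None: constants differ
print(unify(("f", "x"), ("f", ("g", "y"))))             # {'x': ('g', 'y')}
print(unify(("f", "Marcus", ("g", "x", "y")),
            ("f", "x", ("g", "Caesar", "Marcus"))))     # None: x cannot be both Marcus and Caesar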

Consider a knowledge base containing just two sentences: P(a) and P(b). Does this knowledge base entail ∀x P(x)? Explain your answer in terms of models.

The knowledge base does not entail ∀x P(x). To show this, we must give a model where P(a) and P(b) hold but ∀x P(x) is false. Consider any model with three domain elements, where a and b refer to the first two elements and the relation referred to by P holds only for those two elements.

Is the sentence ∃ x, y x = y valid? Explain

The sentence ∃ x, y x=y is valid. A sentence is valid if it is true in every model. An


existentially quantified sentence is true in a model if it holds under any extended
interpretation in which its variables are assigned to domain elements. According to the
standard semantics of FOL as given in the chapter, every model contains at least one domain
element, hence, for any model, there is an extended interpretation in which x and y are
assigned to the first domain element. In such an interpretation, x=y is true.

What is the ontological commitment (what exists in the world) of first-order logic? Represent the sentence "Brothers are siblings" in first-order logic.

Ontological commitment means the assumptions a language makes about the nature of reality. The representation of "Brothers are siblings" in first-order logic is ∀x, y [Brother(x, y) ⇒ Siblings(x, y)].

Differentiate between propositional logic and first-order predicate logic.

Following are the comparative differences between propositional logic and first-order logic.

1) Propositional logic is less expressive and does not reflect an individual object's properties explicitly. First-order logic is more expressive and can represent an individual object along with all its properties.
2) Propositional logic cannot represent relationships among objects, whereas first-order logic can.
3) Propositional logic does not consider generalization of objects, whereas first-order logic handles generalization.
4) Propositional logic includes sentence letters (A, B, C) and logical connectives, but no quantifiers. First-order logic has the same connectives as propositional logic, but it also has variables for individual objects, quantifiers, symbols for functions, and symbols for relations.

Represent the following sentence in predicate form:

"All the children like sweets"

∀x ∀y child(x) ∧ sweet(y) ⇒ likes(x, y)

Illustrate the use of first-order logic to represent knowledge.

The best way to see how first-order logic is used is through examples taken from some simple domains. In knowledge representation, a domain is just some part of the world about which we wish to express some knowledge.

Assertions and queries in first-order logic: sentences are added to a knowledge base using TELL, exactly as in propositional logic. Such sentences are called assertions. For example, where KB is the knowledge base, we can assert that John is a king and that kings are persons: TELL(KB, King(John)) and TELL(KB, ∀x King(x) ⇒ Person(x)). We can ask questions of the knowledge base using ASK; questions asked using ASK are called queries or goals. For example, ASK(KB, Person(John)) returns true, and we can also pose quantified queries such as ASK(KB, ∃x Person(x)).

The kinship domain: the first example we consider is the domain of family relationships, or kinship. This domain includes facts such as "Elizabeth is the mother of Charles" and "Charles is the father of William", and rules such as "One's grandmother is the mother of one's parent." Clearly, the objects in our domain are people. We will have two unary predicates, Male and Female. Kinship relations (parenthood, brotherhood, marriage, and so on) will be represented by binary predicates: Parent, Sibling, Brother, Sister, Child, Daughter, Son, Spouse, Husband, Grandparent, Grandchild, Cousin, Aunt, and Uncle. We will use functions for Mother and Father.

What are the characteristics of multi-agent systems?

Each agent has only incomplete information and is restricted in its capabilities. System control is distributed and data is decentralized. Computation is asynchronous. Multi-agent environments are typically open and have no centralized designer. They provide an infrastructure specifying communication and interaction protocols, and their agents are autonomous and distributed, and may be self-interested or cooperative.
