
Artificial Intelligence

4. Knowledge representation and reasoning

Er. Nirmal Thapa


Email: [email protected]
Lumbini Engineering College
Pokhara University
Approaches to knowledge representation
Knowledge
§ Knowledge is a familiarity with someone or something, which can include information, facts,
descriptions, or skills acquired through experience or education.
§ It can refer to the theoretical or practical understanding of a subject.
§ In normal conversation we use knowledge to mean:
§ Knowing that (facts and information) and Knowing how (the ability to do something)
§ The state or fact of knowing.
§ Familiarity, awareness, or understanding gained through experience or study.
§ The sum or range of what has been perceived, discovered, or learned.
§ Specific information about something.
Approaches to knowledge representation
§ Knowledge is “the sum of what is known: the body of truth, information, and principles
acquired by mankind.”
§ There are many other definitions such as:
§ Knowledge is "information combined with experience, context, interpretation, and reflection. It is a
high-value form of information that is ready to apply to decisions and actions." (Davenport et al.,
1998)
§ Knowledge is “human expertise stored in a person’s mind, gained through experience, and
interaction with the person’s environment." (Sewery, 2002)
§ Knowledge is “information evaluated and organized by the human mind so that it can be used
purposefully, e.g., conclusions or explanations." (Rousa, 2002)
Approaches to knowledge representation
§ Knowledge consists of information that has been:
§ interpreted,
§ categorised,
§ applied, experienced and revised.
§ In general, knowledge is more than just data.
§ It consists of facts, ideas, beliefs, heuristics, rules, relationships, and customs.
Approaches to knowledge representation
Types of Knowledge:
Approaches to knowledge representation
§ Knowledge is important in AI for making intelligent machines. Key
issues confronting the designer of AI system are:
§ Knowledge acquisition: Gathering the knowledge from the problem domain
to solve the AI problem.
§ Knowledge representation: Expressing the identified knowledge into some
knowledge representation language such as propositional logic, predicate logic
etc.
§ Knowledge manipulation: A large volume of knowledge has no meaning until it is processed to deduce its hidden aspects.
§ Knowledge is manipulated to draw conclusions from knowledge base.
Approaches to knowledge representation
Knowledge representation:
§ Knowledge representation (KR) is the study of how knowledge about the world can be
represented and what kinds of reasoning can be done with that knowledge.
§ Method used to encode knowledge in Intelligent Systems.
§ Used to achieve intelligent behavior: the fundamental goal of KR is to represent knowledge in a manner that facilitates inferencing (i.e. drawing conclusions) from that knowledge.
§ A successful representation of some knowledge must be in a form that is understandable by humans, and must cause the system using the knowledge to behave as if it knows it.
Approaches to knowledge representation
What is Knowledge Representation in AI?
§ Knowledge representation in AI means how AI agents think and how thinking contributes
to behavioural intelligence of the AI agent.
§ Knowledge expressed in human natural language is not understandable to a machine without a proper representation.
§ Knowledge representation in AI offers a way to represent information about the real world so that a computer can understand it easily.
§ It is very important for a computer to understand knowledge
§ in order to utilize that knowledge to solve complex real-world problems,
§ such as communicating with humans or
§ diagnosing a medical report.
Approaches to knowledge representation
What is Knowledge Representation in AI?
§ Knowledge Representation in AI (KR in AI) is not just storing data in a database.
§ It also ensures that the machine can learn from that knowledge and experience so that it can behave like a human.
Elements in Knowledge Representation
Issues in knowledge representation
Some issues that arise in knowledge representation from an AI perspective are:
§ How do people represent knowledge?
§ What is the nature of knowledge and how do we represent it?
§ Should a representation scheme deal with a particular domain or should it be general purpose?
§ How expressive is a representation scheme or formal language?
§ Should the scheme be declarative or procedural?
Issues in knowledge representation
Some issues that arise in knowledge representation from an AI perspective are:
§ Important attributes:
§ Are there any attributes of objects so basic that they occur in almost every problem domain?
§ Relationships among attributes:
§ Are there any important relationships that exist among object attributes?
§ Choosing granularity:
§ At what level of detail should the knowledge be represented?
§ Sets of objects:
§ How should sets of objects be represented?
§ Finding the right structure:
§ Given a large amount of stored knowledge, how can the relevant parts be accessed?
Logic
§ Logic is concerned with the truth of statements about the world.
§ Generally each statement is either TRUE or FALSE.
Logic includes: syntax, semantics, and an inference procedure.
Syntax:
§ Specifies the symbols in the language and how they can be combined to form sentences. The facts about the world are represented as sentences in logic.
Semantics:
§ Specifies how to assign a truth value to a sentence based on its meaning in the world. It specifies what facts a sentence refers to.
§ A fact is a claim about the world, and it may be TRUE or FALSE.

Logics are of different types: propositional logic, predicate logic, temporal logic, modal logic, description logic, etc.
Propositional Logic

§ Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions.
§ Basic building block of logic
§ A proposition is a declarative statement which is either true or false.
§ It is a technique of knowledge representation in logical and mathematical form.
Example:
a) It is Sunday.
b) The Sun rises from West (False proposition)
c) 3+3= 7 (False proposition)
d) 5 is a prime number.
Propositional Logic
Some basic facts about propositional logic:
§ Propositional logic is also called Boolean logic as it works on 0 and 1.
§ In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol for representing a proposition, such as A, B, C, P, Q, R, etc.
§ A proposition can be either true or false, but not both.
§ Propositional logic consists of an object, relations or function, and logical connectives.
§ These connectives are also called logical operators.
§ The propositions and connectives are the basic elements of propositional logic.
§ A connective is a logical operator that connects two sentences.
§ A proposition which is always true is called a tautology, and it is also called a valid sentence.
§ A proposition that is always false is called a contradiction.
§ Statements that are questions, commands, or opinions, such as "Where is H", "How are you", or "What is your name", are not propositions.
Propositional Logic
Syntax of propositional logic:
§ The syntax of propositional logic defines the allowable sentences for the knowledge representation.
There are two types of Propositions:
§ Atomic Propositions
§ They are simple propositions; each consists of a single proposition symbol.
§ These are sentences that must be either true or false.
Example:
§ 6+2 is 8, it is an atomic proposition as it is a true fact.
§ "The Snow is hot" is also an atomic proposition as it is a false fact.
§ Compound propositions
They are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.
Example:
§ "It is raining today, and road is wet."
§ “Hari is a farmer, and his farm is in Butwal."
Propositional Logic
§ If a proposition is true, then its truth value is "true".
§ If a proposition is false, then its truth value is "false".
Logical Connectives
§ The standard connectives are negation (¬), conjunction (∧), disjunction (∨), implication (→), and biconditional (↔).

Truth Table: ?
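The small Python sketch below is added as an illustration of how the connectives and their truth table can be checked mechanically; the helper names (implies, iff) are our own, not from the slides.

```python
# Print a truth table for the propositional connectives.
# `implies` and `iff` are illustrative helper functions.
import itertools

def implies(p, q):   # P -> Q is false only when P is true and Q is false
    return (not p) or q

def iff(p, q):       # P <-> Q is true when both sides have the same truth value
    return p == q

print("P      Q      ~P     P^Q    PvQ    P->Q   P<->Q")
for p, q in itertools.product([True, False], repeat=2):
    row = [p, q, not p, p and q, p or q, implies(p, q), iff(p, q)]
    print("  ".join(f"{str(v):<5}" for v in row))
```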
Limitations of Propositional logic:
§ We cannot represent relations like ALL, some, or none with propositional logic.
Example:
§ All the girls are smart.
§ Some strawberries are sour.
§ Propositional logic has limited expressive power.
§ In propositional logic, we cannot describe statements in terms of their properties or logical
relationships.
Predicate Logic or First order predicate Logic (FOPL)
§ PL is not sufficient to represent the complex sentences or natural
language statements.
§ PL has very limited expressive power.
§ Consider the following sentence, which we cannot represent using PL
logic.
§ "Some humans are intelligent", or
§ "Sachin likes cricket."
§ To represent the above statements, PL is not sufficient, so we require a more powerful logic, such as first-order logic.
Predicate Logic or First order predicate Logic (FOPL)

§ In AI, first-order logic is another method of KR. It extends propositional logic.
§ FOL is expressive enough to convey natural language statements succinctly.
§ Predicate logic and first-order predicate logic are other names for first-order logic.
§ FOPL is a sophisticated language that makes it easier to express information about objects and to articulate relationships between them.
§ Like propositional logic, first-order logic assumes that the world contains facts, but (like natural language) it also assumes the following things in the world:
§ Objects: A, B, people, numbers, colors, squares, pits, wars, theories, ...
§ Relations: these can be unary relations such as red, round, is adjacent, or n-ary relations such as sister of, brother of, has color, comes between
§ Functions: father of, best friend, third inning of, end of, ...
§ First-order logic also has two main parts as a natural language:
a)Syntax
b)Semantics
Predicate Logic (PL) or First order predicate Logic (FOPL)

Quantifiers in First-order logic:


§ Quantification describes the quantity of specimens in the universe of discourse, and a quantifier is a linguistic element that generates quantification.
§ These are the symbols that allow you to determine or identify the variable's range
and scope in a logical expression.
§ There are two different kinds of quantifiers:
§ Universal Quantifier, (for all, everyone, everything)
§ Existential quantifier, (for some, at least one).
Predicate Logic (PL) or First order predicate Logic (FOPL)

Universal Quantifier:
§ A universal quantifier is a logical symbol that indicates that a statement inside its range is
true for everything or every instance of a specific thing.
§ The symbol ∀, which resembles an inverted A, is used to represent the universal quantifier.
Note: the universal quantifier is typically used with implication (→).
If x is a variable, then ∀x is read as:
• For all x
• For each x
• For every x.
Example:
All men drink coffee: ∀x man(x) → drink(x, coffee).
Predicate Logic (PL) or First order predicate Logic (FOPL)

Existential Quantifier:
§ Existential quantifiers are a sort of quantifier that expresses that a statement is true for at
least one instance of something within its scope.
§ The symbol ∃, which resembles a mirrored E, is used to represent it. It is referred to as an existential quantifier when it is used with a predicate variable.
§ Note: we typically use the conjunction symbol (∧) with existential quantifiers.
§ If x is a variable, then ∃x is read as:
§ There exists an x
§ For some x
§ For at least one x
Example:
§ Some boys are intelligent. ∃x: boys(x) ∧ intelligent(x)
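To make the two quantifiers concrete, here is a small Python sketch over a finite, purely illustrative domain (the names and truth assignments are our own) that evaluates ∀x man(x) → drink(x, coffee) and ∃x boy(x) ∧ intelligent(x).

```python
# Illustrative finite domain for evaluating quantified statements.
people = ["Ram", "Hari", "Sita"]
man = {"Ram": True, "Hari": True, "Sita": False}
drinks_coffee = {"Ram": True, "Hari": True, "Sita": False}
boy = {"Ram": True, "Hari": True, "Sita": False}
intelligent = {"Ram": False, "Hari": True, "Sita": True}

# Universal quantifier with implication: for all x, man(x) -> drinks_coffee(x)
all_men_drink_coffee = all((not man[x]) or drinks_coffee[x] for x in people)

# Existential quantifier with conjunction: there exists x, boy(x) ^ intelligent(x)
some_boys_intelligent = any(boy[x] and intelligent[x] for x in people)

print(all_men_drink_coffee)    # True in this sample domain
print(some_boys_intelligent)   # True: Hari is a boy and intelligent
```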
Rules of Inference
Inference:
§ In artificial intelligence, we need intelligent computers which can create new conclusions from existing knowledge or evidence;
§ generating conclusions from evidence and facts is termed inference.
Inference rules:
§ Inference rules are the templates for generating valid arguments.
§ Inference rules are applied to derive proofs in artificial intelligence, and the proof is a sequence of the
conclusion that leads to the desired goal.
§ In inference rules, the implication among all the connectives plays an important role.
§ Following are some terminologies related to inference rules:
§ Implication: It is one of the logical connectives which can be represented as P → Q.
§ It is a Boolean expression.
§ Converse: The converse of an implication swaps its two sides, so the right-hand side proposition goes to the left-hand side and vice versa.
§ It can be written as Q → P.
§ Contrapositive: Negating both sides of the converse gives the contrapositive.
§ It can be represented as ¬Q → ¬P.
§ Inverse: Negating both sides of the implication gives the inverse.
§ It can be represented as ¬P → ¬Q.
Rules of Inference
Types of Inference rules:
1. Modus Ponens:
§ The Modus Ponens rule is one of the most important rules of inference, and it states that if P
and P → Q is true, then we can infer that Q will be true.
§ It can be represented as:

Example:
§ Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
§ Statement-2: "I am sleepy" ==> P
§ Conclusion: "I go to bed." ==> Q.
§ Hence, we can say that, if P→ Q is true and P is true then Q will be true.
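One way to convince yourself that Modus Ponens (and the other rules that follow) is valid is a brute-force truth-table check: the conclusion must hold in every row where all premises hold. A minimal Python sketch with our own helper names:

```python
# Check the validity of an inference rule by enumerating all truth assignments.
import itertools

def implies(p, q):
    return (not p) or q

def rule_is_valid(premises, conclusion, num_vars=2):
    """The rule is valid if the conclusion is true whenever all premises are true."""
    for values in itertools.product([True, False], repeat=num_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False
    return True

# Modus Ponens: from P -> Q and P, infer Q
print(rule_is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                    lambda p, q: q))          # True

# Modus Tollens: from P -> Q and ~Q, infer ~P
print(rule_is_valid([lambda p, q: implies(p, q), lambda p, q: not q],
                    lambda p, q: not p))      # True
```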
Rules of Inference
Types of Inference rules:
2. Modus Tollens:
§ The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true.
§ It can be represented as:

Example
§ Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
§ Statement-2: "I do not go to the bed."==> ~Q
§ Statement-3: Which infers that "I am not sleepy" => ~P
Rules of Inference
Types of Inference rules:
3. Hypothetical Syllogism:
§ According to the Hypothetical Syllogism rule, if P→Q is true and Q→R is true, then P→R is true.
§ It can be represented as the following notation:

Example:
§ Statement-1: If you have my home key then you can unlock my home. P→Q
§ Statement-2: If you can unlock my home then you can take my money. Q→R
§ Conclusion: If you have my home key then you can take my money. P→R
Rules of Inference
Types of Inference rules:

4. Disjunctive Syllogism:
§ The Disjunctive Syllogism rule states that if P∨Q is true and ¬P is true, then Q will be true.
§ It can be represented as:

Example:
§ Statement-1: Today is Sunday or Monday. ==>P∨Q
§ Statement-2: Today is not Sunday. ==> ¬P
§ Conclusion: Today is Monday. ==> Q
Rules of Inference
Types of Inference rules:
5. Addition:
§ The Addition rule is one of the common inference rules, and it states that if P is true, then P∨Q will be true.

Example:
Statement: I have vanilla ice-cream. ==> P
Let Q: I have chocolate ice-cream. ==> Q
Conclusion: I have vanilla or chocolate ice-cream. ==> (P∨Q)
Rules of Inference
Types of Inference rules:
6. Simplification:
§ The Simplification rule states that if P∧Q is true, then P (and likewise Q) will also be true.
§ It can be represented as:

Example:
Statement: He studies very hard and he is the best boy in the class. ==> P ∧ Q
Therefore: He studies very hard. ==> P
Rules of Inference
Types of Inference rules:
7. Constructive Dilemma:
§ If (P→Q)∧(R→S) and P∨R are two premises, we can use constructive dilemma to derive
Q∨S
§ It can be represented as:

Example
§ “If it rains, I will take a leave”, (P→Q)
§ “If it is hot outside, I will go for a shower”, (R→S)
§ “Either it will rain or it is hot outside”, P∨R
§ Therefore − "I will take a leave or I will go for a shower"
Rules of Inference
Types of Inference rules:
8. Destructive Dilemma:
§ If (P→Q)∧(R→S) and ¬Q∨¬S are two premises, we can use destructive dilemma to
derive ¬P∨¬R
§ It can be represented as:

Example
“If it rains, I will take a leave”, (P→Q)
“If it is hot outside, I will go for a shower”, (R→S)
“Either I will not take a leave or I will not go for a shower”, ¬Q∨¬S
Therefore − "Either it does not rain or it is not hot outside"
Rules of Inference
Types of Inference rules:
9. Resolution:
§ The Resolution rule states that if P∨Q and ¬P∨R are true, then Q∨R will also be true.
§ It can be represented as:

Example:
Statement 1: “I will study discrete math.” ==> P
Statement 2: “I will study databases.” ==> Q
Statement 3: “I will study English literature.” ==> R
Premise 1: “I will study discrete math or I will study databases.” ==> P∨Q
Premise 2: “I will not study discrete math or I will study English literature.” ==> ¬P∨R
Conclusion: “Therefore, I will study databases or I will study English literature.” ==> Q∨R
Statistical Reasoning
Statistical Reasoning

Symbolic versus Statistical reasoning


The symbolic methods basically represent an uncertain belief as being:
§ True
§ False or
§ Neither True nor False
Some methods also have problems with:
§ Incomplete Knowledge
§ Contradictions in the knowledge
Statistical Reasoning

§ Statistical methods provide a way of representing beliefs that are uncertain but for which there may be some supporting (or contradictory) evidence.
§ This is useful for dealing with problems where there is randomness and unpredictability (such as in games of chance).
§ To do all this in a principled way requires techniques for probabilistic
reasoning
Statistical Reasoning

Probabilistic reasoning
§ Way of knowledge representation where we apply the concept of
probability to indicate the uncertainty in knowledge
§ In the real world, there are many scenarios where the certainty of something is not confirmed, such as:
§ “It will rain today”,
§ “behavior of someone for some situations”,
§ “A match between two teams of two players”
§ These are probable sentences: we can assume that they may happen but cannot be sure, so here we use probabilistic reasoning.
Statistical Reasoning
Joint probability distribution:

If we have variables x1, x2, x3, ..., xn, then the probability of a particular combination of x1, x2, x3, ..., xn is known as the joint probability distribution.

P[x1, x2, x3, ..., xn] can be written in the following way in terms of conditional probabilities (the chain rule):

= P[x1 | x2, x3, ..., xn] P[x2, x3, ..., xn]

= P[x1 | x2, x3, ..., xn] P[x2 | x3, ..., xn] ... P[xn-1 | xn] P[xn]

In a Bayesian network the conditional for each variable Xi simplifies to its parents:
P(Xi | Xi-1, ..., X1) = P(Xi | Parents(Xi))
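Written out for three variables, and in its general parent-based form, the same factorisation reads as follows (standard notation, added here only for readability):

```latex
% Chain rule for three variables:
P(x_1, x_2, x_3) = P(x_1 \mid x_2, x_3)\, P(x_2 \mid x_3)\, P(x_3)

% In a Bayesian network each variable is conditioned only on its parents,
% so the joint distribution factorises as:
P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\big(X_i \mid \mathrm{Parents}(X_i)\big)
```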
Statistical Reasoning
Bayesian Networks:
§ A Bayesian network (or a belief network) is a probabilistic graphical model that
represents a set of variables and their probabilistic independencies.
§ For example, a Bayesian network could represent the probabilistic
relationships between diseases and symptoms.
§ Given symptoms, the network can be used to compute the probabilities of the
presence of various diseases.
§ Bayesian Networks are also called Bayes nets, Bayesian Belief Networks (BBNs), Belief Networks, or Causal Probabilistic Networks (CPNs).
Statistical Reasoning
Bayesian Networks in Artificial Intelligence:
§ Bayesian networks are probabilistic, because these networks are built from a
probability distribution
§ Real world applications are probabilistic in nature, and to represent the
relationship between multiple events, we need a Bayesian network
§ It can also be used in various tasks including prediction, anomaly detection,
diagnostics, automated insight, reasoning, time series prediction, and decision
making under uncertainty.
§ A Bayesian network can be used for building models from data and expert opinions, and it consists of two parts:
§ Directed Acyclic Graph
§ Table of conditional probabilities.
§ The generalized form of a Bayesian network that represents and solves decision problems under uncertain knowledge is known as an influence diagram.
Statistical Reasoning
Bayesian Networks in Artificial Intelligence:
A Bayesian network graph is made up of nodes and Arcs (directed links), where:
Statistical Reasoning
Bayesian Networks in Artificial Intelligence:
§ Each node corresponds to a random variable, which can be continuous or discrete.
§ Arc or directed arrows represent the causal relationship or conditional
probabilities between random variables. These directed links or arrows connect
the pair of nodes in the graph.
§ These links represent that one node directly influences the other; if there is no directed link between two nodes, they are independent of each other.
§ In the above diagram, A, B, C, and D are random variables represented by the
nodes of the network graph.
§ If we are considering node B, which is connected with node A by a directed
arrow, then node A is called the parent of Node B.
§ Node C is independent of node A.
Statistical Reasoning
Bayesian Networks in Artificial Intelligence:
§ The Bayesian network has mainly two components:
§ Causal Component
§ Actual numbers
§ Each node has a conditional probability distribution P(Xi | Parents(Xi)), which determines the effect of the parents on that node.
§ A Bayesian network is based on the joint probability distribution and conditional probability.
Explanation of Bayesian Network:
§ Here’s an example to better understand the concept.
§ You have installed a burglar alarm at home. The alarm not only detects burglary but also
responds to minor earthquakes. You have two neighbors, Hari and Gita, who have agreed
to get in touch with you when the alarm rings. Hari calls you when he hears the alarm but
sometimes confuses it with the telephone ringing and calls. On the other hand, Gita is a
music lover who sometimes misses the alarm due to the loud music she plays.
Explanation of Bayesian Network:
Problem:
§ Based on the evidence on who will or will not call, find the probability of a burglary
occurring in the house.
§ In a Bayesian network, we can see nodes as random variables.
§ There are five nodes:
1. Burglary (B)
2. Earthquake (E)
3. Alarm (A)
4. Hari calls (C)
5. Gita calls (M)
Explanation of Bayesian Network:
§ Links act as causal dependencies that define the relationship between the nodes. Both
Hari and Gita call when there is an alarm.
§ Let’s write the joint probability distribution formula for the above five nodes:
P(B, E, A, C, M) = P(B) P(E) P(A | B, E) P(C | A) P(M | A)
§ Now, let's look at the observed values for each of the nodes with the table of
probabilities:
Explanation of Bayesian Network:
Node B and Node E: prior probability tables P(B) and P(E).
Node A: conditional probability table P(A | B, E).
Explanation of Bayesian Network:
Node C: conditional probability table P(C | A).
Node M: conditional probability table P(M | A).
Explanation of Bayesian Network:
§ From the formula of the joint distribution, we can write the problem statement in the form of a probability distribution:
P(C, M, A, ¬B, ¬E) = P (C|A) *P (M|A)*P (A|¬B ^ ¬E) *P (¬B) *P (¬E).
= 0.85* 0.90* 0.001* 0.985*0.999
= 0.0007527

Hence, a Bayesian network can answer any query about the domain by using Joint distribution.
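A small Python sketch of this calculation using the factorisation above; note that the mapping of each number to a table entry is our reading of the order of the multiplication on the slide, so treat the variable names as assumptions.

```python
# Joint probability P(C, M, A, ~B, ~E) for the burglar-alarm network,
# using P(C|A) * P(M|A) * P(A|~B,~E) * P(~B) * P(~E).
# The numbers are taken from the slide's multiplication in that order.
p_c_given_a     = 0.85    # P(Hari calls | Alarm)
p_m_given_a     = 0.90    # P(Gita calls | Alarm)
p_a_given_nb_ne = 0.001   # P(Alarm | no Burglary, no Earthquake)
p_not_b         = 0.985   # P(no Burglary)
p_not_e         = 0.999   # P(no Earthquake)

joint = p_c_given_a * p_m_given_a * p_a_given_nb_ne * p_not_b * p_not_e
print(joint)   # ~0.00075277, which the slide reports (truncated) as 0.0007527
```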
The semantics of Bayesian Network:
There are two ways to understand the semantics of a Bayesian network, which are given below:
1. To understand the network as the representation of the Joint probability distribution.
It is helpful to understand how to construct the network.
2. To understand the network as an encoding of a collection of conditional independence
statements.
It is helpful in designing inference procedures.
Resolution Refutation System
§ Resolution is a theorem proving technique that proceeds by building
refutation proofs, i.e., proofs by contradictions
§ A resolution refutation proof is proof by contradiction using resolution
§ Invented by the mathematician John Alan Robinson in 1965.
§ It is used when several statements are given and we need to prove a conclusion from those statements.
§ Unification is a key concept in proofs by resolutions
§ Unification is a process of making two different logical atomic expressions
identical by finding a substitution.
§ It takes two literals as input and makes them identical using substitution.
§ It is a single inference rule which can efficiently operate on the
conjunctive normal form (CNF) or Clausal form.
§ CNF is a particular way of writing logical formulas: a formula written as a conjunction of clauses (clauses joined by AND, where each clause is a disjunction of literals) is said to be in Conjunctive Normal Form.
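As a quick illustration of clausal form, the sketch below uses the sympy library (assumed to be available; this is only a convenience check, not part of the resolution procedure itself) to convert a small formula into CNF.

```python
# Converting a formula to Conjunctive Normal Form with sympy.
from sympy import symbols
from sympy.logic.boolalg import to_cnf

P, Q, R = symbols('P Q R')
formula = (P >> Q) & (Q >> R)   # (P -> Q) AND (Q -> R)
print(to_cnf(formula))          # (Q | ~P) & (R | ~Q): a conjunction of clauses
```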
Resolution Refutation System
Resolution is one kind of proof technique that works this way –
I. Select two clauses that contain conflicting terms
II. Combine those two clauses and
III. Cancel out the conflicting terms.
Terminology:
§ The pair of clauses being resolved are called the parent clauses.
§ The resulting clause is called the resolvent.
§ Choosing the correct pair of parent clauses is a matter of search.
Resolution Refutation System
Steps for Resolution:
1. Conversion of facts into first-order-logic
2. Convert FOL statements into CNF
3. Negate the statement which needs to prove (proof by
contradiction)
4. Draw resolution graph (unification)
Resolution Refutation System
2. Converting to Clause (CNF) Form
Algorithm: Resolution Refutation System
Resolution Refutation System
For example, we have the following statements:
(1) If it is a pleasant day you will do strawberry picking
(2) If you are doing strawberry picking you are happy.
The above statements can be written in propositional logic like this:
(1) strawberry_picking ← pleasant
(2) happy ← strawberry_picking
These statements can again be written in CNF like this:
(1) (strawberry_picking ∨~pleasant) ∧
(2) (happy ∨~strawberry_picking)
By resolving these two clauses and cancelling out the conflicting terms
'strawberry_picking' and '~strawberry_picking',
we can have one new clause,
(3) ~pleasant ∨ happy
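A minimal Python sketch of this single resolution step, representing each clause as a set of literal strings with '~' marking negation; the helper names are our own.

```python
# One propositional resolution step: cancel a complementary pair of literals
# and take the union of the remaining literals of the two clauses.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return every clause obtainable by resolving clause c1 with clause c2."""
    results = []
    for lit in c1:
        if negate(lit) in c2:
            results.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return results

clause1 = {"strawberry_picking", "~pleasant"}   # pleasant -> strawberry_picking
clause2 = {"happy", "~strawberry_picking"}      # strawberry_picking -> happy
print(resolve(clause1, clause2))                # [{'~pleasant', 'happy'}]
```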
Resolution Refutation System
§ Assume the following facts: 2020 Fall 3b: [8 marks]
i. John likes all kinds of food
ii. Apples and vegetables are food
iii. Anything anyone eats and is not killed is food
iv. Anil eats peanuts and is still alive
v. Harry eats everything that Anil eats

Prove that John likes peanuts using resolution.


Resolution Refutation System
Step-1: Conversion of Facts into FOL
In the first step we will convert all the given statements into first-order logic:
a) ∀x food(x) → likes(John, x)
b) food(Apple) Λ food(vegetables)
c) ∀x ∀y eats(x, y) Λ ¬ killed(x) → food(y)
d) eats (Anil, Peanuts) Λ alive(Anil)
e) ∀x eats(Anil, x) → eats(Harry, x)
f) ∀x ¬ killed(x) → alive(x)
g) ∀x alive(x) → ¬ killed(x)
h) likes(John, Peanuts).

Step-2: Conversion of FOL into CNF
1. Eliminate all implications (→) and rewrite
a) ∀x ¬ food(x) V likes(John, x)
b) food(Apple) Λ food(vegetables)
c) ∀x ∀y ¬ [eats(x, y) Λ ¬ killed(x)] V food(y)
d) eats (Anil, Peanuts) Λ alive(Anil)
e) ∀x ¬ eats(Anil, x) V eats(Harry, x)
f) ∀x ¬ [¬ killed(x)] V alive(x)
g) ∀x ¬ alive(x) V ¬ killed(x)
h) likes(John, Peanuts).

2. Move negation (¬) inwards and rewrite
a) ∀x ¬ food(x) V likes(John, x)
b) food(Apple) Λ food(vegetables)
c) ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
d) eats (Anil, Peanuts) Λ alive(Anil)
e) ∀x ¬ eats(Anil, x) V eats(Harry, x)
f) ∀x killed(x) V alive(x)
g) ∀x ¬ alive(x) V ¬ killed(x)
h) likes(John, Peanuts).
Resolution Refutation System
3. Rename variables or standardize variables
a) ∀x ¬ food(x) V likes(John, x)
b) food(Apple) Λ food(vegetables)
c) ∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
d) eats (Anil, Peanuts) Λ alive(Anil)
e) ∀w ¬ eats(Anil, w) V eats(Harry, w)
f) ∀g killed(g) V alive(g)
g) ∀k ¬ alive(k) V ¬ killed(k)
h) likes(John, Peanuts).

4. Eliminate existential quantifiers.
In this step, we would eliminate any existential quantifier ∃; this process is known as Skolemization. In this example problem there is no existential quantifier, so all the statements remain the same in this step.

5. Drop universal quantifiers.
In this step we drop all universal quantifiers, since all remaining variables are implicitly universally quantified, so we do not need them.
a) ¬ food(x) V likes(John, x)
b) food(Apple)
c) food(vegetables)
d) ¬ eats(y, z) V killed(y) V food(z)
e) eats (Anil, Peanuts)
f) alive(Anil)
g) ¬ eats(Anil, w) V eats(Harry, w)
h) killed(g) V alive(g)
i) ¬ alive(k) V ¬ killed(k)
j) likes(John, Peanuts).

6. Distribute conjunction ∧ over disjunction V.
This step will not make any change in this problem.
Resolution Refutation System
Step-3: Negate the statement to be proved
In this step, we apply negation to the conclusion statement, which will be written as
¬likes(John, Peanuts)

Step-4: Draw the resolution graph:
Now in this step, we solve the problem with a resolution tree, using substitution.
Hence the negation of the conclusion has been shown to be a complete contradiction with the given set of statements, so likes(John, Peanuts) is proved.
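The sketch below is our own illustration of the refutation loop. It works on ground instances of the clauses (the substitutions {x/Peanuts, y/Anil, z/Peanuts, k/Anil} from the proof have been applied by hand), so plain propositional resolution is enough to derive the empty clause.

```python
# Resolution refutation on ground instances of the peanuts example.
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refute(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for resolvent in resolve(c1, c2):
                    if not resolvent:        # empty clause: contradiction found
                        return True
                    new.add(resolvent)
        if new.issubset(clauses):
            return False                     # nothing new can be derived
        clauses |= new

kb = [
    frozenset({"~food(Peanuts)", "likes(John,Peanuts)"}),
    frozenset({"~eats(Anil,Peanuts)", "killed(Anil)", "food(Peanuts)"}),
    frozenset({"eats(Anil,Peanuts)"}),
    frozenset({"alive(Anil)"}),
    frozenset({"~alive(Anil)", "~killed(Anil)"}),
    frozenset({"~likes(John,Peanuts)"}),     # negated goal
]
print(refute(kb))   # True: the negated goal is contradictory, so John likes peanuts
```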
Semantic Networks and Frames
Semantic Nets
§ Semantic networks are an alternative to predicate logic for knowledge representation.
§ In Semantic networks, we can represent our knowledge in the form of graphical
networks.
§ This network consists of nodes representing objects and arcs which describe the
relationship between those objects.
§ Semantic networks can categorize the object in different forms and can also link
those objects.
§ Semantic networks are easy to understand and can be easily extended.
§ This representation consists mainly of two types of relations:
§ IS-A relation (Inheritance)
§ Kind-of-relation
Components of Semantic Nets
Lexical part
§ nodes – denoting objects
§ links – denoting relations between objects
§ labels – denoting particular objects and relations
Structural part
§ the links and nodes form directed graphs
§ the labels are placed on the links and nodes
Semantic part
§ meanings are associated with the link and node labels
§ (the details will depend on the application domain)
Procedural part
§ constructors allow creation of new links and nodes
§ destructors allow the deletion of links and nodes
§ writers allow the creation and alteration of labels
§ readers can extract answers to questions
Semantic Networks as Knowledge Representations
Using Semantic Nets for representing knowledge has particular advantages:
§ They allow us to structure the knowledge to reflect the structure of that
part of the world which is being represented.
§ The semantics, i.e. real world meanings, are clearly identifiable.
§ There are very powerful representational possibilities as a result of “is a”
and “is a part of” inheritance hierarchies.
§ They can accommodate a hierarchy of default values
§ for example, we can assume the height of an adult male to be 178cm, but
if we know he is a baseball player we should take it to be 195cm.
§ They can be used to represent events and natural language sentences.
Clearly, the notion of a semantic network is extremely general.
Semantic Nets
For example: The following nets is
intended to represent the data
1. Tom is a cat.
2. Tom caught a bird.
3. Tom is owned by John.
4. Tom is ginger in colour.
5. Cats like cream.
6. The cat sat on the mat.
7. A cat is a mammal.
8. A bird is an animal.
9. All mammals are animals.
10. Mammals have fur.
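A toy Python sketch (purely illustrative) of such a net, storing these facts as (node, relation, node) triples and following is_a links to answer an inheritance query.

```python
# A semantic net as a list of (subject, relation, object) triples.
triples = [
    ("Tom", "is_a", "cat"),
    ("Tom", "caught", "bird"),
    ("Tom", "owned_by", "John"),
    ("Tom", "colour", "ginger"),
    ("cat", "likes", "cream"),
    ("cat", "sat_on", "mat"),
    ("cat", "is_a", "mammal"),
    ("bird", "is_a", "animal"),
    ("mammal", "is_a", "animal"),
    ("mammal", "has", "fur"),
]

def is_a(node, category):
    """Follow is_a links transitively (inheritance)."""
    if node == category:
        return True
    parents = [o for s, r, o in triples if s == node and r == "is_a"]
    return any(is_a(p, category) for p in parents)

print(is_a("Tom", "animal"))   # True: Tom is_a cat is_a mammal is_a animal
print(is_a("bird", "mammal"))  # False
```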
Semantic Nets
Drawbacks in Semantic representation:
1. They take more computational time at runtime as we need to traverse the complete network
tree to answer some questions.
2. They try to model human-like memory to store the information, but in practice, it is not
possible to build such a vast semantic network.
3. These types of representations are inadequate as they do not have any equivalent quantifier,
e.g., for all, for some, none, etc.
4. Semantic networks do not have any standard definition for the link names.
5. These networks are not intelligent and depend on the creator of the system.
Advantages of Semantic network:
1. They are a natural representation of knowledge.
2. They convey meaning in a transparent manner.
3. These networks are simple and easily understandable.
Frame Representation
§ A frame is a record-like structure which consists of a collection of attributes and their values to describe an entity in the world.
§ Frames are an AI data structure which divides knowledge into substructures by representing stereotyped situations.
§ A frame consists of a collection of slots and slot values.
§ These slots may be of any type and size. Slots have names and values, which are called fillers.
Facets:
§ The various aspects of a slot are known as facets.
§ Facets are features of frames which enable us to put constraints on the frames.
§ A frame may consist of any number of slots, a slot may include any number of facets, and a facet may have any number of values.
§ A frame is also known as slot-filler knowledge representation in artificial intelligence.
Frame Representation
§ Frames are derived from semantic networks and later evolved into our modern-day classes and
objects.
§ The simplest type of frame is just a data structure with similar properties and possibilities for
knowledge representation as a semantic net.
§ Frames become much more powerful when their slots can also contain instructions (procedures)
for computing things from information in other slots or in other frames.
§ A single frame is not very useful by itself.
§ A frame system consists of a collection of frames which are connected.
§ The frame is a type of technology which is widely used in various applications, including natural language processing and machine vision.
Frame Representation
Example 1:
§ Let's take an example of a frame for a book.

Slots            Fillers
Title            Artificial Intelligence
Genre            Computer Science
Author           Stuart Russell and Peter Norvig
Edition          Third Edition
Year             2009
Pages            1152

Example 2:
§ Let's take an entity, Peter. Peter is an engineer by profession, his age is 25, he lives in the city of London, and his country is England. The following is the frame representation for this:

Slots            Fillers
Name             Peter
Profession       Engineer
Age              25
Marital status   Single
Weight           78
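A minimal Python sketch of slot-and-filler frames (illustrative names only), showing inheritance of a default value from a parent frame and a procedural slot computed from other slots.

```python
# Frames as dictionaries of slots; a frame may name a parent frame ("is_a")
# from which missing slot values are inherited, and a slot may hold a procedure.
frames = {
    "person":   {"height_cm": 178},                       # default value
    "engineer": {"is_a": "person", "profession": "Engineer"},
    "peter":    {"is_a": "engineer", "name": "Peter", "age": 25,
                 "city": "London", "country": "England",
                 # procedural slot: computed from other slots when asked
                 "summary": lambda f: f"{get(f, 'name')}, {get(f, 'age')}, {get(f, 'city')}"},
}

def get(frame_name, slot):
    frame = frames[frame_name]
    if slot in frame:
        value = frame[slot]
        return value(frame_name) if callable(value) else value
    if "is_a" in frame:                                   # inherit from the parent frame
        return get(frame["is_a"], slot)
    return None

print(get("peter", "profession"))   # "Engineer" (inherited from the engineer frame)
print(get("peter", "height_cm"))    # 178 (default inherited from person)
print(get("peter", "summary"))      # "Peter, 25, London" (procedural slot)
```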
Frame Representation
Advantages of frame representation:
a) It makes the programming easier by grouping the related data.
b) It is comparably flexible and used by many applications in AI.
c) Very easy to add slots for new attribute and relations.
d) It is easy to include default data and to search for missing values.
e) Easy to understand and visualize.
Disadvantages of frame representation:
a) The inference mechanism is not easily processed.
b) Inference cannot proceed smoothly with a frame representation.
c) It is a very generalized approach.
Converting between Semantic Nets and Frames
Questions ?
Thank you !