Unit 3
KNOWLEDGE REPRESENTATION
Variables: x, y, z, a, b, ...
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: =
Quantifiers: ∀, ∃
Atomic sentences
Atomic sentences are the most basic sentences of first-order logic. They are formed from a predicate symbol followed by a parenthesised list of terms.
We can represent an atomic sentence as Predicate(term1, term2, ..., term n).
Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat (Chinky).
Complex Sentences
Connectives are used to join atomic sentences to form complex sentences.
A first-order logic statement has two parts:
Subject: The subject is the main part of the statement.
Predicate: A predicate is a relation that binds two atoms together in a statement.
Consider the following statement: "x is an integer." It has two parts: the first
component, x, is the statement's subject, and the second part, "is an integer," is
the statement's predicate.
Example
We will use the rule "Greedy kings are evil", i.e. King(x) ∧ Greedy(x) ⇒ Evil(x): if we can find some x such that x is a king and x is greedy, we can infer that x is evil.
Here, let us say:
p1' is King(John), p1 is King(x)
p2' is Greedy(y), p2 is Greedy(x)
θ is {x/John, y/John}, q is Evil(x)
SUBST(θ, q) gives Evil(John).
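A minimal Python sketch of how this substitution step might look, assuming terms are represented as nested tuples such as ("Evil", "x") and a substitution is a plain dict mapping variable names to terms (the names apply_subst and the tuple encoding are illustrative, not from the text):

def apply_subst(theta, term):
    """Apply a substitution (dict: variable -> term) to a term.

    A term is either a variable/constant string or a tuple
    (functor, arg1, arg2, ...) for predicates and functions.
    """
    if isinstance(term, str):
        # Replace the symbol if it is a bound variable, otherwise keep it.
        return theta.get(term, term)
    functor, *args = term
    return (functor, *(apply_subst(theta, a) for a in args))

# SUBST(theta, q) for the greedy-kings example:
theta = {"x": "John", "y": "John"}
q = ("Evil", "x")
print(apply_subst(theta, q))   # ('Evil', 'John')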
Propositional logic in Artificial intelligence
The simplest kind of logic is propositional logic (PL), in which all statements
are made up of propositions. The term "proposition" refers to a declarative
statement that can be true or false. It's a method of expressing knowledge in
logical and mathematical terms.
Example
a) It is Sunday.
b) The Sun rises in the West. (False proposition)
c) 3 + 3 = 7 (False proposition)
d) 5 is a prime number.
The following are some fundamental propositional logic facts:
Because it operates with 0 and 1, propositional logic is also known as
Boolean logic.
In propositional logic, symbolic variables are used to express the logic,
and any symbol can be used to represent a proposition, such as A, B,
C, P, Q, R, and so on.
Propositions can be true or false, but not both at the same time.
The basic elements of propositional logic are propositions and logical connectives.
Connectives, also called logical operators, link sentences together.
A tautology (also known as a valid sentence) is a proposition formula that is always true.
A contradiction is a proposition formula that is always false.
Statements that are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.
Syntax of propositional logic
The allowed sentences for knowledge representation are defined by the syntax
of propositional logic. Propositions are divided into two categories:
Atomic Propositions
Compound propositions
Atomic Proposition: Atomic propositions are simple statements consisting of a single proposition symbol. Each atomic proposition must be either true or false.
Example
a) 2 + 2 = 4 is an atomic proposition, and it is true.
b) "The Sun is cold" is an atomic proposition, and it is false.
Compound Proposition: Compound propositions are made up of simpler (atomic) propositions joined together with parentheses and logical connectives.
Example
a) "It is raining today, and street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives
Logical connectives are used to link two simpler ideas or to logically represent
a statement. With the use of logical connectives, we can form compound
assertions. There are five primary connectives, which are listed below:
Negation: A sentence such as ¬P is called the negation of P. A literal is either a positive literal (P) or a negative literal (¬P).
Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a
conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. → P∧ Q.
Disjunction: A sentence which has ∨ connective, such as P ∨ Q. is called
disjunction, where P and Q are the propositions.
Example: "Ritika is a doctor or Engineer",
Here P= Ritika is Doctor. Q= Ritika is Doctor, so we can write it as P ∨ Q.
Implication: A sentence such as P → Q, is called an implication. Implications
are also known as if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P= It is raining, and Q= Street is wet, so it is represented as P → Q
Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: "I am breathing if and only if I am alive."
P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.
Following is the summarized table for the propositional logic connectives:
Connective      Symbol   Example
Negation        ¬        ¬P
Conjunction     ∧        P ∧ Q
Disjunction     ∨        P ∨ Q
Implication     →        P → Q
Biconditional   ⇔        P ⇔ Q
Truth Table
In propositional logic, we need to know the truth value of a proposition for every possible combination of truth values of its components. The representation of all these combinations in tabular form is known as a truth table. The truth table for the logical connectives is as follows:
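As a rough illustration, the truth table for the connectives can also be generated programmatically; the following Python sketch (the variable and column names are only illustrative) prints one row for each of the four combinations of P and Q:

from itertools import product

# Print the truth table for the five basic connectives.
header = ("P", "Q", "¬P", "P∧Q", "P∨Q", "P→Q", "P⇔Q")
print(("{:6}" * len(header)).format(*header))

for P, Q in product([True, False], repeat=2):
    row = (
        P,
        Q,
        not P,           # negation
        P and Q,         # conjunction
        P or Q,          # disjunction
        (not P) or Q,    # implication (P → Q ≡ ¬P ∨ Q)
        P == Q,          # biconditional
    )
    print(("{:6}" * len(row)).format(*(str(v)[0] for v in row)))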
Truth table with three propositions
A proposition can be constructed by combining three propositions: P, Q, and R. Because three proposition symbols are used, this truth table has 2³ = 8 rows.
Precedence of connectives
Propositional connectors or logical operators, like arithmetic operators, have a
precedence order. When evaluating a propositional problem, this order should
be followed. The following is a list of the operator precedence order:
First precedence: parentheses
Second precedence: negation (¬)
Third precedence: conjunction (∧)
Fourth precedence: disjunction (∨)
Fifth precedence: implication (→)
Sixth precedence: biconditional (⇔)
Logical equivalence
Logical equivalence is one of the features of propositional logic. Two statements are said to be logically equivalent if and only if the columns of their truth tables are identical.
Take two propositions A and B; if they are logically equivalent we write A ⇔ B. In the truth table below, the columns for ¬A ∨ B and A → B are identical, hence the two formulas are equivalent:
A  B  ¬A∨B  A→B
T  T   T     T
T  F   F     F
F  T   T     T
F  F   T     T
Properties of Operators
Commutativity:
P∧ Q= Q ∧ P, or
P ∨ Q = Q ∨ P.
Associativity:
(P ∧ Q) ∧ R= P ∧ (Q ∧ R),
(P ∨ Q) ∨ R= P ∨ (Q ∨ R)
Identity element:
P ∧ True = P,
P ∨ True= True.
Distributive:
P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
De Morgan's Law:
¬ (P ∧ Q) = (¬P) ∨ (¬Q)
¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
Double-negation elimination:
¬ (¬P) = P.
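These equivalences can be checked mechanically. The small Python sketch below (the function names are arbitrary, not from the text) verifies De Morgan's laws and double-negation elimination by comparing both sides over every truth assignment:

from itertools import product

def equivalent(f, g, nvars=2):
    """Return True if boolean formulas f and g agree on every assignment."""
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=nvars))

# De Morgan's laws
print(equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q)))  # True
print(equivalent(lambda p, q: not (p or q), lambda p, q: (not p) and (not q)))  # True
# Double-negation elimination
print(equivalent(lambda p, q: not (not p), lambda p, q: p))                     # True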
Limitations of Propositional logic
With propositional logic, we can't represent relations like ALL, SOME, or
NONE. Example:
All the girls are intelligent.
Some apples are sweet.
The expressive power of propositional logic is limited.
In propositional logic, we cannot describe statements in terms of their properties or logical relationships.
Rules of Inference in Artificial intelligence
Inference
In artificial intelligence we need intelligent machines that can derive new conclusions from existing knowledge or evidence; inference is this process of drawing conclusions from known facts and data.
Inference rules
The templates for creating valid arguments are known as inference rules. In
artificial intelligence, inference rules are used to generate proofs, and a proof
is a series of conclusions that leads to the intended outcome.
Among the connectives, implication plays the central role in inference rules. Some terms related to inference rules are as follows:
Implication: It is one of the logical connectives which can be represented as P
→ Q. It is a Boolean expression.
Converse: The converse of implication, which means the right-hand side
proposition goes to the left-hand side and vice-versa. It can be written as Q →
P.
Contrapositive: The negation of converse is termed as contrapositive, and it
can be represented as ¬ Q → ¬ P.
Inverse: The negation of implication is called inverse. It can be represented as
¬ P → ¬ Q.
Some of the compound statements above are equivalent to each other, which can be established using a truth table: P → Q is equivalent to its contrapositive ¬Q → ¬P, and the converse Q → P is equivalent to the inverse ¬P → ¬Q.
Types of Inference rules
1. Modus Ponens:
One of the most essential rules of inference is the Modus Ponens rule, which asserts that if P and P → Q are both true, we can infer that Q is true as well. It can be written as: ((P → Q) ∧ P) ⊢ Q.
Example:
Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P is true then Q will be true.
Proof by truth table:
2. Modus Tollens:
The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will also be true. It can be represented as: ((P → Q) ∧ ¬Q) ⊢ ¬P.
Proof by truth table:
3. Hypothetical Syllogism
The Hypothetical Syllogism rule states that if P → Q is true and Q → R is true, then P → R is also true. It can be represented as: ((P → Q) ∧ (Q → R)) ⊢ (P → R).
Example:
Statement-1: If you have my home key then you can unlock my home. P→Q
Statement-2: If you can unlock my home then you can take my money. Q→R
Conclusion: If you have my home key then you can take my money. P→R
Proof by truth table:
4. Disjunctive Syllogism:
The Disjunctive Syllogism rule states that if P ∨ Q is true and ¬P is true, then Q will be true. It can be represented as: ((P ∨ Q) ∧ ¬P) ⊢ Q.
Example:
Statement-1: Today is Sunday or Monday. ==>P∨Q
Statement-2: Today is not Sunday. ==> ¬P
Conclusion: Today is Monday. ==> Q
Proof by truth-table:
5. Addition:
The Addition rule is one of the common inference rules, and it states that if P is true, then P ∨ Q will be true. It can be represented as: P ⊢ (P ∨ Q).
Example:
Statement: I have a vanilla ice-cream. ==> P
Let Q: I have a chocolate ice-cream.
Conclusion: I have a vanilla or a chocolate ice-cream. ==> (P ∨ Q)
Proof by Truth-Table:
6. Simplification:
The Simplification rule states that if P ∧ Q is true, then P (or Q) will also be true. It can be represented as: (P ∧ Q) ⊢ P.
Proof by Truth-Table:
7. Resolution:
The Resolution rule states that if P ∨ Q is true and ¬P ∨ R is true, then Q ∨ R will also be true. It can be represented as: ((P ∨ Q) ∧ (¬P ∨ R)) ⊢ (Q ∨ R).
Proof by Truth-Table:
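To make the rule concrete, here is a small, hedged Python sketch (the representation of a clause as a set of signed literals and the name resolve are assumptions for illustration only) that resolves two clauses on a complementary pair of literals:

def resolve(c1, c2):
    """Resolve two clauses, each a frozenset of literals like 'P' or '~P'.

    Returns the resolvents obtained by cancelling one complementary
    pair of literals, as in the resolution rule above.
    """
    resolvents = set()
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            resolvents.add(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

# (P ∨ Q) and (¬P ∨ R) resolve to (Q ∨ R)
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))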
Wumpus world
The Wumpus world is a simple example world that demonstrates the value of a knowledge-based agent and how knowledge is represented. It was inspired by Hunt the Wumpus, a 1973 video game by Gregory Yob.
The Wumpus world is a cave with 4x4 rooms and pathways connecting them.
As a result, there are a total of 16 rooms that are interconnected. We now have
a knowledge-based agent capable of progressing in this world. There is a room in the cave with a beast named Wumpus, who eats anyone who enters it. The agent can shoot the Wumpus, but he has only a single arrow. Some rooms in the Wumpus world contain bottomless pits, and if the agent falls into one, he will be stuck there forever. The intriguing thing about this cave is that there is a chance of finding a heap of gold in one of the rooms.
So the agent's mission is to find the gold and get out of the cave without
getting eaten by Wumpus or falling into Pits. If the agent returns with gold, he
will be rewarded, but if he is devoured by Wumpus or falls into the pit, he will
be penalised.
A sample diagram portraying the Wumpus world is shown below. It depicts some rooms with pits, one room with the Wumpus, and the agent in square [1, 1] of the world.
There are some elements that can assist the agent in navigating the cave. The
following are the components:
The rooms adjacent to the Wumpus room are stinky, thus there is a
stench there.
The room next to PITs has a breeze, so if the agent gets close enough
to PIT, he will feel it.
If and only if the room contains gold, there will be glitter.
If the agent is facing the Wumpus, the agent can kill it, and the
Wumpus will cry horribly, which can be heard throughout the cave.
PEAS description of Wumpus world
To explain the Wumpus world we have given PEAS description as below:
Performance measure:
+1000 reward points if the agent comes out of the cave with the gold.
-1000 points penalty for being eaten by the Wumpus or falling into the
pit.
-1 for each action, and -10 for using an arrow.
The game ends if the agent dies or comes out of the cave.
Environment:
A 4*4 grid of rooms.
The agent is initially in room [1, 1], facing toward the right.
The locations of the Wumpus and the gold are chosen randomly, excluding the first square.
Each square of the cave can be a pit with probability 0.2 except the
first square.
Actuators
Left turn,
Right turn
Move forward
Grab
Release
Shoot.
Sensors:
The agent will perceive a stench if he is in a room directly adjacent to the Wumpus (not diagonally).
If the agent is in the room directly adjacent to the Pit, he will feel a
breeze.
The agent will notice the gleam in the room where the gold is located.
If the agent walks into a wall, he will feel the bump.
When the Wumpus is shot, it lets out a horrifying cry that can be heard
throughout the cave.
These perceptions can be expressed as a five-element list in which
each sensor will have its own set of indicators.
For instance, if an agent detects smell and breeze but no glitter, bump,
or shout, it might be represented as [Stench, Breeze, None, None,
None].
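As a rough sketch of how such a percept list might be built, the following Python snippet computes the five-element percept from the world state; the grid layout, the hard-coded positions, and the helper names are assumptions made purely for illustration:

# Illustrative Wumpus-world layout: positions are (column, row) pairs.
WUMPUS = (1, 3)
PITS = {(3, 1), (3, 3), (4, 4)}
GOLD = (2, 3)

def adjacent(a, b):
    """Rooms are adjacent if they differ by one step horizontally or vertically."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def percept(agent, bumped=False, wumpus_killed=False):
    """Return the five-element percept list [Stench, Breeze, Glitter, Bump, Scream]."""
    return [
        "Stench" if adjacent(agent, WUMPUS) or agent == WUMPUS else None,
        "Breeze" if any(adjacent(agent, p) for p in PITS) else None,
        "Glitter" if agent == GOLD else None,
        "Bump" if bumped else None,
        "Scream" if wumpus_killed else None,
    ]

print(percept((2, 1)))   # [None, 'Breeze', None, None, None]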
The Wumpus world Properties
Partially observable: The Wumpus world is partially observable
because the agent can only perceive the close environment such as an
adjacent room.
Deterministic: It is deterministic, as the result and outcome of the
world are already known.
Sequential: The order is important, so it is sequential.
Static: It is static as Wumpus and Pits are not moving.
Discrete: The environment is discrete.
One agent: The environment is a single agent as we have one agent
only and Wumpus is not considered as an agent.
Exploring the Wumpus world
Now we'll investigate the Wumpus universe and see how the agent will
achieve its goal through logical reasoning.
The first step for an agent is to:
Initially, the agent is in the first room, or square [1,1], and we already know
that this room is safe for the agent, thus we will add the sign OK to the below
diagram (a) to represent that room is safe. The agent is represented by the
letter A, the breeze by the letter B, the glitter or gold by the letter G, the
visited room by the letter V, the pits by the letter P, and the Wumpus by the
letter W.
At Room [1,1] agent does not feel any breeze or any Stench which means the
adjacent squares are also OK.
Agent's second Step:
Now the agent needs to move forward, so it will go either to [1, 2] or to [2, 1]. Let's say the agent enters room [2, 1], where it detects a breeze, indicating that a pit is nearby. Because the pit might be in [3, 1] or in [2, 2], we add the symbol P? to those squares to indicate possible pit locations.
Now the agent will pause and consider his options before doing any
potentially destructive actions. The agent will return to room [1, 1]. The agent
visits the rooms [1,1] and [2,1], thus we'll use the symbol V to symbolise the
squares he's been to.
Agent's third step:
The agent will now proceed to the room [1,2], which is fine. Agent detects a
stink in the room [1,2], indicating the presence of a Wumpus nearby.
However, according to the rules of the game, Wumpus cannot be in the room
[1,1], and he also cannot be in [2,2]. (Agent had not detected any stench when
he was at [2,1]). As a result, the agent infers that Wumpus is in the room [1,3],
and there is no breeze at the moment, implying that there is no Pit and no
Wumpus in [2,2]. So [2,2] is safe; we mark it as OK, and the agent will advance to [2,2].
Agent's fourth step:
Because there is no stench and no breeze in room [2,2], let's assume the agent decides to move to room [2,3]. In room [2,3] the agent detects glitter, so it should grab the gold and then climb out of the cave.
Unification
Unification is the process of finding a substitute that makes two
separate logical atomic expressions identical. The substitution process
is necessary for unification.
It accepts two literals as input and uses substitution to make them
identical.
Let Ψ1 and Ψ2 be two atomic sentences and 𝜎 be a unifier such
that, Ψ1𝜎 = Ψ2𝜎, then it can be expressed as UNIFY(Ψ1, Ψ2).
Example: Find the MGU for Unify{King(x), King(John)}
Let Ψ1 = King(x), Ψ2 = King(John),
Substitution θ = {John/x} is a unifier for these atoms; applying this substitution makes both expressions identical.
For unification, the UNIFY algorithm is employed, which takes two
atomic statements and returns a unifier for each of them (If any exist).
All first-order inference techniques rely heavily on unification.
If the expressions do not match, the result is failure.
The replacement variables are referred to as MGU (Most General
Unifier).
E.g. Let's assume P(x, y) and P(a, f(z)) are two different expressions.
In this case, we must make both of the preceding assertions identical. We'll
make the substitution in this case.
P(x, y)......... (i)
P(a, f(z))......... (ii)
In the first statement, replace x with a and y with f(z), and the result
will be a/x and f(z)/y.
With both replacements the first expression becomes identical to the second, and the substitution set will be {a/x, f(z)/y}.
Conditions for Unification
Following are some basic conditions for unification:
Atoms or expressions with different predicate symbols can never be unified.
Both expressions must have the same number of arguments.
Unification fails if a variable has to be unified with a term that already contains that variable (this is the occurs check used in the algorithm below).
Unification Algorithm
Algorithm: Unify(Ψ1, Ψ2)
Step 1: If Ψ1 or Ψ2 is a variable or constant, then:
a) If Ψ1 and Ψ2 are identical, then return NIL.
b) Else if Ψ1 is a variable:
a. If Ψ1 occurs in Ψ2, then return FAILURE.
b. Else return {(Ψ2 / Ψ1)}.
c) Else if Ψ2 is a variable:
a. If Ψ2 occurs in Ψ1, then return FAILURE.
b. Else return {(Ψ1 / Ψ2)}.
d) Else return FAILURE.
Step 2: If the initial predicate symbols in Ψ1 and Ψ2 are not the same, then return FAILURE.
Step 3: If Ψ1 and Ψ2 have a different number of arguments, then return FAILURE.
Step 4: Set the substitution set (SUBST) to NIL.
Step 5: For i = 1 to the number of arguments in Ψ1:
a) Call Unify with the i-th argument of Ψ1 and the i-th argument of Ψ2, and put the result into S.
b) If S = FAILURE, then return FAILURE.
c) If S ≠ NIL, then:
a. Apply S to the remainder of both Ψ1 and Ψ2.
b. SUBST = APPEND(S, SUBST).
Step 6: Return SUBST.
Implementation of the Algorithm
Step.1: Initialize the substitution set to be empty.
Step.2: Recursively unify atomic sentences:
Check for Identical expression match.
If one expression is a variable vi, and the other is a term ti which does
not contain variable vi, then:
Substitute ti / vi in the existing substitutions
Add ti /vi to the substitution setlist.
If both expressions are functions, then the function name must be the same and the number of arguments must be the same in both expressions.
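The following Python sketch implements the algorithm above under some assumptions of my own: compound expressions are nested tuples such as ("knows", "Richard", "?x"), variables are strings beginning with "?", and a substitution is a dict. The function names are illustrative rather than taken from the text.

def is_variable(t):
    """By convention in this sketch, variables are strings starting with '?'."""
    return isinstance(t, str) and t.startswith("?")

def occurs(var, term):
    """Occurs check: does var appear anywhere inside term?"""
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, a) for a in term[1:])
    return False

def substitute(theta, term):
    """Apply the substitution theta (dict: variable -> term) to term."""
    if is_variable(term):
        return substitute(theta, theta[term]) if term in theta else term
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(theta, a) for a in term[1:])
    return term

def unify(x, y, theta=None):
    """Return the most general unifier of x and y, or None on failure."""
    if theta is None:
        theta = {}
    x, y = substitute(theta, x), substitute(theta, y)
    if x == y:
        return theta
    if is_variable(x):
        return None if occurs(x, y) else {**theta, x: y}
    if is_variable(y):
        return None if occurs(y, x) else {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and x[0] == y[0] and len(x) == len(y):
        for a, b in zip(x[1:], y[1:]):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

# Worked example 6 below: UNIFY(knows(Richard, x), knows(Richard, John))
print(unify(("knows", "Richard", "?x"), ("knows", "Richard", "John")))   # {'?x': 'John'}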
For each pair of the following atomic sentences find the most general
unifier (If exist).
1. Find the MGU of {p(f(a), g(Y)) and p(X, X)}
Sol: S0 => Here, Ψ1 = p(f(a), g(Y)), and Ψ2 = p(X, X)
SUBST θ= {f(a) / X}
S1 => Ψ1 = p(f(a), g(Y)), and Ψ2 = p(f(a), f(a))
The remaining arguments g(Y) and f(a) have different function symbols, so unification fails.
Unification is not possible for these expressions.
2. Find the MGU of {p(b, X, f(g(Z))) and p(Z, f(Y), f(Y))}
Here, Ψ1 = p(b, X, f(g(Z))) , and Ψ2 = p(Z, f(Y), f(Y))
S0 => { p(b, X, f(g(Z))); p(Z, f(Y), f(Y))}
SUBST θ={b/Z}
S1 => { p(b, X, f(g(b))); p(b, f(Y), f(Y))}
SUBST θ={f(Y) /X}
S2 => { p(b, f(Y), f(g(b))); p(b, f(Y), f(Y))}
SUBST θ= {g(b) /Y}
S3 => { p(b, f(g(b)), f(g(b))); p(b, f(g(b)), f(g(b)))} Unified Successfully.
And Unifier = { b/Z, f(Y) /X , g(b) /Y}.
3. Find the MGU of {p (X, X), and p (Z, f(Z))}
Here, Ψ1 = {p (X, X), and Ψ2 = p (Z, f(Z))
S0 => {p (X, X), p (Z, f(Z))}
SUBST θ= {Z/X}
S1 => {p (Z, Z), p (Z, f(Z))}
The next step would require unifying Z with f(Z), but Z occurs inside f(Z), so the occurs check fails.
Hence, unification is not possible for these expressions.
4. Find the MGU of UNIFY(prime (11), prime(y))
Here, Ψ1 = {prime(11) , and Ψ2 = prime(y)}
S0 => {prime(11) , prime(y)}
SUBST θ= {11/y}
S1 => {prime(11) , prime(11)} , Successfully unified.
Unifier: {11/y}.
5. Find the MGU of Q(a, g(x, a), f(y)), Q(a, g(f(b), a), x)}
Here, Ψ1 = Q(a, g(x, a), f(y)), and Ψ2 = Q(a, g(f(b), a), x)
S0 => {Q(a, g(x, a), f(y)); Q(a, g(f(b), a), x)}
SUBST θ= {f(b)/x}
S1 => {Q(a, g(f(b), a), f(y)); Q(a, g(f(b), a), f(b))}
SUBST θ= {b/y}
S2 => {Q(a, g(f(b), a), f(b)); Q(a, g(f(b), a), f(b))}, Successfully Unified.
Unifier: {f(b)/x, b/y}.
6. UNIFY(knows(Richard, x), knows(Richard, John))
Here, Ψ1 = knows(Richard, x), and Ψ2 = knows(Richard, John)
S0 => { knows(Richard, x); knows(Richard, John)}
SUBST θ= {John/x}
S1 => { knows(Richard, John); knows(Richard, John)}, Successfully Unified.
Unifier: {John/x}.
Forward Chaining and backward chaining in AI
Forward and backward chaining are essential topics in artificial intelligence, but before we go into them, let's look at where these two terms come from.
Inference engine:
In artificial intelligence, the inference engine is the component of an intelligent system that applies logical rules to the knowledge base to infer new information from known facts. The first inference engine was part of an expert system. Inference engines generally operate in one of two modes:
Forward chaining
Backward chaining
Horn Clause and Definite clause
Horn clause and definite clause are sentence types that allow the knowledge
base to apply a more limited and efficient inference procedure. Forward and
backward chaining techniques are used in logical inference algorithms, and
they both require the KB to be in the form of first-order definite clauses.
Definite clause: A definite clause, also known as a strict Horn clause, is a disjunction of literals with exactly one positive literal.
Horn clause: A Horn clause is a disjunction of literals with at most one positive literal. As a result, every definite clause is a Horn clause.
Example: (¬p ∨ ¬q ∨ k). It has only one positive literal, k.
It is equivalent to p ∧ q → k.
Forward Chaining
When employing an inference engine, forward chaining is also known as
forward deduction or forward reasoning. Forward chaining is a type of
reasoning that starts with atomic sentences in a knowledge base and uses
inference rules (Modus Ponens) to extract more material in the forward
direction until a goal is attained.
The Forward-chaining algorithm begins with known facts, then activates all
rules with satisfied premises and adds their conclusion to the known facts.
This process continues until the issue is resolved.
Properties of Forward-Chaining
It is a bottom-up approach, as it moves from the facts at the bottom to the conclusion at the top.
It is a method of reaching a conclusion from known facts or data, starting from the initial state and working toward the goal state.
Forward-chaining is also known as data-driven since it allows us to
achieve our goal by utilising existing data.
The forward-chaining approach is frequently used in expert systems (such as CLIPS), business rule systems, and production rule systems.
Consider the following well-known example, which we will apply to both approaches.
Example
"It is illegal for an American to sell weapons to unfriendly countries,
according to the law. Country A, an American foe, has a few missiles, all of
which were sold to it by Robert, an American citizen."
Demonstrate that "Robert is a thief."
To answer the problem, we'll turn all of the facts above into first-order definite
clauses, then utilise a forward-chaining procedure to get to the goal.
Facts Conversion into FOL
It is a crime for an American to sell weapons to hostile nations. (Let's
say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p)
...(1)
Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). It can be written as two definite clauses by using Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
An enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
Country A is an enemy of America.
Enemy (A, America) .........(7)
Robert is American
American(Robert). ..........(8)
Forward chaining proof
Step-1:
In the first step we will start with the known facts and will choose the
sentences which do not have implications, such as: American(Robert),
Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts will be
represented as below.
Step-2:
At the second step, we look at the rules whose premises are satisfied by the facts available so far and add their conclusions.
The premises of Rule (1) are not yet satisfied, so it adds nothing in this iteration.
Facts (2) and (3) are already added.
Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it follows from the conjunction of facts (2) and (3).
Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it follows from fact (7).
Step-3:
At step-3, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which follows from all the available facts. Hence we have reached the goal statement.
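A simplified, hedged sketch of this forward-chaining process in Python, where the first-order rules have already been instantiated with the constants Robert, T1, and A (the ground atoms and names in the code are my own encoding of the facts above, not part of the text):

# Ground facts (2), (3), (7), (8) from the conversion above.
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

# Rules (1), (4), (5), (6), instantiated with p=Robert, q=T1, r=A for illustration.
rules = [
    ({"Missile(T1)"}, "Weapon(T1)"),                                   # (5)
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),             # (4)
    ({"Enemy(A,America)"}, "Hostile(A)"),                              # (6)
    ({"American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"),        # (1)
]

def forward_chain(facts, rules):
    """Repeatedly fire every rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print("Criminal(Robert)" in forward_chain(facts, rules))   # True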
Backward chaining proof
Backward chaining (also called backward deduction or goal-driven reasoning) starts from the goal and works backwards through the rules to find the facts that support it. For the same example, the goal fact is Criminal(Robert).
Step-1:
We start with the goal fact Criminal(Robert) and look for rules that can conclude it.
Step-2:
At the second step, we infer other facts from the goal fact which satisfy the rules. In Rule (1), the goal predicate Criminal(Robert) appears with the substitution {Robert/p}, so we add all of the rule's conjunctive premises below the goal, replacing p with Robert.
Here we can see that American(Robert) is a known fact, so it is proved.
Step-3:
At step-3, we extract the further fact Missile(q), which can be inferred from Weapon(q) as it satisfies Rule (5). Weapon(q) is also true with the substitution of the constant T1 for q.
Step-4:
At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule (4) with the substitution of A in place of r. So these two statements are proved here.
Step-5:
At step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule (6). Hence all the statements are proved true using backward chaining.
Difference between backward chaining and forward chaining
Forward chaining starts from known facts and applies rules until the goal is reached, while backward chaining starts from the goal and works backwards to the supporting facts.
Forward chaining is data-driven and bottom-up; backward chaining is goal-driven and top-down.
Forward chaining may derive many facts that are not needed for the goal, whereas backward chaining explores only the facts relevant to the goal.
Forward chaining is commonly used in production and expert systems; backward chaining is used in goal-directed systems such as Prolog and theorem provers.
Resolution in FOL
Resolution is a method of theorem proof that involves constructing refutation
proofs, or proofs by contradictions. It was created in 1965 by a mathematician
named John Alan Robinson.
When several statements are supplied and we need to prove a conclusion from those statements, we employ resolution. Unification is a crucial idea in proofs by resolution. Resolution is a single inference rule that operates efficiently on statements in conjunctive normal form (clausal form).
Clause: A disjunction of literals (atomic sentences) is called a clause. A clause containing a single literal is called a unit clause.
Conjunctive Normal Form: A sentence represented as a conjunction of clauses
is said to be conjunctive normal form or CNF.
Representation
Following are the kind of knowledge which needs to be represented in AI
systems:
Object: All the facts about objects in our world domain. E.g., guitars contain strings; trumpets are brass instruments.
Events: Events are the actions which occur in our world.
Performance: It describes behaviour that involves knowledge about how to do things.
Meta-knowledge: It is knowledge about what we know.
Facts: Facts are the truths about the real world and what we represent.
Knowledge-Base: The knowledge base is the most important part of
the knowledge-based agents. It is abbreviated as KB. Sentences are grouped together in the knowledge base (here, "sentence" is used as a technical term and is not identical with sentences of the English language).
Knowledge: Knowledge is the awareness or familiarity of facts, data,
and circumstances gained through experiences. The types of
knowledge in artificial intelligence are listed below.
Types of knowledge
Following are the various types of knowledge
1. Declarative Knowledge:
Declarative knowledge is the ability to understand something.
It contains ideas, facts, and objects.
It's also known as descriptive knowledge, and it's communicated using
declarative statements.
It is less complicated than procedural knowledge.
2. Procedural Knowledge
It's sometimes referred to as "imperative knowledge."
Procedural knowledge is a form of knowledge that entails knowing how to do something.
It can be used to complete any assignment.
It has rules, plans, procedures, and agendas, among other things.
The use of procedural knowledge is contingent on the job at hand.
3. Meta-knowledge:
Knowledge about the other types of knowledge is called Meta-knowledge.
4. Heuristic knowledge:
Heuristic knowledge is the sum of the knowledge of a group of specialists in a
certain field or subject.
Heuristic knowledge refers to rules of thumb that are based on prior
experiences, awareness of methodologies, and are likely to work but not
guaranteed.
5. Structural knowledge:
Basic problem-solving knowledge is structural knowledge.
It describes the connections between distinct concepts such as kind, part of,
and grouping.
It is a term that describes the relationship between two or more concepts or
objects.
The relation between knowledge and intelligence
Real-world knowledge is essential for intelligence, and artificial intelligence
is no exception. When it comes to exhibiting intelligent behaviour in AI
entities, knowledge is crucial. Only when an agent has some knowledge or
expertise with a given input can he act appropriately on it.
Consider what you would do if you encountered someone who spoke to you in
a language you did not understand. The same can be said for the agents'
intelligent behaviour.
One decision maker, as shown in the diagram below, acts by detecting the
environment and applying knowledge. However, if the knowledge component
is missing, it will be unable to demonstrate intelligent behaviour.
AI knowledge cycle
An Artificial intelligence system has the following components for displaying
intelligent behavior:
Perception
Learning
Knowledge Representation and Reasoning
Planning
Execution
The diagram above depicts how an AI system interacts with the real
environment and what components assist it in displaying intelligence.
Perception is the component of an AI system that allows it to gather information from its surroundings; this can be visual, auditory, or other sensory input. The learning component is in charge of gaining knowledge from the data collected by the perception component. The main components of the whole cycle are knowledge representation and reasoning; these two elements are what enable a machine to display human-like intelligent behaviour. They are independent of each other, but they are also linked. Analysis of knowledge representation and reasoning is required for planning and execution.
Approaches to knowledge representation:
There are mainly four approaches to knowledge representation, which are given below:
1. Simple relational knowledge:
It is the most basic technique of storing facts that use the relational method,
with each fact about a group of objects laid out in columns in a logical order.
This method of knowledge representation is often used in database systems to
express the relationships between various things.
This method leaves minimal room for inference.
Example: The following is a simple relational knowledge representation, where each row stores facts about one object:
Player    Weight    Age
Player1   65        23
Player2   58        18
Player3   75        24
2. Inheritable knowledge:
All data must be kept in a hierarchy of classes in the inheritable knowledge
approach.
All classes should be organised in a hierarchical or generic fashion.
We use the inheritance property in this method.
Elements inherit values from other members of their class.
The instance relation is a type of inheritable knowledge that illustrates a
relationship between an instance and a class.
Each individual frame might indicate a set of traits as well as their value.
Objects and values are represented in Boxed nodes in this technique.
Arrows are used to connect objects to their values.
Example:
3. Inferential knowledge:
Knowledge is represented in the form of formal logics in the inferential
knowledge approach.
More facts can be derived using this method.
It guarantees correctness.
Example: Let's suppose there are two statements:
a. Marcus is a man
b. All men are mortal
Then it can be represented as:
Man(Marcus)
∀x: Man(x) → Mortal(x)
4. Procedural knowledge:
Small programmes and codes are used in the procedural knowledge approach
to specify how to do specific things and how to proceed.
One significant rule employed in this method is the If-Then rule.
We may employ several coding languages, such as LISP and Prolog, with this
information.
Using this method, we can readily represent heuristic or domain-specific
information.
However, it is not possible to represent all cases with this approach.
Requirements for knowledge Representation system:
A good knowledge representation system must possess the following
properties.
1. Representational Accuracy:
KR system should have the ability to represent all kind of required knowledge.
2. Inferential Adequacy:
KR system should have ability to manipulate the representational structures to
produce new knowledge corresponding to existing structure.
3. Inferential Efficiency:
The ability to direct the inferential knowledge mechanism into the most
productive directions by storing appropriate guides.
4. Acquisitional Efficiency:
The ability to acquire new knowledge easily using automatic methods.
ONTOLOGICAL ENGINEERING
Events, Time, Physical Objects, and Beliefs are examples of concepts that
appear in a variety of disciplines. Ontological engineering is a term used to
describe the process of representing abstract concepts.
Because of the habit of showing graphs with the general concepts at the top
and the more specialised concepts below them, the overall framework of
concepts is called an upper ontology, as seen in Figure.
Categories and Objects
The categorization of objects is an important aspect of knowledge
representation. Although much reasoning takes place at the level of categories,
much engagement with the environment takes place at the level of particular
things.
A shopper's goal, for example, would generally be to purchase a basketball
rather than a specific basketball, such as BB9. In first-order logic, there are
two ways to represent categories: predicates and objects. To put it another
way, we can use the predicate Basketball (b) or reify1 the category as an
object, Basketballs.
To state that b is a member of the category of basketballs, we may say
Member(b, Basketballs), which we would abbreviate as b Basketballs.
Basketballs is a subcategory of Balls, thus we say Subset(Basketballs, Balls),
abbreviated as Basketballs Balls. Through inheritance, categories help to
organise and simplify the information base. We can deduce that every apple is
edible if we state that all instances of the category Food are edible and that
Fruit is a subclass of Food and Apples is a subclass of Fruit. Individual apples
are said to inherit the quality of edibility from their membership in the Food
category. By connecting things to categories or quantifying over their
members, first-order logic makes it simple to state truths about categories.
Here are some instances of different types of facts:
• An object is a member of a category.
BB9 ∈ Basketballs
• A category is a subclass of another category. Basketballs ⊂ Balls
• All members of a category have some properties.
(x∈ Basketballs) ⇒ Spherical (x)
• Members of a category can be recognized by some properties.
Orange(x) ∧ Round(x) ∧ Diameter(x) = 9.5 ∧ x ∈ Balls ⇒ x ∈ Basketballs
• A category as a whole has some properties.
Dogs ∈ Domesticated Species
Because Dogs is both a category and a subcategory of Domesticated Species,
the latter must be a category of categories. Categories can also be formed by
establishing membership requirements that are both required and sufficient. A
bachelor, for example, is a single adult male:
x∈ Bachelors ⇔ Unmarried(x) ∧ x∈ Adults ∧ x∈ Males
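As a rough illustration of membership, subcategory, and inheritance (the dictionaries and the inherits helper below are my own encoding, not from the text), categories can be modelled as sets linked by a subclass relation:

# Category hierarchy: each category maps to its parent category (None at the top).
parent = {"Apples": "Fruit", "Fruit": "Food", "Food": None}
# Properties asserted for all members of a category.
properties = {"Food": {"edible"}}
# Membership of individual objects.
member_of = {"Apple7": "Apples"}

def inherits(obj, prop):
    """An object inherits a property from any category on its chain of ancestors."""
    category = member_of[obj]
    while category is not None:
        if prop in properties.get(category, set()):
            return True
        category = parent[category]
    return False

print(inherits("Apple7", "edible"))   # True: Apples ⊂ Fruit ⊂ Food, and Food members are edible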
Physical Composition
To declare that one thing is a part of another, we utilise the general PartOf
connection. Objects can be categorised into hierarchies, similar to the Subset
hierarchy:
PartOf (Bucharest, Romania)
PartOf (Romania, EasternEurope)
PartOf(EasternEurope, Europe)
PartOf (Europe, Earth)
The PartOf relation is transitive and reflexive; that is,
PartOf (x, y) ∧PartOf (y, z) ⇒PartOf (x, z)
PartOf (x, x)
Therefore, we can conclude PartOf (Bucharest, Earth).
For example, if the apples are Apple1, Apple2, and Apple3, then
BunchOf ({Apple1,Apple2,Apple3})
denotes the composite object with the three apples as parts (not elements). We can define BunchOf in terms of the PartOf relation. Obviously, each element of s is part of BunchOf(s):
∀x x ∈ s ⇒ PartOf(x, BunchOf(s))
Furthermore, BunchOf(s) is the smallest object satisfying this condition. In other words, BunchOf(s) must be part of any object that has all the elements of s as parts:
∀y [∀x x ∈ s ⇒ PartOf(x, y)] ⇒ PartOf(BunchOf(s), y)
Measurements
In both scientific and commonsense theories of the world, objects have height,
mass, cost, and so on. The values that we assign for these properties are called
measures. For example:
Length(L1) = Inches(1.5) = Centimeters(3.81)
Conversion between units is done by equating multiples of one unit to another:
Centimeters(2.54 ×d)=Inches(d)
Similar axioms can be written for pounds and kilograms, seconds and days,
and dollars and cents. Measures can be used to describe objects as follows:
Diameter (Basketball12)=Inches(9.5)
ListPrice(Basketball12)=$(19)
d∈ Days ⇒ Duration(d)=Hours(24)
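A tiny Python sketch of the unit-conversion axiom Centimeters(2.54 × d) = Inches(d); the function name below is illustrative only:

def inches_to_centimeters(d):
    """Centimeters(2.54 * d) = Inches(d): convert a length in inches to centimeters."""
    return 2.54 * d

print(inches_to_centimeters(1.5))   # 3.81, matching Length(L1) above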
Time Intervals
Event calculus opens us up to the possibility of talking about time, and time
intervals. We will consider two kinds of time intervals: moments and extended
intervals. The distinction is that only moments have zero duration:
Partition({Moments,ExtendedIntervals},Intervals)
i∈Moments⇔Duration(i)=Seconds(0)
The functions Begin and End pick out the earliest and latest moments in an
interval, and the function Time delivers the point on the time scale for a
moment.
The function Duration gives the difference between the end time and the start time:
Interval(i) ⇒ Duration(i) = (Time(End(i)) − Time(Begin(i)))
Time(Begin(AD1900)) = Seconds(0)
Time(Begin(AD2001)) = Seconds(3187324800)
Time(End(AD2001)) = Seconds(3218860800)
Duration(AD2001) = Seconds(31536000)
Two intervals Meet if the end time of the first equals the start time of the second. The complete set of interval relations, as proposed by Allen (1983), is shown graphically in Figure 12.2 and logically below:
Meet(i,j) ⇔ End(i) = Begin(j)
Before(i,j) ⇔ End(i) < Begin(j)
After (j,i) ⇔ Before(i, j)
During(i,j) ⇔ Begin(j) < Begin(i) < End(i) < End(j)
Overlap(i,j) ⇔ Begin(i) < Begin(j) < End(i) < End(j)
Begins(i,j) ⇔ Begin(i) = Begin(j)
Finishes(i,j) ⇔ End(i) = End(j)
Equals(i,j) ⇔ Begin(i) = Begin(j) ∧ End(i) = End(j)
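A brief Python sketch of these interval relations, assuming an interval is simply a (begin, end) tuple of numbers; the representation and the function names are assumptions for illustration:

def meet(i, j):     return i[1] == j[0]             # End(i) = Begin(j)
def before(i, j):   return i[1] < j[0]              # End(i) < Begin(j)
def after(j, i):    return before(i, j)             # After(j, i) ⇔ Before(i, j)
def during(i, j):   return j[0] < i[0] and i[1] < j[1]
def overlap(i, j):  return i[0] < j[0] < i[1] < j[1]
def begins(i, j):   return i[0] == j[0]
def finishes(i, j): return i[1] == j[1]
def equals(i, j):   return i == j

print(meet((1, 3), (3, 5)))      # True
print(during((2, 4), (1, 5)))    # True
print(overlap((1, 4), (3, 6)))   # True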
EVENTS
Fluents and events are reified in the event calculus. The fluent At(Shankar, Berkeley) is an object that refers to the fact of Shankar being in Berkeley, but it does not by itself say whether that fact is true. To assert that a fluent is true at some point in time, we use the predicate T, as in T(At(Shankar, Berkeley), t). Events are described as instances of event categories. The event E1 of Shankar flying from San Francisco to Washington, D.C. is described as
E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC)
Alternatively, we can define a three-argument version of the category of flying events and say E1 ∈ Flyings(Shankar, SF, DC). We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we say the same thing in functional form with Extent(E1) = i. We represent time intervals by a (start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2. The complete set of predicates for one version of the event calculus is:
T(f, t): Fluent f is true at time t
Happens(e, i): Event e happens over the time interval i
Initiates(e, f, t): Event e causes fluent f to start to hold at time t
Terminates(e, f, t): Event e causes fluent f to cease to hold at time t
Clipped(f, i): Fluent f ceases to be true at some point during time interval i
Restored(f, i): Fluent f becomes true sometime during time interval i
We assume a distinguished event, Start, that describes the initial state by saying which fluents are initiated or terminated at the start time. We define T by saying that a fluent holds at a point in time if the fluent was initiated by an event at some time in the past and was not made false (clipped) by an intervening event. A fluent does not hold if it was terminated by an event and not made true (restored) by another event. Formally, the axioms are:
Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored(f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)
where Clipped and Restored are defined by
Clipped(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃ e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)
Take note of the structure: a single "spine" begins with the goal clause and resolves against clauses from the knowledge base until the empty clause is produced. This is characteristic of resolution on Horn clause knowledge bases. The clauses along the main spine correspond to the successive values of the goals variable in the backward-chaining algorithm of the figure. This is because, in backward chaining, we always choose to resolve with a clause whose positive literal unifies with the leftmost literal of the "current" clause on the spine. Backward chaining is therefore really just a special case of resolution with a particular control strategy for deciding which resolution to perform next.