AI Module 4 Notes
Topics:
1. First Order Logic:
   a. Representation Revisited
   b. Syntax and Semantics of First Order Logic
   c. Using First Order Logic
   d. Knowledge Engineering in First Order Logic
2. Inference in First Order Logic:
   a. Propositional versus First Order Inference
   b. Unification
   c. Forward Chaining
Source Book: Artificial Intelligence by Stuart Russell and Peter Norvig. Notes compiled by: Dr. Thyagaraju G S, Professor, HOD-CSE, SDMIT.
Programs can usually only store one value per variable and may not handle
partial information well.
Language and Thought:
Natural languages, like English or Spanish, are very expressive. In the fields of linguistics and philosophy, natural language is often seen as a way to represent knowledge. If we could fully characterize its rules, natural language could help us build systems that reason over and understand vast amounts of written information. Today, however, natural language is viewed more as a communication tool than as a medium for representing pure facts.
Brain and Word Recognition with fMRI: Until recently, it was not possible to observe whether the brain represents such information in a similar way across people. However, studies by Mitchell et al. (2008) have shown that fMRI can identify patterns when people see certain words. By scanning the brains of people shown words like "celery" or "airplane," a computer program could predict which word they were shown with 77% accuracy. Remarkably, the system even worked with words and people it had never seen before, suggesting a common brain representation of words across people.
This is known as the principle of choosing the most succinct theory. Since the
language used impacts how theories are represented, language ultimately
influences thought and learning processes.
Note: fMRI stands for functional Magnetic Resonance Imaging, a type of brain scan used
to measure and map brain activity. Unlike regular MRI, which shows the structure of the
brain, fMRI detects changes in blood flow to different areas of the brain, which increases in
regions that are more active. Here’s how it works:
1. Blood Flow and Brain Activity: When a specific brain region is active (e.g., during
thinking, moving, or sensing), it requires more oxygen, which is carried by the blood.
fMRI detects these changes in blood oxygen levels.
2. Mapping Activity: fMRI produces images showing which parts of the brain are
working harder at any given moment. This allows researchers to see which areas are
involved in various tasks like speaking, seeing, or remembering.
3. Applications: fMRI is widely used in research to understand how different parts of
the brain function, to study brain disorders, and even to explore how people process
language, memory, and emotions.
Since fMRI is non-invasive (it doesn’t require surgery or radiation), it’s a safe and popular
tool for studying brain activity in real time.
Examples:
o “One plus two equals three” uses objects (numbers), a relation (equals),
and a function (plus).
o “Squares neighboring the wumpus are smelly” has objects (squares,
wumpus), a property (smelly), and a relation (neighboring).
Example 1:
FOL: Likes(John,IceCream)
English: John likes ice cream.
FOL: Person(Mary)
English: Mary is a person.
FOL: ∀x (Person(x)→Mortal(x))
English: All persons are mortal.
FOL: ∃x (Student(x)∧Enrolled(x,Math))
English: There exists a student who is enrolled in Math.
FOL: FatherOf(John,Mary)
English: John is the father of Mary.
FOL: ∀x ∀y (BrotherOf(x,y)→SiblingOf(x,y))
English: If x is the brother of y, then x is a sibling of y.
FOL: Red(Apple)
English: The apple is red.
FOL: Adjacent(Square(1,1),Square(1,2))
English: Square (1,1) is adjacent to Square (1,2).
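To make these translations concrete, here is a small illustrative Python sketch (not from the source text) in which predicates are represented as sets of tuples over a finite domain. The predicate and constant names follow the examples above; the domain and the particular facts are made-up assumptions.

# A minimal sketch: a finite model in which predicates are sets of tuples.
domain = {"John", "Mary", "IceCream", "Math"}

likes = {("John", "IceCream")}              # Likes(John, IceCream)
person = {("John",), ("Mary",)}             # Person(John), Person(Mary)
mortal = {("John",), ("Mary",)}             # Mortal(...) for every person
enrolled = {("Mary", "Math")}               # Enrolled(Mary, Math)

# Checking an atomic sentence is just a membership test.
print(("John", "IceCream") in likes)        # True -> Likes(John, IceCream)

# "All persons are mortal": for every x in the domain,
# Person(x) implies Mortal(x).
print(all((x,) not in person or (x,) in mortal for x in domain))  # True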
Example 2:
Here are 25 complex examples of First-Order Logic (FOL) that make use of quantifiers (universal ∀ and existential ∃) and logical connectives (conjunction ∧, disjunction ∨, negation ¬, implication →, and equivalence ↔):
1. ∀x (Human(x)→Mortal(x))
2. ∃x (Human(x)∧¬Mortal(x))
3. ∀x (Dog(x)→∃y (OwnerOf(y,x)))
4. ∀x ∃y (Likes(x,y)∧Animal(y))
5. ∃x (Person(x)∧∀y (Person(y)→Knows(x,y)))
6. ∀x ∃y (BrotherOf(x,y)→¬SiblingOf(x,y))
English: For every person, there exists a brother who is not a sibling of them.
7. ∃x ∀y (Owns(x,y)→Car(y))
8. ∀x ∃y (WorksAt(x,y)→City(y))
English: For every person, there exists a city where they work.
9. ∀x (Adult(x)→∃y (ChildOf(y,x)))
10. ∃x ∃y (Loves(x,y)∧¬Loves(y,x))
English: There exists someone who loves someone else, but the other person does not love
them back.
12. ∃x ∀y (Owns(x,y)→Car(y))
13. ∀x ∃y (ParentOf(y,x)→Human(y))
17. ∃x ∀y (Animal(x)∧FriendOf(x,y)→Human(y))
18. ∀x ∀y (Knows(x,y)→Knows(y,x))
19. ∃x ∃y (City(x)∧City(y)∧¬SameCity(x,y))
English: There exist two cities that are not the same.
21. ∃x ∃y (Loves(x,y)∧¬Loves(y,x))
English: There is someone who loves someone, but that person doesn't love them back.
23. ∀x ∃y (Animal(x)∧InZoo(x,y))
24. ∃x ∃y (BankAccount(x)∧Deposit(y,x))
These examples showcase how First-Order Logic uses quantifiers and connectives to form
complex and nuanced statements about the world, capturing relationships, properties, and
conditions in a formal way.
The basic elements of First-Order Logic (FOL) are the fundamental components
used to express statements and form logical expressions. These elements allow
us to describe objects, relations, and functions within a domain of discourse. The
key components are as follows:
3. Constants
4. Variables
5. Quantifiers
6. Functions
7. Logical Connectives
8. Terms
9. Atomic Formulas
Example: Consider the figure below, which illustrates a model containing five objects, two binary relations, three unary relations, and one unary function.
Five objects:
1. Richard the Lionheart, King of England from 1189 to 1199;
2. His younger brother, the evil King John, who ruled from 1199 to 1215;
3. The left legs of Richard and John (two objects); and
4. A crown.
Tuple: The brotherhood relation in this model is the set { <Richard the Lionheart,
King John>, <King John, Richard the Lionheart> } .
Two binary relations: the “brother” and “on head” relations
Three unary relations/properties: Person, King, and Crown
One unary function: LeftLeg
Choice of Names: Users can choose the names of these symbols freely, similar
to how they choose proposition symbols.
Arity: Each predicate and function symbol has an arity, which indicates the
number of arguments it takes.
Possible Interpretations Count: If there are five objects in a model, there are 5 × 5 = 25 possible interpretations just for the two constant symbols Richard and John.
Not every object needs a name; for instance, the crown and legs might not be
named in the intended interpretation.
Objects with Multiple Names: An object can have several names. For example,
both Richard and John could refer to the crown.
Summary of a Model:
A model in first-order logic consists of a set of objects and an interpretation that maps constant symbols to objects, predicate symbols to relations over those objects, and function symbols to functions on them.
For example, if there are two constant symbols and only one object, both symbols must refer to that same object; even with more objects, two symbols can still refer to the same object.
If there are more objects than constant symbols, some objects will have no names.
Fig: Some members of the set of all models for a language with two constant
symbols, R and J, and one binary relation symbol. The interpretation of each
constant symbol is shown by a gray arrow. Within each model, the related
objects are connected by arrows.
4.1.b.3: Terms
Reasoning About Terms: We can discuss concepts like “everyone has a left leg” without needing to define what LeftLeg means. This is different from programming, where a defined subroutine is needed to return a value.
The function symbol f refers to a specific function in the model (let’s call
it F).
The argument terms (like t1, t2) refer to objects in the domain (let's call
them d1, d2).
The entire term refers to the object that results from applying the function
F to the objects d1, d2, etc.
Example: If LeftLeg refers to a function in our model, and John refers to King
John, then LeftLeg(John) represents King John’s left leg.
Complex Terms in Atomic Sentences: Atomic sentences can also use complex
terms as arguments.
Example: Married(Father(Richard), Mother(John)) means that
Richard’s father is married to John’s mother.
Building Complex Sentences: We can use logical connectives (like NOT, AND,
OR, and IMPLIES) to create more complex sentences in first-order logic. These
connectives follow the same rules (syntax and semantics) as in propositional
calculus.
Examples of Complex Sentences: Here are four sentences that are true in a
specific model based on our intended interpretation:
1. ¬Brother(LeftLeg(Richard), John): It is not true that Richard's left leg
is the brother of John.
2. Brother(Richard, John) ∧ Brother(John, Richard): Richard is the
brother of John and John is the brother of Richard.
3. King(Richard) ∨ King(John): Either Richard is a king or John is a king.
4. ¬King(Richard) ⇒ King(John): If Richard is not a king, then John is a
king.
Using the same variable name for different quantifiers can be confusing. It’s
best to use different names.
∀x ¬P ≡ ¬∃x P
¬∀x P ≡ ∃x ¬P
∀x P ≡ ¬∃x ¬P
∃x P ≡ ¬∀x ¬P
¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬(P ∧ Q) ≡ ¬P ∨ ¬Q
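These equivalences can be checked mechanically over any finite domain. The sketch below (illustrative only; the domain and the property P are arbitrary choices, not from the source) verifies the quantifier-negation laws by brute force in Python.

# Brute-force check of the quantifier-negation equivalences over a small,
# made-up finite domain. P is an arbitrary unary property.
domain = range(10)

def forall(pred, dom):
    return all(pred(x) for x in dom)

def exists(pred, dom):
    return any(pred(x) for x in dom)

P = lambda x: x % 2 == 0          # an arbitrary property: "x is even"

# ∀x ¬P(x)  ≡  ¬∃x P(x)
assert forall(lambda x: not P(x), domain) == (not exists(P, domain))

# ¬∀x P(x)  ≡  ∃x ¬P(x)
assert (not forall(P, domain)) == exists(lambda x: not P(x), domain)

# ∀x P(x)  ≡  ¬∃x ¬P(x)
assert forall(P, domain) == (not exists(lambda x: not P(x), domain))

print("All quantifier-negation equivalences hold on this domain.")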
Quantifiers are essential for expressing general rules and properties about
objects in first-order logic. Understanding how to use universal (∀) and existential
(∃) quantifiers, their nesting, and their connection through negation is crucial for
effective reasoning in logic.
4.1.b.7: Equality
Using Equality in Atomic Sentences: In first-order logic, we can create atomic
sentences using the equality symbol (=) to indicate that two terms refer to the
same object.
Example: Father(John) = Henry means that the object represented by
Father(John) is the same as the object represented by Henry.
Negation of Equality: The equality symbol can also be used with negation (¬) to
assert that two terms are not the same object.
Example: To express that Richard has at least two brothers, we can write:
∃x, y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x = y)
This states that there exist two different brothers (x and y) of Richard.
Equality in first-order logic is a powerful tool for stating facts about objects. By
using the equality symbol, we can express when two terms refer to the same
object or ensure that they refer to different ones. Being careful with the use of
negation is essential for accurately conveying relationships between objects.
Correct Expression: To accurately convey that "Richard’s brothers are John and
Geoffrey," we need a more detailed expression:
Brother(John, Richard) ∧ Brother(Geoffrey, Richard) ∧ John ≠ Geoffrey
∧ ∀x Brother(x, Richard) ⇒ (x = John ∨ x = Geoffrey).
This ensures that John and Geoffrey are indeed the only brothers of Richard.
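As an illustrative check (not part of the source text), the sketch below evaluates this expression over a tiny hand-built model in Python; the domain and the Brother tuples are assumptions made up for the example.

# Evaluate "Richard's brothers are John and Geoffrey" over a tiny model.
domain = {"Richard", "John", "Geoffrey", "Henry"}
brother = {("John", "Richard"), ("Geoffrey", "Richard")}   # Brother(x, Richard)

def richards_brothers_are_john_and_geoffrey():
    conjuncts = [
        ("John", "Richard") in brother,
        ("Geoffrey", "Richard") in brother,
        "John" != "Geoffrey",
        # ∀x Brother(x, Richard) ⇒ (x = John ∨ x = Geoffrey)
        all((x, "Richard") not in brother or x in ("John", "Geoffrey")
            for x in domain),
    ]
    return all(conjuncts)

print(richards_brothers_are_john_and_geoffrey())   # True for this model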
Challenges with First-Order Logic: This detailed expression is longer and more
complex than how we naturally speak, making it easy to make mistakes when
translating knowledge into first-order logic. Such errors can lead to unexpected
results in logical reasoning systems.
Possible Models: In database semantics, there are limited possible models for a
situation. For instance:
With two objects, there are 16 different combinations of relationships that can
satisfy the conditions, much fewer than the infinite possibilities in standard
first-order logic.
Choosing the Right Approach: There is no single "correct" way to interpret
logic. The best choice depends on:
o How clear and simple it is to express the knowledge.
o How easy it is to create logical rules from that knowledge.
Database semantics works well when we are sure of the identities of all objects
and have all relevant facts. However, it can be tricky when details are unclear.
The figure below shows some of the models, ranging from the model with no tuples satisfying the relation to the model with all tuples satisfying the relation. With two objects, there are four possible two-element tuples, so there are 2⁴ = 16 different subsets of tuples that can satisfy the relation. Thus, there are 16 possible models in all, far fewer than the infinitely many models of the standard first-order semantics.
Fig: Some members of the set of all models for a language with two constant
symbols, R and J, and one binary relation symbol, under database semantics.
The interpretation of the constant symbols is fixed, and there is a distinct object
for each constant symbol.
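The count of 16 models can be reproduced directly. The short sketch below (an illustration, not code from the book) enumerates every subset of the four possible ordered pairs over the two objects R and J.

from itertools import combinations, product

# Under database semantics with two named objects, a binary relation is
# just a subset of the 4 possible ordered pairs; enumerate all subsets.
objects = ["R", "J"]
pairs = list(product(objects, repeat=2))     # 4 possible tuples

models = []
for k in range(len(pairs) + 1):
    for subset in combinations(pairs, k):
        models.append(set(subset))

print(len(pairs))    # 4
print(len(models))   # 16 = 2**4 possible models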
Advanced Example: In the next section, we’ll look at a more detailed example:
electronic circuits.
o Example assertions:
TELL(KB, King(John)) – asserts that John is a king.
TELL(KB, Person(Richard)) – asserts that Richard is a person.
TELL(KB, ∀x King(x) ⇒ Person(x)) – asserts that all kings are
people.
Queries (ASK):
o ASK lets us check if something is true in the KB. These questions are called
queries or goals.
Example: ASK(KB, King(John)) returns true if John is indeed a
king in the KB.
Quantified Queries:
We can use ASK with an existentially quantified sentence to check whether there is some object that fits the query, e.g., ASK(KB, ∃x Person(x)).
However, this only tells us that such an object exists; it does not provide the specific values that make the query true. For that we use ASKVARS, which returns the variable bindings (substitutions) that satisfy the query.
o ASKVARS works well with KBs made only of Horn clauses, where variables can always be bound to specific values to make queries true.
o In general first-order logic, some queries may be true without the variables being bound to any particular values.
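The sketch below is a toy illustration of the TELL/ASK/ASKVARS interface for ground atomic facts only; it omits rules and quantified sentences entirely, and the class and method names are assumptions, not the book's API.

# A toy knowledge base supporting TELL / ASK / ASKVARS for ground atomic
# facts only. Facts are stored as (predicate, arg1, arg2, ...) tuples.
class ToyKB:
    def __init__(self):
        self.facts = set()

    def tell(self, *fact):
        self.facts.add(fact)

    def ask(self, *query):
        """True if the ground query is in the KB."""
        return query in self.facts

    def askvars(self, predicate):
        """Yield the argument tuples for which predicate(...) holds."""
        for fact in self.facts:
            if fact[0] == predicate:
                yield fact[1:]

kb = ToyKB()
kb.tell("King", "John")
kb.tell("Person", "Richard")
kb.tell("Person", "John")

print(kb.ask("King", "John"))            # True
print(sorted(kb.askvars("Person")))      # [('John',), ('Richard',)]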
Predicates:
Defining Relationships:
We can also use infix notation for readability, which means writing m + n instead
of +(m, n).
For example:
m + 1 means we are adding 1 to m.
The addition axiom in infix notation looks like (m + 1) + n = (m + n) + 1.
This builds addition as repeated application of the successor function.
Once we define addition with Peano axioms, we can easily build other math
operations:
We can also define division, remainders, and even concepts like prime
numbers.
So, all of number theory can be constructed using just one constant (0), one
function (successor), one predicate (NatNum), and four axioms. This
foundation supports advanced applications like cryptography.
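To make the successor-based construction concrete, here is a small illustrative Python sketch (not from the source) that represents natural numbers as nested applications of a successor function S to 0 and defines addition by recursion, mirroring the axioms 0 + n = n and (m + 1) + n = (m + n) + 1. The encoding as nested tuples is an assumption.

# Natural numbers as nested successor applications: 0, S(0), S(S(0)), ...
def S(n):                 # successor: represent S(n) as a nested tuple
    return ("S", n)

ZERO = 0

def add(m, n):
    if m == ZERO:                      # 0 + n = n
        return n
    _, m_pred = m                      # m = S(m_pred)
    return S(add(m_pred, n))           # (m_pred + 1) + n = (m_pred + n) + 1

def to_int(n):            # convert a successor term back to a Python int
    count = 0
    while n != ZERO:
        _, n = n
        count += 1
    return count

two = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))   # 5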
Sets: Sets are also fundamental in math and common sense. Using sets, we
can define number theory and represent collections of elements, including
the empty set. We build up sets by adding elements, finding intersections
(common elements), and unions (combined elements).
Set Theory Notation as Syntactic Sugar: We use familiar symbols in set
theory as syntactic sugar for clarity. For instance:
o {} represents the empty set.
o Set(x) tells us if something is a set.
o x ∈ s means x is an element of set s.
o s1 ⊆ s2 means set s1 is a subset of s2.
o s1 ∩ s2 and s1 ∪ s2 are the intersection and union of sets s1 and
s2, respectively.
o {x | s} represents a new set created by adding x to s.
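As a quick illustration (not in the source), the FOL set vocabulary above maps closely onto Python's built-in set operations; the sample sets are made up.

# The FOL set vocabulary mapped onto Python's built-in sets (illustrative).
empty = set()                      # {}
s = {1, 2, 3}

print(2 in s)                      # x ∈ s       (membership)
print({1, 2} <= s)                 # s1 ⊆ s2     (subset)
print(s & {2, 3, 4})               # s1 ∩ s2     (intersection) -> {2, 3}
print(s | {4})                     # s1 ∪ s2     (union)        -> {1, 2, 3, 4}
print(s | {0})                     # {x | s}     (adjoining x to s)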
Lists
Definition of Lists:
Vocabulary Used:
Syntactic Sugar:
Just like with sets, we often use simpler notation when writing about lists:
o The empty list is written as [].
o Cons(x, y) (where y is a non-empty list) is written as [x|y].
o Cons(x, Nil) (the list containing only the element x) is written
as [x].
o A list with several elements, such as [A, B, C], corresponds to
the nested structure Cons(A, Cons(B, Cons(C, Nil))).
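A small illustrative Python sketch of the Cons representation and the [x|y] sugar described above; the helper names and the pair encoding are assumptions.

# Lists built from Nil and Cons(x, y), as in the syntactic-sugar rules above.
Nil = None

def cons(x, y):
    return (x, y)          # Cons(x, y) as a pair

def from_python_list(xs):
    """[A, B, C] -> Cons(A, Cons(B, Cons(C, Nil)))"""
    result = Nil
    for x in reversed(xs):
        result = cons(x, result)
    return result

def to_python_list(lst):
    out = []
    while lst is not Nil:
        head, lst = lst
        out.append(head)
    return out

abc = from_python_list(["A", "B", "C"])
print(abc)                    # ('A', ('B', ('C', None)))
print(to_python_list(abc))    # ['A', 'B', 'C']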
Agent Perception:
Logical Implications:
The percept data leads to certain facts about the current state:
o Example Sentences:
∀t,s,g,m,c Percept([s, Breeze, g, m, c], t) ⇒ Breeze(t)
∀t,s,b,m,c Percept([s, b, Glitter, m, c], t) ⇒ Glitter(t)
This is a simple reasoning process called perception and will be explored
further in Chapter 24.
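These percept rules can be mimicked by simple pattern checks on the percept list. The sketch below is an illustration only, not the agent code from the book; the percept layout [Stench, Breeze, Glitter, Bump, Scream] follows the usual wumpus-world convention, and the function names are assumptions.

# Percepts are 5-element lists [Stench, Breeze, Glitter, Bump, Scream];
# None marks an absent component. The two rules above become simple checks.
def breeze(percept, t):
    """Percept([s, Breeze, g, m, c], t) => Breeze(t)"""
    return percept[1] == "Breeze"

def glitter(percept, t):
    """Percept([s, b, Glitter, m, c], t) => Glitter(t)"""
    return percept[2] == "Glitter"

p = ["Stench", "Breeze", None, None, None]   # percept at time t = 5
print(breeze(p, 5))    # True  -> assert Breeze(5)
print(glitter(p, 5))   # False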
Reflex Behavior:
Objects in the Wumpus World include squares, pits, and the Wumpus.
Instead of naming each square (e.g., Square1, Square2), we use lists to
represent their coordinates:
o Example of Adjacency (a Python sketch appears after this list):
∀x,y,a,b Adjacent([x,y],[a,b]) ⇔ (x = a ∧ (y = b-1 ∨ y = b+1)) ∨ (y = b ∧ (x = a-1 ∨ x = a+1))
Pits: Use a unary predicate Pit to indicate where pits are located.
Wumpus: Represented simply by a constant, Wumpus.
The agent’s location is tracked over time:
o Example: At(Agent, s, t) means the agent is at square s at
time t.
The Wumpus's location can be fixed:
o Example: ∀t At(Wumpus, [2,2], t)
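Here is an illustrative Python sketch (assumed helper names and data, not the book's code) of the adjacency definition above and of a time-indexed location fact.

# Square coordinates are [x, y] lists; the Adjacent definition above becomes
# a direct Boolean test. The example locations are illustrative.
def adjacent(sq1, sq2):
    (x, y), (a, b) = sq1, sq2
    return (x == a and (y == b - 1 or y == b + 1)) or \
           (y == b and (x == a - 1 or x == a + 1))

print(adjacent([1, 1], [1, 2]))   # True
print(adjacent([1, 1], [2, 2]))   # False (diagonal squares are not adjacent)

# A time-indexed location fact At(Agent, s, t), with the Wumpus fixed at [2, 2]:
at = {("Agent", (1, 1), 0), ("Wumpus", (2, 2), 0)}
print(("Wumpus", (2, 2), 0) in at)   # True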
Properties of Squares:
Deducing Locations:
By identifying breezy and non-breezy squares, the agent can infer where pits and the Wumpus are located.
The first-order logic framework allows for more efficient and clear
representations of the Wumpus World, making it easier to manage the agent's
perception, actions, and the environment.
Knowledge engineering projects can vary in many ways, but they all generally
include the following steps:
1. Identify the Task: The knowledge engineer defines what questions the
knowledge base will answer and what facts are needed for each problem. For
example, in the wumpus world, does the knowledge base need to decide on
actions (like moving or grabbing) or just provide information about the
environment (like the presence of pits or wumpus)? This step is similar to the
PEAS process for designing agents discussed in Chapter 2.
2. Assemble Relevant Knowledge: The knowledge engineer may already be an
expert in the field or may need to collaborate with experts to gather the
necessary information, a process known as knowledge acquisition. At this
stage, knowledge is not formally represented. The goal is to understand the
knowledge base's scope based on the task and how the domain functions.
o For the wumpus world, which has a defined set of rules, identifying relevant knowledge is straightforward. For example, the adjacency of squares needs to be known, but this was not explicitly stated in the wumpus-world rules. In real-world domains, determining what knowledge is relevant can be complex, such as deciding if a VLSI simulation needs to consider stray capacitances and skin effects.
3. Decide on Vocabulary: The engineer translates important concepts into
logical names, including predicates, functions, and constants. This involves
stylistic decisions that can significantly affect the project's success. For
example, should pits be represented as objects or as a unary predicate on
squares? Should the agent’s orientation be a function or a predicate? Should
the wumpus’s location depend on time? The choices made form the ontology
of the domain, which describes what exists without detailing their specific
properties or relationships.
4. Encode General Knowledge: The knowledge engineer writes axioms for all
vocabulary terms. This clarifies the meanings and allows experts to verify the
content. This step often uncovers misunderstandings or gaps, requiring a
return to step 3 for revisions.
For example, suppose the knowledge base includes a diagnostic rule for finding the wumpus, such as:
∀s Smelly(s) ⇒ Adjacent(Home(Wumpus), s)
Because this rule is only an implication, not a biconditional, the agent will never be able to prove the absence of wumpuses.
o Initially, the answers to queries may not match expectations. While the answers reflect the knowledge base's content, they may not be what the user anticipates. For example, if an axiom is missing, some queries may not be answerable. Debugging may involve identifying gaps or weak axioms by noticing where a chain of reasoning stops unexpectedly. Missing or weak axioms can lead to incomplete or incorrect conclusions; for example, a statement like
∀x NumOfLegs(x, 4) ⇒ Mammal(x)
is false for reptiles, amphibians, and, more importantly, tables. The falsehood of this sentence can be determined independently of the rest of the knowledge base. In contrast, a typical programming error, like offset = position + 1, cannot be assessed without context, such as whether "offset" refers to the current position or the next one, or whether the value of "position" changes elsewhere in the program.
Fig : A digital circuit C1, purporting to be a one-bit full adder. The first two inputs are the
two bits to be added, and the third input is a carry bit. The first output is the sum, and the
second output is a carry bit for the next adder. The circuit contains two XOR gates, two AND gates, and one OR gate.
3. Decide on Vocabulary
We will discuss circuits, terminals, signals, and gates. Here’s how we will
represent them:
Gates: Each gate is an object named by a constant, e.g., X1; the predicate Gate(X1) asserts that it is a gate, and its type is given by a function, e.g., Type(X1) = XOR.
Circuits: Represented by a predicate, e.g., Circuit(C1).
Terminals: Identified using a predicate, e.g., Terminal(x). Each gate
can have input and output terminals, denoted by functions In(1,X1)
and Out(1,X1).
Connectivity: Represented by a predicate Connected, which connects
terminals, e.g., Connected(Out(1,X1),In(1,X2)).
Signal Values: We can use a predicate On(t) to check if a signal is on,
but it's easier to use objects 1 and 0 with a function Signal(t) to
represent signal values.
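One way this vocabulary could be written down as plain data is sketched below (an illustration only; the terminal encoding, the fragment of connections shown, and the sample signal values are assumptions).

# A fragment of the circuit vocabulary as plain Python data (illustrative).
# Terminals are encoded as ("in"/"out", index, gate_or_circuit).
gate_type = {"X1": "XOR", "X2": "XOR", "A1": "AND", "A2": "AND", "O1": "OR"}

connected = {
    (("out", 1, "X1"), ("in", 1, "X2")),    # Connected(Out(1,X1), In(1,X2))
    (("in", 1, "C1"), ("in", 1, "X1")),     # Connected(In(1,C1), In(1,X1))
}

signal = {("in", 1, "C1"): 1, ("in", 2, "C1"): 0, ("in", 3, "C1"): 1}

print(gate_type["X1"])                                     # XOR
print((("out", 1, "X1"), ("in", 1, "X2")) in connected)    # True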
3. Connectivity is commutative: ∀t1, t2 Connected(t1, t2) ⇔ Connected(t2, t1)
9. AND, OR, and XOR gates have two inputs and one output; NOT gates
have one input and one output.
Connections:
This final query will return a complete input–output table for the device, which
can be used to check that it does in fact add its inputs correctly. This is a simple
example of circuit verification.
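As an illustrative counterpart (not the book's verification procedure), the sketch below enumerates the input-output table of a one-bit full adder wired as a standard full adder (two XOR, two AND, and one OR gate, as in the figure caption) and checks that it adds its inputs correctly.

from itertools import product

# Verify a one-bit full adder by enumerating its input-output table.
# Sum = XOR(XOR(i1, i2), i3); CarryOut = OR(AND(i1, i2), AND(XOR(i1, i2), i3)).
def full_adder(i1, i2, i3):
    x1 = i1 ^ i2            # first XOR gate
    o_sum = x1 ^ i3         # second XOR gate -> sum output
    a1 = i1 & i2            # first AND gate
    a2 = x1 & i3            # second AND gate
    carry = a1 | a2         # OR gate -> carry output
    return o_sum, carry

print(" i1 i2 i3 | sum carry")
for i1, i2, i3 in product([0, 1], repeat=3):
    s, c = full_adder(i1, i2, i3)
    # The circuit is correct iff (carry, sum) is the binary sum of the inputs.
    assert 2 * c + s == i1 + i2 + i3
    print(f"  {i1}  {i2}  {i3} |  {s}    {c}")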
Let us begin with universal quantifiers. Suppose our knowledge base contains
the standard folkloric axiom stating that all greedy kings are evil:
∀ x King(x) ∧ Greedy(x) ⇒ Evil(x) .
Then it seems quite permissible to infer any of the following sentences:
• King(John) ∧ Greedy(John) ⇒ Evil(John)
• King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
• King(Father (John)) ∧ Greedy(Father (John)) ⇒ Evil(Father (John)) .
• … and so on, for every ground term substituted for x.
The rule of Universal Instantiation (UI for short) says that we can infer any
sentence obtained by substituting a ground term (a term without variables) for
the variable.
To write out the inference rule formally, we use SUBST(θ, α) to denote the result of applying the substitution θ to the sentence α. Then the rule is written: from ∀v α, infer SUBST({v/g}, α), for any variable v and ground term g.
In the rule for Existential Instantiation, the variable is replaced by a single new constant symbol. The formal statement is as follows: for any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base, from ∃v α we can infer SUBST({v/k}, α).
Existential Sentences: These sentences state that there is at least one object that
meets a certain condition.
Existential Instantiation: This rule lets us assign a unique name to an object that
meets the condition in an existential sentence. However, this name should not
already belong to any other object.
Application Rules:
Inferential Equivalence: Although the knowledge base isn’t exactly the same
after applying Existential Instantiation, it remains inferentially equivalent. This
means the revised knowledge base will be satisfiable under the same conditions
as the original one.
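Below is a minimal sketch of SUBST, the operation at the heart of both instantiation rules. Sentences are encoded as nested tuples and variables as lowercase strings; these representational choices, and the helper names, are assumptions rather than the book's notation.

import itertools

# Sentences as nested tuples, e.g. ("King", "x"); variables are lowercase.
def is_variable(term):
    return isinstance(term, str) and term[:1].islower()

def subst(theta, sentence):
    """SUBST(theta, alpha): replace variables by their bindings."""
    if isinstance(sentence, tuple):
        return tuple(subst(theta, part) for part in sentence)
    if is_variable(sentence):
        return theta.get(sentence, sentence)
    return sentence

# Universal Instantiation: substitute a ground term for the variable.
rule = ("=>", ("and", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))
print(subst({"x": "John"}, rule))

# Existential Instantiation: substitute a brand-new constant symbol.
_counter = itertools.count(1)
def new_constant():
    return f"C{next(_counter)}"       # a name not used elsewhere in the KB

crown_sentence = ("and", ("Crown", "x"), ("OnHead", "x", "John"))
print(subst({"x": new_constant()}, crown_sentence))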
Challenges – Semidecidability:
If a sentence is not entailed, there is no way to know for sure; the proof
process might continue indefinitely.
This resembles the halting problem in Turing machines, where we can't
always know if a process will end.
The entailment problem in first-order logic is semidecidable:
o Algorithms exist that confirm when a sentence is entailed (saying
“yes”).
o However, no algorithm can reliably say “no” for every non-entailed
sentence.
Example: Suppose our knowledge base contains just the sentences
∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
Greedy(John)
Brother(Richard, John)
Modus Ponens
Modus Ponens, also known as "the law of detachment", is a fundamental rule
of logic in propositional reasoning. It states that:
If a conditional statement is true (e.g., "If P, then Q") and the antecedent
(P) is true, then the consequent (Q) must also be true.
Formal Representation: From P ⇒ Q and P, we may infer Q.
Example: From "If it is raining, then the ground is wet" and "It is raining," we conclude "The ground is wet."
Applications: Modus Ponens is used in Mathematics, Computer Science, Philosophy, and Artificial Intelligence.
Unification Algorithm
3. If both terms are constants, unification succeeds only if they are the
same.
Example:
o a and a: Succeeds.
o a and b: Fails (different constants).
If both terms are compound terms (a function or predicate applied to arguments), unification succeeds under these conditions:
o The outer operators (function or predicate names) must match.
o The number of arguments must be the same.
o Their arguments must unify recursively.
Example:
o f(x,b) and f(a,y): unify x with a and y with b, giving the result {x→a, y→b}.
If both terms are lists: unify the first elements, then unify the rest of the lists recursively.
Example:
o [x,y] and [a,b]:
Unify x with a.
Unify y with b.
Result: {x→a,y→b}.
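Here is a compact recursive unifier covering the cases discussed above (identical constants, variables, compound terms, and lists). It is a simplified sketch: it omits the occur check, and the "?"-prefix convention for variables is an assumption, not the notation used in the notes.

# A small recursive unifier. Variables are strings starting with "?";
# everything else (constants, function/predicate names) is treated as atomic.
def is_variable(t):
    return isinstance(t, str) and t.startswith("?")

def unify(x, y, theta=None):
    if theta is None:
        theta = {}
    if theta is False:                           # failure propagates
        return False
    if x == y:                                   # identical terms
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, (list, tuple)) and isinstance(y, (list, tuple)):
        if len(x) != len(y):                     # arity must match
            return False
        if len(x) == 0:
            return theta
        return unify(list(x)[1:], list(y)[1:],
                     unify(x[0], y[0], theta))   # first element, then the rest
    return False                                 # e.g. two different constants

def unify_var(var, x, theta):
    if var in theta:
        return unify(theta[var], x, theta)
    if is_variable(x) and x in theta:
        return unify(var, theta[x], theta)
    return {**theta, var: x}

print(unify("A", "A"))                               # {}    (succeeds)
print(unify("A", "B"))                               # False (different constants)
print(unify(["f", "?x", "b"], ["f", "a", "?y"]))     # {'?x': 'a', '?y': 'b'}
print(unify(["?x", "?y"], ["a", "b"]))               # {'?x': 'a', '?y': 'b'}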
These queries form a subsumption lattice, as shown in Figure 9.2(a). The lattice
has some interesting properties. For example, the child of any node in the lattice
is obtained from its parent by a single substitution; and the “highest” common
descendant of any two nodes is the result of applying their most general unifier.
The portion of the lattice above any ground fact can be constructed
systematically (Exercise 9.5). A sentence with repeated constants has a slightly
different lattice, as shown in Figure 9.2(b). Function symbols and variables in the
sentences to be stored introduce still more interesting lattice structures.
Forward chaining is a reasoning method that starts with the known facts and applies inference rules to derive new conclusions until the goal is reached or no further inferences can be made.
In essence, it proceeds forward from the premises to the conclusion.
Example (the crime problem): The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.
We will prove that West is a criminal.
The first part, “. . . it is a crime for an American to sell weapons to hostile nations”, is written in FOL as:
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
From the facts and this rule, forward chaining lets us conclude that Colonel West is indeed a criminal, since he sold missiles to Nono, a hostile nation.
DATALOG :
• This knowledge base contains no function symbols and is therefore an
instance of the class of Datalog knowledge bases.
• Datalog is a language that is restricted to first-order definite clauses with
no function symbols.
• Datalog gets its name because it can represent the type of statements
typically made in relational databases.
Explanation of Algorithm :
This algorithm is an implementation of Forward Chaining with a goal-directed
query mechanism, specifically designed for First-Order Logic (FOL) knowledge
bases.
It's called Forward Chaining with Ask (FOL-FC-ASK). Let's break down the steps:
Algorithm:
1. Inputs:
• KB: The knowledge base, which consists of a set of first-order
definite clauses.
• α: The query, which is an atomic sentence.
2. Loop until no new sentences are inferred:
• Initialize new as an empty set.
3. Iterate through each rule in the knowledge base:
Example 1:
Knowledge Base (KB): Parent(John, Mary), Parent(Mary, Alice), and the rule Parent(x, y) ∧ Parent(y, z) ⇒ Grandparent(x, z).
Query (α):
Grandparent(John, Alice)
Step-by-Step Execution:
1. Iteration 1:
o Rule: Parent(x, y) ∧ Parent(y, z) ⇒ Grandparent(x, z)
o Possible substitutions:
For x=John, y=Mary from Parent(John, Mary).
For y=Mary, z=Alice from Parent(Mary, Alice).
Combined substitution θ = {x=John, y=Mary, z=Alice}
satisfies the premise.
o Conclusion: Apply θ to Grandparent(x, z) →
Grandparent(John, Alice).
o Add Grandparent(John, Alice) to new.
2. Unification:
o The newly inferred fact Grandparent(John, Alice) unifies with the query α (trivially, since both are ground), so the query succeeds.
Output:
The algorithm returns success (the empty substitution), indicating that John is indeed the grandparent of Alice.
Key Points:
Forward chaining starts with facts and rules in the KB, making inferences step-by-
step until it proves or disproves the query.
It is data-driven, unlike backward chaining, which is goal-driven.
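Below is a compact sketch of the forward-chaining loop on exactly this example, simplified from FOL-FC-ASK to a single Datalog-style rule with no function symbols; the "?x" variable convention and the helper names are assumptions.

from itertools import product

# Datalog-style forward chaining on the Grandparent example above.
facts = {("Parent", "John", "Mary"), ("Parent", "Mary", "Alice")}

rule_premises = [("Parent", "?x", "?y"), ("Parent", "?y", "?z")]
rule_conclusion = ("Grandparent", "?x", "?z")

def substitute(atom, theta):
    return tuple(theta.get(t, t) for t in atom)

def match(premise, fact, theta):
    """Extend theta so that premise matches fact, or return None."""
    if len(premise) != len(fact) or premise[0] != fact[0]:
        return None
    theta = dict(theta)
    for p, f in zip(premise[1:], fact[1:]):
        if p.startswith("?"):                # variable: bind or check binding
            if theta.get(p, f) != f:
                return None
            theta[p] = f
        elif p != f:                         # constants must be identical
            return None
    return theta

def forward_chain(facts):
    while True:
        new = set()
        # try every way of satisfying the premises with known facts
        for chosen in product(facts, repeat=len(rule_premises)):
            theta = {}
            for premise, fact in zip(rule_premises, chosen):
                theta = match(premise, fact, theta)
                if theta is None:
                    break
            if theta is not None:
                inferred = substitute(rule_conclusion, theta)
                if inferred not in facts:
                    new.add(inferred)
        if not new:                          # no new sentences: stop
            return facts
        facts = facts | new

result = forward_chain(facts)
print(("Grandparent", "John", "Alice") in result)   # True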
Example 2:
Example 3:
Diff(wa,nt)∧Diff(nt,q)∧Diff(q,nsw)∧Diff(nsw,v)⇒Colorable()
This corresponds to the reduced CSP depicted in Figure 6.12 on page 224.
Algorithms designed for solving tree-structured CSPs can be directly applied to
the problem of rule matching.
Incremental forward chaining: on each iteration, a rule needs to be considered only if its premise includes a conjunct that unifies with a fact newly inferred on the previous iteration. This method maintains efficiency while generating the same set of facts.
With appropriate indexing, the rules that can be triggered by a given fact are easy to identify, allowing the system to update dynamically as new facts are introduced. Typically, only a small subset of rules is activated by a new fact, but a lot of unnecessary work still occurs when partial matches with unsatisfied premises are rebuilt. For instance, a partial match may be constructed on one iteration, discarded when it fails to complete, and then reconstructed from scratch on the next iteration. It is more efficient to retain and gradually complete these partial matches as new facts come in.
The Rete algorithm addresses this issue by preprocessing the rules into a dataflow network in which each node represents a literal from a rule premise. The network stores the variable bindings for partial matches, filtering out inconsistent ones and avoiding recomputation. Rete networks have been a key component of production systems; for example, XCON (originally called R1), an early forward-chaining system, used thousands of rules to configure computer components for customers.