AI Unit 3 Notes

Unit 3

# First order logic (FOL)


• First order logic is also known as Predicate logic or First order predicate
logic. First order logic is a powerful language that represents information
about objects in a natural way and can also express the relationships
between those objects.
• First order logic is sufficiently expressive to represent natural
language statements in a concise way.
# Inference rules in FOL

1. Universal generalization:
o Universal generalization is a valid inference rule which states that if the
premise P(c) is true for an arbitrary element c in the universe of
discourse, then we can conclude ∀ x P(x).
o It can be represented as: P(c) / ∀ x P(x)
o This rule can be used if we want to show that every element has a similar
property.

2. Universal instantiation:
o Universal instantiation (UI), also called universal elimination, is a valid
inference rule. It can be applied multiple times to add new sentences.
o The UI rule states that we can infer any sentence P(c) from ∀ x P(x) by
substituting a ground term c (a constant within the domain of x) for the
variable, for any object in the universe of discourse.
o It can be represented as: ∀ x P(x) / P(c)
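As a small illustration of universal instantiation, the minimal sketch below
applies the substitution {x/John} to a sentence represented as nested tuples.
The predicates King, Greedy, Evil and the constant John are illustrative
names only, not part of these notes.

```python
# Minimal sketch of Universal Instantiation: apply a substitution {x/c}
# to a sentence represented as nested tuples, e.g. ("King", "x").
# The predicate names and the constant "John" are illustrative only.

def substitute(sentence, theta):
    """Replace every variable in `sentence` with its binding in `theta`."""
    if isinstance(sentence, tuple):
        return tuple(substitute(part, theta) for part in sentence)
    return theta.get(sentence, sentence)  # variable -> ground term, else unchanged

# forall x: King(x) and Greedy(x) => Evil(x), written without the quantifier
rule = ("=>", ("and", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))

# Universal Instantiation with the ground term John, i.e. the substitution {x/John}
print(substitute(rule, {"x": "John"}))
# ('=>', ('and', ('King', 'John'), ('Greedy', 'John')), ('Evil', 'John'))
```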

3. Existential instantiation:
o Existential instantiation, also called existential elimination, is a valid
inference rule in first order logic.
o This rule states that one can infer P(c) from a formula of the form
∃ x P(x) for a new constant symbol c.
o It can be represented as: ∃ x P(x) / P(c)

4. Existential introduction:
o This rule states that if there is some element c in the universe of
discourse which has the property P, then we can infer that there exists
something in the universe which has the property P.
o It can be represented as: P(c) / ∃ x P(x)
# Unification
o Unification is the process of finding substitutions for lifted inference rules,
which can make different logical expressions look identical.
o Unification is a procedure for determining the substitutions needed to make
two first order logic expressions match.
o Unification is an important component of all first order logic inference
algorithms. The unification algorithm takes two sentences and returns a
unifier for them, if one exists.
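The sketch below is a minimal unification procedure in Python, assuming
terms are nested tuples such as ("Knows", "John", "x"), variables are
lowercase strings, and constants are capitalised; it omits the occurs-check
that a full unifier would include.

```python
# Minimal unification sketch (no occurs-check). Terms are nested tuples
# such as ("Knows", "John", "x"); variables are lowercase strings and
# constants are capitalised. Returns a substitution dict, or None if the
# two expressions cannot be made identical.

def is_variable(t):
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, theta):
    if theta is None:
        return None
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
        return theta
    return None

def unify_var(var, val, theta):
    if var in theta:
        return unify(theta[var], val, theta)
    if is_variable(val) and val in theta:
        return unify(var, theta[val], theta)
    return {**theta, var: val}

# Knows(John, x) unifies with Knows(John, Jane) under the substitution {x/Jane}
print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane"), {}))
# -> {'x': 'Jane'}
```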
# Forward Chaining and Backward Chaining

 Forward Chaining
o Forward chaining is a method of reasoning when using inference
rules in AI.
o Forward chaining starts with the available data and uses inference
rules to extract more data (from an end user, for example) until a goal
is reached.
o An inference engine using forward chaining searches the inference
rules until it finds one where the If clause is known to be true.
o When such a rule is found, the engine can conclude, or infer, the Then
clause, resulting in the addition of new information to its dataset.
o For example, suppose that the goal is to determine the colour of my pet
Bruno given that he croaks and eats flies, and that the rule base
contains the following two rules (a short sketch of this example follows
the rules):
 If X croaks and eats flies - Then X is a frog.
 If X is a frog - Then X is red.
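Below is a minimal, hypothetical Python sketch of forward chaining on the
Bruno example, assuming facts are plain strings and each rule is a
(premises, conclusion) pair; it keeps firing rules whose If part is
satisfied until no new facts appear.

```python
# Forward chaining on the Bruno example: start from the known facts and
# fire any rule whose "If" part is fully satisfied, adding the "Then"
# clause as a new fact, until nothing new can be inferred.

rules = [
    ({"croaks", "eats flies"}, "is a frog"),   # If X croaks and eats flies Then X is a frog
    ({"is a frog"}, "is red"),                 # If X is a frog Then X is red
]
facts = {"croaks", "eats flies"}               # what we know about Bruno

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)              # infer the Then clause
            changed = True

print(facts)  # {'croaks', 'eats flies', 'is a frog', 'is red'} -> Bruno is red
```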

 Properties of forward chaining:

1. It is a bottom-up approach, as it moves from bottom to top.
2. It is a process of making a conclusion based on known facts or
data, starting from the initial state and reaching the goal state.
3. The forward chaining approach is also called data-driven, as we reach
the goal using the available data.
 Backward Chaining
o Backward chaining starts with a list of goals (or a hypothesis) and works
backwards to see if there is data available that will support any of these
goals.
o Backward chaining is a reasoning method used in Artificial Intelligence
(AI), particularly in Expert Systems and Knowledge-Based Systems, to
deduce facts or solutions. This technique is goal-driven, meaning it
focuses only on achieving the desired result.
o An inference engine using backward chaining searches the inference
rules until it finds one which has a Then clause that matches a desired
goal.
o If the If clause of that inference rule is not known to be true, then it is
added to the list of goals.
o For example, suppose again that the goal is to determine the colour of
my pet Bruno given that he croaks and eats flies, and that the rule base
contains the following two rules (a sketch of this goal-driven search
follows the rules):
 If X croaks and eats flies - Then X is a frog.
 If X is a frog - Then X is red.
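A minimal, hypothetical sketch of backward chaining on the same example,
assuming the same string-based facts and (premises, conclusion) rules: it
starts from the goal "is red" and recursively tries to prove each rule's
If part as sub-goals.

```python
# Backward chaining on the Bruno example: start from the goal and work
# backwards, proving each rule's "If" clause as a sub-goal until only
# known facts remain.

rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"is a frog"}, "is red"),
]
facts = {"croaks", "eats flies"}

def prove(goal):
    if goal in facts:
        return True                      # the goal is already a known fact
    for premises, conclusion in rules:
        # a rule whose Then clause matches the goal; its If clause
        # becomes a set of new sub-goals
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("is red"))  # True: croaks + eats flies -> frog -> red
```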

 Properties of backward chaining:

1. It is known as a top-down approach.
2. Backward chaining is based on the Modus Ponens inference rule.
3. In backward chaining, the goal is broken into sub-goals to prove the
facts true.
# Resolution
 Resolution is a method used to prove logical statements by
performing a single operation. It is a process that helps with
reasoning using statements in predicate logic.
 Resolution works with statements that have not yet been simplified
into the most convenient form for reasoning; they are first converted
into clause (CNF) form.
 To prove a statement, the resolution procedure tries to show that
the negation (opposite) of the statement leads to a contradiction when
combined with what we already know is true. In other words, it shows
that the negation of the statement is unsatisfiable (i.e., it cannot be
true). If the negation leads to a contradiction, then the original
statement must be true.

Resolution in Predicate Logic

 Resolution is a rule of inference used in predicate logic to prove the
validity of logical statements.
 The main idea behind resolution is to combine two clauses (logical
statements) to derive a new clause by eliminating a complementary pair
of literals between them.
 This technique is widely used in automated theorem proving and
artificial intelligence.
Steps for Resolution:

1. Convert the sentences into Conjunctive Normal Form (CNF): Before
applying resolution, the logical statements need to be in CNF, which is a
conjunction of disjunctions (an AND of ORs).
2. Identify complementary literals: A complementary pair of literals is a
pair where one literal is the negation of the other. For example, P and
¬P are complementary literals.
3. Unify the literals: Find a substitution that makes the complementary
literals identical.
4. Resolve the clauses: Combine the two clauses and eliminate the
complementary literals. The result is a new clause that contains all the
remaining literals of the original clauses.
5. Repeat: Continue applying resolution until a contradiction is found
(an empty clause), which indicates that the original set of clauses is
unsatisfiable.
Example:
Let's consider a simple example.
Given Clauses:

1. P(x) ∨ Q(x) (for every x, either P(x) or Q(x) is true)
2. ¬P(a) (P(a) is false)

Goal:
Prove that Q(a) holds true using resolution.
Steps:
1. Convert the clauses into CNF:
o Clause 1: P(x) ∨ Q(x)
o Clause 2: ¬P(a)

2. Identify complementary literals:
o P(x) in Clause 1 and ¬P(a) in Clause 2 are complementary when x = a.

3. Unify the literals:
o We unify P(x) with P(a) using the substitution {x/a}, which turns
Clause 1 into P(a) ∨ Q(a).

4. Resolve the clauses:
o Resolving Clause 1 (P(a) ∨ Q(a)) and Clause 2 (¬P(a)) by eliminating
the complementary pair P(a) and ¬P(a) gives us Q(a).

Result:
After applying resolution, we derive Q(a), which is the conclusion we
wanted to prove.
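The sketch below mechanises this single resolution step in Python. It is
only a minimal illustration under the assumption that literals are tuples
such as ("P", "?x"), with variables written as "?x" and negation written as
("not", ...); it unifies one complementary pair and returns the resolvent.

```python
# One resolution step on the example clauses. Literals are tuples like
# ("P", "?x"); variables start with "?", and a negated literal is wrapped
# as ("not", ...). unify() is a tiny unifier without an occurs-check.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(x, y, theta):
    if theta is None or x == y:
        return theta
    if is_var(x):
        return unify(theta[x], y, theta) if x in theta else {**theta, x: y}
    if is_var(y):
        return unify(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
        return theta
    return None

def substitute(term, theta):
    if isinstance(term, tuple):
        return tuple(substitute(p, theta) for p in term)
    return theta.get(term, term)

def negate(lit):
    return lit[1] if lit[0] == "not" else ("not", lit)

def resolve(c1, c2):
    """Return the resolvent of the first complementary, unifiable pair."""
    for l1 in c1:
        for l2 in c2:
            theta = unify(negate(l1), l2, {})
            if theta is not None:
                rest = [l for l in c1 if l != l1] + [l for l in c2 if l != l2]
                return [substitute(l, theta) for l in rest]
    return None

clause1 = [("P", "?x"), ("Q", "?x")]   # P(x) v Q(x)
clause2 = [("not", ("P", "a"))]        # ¬P(a)
print(resolve(clause1, clause2))       # [('Q', 'a')]  i.e. Q(a)
```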
