
UNIT - III

Using Predicate logic: Representing simple facts in logic - Representing Instance and Isa
relationships - Computable functions and predicates - Resolution - Natural deduction.
Representing knowledge using rules: Procedural Vs Declarative knowledge - Logic
programming - Forward Vs Backward reasoning - Matching - Control knowledge.

USING PREDICATE LOGIC


Representation of Simple Facts in Logic
Propositional logic is useful because it is simple to work with, and a decision procedure for it exists. In order to draw conclusions, facts can be represented in a more convenient way, as in:
1. Marcus is a man.
●​ man(Marcus)
2. Plato is a man.
●​ man(Plato)
3. All men are mortal.
● mortal(men)
But this propositional representation fails to capture the relationship between an individual being a man and that individual being mortal.
● How can these sentences be represented so that we can infer the third sentence from the first two?
● Propositional logic commits only to the existence of facts that may or may not be the case in the world being represented.
● It has a simple syntax and simple semantics, which suffices to illustrate the process of inference.
● However, propositional logic quickly becomes impractical, even for very small worlds.
Predicate logic
First-order Predicate logic (FOPL) models the world in terms of
●​ Objects, which are things with individual identities
●​ Properties of objects that distinguish them from other objects
●​ Relations that hold among sets of objects
●​ Functions, which are a subset of relations where there is only one “value” for any given
“input”
First-order Predicate logic (FOPL) provides
●​ Constants: a, b, dog33. Name a specific object.
●​ Variables: X, Y. Refer to an object without naming it.
●​ Functions: Mapping from objects to objects.
●​ Terms: Refer to objects
●​ Atomic Sentences: in(dad-of(X), food6) Can be true or false, Correspond to propositional
symbols P, Q.
A well-formed formula (wff) is a sentence containing no “free” variables; that is, all variables are “bound” by universal or existential quantifiers.
In (∀x)P(x, y), x is bound as a universally quantified variable, but y is free.

Quantifiers
Universal quantification
●​ (∀x)P(x) means that P holds for all values of x in the domain associated with that
variable
●​ E.g., (∀x) dolphin(x) → mammal(x)
Existential quantification
●​ (∃ x)P(x) means that P holds for some value of x in the domain associated with that
variable
●​ E.g., (∃ x) mammal(x) ∧ lays-eggs(x)
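Over a finite domain, both quantifiers can be checked directly. The following Python sketch mirrors the two examples above; the domain and the predicate sets are invented for illustration:

```python
# Universal and existential quantification over a small finite domain.
domain = ["flipper", "platypus", "rex"]

dolphin = {"flipper"}
mammal = {"flipper", "platypus", "rex"}
lays_eggs = {"platypus"}

# (forall x) dolphin(x) -> mammal(x): the implication must hold for every x.
forall_holds = all((x not in dolphin) or (x in mammal) for x in domain)

# (exists x) mammal(x) and lays_eggs(x): the conjunction must hold for some x.
exists_holds = any((x in mammal) and (x in lays_eggs) for x in domain)
```

Note how `all` with a material implication encodes ∀ and `any` with a conjunction encodes ∃.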
Consider the following example, which shows the use of predicate logic as a way of representing knowledge.
1. Marcus was a man.
●​ man(Marcus)
2. Marcus was a Pompeian.
●​ Pompeian(Marcus)
3. All Pompeians were Romans.
●​ ∀x: Pompeian(x) → Roman(x)
4. Caesar was a ruler.
●​ ruler(Caesar)
5. All Romans were either loyal to Caesar or hated him.
● inclusive-or:
● ∀x: Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
● exclusive-or:
● ∀x: Roman(x) → (loyalto(x, Caesar) ∧ ¬hate(x, Caesar)) ∨ (¬loyalto(x, Caesar) ∧ hate(x, Caesar))
6. Everyone is loyal to someone.
●​ ∀x: ∃y: loyalto(x, y)
7. People only try to assassinate rulers they are not loyal to.
● ∀x: ∀y: person(x) ∧ ruler(y) ∧ tryassassinate(x, y) → ¬loyalto(x, y)
8. Marcus tried to assassinate Caesar.
●​ tryassassinate(Marcus, Caesar)
Now suppose we want to use these statements to answer the question: Was Marcus loyal to Caesar?
Let’s try to produce a formal proof, reasoning backward from the desired goal: ¬loyalto(Marcus, Caesar)
In order to prove the goal, we need to use the rules of inference to transform it into another goal (or possibly a set of goals) that can, in turn, be transformed, and so on, until there are no unsatisfied goals remaining.
Figure: An attempt to prove ¬loyalto(Marcus, Caesar).
● The problem is that, although we know that Marcus was a man, we have no way to conclude from that that Marcus was a person. We need to add the representation of another fact to our system, namely: ∀x: man(x) → person(x)
● Now we can satisfy the last goal and produce a proof that Marcus was not loyal to Caesar.
● From this simple example, we see that three important issues must be addressed in the process of converting English sentences into logical statements and then using those statements to deduce new ones:
1. Many English sentences are ambiguous (for example, 5, 6, and 7 above). Choosing the correct interpretation may be difficult.
2. There is often a choice of how to represent the knowledge. Simple representations are desirable, but they may exclude certain kinds of reasoning.
3. Even in very simple situations, a set of sentences is unlikely to contain all the information necessary to reason about the topic at hand. In order to use a set of statements effectively, it is usually necessary to have access to another set of statements that represent facts people consider too obvious to mention.
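The backward search in the figure can be sketched as follows, with the variables already bound to Marcus and Caesar. This is a propositionalized toy, not a general prover; the string encoding of facts and the `prove` helper are assumptions of this sketch:

```python
# Backward-chaining sketch of the Marcus proof (variables pre-bound).
facts = {"man(Marcus)", "ruler(Caesar)", "tryassassinate(Marcus,Caesar)"}

# Each rule is (list of antecedent goals, conclusion).
rules = [
    (["person(Marcus)", "ruler(Caesar)", "tryassassinate(Marcus,Caesar)"],
     "not loyalto(Marcus,Caesar)"),
    (["man(Marcus)"], "person(Marcus)"),  # the extra axiom man(x) -> person(x)
]

def prove(goal):
    """Prove goal by reasoning backward from it to known facts."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(prove(g) for g in body)
               for body, conclusion in rules)
```

Without the second rule, `prove("person(Marcus)")` fails, which is exactly the gap the added axiom fills.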

Representing Instance and ISA Relationships


● The specific attributes instance and isa play an important role, particularly in a useful form of reasoning called property inheritance.
● The predicates instance and isa explicitly capture the relationships they are used to express, namely class membership and class inclusion.
● The figure below shows the first five sentences of the last section represented in logic in three different ways.
● The first part of the figure contains the representations we have already discussed. In these representations, class membership is represented with unary predicates (such as Roman), each of which corresponds to a class.
● Asserting that P(x) is true is equivalent to asserting that x is an instance (or element) of P.
● The second part of the figure contains representations that use the instance predicate explicitly.

Figure: Three ways of representing class membership: ISA Relationships

● The predicate instance is a binary one, whose first argument is an object and whose second argument is a class to which the object belongs.
● But these representations do not use an explicit isa predicate.
● Instead, subclass relationships, such as that between Pompeians and Romans, are described as shown in sentence 3.
● The implication rule states that if an object is an instance of the subclass Pompeian, then it is an instance of the superclass Roman.
● Note that this rule is equivalent to the standard set-theoretic definition of the subclass/superclass relationship.
● The third part contains representations that use both the instance and isa predicates explicitly.
● The use of the isa predicate simplifies the representation of sentence 3, but it requires that one additional axiom (shown here as number 6) be provided.
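Property inheritance through instance and isa can be sketched as a recursive lookup; the set encoding and the helper name are assumptions of this sketch:

```python
# Property inheritance via explicit instance and isa facts.
instance = {("Marcus", "man"), ("Marcus", "Pompeian"), ("Caesar", "ruler")}
isa = {("Pompeian", "Roman"), ("man", "person")}   # (subclass, superclass)

def is_instance(obj, cls):
    """True if obj belongs to cls directly or through a chain of isa links."""
    if (obj, cls) in instance:
        return True
    # Otherwise, obj is in cls if it is in some subclass of cls.
    return any(is_instance(obj, sub) for sub, sup in isa if sup == cls)
```

For example, Marcus is a Roman because instance(Marcus, Pompeian) and isa(Pompeian, Roman) hold.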
Computable Functions and Predicates
● To express simple facts, such as the following greater-than and less-than relationships:
gt(1, 0)    lt(0, 1)
gt(2, 1)    lt(1, 2)
gt(3, 2)    lt(2, 3)
● it is often useful to have computable predicates, and also computable functions. Thus we might want to be able to evaluate the truth of gt(2 + 3, 1).
● To do so requires that we first compute the value of the plus function given the arguments 2 and 3, and then send the arguments 5 and 1 to gt.
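A minimal sketch of computable predicates and functions: gt, lt, and plus are evaluated on demand rather than stored as an infinite table of facts (the function names follow the text):

```python
# Computable predicates gt/lt and a computable function plus:
# gt(2 + 3, 1) is evaluated rather than looked up as a stored fact.
def gt(a, b):
    return a > b

def lt(a, b):
    return a < b

def plus(a, b):
    return a + b

# First compute plus(2, 3) = 5, then send the arguments 5 and 1 to gt.
result = gt(plus(2, 3), 1)
```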
Consider the following set of facts, again involving Marcus:
1) Marcus was a man.
​ man(Marcus)
2) Marcus was a Pompeian.
​ Pompeian(Marcus)
3) Marcus was born in 40 A.D.
born(Marcus, 40)
4) All men are mortal.
​∀x: man(x) → mortal(x)
5) All Pompeians died when the volcano erupted in 79 A.D.
​ erupted(volcano, 79) ∧ ∀ x : [Pompeian(x) → died(x, 79)]
6) No mortal lives longer than 150 years.
​∀x: ∀t1: ∀t2: mortal(x) ∧ born(x, t1) ∧ gt(t2 − t1, 150) → dead(x, t2)
7) It is now 1991.
​ now = 1991
The example above shows how these ideas of computable functions and predicates can be useful. It also makes use of the notion of equality and allows equal objects to be substituted for each other whenever it appears helpful to do so during a proof.
● Now suppose we want to answer the question “Is Marcus alive?”
● From the statements given here, there may be two ways of deducing an answer.
● Either we can show that Marcus is dead because he was killed by the volcano, or we can show that he must be dead because he would otherwise be more than 150 years old, which we know is not possible. As soon as we attempt to follow either of those paths rigorously, however, we discover, just as we did in the last example, that we need some additional knowledge. For example, our statements talk about dying, but they say nothing that relates to being alive, which is what the question is asking.
So we add the following facts:
8) Alive means not dead.
∀x: ∀t: [alive(x, t) → ¬dead(x, t)] ∧ [¬dead(x, t) → alive(x, t)]
9) If someone dies, then he is dead at all later times.
​∀x: ∀t1: ∀t2: died(x, t1) ∧ gt(t2, t1) → dead(x, t2)
Now let’s attempt to answer the question “Is Marcus alive?” by proving: ¬alive(Marcus, now)
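The age-based branch of this argument reduces to simple arithmetic over statements 3, 4, 6, and 8; a minimal sketch (the variable names are invented):

```python
# Sketch of the age argument: born(Marcus, 40), now = 1991,
# statement 6 bounds a mortal's lifespan, statement 8 links dead and alive.
born = 40
now = 1991
LIFESPAN_LIMIT = 150   # "no mortal lives longer than 150 years"

mortal = True                                          # man -> mortal
dead_now = mortal and (now - born) > LIFESPAN_LIMIT    # statement 6
alive_now = not dead_now                               # statement 8
```

Since 1991 − 40 = 1951 far exceeds 150, Marcus cannot be alive now.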
Resolution
Propositional Resolution
1. Convert all the propositions of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in
Step 1.
3. Repeat until either a contradiction is found or no progress can be made:
1.​ Select two clauses. Call these the parent clauses.
2.​ Resolve them together. The resulting clause, called the resolvent, will be the
disjunction of all of the literals of both of the parent clauses with the following
exception: If there are any pairs of literals L and ¬ L such that one of the parent
clauses contains L and the other contains ¬L, then select one such pair and
eliminate both L and ¬ L from the resolvent.
3. If the resolvent is the empty clause, then a contradiction has been found. If it is
not, then add it to the set of clauses available to the procedure.
The Unification Algorithm
● In propositional logic, it is easy to determine that two literals cannot both be true at the same time: simply look for L and ¬L.
● In predicate logic, this matching process is more complicated, since the arguments of the predicates must be considered.
● For example, man(John) and ¬man(John) is a contradiction, while man(John) and ¬man(Spot) is not.
● Thus, in order to determine contradictions, we need a matching procedure that compares two literals and discovers whether there exists a set of substitutions that makes them identical.
● There is a straightforward recursive procedure, called the unification algorithm, that does this.
Algorithm: Unify(L1, L2)
1. If L1 or L2 are both variables or constants, then:
a. If L1 and L2 are identical, then return NIL.
b. Else if L1 is a variable, then if L1 occurs in L2 then return {FAIL}, else return (L2/L1).
c. Else if L2 is a variable, then if L2 occurs in L1 then return {FAIL}, else return (L1/L2).
d. Else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the
substitutions used to unify L1 and L2.)
5. For i ← 1 to the number of arguments in L1:
a. Call Unify with the ith argument of L1 and the ith argument of L2, putting the result in S.
b. If S contains FAIL, then return {FAIL}.
c. If S is not equal to NIL, then:
i. Apply S to the remainder of both L1 and L2.
ii. SUBST := APPEND(S, SUBST).
6. Return SUBST.
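The algorithm can be sketched directly in Python. Literals are tuples `(predicate, arg1, ...)` and variables are strings beginning with `'?'`; both this encoding and the `(value, variable)` substitution pairs (mirroring the book's L2/L1 notation) are representation choices of this sketch:

```python
# A direct sketch of Unify(L1, L2) with an occurs check.
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def occurs_in(var, term):
    """True if var appears anywhere inside term."""
    return var == term or (isinstance(term, tuple) and
                           any(occurs_in(var, t) for t in term))

def subst(s, term):
    """Apply the substitution list s (value, variable pairs) to term."""
    if isinstance(term, tuple):
        return tuple(subst(s, t) for t in term)
    for value, var in s:
        if term == var:
            return value
    return term

def unify(l1, l2):
    if l1 == l2:
        return []                       # step 1a: identical, return NIL
    if is_var(l1):                      # step 1b, with occurs check
        return "FAIL" if occurs_in(l1, l2) else [(l2, l1)]
    if is_var(l2):                      # step 1c
        return "FAIL" if occurs_in(l2, l1) else [(l1, l2)]
    if not (isinstance(l1, tuple) and isinstance(l2, tuple)):
        return "FAIL"                   # step 1d: distinct constants
    if l1[0] != l2[0] or len(l1) != len(l2):
        return "FAIL"                   # steps 2-3: predicate and arity
    result = []                         # step 4: SUBST <- NIL
    for a1, a2 in zip(l1[1:], l2[1:]):  # step 5: unify arguments in order
        s = unify(subst(result, a1), subst(result, a2))
        if s == "FAIL":
            return "FAIL"
        result += s                     # APPEND(S, SUBST)
    return result                       # step 6
```

For example, unifying hate(?x, ?y) with hate(Marcus, Caesar) yields the substitutions Marcus/?x and Caesar/?y.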
Resolution in Predicate Logic
We can now state the resolution algorithm for predicate logic as follows, assuming a set of given
statements F and a statement to be proved P:
Algorithm: Resolution
1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in
1.
3. Repeat until a contradiction is found, no progress can be made, or a predetermined
amount of effort has been expended.
1. Select two clauses. Call these the parent clauses.
2. Resolve them together. The resolvent will be the disjunction of all the literals of
both parent clauses, with appropriate substitutions performed, and with the
following exception: If there is one pair of literals T1 and ¬T2 such that one of the
parent clauses contains T1, the other contains ¬T2, and T1 and T2 are
unifiable, then neither T1 nor ¬T2 should appear in the resolvent. We call T1 and
T2 complementary literals. Use the substitution produced by the unification to
create the resolvent. If there is more than one pair of complementary literals, only
one pair should be omitted from the resolvent.
3. If the resolvent is the empty clause, then a contradiction has been found. If
it is not, then add it to the set of clauses available to the procedure.
Resolution Procedure
● Resolution is a procedure that gains its efficiency from the fact that it operates on
statements that have been converted to a very convenient standard form.
●​ Resolution produces proof by refutation.
●​ In other words, to prove a statement (i.e., to show that it is valid), resolution attempts
to show that the negation of the statement produces a contradiction with the known
statements (i.e., that it is unsatisfiable).
● The resolution procedure is a simple iterative process: at each step, two clauses, called
the parent clauses, are compared (resolved), yielding a new clause that is inferred
from them. The new clause represents ways that the two parent clauses interact with each
other. Suppose that there are two clauses in the system:
winter V summer
¬ winter V cold
●​ Now we observe that precisely one of winter and ¬ winter will be true at any point.
●​ If winter is true, then cold must be true to guarantee the truth of the second clause. If ¬
winter is true, then summer must be true to guarantee the truth of the first clause.
●​ Thus we see that from these two clauses we can deduce summer V cold
●​ This is the deduction that the resolution procedure will make.
●​ Resolution operates by taking two clauses that each contains the same literal, in this
example, winter.
● The literal must occur in positive form in one clause and in negative form
in the other. The resolvent is obtained by combining all of the literals of the two parent
clauses except the ones that cancel.
● If the clause that is produced is the empty clause, then a contradiction has been found.
For example, the two clauses
winter
¬ winter
will produce the empty clause.
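A single resolution step can be sketched on the winter/summer clauses. Here a clause is a frozenset of literals with a leading `'~'` marking negation; both are representation choices of this sketch:

```python
# Resolving two clauses on a complementary pair of literals.
def negate(lit):
    """~winter <-> winter."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return the resolvents of two clauses, one per complementary pair."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            # Combine all literals of both parents except the pair that cancels.
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# (winter v summer) and (~winter v cold) resolve to (summer v cold).
summer_or_cold = resolve(frozenset({"winter", "summer"}),
                         frozenset({"~winter", "cold"}))
```

Resolving the unit clauses winter and ¬winter yields the empty clause, signalling a contradiction.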

Natural Deduction Expanded

Natural Deduction is a formal system for logical reasoning that allows you to derive
conclusions from premises using a set of rules. The system uses basic inference rules that
describe how to manipulate logical formulas.

In propositional logic, the main operations are:

● Conjunction (AND: ∧)
● Disjunction (OR: ∨)
● Implication (IMPLIES: →)
● Negation (NOT: ¬)

Structure of a Proof

A proof in natural deduction typically follows these steps:

1.​ Start with assumptions: You assume a proposition to derive a conclusion.


2.​ Apply inference rules: You apply logical rules to move from one or more assumptions to
a conclusion.
3.​ Conclude a formula: Once you have derived a conclusion, you may discharge
assumptions (in the case of implications and negations) to finalize your proof.
Key Rules in Natural Deduction (Expanded)

1. Introduction Rules (How to introduce logical connectives)

Conjunction Introduction (∧I)

● If you have two propositions P and Q, you can combine them into a conjunction P ∧ Q.
● Form:
P
Q
∴ P ∧ Q
● Example:
If you know P (it’s raining) and Q (it’s cold), you can conclude P ∧ Q (it’s raining and it’s cold).

Disjunction Introduction (∨I)

● If you know P, you can conclude P ∨ Q, where Q is any formula.
● Form:
P
∴ P ∨ Q
● Example:
If you know P (I will study), you can conclude P ∨ Q (I will study or I will play video games).

Implication Introduction (→I)

● If assuming P leads you to conclude Q, you can conclude P → Q (if P then Q).
● Form:
P ⊢ Q
∴ P → Q
● Example:
If assuming "It rains" leads to "I take an umbrella," you can conclude "If it rains, I will take an umbrella."

Negation Introduction (¬I)

● If assuming P leads to a contradiction, you can conclude ¬P (not P).
● Form:
P ⊢ ⊥ (assuming P leads to a contradiction)
∴ ¬P
● Example:
If assuming "It’s sunny" leads to "It’s both raining and not raining," you can conclude "It’s not sunny."

2. Elimination Rules (How to break down logical connectives)

Conjunction Elimination (∧E)

● If you have P ∧ Q, you can extract either P or Q.
● Form:
P ∧ Q
∴ P
or
P ∧ Q
∴ Q
● Example:
If you know "It’s raining and it’s cold" (i.e., P ∧ Q), you can conclude "It’s raining" or "It’s cold."

Disjunction Elimination (∨E)

● If you have P ∨ Q (P or Q) and can derive the same conclusion R from both P and Q, then you can conclude R.
● Form:
P ∨ Q
P → R
Q → R
∴ R
● Example:
If you know "I will either go to the park or the mall" (i.e., P ∨ Q), and you know:
○ If I go to the park, I will see my friend (i.e., P → R).
○ If I go to the mall, I will see my friend (i.e., Q → R).
● Then you can conclude that I will see my friend (i.e., R).

Implication Elimination (→E)

● If you have P → Q and P, you can conclude Q.
● Form:
P → Q
P
∴ Q
● Example:
If you know "If it rains, I’ll carry an umbrella" (i.e., P → Q) and you know "It rains" (i.e., P), you can conclude "I’ll carry an umbrella" (i.e., Q).

Negation Elimination (¬E)

● From ¬¬P, you can conclude P.
● Form:
¬¬P
∴ P
● Example:
If you know "It’s not true that it’s not raining," you can conclude "It’s raining."
Proof Example (Expanded)

Let’s prove P → (Q → P), which means "If P is true, then Q → P is true."

1. Assume P (assumption for Implication Introduction)
2. Assume Q (assumption for Implication Introduction)
3. From assumption 1, conclude P (Reiteration)
4. From 2 and 3, derive Q → P (Implication Introduction, discharging assumption 2)
5. From 1 and 4, derive P → (Q → P) (Implication Introduction, discharging assumption 1)

Thus, we’ve completed the proof of P → (Q → P).
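The same proof can be checked mechanically; a sketch in Lean 4 (the theorem name is invented), where the lambda introduces and then discharges the two assumptions in order:

```lean
-- P → (Q → P): assume hp : P and hq : Q, then conclude P by reiterating hp.
theorem p_implies_q_implies_p (P Q : Prop) : P → (Q → P) :=
  fun hp _hq => hp
```

The term `fun hp _hq => hp` mirrors steps 1 to 5 above: each `fun` binder is an Implication Introduction, and returning `hp` is the Reiteration.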

Practice Problems

1. Prove (P → Q) → (¬Q → ¬P)
○ Hint: Use Implication Introduction and Negation Introduction.
2. Prove P ∧ (Q ∨ R) → (P ∧ Q) ∨ (P ∧ R)
○ Hint: Use Conjunction Elimination and Disjunction Elimination.
3. Prove ¬(P ∧ Q) → (¬P ∨ ¬Q)
○ Hint: Use De Morgan’s laws and Negation Introduction.

Procedural vs. Declarative Knowledge

In knowledge representation and problem-solving, the distinction between procedural
knowledge and declarative knowledge is fundamental. These two types of knowledge represent
different ways of encoding information and specifying how to use that information for
problem-solving.
Procedural Knowledge

Definition: Procedural knowledge refers to how to do things. It represents knowledge in the


form of procedures, processes, or actions that need to be followed to achieve a particular goal.
The control information necessary to use the knowledge is embedded within the knowledge
itself.

●​ Example: Recipes, computer programs, and instructions are typical representations of


procedural knowledge. For instance, a recipe tells you how to prepare a dish step by
step.
●​ Key Features:
○​ Explicit steps for performing tasks or solving problems.
○​ Control information is embedded directly within the knowledge itself.
○​ It defines what needs to be done and how to execute it in a specific order.

Example: Suppose we have a simple rule about people:​


​ Man(Marcus)
Man(Caesar)
Person(Cleopatra)
∀x: Man(x) → Person(x)

● If we were asked: person(y)?, the knowledge base might justify multiple answers:
○ y = Marcus
○ y = Caesar
○ y = Cleopatra
● The answer depends on the order in which the assertions are examined. Procedural knowledge is
crucial here because the sequence of checks determines which result is returned first.

Difference from Declarative Knowledge: In procedural knowledge, there is an inherent control


embedded in the representation. The system doesn't just specify facts; it tells the program how to
use them. For example, a computer program or algorithm provides specific instructions on how
to execute tasks, whereas declarative knowledge simply provides facts.
Declarative Knowledge

Definition: Declarative knowledge refers to what is known, typically in the form of facts, rules,
and definitions. It represents knowledge about the world and specifies information that can be
used for reasoning, without prescribing how to use that information.

●​ Example: Laws of physics, definitions, and people’s names are examples of declarative
knowledge. They provide facts but do not define how to use them.
●​ Key Features:
○​ Knowledge is represented in a fact-based or assertion-based manner.
○​ Does not specify how the knowledge should be applied.
○​ The control for using the knowledge must be provided by an external system,
program, or reasoning mechanism.

Example: Suppose we have the following set of assertions:​


​ ∀x: pet(x) ∧ small(x) → apartmentpet(x)
∀x: cat(x) ∨ dog(x) → pet(x)
∀x: poodle(x) → dog(x) ∧ small(x)
poodle(fluffy)

●​ If we want to know if fluffy is an apartment pet, declarative knowledge alone provides


the rules, but we need a mechanism (like a theorem prover) to deduce the answer:
○​ Is fluffy an apartment pet?
■ We apply the logical assertions: we know that "fluffy" is a poodle (i.e.,
poodle(fluffy)), so from the rule poodle(x) → dog(x) ∧ small(x) we
deduce that fluffy is a dog and small.
■ Since fluffy is a dog, fluffy is a pet; then, using the rule pet(x) ∧ small(x) → apartmentpet(x), and knowing fluffy is small, we
conclude that fluffy is an apartment pet.
●​ In this case, declarative knowledge alone does not define how to compute the answer but
provides the logical rules for the computation.​
Difference from Procedural Knowledge:

●​ Declarative knowledge only specifies what is true but does not dictate how to use that
truth. For example, logical assertions in a knowledge base may suggest multiple valid
conclusions, but it is up to the reasoning system to determine how to compute these
conclusions.
●​ No embedded control: The system needs additional mechanisms to search through facts
and apply rules.
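Such an external reasoning mechanism can be sketched as forward chaining over the assertions, with x bound to fluffy and the cat-or-dog rule split into two Horn rules; the string encoding of facts is an assumption of this sketch:

```python
# Forward chaining over the fluffy assertions (propositionalized).
facts = {"poodle(fluffy)"}
rules = [
    ({"poodle(fluffy)"}, {"dog(fluffy)", "small(fluffy)"}),
    ({"dog(fluffy)"}, {"pet(fluffy)"}),
    ({"cat(fluffy)"}, {"pet(fluffy)"}),
    ({"pet(fluffy)", "small(fluffy)"}, {"apartmentpet(fluffy)"}),
]

changed = True
while changed:                  # fire rules until no new facts appear
    changed = False
    for body, head in rules:
        if body.issubset(facts) and not head.issubset(facts):
            facts |= head
            changed = True
```

After the loop, apartmentpet(fluffy) has been derived: the declarative rules stay the same, and this loop supplies the missing control.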

Logic Programming: A Combination of Procedural and Declarative Knowledge

Logic Programming is an example of a paradigm that blends both declarative and procedural
knowledge. It allows you to represent knowledge using logical assertions (declarative) while also
providing a controlled procedure to apply those assertions to derive answers (procedural).

PROLOG Example:

In PROLOG, a popular logic programming language, knowledge is represented using Horn


clauses, a special form of logical assertions. These clauses consist of a head (positive literal) and
a body (a conjunction of literals, which may include negative literals). The PROLOG interpreter
then applies a fixed control strategy (search strategy) to find the solutions.

Logical Representation (Declarative Knowledge):​


​ ∀x: pet(x) ∧ small(x) → apartmentpet(x)
∀x: cat(x) ∨ dog(x) → pet(x)
∀x: poodle(x) → dog(x) ∧ small(x)
poodle(fluffy)

1.​ This logical representation simply states the relationships between pets, dogs, small
pets, and apartment pets. It doesn’t specify how the computer should search through the
rules to answer queries.
PROLOG Representation (Procedural Knowledge):​
​ apartmentpet(X) :- pet(X), small(X).
pet(X) :- cat(X).
pet(X) :- dog(X).
dog(X) :- poodle(X).
small(X) :- poodle(X).
poodle(fluffy).

2.​ In the PROLOG representation:


○​ Facts and rules are written using Horn clauses.
○​ The control over how to search and apply these rules is embedded in the
interpreter’s search strategy.
○​ Query: For example, asking ?- apartmentpet(fluffy). would trigger the interpreter
to use the rules and facts to deduce whether "fluffy" is an apartment pet.
3.​ Key Point: The logic (declarative knowledge) specifies what is true, but the PROLOG
engine (procedural knowledge) specifies how to search for the answers.​

Forward vs. Backward Reasoning

Reasoning is a process of problem-solving that involves searching through a set of possibilities


to find a path from an initial state to a goal state. There are two main types of reasoning
strategies for solving problems:

1.​ Forward Reasoning


2.​ Backward Reasoning

Both strategies involve searching through a space of possible states, but they start from different
ends and follow different approaches.
Forward Reasoning

Forward reasoning (also known as forward chaining) starts at the initial state and moves
toward the goal state. It involves generating new states based on the current state using rules and
applying those rules until the goal state is reached.

●​ Steps in Forward Reasoning:


○​ Start with the initial configuration (root of the tree).
○​ Apply rules that match the current state (left-hand side of the rules).
○​ Generate new configurations (right-hand side of the rule) and move to the next
level of the tree.
○​ Repeat the process by considering the new configurations and applying rules to
generate further new states, until you reach the goal.
● Example (8-puzzle): Consider the puzzle in which tiles must be slid into a goal configuration. Rules describe the legal moves, for example:
○ Square 1 empty and Square 2 contains tile n → Square 2 empty and Square 1 contains tile n.
○ Square 1 empty and Square 4 contains tile n → Square 4 empty and Square 1 contains tile n.
○ Apply these rules to generate new configurations and move toward solving the
puzzle from the starting state.
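Rule application in forward reasoning amounts to generating successor configurations from the current one; a sketch for the 8-puzzle, where 0 marks the empty square and the row-major tuple encoding is an assumption of this sketch:

```python
# Forward reasoning on the 8-puzzle: generate all successor configurations
# by sliding an adjacent tile into the blank (0).
def successors(board):
    """board: tuple of 9 entries, row-major 3x3. Returns the new boards."""
    i = board.index(0)
    row, col = divmod(i, 3)
    states = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:      # neighbour stays on the board
            j = 3 * r + c
            b = list(board)
            b[i], b[j] = b[j], b[i]        # slide tile j into the blank
            states.append(tuple(b))
    return states
```

Forward search repeatedly applies `successors` to every frontier state until a goal configuration appears.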

Backward Reasoning

Backward reasoning (also known as backward chaining) works backwards from the goal state.
Instead of starting at the initial state, you start at the goal state and work your way backward to
see which actions would lead you to the goal.

●​ Steps in Backward Reasoning:


1.​ Start with the goal state (root of the tree).
2.​ Find rules that can generate this goal state (right-hand side of the rules).
3.​ Apply these rules in reverse (left-hand side is used to generate new states).
4.​ Repeat by considering the new goal states and applying rules backward until you
reach the initial state.
●​ Example (8 Puzzle): If the goal is to arrange tiles in a certain configuration, the
backward reasoning would involve finding which configuration could have led to this
goal state and working backward step by step.

Comparison of Forward and Backward Reasoning

●​ Direction of Search:
○​ Forward reasoning starts from the initial state and generates new states.
○​ Backward reasoning starts from the goal state and looks for the previous states
that could lead to the goal.
●​ Efficiency:
○​ In general, forward reasoning is useful when there are fewer possible start states
than goal states. It can be more efficient if the initial state is well-defined and the
goal state is less constrained.
○​ Backward reasoning is often more efficient when there are fewer goal states or
when the goal state is more tightly defined. For example, backward reasoning is
useful when solving problems like proofs or diagnosis.
●​ Flexibility:
○​ Forward reasoning is more intuitive for some types of problems because we
usually know how to begin a task, and then apply rules to work toward a solution.
○​ Backward reasoning can sometimes be more targeted and efficient, particularly
when the problem involves searching for specific facts or conditions.

Factors Influencing the Choice of Reasoning Strategy

Several factors can influence whether forward or backward reasoning is more appropriate:

1.​ Number of Start vs. Goal States:​


If there are more possible start states than goal states, forward reasoning is more
efficient, as it can more easily cover all possibilities. Conversely, if the goal state is more
constrained, backward reasoning is better.​
2.​ Branching Factor:​
If one direction has a lower branching factor, it is often more efficient to search in that
direction. For example, if exploring the start state has fewer possible transitions than the
goal state, forward reasoning might be preferred.​

3.​ Justification of Reasoning:​


If the reasoning process needs to be explained (for example, in a diagnostic or
decision-making system), backward reasoning may be more natural because it mirrors
how humans often reason: starting with a conclusion and working backwards to figure
out how it was reached.​

4.​ Type of Trigger:​


If the problem-solving process is triggered by a new factor (e.g., new data or a new
input), forward reasoning is often used. On the other hand, if the problem-solving
process is triggered by a query (e.g., a specific question to be answered), backward
reasoning is typically preferred.​

Examples of Forward and Backward Reasoning

1.​ Driving from Home to an Unfamiliar Place:


○ Forward reasoning: it is easier to drive from an unfamiliar place to home, since home is a familiar goal and many routes lead to it.
○ Backward reasoning: to plan the trip home from an unfamiliar place, you can instead work backward from home toward the unfamiliar place, reconstructing the route in reverse.
2.​ Symbolic Integration:
○​ Forward reasoning: Starting with a formula containing an integral and
simplifying it to a formula without an integral.
○​ Backward reasoning: Starting with the integral-free expression and trying to
transform it back into an expression with an integral.
3.​ Medical Diagnosis:​

○​ Backward reasoning: In medicine, doctors often use backward reasoning to


arrive at a diagnosis. They begin with the symptoms (goal) and work backward
through possible causes.
4.​ PROLOG (Backward Chaining):​

○​ PROLOG uses backward chaining to answer queries. Given a set of rules and
facts, PROLOG attempts to match the goal (query) with the head of the rules,
working backward from the goal to find which facts satisfy it.

Combining Forward and Backward Reasoning

In some cases, you might want to combine forward and backward reasoning. One such
strategy is Bi-directional Search:

●​ Bi-directional Search:
○​ This method starts from both the initial state and the goal state simultaneously
and tries to meet in the middle. This approach can reduce the search space
significantly.
○​ However, bi-directional search can be inefficient if the search spaces do not
meet as expected, or if the problem space is poorly structured.
●​ Challenges:
○​ The two searches may pass each other, requiring additional work to connect the
two paths.
○​ If the rules can be applied symmetrically in both forward and backward
reasoning, the combined approach is feasible. However, if the rules are
asymmetric, it may be difficult to combine both strategies.
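The strategy above can be sketched as breadth-first search from both ends of a symmetric (undirected) state graph, stopping when the frontiers meet; the graph, node names, and helper name are invented for illustration:

```python
# Bi-directional search sketch: BFS from start and goal simultaneously.
from collections import deque

def bidirectional_meets(graph, start, goal):
    """True if forward and backward frontiers intersect (path exists)."""
    if start == goal:
        return True
    fwd, bwd = {start}, {goal}
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        # Expand the smaller frontier first (a common heuristic).
        if len(fq) <= len(bq):
            frontier, seen, other = fq, fwd, bwd
        else:
            frontier, seen, other = bq, bwd, fwd
        for _ in range(len(frontier)):      # expand one full level
            node = frontier.popleft()
            for nxt in graph.get(node, ()):
                if nxt in other:            # the two searches have met
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False
```

Expanding backward with the same adjacency relation is only valid because the graph is symmetric, which reflects the text's point that the rules must apply symmetrically in both directions.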
