AI PPT Unit 3

Knowledge Representation

• Knowledge representation in AI is responsible for representing information about the real world so that a computer can understand it and utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
What to Represent:
• Objects: All the facts about objects in our world domain. E.g., guitars contain strings, trumpets are brass instruments.
• Events: Events are the actions which occur in our world.
• Performance: It describes behavior which involves knowledge about how to do things.
• Meta-knowledge: It is knowledge about what we know.
• Facts: Facts are the truths about the real world and what we represent.
• Knowledge base: The central component of a knowledge-based agent is the knowledge base, represented as KB. The knowledge base is a group of sentences (here, "sentence" is used as a technical term; it is not identical to a sentence in English).
• Knowledge: Knowledge is awareness or familiarity gained by experiences of facts,
data, and situations. Following are the types of knowledge in artificial intelligence:
Types of knowledge
1. Declarative Knowledge:
• Declarative knowledge is knowledge about something.
• It includes concepts, facts, and objects.
• It is also called descriptive knowledge and is expressed in declarative sentences.
• It is simpler than procedural knowledge.
2. Procedural Knowledge
• It is also known as imperative knowledge.
• Procedural knowledge is knowledge of how to do something.
• It includes rules, strategies, procedures, agendas, etc.
3. Meta-knowledge:
• Knowledge about the other types of knowledge is called Meta-knowledge
4. Heuristic knowledge:
• Heuristic knowledge represents the knowledge of experts in a field or subject.
• Heuristic knowledge consists of rules of thumb based on previous experience and awareness of approaches that tend to work but are not guaranteed.
5. Structural knowledge:
• Structural knowledge is basic knowledge for problem-solving.
• It describes the relationships between concepts or objects, such as kind-of, part-of, and grouping relationships.
Cycle of Knowledge Representation in AI
Artificial Intelligent Systems usually consist of
various components to display their intelligent
behavior. Some of these components include:
• Perception
• Learning
• Knowledge Representation & Reasoning
• Planning
• Execution
• The Perception component retrieves data or information from the environment. With the help of this component, the agent can retrieve data from the environment, find out the source of noises, and check whether it has been damaged by anything. It also defines how to respond when anything has been sensed.
• The Learning component learns from the data captured by the perception component. The goal is to build computers that can be taught instead of being programmed. Learning focuses on the process of self-improvement. In order to learn new things, the system requires knowledge acquisition, inference, acquisition of heuristics, faster searches, etc.
• The main component in the cycle is Knowledge Representation and Reasoning, which displays human-like intelligence in machines. Knowledge representation is all about understanding intelligence. Instead of trying to understand or build brains from the bottom up, its goal is to understand and build intelligent behavior from the top down, focusing on what an agent needs to know in order to behave intelligently. It also defines how automated reasoning procedures can make this knowledge available as needed.
• The Planning and Execution components depend on the analysis of knowledge representation and reasoning. Here, planning involves an initial state, actions with their preconditions and effects, and finding a sequence of actions that achieves a state in which a particular goal holds. Once planning is completed, the final stage is the execution of the entire process.
Knowledge Representation Techniques
1. Logical Representation
• Logical representation is a language with concrete rules which deals with propositions and has no ambiguity in representation. Logical representation means drawing conclusions based on various conditions.
• Logical representation can be categorized into two main logics:
Propositional logic
Predicate logic
• Each sentence can be translated into logic using syntax and semantics.
Syntax:
• Syntax consists of the rules which decide how we can construct legal sentences in the logic.
• It determines which symbols we can use in knowledge representation, and how to write those symbols.
Semantics:
• Semantics consists of the rules by which we can interpret sentences in the logic.
• Semantics also involves assigning a meaning to each sentence.
Advantages of logical representation:
• Logical representation enables us to do logical reasoning.
• Logical representation is the basis for programming languages.
Disadvantages of logical representation:
• Logical representations have some restrictions and are challenging to work with.
• The logical representation technique may not be very natural, and inference may not be very efficient.
2. Semantic Network Representation
• Semantic networks are an alternative to predicate logic for knowledge representation. In semantic networks, we represent knowledge in the form of graphical networks. A network consists of nodes representing objects and arcs describing the relationships between those objects.
• This representation consists of mainly two types of relations:
IS-A relation (inheritance)
Kind-of relation
Example:
• Jerry is a cat.
• Jerry is a mammal
• Jerry is owned by Priya.
• Jerry is brown colored.
• All mammals are animals.
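The example above can be sketched as a small graph in code. This is an illustrative sketch, not from the slides: nodes and labelled arcs are stored in a Python dict, and IS-A (inheritance) links are followed upward to answer membership queries.

```python
# A minimal semantic network: keys are (node, relation) arcs, values are target nodes.
network = {
    ("Jerry", "is_a"): "Cat",
    ("Cat", "is_a"): "Mammal",
    ("Mammal", "is_a"): "Animal",
    ("Jerry", "owned_by"): "Priya",
    ("Jerry", "color"): "Brown",
}

def is_a(node, target):
    """Follow IS-A links from node upward until target is found or links run out."""
    while node is not None:
        if node == target:
            return True
        node = network.get((node, "is_a"))
    return False
```

For instance, `is_a("Jerry", "Animal")` holds via Jerry → Cat → Mammal → Animal, which is exactly the traversal cost the drawbacks slide refers to.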
Drawbacks of semantic representation:
• Semantic networks take more computational time at runtime, as we may need to traverse the complete network to answer a question.
• These representations are inadequate, as they have no equivalent of quantifiers, e.g., for all, for some, none.
• Semantic networks have no standard definition for link names.
• These networks are not intelligent on their own and depend on the creator of the system.
Advantages of Semantic network:
• Semantic networks convey meaning in a transparent manner.
• These networks are simple and easily understandable.
3. Frame Representation
• A frame is a record-like structure which consists of a collection of attributes and their values to describe an entity in the world. Frames are the AI data structure which divides knowledge into substructures by representing stereotyped situations. A frame consists of a collection of slots and slot values. These slots may be of any type and size. Slots have names and values, which are called facets.
• Frames are derived from semantic networks and later evolved into our modern-day classes and objects. A single frame is not very useful on its own; a frame system consists of a collection of connected frames.
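A frame system of this kind can be sketched with nested dictionaries. The frames below (Bird, Penguin, Pingu) and their slot names are hypothetical, chosen to show the "default data" advantage: a missing slot value is inherited through IS-A links.

```python
# Each frame has an IS-A parent and a dict of slots (attribute/value pairs).
frames = {
    "Bird":    {"is_a": None,      "slots": {"flies": True, "legs": 2}},
    "Penguin": {"is_a": "Bird",    "slots": {"flies": False}},   # overrides the default
    "Pingu":   {"is_a": "Penguin", "slots": {"name": "Pingu"}},
}

def get_value(frame_name, slot):
    """Look up a slot, inheriting missing values through the IS-A chain."""
    while frame_name is not None:
        frame = frames[frame_name]
        if slot in frame["slots"]:
            return frame["slots"][slot]
        frame_name = frame["is_a"]
    return None
```

Here `get_value("Pingu", "legs")` is inherited from Bird, while `get_value("Pingu", "flies")` picks up Penguin's override of the Bird default.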
Advantages of frame representation:
• Frame representation is comparatively flexible and is used by many applications in AI.
• It is very easy to add slots for new attributes and relations.
• It is easy to include default data and to search for missing values.
Disadvantages of frame representation:
• In a frame system, the inference mechanism cannot be easily processed.
• Frame representation is a very generalized approach.
4. Production Rules
• A production rule system consists of (condition, action) pairs, meaning "if condition then action". It has mainly three parts:
• The set of production rules
• Working memory
• The recognize-act cycle
• In a production rule system, the agent checks for the condition; if the condition holds, the production rule fires and the corresponding action is carried out. This complete process is called the recognize-act cycle.
Example:
• IF (at bus stop AND bus arrives) THEN action (get into the bus)
• IF (on the bus AND paid AND empty seat) THEN action (sit down).
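The two rules above can be run through a minimal recognize-act pass. This is an illustrative sketch: working memory is a set of fact strings, and each rule's condition is a set that must be contained in working memory for the rule to fire.

```python
# Production rules as (condition-set, action) pairs, matching the slide's two rules.
rules = [
    ({"at bus stop", "bus arrives"}, "get into the bus"),
    ({"on the bus", "paid", "empty seat"}, "sit down"),
]

working_memory = {"at bus stop", "bus arrives"}

def recognize_act(memory):
    """One recognize-act pass: fire every rule whose condition holds in memory."""
    fired = []
    for condition, action in rules:
        if condition <= memory:      # all facts of the condition are present
            fired.append(action)
    return fired
```

With the working memory above, only the first rule's condition is satisfied, so only "get into the bus" fires.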
Advantages of production rules:
• Production rules are expressed in natural language.
• Production rules are highly modular, so we can easily remove, add, or modify an individual rule.
Disadvantages of production rules:
• A production rule system does not exhibit any learning capability, as it does not store the results of problems for future use.
• During the execution of the program, many rules may be active, making rule-based production systems inefficient.
Propositional logic In Artificial intelligence
• A proposition is a declarative statement which is either true or false. Propositional logic is a technique of knowledge representation in logical and mathematical form.
Facts about propositional logic:
• Propositional logic is also called Boolean logic, as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
• Propositional logic consists of propositions and logical connectives; the connectives are also called logical operators.
• Propositions and connectives are the basic elements of propositional logic.
• A connective is a logical operator which connects two sentences.
• A proposition formula which is always true is called a tautology; it is also called a valid sentence.
• A proposition formula which is always false is called a contradiction.
• Questions, commands, and opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.
Types of Propositions
Atomic Propositions:
• Atomic propositions are simple propositions consisting of a single proposition symbol. These are sentences which must be either true or false.
Example:
• a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
• b) "The Sun is cold" is also an atomic proposition, as it is a false fact.
Compound Propositions:
• Compound propositions are constructed by combining simpler or atomic propositions using parentheses and logical connectives.
Example:
• a) "It is raining today, and the street is wet."
• b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives:
• Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives:
• Negation (¬P)
• Conjunction (P ∧ Q)
• Disjunction (P ∨ Q)
• Implication (P → Q)
• Biconditional (P ⇔ Q)
Truth Table
• We can combine all the possible truth-value combinations with logical connectives; the representation of these combinations in tabular format is called a truth table.
Truth table with three propositions:
• We can build a proposition composed of three propositions P, Q, and R. This truth table is made up of 2³ = 8 rows, as we have taken three proposition symbols.
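As a sketch, the 2³ = 8 rows can be generated with `itertools.product`; the formula (P ∧ Q) → R used here is just an illustrative choice, not one from the slides.

```python
from itertools import product

def formula(p, q, r):
    # Material implication: (P ∧ Q) → R is equivalent to ¬(P ∧ Q) ∨ R.
    return (not (p and q)) or r

# One tuple per truth-table row: (P, Q, R, value of the formula).
rows = [(p, q, r, formula(p, q, r))
        for p, q, r in product([True, False], repeat=3)]
```

The only row where the formula is false is P = True, Q = True, R = False, since an implication fails only when its antecedent is true and its consequent is false.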
Precedence of connectives
• First precedence: Parenthesis
• Second precedence: Negation
• Third precedence: Conjunction (AND)
• Fourth precedence: Disjunction (OR)
• Fifth precedence: Implication
• Sixth precedence: Biconditional
Logical equivalence:
• Logical equivalence is one of the features of propositional logic. Two propositions are said to be logically equivalent if and only if their columns in the truth table are identical.
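This column-matching test is easy to automate. The sketch below compares truth-table columns for every valuation; the De Morgan and converse examples are illustrative choices.

```python
from itertools import product

def equivalent(f, g, n_vars):
    """Logically equivalent iff the truth-table columns match for every valuation."""
    return all(f(*v) == g(*v) for v in product([True, False], repeat=n_vars))

# De Morgan: ¬(P ∧ Q) is equivalent to ¬P ∨ ¬Q.
demorgan = equivalent(lambda p, q: not (p and q),
                      lambda p, q: (not p) or (not q), 2)

# P → Q is NOT equivalent to its converse Q → P (they differ at P=True, Q=False).
converse = equivalent(lambda p, q: (not p) or q,
                      lambda p, q: (not q) or p, 2)
```

`demorgan` comes out true and `converse` false, matching the truth-table definition of equivalence.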
Properties of Operators:
• Commutativity:
– P ∧ Q = Q ∧ P
– P ∨ Q = Q ∨ P
• Associativity:
– (P ∧ Q) ∧ R = P ∧ (Q ∧ R)
– (P ∨ Q) ∨ R = P ∨ (Q ∨ R)
• Identity element:
– P ∧ True = P
– P ∨ True = True
• Distributivity:
– P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
– P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R)
• De Morgan's laws:
– ¬(P ∧ Q) = (¬P) ∨ (¬Q)
– ¬(P ∨ Q) = (¬P) ∧ (¬Q)
• Double-negation elimination:
– ¬(¬P) = P
Limitations of propositional logic:
• We cannot represent relations like all, some, or none with propositional logic. For example:
– All the girls are intelligent.
– Some apples are sweet.
• Propositional logic has limited expressive power.
First-Order Predicate logic(FOPL)
• It is an extension to propositional logic.
• First-order logic is also known as Predicate logic or First-order predicate logic.
• First-order logic does not only assume that the world contains facts, as propositional logic does, but also assumes the following things in the world:
• Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ...
• Relations: These can be unary relations, such as red, round, is adjacent, or n-ary relations, such as sister of, brother of, has color, comes between.
• Functions: father of, best friend, third inning of, end of, ...
• As a natural language, first-order logic also has two main parts:
Syntax
Semantic
Syntax of First-Order Logic:
• The syntax of FOL determines which collections of symbols form legal logical expressions in first-order logic. The basic syntactic elements of first-order logic are symbols. We write statements in short-hand notation in FOL.
Constants: 1, 2, A, John, Mumbai, cat, ...
Variables: x, y, z, a, b, ...
Predicates: Brother, Father, >, ...
Functions: sqrt, mean, ...
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: =
Quantifiers: ∀, ∃
Atomic sentences:
• Atomic sentences are the most basic sentences of first-order logic. These sentences are formed from a predicate symbol followed by a parenthesized sequence of terms.
• We can represent atomic sentences as Predicate(term1, term2, ..., termN).
• Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat(Chinky).
Complex Sentences:
• Complex sentences are made by combining atomic sentences using connectives.
First-order logic statements can be divided into two parts:
• Subject: The subject is the main part of the statement.
• Predicate: A predicate can be defined as a relation which binds two atoms together in a statement.
Example:
• In "x is an integer", the first part, x, is the subject of the statement, and the second part, "is an integer", is the predicate.
Quantifiers in First-order logic:
• Quantifiers are the symbols that let us determine the range and scope of a variable in a logical expression. There are two types of quantifier:
– Universal quantifier (for all, everyone, everything)
– Existential quantifier (for some, at least one)
Universal Quantifier:
• The universal quantifier is a symbol of logical representation which specifies that the statement within its range is true for everything or every instance of a particular thing.
• The universal quantifier is represented by the symbol ∀, which resembles an inverted A.
• With the universal quantifier we use implication "→".
• If x is a variable, then ∀x is read as:
• For all x
• For each x
• For every x.
Example:
• All men drink coffee.
• ∀x man(x) → drink(x, coffee)
• It is read as: For all x, if x is a man, then x drinks coffee.
Existential Quantifier:
• Existential quantifiers express that the statement within their scope is true for at least one instance of something.
• It is denoted by the logical operator ∃, which resembles a reversed E. When it is used with a predicate variable, it is called an existential quantifier.
• With the existential quantifier we always use the AND/conjunction symbol (∧).
• If x is a variable, then the existential quantifier is written ∃x or ∃(x), and it is read as:
• There exists an x.
• For some x.
• For at least one x.
Example:
• Some boys are intelligent.
• ∃x boy(x) ∧ intelligent(x)
• It is read as: There exists an x such that x is a boy and x is intelligent.
Notes:
• The main connective for universal quantifier ∀ is implication →.
• The main connective for existential quantifier ∃ is and ∧.
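Over a finite domain, the two quantifier/connective pairings can be mirrored with Python's `all()` and `any()`. The domain and predicate tables below are made-up illustrations.

```python
# A small finite domain with illustrative predicate tables.
people = ["Ram", "Suresh", "Ankit"]
is_man = {"Ram": True, "Suresh": True, "Ankit": True}
drinks_coffee = {"Ram": True, "Suresh": True, "Ankit": False}

# ∀x man(x) → drink(x, coffee): implication inside the universal quantifier.
all_men_drink = all((not is_man[x]) or drinks_coffee[x] for x in people)

# ∃x man(x) ∧ drink(x, coffee): conjunction inside the existential quantifier.
some_man_drinks = any(is_man[x] and drinks_coffee[x] for x in people)
```

Since Ankit is a man who does not drink coffee, the universal statement is false while the existential one is true, showing why ∀ pairs with → and ∃ pairs with ∧.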
Knowledge Engineering
• The process of constructing a knowledge base in first-order logic is called knowledge engineering. Someone who investigates a particular domain, learns the important concepts of that domain, and generates a formal representation of its objects is known as a knowledge engineer.
Inference in First-Order Logic
• Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences.
An Example - Facts in FOL
(1) Marcus was a man.
man(Marcus)
(2) Marcus was a Pompeian.
Pompeian(Marcus)
(3) All Pompeians were Romans.
∀x: Pompeian(x) → Roman(x)
(4) Caesar was a ruler.
ruler(Caesar)
(5) All Romans were either loyal to Caesar or hated him.
∀x: Roman(x) → loyalto(x, Caesar) ∨ hate(x, Caesar)
(6) Everyone is loyal to someone.
x y loyalto(x, y)
(7) People only try to assassinate rulers they are not loyal to.
x y person(x)  ruler(y)  tryassassinate(x, y)  loyalto(x, y)
(8) Marcus tried to assassinate Caesar.
tryassassinate(Marcus, Caesar)
Unification
• Unification is a process of making two different logical atomic expressions identical
by finding a substitution. Unification depends on the substitution process.
• It takes two literals as input and makes them identical using substitution.
• Let Ψ1 and Ψ2 be two atomic sentences and 𝜎 be a unifier such that, Ψ1𝜎 = Ψ2𝜎, then
it can be expressed as UNIFY(Ψ1, Ψ2).
Example:
Find the MGU for Unify{King(x), King(John)}
• Let Ψ1 = King(x), Ψ2 = King(John),
• Substitution θ = {John/x} is a unifier for these atoms; applying this substitution makes both expressions identical.
• The UNIFY algorithm is used for unification, which takes two atomic sentences and
returns a unifier for those sentences (If any exist).
• Unification is a key component of all first-order inference algorithms.
• It returns fail if the expressions do not match with each other.
• The most general such substitution is called the Most General Unifier, or MGU.
Conditions for Unification:
• The predicate symbols must be the same; atoms or expressions with different predicate symbols can never be unified.
• The number of arguments in both expressions must be identical.
• Unification will fail if the same variable would need two different bindings in the expression.
Algorithm: Unify(Ψ1, Ψ2)
Step 1: If Ψ1 or Ψ2 is a variable or constant, then:
a) If Ψ1 and Ψ2 are identical, then return NIL.
b) Else if Ψ1 is a variable:
a. If Ψ1 occurs in Ψ2, then return FAILURE.
b. Else return {Ψ2/Ψ1}.
c) Else if Ψ2 is a variable:
a. If Ψ2 occurs in Ψ1, then return FAILURE.
b. Else return {Ψ1/Ψ2}.
d) Else return FAILURE.
Step 2: If the initial predicate symbols in Ψ1 and Ψ2 are not the same, then return FAILURE.
Step 3: If Ψ1 and Ψ2 have a different number of arguments, then return FAILURE.
Step 4: Set the substitution set SUBST to NIL.
Step 5: For i = 1 to the number of elements in Ψ1:
a) Call the Unify function with the i-th element of Ψ1 and the i-th element of Ψ2, and put the result into S.
b) If S = FAILURE, then return FAILURE.
c) If S ≠ NIL, then:
a. Apply S to the remainder of both Ψ1 and Ψ2.
b. SUBST = APPEND(S, SUBST).
Step 6: Return SUBST.
Implementation of the Algorithm
Step 1: Initialize the substitution set to be empty.
Step 2: Recursively unify atomic sentences:
• Check for an identical expression match.
• If one expression is a variable vi and the other is a term ti which does not contain variable vi, then:
– Substitute ti/vi in the existing substitutions.
– Add ti/vi to the substitution set list.
• If both expressions are functions, then the function names must be the same and the number of arguments must be the same in both expressions.
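The algorithm can be sketched in Python. This is an illustrative implementation rather than the pseudocode verbatim: compound terms are tuples `("functor", arg1, ...)`, and by assumption single lowercase letters are variables while everything else (John, T1, functors) is a constant.

```python
def is_variable(t):
    # Assumed convention: single lowercase letters are variables.
    return isinstance(t, str) and len(t) == 1 and t.islower()

def substitute(t, subst):
    """Apply a substitution to a term, chasing chained variable bindings."""
    if is_variable(t):
        return substitute(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):  # compound term: (functor, arg1, ...)
        return (t[0],) + tuple(substitute(a, subst) for a in t[1:])
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t (under subst)?"""
    t = substitute(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(x, y, subst=None):
    """Return the MGU of x and y as a dict, or None on failure."""
    if subst is None:
        subst = {}
    x, y = substitute(x, subst), substitute(y, subst)
    if x == y:
        return subst
    if is_variable(x):
        return None if occurs(x, y, subst) else {**subst, x: y}
    if is_variable(y):
        return unify(y, x, subst)
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and x[0] == y[0] and len(x) == len(y)):
        for a, b in zip(x[1:], y[1:]):       # unify arguments left to right
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # constant clash or different predicate/arity
```

For example, `unify(("King", "x"), ("King", "John"))` yields `{"x": "John"}`, while `unify(("p", "x", "x"), ("p", "z", ("f", "z")))` fails on the occurs check, mirroring exercise 3 below.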
• Find the most general unifier (if it exists) for each of the following pairs of atomic sentences.
1. Find the MGU of {p(f(a), g(Y)) and p(X, X)}
Sol: S0 => Here, Ψ1 = p(f(a), g(Y)) and Ψ2 = p(X, X)
SUBST θ = {f(a)/X}
S1 => Ψ1 = p(f(a), g(Y)) and Ψ2 = p(f(a), f(a))
Now f(a) and g(Y) have different function symbols, so unification fails.
Unification is not possible for these expressions.

2. Find the MGU of {p(b, X, f(g(Z))) and p(Z, f(Y), f(Y))}
S0 => {p(b, X, f(g(Z))); p(Z, f(Y), f(Y))}
SUBST θ = {b/Z}
S1 => {p(b, X, f(g(b))); p(b, f(Y), f(Y))}
SUBST θ = {f(Y)/X}
S2 => {p(b, f(Y), f(g(b))); p(b, f(Y), f(Y))}
SUBST θ = {g(b)/Y}
S3 => {p(b, f(g(b)), f(g(b))); p(b, f(g(b)), f(g(b)))} Unified successfully.
Unifier = {b/Z, f(Y)/X, g(b)/Y}.
3. Find the MGU of {p(X, X) and p(Z, f(Z))}
Here, Ψ1 = p(X, X) and Ψ2 = p(Z, f(Z))
S0 => {p(X, X); p(Z, f(Z))}
SUBST θ = {Z/X}
S1 => {p(Z, Z); p(Z, f(Z))}
Now Z must unify with f(Z), but Z occurs inside f(Z), so the occurs check fails.
Therefore, unification is not possible for these expressions.
4. Find the MGU of {Q(a, g(x, a), f(y)) and Q(a, g(f(b), a), x)}
Here, Ψ1 = Q(a, g(x, a), f(y)) and Ψ2 = Q(a, g(f(b), a), x)
S0 => {Q(a, g(x, a), f(y)); Q(a, g(f(b), a), x)}
SUBST θ = {f(b)/x}
S1 => {Q(a, g(f(b), a), f(y)); Q(a, g(f(b), a), f(b))}
SUBST θ = {b/y}
S2 => {Q(a, g(f(b), a), f(b)); Q(a, g(f(b), a), f(b))} Successfully unified.
Unifier: {f(b)/x, b/y}.

5. UNIFY(knows(Richard, x), knows(Richard, John))
Here, Ψ1 = knows(Richard, x) and Ψ2 = knows(Richard, John)
S0 => {knows(Richard, x); knows(Richard, John)}
SUBST θ = {John/x}
S1 => {knows(Richard, John); knows(Richard, John)} Successfully unified.
Unifier: {John/x}.
Inference Engine
• The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to the knowledge base to infer new information from known facts. The first inference engine was part of an expert system. An inference engine commonly proceeds in two modes:
• Forward chaining
• Backward chaining

• The inference engine is often compared to the human brain, as it is responsible for making the same kinds of deductions and inferences that we do. However, the inference engine is not limited by the same constraints as the human brain: it can process information much faster and is not subject to the same biases and errors that we are.
• The inference engine is a critical component of AI systems because it is responsible for making the decisions that the system needs to make in order to function. Without an inference engine, an AI system would be little more than a static collection of facts.
There are three main components of an inference engine:
1. A knowledge base: This is a collection of facts and rules that
the inference engine can use to make deductions and predictions.
2. A set of reasoning algorithms: These are the algorithms that
the inference engine uses to reason with the knowledge base and
make deductions and predictions.
3. A set of heuristics: These are rules of thumb that the inference
engine can use to make deductions and predictions.
The knowledge base, reasoning algorithms, and heuristics all
work together to allow the inference engine to make deductions
and predictions.
How does an inference engine work?
The inference engine works by first identifying a set of relevant facts and then
using these facts to draw logical conclusions. In order to do this, the engine must
have access to a knowledge base that contains all of the relevant information. The
knowledge base is typically represented as a set of rules or a decision tree.

Once the engine has access to the relevant facts, it will use these facts to draw
conclusions. In order to do this, the engine will use a set of inference rules. These
rules are typically based on logic or probability. The engine will use these rules to
determine what conclusions can be drawn from the evidence.

The inference engine is an important component of an AI system because it is responsible for making deductions and predictions. Without an inference engine, an AI system would not be able to make decisions or solve problems.
What are the benefits of using an inference engine?
An inference engine is a type of AI that is used to make
deductions from a set of given facts. It can be used to
solve problems or to make predictions. Inference
engines are used in a variety of fields, including
medicine, law, and finance.
There are many benefits to using an inference engine.
Inference engines can help us to make better decisions
by providing us with more accurate information. They
can also help us to automate decision-making processes.
Inference engines can also help us to save time and
resources by reducing the need for human input.
Horn Clause and Definite Clause:
Horn clauses and definite clauses are forms of sentences which enable a knowledge base to use a more restricted and efficient inference algorithm. Logical inference algorithms use forward- and backward-chaining approaches, which require the KB to be in the form of first-order definite clauses.
Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict Horn clause.
Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as a Horn clause. Hence all definite clauses are Horn clauses.
Example:
(¬p ∨ ¬q ∨ k). It has only one positive literal, k.
It is equivalent to (p ∧ q) → k.
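The claimed equivalence can be checked mechanically by comparing truth tables over all eight valuations of p, q, and k:

```python
from itertools import product

# The definite clause ¬p ∨ ¬q ∨ k versus the rule form (p ∧ q) → k.
clause = lambda p, q, k: (not p) or (not q) or k
implication = lambda p, q, k: (not (p and q)) or k

same = all(clause(*v) == implication(*v)
           for v in product([True, False], repeat=3))
```

The two columns agree on every row, which is why definite clauses can be read directly as "if body then head" rules by forward and backward chaining.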
Forward Chaining
• Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.
• The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of Forward-Chaining:
• It is a bottom-up approach, as it moves from the facts at the bottom up to the goal.
• It is a process of making a conclusion based on known facts or data, starting from the initial state and reaching the goal state.
• The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
• The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in business and production rule systems.
• "As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has some missiles, and all of its missiles were sold to it by Robert, who is an American citizen."
• Prove that "Robert is a criminal."
Facts conversion into FOL:
1. American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p)
2. Owns(A, T1) and Missile(T1)
3. ∀p: Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A)
4. Missile(p) → Weapon(p)
5. Enemy(p, America) → Hostile(p)
6. American(Robert) (from "Robert is an American citizen")
7. Enemy(A, America)
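A propositionalized sketch of forward chaining on this example: ground predicate instances are flattened to plain strings, and American(Robert), stated in the problem text, is included as a fact. This is an illustration of the fire-until-fixpoint loop, not a full first-order prover.

```python
# Known ground facts and rules (premise-set, conclusion), mirroring the example.
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}
rules = [
    ({"Missile(T1)"}, "Weapon(T1)"),
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),
    ({"Enemy(A,America)"}, "Hostile(A)"),
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises hold; repeat until no new fact is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Running `forward_chain(facts, rules)` derives Weapon(T1), Sells(Robert,T1,A), and Hostile(A), and then Criminal(Robert), completing the proof data-first.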
Backward Chaining
• Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward-chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:
• It is known as a top-down approach.
• Backward-chaining is based on modus ponens inference rule.
• In backward chaining, the goal is broken into sub-goal or sub-goals to prove the
facts true.
• It is called a goal-driven approach, as a list of goals decides which rules are selected
and used.
• Backward-chaining algorithms are used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
• The backward-chaining method mostly uses a depth-first search strategy for proofs.
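The goal-driven counterpart can be sketched as a depth-first recursion over the same propositionalized rule format; the rule set below reuses the crime example and is illustrative (it assumes an acyclic rule set, so no loop checking is done).

```python
# Same ground facts and rules as a forward chainer would use.
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}
rules = [
    (["Missile(T1)"], "Weapon(T1)"),
    (["Missile(T1)", "Owns(A,T1)"], "Sells(Robert,T1,A)"),
    (["Enemy(A,America)"], "Hostile(A)"),
    (["American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"],
     "Criminal(Robert)"),
]

def backward_chain(goal):
    """Prove the goal: either it is a known fact, or some rule concluding it
    has all of its premises provable (depth-first)."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p) for p in premises):
            return True
    return False
```

Here `backward_chain("Criminal(Robert)")` succeeds by decomposing the goal into the sub-goals American(Robert), Weapon(T1), Sells(Robert,T1,A), and Hostile(A), touching only the rules needed for the goal.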
Forward Chaining vs. Backward Chaining:
1. Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the facts that support the goal.
2. Forward chaining is a bottom-up approach; backward chaining is a top-down approach.
3. Forward chaining is known as a data-driven technique, as we reach the goal using the available data. Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.
4. Forward chaining applies a breadth-first search strategy; backward chaining applies a depth-first search strategy.
5. Forward chaining tests all the available rules; backward chaining tests only the few required rules.
6. Forward chaining is suitable for planning, monitoring, control, and interpretation applications; backward chaining is suitable for diagnostic, prescription, and debugging applications.
7. Forward chaining can generate an infinite number of possible conclusions; backward chaining generates a finite number of possible conclusions.
8. Forward chaining operates in the forward direction; backward chaining operates in the backward direction.


Reasoning
• "Reasoning is a way to infer facts from existing data." It is the general process of thinking rationally to find valid conclusions.
Types of Reasoning
Types of Reasoning
1. Deductive reasoning:
• Deductive reasoning is deducing new information from logically related known information. It is a form of valid reasoning: the argument's conclusion must be true when the premises are true.
• Deductive reasoning in AI uses propositional logic and requires various rules and facts. It is sometimes referred to as top-down reasoning, in contrast to inductive reasoning.
• In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.
• Premise 1: All humans eat veggies.
• Premise 2: Suresh is human.
• Conclusion: Suresh eats veggies.
2. Inductive Reasoning:
• Inductive reasoning arrives at a conclusion from a limited set of facts by the process of generalization. It starts with a series of specific facts or data and reaches a general statement or conclusion.
• Inductive reasoning is also known as cause-effect reasoning or bottom-up reasoning.
• In inductive reasoning, we use historical data or various premises to generate a generic rule for which the premises support the conclusion.
• In inductive reasoning, the premises provide probable support for the conclusion, so the truth of the premises does not guarantee the truth of the conclusion.
Example:
• Premise: All of the pigeons we have seen in the zoo are white.
• Conclusion: Therefore, we can expect all pigeons to be white.
3. Abductive reasoning:
• Abductive reasoning is a form of logical reasoning which starts with one or more observations and then seeks the most likely explanation or conclusion for the observation.
• Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning the premises do not guarantee the conclusion.
Example:
Implication: The cricket ground is wet if it is raining.
Axiom: The cricket ground is wet.
Conclusion: It is raining.
4. Common Sense Reasoning
• Common sense reasoning is an informal form of reasoning which is gained through experience.
• Common sense reasoning simulates the human ability to make presumptions about events that occur every day.
• It relies on good judgment rather than exact logic and operates on heuristic knowledge and heuristic rules.
Example:
• One person can be at only one place at a time.
• If I put my hand in a fire, it will burn.
• The above two statements are examples of common sense reasoning which a human mind can easily understand and assume.
5. Monotonic Reasoning:
• In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add other information to the existing knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived. Example:
• Earth revolves around the Sun.
• It is a true fact, and it cannot be changed even if we add other sentences to the knowledge base, such as "The moon revolves around the earth" or "Earth is not round."
Advantages of Monotonic Reasoning:
• In monotonic reasoning, each old proof always remains valid.
• If we deduce some facts from the available facts, they remain valid forever.
Disadvantages of Monotonic Reasoning:
• We cannot represent real-world scenarios using monotonic reasoning, because real-world facts keep changing.
• Since we can derive conclusions only from the old proofs, new knowledge from the real world cannot be added.
6. Non-monotonic Reasoning
• In Non-monotonic reasoning, some conclusions may be invalidated if we add some
more information to our knowledge base.
Example: Birds can fly
• Penguins cannot fly
• Pitty is a bird
• So from the above sentences, we can conclude that Pitty can fly.
• However, if we add another sentence to the knowledge base, "Pitty is a penguin", which concludes "Pitty cannot fly", the earlier conclusion is invalidated.
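The Pitty example can be sketched as a default rule with an exception; the representation below is an illustrative toy, not a full non-monotonic logic:

```python
# Toy sketch of non-monotonic (default) reasoning: the default "birds fly"
# is withdrawn when the exception "penguin" becomes known.
# All predicate names are illustrative.

def can_fly(facts):
    # Default rule: a bird flies unless it is known to be a penguin.
    if "penguin" in facts:
        return False          # exception defeats the default
    return "bird" in facts    # default conclusion

beliefs = {"bird"}            # Pitty is a bird
print(can_fly(beliefs))       # True: we conclude Pitty can fly

beliefs.add("penguin")        # new knowledge arrives
print(can_fly(beliefs))       # False: the old conclusion is invalidated
```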
Advantages of Non-monotonic reasoning:
• In non-monotonic reasoning, we can choose probabilistic facts or make assumptions. It is also used in robot navigation.
Disadvantages of Non-monotonic Reasoning:
• In non-monotonic reasoning, the old facts may be invalidated by adding new
sentences.
• It cannot be used for theorem proving.
Resolution
• Resolution is a theorem-proving technique that proves by contradiction. It was invented by the mathematician John Alan Robinson in 1965.
• Resolution is used when various statements are given and we need to prove a conclusion from those statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule which can efficiently operate on the conjunctive normal form or clausal form.
• Clause: A disjunction of literals (atomic sentences) is called a clause. A clause containing a single literal is known as a unit clause.
• Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to be in conjunctive normal form or CNF.
• Conjunctive Normal Form: A sentence represented as a conjunction of clauses is
said to be conjunctive normal form or CNF.
Steps for Resolution:
• Conversion of facts into first-order logic.
• Convert FOL statements into CNF
• Negate the statement which needs to prove (proof by contradiction)
• Draw resolution graph (unification).
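Unification, the pattern-matching step used when drawing the resolution graph, can be sketched as a small recursive procedure. The representation below is illustrative (variables are strings starting with "?", compound terms are tuples), and the occurs check is omitted for brevity:

```python
# Minimal first-order unification sketch. Returns a substitution dict
# mapping variables to terms, or None if the terms cannot be unified.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def substitute(t, s):
    # Apply substitution s to term t, following chains of bindings.
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def unify(x, y, s=None):
    s = {} if s is None else s
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None  # constant clash: cannot unify

# unify eats(?x, Peanuts) with eats(Anil, Peanuts) -> {?x: Anil}
print(unify(("eats", "?x", "Peanuts"), ("eats", "Anil", "Peanuts")))
```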
Example:
• John likes all kinds of food.
• Apples and vegetables are food.
• Anything anyone eats and is not killed is food.
• Anil eats peanuts and is still alive.
• Harry eats everything that Anil eats.
Prove by resolution that:
• John likes peanuts.
• Step-1: Conversion of Facts into FOL
– ∀x food(x) → likes(John, x)
– food(Apple) Λ food(vegetables)
– ∀x ∀y [eats(x, y) Λ ¬ killed(x)] → food(y)
– eats(Anil, Peanuts) Λ alive(Anil)
– ∀x eats(Anil, x) → eats(Harry, x)
– ∀x ¬ killed(x) → alive(x)
– ∀x alive(x) → ¬ killed(x)
– likes(John, Peanuts) (the conclusion to be proved)
Step-2: Conversion of FOL into CNF
• Eliminate all implication (→) and rewrite
– ∀x ¬ food(x) V likes(John, x)
– food(Apple) Λ food(vegetables)
– ∀x ∀y ¬ [eats(x, y) Λ ¬ killed(x)] V food(y)
– eats (Anil, Peanuts) Λ alive(Anil)
– ∀x ¬ eats(Anil, x) V eats(Harry, x)
– ∀x ¬[¬ killed(x)] V alive(x)
– ∀x ¬ alive(x) V ¬ killed(x)
– likes(John, Peanuts).
• Move negation (¬) inwards and rewrite
– ∀x ¬ food(x) V likes(John, x)
– food(Apple) Λ food(vegetables)
– ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
– eats (Anil, Peanuts) Λ alive(Anil)
– ∀x ¬ eats(Anil, x) V eats(Harry, x)
– ∀x killed(x) V alive(x)
– ∀x ¬ alive(x) V ¬ killed(x)
– likes(John, Peanuts).
• Rename variables or standardize variables
– ∀x ¬ food(x) V likes(John, x)
– food(Apple) Λ food(vegetables)
– ∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
– eats (Anil, Peanuts) Λ alive(Anil)
– ∀w ¬ eats(Anil, w) V eats(Harry, w)
– ∀g killed(g) V alive(g)
– ∀k ¬ alive(k) V ¬ killed(k)
– likes(John, Peanuts).
• Eliminate existential quantifiers (Skolemization)
In this step, we eliminate the existential quantifier ∃; this process is known as Skolemization. Since there is no existential quantifier in this example, all the statements remain the same in this step.
• Drop universal quantifiers
In this step we drop all universal quantifiers, since all the statements are implicitly universally quantified and the quantifier symbols are no longer needed.
– ¬ food(x) V likes(John, x)
– food(Apple)
– food(vegetables)
– ¬ eats(y, z) V killed(y) V food(z)
– eats (Anil, Peanuts)
– alive(Anil)
– ¬ eats(Anil, w) V eats(Harry, w)
– killed(g) V alive(g)
– ¬ alive(k) V ¬ killed(k)
– likes(John, Peanuts).
• Distribute conjunction over disjunction
This step will not make any change in this problem.
• Step-3: Negate the statement to be proved
• In this step, we apply negation to the conclusion statement, which is written as ¬likes(John, Peanuts)
• Step-4: Draw Resolution graph:
• Now in this step, we solve the problem using a resolution tree with substitution. For the above problem, the chain of resolutions is:
– ¬likes(John, Peanuts) resolves with ¬food(x) V likes(John, x) under {x/Peanuts}, giving ¬food(Peanuts).
– ¬food(Peanuts) resolves with ¬eats(y, z) V killed(y) V food(z) under {z/Peanuts}, giving ¬eats(y, Peanuts) V killed(y).
– This resolves with eats(Anil, Peanuts) under {y/Anil}, giving killed(Anil).
– killed(Anil) resolves with ¬alive(k) V ¬killed(k) under {k/Anil}, giving ¬alive(Anil).
– ¬alive(Anil) resolves with alive(Anil), giving the empty clause (a contradiction).
• Since the negated goal leads to a contradiction, the original statement "John likes peanuts" is proved.
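The refutation procedure can be mechanized for the propositional (ground) case; in the sketch below, clauses are frozensets of string literals, "~" marks negation, and the literal names are illustrative stand-ins for the ground atoms of the example:

```python
# Minimal propositional resolution sketch: prove a query by adding its
# negation to the clause set and deriving the empty clause.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # All resolvents of clauses c1 and c2 on complementary literals.
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def prove(kb, query):
    clauses = set(kb) | {frozenset({negate(query)})}  # proof by contradiction
    while True:
        new = set()
        for a in list(clauses):
            for b in list(clauses):
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True   # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False              # no progress: query not entailed
        clauses |= new

# Ground fragment of the example: food(Peanuts) -> likes(John, Peanuts),
# plus the fact food(Peanuts).
kb = {frozenset({"~foodPeanuts", "likesJohnPeanuts"}),
      frozenset({"foodPeanuts"})}
print(prove(kb, "likesJohnPeanuts"))  # True
```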
Utility Theory: The main idea of utility theory is really simple: an agent's
preferences over possible outcomes can be captured by a function that maps these
outcomes to a real number; the higher the number the more that agent likes that
outcome. The function is called a utility function.
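A utility function can be sketched directly as a mapping from outcomes to real numbers; the outcomes and utility values below are illustrative:

```python
# Sketch of a utility function: preferences over outcomes captured as a
# real-valued mapping, with the agent preferring the highest-utility outcome.

utility = {"win": 1.0, "draw": 0.4, "lose": 0.0}

def preferred(outcomes, u):
    # The agent likes an outcome more when its utility number is higher.
    return max(outcomes, key=u.get)

print(preferred(["lose", "draw", "win"], utility))  # win
```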
Probabilistic reasoning:
Probabilistic reasoning is a way of knowledge representation where we apply the concept
of probability to indicate the uncertainty in knowledge. In probabilistic reasoning, we
combine probability theory with logic to handle the uncertainty.
Need of probabilistic reasoning in AI:
• When there are unpredictable outcomes.
• When specifications or possibilities of predicates become too large to handle.
• When an unknown error occurs during an experiment.
Probability: Probability can be defined as the chance that an uncertain event will occur. It is a numerical measure of the likelihood that an event will occur. The value of probability always lies between 0 and 1.

1. 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
2. P(A) = 0 indicates total uncertainty in an event A.
3. P(A) = 1 indicates total certainty in an event A.
We can find the probability of an uncertain event by using the formula below:

Probability of Occurrence = Number of desired outcomes / Total number of outcomes
• P(¬A) = probability of event A not happening.
• P(¬A) + P(A) = 1.
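The formula and the complement rule can be checked with a quick calculation; rolling an even number on a fair die serves as an illustrative event A:

```python
# Quick check of the probability formula and the complement rule.

desired_outcomes = 3      # even faces: {2, 4, 6}
total_outcomes = 6        # all faces: {1, 2, 3, 4, 5, 6}

p_a = desired_outcomes / total_outcomes   # P(A) = desired / total
p_not_a = 1 - p_a                         # P(¬A) = 1 - P(A)

assert 0 <= p_a <= 1                      # probability always lies in [0, 1]
print(p_a, p_not_a)                       # 0.5 0.5
```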
Event: Each possible outcome of a variable is called an event.

Sample space: The collection of all possible events is called the sample space.
Random variables: Random variables are used to represent the events and objects in the real world.

Prior probability: The prior probability of an event is the probability computed before observing new information.

Posterior probability: The probability that is calculated after all evidence or information has been taken into account. It is a combination of prior probability and new information.
Conditional probability:
Conditional probability is the probability of an event occurring given that another event has already happened.

Suppose we want to calculate the probability of event A when event B has already occurred, "the probability of A under the conditions of B". It can be written as:

P(A|B) = P(A⋀B) / P(B)

where P(A⋀B) = joint probability of A and B, and P(B) = marginal probability of B.
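The formula can be checked with a small count-based calculation; the counts below are illustrative:

```python
# Sketch of conditional probability from counts: P(A|B) = P(A ⋀ B) / P(B).

total = 100          # size of the sample space
count_b = 40         # outcomes where B occurred
count_a_and_b = 10   # outcomes where both A and B occurred

p_b = count_b / total                 # marginal probability of B
p_a_and_b = count_a_and_b / total     # joint probability of A and B
p_a_given_b = p_a_and_b / p_b         # probability of A given B

print(p_a_given_b)  # 0.25
```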
Markov model
A Markov model is a stochastic method for randomly changing systems that possess the
Markov property. This means that, at any given time, the next state is only dependent on the
current state and is independent of anything in the past. Two commonly applied types of
Markov model are used when the system being represented is autonomous -- that is, when
the system isn't influenced by an external agent. These are as follows:
Markov chains. These are the simplest type of Markov model and are used to represent
systems where all states are observable. Markov chains show all possible states, and
between states, they show the transition rate, which is the probability of moving from one
state to another per unit of time. Applications of this type of model include prediction of
market crashes, speech recognition and search engine algorithms.
Hidden Markov models. These are used to represent systems with some unobservable
states. In addition to showing states and transition rates, hidden Markov models also
represent observations and observation likelihoods for each state. Hidden Markov models
are used for a range of applications, including thermodynamics, finance and pattern
recognition.
How are Markov models represented?
The simplest Markov model is a Markov chain, which can be
expressed in equations, as a transition matrix or as a graph. A
transition matrix is used to indicate the probability of moving from
each state to each other state. Generally, the current states are listed
in rows, and the next states are represented as columns. Each cell
then contains the probability of moving from the current state to the
next state. For any given row, all the cell values must then add up to
one.
A graph consists of circles, each of which represents a state, and
directional arrows to indicate possible transitions between states.
The directional arrows are labeled with the transition probability.
The transition probabilities on the directional arrows coming out of
any given circle must add up to one.
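The transition-matrix representation described above can be sketched for a two-state weather chain; the state names and transition probabilities are illustrative:

```python
# Two-state Markov chain (Sunny/Rainy) as a transition matrix:
# rows are current states, columns are next states, each row sums to one.

states = ["Sunny", "Rainy"]
P = [[0.8, 0.2],   # from Sunny: P(next=Sunny)=0.8, P(next=Rainy)=0.2
     [0.4, 0.6]]   # from Rainy: P(next=Sunny)=0.4, P(next=Rainy)=0.6

# Every row of a transition matrix must add up to one.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)

def step(dist, P):
    # One step of the chain: next distribution = current distribution x P.
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]               # start in Sunny with certainty
dist = step(dist, P)
print(dict(zip(states, dist)))  # {'Sunny': 0.8, 'Rainy': 0.2}
```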