UNIT - II
INDEX
ADVANCED SEARCH
• Constructing Search Trees
• Stochastic Search
• A* Search Implementation
• Minimax Search
• Alpha-Beta Pruning
Basic Knowledge Representation and Reasoning
• Propositional Logic
• First-Order Logic
• Forward Chaining and Backward Chaining
• Introduction to Probabilistic Reasoning
• Bayes Theorem
CONSTRUCTING SEARCH TREES
BINARY SEARCH TREE CONSTRUCTION
• A binary search tree is a data structure that allows us to quickly maintain a sorted list of numbers.
• It is called a binary tree because each tree node has a maximum of two children.
• It is called a search tree because it can be used to search for the presence of a number in O(log(n)) time.
• The properties that separate a binary search tree from a regular binary tree are:
• All nodes of the left subtree are less than the root node
• All nodes of the right subtree are greater than the root node
• Both subtrees of each node are also BSTs, i.e. they have the above two properties
• In a binary search tree (BST), each node contains-
• Only smaller values in its left sub tree
• Only larger values in its right sub tree
Example-
Construct a Binary Search Tree (BST) for the following sequence of numbers-
50, 70, 60, 20, 90, 10, 40, 100
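The tree for this sequence can be built by inserting the keys one by one. The following is a minimal Python sketch (the class and function names are illustrative, not part of the original notes); the in-order traversal at the end confirms that the keys come out in sorted order.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key into the BST rooted at root and return the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:  # keys greater than or equal to the root go into the right subtree
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """In-order traversal of a BST yields the keys in sorted order."""
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

root = None
for key in [50, 70, 60, 20, 90, 10, 40, 100]:
    root = insert(root, key)

print(inorder(root))  # [10, 20, 40, 50, 60, 70, 90, 100]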
PROPOSITIONAL LOGIC
• OR (∨) − The OR operation of two propositions A and B (written as A∨B) is true if at least one of the propositional variables A or B is true.
• The truth table is as follows −
A B A∨B
True True True
True False True
False True True
False False False
• AND (∧) − The AND operation of two propositions A and B (written as A∧B) is true only if both the propositional variables A and B are true.
• The truth table is as follows −
A B A∧B
True True True
True False False
False True False
False False False
• Negation (¬) − The negation of a proposition A (written as ¬A) is false when A is true, and true when A is false.
• The truth table is as follows −
A ¬A
True False
False True
• Implication / if-then (→) − An implication A→B is the proposition "if A, then B". It is false if A is true and B is false; in all other cases it is true.
• The truth table is as follows −
A B A→B
True True True
True False False
False True True
False False True
• If and only if (⇔) − A⇔B is a bi-conditional logical connective which is true when A and B have the same truth value, i.e. both are true or both are false.
• The truth table is as follows −
A B A⇔B
True True True
True False False
False True False
False False True
Contradictions
• A Contradiction is a formula which is always false for every value of its
propositional variables.
• Example − Prove that (A∨B)∧[(¬A)∧(¬B)] is a contradiction.
• The truth table is as follows −
A B A∨B ¬A ¬B (¬A)∧(¬B) (A∨B)∧[(¬A)∧(¬B)]
True True True False False False False
True False True False True False False
False True True True False False False
False False False True True True False
• Since the last column is False in every row, (A∨B)∧[(¬A)∧(¬B)] is a contradiction.
Contingency
• A Contingency is a formula which has both some true and some false values for the values of its propositional variables.
• Example − Prove (A∨B)∧(¬A) is a contingency.
• The truth table is as follows −
A B A∨B ¬A (A∨B)∧(¬A)
True True True False False
True False True False False
False True True True True
False False False True False
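Since the last column above contains both True and False values, (A∨B)∧(¬A) is a contingency. The small Python check below (not part of the original notes, purely illustrative) enumerates every truth assignment to verify both examples.

from itertools import product

def contradiction_formula(a, b):
    # (A ∨ B) ∧ [(¬A) ∧ (¬B)]
    return (a or b) and ((not a) and (not b))

def contingency_formula(a, b):
    # (A ∨ B) ∧ (¬A)
    return (a or b) and (not a)

rows = list(product([True, False], repeat=2))
print(all(not contradiction_formula(a, b) for a, b in rows))  # True: always false, so a contradiction
print({contingency_formula(a, b) for a, b in rows})           # {True, False}: a contingency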
FIRST ORDER LOGIC
1.FOL is a mode of representation in Artificial Intelligence. It is an
extension of PL.
2.FOL represents natural language statements in a concise way.
3.FOL is also called predicate logic. It is a powerful language used to
develop information about an object and express the relationship
between objects.
4.FOL does not only assume that the world contains facts (as PL does), but it also assumes the following:
1. Objects: A, B, people, numbers, colours, wars, theories, squares, pit, etc.
2. Relations: unary relations such as red and round, and relations between objects such as sister of, brother of, etc.
3. Function: father of, best friend, third inning of, end of, etc.
Parts of first-order logic
• FOL also has two parts:
1.Syntax
2.Semantics
• Syntax
• The syntax of FOL decides which
collection of symbols is a logical
expression.
• The basic syntactic elements of FOL
are symbols. We use symbols to write
statements in shorthand notation.
Atomic and complex sentences in FOL
1. Atomic Sentence
• This is a basic sentence of FOL formed from a predicate symbol followed by a parenthesis with a sequence of
terms.
• We can represent atomic sentences as predicate(value 1, value 2, …, value n).
Example
• John and Michael are colleagues → Colleagues (John, Michael)
• German Shepherd is a dog → Dog (German Shepherd)
2. Complex sentence
• Complex sentences are made by combining atomic sentences using connectives.
• FOL is further divided into two parts:
• Subject: the main part of the statement.
• Predicate: defined as a relation that binds two atoms together.
Example
• Colleague (Oliver, Benjamin) ∧ Colleague (Benjamin, Oliver)
• “x is an integer”
It has two parts;
first, x is the subject.
second, “is an integer” is called a predicate.
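One simple (non-normative) way to make this concrete is to encode each atomic sentence as a predicate name applied to a tuple of terms; the sketch below uses the examples from this section and evaluates a complex sentence built with ∧.

# Illustrative encoding of atomic FOL sentences as (predicate, term, ...) tuples.
facts = {
    ("Colleagues", "John", "Michael"),
    ("Dog", "GermanShepherd"),
    ("Colleague", "Oliver", "Benjamin"),
    ("Colleague", "Benjamin", "Oliver"),
}

def holds(predicate, *terms):
    """An atomic sentence is true in this tiny model if it is listed as a fact."""
    return (predicate, *terms) in facts

# Complex sentence: Colleague(Oliver, Benjamin) ∧ Colleague(Benjamin, Oliver)
print(holds("Colleague", "Oliver", "Benjamin") and holds("Colleague", "Benjamin", "Oliver"))  # True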
Propositional logic vs. first-order logic
Propositional logic
• Propositional logic assumes that some facts exist that can either hold
or do not hold.
First-order-logic
• The universe consists of multiple objects with certain relations among
them that can either hold or do not hold.
Predicates and Quantifiers
• Predicate Logic deals with predicates, which are propositions containing variables.
Predicate Logic – Definition
• A predicate is an expression of one or more variables defined on some specific
domain.
• A predicate with variables can be made a proposition by either assigning a value to
the variable or by quantifying the variable.
• The following are some examples of predicates −
Let E(x, y) denote "x = y"
Let X(a, b, c) denote "a + b + c = 0"
Let M(x, y) denote "x is married to y"
Quantifiers
• The variable of predicates is quantified by quantifiers.
• There are two types of quantifier in predicate logic − Universal Quantifier
and Existential Quantifier.
Universal Quantifier
• Universal quantifier states that the statements within its scope are
true for every value of the specific variable. It is denoted by the
symbol ∀.
• ∀x P(x) is read as for every value of x, P(x) is true.
Example − "Man is mortal" can be transformed into the propositional
form ∀xP(x) where P(x) is the predicate which denotes x is mortal and
the universe of discourse is all men.
Existential Quantifier
• Existential quantifier states that the statements within its scope
are true for some values of the specific variable.
• It is denoted by the symbol ∃.
• ∃ x P(x) is read as for some values of x, P(x) is true.
• Example − "Some people are dishonest" can be transformed into
the propositional form
• ∃ x P(x) where P(x) is the predicate which denotes x is dishonest
and the universe of discourse is some people.
Nested Quantifiers
• If we use a quantifier that appears within the scope of another
quantifier, it is called nested quantifier.
Example
• ∀a ∃b P(a,b) where P(a,b) denotes a+b=0
• ∀a ∀b ∀c P(a,b,c) where P(a,b,c) denotes a+(b+c)=(a+b)+c
Note − ∀a ∃b P(a,b) ≠ ∃a ∀b P(a,b)
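The note above can be checked mechanically over a finite domain. The Python sketch below (an illustration added here, not part of the original notes) uses a small range of integers as a stand-in for the universe of discourse and shows that swapping the quantifiers changes the truth value.

domain = range(-5, 6)            # a finite stand-in for the integers
P = lambda a, b: a + b == 0      # P(a, b) denotes a + b = 0

forall_exists = all(any(P(a, b) for b in domain) for a in domain)  # ∀a ∃b P(a,b)
exists_forall = any(all(P(a, b) for b in domain) for a in domain)  # ∃a ∀b P(a,b)

print(forall_exists)  # True: every a has a matching b = -a in the domain
print(exists_forall)  # False: no single a satisfies a + b = 0 for every b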
• Backward and forward chaining are methods of reasoning that exist in
the Expert System Domain of artificial intelligence.
• These techniques are used in expert systems such as MYCIN and
DENDRAL to generate solutions to real life problems.
Introduction to the Expert System
• There are three components in an expert system: user interface,
inference engine, and knowledge base.
• The user interface enables users of the system to interact with the
expert system.
• High-quality and domain-specific knowledge is stored in the
knowledge base.
• Backward and forward chaining stem from the inference engine
component.
• This is a component in which logical rules are applied to the
knowledge base to get new information or make a decision.
• The backward and forward chaining techniques are used by the
inference engine as strategies for proposing solutions or deducing
information in the expert system.
Forward chaining
• Forward chaining is a method of reasoning in artificial intelligence in which
inference rules are applied to existing data to extract additional data until an
endpoint (goal) is achieved.
• In this type of chaining, the inference engine starts by evaluating existing
facts, derivations, and conditions before deducing new information.
• An endpoint (goal) is achieved through the manipulation of knowledge that
exists in the knowledge base.
Properties of forward chaining
• The process uses a down-up approach (bottom to top).
• It starts from an initial state and uses facts to make a conclusion.
• This approach is data-driven.
• It’s employed in expert systems and production rule systems.
Example of forward chaining
• Tom is running (A)
• If a person is running, he will sweat (A->B)
• Therefore, Tom is sweating. (B)
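The example can be captured in a few lines of code. The following is a minimal, data-driven forward-chaining sketch in Python (the rule representation and the function name are assumptions made here for illustration, not the notes' own implementation): rules are applied to the known facts repeatedly until no new fact can be derived.

def forward_chain(facts, rules):
    """Data-driven inference: apply (premise -> conclusion) rules until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)   # a new fact has been derived
                changed = True
    return facts

facts = {"Tom is running"}                               # A
rules = [("Tom is running", "Tom is sweating")]          # A -> B
print(forward_chain(facts, rules))                       # {'Tom is running', 'Tom is sweating'}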
Advantages
• It can be used to draw multiple conclusions.
• It provides a good basis for arriving at conclusions.
• It’s more flexible than backward chaining because it does not have a
limitation on the data derived from it.
Disadvantages
• The process of forward chaining may be time-consuming. It may take
a lot of time to eliminate and synchronize available data.
• Unlike backward chaining, the explanation of facts or observations produced by this type of chaining is not very clear. Backward chaining, by contrast, uses a goal-driven method that arrives at conclusions efficiently.
Backward chaining
• Backward chaining is a concept in artificial intelligence that involves
backtracking from the endpoint or goal to steps that led to the
endpoint. This type of chaining starts from the goal and moves
backward to comprehend the steps that were taken to attain this goal.
• The backtracking process can also enable a person to establish logical steps that can be used to find other important solutions.
Properties of backward chaining
• The process uses an up-down approach (top to bottom).
• It’s a goal-driven method of reasoning.
• The endpoint (goal) is subdivided into sub-goals to prove the truth of
facts.
• A backward chaining algorithm is employed in inference engines,
game theories, and complex database systems.
• The modus ponens rule is used as the basis for the backward chaining process. This rule states that if both the conditional statement (p->q) and the antecedent (p) are true, then we can infer the consequent (q).
Example of backward chaining
• Tom is sweating (B).
• If a person is running, he will sweat (A->B).
• Tom is running (A).
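Here the goal "Tom is sweating" (B) is reduced to the sub-goal "Tom is running" (A) via the rule A->B. The following goal-driven Python sketch mirrors this (again, the rule format and function name are illustrative assumptions, not the notes' own code).

def backward_chain(goal, facts, rules):
    """Goal-driven inference: a goal holds if it is a known fact, or if some rule
    concludes it and that rule's premise can itself be proved."""
    if goal in facts:
        return True
    for premise, conclusion in rules:
        if conclusion == goal and backward_chain(premise, facts, rules):
            return True
    return False

facts = {"Tom is running"}                               # A
rules = [("Tom is running", "Tom is sweating")]          # A -> B
print(backward_chain("Tom is sweating", facts, rules))   # True: goal B reduced to sub-goal A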
Advantages
• The result is already known, which makes it easy to deduce inferences.
• It’s a quicker method of reasoning than forward chaining because the
endpoint is available.
• In this type of chaining, correct solutions can be derived effectively if
pre-determined rules are met by the inference engine.
Disadvantages
• The process of reasoning can only start if the endpoint is known.
• It doesn’t deduce multiple solutions or answers.
• It only derives data that is needed, which makes it less flexible than
forward chaining.
Introduction to Probabilistic Reasoning
• Consider the implication A→B, which means "if A is true then B is true". In a situation where we are not sure whether A is true or not, we cannot express this statement with plain logic; such a situation is called uncertainty.
Causes of uncertainty:
• Following are some leading causes of uncertainty to occur in the real
world.
1.Information occurred from unreliable sources.
2.Experimental Errors
3.Equipment fault
4.Temperature variation
5.Climate change.
• We use probability in probabilistic reasoning because it provides a
way to handle the uncertainty that is the result of someone's laziness
and ignorance.
• In the real world, there are lots of scenarios, where the certainty of
something is not confirmed, such as
"It will rain today,"
"behaviour of someone for some situations,"
"A match between two teams or two players."
• These are probable sentences for which we can assume that it will
happen but not sure about it, so here we use probabilistic reasoning.
• In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:
Bayes' rule
Bayesian Statistics
Probability: Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of probability always remains between 0 and 1, where 0 represents an impossible event and 1 represents a certain event.
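As a standard formulation (these identities are textbook facts added here for reference, not text from the notes above), this definition can be written as:

0 \le P(A) \le 1, \qquad
P(A) = \frac{\text{number of favourable outcomes}}{\text{total number of outcomes}}, \qquad
P(\neg A) = 1 - P(A)

For example, the probability of rolling a 4 with a fair die is 1/6 ≈ 0.167, which lies between 0 and 1.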