AI Unit-2 Notes
Syllabus
ARTIFICIAL INTELLIGENCE
UNIT - I
Problem Solving by Search-I: Introduction to AI, Intelligent Agents
Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions,
Uninformed Search Strategies: Breadth-first search, Uniform cost search, Depth-first search,
Iterative deepening Depth-first search, Bidirectional search, Informed (Heuristic) Search
Strategies: Greedy best-first search, A* search, Heuristic Functions, Beyond Classical Search:
Hill-climbing search, Simulated annealing search, Local Search in Continuous Spaces,
Searching with Non-Deterministic Actions, Searching with Partial Observations, Online
Search Agents and Unknown Environment .
UNIT - II
Problem Solving by Search-II and Propositional Logic
Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning, Imperfect
Real-Time Decisions.
Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Constraint
Propagation, Backtracking Search for CSPs, Local Search for CSPs, The Structure of
Problems.
Propositional Logic: Knowledge-Based Agents, The Wumpus World, Logic, Propositional
Logic, Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn
clauses and definite clauses, Forward and backward chaining, Effective Propositional Model
Checking, Agents Based on Propositional Logic.
UNIT - III
Logic and Knowledge Representation
First-Order Logic: Representation, Syntax and Semantics of First-Order Logic, Using First-
Order Logic, Knowledge Engineering in First-Order Logic.
Inference in First-Order Logic: Propositional vs. First-Order Inference, Unification and
Lifting, Forward Chaining, Backward Chaining, Resolution.
Knowledge Representation: Ontological Engineering, Categories and Objects, Events.
Mental Events and Mental Objects, Reasoning Systems for Categories, Reasoning with
Default Information.
UNIT - IV
Planning
Classical Planning: Definition of Classical Planning, Algorithms for Planning with State-
Space Search, Planning Graphs, other Classical Planning Approaches, Analysis of Planning
approaches.
Planning and Acting in the Real World: Time, Schedules, and Resources, Hierarchical
Planning, Planning and Acting in Nondeterministic Domains, Multi-agent Planning.
UNIT - V
Uncertain knowledge and Learning
Uncertainty: Acting under Uncertainty, Basic Probability Notation, Inference Using Full
Joint Distributions, Independence, Bayes’ Rule and Its Use,
Probabilistic Reasoning: Representing Knowledge in an Uncertain Domain, The Semantics
Of Bayesian Networks, Efficient Representation of Conditional Distributions, Approximate
Inference in Bayesian Networks.
TEXT BOOKS
1. Artificial Intelligence A Modern Approach, Third Edition, Stuart Russell and Peter
Norvig, Pearson Education.
REFERENCES:
1. Artificial Intelligence, 3rd Edn., E. Rich and K. Knight (TMH)
2. Artificial Intelligence, 3rd Edn., Patrick Henry Winston, Pearson Education.
3. Artificial Intelligence, Shivani Goel, Pearson Education.
4. Artificial Intelligence and Expert systems – Patterson, Pearson Education
LECTURE NOTES
Adversarial Search
Adversarial search examines the problems that arise when we try to plan ahead in a world where other agents are planning against us.
o In previous topics, we studied search strategies associated with a single agent that aims to find a solution, often expressed as a sequence of actions.
o But there are situations where more than one agent searches for a solution in the same search space; this commonly occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and plays against them. Each agent must consider the actions of the other agents and the effect of those actions on its own performance.
o So searches in which two or more players with conflicting goals explore the same search space for a solution are called adversarial searches, often known as games.
o Games are modeled as a search problem together with a heuristic evaluation function; these are the two main factors that help to model and solve games in AI.
Games that involve an element of chance, such as dice rolls, are termed stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
A game can be defined as a type of search problem in AI, formalized with the following elements: the initial state, PLAYER(s), ACTIONS(s), RESULT(s, a), TERMINAL-TEST(s), and UTILITY(s, p).
Game tree:
A game tree is a tree in which the nodes are game states and the edges are moves made by the players. A game tree involves the initial state, an actions function, and a result function.
Example: Tic-tac-toe game tree: The following figure shows part of the game tree for tic-tac-toe.
Following are some key points of the game:
Example Explanation :
o From the initial state, MAX has 9 possible moves, as he moves first. MAX places x and MIN places o, and both players play alternately until we reach a leaf node where one player has three in a row or all squares are filled.
o Both players compute the minimax value of each node: the best achievable utility against an optimal adversary.
o Suppose both players know tic-tac-toe well and play optimally. Each player does his best to prevent the other from winning; MIN acts against MAX in the game.
o So in the game tree we have alternating layers of MAX and MIN; each layer is called a ply. MAX places x, then MIN places o to prevent MAX from winning, and the game continues until a terminal node is reached.
o In the end either MIN wins, MAX wins, or it is a draw. This game tree is the whole search space of possibilities when MIN and MAX play tic-tac-toe taking turns alternately.
Working of the minimax algorithm:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to obtain the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.
Step 2: Now we first find the utility value for the maximizer. Its initial value is -∞, so we compare each terminal-state value with the maximizer's initial value and determine the values of the higher nodes. It finds the maximum among them all.
o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step it is the minimizer's turn, so it compares all node values with +∞ and finds the third-layer node values.
o For node B= min(4,6) = 4
o For node C= min (-3, 7) = -3
Step 4: Now it is the maximizer's turn, and it again chooses the maximum of all node values to find the value of the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be many more layers.
o For node A max(4, -3)= 4
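The walkthrough above can be sketched in code. A minimal minimax in Python, assuming the example tree's terminal utilities (D's children -1 and 4, E's 2 and 6, F's -3 and -5, G's 0 and 7):

```python
def minimax(node, maximizing):
    """Return the minimax value of a game tree given as nested lists,
    where leaves are terminal utilities (integers)."""
    if isinstance(node, int):          # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Example tree: A (MAX) -> B, C (MIN) -> D..G (MAX) -> terminal leaves
tree = [[[-1, 4], [2, 6]],    # B = min(D, E) = min(4, 6) = 4
        [[-3, -5], [0, 7]]]   # C = min(F, G) = min(-3, 7) = -3

print(minimax(tree, True))    # root A = max(4, -3) = 4
```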
The main drawback of the minimax algorithm is that it becomes very slow for complex games such as chess and Go. These games have a huge branching factor, and the player has many choices to consider.
Alpha-beta pruning is an optimization of the minimax algorithm that prunes branches which cannot influence the final decision. Alpha is the best (highest) value found so far along the path for MAX, and beta is the best (lowest) value found so far for MIN; whenever α ≥ β at a node, the remaining successors of that node are pruned. Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.
Step 1: In the first step, the MAX player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is MAX's turn. The value of α is compared first with 2 and then with 3, so max(2, 3) = 3 becomes the value of α at node D, and the node value is also 3.
Step 3: Now the algorithm backtracks to node B, where the value of β changes, as it is MIN's turn. β = +∞ is compared with the available successor node value, i.e. min(∞, 3) = 3; hence at node B, α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, MAX takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α ≥ β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E is 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed to the maximum available value, 3, as max(-∞, 3) = 3, while β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, 0, giving max(3, 0) = 3, and then with the right child, 1, giving max(3, 1) = 3. α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta is changed by comparing it with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α ≥ β is satisfied, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire subtree of G.
Step 8: C now returns the value 1 to A; the best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never computed. Hence the optimal value for the maximizer is 3 in this example.
The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined.
Move order is an important aspect of alpha-beta pruning.
o Worst ordering: In some cases the alpha-beta pruning algorithm prunes none of the leaves of the tree and works exactly like the minimax algorithm. It then also consumes more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. In this case the best move occurs on the right side of the tree. The time complexity for such an ordering is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. We apply DFS, so it searches the left side of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).
A constraint satisfaction problem (CSP) is defined by a set of variables, X1, X2, ..., Xn, and a
set of constraints, C1, C2, ..., Cm. Each variable Xi has a nonempty domain Di of possible
values. A complete assignment is one in which every variable is mentioned, and a solution to a
CSP is a complete assignment that satisfies all the constraints. Some CSPs also require a
solution that maximizes an objective function.
Examples: the n-queens problem, a crossword problem, a map-coloring problem.
Constraint graph: A CSP is usually represented as an undirected graph, called a constraint
graph, where the nodes are the variables and the edges are the binary constraints.
A CSP can be given an incremental formulation as a standard search problem:
Initial state: the empty assignment { }, in which all variables are unassigned.
Successor function: assign a value to an unassigned variable, provided that it
does not conflict with the previously assigned variables.
Goal test: the current assignment is complete.
Discrete variables:
1) Finite domains: For n variables each with finite domain size d, there are O(d^n) possible complete assignments. Complete enumeration is possible. E.g. map-coloring problems.
2) Infinite domains: For n variables with infinite domains, such as strings or integers.
E.g. the set of strings and the set of integers.
Types of Constraints: Constraints can be unary (involving a single variable), binary (involving a pair of variables), or higher-order/global (involving three or more variables).
Backtracking search is a form of depth-first search that chooses a value for one variable at a
time and backtracks when a variable has no legal values left to assign. The algorithm is
shown in the figure.
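A minimal sketch of backtracking search for a CSP, here applied to map coloring (the region names, colors, and adjacency are chosen for illustration):

```python
def backtracking_search(variables, domains, neighbors):
    """Depth-first search that assigns one variable at a time and
    backtracks when a variable has no legal value left."""
    def consistent(var, value, assignment):
        # binary inequality constraint: neighboring regions must differ
        return all(assignment.get(n) != value for n in neighbors[var])

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return assignment                  # complete assignment: solution
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]            # undo and try the next value
        return None                            # "return failure": backtrack

    return backtrack({})

# Illustrative map-coloring instance: adjacent regions get different colors
variables = ["WA", "NT", "SA", "Q", "NSW", "V"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"],
             "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"],
             "V": ["SA", "NSW"]}
solution = backtracking_search(variables, domains, neighbors)
```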
Constraint propagation:
1. Alldiff constraint: All the variables involved must have distinct values.
Chronological backtracking: when a branch of the search fails, back up to the preceding
variable and try a different value for it. Here the most recent decision point is revisited.
Local search using the min-conflicts heuristic has been applied to constraint satisfaction
problems with great success. It uses a complete-state formulation: the initial state assigns
a value to every variable, and the successor function usually works by changing the value
of one variable at a time.
In choosing a new value for a variable, the most obvious heuristic is to select the value that
results in the minimum number of conflicts with other variables—the min-conflicts
heuristic.
Min-conflicts is surprisingly effective for many CSPs, particularly when given a reasonable
initial state. Amazingly, on the n-queens problem, if you don't count the initial placement
of queens, the runtime of min-conflicts is roughly independent of problem size.
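A sketch of the min-conflicts heuristic for n-queens, using the complete-state formulation described above (one queen per column; each step moves one queen in a randomly chosen conflicted column to its minimum-conflict row):

```python
import random

def conflicts(rows, col, row):
    """Number of queens attacking a queen placed at (col, row)."""
    return sum(1 for c in range(len(rows)) if c != col and
               (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

def min_conflicts_nqueens(n, max_steps=10000):
    # complete-state formulation: start with a random full assignment
    rows = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                       # solution: no conflicts left
        col = random.choice(conflicted)
        # choose the value that minimizes conflicts with the other queens
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
    return None                               # plateau reached: give up (could restart)
```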
The complexity of solving a CSP is strongly related to the structure of its constraint graph. If
the CSP can be divided into independent subproblems, each subproblem is solved
independently and the solutions are then combined. When n variables are divided into n/c
subproblems of c variables each, each takes O(d^c) work to solve; hence the total work is O(d^c · n/c).
Any tree-structured CSP can be solved in time linear in the number of variables.
The algorithm has the following steps:
1. Choose any variable as the root of the tree and order the variables from the root to
the leaves so that every node's parent in the tree precedes it in the ordering. Label
the variables X1, ..., Xn in order; every variable except the root has exactly one
parent variable.
2. For j from n down to 2, apply arc consistency to the arc (Xi, Xj), where Xi is the
parent of Xj, removing values from DOMAIN[Xi] as necessary.
3. For j from 1 to n, assign any value for Xj consistent with the value assigned for Xi,
where Xi is the parent of Xj.
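The three steps can be sketched as follows, with binary constraints given as a predicate (the variable names and the small coloring instance are illustrative):

```python
def solve_tree_csp(order, parent, domains, constraint):
    """order: variables listed root-first; parent[root] is None.
    constraint(u, a, v, b) is True when u=a and v=b are compatible."""
    domains = {v: list(d) for v, d in domains.items()}
    # Step 2: from the leaves up, make each arc (parent, child) consistent
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = [a for a in domains[p]
                      if any(constraint(p, a, child, b) for b in domains[child])]
        if not domains[p]:
            return None                       # no consistent assignment exists
    # Step 3: from the root down, pick any value consistent with the parent
    assignment = {}
    for v in order:
        if parent[v] is None:
            assignment[v] = domains[v][0]
        else:
            assignment[v] = next(b for b in domains[v]
                                 if constraint(parent[v], assignment[parent[v]], v, b))
    return assignment

# Illustrative tree-structured coloring: A is the root, B and C its children
order = ["A", "B", "C"]
parent = {"A": None, "B": "A", "C": "A"}
domains = {"A": ["red"], "B": ["red", "green"], "C": ["red", "green"]}
different = lambda u, a, v, b: a != b
solution = solve_tree_csp(order, parent, domains, different)
```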
General constraint graphs can be reduced to trees in two ways: (1) by removing nodes (cutset conditioning) and (2) by collapsing nodes together (tree decomposition).
Knowledge-Based Agents
o An intelligent agent needs knowledge about the real world in order to make decisions and reason, so that it can act efficiently.
o Knowledge-based agents are agents that can maintain an internal state of knowledge, reason over that knowledge, update it after observations, and take actions. These agents can represent the world with some formal representation and act intelligently.
o Knowledge-based agents are composed of two main parts:
o Knowledge-base and
o Inference system.
The above diagram represents a generalized architecture for a knowledge-based agent. The
knowledge-based agent (KBA) takes input from the environment by perceiving it. The input
is taken by the inference engine of the agent, which also communicates with the KB to decide on an action as per the
knowledge stored in the KB. The learning element of the KBA regularly updates the KB by learning new
knowledge.
A knowledge base is required for an agent to update its knowledge, learn from experience, and take
action as per its knowledge.
Inference system
Inference means deriving new sentences from old ones. An inference system allows us to add new sentences to
the knowledge base. A sentence is a proposition about the world. The inference system applies logical rules to
the KB to deduce new information.
The inference system generates new facts so that the agent can update the KB. An inference system works
mainly with two methods, which are given as:
o Forward chaining
o Backward chaining
Following are the three operations performed by a KBA in order to show intelligent
behavior:
1. TELL: This operation tells the knowledge base what it perceives from the environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. Perform: It performs the selected action.
function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action ← ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t ← t + 1
    return action
The knowledge-based agent takes percept as input and returns an action as output. The agent maintains
the knowledge base, KB, and it initially has some background knowledge of the real world. It also has a
counter to indicate the time for the whole process, and this counter is initialized with zero.
Each time the function is called, it performs its three operations:
MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent perceived the given
percept at the given time.
MAKE-ACTION-QUERY generates a sentence that asks which action should be performed at the current
time.
MAKE-ACTION-SENTENCE generates a sentence asserting that the chosen action was executed.
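The TELL/ASK cycle can be sketched in Python. The rule table standing in for ASK is a hypothetical percept-to-action mapping, not a real inference engine, and the sentence tuples are a simplified stand-in for logical sentences:

```python
class KBAgent:
    def __init__(self, rules):
        self.kb = []        # sentences TELLed so far
        self.rules = rules  # stand-in for inference: percept -> action
        self.t = 0          # time counter, initially 0

    def step(self, percept):
        self.kb.append(("percept", percept, self.t))  # TELL the percept sentence
        action = self.rules.get(percept, "noop")      # ASK which action to take
        self.kb.append(("action", action, self.t))    # TELL the action sentence
        self.t += 1
        return action

agent = KBAgent({"stench": "turn-back", "glitter": "grab"})
print(agent.step("glitter"))   # grab
```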
A knowledge-based agent can be viewed at different levels which are given below:
1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level we need to specify what the
agent knows and what the agent's goals are. With these specifications we can fix its behavior. For
example, suppose an automated taxi agent needs to go from station A to station B and it knows the
way from A to B; this comes at the knowledge level.
2. Logical level:
At this level, we understand how the knowledge is represented and stored. At this level,
sentences are encoded into different logics: an encoding of knowledge into logical
sentences occurs. At the logical level we can expect the automated taxi agent to reach destination B.
3. Implementation level:
This is the physical representation of logic and knowledge. At the implementation level, the agent performs
actions as per the logical and knowledge levels. At this level, an automated taxi agent actually implements its
knowledge and logic so that it can reach the destination.
However, in the real world, a successful agent can be built by combining both declarative and
procedural approaches, and declarative knowledge can often be compiled into more efficient
procedural code.
The Wumpus world is a simple world used to illustrate the worth of a knowledge-based agent and to
demonstrate knowledge representation. It was inspired by the video game Hunt the Wumpus by Gregory Yob
in 1973.
The Wumpus world is a cave with 4×4 rooms connected by passageways, so there are 16
rooms in total, connected to each other. We have a knowledge-based agent who will move forward in
this world. The cave has a room with a beast called the Wumpus, which eats anyone who enters that
room. The Wumpus can be shot by the agent, but the agent has a single arrow. In the Wumpus world
there are also some pit rooms, which are bottomless; if the agent falls into a pit, it is stuck there
forever. The exciting thing about this cave is that one room holds the possibility of finding a heap of
gold. So the agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the
Wumpus. The agent receives a reward if it comes out with the gold, and a penalty if it is eaten by the
Wumpus or falls into a pit.
Following is a sample diagram representing the Wumpus world, showing some rooms with pits,
one room with the Wumpus, and the agent at square (1, 1) of the world.
Now we will explore the Wumpus world and will determine how the agent will find its goal by applying
logical reasoning.
Initially, the agent is in the first room, square [1,1], and we already know that this room is safe
for the agent; to mark it as safe in diagram (a) we add the symbol OK. The symbol
A is used to represent the agent, B a breeze, G glitter or gold, V a visited room, P a
pit, and W the Wumpus.
At room [1,1] the agent perceives no breeze and no stench, which means the adjacent squares are also
OK.
Now the agent needs to move forward, so it will move to either [1,2] or [2,1]. Suppose the agent moves to
room [2,1]. In this room the agent perceives a breeze, which means a pit is near this room. The pit
can be in [3,1] or [2,2], so we add the symbol P? to mark these as possible pit rooms.
Now the agent stops to think and does not make any harmful move. The agent goes back to room
[1,1]. Rooms [1,1] and [2,1] have been visited by the agent, so we use the symbol V to mark the visited
squares.
In the third step, the agent moves to room [1,2], which is OK. In room [1,2] the agent perceives a
stench, which means there must be a Wumpus nearby. But the Wumpus cannot be in room [1,1], by the
rules of the game, and it cannot be in [2,2], since the agent detected no stench at [2,1].
Therefore the agent infers that the Wumpus is in room [1,3]. In the current state there is no breeze, which
means [2,2] has no pit and no Wumpus. So it is safe; we mark it OK, and the agent moves
on to [2,2].
At room [2,2] there is no stench and no breeze, so suppose the agent decides to move to [2,3]. At
room [2,3] the agent perceives glitter, so it should grab the gold and climb out of the cave.
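The inferences in this walkthrough can be sketched as simple set reasoning over percepts: a square is known pit-free if some visited neighbor felt no breeze, and Wumpus-free if some visited neighbor smelled no stench. This is an illustration of the reasoning, not the full propositional-logic agent:

```python
def adjacent(cell):
    """Neighbors of a square inside the 4x4 cave."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 1 <= x + dx <= 4 and 1 <= y + dy <= 4}

def safe_cells(percepts):
    """percepts maps each visited square to (breeze, stench)."""
    no_pit = set(percepts)     # visited squares are pit-free
    no_wumpus = set(percepts)  # ...and Wumpus-free
    for cell, (breeze, stench) in percepts.items():
        if not breeze:
            no_pit |= adjacent(cell)      # no breeze: no pit next door
        if not stench:
            no_wumpus |= adjacent(cell)   # no stench: no Wumpus next door
    return no_pit & no_wumpus

# Percepts after visiting [1,1], [2,1] (breeze) and [1,2] (stench):
percepts = {(1, 1): (False, False), (2, 1): (True, False), (1, 2): (False, True)}
print((2, 2) in safe_cells(percepts))   # True: [2,2] is inferred safe
```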
Propositional logic (PL) is the simplest form of logic, in which all statements are made of propositions.
A proposition is a declarative statement which is either true or false. Propositional logic is a technique of knowledge
representation in logical and mathematical form.
Example:
a) It is Sunday.
b) The Sun rises from the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.
o A proposition formula which is always true is called a tautology; it is also called a valid
sentence.
o A proposition formula which is always false is called a contradiction.
o A proposition formula which can take both true and false values is called a contingency.
o Statements that are questions, commands, or opinions, such as "Where is
Rohini?", "How are you?", "What is your name?", are not propositions.
The syntax of propositional logic defines the allowable sentences for the knowledge representation. There
are two types of Propositions:
Atomic Propositions: Atomic propositions are simple propositions. An atomic proposition consists of a single
proposition symbol. These are the sentences which must be either true or false.
Compound Propositions: Compound propositions are constructed by combining simpler, atomic
propositions using logical connectives.
Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We
can create compound propositions with the help of logical connectives. There are mainly five connectives,
which are given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a
negative literal.
2. Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. → P∧ Q.
3. Disjunction: A sentence which has ∨ connective, such as P ∨ Q. is called disjunction, where P
and Q are the propositions.
Example: "Ritika is a doctor or an engineer".
Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q is called an implication. Implications are also known as
if-then rules; P → Q can be read as "if P, then Q".
5. Biconditional: A sentence such as P ⇔ Q is called a biconditional; it is true exactly when P and Q have the same truth value.
Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible scenarios. We
can combine all possible combinations of truth values with the logical connectives, and the representation of these
combinations in tabular format is called a truth table. Following are the truth tables for all logical
connectives:
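The truth tables for the five connectives can also be generated programmatically; a quick sketch:

```python
from itertools import product

# One row per combination of truth values for P and Q
print(f"{'P':>5} {'Q':>5} {'¬P':>5} {'P∧Q':>5} {'P∨Q':>5} {'P→Q':>5} {'P⇔Q':>5}")
for P, Q in product([True, False], repeat=2):
    # implication P→Q is equivalent to (¬P ∨ Q); biconditional is equality
    row = [P, Q, not P, P and Q, P or Q, (not P) or Q, P == Q]
    print(" ".join(f"{str(v):>5}" for v in row))
```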
Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences. Before
understanding the FOL inference rule, let's understand some basic terminologies used in FOL.
Substitution:
Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference
systems in first-order logic. Substitution is more complex in the presence of quantifiers in FOL. If we
write F[a/x], it refers to substituting the constant "a" for the variable "x" in F.
Equality:
First-Order logic does not only use predicate and terms for making atomic sentences but also uses another
way, which is equality in FOL. For this, we can use equality symbols which specify that the two terms
refer to the same object.
For example, the sentence Brother(John) = Smith states that the object referred to by Brother(John) is the same as the object referred
to by Smith. The equality symbol can also be used with negation to state that two terms do not refer to the
same object.
As in propositional logic, we also have inference rules in first-order logic. Following are some basic
inference rules in FOL:
o Universal Generalization
o Universal Instantiation
o Existential Instantiation
o Existential introduction
1. Universal Generalization:
o Universal generalization is a valid inference rule which states that if premise P(c) is true for any
arbitrary element c in the universe of discourse, then we can have a conclusion as ∀ x P(x).
Example: Let's represent, P(c): "A byte contains 8 bits", so for ∀ x P(x) "All bytes contain 8 bits.", it
will also be true.
2. Universal Instantiation:
o Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can
be applied multiple times to add new sentences.
o The new KB is logically equivalent to the previous KB.
o As per UI, we can infer any sentence obtained by substituting a ground term for the
variable.
o The UI rule states that we can infer any sentence P(c) by substituting a ground term c (a constant
within the domain of x) for the variable in ∀x P(x), for any object in the universe of discourse.
Example: "All kings who are greedy are evil." Let our knowledge base contain this detail in FOL form: ∀x King(x) ∧ Greedy(x) → Evil(x).
From this information, we can infer statements such as King(John) ∧ Greedy(John) → Evil(John) using Universal Instantiation.
3. Existential Instantiation:
o Existential instantiation, also called Existential Elimination, is a valid inference rule in
first-order logic.
o It can be applied only once to replace the existential sentence.
o The new KB is not logically equivalent to the old KB, but it is satisfiable if the old KB was
satisfiable.
o This rule states that one can infer P(c) from a formula of the form ∃x P(x) for a new
constant symbol c.
o The restriction with this rule is that the c used in the rule must be a new term for which P(c) is true.
Example: From ∃x Crown(x) ∧ OnHead(x, John), we can infer Crown(K) ∧ OnHead(K, John), as long as the constant K does not appear elsewhere in the knowledge base.
4. Existential Introduction:
o Existential introduction, also known as existential generalization, is a valid inference rule which states that if P(c) is true for some element c, then we can conclude ∃x P(x).
For the inference process in FOL, we have a single inference rule called Generalized Modus
Ponens. It is a lifted version of Modus Ponens.
Generalized Modus Ponens can be summarized as: "P implies Q and P is asserted to be true, therefore Q
must be true."
According to Generalized Modus Ponens, for atomic sentences pi, pi′, and q, where there is a substitution θ such that
SUBST(θ, pi′) = SUBST(θ, pi), it can be represented as:
Example:
We will use this rule for "kings who are greedy are evil": we find some x such that x is a king and x is greedy, so
we can infer that x is evil.
Resolution
Resolution is a theorem-proving technique that proceeds by building refutation proofs, i.e., proofs by
contradiction. It was invented by the mathematician John Alan Robinson in 1965.
Resolution is used when various statements are given and we need to prove a conclusion from those
statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule
which can efficiently operate on the conjunctive normal form or clausal form.
Clause: A disjunction of literals (atomic sentences) is called a clause. A clause containing a single literal is known as a unit clause.
The resolution rule for first-order logic is simply a lifted version of the propositional rule. Resolution can
resolve two clauses if they contain complementary literals, which are assumed to be standardized apart so
that they share no variables.
This rule is also called the binary resolution rule because it resolves exactly two literals.
Example:
Suppose two clauses contain the complementary literals Loves(f(x), x) and ¬Loves(a, b).
These literals can be unified with the unifier θ = {a/f(x), b/x}; the resolvent clause is then formed from the remaining literals of the two clauses with θ applied.
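Unification can be sketched as a small recursive procedure. In this sketch, terms are tuples (function or predicate applications) or strings, lowercase strings are treated as variables, and the occurs check is omitted for brevity:

```python
def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def unify(x, y, theta):
    """Return a substitution (dict) making x and y identical, or None."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):      # unify argument lists element-wise
            theta = unify(xi, yi, theta)
        return theta
    return None

def unify_var(var, term, theta):
    if var in theta:
        return unify(theta[var], term, theta)
    return {**theta, var: term}       # bind the variable (no occurs check)

# Loves(f(x), x) vs Loves(a, b): predicates as tuples, variables lowercase
theta = unify(("Loves", ("f", "x"), "x"), ("Loves", "a", "b"), {})
print(theta)   # {'a': ('f', 'x'), 'x': 'b'}
```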
To better understand all the above steps, we will take an example in which we will apply resolution.
Example:
a. John likes all kinds of food.
b. Apples and vegetables are food.
c. Anything anyone eats and is not killed by is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats.
Prove by resolution that:
f. John likes peanuts.
In the first step we convert all the given statements into first-order logic.
In first-order logic resolution, the FOL must be converted into CNF, as the CNF form makes
resolution proofs easier.
Eliminate all implications (→) and rewrite:
∀x ¬ food(x) V likes(John, x)
a. food(Apple) Λ food(vegetables)
b. ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
c. eats(Anil, Peanuts) Λ alive(Anil)
d. ∀x ¬ eats(Anil, x) V eats(Harry, x)
e. ∀x ¬(¬ killed(x)) V alive(x), i.e. ∀x killed(x) V alive(x)
f. ∀x ¬ alive(x) V ¬ killed(x)
g. likes(John, Peanuts).
Rename variables or standardize variables
∀x ¬ food(x) V likes(John, x)
a. food(Apple) Λ food(vegetables)
b. ∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
c. eats(Anil, Peanuts) Λ alive(Anil)
d. ∀w ¬ eats(Anil, w) V eats(Harry, w)
e. ∀g killed(g) V alive(g)
f. ∀k ¬ alive(k) V ¬ killed(k)
g. likes(John, Peanuts).
Eliminate existential quantifiers (Skolemization).
In this step, we will eliminate existential quantifier ∃, and this process is known as Skolemization. But in
this example problem since there is no existential quantifier so all the statements will remain same in this
step.
Drop universal quantifiers.
In this step we drop all universal quantifiers, since all the statements are implicitly universally
quantified, so we do not need them.
¬ food(x) V likes(John, x)
a. food(Apple)
b. food(vegetables)
c. ¬ eats(y, z) V killed(y) V food(z)
d. eats (Anil, Peanuts)
e. alive(Anil)
f. ¬ eats(Anil, w) V eats(Harry, w)
g. killed(g) V alive(g)
h. ¬ alive(k) V ¬ killed(k)
i. likes(John, Peanuts).
In this step, we apply negation to the conclusion statement, which is written as
¬ likes(John, Peanuts)
Now in this step, we will solve the problem by resolution tree using substitution. For the above problem,
it will be given as follows:
Hence the negation of the conclusion has been proved as a complete contradiction with the given set of
statements.
o In the first step of resolution graph, ¬likes(John, Peanuts) , and likes(John, x) get
resolved(canceled) by substitution of {Peanuts/x}, and we are left with ¬ food(Peanuts)
o In the second step of the resolution graph, ¬ food(Peanuts) , and food(z) get resolved (canceled)
by substitution of { Peanuts/z}, and we are left with ¬ eats(y, Peanuts) V killed(y) .
o In the third step of the resolution graph, ¬ eats(y, Peanuts) and eats (Anil, Peanuts) get resolved
by substitution {Anil/y}, and we are left with Killed(Anil) .
o In the fourth step of the resolution graph, Killed(Anil) and ¬ killed(k) get resolved by
substitution {Anil/k}, and we are left with ¬ alive(Anil).
o In the last step of the resolution graph ¬ alive(Anil) and alive(Anil) get resolved.
The definite clause language does not allow a contradiction to be stated. However, a simple expansion of
the language can allow proof by contradiction.
An integrity constraint is a clause of the form
false←a1∧...∧ak.
where the ai are atoms and false is a special atom that is false in all interpretations.
A Horn clause is either a definite clause or an integrity constraint. That is, a Horn clause has
either false or a normal atom as its head.
Integrity constraints allow the system to prove that some conjunction of atoms is false in all models of a
knowledge base - that is, to prove disjunctions of negations of atoms. Recall that ¬p is the negation of p,
which is true in an interpretation when p is false in that interpretation, and p∨q is
the disjunction of p and q, which is true in an interpretation if p is true or q is true or both are true in the
interpretation. The integrity constraint false←a1∧...∧ak is logically equivalent to ¬a1∨...∨¬ak.
A Horn clause knowledge base can imply negations of atoms, as shown in Example 5.16.
Example 5.16: Consider the knowledge base KB1:
false←a∧b.
a←c.
b←c.
The atom c is false in all models of KB1. If c were true in some model I of KB1, then a and b would both be true in I (otherwise I would not be a model of KB1). Because false is false in I while a and b are true in I, the first clause is false in I, contradicting the assumption that I is a model of KB1. Thus, c is false in all models of KB1.
This is expressed as
KB1 ⊨ ¬c
which means that ¬c is true in all models of KB1, and so c is false in all models of KB1.
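For a knowledge base this small, the entailment can be checked by brute-force enumeration of all interpretations. The sketch below is illustrative, not from the original notes; it encodes KB1 directly and confirms that c is false in every model.

```python
from itertools import product

# KB1 as clauses head ← body over atoms a, b, c;
# 'false' is the special atom used by the integrity constraint.
rules = [('false', ('a', 'b')),   # false ← a ∧ b   (integrity constraint)
         ('a', ('c',)),           # a ← c
         ('b', ('c',))]           # b ← c

atoms = ('a', 'b', 'c')

def is_model(assign):
    """True iff the interpretation satisfies every clause; a clause
    head ← body is satisfied when the head is true or the body is false."""
    truth = dict(zip(atoms, assign))
    truth['false'] = False        # 'false' is false in every interpretation
    return all(truth[head] or not all(truth[b] for b in body)
               for head, body in rules)

models = [dict(zip(atoms, a)) for a in product([True, False], repeat=3)
          if is_model(a)]
print(all(not m['c'] for m in models))  # True: c is false in every model, so KB1 ⊨ ¬c
```

Only three interpretations are models of KB1, and c is false in each of them, which is exactly what the argument above established.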
A. Forward Chaining
Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. It is a form of reasoning that starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to derive more facts until the goal is reached.
The forward-chaining algorithm starts from the known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Forward-chaining example:
Step-1:
In the first step we start with the known facts and choose the sentences that do not have implications, such as American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts are represented as below.
Step-2:
At the second step, we infer new facts from the available facts whose premises are satisfied.
Rule-(1) does not yet have its premises satisfied, so nothing is added from it in the first iteration.
Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which follows from the conjunction of Rules (2) and (3).
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which follows from Rule-(7).
Step-3:
At step-3, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which follows from all the available facts. Hence we have reached the goal statement.
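The iteration above can be sketched as a small propositional forward chainer. This is an illustrative simplification: the first-order rules are hand-grounded to the single instances used in the example, whereas real forward chaining would unify variables against facts.

```python
def forward_chain(facts, rules):
    """Propositional forward chaining: fire every rule whose premises
    are all known, add its conclusion, and repeat until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hand-grounded instances of the example's rules:
rules = [
    (('Missile_T1',), 'Weapon_T1'),                       # missiles are weapons
    (('Missile_T1', 'Owns_A_T1'), 'Sells_Robert_T1_A'),   # Robert sold A its missiles
    (('Enemy_A_America',), 'Hostile_A'),                  # enemies of America are hostile
    (('American_Robert', 'Weapon_T1', 'Sells_Robert_T1_A',
      'Hostile_A'), 'Criminal_Robert'),                   # Rule-(1)
]
facts = {'American_Robert', 'Missile_T1', 'Owns_A_T1', 'Enemy_A_America'}
print('Criminal_Robert' in forward_chain(facts, rules))  # True
```

The three iterations in the text correspond to the passes of the loop: first Weapon_T1, Sells_Robert_T1_A, and Hostile_A are added, and then Criminal_Robert becomes derivable.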
B. Backward Chaining:
Backward-chaining is also known as a backward deduction or backward reasoning method when using an
inference engine. A backward chaining algorithm is a form of reasoning, which starts with the goal and
works backward, chaining through rules to find known facts that support the goal.
o Step-1:
o In the first step, we take the goal fact Criminal(Robert) as the starting node of the proof, since backward chaining begins at the goal.
o Step-2:
o At the second step, we infer other facts from the goal fact that satisfy the rules. As we can see in Rule-(1), the goal predicate Criminal(Robert) is present with the substitution {Robert/p}. So we add all the conjunctive facts below the first level and replace p with Robert.
o Here we can see that American(Robert) is a known fact, so it is proved.
o Step-3:
o At step-3, we extract the further fact Missile(q), the premise from which Weapon(q) is inferred, as it satisfies Rule-(5). Weapon(q) is then proved with the substitution of the constant T1 for q.
o Step-4:
o At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4) with the substitution of A in place of r. So these two statements are proved here.
o Step-5:
o At step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule-(6).
Hence all the statements are proved true using backward chaining.
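The goal-directed search above can be sketched with a small recursive backward chainer over the same hand-grounded rule base. This is an illustrative simplification with no loop detection, which is safe here only because the rule base is acyclic.

```python
def backward_chain(goal, facts, rules):
    """Propositional backward chaining: a goal holds if it is a known fact,
    or if some rule concludes it and all of that rule's premises hold."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules)

# Same hand-grounded rule base as in the forward-chaining example above:
rules = [
    (('Missile_T1',), 'Weapon_T1'),
    (('Missile_T1', 'Owns_A_T1'), 'Sells_Robert_T1_A'),
    (('Enemy_A_America',), 'Hostile_A'),
    (('American_Robert', 'Weapon_T1', 'Sells_Robert_T1_A',
      'Hostile_A'), 'Criminal_Robert'),
]
facts = {'American_Robert', 'Missile_T1', 'Owns_A_T1', 'Enemy_A_America'}
print(backward_chain('Criminal_Robert', facts, rules))  # True
```

Note the contrast with forward chaining: here only the sub-goals of Criminal(Robert) are ever examined, mirroring steps 1 to 5 above.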
Tutorial Questions
1. What is a local minima problem?
2. How does the alpha-beta pruning technique work?
3. What is the use of online search agents in unknown environments?
4. Specify the complexity of expectiminimax.
5. How to improve the effectiveness of a search-based problem-solving
technique?
6. What is a constraint satisfaction problem?
7. Differentiate greedy search with A* search.
8. Write short notes on the monotonicity and optimality of A* search.
9. Give example for effective branching factor.
10. Define relaxed problem.
Descriptive Questions
ASSIGNMENT QUESTIONS
AI TEST PAPERS
Max marks:10
SET : 2
https://nptel.ac.in/courses/106/105/106105077/11
https://nptel.ac.in/courses/106/105/106105077/09
https://nptel.ac.in/courses/106/105/106105077/10
https://nptel.ac.in/courses/106/105/106105077/12
https://nptel.ac.in/courses/106/105/106105077/13
https://nptel.ac.in/courses/106/105/106105077/14
UNIVERSITY QUESTIONS
Question Bank:
Part A
Short Questions
PART B
Objective questions
Zero-sum games are those in which the utility values at the end of the game are always the same for both players.
a) True
b) False
Answer: b
6. A game can be formally defined as a kind of search problem with the following
components:
a) Initial State
b) Successor Function
c) Terminal Test
d) All of the mentioned
Answer: d
7. The initial state and the legal moves for each side define the __________ for the game.
a) Search Tree
b) Game Tree
c) State Space Search
d) Forest
Answer: b
8. General algorithm applied on game tree for making decision of win/lose is ____________
a) DFS/BFS Search Algorithms
b) Heuristic Search Algorithms
c) Greedy Search Algorithms
d) MIN/MAX Algorithms
Answer: d
9. Which search is equal to minimax search but eliminates the branches that can’t influence the
final decision?
a) Depth-first search
b) Breadth-first search
c) Alpha-beta pruning
d) None of the mentioned
Answer: c
Real-time Applications
Topic: Adversarial Search
Adversarial search is a search in which we examine the problems that arise when we try to plan ahead in a world where other agents are planning against us.
o In previous topics, we studied search strategies that involve only a single agent aiming to find a solution, often expressed as a sequence of actions.
o But there can be situations where more than one agent is searching for a solution in the same search space; this situation usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
o So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as games.
o Games are modeled as a search problem together with a heuristic evaluation function; these are the two main factors that help to model and solve games in AI.
o Perfect information: A game with perfect information is one in which the agents can see the complete board. The agents have all the information about the game, and they can also see each other's moves. Examples are chess, Checkers, Go, etc.
o Imperfect information: If the agents in a game do not have all the information about the game and are not aware of what is going on, the game is called a game with imperfect information, such as Battleship, blind tic-tac-toe, Bridge, etc.
o Deterministic games: Deterministic games are those that follow a strict pattern and set of rules, with no randomness associated with them. Examples are chess, Checkers, Go, tic-tac-toe, etc.
Zero-Sum Game
o Zero-sum games are adversarial searches that involve pure competition.
o In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent.
o One player tries to maximize a single value, while the other player tries to minimize it.
o Each move by one player in the game is called a ply.
o Chess and tic-tac-toe are examples of zero-sum games.
In a zero-sum game, each player has to consider:
o What to do
o How to decide the move
o He also needs to think about his opponent
o The opponent, likewise, thinks about what to do
Each of the players is trying to find out the response of his opponent to their actions. This
requires embedded thinking or backward reasoning to solve the game problems in AI.
o Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called the payoff function. For chess, the outcomes are a win, a loss, or a draw, with payoff values +1, 0, and ½. For tic-tac-toe, the utility values are +1, -1, and 0.
Game tree:
A game tree is a tree whose nodes are the game states and whose edges are the moves made by the players. A game tree involves an initial state, an actions function, and a result function. The accompanying figure shows part of the game tree for the tic-tac-toe game.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree's leaves but entire sub-trees.
o The two parameters are defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
Alpha-beta pruning applied to a standard minimax tree returns the same move as the standard algorithm, but it removes all the nodes that do not affect the final decision and only slow the algorithm down. Pruning these nodes therefore makes the algorithm faster.
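The cut-offs can be sketched as minimax with the alpha and beta parameters threaded through the recursion. This is an illustrative sketch using a toy nested-list tree; a real implementation would take the game's successor and evaluation functions.

```python
import math

def alphabeta(state, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning. `children(s)` returns successor
    states (empty at terminals); `evaluate(s)` scores terminal states."""
    kids = children(state)
    if not kids:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:     # beta cut-off: MIN will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in kids:
            value = min(value, alphabeta(child, alpha, beta, True,
                                         children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:     # alpha cut-off: MAX already has a better option
                break
        return value

# Toy tree: after MIN of the left branch yields 3, the right branch is
# cut off as soon as the leaf 2 is seen, without examining the leaf 9.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
print(alphabeta(tree, -math.inf, math.inf, True, children, evaluate))  # 3
```

As the text states, the returned value (and hence the chosen move) is identical to plain minimax; only the amount of work is reduced.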