

AI Unit-2 notes

Computer Science and Engineering (Jawaharlal Nehru Technological University, Hyderabad)


Syllabus

ARTIFICIAL INTELLIGENCE

B.TECH IV Year I Sem. L T P C

UNIT - I
Problem Solving by Search-I: Introduction to AI, Intelligent Agents
Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions,
Uninformed Search Strategies: Breadth-first search, Uniform cost search, Depth-first search,
Iterative deepening Depth-first search, Bidirectional search, Informed (Heuristic) Search
Strategies: Greedy best-first search, A* search, Heuristic Functions, Beyond Classical Search:
Hill-climbing search, Simulated annealing search, Local Search in Continuous Spaces,
Searching with Non-Deterministic Actions, Searching with Partial Observations, Online
Search Agents and Unknown Environments.

UNIT - II
Problem Solving by Search-II and Propositional Logic
Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning, Imperfect
Real-Time Decisions.
Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Constraint
Propagation, Backtracking Search for CSPs, Local Search for CSPs, The Structure of
Problems.
Propositional Logic: Knowledge-Based Agents, The Wumpus World, Logic, Propositional
Logic, Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn
clauses and definite clauses, Forward and backward chaining, Effective Propositional Model
Checking, Agents Based on Propositional Logic.

UNIT - III
Logic and Knowledge Representation
First-Order Logic: Representation, Syntax and Semantics of First-Order Logic, Using First-
Order Logic, Knowledge Engineering in First-Order Logic.
Inference in First-Order Logic: Propositional vs. First-Order Inference, Unification and
Lifting, Forward Chaining, Backward Chaining, Resolution.
Knowledge Representation: Ontological Engineering, Categories and Objects, Events.
Mental Events and Mental Objects, Reasoning Systems for Categories, Reasoning with
Default Information.

UNIT - IV
Planning
Classical Planning: Definition of Classical Planning, Algorithms for Planning with State-
Space Search, Planning Graphs, other Classical Planning Approaches, Analysis of Planning
approaches.
Planning and Acting in the Real World: Time, Schedules, and Resources, Hierarchical
Planning, Planning and Acting in Nondeterministic Domains, Multi agent Planning.

UNIT - V
Uncertain knowledge and Learning
Uncertainty: Acting under Uncertainty, Basic Probability Notation, Inference Using Full
Joint Distributions, Independence, Bayes’ Rule and Its Use,
Probabilistic Reasoning: Representing Knowledge in an Uncertain Domain, The Semantics
Of Bayesian Networks, Efficient Representation of Conditional Distributions, Approximate


Inference in Bayesian Networks, Relational and First-Order Probability, Other Approaches to


Uncertain Reasoning; Dempster-Shafer theory.
Learning: Forms of Learning, Supervised Learning, Learning Decision Trees.Knowledge in
Learning: Logical Formulation of Learning, Knowledge in Learning, Explanation-Based
Learning, Learning Using Relevance Information, Inductive Logic Programming.

TEXT BOOKS
1. Artificial Intelligence A Modern Approach, Third Edition, Stuart Russell and Peter
Norvig, Pearson Education.

REFERENCES:
1. Artificial Intelligence, 3rd Edn., E. Rich and K. Knight (TMH)
2. Artificial Intelligence, 3rd Edn., Patrick Henry Winston, Pearson Education.
3. Artificial Intelligence, Shivani Goel, Pearson Education.
4. Artificial Intelligence and Expert systems – Patterson, Pearson Education


LECTURE NOTES

2.1 Adversarial Search

Adversarial search is search in settings where we must plan ahead in a world in which other agents are planning against us.

o In previous topics, we studied search strategies that involve only a single agent trying to find a solution, often expressed as a sequence of actions.
o However, there are situations in which more than one agent searches for a solution in the same search space; this commonly occurs in game playing.
o An environment with more than one agent is called a multi-agent environment, in which each agent is an opponent of the others and plays against them. Each agent must consider the actions of the other agents and the effect of those actions on its own performance.
o Searches in which two or more players with conflicting goals explore the same search space for a solution are called adversarial searches, often known as games.
o Games are modeled as a search problem together with a heuristic evaluation function; these are the two main factors that help to model and solve games in AI.

Types of Games in AI:

                          Deterministic                 Chance moves
  Perfect information     Chess, Checkers, Go,          Backgammon, Monopoly
                          Othello
  Imperfect information   Battleship, blind             Bridge, Poker, Scrabble,
                          tic-tac-toe                   nuclear war
o Perfect information: A game with perfect information is one in which agents can see the complete board. Agents have all the information about the game, and they can also see each other's moves. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If agents do not have all the information about the game and are not aware of what is going on, such games are called games with imperfect information, e.g., blind tic-tac-toe, Battleship, Bridge.
o Deterministic games: Deterministic games follow a strict pattern and set of rules, and there is no randomness associated with them. Examples are Chess, Checkers, Go, tic-tac-toe, etc.
o Non-deterministic games: Non-deterministic games have various unpredictable events and a factor of chance or luck, introduced by dice or cards. They are random, and the outcome of each action is not fixed. Such games are also called

as stochastic games.
Example: Backgammon, Monopoly, Poker, etc.

2.2 Optimal Decision in Games

A game can be formally defined as a kind of search problem with the following elements (a code sketch of this interface follows the list):

o Initial state: It specifies how the game is set up at the start.
o Player(s): It specifies which player has the move in a state.
o Action(s): It returns the set of legal moves in a state.
o Result(s, a): It is the transition model, which specifies the result of a move in a state.
o Terminal-Test(s): The terminal test is true if the game is over and false otherwise. States where the game has ended are called terminal states.
o Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p; it is also called the payoff function. For Chess, the outcomes are a win, loss, or draw, with payoff values +1, 0, and ½. For tic-tac-toe, the utility values are +1, -1, and 0.
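A minimal sketch of this formalization as a Python interface is given below. The class and method names are illustrative assumptions for these notes, not part of the textbook definition.

class Game:
    """Abstract formalization of a two-player game (illustrative sketch)."""

    def initial_state(self):
        """Return the starting configuration of the game."""
        raise NotImplementedError

    def player(self, state):
        """Return which player (MAX or MIN) has the move in this state."""
        raise NotImplementedError

    def actions(self, state):
        """Return the set of legal moves in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from applying the action."""
        raise NotImplementedError

    def terminal_test(self, state):
        """Return True if the game is over in this state."""
        raise NotImplementedError

    def utility(self, state, player):
        """Numeric payoff for the player in a terminal state (e.g. +1, 0, 1/2)."""
        raise NotImplementedError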

Game tree:

A game tree is a tree whose nodes are game states and whose edges are the moves made by the players. A game tree uses the initial state, the Actions function, and the Result function.

Example: Tic-tac-toe game tree. The following figure shows part of the game tree for tic-tac-toe. Key points of the game:

o There are two players MAX and MIN.


o Players take alternate turns, starting with MAX.
o MAX maximizes the result of the game tree
o MIN minimizes the result.

Example explanation:
o From the initial state, MAX has 9 possible moves, as MAX goes first. MAX places x and MIN places o, and the players alternate until we reach a leaf node where one player has three in a row or all squares are filled.
o Both players compute, for each node, the minimax value: the best achievable utility against an optimal adversary.
o Suppose both players know tic-tac-toe well and play optimally; each player does their best to prevent the other from winning, so MIN acts against MAX.
o In the game tree there is a layer for MAX and a layer for MIN, and each layer is called a ply. MAX places x, then MIN places o to prevent MAX from winning, and the game continues until a terminal node is reached.
o In the end, either MIN wins, MAX wins, or it is a draw. This game tree is the whole search space of possibilities when MIN and MAX play tic-tac-toe, taking turns alternately.

2.3 Mini-Max Algorithm in Artificial Intelligence


o The mini-max algorithm is a recursive (backtracking) algorithm used in decision-making and game theory. It provides an optimal move for a player assuming that the opponent also plays optimally.
o The mini-max algorithm uses recursion to search through the game tree.
o The min-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
o In this algorithm two players play the game; one is called MAX and the other is called MIN.
o Both players fight it out so that the opponent gets the minimum benefit while they get the maximum benefit.
o The two players are opponents of each other: MAX selects the maximized value and MIN selects the minimized value.
o The minimax algorithm performs a depth-first exploration of the complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree and then backs the values up the tree as the recursion unwinds (a code sketch is given below).
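As a concrete illustration, here is a minimal recursive minimax in Python. It assumes a game object with the interface sketched earlier (actions, result, terminal_test, utility); this is only a sketch under those assumptions, not code from the textbook.

def minimax_decision(game, state):
    """Return the move for MAX that maximizes the backed-up minimax value."""
    return max(game.actions(state),
               key=lambda a: min_value(game, game.result(state, a)))

def max_value(game, state):
    """Value of a state when it is MAX's turn to move."""
    if game.terminal_test(state):
        return game.utility(state, 'MAX')   # assumes utilities are given from MAX's viewpoint
    return max(min_value(game, game.result(state, a))
               for a in game.actions(state))

def min_value(game, state):
    """Value of a state when it is MIN's turn to move."""
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    return min(max_value(game, game.result(state, a))
               for a in game.actions(state))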

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to obtain the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.

Step 2: First we find the utility values for the maximizer. Its initial value is -∞, so we compare each terminal value with the maximizer's initial value and take the maximum:
o For node D: max(-1, 4) = 4
o For node E: max(2, 6) = 6
o For node F: max(-3, -5) = -3
o For node G: max(0, 7) = 7



Step 3: Next it is the minimizer's turn, so it compares the node values with +∞ and finds the third-layer node values:
o For node B: min(4, 6) = 4
o For node C: min(-3, 7) = -3

Step 4: Now it is the maximizer's turn again; it chooses the maximum of all node values and determines the value of the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be many more layers.
o For node A: max(4, -3) = 4

Properties of the Mini-Max algorithm:

o Complete – The min-max algorithm is complete: it will definitely find a solution (if one exists) in a finite search tree.
o Optimal – The min-max algorithm is optimal if both opponents play optimally.
o Time complexity – Since it performs DFS on the game tree, the time complexity of min-max is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
o Space complexity – The space complexity of mini-max is the same as for DFS, which is O(bm).

Limitation of the minimax Algorithm:


The main drawback of the minimax algorithm is that it becomes very slow for complex games such as Chess and Go. Such games have a huge branching factor, and the player has many choices to consider.

2.4 Alpha-Beta Pruning

o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
o As we saw with the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, hence the name alpha-beta pruning. It is also called the alpha-beta algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only leaves but entire subtrees.
o The two parameters are defined as:
  a. Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
  b. Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
Alpha-beta pruning returns the same move as the standard minimax algorithm, but it removes the nodes that do not really affect the final decision and only slow the algorithm down. By pruning these nodes, it makes the algorithm fast.

Key points about alpha-beta pruning:

o The MAX player only updates the value of alpha.
o The MIN player only updates the value of beta.
o While backtracking up the tree, node values (not the alpha and beta values) are passed to the upper nodes.
o Only the alpha and beta values are passed down to the child nodes. (A code sketch combining these rules is given below.)
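Combining the points above, here is a minimal alpha-beta sketch in Python, again assuming the hypothetical game interface used earlier; it is not code from the textbook.

import math

def alpha_beta_search(game, state):
    """Return the best move for MAX using alpha-beta pruning (sketch)."""
    best_move, best_value = None, -math.inf
    for a in game.actions(state):
        v = min_value_ab(game, game.result(state, a), -math.inf, math.inf)
        if v > best_value:
            best_value, best_move = v, a
    return best_move

def max_value_ab(game, state, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    v = -math.inf
    for a in game.actions(state):
        v = max(v, min_value_ab(game, game.result(state, a), alpha, beta))
        if v >= beta:           # MIN already has a better option elsewhere
            return v            # prune the remaining successors
        alpha = max(alpha, v)   # only MAX updates alpha
    return v

def min_value_ab(game, state, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state, 'MAX')
    v = math.inf
    for a in game.actions(state):
        v = min(v, max_value_ab(game, game.result(state, a), alpha, beta))
        if v <= alpha:          # MAX already has a better option elsewhere
            return v            # prune the remaining successors
        beta = min(beta, v)     # only MIN updates beta
    return v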

Working of Alpha-Beta Pruning:

Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: The MAX player starts with the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.


Step 2: At node D it is MAX's turn, so the value of α is calculated. α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value is also 3.

Step 3: The algorithm now backtracks to node B, where the value of β changes because it is MIN's turn. β = +∞ is compared with the available successor value: min(∞, 3) = 3, so at node B we now have α = -∞ and β = 3.

Next, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down.

Step 4: At node E, MAX takes its turn and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α ≥ β, the right successor of E is pruned and the algorithm does not traverse it; the value at node E is 5.

Step 5: Next, the algorithm backtracks from node B to node A. At node A the value of alpha is changed to the maximum available value, 3, since max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is compared with the left child, 0: max(3, 0) = 3; then with the right child, 1: max(3, 1) = 3. α remains 3, but the node value of F becomes 1.


Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; the value of beta is now changed: min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α ≥ β, so the next child of C, which is G, is pruned and the algorithm does not compute the entire subtree under G.

Step 8: C returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never computed. Hence the optimal value for the maximizer is 3 in this example.

Move Ordering in Alpha-Beta pruning:

The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined.
Move order is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases the alpha-beta pruning algorithm does not prune any leaves of the tree and works exactly like the minimax algorithm. It then also consumes more time

because of the alpha-beta bookkeeping; such an ordering is called worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an order is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. Since we apply DFS, it searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Rules to find good ordering:

Following are some rules to find good ordering in alpha-beta pruning:

o Make the best move come from the shallowest node.
o Order the nodes in the tree so that the best nodes are checked first.
o Use domain knowledge when finding the best move; e.g., for Chess, try the order: captures first, then threats, then forward moves, then backward moves.
o We can bookkeep (cache) the states, as there is a possibility that states may repeat.

2.5 Constraint satisfaction problems (CSP)

A constraint satisfaction problem (CSP) is defined by a set of variables X1, X2, ..., Xn and a set of constraints C1, C2, ..., Cm. Each variable Xi has a nonempty domain Di of possible values. A complete assignment is one in which every variable is mentioned, and a solution to a CSP is a complete assignment that satisfies all the constraints. Some CSPs also require a solution that maximizes an objective function.

Some examples of CSPs are:

The n-queens problem
A crossword puzzle
A map coloring problem

Constraint graph: A CSP is usually represented as an undirected graph, called a constraint graph, where the nodes are the variables and the edges are the binary constraints.

A CSP can be given an incremental formulation as a standard search problem as follows:

Initial state: the empty assignment { }, in which all variables are unassigned.
Successor function: assign a value to an unassigned variable, provided that it
does not conflict with previously assigned variables.
Goal test: the current assignment is complete.


Path cost: a constant cost for every step. (A small map-coloring encoding in this spirit is sketched below.)
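As a small illustration, a map-coloring CSP can be encoded directly as variables, domains, and binary constraints. The dictionary-based encoding below (the region names, colors, and the consistent helper) is only an assumed sketch for these notes:

# A tiny map-coloring CSP in the style of the Australia example; names are illustrative.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}

# Binary constraints: adjacent regions must receive different colors.
neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}

def consistent(var, value, assignment):
    """A value is consistent if no already-assigned neighbor has the same color."""
    return all(assignment.get(n) != value for n in neighbors[var])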

Discrete variables:
1) Finite domains: For n variables, each with a finite domain of size d, the number of complete assignments is O(d^n). A complete assignment is possible, e.g., map-coloring problems.

2) Infinite domains: n variables with infinite domains such as strings or integers, e.g., the set of strings or the set of integers.

Continuous variables: Linear constraints are solvable in polynomial time by linear programming, e.g., start/end times for Hubble Space Telescope observations.

Types of Constraints:

1. Unary constraints, which restrict the value of a single variable.
2. Binary constraints, which involve pairs of variables.
3. Higher-order constraints, which involve three or more variables.

2.6 Backtracking Search for CSP’s

Backtracking search is a form of depth-first search that chooses a value for one variable at a time and backtracks when a variable has no legal values left to assign. The algorithm is shown in the figure below.

function BACKTRACKING-SEARCH(csp) returns a solution, or failure
    return RECURSIVE-BACKTRACKING({ }, csp)

function RECURSIVE-BACKTRACKING(assignment, csp) returns a solution, or failure
    if assignment is complete then return assignment
    var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
    for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
        if value is consistent with assignment according to CONSTRAINTS[csp] then
            add {var = value} to assignment
            result ← RECURSIVE-BACKTRACKING(assignment, csp)
            if result ≠ failure then return result


            remove {var = value} from assignment
    return failure

Figure: A simple backtracking algorithm for CSPs
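The same algorithm as a runnable Python sketch, reusing the map-coloring encoding (variables, domains, consistent) assumed earlier; the helper names are illustrative:

def backtracking_search(variables, domains, consistent):
    """Depth-first search that assigns one variable at a time and backtracks (sketch)."""
    def backtrack(assignment):
        if len(assignment) == len(variables):
            return assignment                                   # complete assignment found
        var = next(v for v in variables if v not in assignment) # select an unassigned variable
        for value in domains[var]:                              # order domain values
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]                             # undo the assignment and backtrack
        return None                                             # failure
    return backtrack({})

# Example usage with the map-coloring CSP sketched earlier:
# solution = backtracking_search(variables, domains, consistent)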

Propagating information through constraints

Forward checking: The key steps of the forward checking process are:

Keep track of the remaining legal values for unassigned variables.
Terminate the search when any variable has no legal values left.

This method propagates information from assigned to unassigned variables, but it does not provide early detection of all failures.

Constraint propagation:

Constraint propagation repeatedly enforces constraints locally to detect inconsistencies. This propagation can be done with different types of consistency techniques:

1. Node consistency (one-consistency): The node representing a variable V in the constraint graph is node-consistent if, for every value X in the current domain of V, each unary constraint on V is satisfied. Node inconsistency can be eliminated simply by removing from the domain D of each variable those values that do not satisfy the unary constraints on V.
2. Arc consistency (two-consistency): An arc refers to a directed arc in the constraint graph. Different versions of arc-consistency algorithms exist, from AC-1 up to AC-7, but AC-3 and AC-4 are the most frequently used (a sketch of AC-3 follows this list).
3. Path consistency (three-consistency): An algorithm for making a constraint graph strongly three-consistent, usually referred to as path consistency, ensures that the problem can be solved without backtracking.
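A compact sketch of the widely used AC-3 arc-consistency procedure is given below. It assumes binary constraints supplied as a predicate constraint_ok(Xi, x, Xj, y) and mutable list domains; these names are assumptions for this sketch, not a fixed API.

from collections import deque

def ac3(variables, domains, neighbors, constraint_ok):
    """Enforce arc consistency; return False if some domain becomes empty (sketch)."""
    queue = deque((xi, xj) for xi in variables for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint_ok):
            if not domains[xi]:
                return False                       # inconsistency detected
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))         # re-check arcs pointing into xi
    return True

def revise(domains, xi, xj, constraint_ok):
    """Remove values of xi that have no supporting value in xj."""
    revised = False
    for x in list(domains[xi]):
        if not any(constraint_ok(xi, x, xj, y) for y in domains[xj]):
            domains[xi].remove(x)
            revised = True
    return revised

# For the map-coloring CSP above:
# ac3(variables, domains, neighbors, lambda xi, x, xj, y: x != y)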

Handling special constraints

1. Alldiff constraint: All the variables involved must have distinct values.


2. Resource constraint: A higher-order or atmost constraint, in which consistency is achieved by deleting the maximum value of any domain if it is not consistent with the minimum values of the other domains.
Intelligent backtracking

Chronological backtracking: when a branch of the search fails, back up to the preceding variable and try a different value for it; the most recent decision point is revisited.

Conflict directed backjumping: It backtracks directly to the source of the problem.

2.7 Local search for CSP

Local search using the min-conflicts heuristic has been applied to constraint satisfaction
problems with great success. They use a complete-state formulation: the initial state assigns
a value to every variable, and the successor function usually works by changing the value
of one variable at a time.

In choosing a new value for a variable, the most obvious heuristic is to select the value that
results in the minimum number of conflicts with other variables—the min-conflicts
heuristic.

Min-conflicts is surprisingly effective for many CSPs, particularly when given a reasonable initial state. Remarkably, on the n-queens problem, if you don't count the initial placement of queens, the runtime of min-conflicts is roughly independent of problem size.

function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
    inputs: csp, a constraint satisfaction problem
            max_steps, the number of steps allowed before giving up
    current ← an initial complete assignment for csp
    for i = 1 to max_steps do


        if current is a solution for csp then return current
        var ← a randomly chosen, conflicted variable from VARIABLES[csp]
        value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
        set var = value in current
    return failure

Figure: The MIN-CONFLICTS algorithm for solving CSP
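A runnable sketch of the same idea in Python, assuming (as in the map-coloring encoding above) that every binary constraint requires neighboring variables to take different values; the function names are illustrative:

import random

def min_conflicts(variables, domains, neighbors, max_steps=10_000):
    """Local search for a CSP using the min-conflicts heuristic (sketch)."""
    # Start from a complete (possibly conflicting) random assignment.
    current = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if count_conflicts(v, current[v], current, neighbors) > 0]
        if not conflicted:
            return current                          # solution found
        var = random.choice(conflicted)
        # Choose the value that minimizes the number of conflicts with neighbors.
        current[var] = min(domains[var],
                           key=lambda val: count_conflicts(var, val, current, neighbors))
    return None                                     # give up after max_steps

def count_conflicts(var, value, assignment, neighbors):
    """Count neighbors that would violate the 'different values' constraint."""
    return sum(1 for n in neighbors[var] if assignment.get(n) == value)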

2.8 The Structure of problems

The complexity of solving a CSP is strongly related to the structure of its constraint graph. If the CSP can be divided into independent subproblems, each subproblem is solved independently and the solutions are then combined. When n variables are divided into n/c subproblems of c variables each, each subproblem takes O(d^c) work to solve, so the total work is O(d^c · n/c).

Any tree-structured CSP can be solved in time linear in the number of variables. The algorithm has the following steps:

1. Choose any variable as the root of the tree and order the variables from the root to the leaves so that every node's parent precedes it in the ordering. Label the variables X1, ..., Xn in order; every variable except the root has exactly one parent variable.
2. For j from n down to 2, apply arc consistency to the arc (Xi, Xj), where Xi is the parent of Xj, removing values from DOMAIN[Xi] as necessary.
3. For j from 1 to n, assign any value for Xj consistent with the value assigned for Xi, where Xi is the parent of Xj.

The complete algorithm runs in time O(nd^2).
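A sketch of this tree-CSP procedure in Python. It assumes the variables are already given in a root-first (topological) order, a parent mapping for every non-root variable, and a binary-constraint predicate constraint_ok; all of these names are assumptions for the sketch.

def tree_csp_solver(order, parent, domains, constraint_ok):
    """Solve a tree-structured CSP; `order` lists the variables root-first (sketch)."""
    n = len(order)
    # Pass 1: make each arc (parent(Xj), Xj) consistent, working from the leaves upward.
    for j in range(n - 1, 0, -1):
        xj = order[j]
        xi = parent[xj]
        domains[xi] = [x for x in domains[xi]
                       if any(constraint_ok(xi, x, xj, y) for y in domains[xj])]
        if not domains[xi]:
            return None                             # no solution exists
    if not domains[order[0]]:
        return None
    # Pass 2: assign values root-first; any value consistent with the parent works.
    assignment = {order[0]: domains[order[0]][0]}
    for j in range(1, n):
        xj = order[j]
        xi = parent[xj]
        value = next((y for y in domains[xj]
                      if constraint_ok(xi, assignment[xi], xj, y)), None)
        if value is None:
            return None
        assignment[xj] = value
    return assignment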

General constraint graphs can be reduced to trees in two ways:

1. Removing nodes – cutset conditioning
2. Collapsing nodes together – tree decomposition

2.9 Propositional Logic:

Knowledge-Based Agents


o An intelligent agent needs knowledge about the real world in order to make decisions and reason, so that it can act efficiently.
o Knowledge-based agents are agents that can maintain an internal state of knowledge, reason over that knowledge, update the knowledge after observations, and take actions. These agents can represent the world with a formal representation and act intelligently.
o Knowledge-based agents are composed of two main parts:
  o a knowledge base, and
  o an inference system.

A knowledge-based agent must be able to do the following:

o represent states, actions, etc.;
o incorporate new percepts;
o update the internal representation of the world;
o deduce hidden properties of the world;
o deduce appropriate actions.

The architecture of knowledge-based agent:

The diagram above shows a generalized architecture for a knowledge-based agent. The knowledge-based agent (KBA) takes input from the environment by perceiving it. The input is taken by the inference engine of the agent, which also communicates with the KB to make decisions according to the knowledge stored there. The learning element of the KBA regularly updates the KB by learning new knowledge.


Knowledge base: The knowledge base is a central component of a knowledge-based agent; it is also known as the KB. It is a collection of sentences (here 'sentence' is a technical term and is not identical to a sentence in English). These sentences are expressed in a language called a knowledge representation language. The knowledge base of a KBA stores facts about the world.

Why use a knowledge base?

A knowledge base is required so that an agent can update its knowledge, learn from experience, and act according to that knowledge.

Inference system

Inference means deriving new sentences from old ones. The inference system allows us to add a new sentence to the knowledge base. A sentence is a proposition about the world. The inference system applies logical rules to the KB to deduce new information.

The inference system generates new facts so that the agent can update the KB. An inference system mainly works with two rules, which are given as:

o Forward chaining
o Backward chaining

Operations Performed by KBA

Following are the three operations performed by a KBA in order to exhibit intelligent behavior:

1. TELL: This operation tells the knowledge base what it perceives from the environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. Perform: It performs the selected action.

A generic knowledge-based agent:

Following is the outline of a generic knowledge-based agent program:

function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action ← ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t ← t + 1
    return action

The knowledge-based agent takes percept as input and returns an action as output. The agent maintains
the knowledge base, KB, and it initially has some background knowledge of the real world. It also has a
counter to indicate the time for the whole process, and this counter is initialized with zero.


Each time the function is called, it performs three operations:

o First, it TELLs the KB what it perceives.
o Second, it ASKs the KB what action it should take.
o Third, the agent program TELLs the KB which action was chosen.

MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent perceived the given percept at the given time.

MAKE-ACTION-QUERY generates a sentence that asks which action should be done at the current time.

MAKE-ACTION-SENTENCE generates a sentence asserting that the chosen action was executed.

Various levels of knowledge-based agent:

A knowledge-based agent can be viewed at different levels which are given below:

1. Knowledge level

The knowledge level is the first level of a knowledge-based agent. At this level we specify what the agent knows and what the agent's goals are; with these specifications we can fix its behavior. For example, suppose an automated taxi agent needs to go from station A to station B and it knows the way from A to B; this belongs to the knowledge level.

2. Logical level:

At this level we consider how the knowledge is represented and stored. Sentences are encoded into different logics; at the logical level, knowledge is encoded into logical sentences. At the logical level we can expect the automated taxi agent to reach destination B.

3. Implementation level:

This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions according to the logical and knowledge levels. At this level the automated taxi agent actually implements its knowledge and logic so that it can reach the destination.

Approaches to designing a knowledge-based agent:

There are mainly two approaches to build a knowledge-based agent:

1. Declarative approach: We can create a knowledge-based agent by starting with an empty knowledge base and telling the agent all the sentences with which we want it to start. This approach is called the declarative approach.


2. Procedural approach: In the procedural approach, we directly encode the desired behavior as program code; we write a program that already encodes the desired behavior of the agent.

However, in the real world, a successful agent can be built by combining both declarative and
procedural approaches, and declarative knowledge can often be compiled into more efficient
procedural code.

2.10 Wumpus world:

The Wumpus world is a simple world used to illustrate the worth of a knowledge-based agent and to demonstrate knowledge representation. It was inspired by the video game Hunt the Wumpus by Gregory Yob (1973).

The Wumpus world is a cave with 4 x 4 rooms connected by passageways, so there are 16 rooms in total, and a knowledge-based agent explores this world. The cave has a room containing a beast called the Wumpus, which eats anyone who enters that room. The Wumpus can be shot by the agent, but the agent has only a single arrow. Some rooms contain bottomless pits; if the agent falls into a pit, it is stuck there forever. The exciting thing about this cave is that one room contains a heap of gold. The agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the Wumpus. The agent receives a reward if it comes out with the gold and a penalty if it is eaten by the Wumpus or falls into a pit.

Following is a sample diagram representing the Wumpus world. It shows some rooms with pits, one room with the Wumpus, and the agent at square [1, 1] of the world.

Exploring the Wumpus world:

Now we will explore the Wumpus world and will determine how the agent will find its goal by applying
logical reasoning.

Agent's First step:


Initially, the agent is in the first room, square [1,1], and we already know that this room is safe for the agent. To mark the room as safe in diagram (a) below, we add the symbol OK. Symbol A represents the agent, B the breeze, G glitter or gold, V a visited room, P a pit, and W the Wumpus.

In room [1,1] the agent perceives no breeze and no stench, which means the adjacent squares are also OK.

Agent's second Step:

Now the agent needs to move forward, so it will move to either [1,2] or [2,1]. Suppose the agent moves to room [2,1]. In this room the agent perceives a breeze, which means a pit is adjacent. The pit can be in [3,1] or [2,2], so we add the symbol P? to mark a possible pit.

The agent will now stop and think and will not make any risky move; it goes back to room [1,1]. Rooms [1,1] and [2,1] have been visited by the agent, so we use the symbol V to mark the visited squares.

Agent's third step:

In the third step, the agent moves to room [1,2], which is OK. In room [1,2] the agent perceives a stench, which means there must be a Wumpus nearby. But the Wumpus cannot be in room [1,1], by the rules of the game, and it cannot be in [2,2] either (the agent detected no stench when it was at [2,1]). Therefore the agent infers that the Wumpus is in room [1,3]. In the current state there is no breeze, which means [2,2] has no pit and no Wumpus, so it is safe; we mark it OK and the agent moves on to [2,2].


Agent's fourth step:

At room [2,2] there is no stench and no breeze, so suppose the agent decides to move to [2,3]. At room [2,3] the agent perceives glitter, so it grabs the gold and then climbs out of the cave.

2.11 Propositional logic in Artificial intelligence

Propositional logic (PL) is the simplest form of logic, in which all statements are made of propositions. A proposition is a declarative statement that is either true or false. It is a technique of knowledge representation in logical and mathematical form.

Example:
a) It is Sunday.
b) The Sun rises in the West. (false proposition)
c) 3 + 3 = 7 (false proposition)
d) 5 is a prime number.

Following are some basic facts about propositional logic:

o Propositional logic is also called Boolean logic, as it works on 0 and 1.
o In propositional logic we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
o A proposition can be either true or false, but not both.
o Propositional logic consists of propositions and logical connectives.
o These connectives are also called logical operators.
o Propositions and connectives are the basic elements of propositional logic.
o A connective is a logical operator that connects two sentences.


o A propositional formula that is always true is called a tautology; it is also called a valid sentence.
o A propositional formula that is always false is called a contradiction.
o A propositional formula that can take both true and false values is called a contingency.
o Statements that are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", "What is your name?", are not propositions.

Syntax of propositional logic:

The syntax of propositional logic defines the allowable sentences for the knowledge representation. There
are two types of Propositions:

 Atomic propositions: Atomic propositions are simple propositions. An atomic proposition consists of a single proposition symbol. These are sentences that must be either true or false.

Example:

a) 2 + 2 = 4 is an atomic proposition, and it is a true fact.
b) "The Sun is cold" is also an atomic proposition, and it is a false fact.

 Compound propositions: Compound propositions are constructed by combining simpler, atomic propositions using parentheses and logical connectives.

Example:

a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives:

Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:

1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.
2. Conjunction: A sentence with the ∧ connective, such as P ∧ Q, is called a conjunction.
   Example: "Rohan is intelligent and hardworking." It can be written as
   P = Rohan is intelligent, Q = Rohan is hardworking → P ∧ Q.
3. Disjunction: A sentence with the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
   Example: "Ritika is a doctor or an engineer."
   Here P = Ritika is a doctor and Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q is called an implication. Implications are also known as if-then rules. It can be represented as


If it is raining, then the street is wet.
   Let P = It is raining and Q = The street is wet; it is represented as P → Q.
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: "If I am breathing, then I am alive."
   P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.

Following is the summarized table for Propositional Logic Connectives:

Truth Table:

In propositional logic we need to know the truth values of propositions in all possible scenarios. We can combine all possible combinations of truth values using logical connectives, and the representation of these combinations in tabular format is called a truth table. Following are the truth tables for all logical connectives:
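The individual tables are not reproduced in these notes; the compact combined table below covers the five connectives for propositions P and Q:

P      Q      ¬P     P ∧ Q   P ∨ Q   P → Q   P ⇔ Q
True   True   False  True    True    True    True
True   False  False  False   True    False   False
False  True   True   False   True    True    False
False  False  True   False   False   True    True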


2.12 Propositional Theorem Proving: Inference and proofs

Inference in First-Order Logic


Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences. Before
understanding the FOL inference rule, let's understand some basic terminologies used in FOL.

Substitution:

Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference systems in first-order logic. Substitution becomes more complex in the presence of quantifiers in FOL. Writing F[a/x] means substituting the constant "a" in place of the variable "x" in F.

Equality:

First-order logic does not only use predicates and terms for making atomic sentences; it also provides another mechanism, equality. We can use the equality symbol to specify that two terms refer to the same object.

Example: Brother(John) = Smith.

As in the above example, the object referred to by Brother(John) is the same as the object referred to by Smith. The equality symbol can also be used with negation to state that two terms are not the same object.

Example: ¬(x=y) which is equivalent to x ≠y.

FOL inference rules for quantifier:

As in propositional logic, we also have inference rules in first-order logic. Following are some basic inference rules in FOL:

o Universal Generalization
o Universal Instantiation
o Existential Instantiation
o Existential introduction

1. Universal Generalization:

o Universal generalization is a valid inference rule which states that if premise P(c) is true for any arbitrary element c in the universe of discourse, then we can conclude ∀x P(x).

o It can be represented as: from P(c), for an arbitrary c, infer ∀x P(x).


o This rule can be used if we want to show that every element has a similar property.
o In this rule, x must not appear as a free variable.

Example: Let's represent, P(c): "A byte contains 8 bits", so for ∀ x P(x) "All bytes contain 8 bits.", it
will also be true.

2. Universal Instantiation:


o Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
o The new KB is logically equivalent to the previous KB.
o As per UI, we can infer any sentence obtained by substituting a ground term for the variable.
o The UI rule states that from ∀x P(x) we can infer any sentence P(c), obtained by substituting a ground term c (a constant within the domain of x) for the variable, for any object in the universe of discourse.

o It can be represented as: from ∀x P(x), infer P(c).

Example 1:

If "Every person likes ice-cream", i.e. ∀x P(x), then we can infer that "John likes ice-cream", i.e. P(John).

Example: 2.

Let's take a famous example,

"All kings who are greedy are Evil." So let our knowledge base contains this detail as in the form of FOL:

∀x king(x) ∧ greedy (x) → Evil (x),

So from this information, we can infer any of the following statements using Universal Instantiation:

o King(John) ∧ Greedy (John) → Evil (John),


o King(Richard) ∧ Greedy (Richard) → Evil (Richard),
o King(Father(John)) ∧ Greedy (Father(John)) → Evil (Father(John)),

3. Existential Instantiation:

o Existential instantiation, also called existential elimination, is a valid inference rule in first-order logic.
o It can be applied only once to replace an existential sentence.
o The new KB is not logically equivalent to the old KB, but it is satisfiable if the old KB was satisfiable.
o This rule states that one can infer P(c) from a formula of the form ∃x P(x), for a new constant symbol c.
o The restriction is that the c used in the rule must be a new term for which P(c) is true.

o It can be represented as: from ∃x P(x), infer P(c) for a new constant symbol c.

Example:

From the given sentence: ∃x Crown(x) ∧ OnHead(x, John),


So we can infer: Crown(K) ∧ OnHead( K, John), as long as K does not appear in the knowledge base.

o The above used K is a constant symbol, which is called Skolem constant.


o The Existential instantiation is a special case of Skolemization process.

4. Existential introduction

o Existential introduction, also known as existential generalization, is a valid inference rule in first-order logic.
o This rule states that if there is some element c in the universe of discourse which has a property P, then we can infer that there exists something in the universe which has the property P.

o It can be represented as: from P(c), infer ∃x P(x).

o Example:
"Priyanka got good marks in English."
"Therefore, someone got good marks in English."

Generalized Modus Ponens Rule:

For the inference process in FOL, we have a single inference rule called Generalized Modus Ponens; it is a lifted version of Modus Ponens.

Generalized Modus Ponens can be summarized as: "P implies Q and P is asserted to be true, therefore Q must be true."

For atomic sentences pi, pi' and q, where there is a substitution θ such that SUBST(θ, pi') = SUBST(θ, pi) for all i, the rule can be represented as:

    p1', p2', ..., pn',   (p1 ∧ p2 ∧ ... ∧ pn ⇒ q)
    -------------------------------------------------
                      SUBST(θ, q)
Example:

We will use this rule for "kings who are greedy are evil": we find some x such that x is a king and x is greedy, and infer that x is evil.

1. Here, p1' is King(John) and p1 is King(x)
2. p2' is Greedy(y) and p2 is Greedy(x)
3. θ is {x/John, y/John} and q is Evil(x)
4. SUBST(θ, q) is Evil(John).

2.13 Proof by resolution:


Resolution

Resolution is a theorem-proving technique that proceeds by building refutation proofs, i.e., proofs by contradiction. It was invented by the mathematician John Alan Robinson in 1965.

Resolution is used when various statements are given and we need to prove a conclusion from those statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule that can efficiently operate on the conjunctive normal (clausal) form.

Clause: A disjunction of literals (atomic sentences or their negations) is called a clause. A clause containing a single literal is known as a unit clause.

Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to be in conjunctive normal form (CNF).

The resolution inference rule:

The resolution rule for first-order logic is simply a lifted version of the propositional rule. Resolution can resolve two clauses if they contain complementary literals; the clauses are assumed to be standardized apart so that they share no variables. The rule is:

    l1 ∨ ... ∨ lk,          m1 ∨ ... ∨ mn
    -----------------------------------------------------------------------------
    SUBST(θ, l1 ∨ ... ∨ li-1 ∨ li+1 ∨ ... ∨ lk ∨ m1 ∨ ... ∨ mj-1 ∨ mj+1 ∨ ... ∨ mn)

where UNIFY(li, ¬mj) = θ, i.e., li and mj are complementary literals.

This rule is also called the binary resolution rule because it resolves exactly two literals.

Example:

We can resolve the two clauses given below:

[Animal(g(x)) V Loves(f(x), x)]  and  [¬Loves(a, b) V ¬Kills(a, b)]

The two complementary literals are Loves(f(x), x) and ¬Loves(a, b).

These literals can be unified with the unifier θ = {a/f(x), b/x}, and resolving them generates the resolvent clause:

[Animal(g(x)) V ¬Kills(f(x), x)].

Steps for Resolution:


1. Convert the facts into first-order logic.
2. Convert the FOL statements into CNF.
3. Negate the statement to be proved (proof by contradiction).
4. Draw the resolution graph (using unification).


To better understand all the above steps, we will take an example in which we will apply resolution.

Example:
a. John likes all kinds of food.
b. Apples and vegetables are food.
c. Anything anyone eats and is not killed by is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats.
Prove by resolution that:
f. John likes peanuts.

Step-1: Conversion of Facts into FOL

In the first step we convert all the given statements into first-order logic (the implication forms below correspond to the CNF conversion that follows):

a. ∀x food(x) → likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y eats(x, y) Λ ¬killed(x) → food(y)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀x eats(Anil, x) → eats(Harry, x)
f. ∀x ¬killed(x) → alive(x)   (added background knowledge)
g. ∀x alive(x) → ¬killed(x)   (added background knowledge)
h. likes(John, Peanuts)   (the conclusion to be proved)

Step-2: Conversion of FOL into CNF

In first-order logic resolution, it is required to convert the FOL statements into CNF, because the CNF form makes resolution proofs easier.

o Eliminate all implications (→) and rewrite:
a. ∀x ¬food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬[eats(x, y) Λ ¬killed(x)] V food(y)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬eats(Anil, x) V eats(Harry, x)
f. ∀x ¬[¬killed(x)] V alive(x)
g. ∀x ¬alive(x) V ¬killed(x)
h. likes(John, Peanuts)
o Move negation (¬) inwards and rewrite:
a. ∀x ¬food(x) V likes(John, x)

b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬eats(x, y) V killed(x) V food(y)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀x ¬eats(Anil, x) V eats(Harry, x)
f. ∀x killed(x) V alive(x)
g. ∀x ¬alive(x) V ¬killed(x)
h. likes(John, Peanuts)
o Rename (standardize) variables:
a. ∀x ¬food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀y ∀z ¬eats(y, z) V killed(y) V food(z)
d. eats(Anil, Peanuts) Λ alive(Anil)
e. ∀w ¬eats(Anil, w) V eats(Harry, w)
f. ∀g killed(g) V alive(g)
g. ∀k ¬alive(k) V ¬killed(k)
h. likes(John, Peanuts)
o Eliminate existential quantifiers.
In this step we would eliminate existential quantifiers (∃); this process is known as Skolemization. In this example problem there is no existential quantifier, so all the statements remain the same in this step.
o Drop universal quantifiers.
In this step we drop all universal quantifiers, since all the statements are implicitly universally quantified and we do not need to write them. Conjunctions are also split into separate clauses:
a. ¬food(x) V likes(John, x)
b. food(Apple)
c. food(vegetables)
d. ¬eats(y, z) V killed(y) V food(z)
e. eats(Anil, Peanuts)
f. alive(Anil)
g. ¬eats(Anil, w) V eats(Harry, w)
h. killed(g) V alive(g)
i. ¬alive(k) V ¬killed(k)
j. likes(John, Peanuts)

o Distribute conjunction (∧) over disjunction (∨).
This step makes no change in this problem.

Step-3: Negate the statement to be proved

In this step we apply negation to the conclusion statement, which is written as ¬likes(John, Peanuts).

Step-4: Draw Resolution graph:


Now in this step, we solve the problem using a resolution tree with substitution. For the above problem, the tree is as follows.

Hence the negation of the conclusion leads to a contradiction with the given set of statements, completing the proof.

Explanation of Resolution graph:

o In the first step of the resolution graph, ¬likes(John, Peanuts) and likes(John, x) get resolved (canceled) by the substitution {Peanuts/x}, and we are left with ¬food(Peanuts).
o In the second step of the resolution graph, ¬food(Peanuts) and food(z) get resolved (canceled) by the substitution {Peanuts/z}, and we are left with ¬eats(y, Peanuts) V killed(y).
o In the third step of the resolution graph, ¬eats(y, Peanuts) and eats(Anil, Peanuts) get resolved by the substitution {Anil/y}, and we are left with killed(Anil).
o In the fourth step of the resolution graph, killed(Anil) and ¬killed(k) get resolved by the substitution {Anil/k}, and we are left with ¬alive(Anil).
o In the last step of the resolution graph, ¬alive(Anil) and alive(Anil) get resolved, producing the empty clause.

2.14 Horn clauses and definite clauses :

The definite clause language does not allow a contradiction to be stated. However, a simple expansion of
the language can allow proof by contradiction.
An integrity constraint is a clause of the form


false←a1∧...∧ak.

where the ai are atoms and false is a special atom that is false in all interpretations.
A Horn clause is either a definite clause or an integrity constraint. That is, a Horn clause has
either false or a normal atom as its head.
Integrity constraints allow the system to prove that some conjunction of atoms is false in all models of a
knowledge base - that is, to prove disjunctions of negations of atoms. Recall that ¬p is the negation of p,
which is true in an interpretation when p is false in that interpretation, and p∨q is
the disjunction of p and q, which is true in an interpretation if p is true or q is true or both are true in the
interpretation. The integrity constraint false←a1∧...∧ak is logically equivalent to ¬a1∨...∨¬ak.
A Horn clause knowledge base can imply negations of atoms, as shown in Example 5.16.
Example 5.16: Consider the knowledge base KB1:

false←a∧b.
a←c.
b←c.

The atom c is false in all models of KB1. If c were true in some model I of KB1, then a and b would both be true in I (otherwise I would not be a model of KB1). Because false is false in I while a and b are true in I, the first clause is false in I, a contradiction to I being a model of KB1. Thus, c is false in all models of KB1. This is expressed as

KB1 ⊨ ¬c

which means that ¬c is true in all models of KB1, and so c is false in all models of KB1.

2.15 Forward and backward chaining

A. Forward Chaining

Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

The forward-chaining algorithm starts from the known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining:

o It is a bottom-up approach, as it moves from the bottom (known facts) to the top (conclusions).
o It is a process of drawing conclusions based on known facts or data, starting from the initial state and reaching the goal state.
o The forward-chaining approach is also called data-driven, because we reach the goal using the available data.
o The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in business and production rule systems. (A small code sketch of propositional forward chaining is given below.)
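A minimal sketch of forward chaining over propositional definite clauses in Python. The rule representation (a list of (premises, conclusion) pairs) and the function name are assumptions for these notes, not a standard API:

def forward_chaining(rules, facts, goal):
    """Repeatedly fire rules whose premises are all known, until nothing changes (sketch).
    rules: list of (premises, conclusion) pairs; facts: set of known atoms."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)               # the rule fires; add its conclusion
                changed = True
                if conclusion == goal:
                    return True
    return goal in known

# Illustrative usage with hypothetical rules (not the crime example below):
example_rules = [(["A", "B"], "C"), (["C"], "D")]
print(forward_chaining(example_rules, {"A", "B"}, "D"))   # prints True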


Forward chaining proof:

(The proof below uses the standard "weapons crime" example: it is a crime for an American to sell weapons to hostile nations; Country A, an enemy of America, owns some missiles, all of which were sold to it by Robert, who is American; missiles are weapons, and an enemy of America counts as hostile. The goal is to prove Criminal(Robert).)

Step-1:

In the first step we start with the known facts and choose the sentences that have no implications, such as American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). All these facts are represented below.

Step-2:

In the second step we consider the facts that can be inferred from the available facts whose premises are satisfied.

Rule (1) does not yet have its premises satisfied, so it is not added in the first iteration.

Rules (2) and (3) are already added.

Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of rules (2) and (3).

Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from rule (7).

Step-3:

At step 3, rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement.


Hence it is proved that Robert is a criminal using the forward chaining approach.

B. Backward Chaining:

Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward-chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.

Properties of backward chaining:

o It is known as a top-down approach.
o Backward chaining is based on the modus ponens inference rule.
o In backward chaining, the goal is broken into sub-goals to prove the facts true.
o It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
o The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
o The backward-chaining method mostly uses a depth-first search strategy for proof.

Backward-chaining proof:

In backward chaining, we start with our goal predicate, which is Criminal(Robert), and then infer further rules.

Step-1:

In the first step we take the goal fact, and from the goal fact we infer other facts, which we finally prove true. Our goal fact is "Robert is a criminal", so the following is its predicate.

Step-2:

In the second step, we infer other facts from the goal fact that satisfy the rules. As we can see in Rule (1), the goal predicate Criminal(Robert) is present with the substitution {Robert/p}, so we


will add all the conjunctive facts below the first level and replace p with Robert.

Here we can see that American(Robert) is a fact, so it is proved at this point.

Step-3:

At step 3 we extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule (5). Weapon(q) is also true with the substitution of the constant T1 for q.

Step-4:

At step 4 we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule (4) with the substitution of A in place of r. So these two statements are proved here.

Step-5:

At step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule-(6). And hence all the statements are proved true using the backward-chaining approach.
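
The goal-driven procedure can be sketched in the same illustrative style. In this minimal Python sketch the rules are written with the substitutions already applied (as worked out in the steps above), so each goal maps to the list of sub-goals that would prove it; the names known, rules and prove are invented for the illustration, and a real backward chainer would perform unification instead of using ground sentences.

# Goal-driven (backward-chaining) sketch for the same example,
# using ground sentences with the substitutions already applied.
known = {"American(Robert)", "Enemy(A, America)", "Owns(A, T1)", "Missile(T1)"}

rules = {
    "Criminal(Robert)":     [["American(Robert)", "Weapon(T1)",
                              "Sells(Robert, T1, A)", "Hostile(A)"]],
    "Weapon(T1)":           [["Missile(T1)"]],
    "Sells(Robert, T1, A)": [["Missile(T1)", "Owns(A, T1)"]],
    "Hostile(A)":           [["Enemy(A, America)"]],
}

def prove(goal):
    """A goal holds if it is a known fact or if every sub-goal of some rule holds."""
    if goal in known:
        return True
    for body in rules.get(goal, []):        # try each rule whose head matches the goal
        if all(prove(subgoal) for subgoal in body):
            return True
    return False

print(prove("Criminal(Robert)"))            # prints True

The recursion mirrors the tree built in Steps 1-5: Criminal(Robert) breaks into its four sub-goals, each of which bottoms out in one of the known facts.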

Tutorial Questions
1. What is a local minima problem?
2. How does the alpha-beta pruning technique work?
3. What is the use of online search agents in unknown environments?
4. Specify the complexity of expectiminimax.
5. How to improve the effectiveness of a search-based problem-solving
technique?
6. What is a constraint satisfaction problem?
7. Differentiate greedy search with A* search.
8. Write short notes on the monotonicity and optimality of A* search.
9. Give example for effective branching factor.
10. Define relaxed problem.

Descriptive Questions

1) Explain about adversarial search

2) Explain about the zero-sum game in AI

3) Summarize about Minimax algorithm

4) Briefly explain about alpha beta pruning

5) Write about imperfect real-time decisions in AI

6) Explain CSP with map Coloring example

7) Summarize about Local Search for CSP

8) Explain about Wumpus world problem

9) Write about propositional logic

10) List out the rules of Inference

ASSIGNMENT QUESTIONS

1) Briefly explain about alpha-beta pruning

2) Write about imperfect real-time decisions in AI

3) Explain CSP with map Coloring example

4) Summarize about Local Search for CSP

5) Explain about Wumpus world problem

6) Write about propositional logic

7) List out the rules of Inference

8) Explain about Wumpus world problem

9) Write about propositional logic

10) List out the rules of Inference

AI TEST PAPERS

UNIT TEST –I SET : 1


AI

Max marks:10

I. Answer Any ONE Question.

1) a) Explain minimax algorithm?


b) Explain alpha-beta pruning?
2) a) Explain Knowledge based agent?
b) Explain the wumpus world problem

II. Fill in the blanks

1) Inference means ______________


2) Knowledge base is a collection of ________________.
3) Knowledge base takes input from ___________.
4) An agent can _____________the internal representation of the world
5) Knowledge based agents represent the ___________________
6) Knowledge-based agents maintain ________________ of knowledge
7) Backtracking search is a form of _____________________
8) Optimization technique for minimax algorithm is _______________
9) A game can be defined by the _______________________________.
10) Adversarial search is a search where we __________________.

SET : 2
UNIT TEST –I
AI

Max marks:10

I.Answer Any ONE Question.

1) a) Explain Horn clauses and definite clauses.


b) Explain Forward chaining with example.

2) a) Explain backward chaining with example


b) Explain propositional logic.

II. Fill in the blanks

1) In propositional logic, statements are made of ___________.


2) Atomic propositions consist of ______________
3) There are mainly ____________________connectives used in propositional logic
4) Resolution is a _____________________
5) Forward chaining is a form of ________________.
6) Forward chaining is a _____________
7) Forward chaining is a process of making a conclusion based on ___________________ _
8) ___________________ is an optimization technique for the minimax algorithm
9) Inference engine takes input from the __________________.
10) New sentences can be derived from old sentences it is called as _______________

SET : 3
UNIT TEST –I
AI
Max marks:10

I .Answer Any ONE Question.


1) a) Explain knowledge-based agents with a neat diagram
b) Explain the properties of Forward chaining.

2) a) Explain local search for CSP.


b) Explain the Structure of problems

II. Fill in the blanks


1) There are mainly ____________________connectives used in propositional logic
2) Resolution is a _____________________
3) Forward chaining is a form of ________________.
4) Forward chaining is a _____________
5) Forward chaining is a process of making a conclusion based on ___________________ _
6) Knowledge-based agents maintain ________________ of knowledge
7) Backtracking search is a form of _____________________
8) Optimization technique for minimax algorithm is _______________
9) A game can be defined by the _______________________________.
10) Adversarial search is a search where we __________________.

SET : 4
UNIT TEST –I
AI
Max marks:10

I. Answer Any ONE Question.

1) a) Explain Backward chaining with example?


b) Explain the knowledge based architecture?

2) a) Explain about local search for CSP.


b) Explain the minimax algorithm.

II. Fill in the blanks


1) Inference means ______________
2) Knowledge base is a collection of ________________.
3) Knowledge base takes input from ___________.
4) An agent can _____________the internal representation of the world
5) Knowledge based agents represent the ___________________
6) Forward chaining is a _____________
7) Forward chaining is a process of making a conclusion based on ___________________ _
8) ___________________ is an optimization technique for the minimax algorithm
9) Inference engine takes input from the __________________.
10) New sentences can be derived from old sentences it is called as _______________

NPTEL VIDEO LINKS

https://nptel.ac.in/courses/106/105/106105077/11

https://nptel.ac.in/courses/106/105/106105077/09

https://nptel.ac.in/courses/106/105/106105077/10

https://nptel.ac.in/courses/106/105/106105077/12

https://nptel.ac.in/courses/106/105/106105077/13

https://nptel.ac.in/courses/106/105/106105077/14

UNIVERSITY QUESTIONS

1) Explain adversarial search with example. JNTUH June 2018


2) Explain the Wumpus world problem. JNTUH May 2018
3) Explain the Minimax algorithm with example. JNTUH May 2017
4) Explain optimal decisions in game playing. JNTUH June 2016
5) Explain alpha-beta pruning with example. JNTUH June 2017

Question Bank:
Part A

Short Questions

1. What is a local minima problem?


2. How does the alpha-beta pruning technique work?
3. What is the use of online search agents in unknown environments?
4. Specify the complexity of expectiminimax.
5. How to improve the effectiveness of a search-based problem-solving technique?
6. What is a constraint satisfaction problem?
7. Differentiate greedy search with A* search.
8. Write short notes on the monotonicity and optimality of A* search.
9. Give example for effective branching factor.
10. Define relaxed problem.

PART B

1) Explain adversarial search with example.


2) Explain wumpus world problem.
3) Explain the Minimax algorithm with example
4) Explain Optimal decision in game playing
5) Explain alpha beta pruning with example
6) Define pruning.
7) State how the disadvantages of the minimax algorithm can be avoided.
8) Define alpha, beta cutoff with an example.
9) List the different types and applications of local search algorithm.
10) What is the need for memory-bounded heuristic search?

Objective questions

1. General games involve


a) Single-agent
b) Multi-agent
c) Neither Single-agent nor Multi-agent
d) Only Single-agent and Multi-agent
Answer: d
2. Adversarial search problems uses
a) Competitive Environment
b) Cooperative Environment
c) Neither Competitive nor Cooperative Environment
d) Only Competitive and Cooperative Environment

Answer: a

3. Mathematical game theory, a branch of economics, views any multi-agent environment as a game
provided that the impact of each agent on the others is “significant,” regardless of whether the agents
are cooperative or competitive.
a) True
b) False
Answer: a
4. Zero-sum games are the ones in which there are two agents whose actions must alternate and in
which the utility values at the end of the game are always the same.
a) True
b) False

Answer: b

5. Zero sum game has to be a ______ game.


a) Single player
b) Two player
c) Multiplayer
d) Three player

Answer: c

6. A game can be formally defined as a kind of search problem with the following
components:
a) Initial State
b) Successor Function

c) Terminal Test
d) All of the mentioned

Answer: d

7. The initial state and the legal moves for each side define the __________ for the game.
a) Search Tree
b) Game Tree
c) State Space Search
d) Forest

Answer: b

8. General algorithm applied on game tree for making decision of win/lose is ____________
a) DFS/BFS Search Algorithms
b) Heuristic Search Algorithms
c) Greedy Search Algorithms
d) MIN/MAX Algorithms

Answer: d

9. Which search is equal to minimax search but eliminates the branches that can’t influence the
final decision?
a) Depth-first search
b) Breadth-first search
c) Alpha-beta pruning
d) None of the mentioned

Answer: c

10. Which values are independent in the minimax search algorithm?


a) Pruned leaves x and y
b) Every state is dependent
c) Root is independent
d) None of the mentioned

Answer: a

Real-time Applications
Topic: Adversarial Search

Adversarial search is a search in which we examine the problems that arise when we try to plan
ahead of the world while other agents are planning against us.

o In previous topics, we have studied search strategies which are only associated
with a single agent that aims to find the solution, often expressed in the form
of a sequence of actions.
o But there might be some situations where more than one agent is searching for the
solution in the same search space, and this situation usually occurs in game playing.
o The environment with more than one agent is termed a multi-agent
environment, in which each agent is an opponent of the other agents and plays
against them. Each agent needs to consider the actions of the other agents and the effect
of those actions on its own performance.
o So, Searches in which two or more players with conflicting goals are trying
to explore the same search space for the solution, are called adversarial
searches, often known as Games.
o Games are modeled as a Search problem and heuristic evaluation function, and
these are the two main factors which help to model and solve games in AI.

Types of Games in AI:

                          Deterministic                      Chance Moves

Perfect information       Chess, Checkers, Go, Othello       Backgammon, Monopoly

Imperfect information     Battleships, blind, tic-tac-toe    Bridge, poker, scrabble, nuclear war

o Perfect information: A game with the perfect information is that in which agents
can look into the complete board. Agents have all the information about the game,
and they can see each other moves also. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game the agents do not have all the information about the
game and are not aware of what's going on, such games are called games with
imperfect information, such as tic-tac-toe, Battleship, blind, Bridge, etc.
o Deterministic games: Deterministic games are those games which follow a strict
pattern and set of rules for the games, and there is no randomness associated with
them. Examples are chess, Checkers, Go, tic-tac-toe, etc.

o Non-deterministic games: Non-deterministic games are those games which have various
unpredictable events and have a factor of chance or luck. This factor of chance or luck
is introduced by either dice or cards. These are random, and each action response is
not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.

Zero-Sum Game
o Zero-sum games are adversarial search which involves pure competition.
o In Zero-sum game each agent's gain or loss of utility is exactly balanced by the
losses or gains of utility of another agent.
o One player of the game tries to maximize one single value, while the other player tries to
minimize it.
o Each move by one player in the game is called a ply.
o Chess and tic-tac-toe are examples of a Zero-sum game.

Zero-sum game: Embedded thinking


The Zero-sum game involves embedded thinking in which one agent or player is trying to
figure out:

o What to do.
o How to decide the move
o Needs to think about his opponent as well
o The opponent also thinks what to do

Each of the players is trying to find out the response of his opponent to their actions. This
requires embedded thinking or backward reasoning to solve the game problems in AI.

Formalization of the problem:


A game can be defined as a type of search in AI which can be formalized of the
following elements:

o Initial state: It specifies how the game is set up at the start.


o Player(s): It specifies which player has the move in a given state.
o Action(s): It returns the set of legal moves in a state.
o Result(s, a): It is the transition model, which specifies the state that results from a
move.
o Terminal-Test(s): The terminal test is true if the game is over, and false otherwise.
The states where the game ends are called terminal states.

o Utility(s, p): A utility function gives the final numeric value for a game that ends in
terminal states s for player p. It is also called payoff function. For Chess, the
outcomes are a win, loss, or draw and its payoff values are +1, 0, ½. And for tic-tac-
toe, utility values are +1, -1, and 0.
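
These six elements can be pictured as an abstract interface that any concrete game fills in. The skeleton below is only a sketch whose method names mirror the elements listed above; it is not taken from any particular library, and a real game such as tic-tac-toe would subclass it and implement every method.

# Illustrative skeleton mirroring the formal elements of a game.
class Game:
    def initial_state(self):
        """How the game is set up at the start."""
        raise NotImplementedError

    def player(self, state):
        """Which player has the move in this state (e.g. 'MAX' or 'MIN')."""
        raise NotImplementedError

    def actions(self, state):
        """The set of legal moves available in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state produced by applying the action."""
        raise NotImplementedError

    def terminal_test(self, state):
        """True if the game is over in this state."""
        raise NotImplementedError

    def utility(self, state, player):
        """Numeric payoff for the player in a terminal state (e.g. +1, 0, 1/2)."""
        raise NotImplementedError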

Game tree:
A game tree is a tree where nodes of the tree are the game states and Edges of the tree are
the moves by players. Game tree involves initial state, actions function, and result Function.

Example: Tic-Tac-Toe game tree:

The following figure shows part of the game tree for the tic-tac-toe game. Following are
some key points of the game:

o There are two players MAX and MIN.


o Players have an alternate turn and start with MAX.
o MAX maximizes the result of the game tree
o MIN minimizes the result.

Real-time Application 2

Topic: Minimax algorithm

o Mini-max algorithm is a recursive or backtracking algorithm which is used in


decision-making and game theory. It provides an optimal move for the player
assuming that opponent is also playing optimally.
o Mini-Max algorithm uses recursion to search through the game-tree.
o The Min-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers,
tic-tac-toe, Go, and various other two-player games. This algorithm computes the minimax
decision for the current state.
o In this algorithm two players play the game, one is called MAX and other is called
MIN.
o Both players play against each other so that the opponent gets the minimum benefit while
they get the maximum benefit.
o Both Players of the game are opponent of each other, where MAX will select the
maximized value and MIN will select the minimized value.
o The minimax algorithm performs a depth-first search algorithm for the exploration of
the complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then
backtracks up the tree as the recursion unwinds (a minimal code sketch is given below).
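
The recursion just described can be shown on a tiny hand-built tree. In this illustrative sketch an internal node is simply a list of child nodes and a leaf is its terminal utility value; the tree and the function name minimax are made up for the example and are not tied to any particular game.

# Minimal minimax sketch: internal nodes are lists of children, leaves are utilities.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):          # terminal node: return its utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX moves at the root, MIN at the next level, and the leaves hold terminal values.
game_tree = [[-1, 4], [2, 6], [-3, -5]]
print(minimax(game_tree, True))                 # prints 2: MAX picks the middle branch

The MIN values of the three branches are -1, 2 and -5, so MAX backs up the value 2, which is exactly the compare-and-backtrack behaviour described above.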

Working of Min-Max Algorithm:


o The working of the minimax algorithm can be easily described using an example.
Below we have taken an example of game-tree which is representing the two-player
game.
o In this example, there are two players one is called Maximizer and other is called
Minimizer.
o Maximizer will try to get the Maximum possible score, and Minimizer will try to get
the minimum possible score.
o This algorithm applies DFS, so in this game-tree, we have to go all the way through
the leaves to reach the terminal nodes.
o At the terminal nodes, the terminal values are given, so we will compare those values
and backtrack the tree until the initial state occurs. Following are the main steps
involved in solving the two-player game tree:

Blooms Taxonomy

TOPIC: Alpha Beta pruning

1) What is alpha beta pruning?

Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization


technique for the minimax algorithm.

2) Explain alpha-beta pruning.

o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it not only
prunes the tree leaves but also entire sub-trees.
o The two-parameter can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point along the
path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the
path of Minimizer. The initial value of beta is +∞.
Alpha-beta pruning applied to a standard minimax algorithm returns the same move as
the standard algorithm does, but it removes all the nodes which do not really affect the
final decision and only make the algorithm slow. Hence, by pruning these nodes, it makes the
algorithm fast.

Key points about alpha-beta pruning:


o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.
o While backtracking the tree, the node values will be passed to upper nodes instead of values of alpha and
beta.
o We will only pass the alpha, beta values to the child nodes.
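
These key points can be made concrete with a short sketch in the same style as the minimax sketch above: the tree is again a hand-built list of lists, alpha starts at minus infinity, beta at plus infinity, and a branch is abandoned as soon as alpha >= beta. The code is illustrative only, not a production implementation.

# Alpha-beta sketch over a hand-built tree (lists are nodes, numbers are utilities).
import math

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):          # terminal node
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)           # only the MAX player updates alpha
            if alpha >= beta:                   # remaining children cannot change the decision
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)             # only the MIN player updates beta
            if alpha >= beta:
                break
        return value

game_tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(game_tree, -math.inf, math.inf, True))   # prints 3, the same move plain minimax returns

Here the second and third MIN branches are cut off after their first leaf, because their values can no longer exceed the alpha value of 3 already established at the root.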

TOPIC: Adversarial search

1) What is adversarial search?

Adversarial search is a search in which we examine the problems that arise when we try to plan
ahead of the world while other agents are planning against us.

2) Explain adversarial search.

o There might be some situations where more than one agent is searching for the solution in
the same search space, and this situation usually occurs in game playing.
o The environment with more than one agent is termed a multi-agent environment, in
which each agent is an opponent of the other agents and plays against them. Each
agent needs to consider the actions of the other agents and the effect of those actions on its
own performance.
o So, Searches in which two or more players with conflicting goals are trying to explore the
same search space for the solution, are called adversarial searches, often known as
Games.
o Games are modeled as a Search problem and heuristic evaluation function, and these are
the two main factors which help to model and solve games in AI.

Types of Games in AI:

                          Deterministic                      Chance Moves

Perfect information       Chess, Checkers, Go, Othello       Backgammon, Monopoly

Imperfect information     Battleships, blind, tic-tac-toe    Bridge, poker, scrabble, nuclear war
o Perfect information: A game with the perfect information is that in which agents can
look into the complete board. Agents have all the information about the game, and they
can see each other moves also. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game the agents do not have all the information about the game
and are not aware of what's going on, such games are called games with imperfect
information, such as tic-tac-toe, Battleship, blind, Bridge, etc.
o Deterministic games: Deterministic games are those games which follow a strict pattern
and set of rules for the games, and there is no randomness associated with them.
Examples are chess, Checkers, Go, tic-tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those games which have various
unpredictable events and have a factor of chance or luck. This factor of chance or luck is
introduced by either dice or cards. These are random, and each action response is not
fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
