
Unit 2

Game theory is about decision-making among rational agents. It’s the study
of strategic interaction where the outcome for each player depends not just
on their own actions, but also on the actions of others.
Games are usually intriguing because they are difficult to solve. Chess, for
example, has an average branching factor of around 35, and games
frequently stretch to 50 moves per player, so the search tree has
roughly 35^100, or about 10^154, nodes (even though the search graph has "only"
about 10^40 unique nodes). As a result, games, like the real world,
necessitate the ability to make some sort of decision even when
calculating the best option is impossible.
Inefficiency is also heavily punished in games. Whereas a half-efficient
implementation of A* search will merely take twice as long to complete, a
chess program that is half as efficient in using its available time will
almost certainly be beaten, all other factors being equal. As a
result of this pressure, a number of intriguing ideas for making the
best use of time have emerged.

Optimal Decision Making in Games

Let us start with games with two players, whom we’ll refer to as MAX and
MIN for obvious reasons. MAX is the first to move, and then they take
turns until the game is finished. At the conclusion of the game, the
victorious player receives points, while the loser receives penalties. A
game can be formalized as a type of search problem that has the
following elements:
 S0: The initial state of the game, which describes how it is set up at the
start.
 Player (s): Defines which player in a state has the move.
 Actions (s): Returns a state’s set of legal moves.
 Result (s, a): A transition model that defines a move’s outcome.
 Terminal-Test (s): A terminal test that returns true if the game is over
but false otherwise. Terminal states are those in which the game has
come to a conclusion.
 Utility (s, p): A utility function (also known as a payoff function or
objective function) determines the final numeric value for a game that
concludes in the terminal state s for player p. In chess, the outcome is a
win, a loss, or a draw, with values of +1, 0, or 1/2 respectively.
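To make this formalization concrete, here is a minimal Python sketch (illustrative only, not from the text) of these six elements for a deliberately tiny made-up game:

class CountingGame:
    """Two players alternately add 1 or 2 to a running total;
    whoever reaches exactly 3 wins (+1 for the winner, 0 for the loser)."""

    def initial_state(self):                  # S0
        return (0, "MAX")                     # (running total, player to move)

    def player(self, state):                  # Player(s)
        return state[1]

    def actions(self, state):                 # Actions(s): legal moves
        total, _ = state
        return [a for a in (1, 2) if total + a <= 3]

    def result(self, state, action):          # Result(s, a): transition model
        total, player = state
        return (total + action, "MIN" if player == "MAX" else "MAX")

    def terminal_test(self, state):           # Terminal-Test(s)
        return state[0] == 3

    def utility(self, state, player):         # Utility(s, p)
        # The player who just moved reached 3 and therefore wins.
        winner = "MIN" if state[1] == "MAX" else "MAX"
        return 1 if winner == player else 0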
Various techniques and approaches have been developed to enable AI
agents to make optimal decisions in games. Here are some key concepts
and methods:
Minimax Algorithm:

Minimax is a decision-making algorithm commonly used in two-player,
zero-sum games (e.g., chess, tic-tac-toe). It involves creating a game tree
that represents all possible moves and counter-moves for both players.
The algorithm seeks to minimize the maximum possible loss (hence the
name "minimax") by selecting the best move at each turn.

Alpha-Beta Pruning:

Alpha-beta pruning is an optimization technique used in conjunction with
the minimax algorithm. It reduces the number of nodes evaluated in the
game tree by eliminating branches that are guaranteed to be suboptimal.

Reinforcement Learning (RL):

RL involves training agents to make decisions by interacting with an
environment and receiving rewards based on their actions.

Evolutionary Algorithms:

Evolutionary algorithms involve generating and evolving a population of
candidate solutions over multiple generations.

Mini-Max Algorithm in Artificial Intelligence


 The mini-max algorithm is a recursive or backtracking algorithm which is used
in decision-making and game theory.
 The Mini-Max algorithm uses recursion to search through the game tree.
 The Min-Max algorithm is mostly used for game playing in AI, such as chess,
checkers, tic-tac-toe, Go, and various other two-player games. In this algorithm
two players play the game: one is called MAX and the other is called MIN.
 The two players compete, as the opponent gets the minimum benefit
while the player gets the maximum benefit.
 Both players of the game are opponents of each other, where MAX will
select the maximized value and MIN will select the minimized value.
 The minimax algorithm performs a depth-first search to explore the
complete game tree.
 The minimax algorithm proceeds all the way down to the terminal nodes of
the tree, then backs the values up the tree as the recursion unwinds.

Pseudo-code for MinMax Algorithm:


function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then                 // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)        // maximum of the values
        return maxEva

    else                                     // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)        // minimum of the values
        return minEva

Initial call:

minimax(node, 3, true)
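The pseudocode above can be turned into a small runnable Python function. The sketch below is illustrative only: it assumes the game tree is given as nested lists, with a plain number standing for a terminal node's static evaluation, and it reproduces the worked example that follows.

def minimax(node, depth, maximizing_player):
    """Minimax on a tree where an internal node is a list of children
    and a leaf is a number (its static evaluation)."""
    if depth == 0 or not isinstance(node, list):   # depth limit or terminal node
        return node
    if maximizing_player:
        return max(minimax(child, depth - 1, False) for child in node)
    else:
        return min(minimax(child, depth - 1, True) for child in node)

# The 4-layer tree from the worked example below:
# leaves (-1, 4), (2, 6), (-3, -5), (0, 7) under nodes D, E, F, G.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, 3, True))   # prints 4, the optimal value for MAX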

Working of Min-Max Algorithm:


 The working of the minimax algorithm can be easily described using an
example. Below we have taken an example game tree representing a
two-player game.
 In this example there are two players: one is called Maximizer and the other is
called Minimizer.
 Maximizer will try to get the maximum possible score, and Minimizer will
try to get the minimum possible score.
 The algorithm applies DFS, so in this game tree we have to go all the way
down to the leaves to reach the terminal nodes.
 At the terminal nodes, the terminal values are given, so we compare those
values and back them up the tree until the initial state is reached. The
main steps involved in solving the two-player game tree are as follows.

Step 1: In the first step, the algorithm generates the entire game tree and applies the
utility function to get the utility values for the terminal states. In the tree
diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first
turn, which has a worst-case initial value of -∞, and the minimizer takes the next
turn, which has a worst-case initial value of +∞.
Step 2: Now we first find the utility value for the Maximizer. Its initial value is -∞,
so we compare each terminal value with this initial value and determine the values
of the higher nodes, taking the maximum among them:
o For node D: max(-1, -∞) = -1, then max(-1, 4) = 4
o For node E: max(2, -∞) = 2, then max(2, 6) = 6
o For node F: max(-3, -∞) = -3, then max(-3, -5) = -3
o For node G: max(0, -∞) = 0, then max(0, 7) = 7
Step 3: In the next step it is the minimizer's turn, so it compares all node values
with +∞ and finds the values of the third-layer nodes:
o For node B: min(4, 6) = 4
o For node C: min(-3, 7) = -3
Step 4: Now it is the Maximizer's turn again, and it chooses the maximum of all
node values to find the value of the root node. In this game tree there are only
4 layers, so we reach the root node immediately, but in real games there will be
more than 4 layers:
o For node A: max(4, -3) = 4
Properties of Mini-Max algorithm:
 Complete- The Min-Max algorithm is complete. It will definitely find a solution
(if one exists) in a finite search tree.
 Optimal- The Min-Max algorithm is optimal if both opponents play
optimally.
 Time complexity- As it performs a DFS of the game tree, the time
complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of
the game tree and m is the maximum depth of the tree.
 Space Complexity- The space complexity of the Mini-max algorithm is also similar
to DFS, which is O(bm).
Limitation of the minimax Algorithm:
The main drawback of the minimax algorithm is that it gets really slow for
complex games such as chess and Go, because the full game tree is enormous.

What is Alpha-Beta Pruning?


 Alpha-beta pruning is a technique used to improve the efficiency of the
minimax algorithm by reducing the number of nodes that need to be
evaluated in a game tree. It is particularly useful in two-player, turn-based
games where the goal is to find the best possible move while assuming that
your opponent is also playing optimally.

 In minimax, every possible move and counter-move is evaluated, but many
of these moves don't affect the final decision. Alpha-beta pruning works by
eliminating branches that are guaranteed not to influence the outcome. By
doing so, the algorithm avoids unnecessary calculations, allowing it to
search deeper into the tree more quickly.

At each step of the algorithm, two values, alpha and beta, are used:

 Alpha: Represents the best value the maximizing player (the one trying to
win) can guarantee so far.
 Beta: Represents the best value the minimizing player (the one trying to
make the opponent lose) can guarantee so far.
As the algorithm explores different branches of the game tree, if it finds that a
particular branch can’t improve the final outcome for either player, it “prunes” that
branch, meaning it stops further evaluation of that part of the tree.

Condition for Alpha-Beta Pruning


The core idea behind alpha-beta pruning is based on the values of alpha and beta.
These values represent the best scores for the maximizing and minimizing players,
and they help in deciding whether or not a branch of the game tree can be pruned.

Alpha (α):
 Alpha represents the best score that the maximizing player can guarantee
at a given level or above. The maximizing player is always looking for the
highest possible score.
Beta (β):
 Beta represents the best score that the minimizing player can guarantee at
a given level or below. The minimizing player aims to minimize the score,
meaning they are looking for the lowest possible score.

Condition for Pruning:

Alpha-beta pruning occurs when we find a situation where:

 The minimizing player finds a value that is lower than or equal to alpha (β ≤
α). In this case, there’s no need to explore further because the minimizing
player will not let the maximizing player choose that branch, so we can
prune it.
 Similarly, the maximizing player can prune a branch when they find a value
higher than or equal to beta (α ≥ β), as the maximizing player will never
choose a move that results in a worse score.

Key Steps:

1. The algorithm starts by checking if it has reached a terminal node or the
maximum depth. If so, it returns the heuristic value of that node.
2. If it’s the maximizing player’s turn, it tries to maximize the score. For each
child node, it updates alpha, and if beta becomes smaller or equal to alpha,
it prunes the rest of the tree.
3. If it’s the minimizing player’s turn, it tries to minimize the score. Similarly,
beta is updated, and if alpha becomes larger or equal to beta, the algorithm
prunes the tree.

Key Points in Alpha-Beta Pruning


 Prunes Unnecessary Branches: Alpha-beta pruning cuts off branches in the
game tree that won’t affect the final decision, saving time without losing
accuracy.
 Works for Both Players: It works for both the maximizing player (trying to
get the highest score) and the minimizing player (trying to reduce the
score).
 Move Order Matters: The effectiveness of pruning depends on the order in
which moves are evaluated. If better moves are checked first, more
branches can be pruned early.
 Faster than Minimax: By skipping unnecessary parts of the tree, alpha-beta
pruning reduces computation time compared to the standard minimax
algorithm.
 Same Result as Minimax: Even though it skips some calculations, alpha-
beta pruning still arrives at the same optimal result as the minimax
algorithm.

In-Depth Example:
Let’s walk through an example where we have a game tree that represents a series
of moves in a turn-based game (like chess or tic-tac-toe). The goal is for the
maximizing player to find the best possible move while the minimizing player tries
to counter it.
Imagine this tree:

1. Initial Setup:

The maximizing player starts at the root node A and wants to maximize their score.
Each branch represents a different move the player could make.

2. Evaluating Node B:

 First, we evaluate node B. Let’s say B leads to two possible
outcomes: D and E.
 Alpha and beta start at -∞ and +∞, respectively. These values represent the
worst-case scenarios for both players.
3. Exploring Node D:

 We evaluate node D first. Assume D gives a score of 3.


 Since this is the first node being evaluated, alpha is updated to 3 because
the maximizing player now knows they can guarantee at least a score of 3 if
they choose this path.
4. Exploring Node E:

 Next, the algorithm evaluates node E. Suppose E returns a score of 5.


 Since 5 is greater than 3, alpha is updated to 5, meaning the maximizing
player can now guarantee a score of 5 if they pick this branch.
5. Evaluating Node C:

 After evaluating all of B‘s children, the algorithm moves to node C.


 Now, the minimizing player is active, meaning beta is updated based on the
children of C.
 First, the algorithm evaluates node F. Assume F returns a score of 2. Since 2
is less than beta (which is +∞ initially), beta is updated to 2.
6. Pruning Node G:

 Now, we look at the next node, G. But before evaluating G, the algorithm
checks the alpha and beta values.
 At this point, alpha is 5 (from node E) and beta is 2 (from node F). Since
alpha is greater than beta (5 > 2), the algorithm knows that the maximizing
player will never choose this branch because they can already guarantee a
better score by choosing the path through B.
 Therefore, the algorithm prunes node G, meaning it doesn’t bother
evaluating it because it won’t impact the final decision.
Working of Alpha-Beta Pruning:

Let's take an example of a two-player search tree to understand the working
of alpha-beta pruning.

Step 1: At the first step, the Max player starts the first move from node A,
where α = -∞ and β = +∞. These values of alpha and beta are passed down to
node B, where again α = -∞ and β = +∞, and node B passes the same values
to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The
value of α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes
the value of α at node D; the node value is also 3.
Step 3: Now the algorithm backtracks to node B, where the value of β changes,
as this is Min's turn. Now β = +∞ is compared with the available
subsequent node value, i.e. min(∞, 3) = 3; hence at node B we now have α = -∞
and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is
node E, and the values α = -∞ and β = 3 are passed down as well.

Step 4: At node E, Max takes its turn, and the value of alpha changes.
The current value of alpha is compared with 5, so max(-∞, 5) = 5;
hence at node E, α = 5 and β = 3. Since α ≥ β, the right successor of E is
pruned and the algorithm does not traverse it; the value at node E is 5.

Step 5: At the next step, the algorithm again backtracks the tree, from node B to
node A. At node A, the value of alpha changes; the maximum
available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now
passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared with the left child,
which is 0, giving max(3, 0) = 3, and then with the right child, which is 1,
giving max(3, 1) = 3. So α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞;
here the value of beta changes, as it is compared with 1, so min(∞, 1) = 1.
Now at C, α = 3 and β = 1, and again the condition α ≥ β is satisfied, so the
next child of C, which is G, is pruned, and the algorithm does not
compute the entire sub-tree of G.
Step 8: C now returns the value 1 to A. Here the best value for A is max(3,
1) = 3. The final game tree shows which nodes were computed and which
were never computed. Hence the optimal value for the maximizer is 3 in this
example.
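The step-by-step trace above corresponds to the following minimal Python sketch (illustrative only). It uses the same nested-list tree representation as the earlier minimax sketch; the pruned leaves (the right child of E and both children of G) are never examined, so their values below are arbitrary placeholders.

def alphabeta(node, depth, alpha, beta, maximizing_player):
    """Minimax with alpha-beta pruning on nested-list trees."""
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing_player:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # cut-off: MIN will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:        # cut-off: MAX will never choose this branch
                break
        return value

# Tree from the trace above: D = (2, 3), E = (5, ?), F = (0, 1), G = (?, ?).
# The leaves marked ? are pruned, so 9, 7 and 5 below are placeholder values.
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, 3, float("-inf"), float("inf"), True))   # prints 3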
Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the order in
which each node is examined. Move order is an important aspect of alpha-
beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does
not prune any of the leaves of the tree and works exactly like the minimax
algorithm. In this case it also consumes more time because of the alpha-
beta bookkeeping; such an ordering is called worst ordering. Here the
best move occurs on the right side of the tree. The time
complexity for such an ordering is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs
when a lot of pruning happens in the tree and the best moves occur on the
left side of the tree. We apply DFS, so it searches the left of the tree first
and can go twice as deep as the minimax algorithm in the same amount of time.
The complexity with ideal ordering is O(b^(m/2)).
What is a Constraint Satisfaction Problem (CSP)?

A Constraint Satisfaction Problem is a mathematical problem where the
solution must meet a number of constraints. In a CSP, the objective is to
assign values to variables such that all the constraints are satisfied. CSPs
are used extensively in artificial intelligence for decision-making problems
where resources must be managed or arranged within strict guidelines.

Common applications of CSPs include:


 Scheduling: Assigning resources like employees or equipment while
respecting time and availability constraints.
 Planning: Organizing tasks with specific deadlines or sequences.
 Resource Allocation: Distributing resources efficiently without overuse.

Components of Constraint Satisfaction Problems


CSPs are composed of three key elements:
1. Variables: Variables are the things that need to be determined; they are
the objects that must have values assigned to them in order to satisfy a
particular set of constraints. Variables can be of various types, such as
Boolean, integer, or categorical. For instance, variables could stand for the
puzzle cells that need to be filled with numbers in a Sudoku puzzle.

2. Domains: The range of potential values that a variable can take is
represented by its domain. Depending on the problem, a domain may be
finite or infinite.

3. Constraints: The rules that govern how variables relate to one
another are known as constraints. Constraints in a CSP restrict the
combinations of values that variables may take.

Types of Constraint Satisfaction Problems

1. Binary CSPs: In these problems, each constraint involves only two
variables. For example, in a scheduling problem, a constraint could
specify that task A must be completed before task B.
2. Non-Binary CSPs: These problems have constraints that involve more
than two variables. For instance, in a seating arrangement problem, a
constraint could state that three particular people cannot sit next to each other.

3. Hard and Soft Constraints: Hard constraints must be strictly satisfied,
while soft constraints can be violated, but at a certain cost. This
distinction is often used in real-world applications where not all
constraints are equally important.

Representation of Constraint Satisfaction Problems (CSP)

In Constraint Satisfaction Problems (CSP), the solution process
involves the interaction of variables, domains, and constraints. Below is a
structured representation of how a CSP is formulated:
1. Finite Set of Variables (V1, V2, …, Vn):
The problem consists of a set of variables, each of which needs to be
assigned a value that satisfies the given constraints.
2. Non-Empty Domain for Each Variable (D1, D2, …, Dn):
Each variable has a domain, a set of possible values that it can take.
For example, in a Sudoku puzzle, the domain of each cell could be the numbers 1 to 9.
3. Finite Set of Constraints (C1, C2, …, Cm):
Constraints restrict the possible values that variables can take. Each
constraint defines a rule or relationship between variables.
4. Constraint Representation:
Each constraint Ci is represented as a pair <scope, relation>,
where:
 Scope: The set of variables involved in the constraint.
 Relation: A list of valid combinations of variable values that satisfy
the constraint.
5. Example:
Let’s say you have two variables V1 and V2. A possible constraint
could be V1 ≠ V2, which means the values assigned to these
variables must not be equal.
 Detailed Explanation:
o Scope: The variables V1 and V2.
o Relation: A list of valid value combinations where V1 is not
equal to V2.
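A minimal Python sketch (illustrative only) of this <scope, relation> representation for the V1 ≠ V2 example might look as follows:

# Variables and their domains.
variables = ["V1", "V2"]
domains = {"V1": {1, 2, 3}, "V2": {1, 2, 3}}

# One constraint as a (scope, relation) pair: the relation lists every
# allowed combination of values for the variables in the scope.
scope = ("V1", "V2")
relation = {(a, b) for a in domains["V1"] for b in domains["V2"] if a != b}
constraints = [(scope, relation)]

def satisfies(assignment, scope, relation):
    """True if the assignment of the scope variables is an allowed combination."""
    return tuple(assignment[v] for v in scope) in relation

print(satisfies({"V1": 1, "V2": 2}, scope, relation))   # True
print(satisfies({"V1": 2, "V2": 2}, scope, relation))   # False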

CSP Algorithms: Solving Constraint Satisfaction Problems

1. Backtracking Algorithm
The backtracking algorithm is a depth-first search method used to
systematically explore possible solutions in CSPs. It operates by
assigning values to variables and backtracks if any assignment violates a
constraint.

How it works:
 The algorithm selects a variable and assigns it a value.
 It recursively assigns values to subsequent variables.
 If a conflict arises (i.e., a variable cannot be assigned a valid value), the
algorithm backtracks to the previous variable and tries a different value.
 The process continues until either a valid solution is found or all
possibilities have been exhausted.
This method is widely used due to its simplicity but can be inefficient for
large problems with many variables.

2. Forward-Checking Algorithm
The forward-checking algorithm is an enhancement of the backtracking
algorithm that aims to reduce the search space by applying local
consistency checks.

How it works:
 For each unassigned variable, the algorithm keeps track of remaining
valid values.
 Once a variable is assigned a value, local constraints are applied to
neighboring variables, eliminating inconsistent values from their
domains.
 If a neighbor has no valid values left after forward-checking, the
algorithm backtracks.
This method is more efficient than pure backtracking because it prevents
some conflicts before they happen, reducing unnecessary computations.

3. Constraint Propagation Algorithms


Constraint propagation algorithms further reduce the search space by
enforcing local consistency across all variables.

How it works:
 Constraints are propagated between related variables.
 Inconsistent values are eliminated from variable domains by leveraging
information gained from other variables.
 These algorithms refine the search space by making inferences,
removing values that would lead to conflicts.
Constraint propagation is commonly used in conjunction with other CSP
algorithms, such as backtracking, to increase efficiency by narrowing
down the solution space early in the search process.
Introduction to Constraint Propagation
Constraint propagation is a fundamental concept in constraint satisfaction
problems (CSPs). A CSP involves variables that must be assigned values
from a given domain while satisfying a set of constraints. Constraint
propagation aims to simplify these problems by reducing the domains of
variables, thereby making the search for solutions more efficient.

Key Concepts

1. Variables: Elements that need to be assigned values.


2. Domains: Possible values that can be assigned to the variables.
3. Constraints: Rules that define permissible combinations of values for
the variables.

How Constraint Propagation Works


Constraint propagation works by iteratively narrowing down the domains
of variables based on the constraints. This process continues until no
more values can be eliminated from any domain. The primary goal is to
reduce the search space and make it easier to find a solution.

Steps in Constraint Propagation


1. Initialization: Start with the initial domains of all variables.
2. Propagation: Apply constraints to reduce the domains of variables.
3. Iteration: Repeat the propagation step until a stable state is reached,
where no further reduction is possible.
Example
Consider a simple CSP with two variables, X and Y, each with domains {1,
2, 3}, and a constraint X ≠ Y. Constraint propagation will iteratively reduce
the domains as follows:
 If X is assigned 1, then Y cannot be 1, so Y's domain becomes {2, 3}.
 If Y is then assigned 2, X cannot be 2, so X's domain is reduced to {1,
3}.
 This process continues until a stable state is reached.
Applications of Constraint Propagation
Constraint propagation is widely used in various AI applications. Some
notable areas include:
 Scheduling
In scheduling problems, tasks must be assigned to time slots without
conflicts. Constraint propagation helps by reducing the possible time slots
for each task based on constraints like availability and dependencies.
 Planning
AI planning involves creating a sequence of actions to achieve a goal.
Constraint propagation simplifies the planning process by reducing the
possible actions at each step, ensuring that the resulting plan satisfies all
constraints.
 Resource Allocation
In resource allocation problems, resources must be assigned to tasks in a
way that meets all constraints, such as capacity limits and priority rules.

Implementing Constraint Propagation in AI


In this section, we are going to implement constraint propagation using
Python. This example demonstrates a basic constraint satisfaction
problem (CSP) solver using arc consistency. We'll create a CSP for a
simple problem, such as assigning colors to a map (the map coloring
problem), ensuring that no adjacent regions share the same color. A
condensed code sketch follows the step outline below.
Let's say we have a map with four regions (A, B, C, D) and we need to
assign one of three colors (Red, Green, Blue) to each region. The
constraint is that no two adjacent regions can have the same color.

Step 1: Import Required Libraries


Import the necessary libraries for visualization and graph handling.

Step 2: Define the CSP Class


Define a class to represent the Constraint Satisfaction Problem (CSP) and
its related methods.

Step 3: Consistency Check Method


Method to check if an assignment is consistent by ensuring all constraints
are satisfied.
Step 4: AC-3 Algorithm Method
Method to enforce arc consistency using the AC-3 algorithm.
Step 5: Revise Method
Method to revise the domain of a variable to satisfy the constraint
between two variables.

Step 6: Backtracking Search Method


Method to perform backtracking search to find a solution to the CSP.

Step 7: Select Unassigned Variable Method


Method to select an unassigned variable using a simple heuristic.

Step 8: Constraint Function


Define the constraint function to ensure no two adjacent regions have the
same color.

Step 9: Visualization Function


Function to visualize the solution using matplotlib and networkx.
Step 10: Define Variables, Domains, and Neighbors
Define the variables, their domains, and their neighbors for the CSP.

Step 11: Create CSP Instance and Apply AC-3 Algorithm


Create an instance of the CSP class and apply the AC-3 algorithm for
constraint propagation.
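A condensed, illustrative sketch of these steps (omitting the visualization step, and assuming the adjacency A-B, A-C, B-C, B-D, C-D) is shown below; it is one possible realization, not the only one:

from collections import deque

def revise(domains, xi, xj):
    """Remove values from domains[xi] with no consistent value in domains[xj]."""
    revised = False
    for x in set(domains[xi]):
        if not any(x != y for y in domains[xj]):   # constraint: neighbors differ
            domains[xi].remove(x)
            revised = True
    return revised

def ac3(domains, neighbors):
    """Enforce arc consistency; return False if some domain becomes empty."""
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj):
            if not domains[xi]:
                return False
            for xk in neighbors[xi] - {xj}:
                queue.append((xk, xi))
    return True

def backtrack(assignment, domains, neighbors):
    """Depth-first assignment of colors, consistent with already-colored neighbors."""
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value}, domains, neighbors)
            if result:
                return result
    return None

# Map-coloring instance: four regions, three colors, adjacent regions must differ.
domains = {r: {"Red", "Green", "Blue"} for r in "ABCD"}
neighbors = {"A": {"B", "C"}, "B": {"A", "C", "D"},
             "C": {"A", "B", "D"}, "D": {"B", "C"}}

if ac3(domains, neighbors):
    print(backtrack({}, domains, neighbors))
# e.g. {'A': 'Red', 'B': 'Green', 'C': 'Blue', 'D': 'Red'}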

Advantages and Limitations

Advantages
1. Efficiency: Reduces the search space, making it easier to find
solutions.
2. Scalability: Can handle large problems by breaking them down into
smaller subproblems.
3. Flexibility: Applicable to various types of constraints and domains.
Limitations
1. Computational Cost: Higher levels of consistency can be
computationally expensive.
2. Incomplete Propagation: May not always reduce the domains enough
to find a solution directly.

Backtracking Search

Backtracking search is a depth-first search algorithm that incrementally
builds candidates for the solution, abandoning a candidate (backtracking)
as soon as it determines that the candidate cannot possibly be completed
to a valid solution.
Steps in Backtracking

1. Initialization: Start with an empty assignment.


2. Selection: Choose an unassigned variable.
3. Assignment: Assign a value to the chosen variable.
4. Consistency Check: Check if the current assignment is consistent with
the constraints.
5. Recursion: If the assignment is consistent, recursively try to assign
values to the remaining variables.
6. Backtrack: If the assignment is not consistent, or if further assignments
do not lead to a solution, undo the last assignment (backtrack) and try
the next possible value.

Implementing Backtracking Search Algorithm to solve CSP


Here's an outline of a Python implementation of a backtracking search algorithm to
solve a simple CSP, the N-Queens problem; a sketch of these functions follows the steps below.
Step 1: Define "is_safe" function
 This function checks if it's safe to place a queen at the
position board[row][col].
Step 2: Define the solve_n_queens Function
 This function attempts to solve the N-Queens problem by placing
queens one column at a time.
 It uses recursion to place queens and backtracks if a solution cannot be
found.
Step 3: Define the print_board Function
 This function prints the board configuration with queens placed.
Step 4: Define the n_queens Function
 This function initializes the board and calls
the solve_n_queens function to solve the problem.
 If a solution is found, it prints the board. Otherwise, it indicates that no
solution exists.
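One possible realization of these four functions is sketched below (illustrative only):

def is_safe(board, row, col, n):
    """Check that no queen already placed in an earlier column attacks (row, col)."""
    for c in range(col):
        if board[row][c]:                                        # same row
            return False
        if row - (col - c) >= 0 and board[row - (col - c)][c]:   # upper diagonal
            return False
        if row + (col - c) < n and board[row + (col - c)][c]:    # lower diagonal
            return False
    return True

def solve_n_queens(board, col, n):
    """Place queens one column at a time, backtracking on failure."""
    if col == n:                       # all queens placed
        return True
    for row in range(n):
        if is_safe(board, row, col, n):
            board[row][col] = 1
            if solve_n_queens(board, col + 1, n):
                return True
            board[row][col] = 0        # undo and try the next row (backtrack)
    return False

def print_board(board):
    for row in board:
        print(" ".join("Q" if cell else "." for cell in row))

def n_queens(n):
    board = [[0] * n for _ in range(n)]
    if solve_n_queens(board, 0, n):
        print_board(board)
    else:
        print("No solution exists")

n_queens(4)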

Optimization Techniques

1. Forward Checking: After assigning a value to a variable, eliminate
inconsistent values from the domains of the unassigned variables.
2. Constraint Propagation: Use algorithms like AC-3 (Arc Consistency 3)
to reduce the search space by enforcing constraints locally.
3. Heuristics: Employ heuristics such as MRV (Minimum Remaining
Values) and LCV (Least Constraining Value) to choose the next variable
to assign and the next value to try.

Advantages

1. Simplicity: The algorithm is easy to implement and understand.


2. Effectiveness: It works well for many practical CSPs, especially when
combined with heuristics.
3. Flexibility: Can be adapted and optimized with various strategies like
variable ordering and constraint propagation

Limitations

1. Inefficiency for Large Problems: The algorithm can be slow for large
or highly constrained problems.
2. Redundancy: Without optimization techniques, the search might
redundantly explore many invalid paths.
3. Space Complexity: It requires significant memory for storing the state
of the search tree.

What Are Knowledge-Based Agents

 Knowledge-based agents are specialized AI systems that operate
using a structured repository of information. These agents analyze
stored data, apply logical rules, and make decisions that emulate
human reasoning.
 They differ from basic AI systems by focusing on informed, context-
aware actions rather than reactive responses. For instance,
a knowledge-based system in AI, like an automated tech support
agency, can troubleshoot issues by referencing a vast database of
solutions and tailoring responses based on the specific problem.

Knowledge-Based Agent in Artificial intelligence

 An intelligent agent needs knowledge about the real world for taking
decisions and reasoning to act efficiently.
 Knowledge-based agents are those agents who have the capability of
maintaining an internal state of knowledge, reason over that knowledge,
update their knowledge after observations and take actions. These agents can
represent the world with some formal representation and act intelligently.

Knowledge-based agents are composed of two main parts:

 Knowledge-base and
 Inference system.

A knowledge-based agent must be able to do the following:

 An agent should be able to represent states, actions, etc.
 An agent should be able to incorporate new percepts.
 An agent can update the internal representation of the world.
 An agent can deduce hidden properties of the world.
 An agent can deduce appropriate actions.

The architecture of knowledge-based agent:


A generalized architecture for a knowledge-based agent works as follows. The
knowledge-based agent (KBA) takes input from the environment by perceiving it.
The input is passed to the inference engine of the agent, which communicates
with the KB to decide on an action according to the knowledge stored
in the KB. The learning element of the KBA regularly updates the KB by learning new
knowledge.

Knowledge base:

The knowledge base is a central component of a knowledge-based agent; it is also
known as the KB. It is a collection of sentences (here 'sentence' is a technical term and
it is not identical to a sentence in English). These sentences are expressed in a
language called a knowledge representation language. The knowledge base
of a KBA stores facts about the world.

Why use a knowledge base?

A knowledge base is required so that an agent can update its knowledge, learn from
experience, and act according to that knowledge.

Inference system

Inference means deriving new sentences from old. Inference system allows us to
add a new sentence to the knowledge base. A sentence is a proposition about the
world. Inference system applies logical rules to the KB to deduce new information.

The inference system generates new facts so that an agent can update the KB. An
inference system mainly works with two chaining rules, which are:

 Forward chaining
 Backward chaining

Operations Performed by KBA

Following are three operations which are performed by KBA in order to show the
intelligent behavior:

1. TELL: This operation tells the knowledge base what it perceives from the
environment.

2. ASK: This operation asks the knowledge base what action it should perform.

3. Perform: It performs the selected action.


A generic knowledge-based agent:

Following is the structural outline of a generic knowledge-based agent's program:

1. function KB-AGENT(percept):

2. persistent: KB, a knowledge base

3. t, a counter, initially 0, indicating time

4. TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))

5. action = ASK(KB, MAKE-ACTION-QUERY(t))

6. TELL(KB, MAKE-ACTION-SENTENCE(action, t))

7. t = t + 1

8. return action

The knowledge-based agent takes a percept as input and returns an action as output.
The agent maintains the knowledge base, KB, which initially contains some background
knowledge of the real world. It also has a counter to indicate the time for the whole
process, and this counter is initialized with zero.

Each time the function is called, it performs three operations:

 First, it TELLs the KB what it perceives.
 Second, it ASKs the KB what action it should take.
 Third, the agent program TELLs the KB which action was chosen.

MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent
perceived the given percept at the given time.

MAKE-ACTION-QUERY generates a sentence that asks which action should be
done at the current time.

MAKE-ACTION-SENTENCE generates a sentence asserting that the chosen
action was executed.
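A skeletal Python rendering of this agent program might look as follows; it is an illustrative sketch, and the ASK step is a stub, since a real agent would run an inference procedure over the KB there:

class KnowledgeBasedAgent:
    """Skeleton of the generic KB-AGENT program above. The knowledge base
    here is just a list of sentences; TELL appends and ASK is a placeholder."""

    def __init__(self, background_knowledge=()):
        self.kb = list(background_knowledge)   # persistent knowledge base
        self.t = 0                             # time counter, initially 0

    def tell(self, sentence):
        self.kb.append(sentence)

    def ask(self, query):
        # Placeholder: a real agent would run inference (e.g. resolution,
        # forward or backward chaining) over self.kb here.
        return "NoOp"

    def __call__(self, percept):
        self.tell(("percept", percept, self.t))        # MAKE-PERCEPT-SENTENCE
        action = self.ask(("action-query", self.t))    # MAKE-ACTION-QUERY
        self.tell(("action", action, self.t))          # MAKE-ACTION-SENTENCE
        self.t += 1
        return action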

Various levels of knowledge-based agent:

A knowledge-based agent can be viewed at different levels which are given below:

1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level, we
need to specify what the agent knows and what the agent's goals are. With these
specifications, we can fix its behavior. For example, suppose an automated taxi
agent needs to go from station A to station B and it knows the way from A to B;
this belongs to the knowledge level.

2. Logical level:
At this level, we understand how the knowledge is represented and stored. At this
level, sentences are encoded into different logics; that is, an encoding of knowledge
into logical sentences occurs. At the logical level we can expect the automated taxi
agent to reach destination B.

3. Implementation level:
This is the physical representation of logic and knowledge. At the
implementation level, the agent performs actions according to the logical and
knowledge levels. At this level, the automated taxi agent actually implements its
knowledge and logic so that it can reach the destination.

Approaches to designing a knowledge-based agent:

There are mainly two approaches to build a knowledge-based agent:

Declarative approach:

We can create a knowledge-based agent by starting with an empty knowledge
base and telling the agent all the sentences with which we want to start. This
approach is called the declarative approach.

Procedural approach:

In the procedural approach, we directly encode the desired behavior as program
code, which means we just need to write a program that already encodes the
desired behavior of the agent.
Propositional logic in Artificial intelligence
Propositional logic (PL) is the simplest form of logic where all the statements are
made by propositions. A proposition is a declarative statement which is either true
or false. It is a technique of knowledge representation in logical and mathematical
form.

Example:

a) It is Sunday.

b) The Sun rises from the West. (False proposition)

c) 3 + 3 = 7. (False proposition)

d) 5 is a prime number. (True proposition)

Following are some basic facts about propositional logic:

 Propositional logic is also called Boolean logic as it works on 0 and 1.
 In propositional logic, we use symbolic variables to represent the logic, and
we can use any symbol to represent a proposition, such as A, B, C, P, Q,
R, etc.
 Propositions can be either true or false, but not both.
 Propositional logic consists of objects, relations or functions, and logical
connectives.
 These connectives are also called logical operators.
 The propositions and connectives are the basic elements of propositional
logic.
 Connectives can be described as logical operators which connect two sentences.
 A proposition formula which is always true is called a tautology; it is also
called a valid sentence.
 A proposition formula which is always false is called a contradiction.
 Statements which are questions, commands, or opinions, such as "Where is
Rohini?", "How are you?", and "What is your name?", are not propositions.

Syntax of propositional logic:


The syntax of propositional logic defines the allowable sentences for the
knowledge representation. There are two types of Propositions

a. Atomic Propositions

b. Compound propositions

a. Atomic Proposition:

Atomic propositions are simple propositions. An atomic proposition consists of a single
proposition symbol. These are the sentences which must be either true or false.

Example: a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.

b) "The Sun is cold" is also a proposition, as it is a false fact.

Compound proposition:

Compound propositions are constructed by combining simpler or atomic
propositions, using parentheses and logical connectives.

Example:
1. a) "It is raining today, and street is wet."
2. b) "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:

Logical connectives are used to connect two simpler propositions or to
represent a sentence logically. We can create compound propositions
with the help of logical connectives. There are mainly five connectives,
which are given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be
either a positive literal or a negative literal.
2. Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called
a conjunction. Example: "Rohan is intelligent and hardworking." It can be
written as P = Rohan is intelligent, Q = Rohan is hardworking, giving P ∧ Q.
3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called
a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer." Here P = Ritika is a doctor and
Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P → Q is called an implication.
Implications are also known as if-then rules. Example: "If it is
raining, then the street is wet." Let P = It is raining and Q = The street is wet;
it is represented as P → Q.
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence.
Example: "If I am breathing, then I am alive." With P = I am breathing and Q = I am alive,
it can be represented as P ⇔ Q.

Summarized table for propositional logic connectives

Truth Table of Propositional Logic

1. Negation
If p is a proposition, then the negation of p is denoted by ¬p, which
when translated into simple English means "It is not the case that p", or simply
"not p". The truth value of ¬p is the opposite of the truth value of p. The truth
table of ¬p is:

p    ¬p
T    F
F    T

Example: The negation of "It is raining today" is "It is not the case that it is raining
today", or simply "It is not raining today".

2. Conjunction
For any two propositions p and q, their conjunction is denoted
by p ∧ q, which means "p and q". The conjunction p ∧ q is True
when both p and q are True, and False otherwise. The truth table
of p ∧ q is:

p    q    p ∧ q
T    T    T
T    F    F
F    T    F
F    F    F

Example: The conjunction of the propositions p – "Today is Friday" and q –
"It is raining today", p ∧ q, is "Today is Friday and it is raining today". This
proposition is true only on rainy Fridays and is false on any other rainy day or on
Fridays when it does not rain.
3. Disjunction
For any two propositions p and q, their disjunction is denoted
by p ∨ q, which means "p or q". The disjunction p ∨ q is True
when either p or q is True, and False otherwise. The truth table of p ∨ q is:

p    q    p ∨ q
T    T    T
T    F    T
F    T    T
F    F    F

Example: The disjunction of the propositions p – "Today is Friday" and q – "It
is raining today", p ∨ q, is "Today is Friday or it is raining today". This
proposition is true on any day that is a Friday or a rainy day (including rainy
Fridays) and is false on any day other than Friday when it also does not rain.

4. Exclusive Or
For any two propositions p and q, their exclusive or is denoted
by p ⊕ q, which means "either p or q but not both". The exclusive
or p ⊕ q is True when exactly one of p and q is True, and False when both are
true or both are false. The truth table of p ⊕ q is:

p    q    p ⊕ q
T    T    F
T    F    T
F    T    T
F    F    F

Example: The exclusive or of the propositions p – "Today is Friday" and q –
"It is raining today", p ⊕ q, is "Either today is Friday or it is raining today,
but not both". This proposition is true on any day that is a Friday or a rainy
day (not including rainy Fridays), and it is false on rainy Fridays and on any day
other than Friday when it does not rain.

5. Implication
For any two propositions p and q, the statement "if p then q" is
called an implication, and it is denoted by p → q.

p    q    p → q
T    T    T
T    F    F
F    T    T
F    F    T

Example: "If it is Friday then it is raining today" is a proposition which is of the
form p → q. The above proposition is true if it is not Friday (the premise is
false), or if it is Friday and it is raining, and it is false when it is Friday but it is
not raining.

6. Biconditional or Double Implication

For any two propositions p and q, the statement "p if and only
if (iff) q" is called a biconditional and it is denoted by p ↔ q. The
statement p ↔ q is also called a bi-implication. p ↔ q has the same
truth value as (p → q) ∧ (q → p). The biconditional is true
when p and q have the same truth value, and is false otherwise. The truth table
of p ↔ q is:

p    q    p ↔ q
T    T    T
T    F    F
F    T    F
F    F    T

Example: "It is raining today if and only if it is Friday today" is a proposition
which is of the form p ↔ q. The above proposition is true if it is not Friday
and it is not raining, or if it is Friday and it is raining, and it is false when
exactly one of the two holds (for example, a Friday with no rain).

Properties of Operators

The logical operators in propositional logic have several important properties:


1. Commutativity:
 P∧Q≡Q∧P
 P∨Q≡Q∨P

2. Associativity:
 (P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R)
 (P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R)

3. Distributivity:
 P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
 P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)

4. Identity:
 P ∧ true ≡ P
 P ∨ false ≡ P

5. Domination:
 P ∨ true ≡ true
 P ∧ false ≡ false

6. Double Negation:
 ¬ (¬P) ≡ P

7. Idempotence:
 P∧P≡P
 P∨P≡P
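These identities can be checked mechanically by enumerating truth assignments. The short Python sketch below (illustrative only) verifies commutativity of ∧ and distributivity of ∧ over ∨ in that way:

from itertools import product

def equivalent(f, g, n):
    """True if two n-argument Boolean formulas agree on every truth assignment."""
    return all(f(*vals) == g(*vals) for vals in product([True, False], repeat=n))

# Commutativity of AND, and distributivity of AND over OR:
print(equivalent(lambda p, q: p and q, lambda p, q: q and p, 2))          # True
print(equivalent(lambda p, q, r: p and (q or r),
                 lambda p, q, r: (p and q) or (p and r), 3))              # True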
Propositional Theorem Proving
A different approach to using logic to solve problems is to use logical rules of
inference to generate logical implications.

Rules of Inference in Artificial intelligence

Inference:

In artificial intelligence, we need intelligent computers which can create new logic
from old logic or from evidence; generating conclusions from evidence and
facts is termed inference.

Inference rules:
Inference rules are the templates for generating valid arguments. Inference rules
are applied to derive proofs in artificial intelligence, and the proof is a sequence of
the conclusion that leads to the desired goal.

In inference rules, the implication among all the connectives plays an important
role. Following are some terminologies related to inference rules:
 Implication: It is one of the logical connectives and can be represented
as P → Q. It is a Boolean expression.
 Converse: The converse of an implication swaps its two sides, so the right-hand side
proposition goes to the left-hand side and vice versa. It can be written as Q
→ P.
 Contrapositive: Negating both sides of the converse gives the contrapositive,
which can be represented as ¬Q → ¬P.
 Inverse: Negating both sides of the implication gives the inverse. It can be represented
as ¬P → ¬Q.

From the above terms, some of these compound statements are equivalent to
each other, which we can prove using a truth table: an implication P → Q is
equivalent to its contrapositive ¬Q → ¬P, and its converse Q → P is equivalent
to its inverse ¬P → ¬Q.
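As a quick check, the following small Python snippet (an illustrative sketch, not part of the original notes) builds the truth tables for these four forms and confirms the two equivalences stated above:

from itertools import product

def implies(p, q):
    return (not p) or q

rows = list(product([True, False], repeat=2))
implication    = [implies(p, q) for p, q in rows]           # P -> Q
converse       = [implies(q, p) for p, q in rows]           # Q -> P
contrapositive = [implies(not q, not p) for p, q in rows]   # ~Q -> ~P
inverse        = [implies(not p, not q) for p, q in rows]   # ~P -> ~Q

print(implication == contrapositive)   # True: P -> Q is equivalent to ~Q -> ~P
print(converse == inverse)             # True: Q -> P is equivalent to ~P -> ~Q
print(implication == converse)         # False: an implication is not its converse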
Types of Inference rules:
1 Modus Ponens (Law of Detachment)

If a conditional statement (“if-then” statement) is true, and its antecedent (the “if”
part) is true, then its consequent (the “then” part) must also be true.
Form: If p → q and p, then q.

Example:
 Premise: If it rains, the ground will be wet.
 Premise: It is raining.
 Conclusion: The ground is wet.

2. Modus Tollens (Law of Contrapositive)


If a conditional statement is true, and its consequent is false, then its antecedent
must also be false.
Form: If p → q and ¬q, then ¬p.

Example:
 Premise: If it rains, the ground will be wet.
 Premise: The ground is not wet.
 Conclusion: It is not raining.

3. Hypothetical Syllogism
If two conditional statements are true, where the consequent of the first is the
antecedent of the second, then a third conditional statement combining the
antecedent of the first and the consequent of the second is also true.
Form: If p → q and q → r, then p → r.
Example:
 Premise: If it rains, the ground will be wet.
 Premise: If the ground is wet, the plants will grow.
 Conclusion: If it rains, the plants will grow.

4. Disjunctive Syllogism
If a disjunction (an “or” statement) is true, and one of the disjuncts (the parts of
the “or” statement) is false, then the other disjunct must be true.

Form: If p ∨ q and ¬p, then q.


Example:
 Premise: It is either raining or sunny.
 Premise: It is not raining.
 Conclusion: It is sunny.

5. Conjunction
If two statements are true, then their conjunction (an “and” statement) is also true.

Form:If p and q, then p ∧ q.


Example:
 Premise: It is raining.
 Premise: It is windy.
 Conclusion: It is raining and windy.

6. Simplification
If a conjunction (an “and” statement) is true, then each of its conjuncts is also
true.

Form:
If p ∧ q, then p
If p ∧ q, then q
Example:
 Premise: It is raining and windy.
 Conclusion: It is raining.

7. Addition
If a statement is true, then the disjunction (an “or” statement) of that statement
with any other statement is also true.

Form:
If p, then p ∨ q
Example:
 Premise: It is raining.
 Conclusion: It is raining or sunny.

First-Order logic:
 First-order logic is another way of knowledge representation in artificial
intelligence. It is an extension to propositional logic.
 FOL is sufficiently expressive to represent the natural language statements
in a concise way.
 First-order logic is also known as predicate logic or first-order predicate
logic. First-order logic is a powerful language that represents information
about objects in a more natural way and can also express the relationships
between those objects.
 First-order logic (like natural language) does not only assume that the world
contains facts, as propositional logic does, but also assumes the following things
in the world:

Objects: A, B, people, numbers, colors, wars, theories, squares, pits,
wumpus, ......

Relations: These can be unary relations such as red, round, or is adjacent, or n-ary
relations such as the sister of, brother of, has color, comes between.

Functions: father of, best friend, third inning of, end of, .....

Quantifiers in First-order logic:


A quantifier is a language element which generates quantification, and
quantification specifies the quantity of specimens in the universe of discourse.

Quantifiers are the symbols that permit us to determine or identify the range and scope of
a variable in a logical expression. There are two types of quantifier:

a. Universal quantifier (for all, everyone, everything)

b. Existential quantifier (for some, at least one)

Universal Quantifier:

The universal quantifier is a symbol of logical representation which specifies that the
statement within its range is true for everything or every instance of a particular
thing. The universal quantifier is represented by the symbol ∀, which resembles an
inverted A.

Note: In universal quantifier we use implication "→".

If x is a variable, then ∀x is read as:

 For all x
 For each x
 For every x.

Existential Quantifier:

Existential quantifiers are the type of quantifiers which express that the statement
within their scope is true for at least one instance of something.

It is denoted by the logical operator ∃, which resembles an inverted E. When it is
used with a predicate variable, it is called an existential quantifier.

Note: With the existential quantifier we always use the AND or conjunction symbol (∧).

If x is a variable, then the existential quantifier will be ∃x or ∃(x), and it will be read
as:

 There exists an 'x'.
 For some 'x'.
 For at least one 'x'.
FOL inference rules for quantifier:
As propositional logic we also have inference rules in first-order logic, so
following are some basic inference rules in FOL:

 Universal Generalization
 Universal Instantiation
 Existential Instantiation
 Existential introduction

Universal Generalization

 Universal generalization is a valid inference rule which states that if the premise
P(c) is true for any arbitrary element c in the universe of discourse, then we
can conclude ∀x P(x).
 It can be represented as: P(c) ⊢ ∀x P(x).
 This rule can be used if we want to show that every element has a similar
property.
 In this rule, x must not appear as a free variable.

Universal Instantiation:

 Universal instantiation, also called universal elimination or UI, is a valid
inference rule. It can be applied multiple times to add new sentences.
 The new KB is logically equivalent to the previous KB.
 It can be represented as: ∀x P(x) ⊢ P(c), for any constant c in the universe of discourse.

Existential Instantiation:

 Existential instantiation, also called existential elimination, is a
valid inference rule in first-order logic.
 It can be applied only once to replace an existential sentence.
 The new KB is not logically equivalent to the old KB, but it will be satisfiable if
the old KB was satisfiable.
 It can be represented as: ∃x P(x) ⊢ P(c), where c is a new constant symbol
that does not occur elsewhere in the KB.

Existential introduction

 Existential introduction, also known as existential generalization, is a
valid inference rule in first-order logic.
 This rule states that if there is some element c in the universe of discourse
which has a property P, then we can infer that there exists something in the
universe which has the property P.
 It can be represented as: P(c) ⊢ ∃x P(x).

Proof by resolution
The resolution method is an inference rule which is used in both propositional and
first-order predicate logic, in somewhat different ways. This method is basically used for
testing the unsatisfiability of a set of sentences (and hence for proving entailment). In the
resolution method, we use the proof-by-refutation technique to prove a given statement.

Key Concepts of Resolution Algorithm


 Proof-by-Contradiction: To prove that a knowledge base (KB) entails a
query (α), we assume the negation of the query (¬α) and add it to the
knowledge base. The goal is to demonstrate that the conjunction
of KB and ¬α leads to a contradiction. If a contradiction is found, it implies
that the original query is true, or KB ⊨ α.
 Conversion to CNF (Conjunctive Normal Form): To apply the resolution
algorithm, all logical statements must first be transformed into Conjunctive
Normal Form (CNF). CNF is a conjunction (AND) of disjunctions (OR) of
literals, which is a requirement for the resolution process to work effectively.
 Resolution Rule: The resolution rule enables us to derive new clauses by
eliminating complementary literals. Two clauses that contain a literal and its
negation (e.g., P and ¬P) can be combined by canceling out these
complementary literals to form a new resolvent clause. The process is repeated
until either a contradiction is found or no new resolvents can be generated.

Explanation of the Algorithm

1. Input: The inputs to the algorithm are the knowledge base (KB) and the query
(α). The knowledge base is a collection of facts in propositional logic, and the
query is the proposition we want to prove.
2. Clause Conversion: The algorithm starts by converting KB ∧ ¬α into a set of
clauses in CNF. This step is crucial because resolution operates only on CNF.
3. Loop and Resolution: In each iteration, the algorithm selects pairs of clauses
and applies the resolution rule. If the resolution of two clauses results in the
empty clause (denoted as False), the algorithm returns true, indicating that the
knowledge base entails the query.
4. Termination: If no new clauses can be added (i.e., new ⊆ clauses), the
algorithm returns false, implying that the query is not entailed by the
knowledge base.

In order to apply resolution in a proof:

1. we express our hypotheses and conclusion as a product of sums (conjunctive


normal form), such as those that appear in the Resolution Tautology.
2. each maxterm in the CNF of the hypothesis becomes a clause in the proof.
3. we apply the resolution tautology to pairs of clauses, producing new clauses.
4. if we produce all the clauses of the conclusion, we have proven it.

Steps for Resolution:

1. Conversion of facts into first-order logic.

2. Convert FOL statements into CNF

3. Negate the statement which needs to prove (proof by contradiction)

4. Draw resolution graph (unification).
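A minimal, illustrative Python sketch of propositional resolution by refutation is shown below; clauses are represented as frozensets of literals, with a leading "~" marking a negated literal:

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(ci, cj):
    """Return every resolvent of two clauses (each a frozenset of literals)."""
    resolvents = []
    for lit in ci:
        if negate(lit) in cj:
            resolvents.append(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return resolvents

def resolution_entails(kb_clauses, negated_query_clauses):
    """Proof by refutation: KB |= alpha iff KB AND ~alpha derives the empty clause."""
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        pairs = [(ci, cj) for ci in clauses for cj in clauses if ci != cj]
        for ci, cj in pairs:
            for resolvent in resolve(ci, cj):
                if not resolvent:          # empty clause: contradiction found
                    return True
                new.add(resolvent)
        if new <= clauses:                 # no new clauses can be added
            return False
        clauses |= new

# KB: (P -> Q) and P, i.e. clauses {~P, Q} and {P}.  Query: Q, so we add ~Q.
kb = [frozenset({"~P", "Q"}), frozenset({"P"})]
print(resolution_entails(kb, [frozenset({"~Q"})]))   # True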


Horn clauses
 A Horn clause is a disjunctive clause (a disjunction of literals) with at most
one positive, i.e. unnegated, literal.
 Horn clauses matter in AI because inference over knowledge bases made of Horn
clauses can be carried out efficiently using forward and backward chaining.

The different types of Horn clauses

There are two main types of Horn clauses:

 Definite clause: A Horn clause with exactly one positive literal (the head),
possibly together with negated literals (the body).
 Goal clause: A Horn clause with no positive literal, containing only negated literals.
Forms of a Horn clause:

1. NOT(P1) OR NOT(P2) OR ... OR NOT(Pn) OR Q      (definite clause)

2. NOT(P1) OR NOT(P2) OR ... OR NOT(Pn)           (goal clause)

Definite Clause: A type of Horn clause that has exactly one positive literal. It’s
used in rule-based reasoning systems to represent knowledge.

Horn Clause and Definite Clause

Horn clauses and definite clauses are fundamental logic structures used in
knowledge representation, especially in forward and backward chaining.

Forward Chaining

What is Forward Chaining?

Forward chaining is a data-driven reasoning strategy in AI. It starts with known
facts and applies rules to generate new facts or reach a conclusion. The process
continues until no more new facts can be inferred or a goal is achieved. This
approach is often used in expert systems for tasks such as troubleshooting and
diagnostics.
Properties of Forward Chaining:

 Data-Driven: The reasoning starts from available data (facts) and works
toward a goal.
 Bottom-Up Approach: It builds knowledge from facts, gradually moving
towards conclusions.
 Breadth-First Search Strategy: The inference engine explores multiple
rules simultaneously, applying them step by step.
 Possibility of Irrelevant Rules: Forward chaining may explore rules that do
not contribute to the final solution, making it less efficient in some cases
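A minimal, illustrative Python sketch of this data-driven loop, with rules represented as (premises, conclusion) pairs, is shown below:

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known,
    until no new facts can be added."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

rules = [({"rain"}, "ground_wet"),
         ({"ground_wet"}, "plants_grow")]
print(forward_chain({"rain"}, rules))
# {'rain', 'ground_wet', 'plants_grow'}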

Backward Chaining

What is Backward Chaining?

Backward chaining is a goal-driven reasoning strategy used in AI. It starts with a
goal or hypothesis and works backward to determine whether the available facts support
the goal. The process continues by recursively breaking down the goal into smaller
sub-goals until either all facts are verified or no more supporting data is found.

Properties of Backward Chaining:

 Goal-Driven: Reasoning begins with a desired goal and searches for


evidence to support it.
 Top-Down Approach: The system starts from the goal and works back to
find relevant facts.
 Depth-First Search Strategy: The inference engine follows a path deeply
before exploring other possibilities, prioritizing each goal or sub-goal in
sequence.
 Possibility of Infinite Loops: If not handled properly, backward chaining
may get stuck in loops while looking for evidence to support the goal
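For comparison, a minimal, illustrative Python sketch of the goal-driven strategy, using the same (premises, conclusion) rule format as the forward chaining sketch, is shown below; the seen set guards against the infinite loops mentioned above:

def backward_chain(goal, facts, rules, seen=None):
    """Goal-driven search: a goal holds if it is a known fact, or if some rule
    concludes it and all of that rule's premises can themselves be proved."""
    seen = seen or set()
    if goal in facts:
        return True
    if goal in seen:              # guard against looping on cyclic rules
        return False
    seen = seen | {goal}
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p, facts, rules, seen)
                                      for p in premises):
            return True
    return False

rules = [({"rain"}, "ground_wet"),
         ({"ground_wet"}, "plants_grow")]
print(backward_chain("plants_grow", {"rain"}, rules))   # True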
