
Artificial Intelligence

(KCS071) Unit 2



Syllabus- Unit 2

Problem Solving Methods – Search Strategies: Uninformed, Informed – Heuristics – Local Search Algorithms and Optimization Problems – Searching with Partial Observations – Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha-Beta Pruning – Stochastic Games



Problem Solving Methods
A problem-solving agent is a goal-based agent.

Intelligent agents are supposed to maximize their performance measure.


Achieving this is sometimes simplified if the agent can adopt a goal and aim at
satisfying it.

Problem-solving agents use atomic representations: states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.



Example
● Imagine an agent in the city of Arad, Romania, enjoying a touring holiday.
● Performance measure: improve its Romanian, take in the sights, and so on.
Now, suppose the agent has a nonrefundable ticket to fly out of Bucharest the
following day. In that case, it makes sense for the agent to adopt the goal of getting
to Bucharest.
● Goals help organize behavior by limiting the objectives that the agent is trying
to achieve and hence the actions it needs to consider.
● Goal formulation, based on the current situation and the agent’s performance
measure, is the first step in problem solving.
● The agent’s task is to find out how to act, now and in the future, so that it
reaches a goal state.



Problem formulation is the process of deciding what actions and states to
consider, given a goal.

Before it can do this, it needs to decide (or we need to decide on its behalf) what sorts of actions and states to consider.
Let us assume that the agent will consider actions at the level of driving from
one major town to another. Each state therefore corresponds to being in a
particular town.



Our agent has now adopted the goal of driving to Bucharest and is considering where to go from Arad. Three roads lead out of Arad: one toward Sibiu, one to Timisoara, and one to Zerind.

If the map is not known (an unknown environment), the agent can only select among its actions at random.

Here we assume that the environment is observable, discrete, known, and deterministic.

Under these assumptions, the solution to any problem is a fixed sequence of actions. In general, the solution could instead be a branching strategy that recommends different actions in the future depending on what percepts arrive.
After formulating a goal and a problem to solve, the agent calls a search
procedure to solve it.

It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do—typically, the first action of the sequence—and then removing that step from the sequence.

Once the solution has been executed, the agent will formulate a new goal.



Well-defined problems
A problem can be defined formally by five components:

• INITIAL STATE - The initial state that the agent starts in.

For example, the initial state for our agent in Romania might be described as In(Arad).

• ACTIONS - A description of the possible actions available to the agent. Given a particular state s,
ACTIONS(s) returns the set of actions that can be executed in s.

For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara),
Go(Zerind)}.

• TRANSITION MODEL - A description of what each action does; It is specified by a function RESULT(s,
a) that returns the state that results from doing action a in state s.

For example, we have RESULT(In(Arad),Go(Zerind)) = In(Zerind) .



• GOAL TEST - The goal test determines whether a given state is a goal state.
Sometimes there is an explicit set of possible goal states, and the test simply
checks whether the given state is one of them. The agent’s goal in Romania is
the singleton set {In(Bucharest )}.

• PATH COST - A path cost function that assigns a numeric cost to each path.
The problem-solving agent chooses a cost function that reflects its own
performance measure.

The preceding elements define a problem and can be gathered into a single
data structure that is given as input to a problem-solving algorithm.

A solution to a problem is an action sequence that leads from the initial state
to a goal state. Solution quality is measured by the path cost function, and an
optimal solution has the lowest path cost among all solutions.
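To make the five components concrete, here is a minimal Python sketch (not from the slides) of the Romania touring problem. Only a fragment of the road map is included, and the function names (`actions`, `result`, `goal_test`, `step_cost`) are illustrative choices, not a fixed API.

```python
# A small fragment of the Romania road map; the distances shown are the
# standard textbook values between neighboring cities.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Zerind": {"Arad": 75, "Oradea": 71},
    # ... remaining cities omitted for brevity
}

INITIAL_STATE = "Arad"

def actions(state):
    """ACTIONS(s): the set of actions applicable in state s."""
    return [f"Go({city})" for city in ROADS.get(state, {})]

def result(state, action):
    """RESULT(s, a): the transition model."""
    return action[3:-1]  # "Go(Sibiu)" -> "Sibiu"

def goal_test(state):
    """GOAL-TEST: is this state the goal?"""
    return state == "Bucharest"

def step_cost(state, next_state):
    """Edge cost used by the path-cost function g."""
    return ROADS[state][next_state]
```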
STATE SPACE : Together, the initial state, actions, and transition model
implicitly define the state space of the problem—the set of all states reachable
from the initial state by any sequence of actions.

The state space forms a directed network or graph in which the nodes are
states and the links between nodes are actions. A path in the state space is a
sequence of states connected by a sequence of actions.



Summary
• Initial State: The agent is at the starting point, for example, 'Home'.

• Action(s): The possible actions include choosing a road to travel from one intersection to
another.

• Result(s, a): For a chosen action 'a' (road taken), the result would be a new state
representing the agent's new location (another intersection).

• Goal Test: A function to determine if the agent has reached the destination, 'Work'.

• Path Cost Function: A function that adds up the distance (or time) to travel from the initial
state to the current state via the chosen paths. The objective is to minimize this cost.



Example Problems: Vacuum world
States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt.

Thus, there are 2 × 2^2 = 8 possible world states.

A larger environment with n locations has n · 2^n states.

• Initial state: Any state can be designated as the initial state.

• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck.

• Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in
the rightmost square, and Sucking in a clean square have no effect.

• Goal test: This checks whether all the squares are clean.

• Path cost: Each step costs 1, so the path cost is the number of steps in the path.



Example Problems: 8-puzzle
States: A state description specifies the location of each of the
eight tiles and the blank in one of the nine squares.

• Initial state: Any state can be designated as the initial state.

• Actions: The simplest formulation defines the actions as


movements of the blank space Left, Right, Up, or Down. Different
subsets of these are possible depending on where the blank is.

• Transition model: Given a state and action, this returns the


resulting state; for example, if we apply Left to the start state in
Figure the resulting state has the 5 and the blank switched.

• Goal test: This checks whether the state matches the goal
configuration.

• Path cost: Each step costs 1, so the path cost is the number of
steps in the path.



Searching for solutions
A solution is an action sequence, so search algorithms work by considering various possible action sequences.
The possible action sequences starting at the initial state form a search tree with the initial state at the root; the
branches are actions and the nodes correspond to states in the state space of the problem.

A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.



Search Strategies
Search strategies refer to systematic methods and approaches used to explore and find relevant information or solutions within a
given search space.

Following are the four essential properties of search strategies/algorithms, used to compare their efficiency:

● Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for any input.
● Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all possible solutions, then it is said to be an optimal solution.
● Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
● Space Complexity: It is the maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.

Time and space complexity are measured in terms of b, d, and m:

b: maximum branching factor of the tree

d: depth of the least-cost solution

m: maximum depth of the state space



Infrastructure for search algorithms
For each node n of the tree, we have a structure that contains four
components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the
initial state to the node, as indicated by the parent pointers.
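As an illustration (not from the slides), a minimal Python sketch of this node structure might look as follows; the `Node` class name and the `solution` helper are assumptions for this example.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # n.STATE
    parent: Optional["Node"] = None  # n.PARENT
    action: Any = None               # n.ACTION
    path_cost: float = 0.0           # n.PATH-COST, i.e. g(n)

def solution(node: Node):
    """Follow parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```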



Types of search strategy / algorithm
Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.



Uninformed Search(Brute Force Search/Blind
Search/Exhaustive Search)
● Uninformed search strategies explore the search space without any
specific information or heuristics about the problem. Here we proceed
in a systematic way by exploring nodes in some predetermined order or
simply by selecting nodes at random.
● It operates in a brute force way as it only includes information about how
to traverse the tree and how to identify leaf and goal nodes.
● Uninformed search does not use any domain knowledge, such as the closeness or location of the goal.
● It examines each node of the tree until it reaches the goal node.



• Advantage: Simplicity: Uninformed search strategies are generally easy to
implement and understand.

• Disadvantage: Inefficiency: Without additional information, uninformed search strategies may require an extensive search, leading to inefficiency in terms of time and space.

• Examples: Breadth First Search, Depth First Search, Uniform Cost Search



Informed Search(Heuristic Search)
● Informed search strategies utilize domain-specific information or heuristics to guide the search towards more promising paths. Here we have knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.
● A heuristic is a technique that is not always guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
● Informed search can solve complex problems that could not be solved in any other way.



Advantage:

• Efficiency: By leveraging domain knowledge, informed search strategies can make informed
decisions and focus the search on more relevant areas, leading to faster convergence to a solution.

Disadvantage:

• Heuristic Accuracy: The effectiveness of informed search strategies heavily relies on the quality
and accuracy of the chosen heuristic function. An inaccurate or misleading heuristic can lead to
suboptimal or incorrect solutions.

Example: Best First Search, A* Algorithm



Knowledge:
  Uninformed Search: Searches without any prior knowledge.
  Informed Search: Uses knowledge, such as heuristics, to guide the search.

Time efficiency:
  Uninformed Search: Generally more time-consuming.
  Informed Search: Finds solutions quicker by prioritizing certain paths.

Complexity:
  Uninformed Search: Higher complexity due to lack of information, affecting both time and space complexity.
  Informed Search: Reduced complexity and typically more efficient in both time and space due to informed decisions.

Example techniques:
  Uninformed Search: Depth-First Search (DFS), Breadth-First Search (BFS).
  Informed Search: A* Search, Heuristic Depth-First Search, Best-First Search.

Solution approach:
  Uninformed Search: Explores paths blindly.
  Informed Search: Explores paths strategically towards the goal.


Uninformed Search



Breadth first Search:
•Breadth-First Search is an uninformed search strategy.
•It is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
•The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
•Breadth-first search is implemented using a FIFO queue data structure. New nodes (which are always deeper than their parents) go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first.



Algorithm:
1. Start with the initial state as the root node.
2. Enqueue the root node into a queue.
3. While the queue is not empty, do the following:
   a. Dequeue a node from the front of the queue.
   b. Mark the dequeued node as visited.
   c. Explore the node and enqueue its unvisited neighbouring nodes.
4. Repeat step 3 until the queue is empty or a goal state is found.
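A minimal Python sketch of the steps above, assuming the graph is given as an adjacency dictionary (the names `bfs` and `neighbors` are illustrative):

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search; `neighbors` maps a state to its successors.
    Returns the path from start to goal, or None if no path exists."""
    frontier = deque([start])
    parent = {start: None}          # doubles as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None: # walk parent pointers back to the root
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for nxt in neighbors.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None

# Example: bfs("S", "G", {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": []})
```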



Example1:
S -> A -> B -> C -> D -> G -> H -> E -> F -> I -> K



Example2: (figure omitted)



Advantages:
•BFS will provide a solution if any solution exists. It guarantees finding the shortest path if one exists
in an unweighted graph.

• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.

Disadvantages:
•BFS is inefficient in terms of time and space for large search spaces. It requires lots of memory
since each level of the tree must be saved into memory to expand the next level.

•BFS needs lots of time if the solution is far away from the root node.



Evaluating Efficiency of BFS
•Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.
•Optimality: Breadth-First Search is optimal in terms of finding the shortest path in an unweighted or uniformly weighted graph.
•Time Complexity: The time complexity of Breadth-First Search is O(b^d), where b is the branching factor and d is the depth of the shallowest goal node.
•Space Complexity: The space complexity of Breadth-First Search is also O(b^d), as it stores all the nodes at each level of the search tree in memory.



Uniform Cost Search Algorithm/ Cheapest Cost
Algorithm:
•The uniform cost search algorithm is an extension of Breadth-First Search (BFS) that takes into account the cost of reaching each node in order to find the lowest-cost path to the goal.

•It is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge.

•The primary goal of uniform cost search is to find a path to the goal node which has the lowest cumulative cost.

•The uniform cost search algorithm is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost.

•Uniform cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

•The goal test is applied to a node when it is selected for expansion rather than when it is first generated.



Algorithm
1. Start with the initial state as the root node.

2. Maintain a priority queue to store nodes based on their path cost.

3. Enqueue the root node with a path cost of zero.

4. While the priority queue is not empty, do the following:

1. Dequeue the node with the lowest path cost from the priority queue.

2. If the dequeued node is the goal state, terminate the search and return the solution.

3. Otherwise, expand the node and enqueue its unvisited neighboring nodes with their updated path
costs.

5. Repeat step 4 until the goal state is found or the priority queue is empty.
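A minimal Python sketch of these steps, assuming a weighted graph given as `{state: {neighbor: edge_cost}}`; note that, as stated above, the goal test is applied when a node is selected for expansion:

```python
import heapq

def uniform_cost_search(start, goal, graph):
    """UCS sketch; returns (cost, path) to the goal, or None."""
    frontier = [(0, start)]           # priority queue ordered by g(n)
    best_g = {start: 0}
    parent = {start: None}
    while frontier:
        g, node = heapq.heappop(frontier)
        if node == goal:              # goal test on expansion, not generation
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return g, list(reversed(path))
        if g > best_g.get(node, float("inf")):
            continue                  # stale queue entry; a cheaper path exists
        for nxt, cost in graph.get(node, {}).items():
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                parent[nxt] = node
                heapq.heappush(frontier, (new_g, nxt))
    return None
```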



Example1 (figure omitted)



Example2

Based on the cost we will expand the graph in order: a->b->d->c->f->e->g->h


Advantages:

•Uniform cost search is optimal because at every step the path with the least cost is chosen.

Disadvantages:

•It does not care about the number of steps involved in the search and is only concerned with path cost, due to which this algorithm may get stuck in an infinite loop (for example, when zero-cost actions exist).



Evaluating Efficiency of UCS
Completeness: Uniform cost search is complete: if a solution exists, UCS will find it (assuming every step cost is at least some small positive constant ε).

Optimality: Uniform cost search is always optimal, as it only selects the path with the lowest path cost.

Time Complexity: The time complexity of Uniform Cost Search depends on the number of nodes and the cost of the lowest-cost path to the goal. With minimum edge cost ε and optimal cost C*, the worst-case time complexity of uniform cost search is O(b^(1 + ⌊C*/ε⌋)).

Space Complexity: The space complexity of Uniform Cost Search can also be exponential in the worst case, as it may need to store all the nodes whose cost is below that of the lowest-cost path. By the same logic, the worst-case space complexity of uniform cost search is O(b^(1 + ⌊C*/ε⌋)).



Depth First Search
•Depth-First Search is an uninformed search strategy that explores as far as possible along each branch before backtracking. It traverses the depth of a search tree or graph before exploring the neighbouring nodes.
•Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
•DFS uses a stack data structure for its implementation.
•The process of the DFS algorithm is similar to the BFS algorithm.



Algorithm
1. Initialize an empty stack and an empty set for visited nodes.
2. Push the starting node onto the stack.
3. While the stack is not empty:
a. Pop the current node.
b. If it’s the goal node, return “Path found”.
c. Add the current node to the visited set.
d. Get the neighbors of the current node.
e. For each neighbor not visited, push it onto the stack.If the loop
completes without finding the goal, return “Path not found”.
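A minimal Python sketch of the stack-based procedure above (the adjacency-dict representation is an assumption of this example):

```python
def dfs(start, goal, neighbors):
    """Iterative depth-first search using an explicit stack."""
    stack = [start]
    visited = set()
    while stack:
        node = stack.pop()
        if node == goal:
            return "Path found"
        if node in visited:
            continue
        visited.add(node)
        # Push unvisited neighbors; the last one pushed is explored first.
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                stack.append(nxt)
    return "Path not found"
```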



Example
Root node -> left node -> right node



Example (figure omitted)



Advantages:
•DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.

•It takes less time to reach the goal node than the BFS algorithm (if it traverses in the right direction).

Disadvantages:
•There is the possibility that many states keep reoccurring, and there is no guarantee of finding the solution.

•The DFS algorithm goes for deep-down searching, and sometimes it may enter an infinite loop.



Evaluating Efficiency of DFS
•Completeness: DFS is not complete if the search space is infinite or contains cycles. If the depth is finite, then it is complete.

•Optimality: DFS does not guarantee finding the optimal solution, as it may find a solution at a greater depth before finding a shorter path. It is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.

•Time Complexity: The time complexity of DFS can vary depending on the structure of the search space. In the worst case, it can be O(b^m), where b is the branching factor and m is the maximum depth of the search tree.

•Space Complexity: The space complexity of DFS is O(b·m), where b is the branching factor and m is the maximum depth of the search tree. It stores the nodes along the current path in memory.



Depth limited search algorithm
•A depth limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth limited search can overcome the drawback of the infinite path in depth-first search.

•In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.

•Depth limited search can be terminated with two conditions of failure:

•Standard failure value: It indicates that the problem does not have any solution.

•Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.
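A recursive Python sketch under the same assumptions as the earlier examples; it distinguishes the two failure values by returning `None` for standard failure and the string `"cutoff"` for cutoff failure:

```python
def depth_limited_search(node, goal, neighbors, limit):
    """Recursive DLS; returns a path, 'cutoff' (cutoff failure),
    or None (standard failure: no solution at all)."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"               # depth limit reached below this node
    cutoff_occurred = False
    for nxt in neighbors.get(node, []):
        result = depth_limited_search(nxt, goal, neighbors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result    # prepend this node to the found path
    return "cutoff" if cutoff_occurred else None
```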



(Figure omitted.)
Depth limit: 2
Goal node: J; path: S -> A -> C -> D -> B -> I -> J
Goal node: H; this depth limit leads to no solution, due to the cut-off failure condition.



Advantages:

•Depth limited search is memory efficient.

Disadvantages:

•Depth limited search can terminate without finding a solution.

•It may not be optimal if the problem has more than one solution.



Evaluating Efficiency of DLS
•Completeness: The DLS algorithm is complete if a solution exists within the depth limit.
•Optimality: Depth limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
•Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
•Space Complexity: The space complexity of the DLS algorithm is O(b·ℓ).



Iterative Deepening Depth First Search:
•The iterative deepening algorithm is a combination of the DFS and BFS algorithms.
•This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
•This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit after each iteration until the goal node is found.
•This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
•Iterative deepening is a useful uninformed search strategy when the search space is large and the depth of the goal node is unknown.
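A minimal sketch that reuses the `depth_limited_search` function from the DLS section above, raising the limit until the goal is found:

```python
def iddfs(start, goal, neighbors, max_depth=50):
    """Iterative deepening: run DLS with limits 0, 1, 2, ... until a goal appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, neighbors, limit)
        if result not in (None, "cutoff"):
            return result             # a path, found at the shallowest possible depth
        if result is None:
            return None               # standard failure: no solution at any depth
    return None
```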



If the goal node is G:

Iteration 1 (d=0): A
Iteration 2 (d=1): A, B, C
Iteration 3 (d=2): A, B, D, E, C, F, G

If the goal node is K:

Iteration 4 (d=3): A, B, D, H, I, E, C, F, K, G



Advantages:

•It combines the benefits of BFS and DFS search algorithm in terms of fast
search and memory efficiency.

Disadvantages:

•The main drawback of IDDFS is that it repeats all the work of the previous
phase



Evaluating Efficiency of IDDFS
Completeness: This algorithm is complete if the branching factor is finite.
Optimality: The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
Time Complexity: Suppose b is the branching factor and d is the depth; then the worst-case time complexity is O(b^d).
Space Complexity: The space complexity of IDDFS is O(b·d).



Bidirectional Search Algorithm:
•Bidirectional Search is an algorithm that performs two simultaneous searches, one forward from the initial state and one backward from the goal state. It aims to meet in the middle by searching for a common node reached from both directions.

•Bidirectional search replaces one single search graph with two small subgraphs: one starts the search from the initial vertex and the other starts from the goal vertex.

•Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

•The motivation is that b^(d/2) + b^(d/2) is much less than b^d. Bidirectional search is implemented by replacing the goal test with a check to see whether the frontiers of the two searches intersect; if they do, a solution has been found.



•In the below search tree (figure omitted), the bidirectional search algorithm is applied.

•This algorithm divides one graph/tree into two sub graphs.

•It starts traversing from node 1 in the forward direction and starts from goal
node 16 in the backward direction.

•The algorithm terminates at node 9 where two searches meet.



Example (figure omitted: an undirected graph over nodes A, B, C, D, E, F, G)

Forward search from A: A -> B -> C -> D
Backward search from G: G -> E -> F -> D

The searches meet at D, giving the combined path A -> B -> C -> D -> F -> E -> G.


Advantages:
•Bidirectional Search can be faster than traditional searches by exploring the search space simultaneously from both ends,
potentially reducing the search effort.

• As the search progresses from both directions, the effective branching factor is reduced, leading to a more efficient
search. Bidirectional search requires less memory.

Disadvantages:
•Bidirectional Search requires storing visited nodes from both directions, leading to increased memory consumption
compared to unidirectional searches. Implementation of the bidirectional search tree is difficult.

•In bidirectional search, one should know the goal state in advance.

• The coordination and synchronization between the two searches introduce additional overhead in terms of implementation
complexity.



Evaluating Efficiency of Bidirectional Search
•Completeness: Bidirectional Search is complete if we use BFS in both searches.

•Optimality: Bidirectional search is optimal if both the forward and backward searches are optimal.

•Time Complexity: The time complexity of Bidirectional Search depends on the branching factor, the depth of the shallowest goal state, and the meeting point of the two searches. In the best case, it can be O(b^(d/2)), where b is the branching factor and d is the depth of the goal state. The worst-case time complexity of bidirectional search is O(b^d).

•Space Complexity: The space complexity of Bidirectional Search depends on the memory required to store visited nodes from both directions. In the best case, it can be O(b^(d/2)); the worst-case space complexity of bidirectional search is O(b^d).



Comparison of Uninformed Searches (table omitted)



Problems in Uninformed Search
1. Blind Exploration: Uninformed search strategies, such as Breadth-First
Search or Depth-First Search, lack domain-specific knowledge or heuristics.
They explore the search space without any information about the problem
structure or the goal state.

2. Inefficient in Complex Spaces: Uninformed search strategies can be


inefficient in large or complex search spaces, as they may require extensive
exploration of irrelevant or unpromising paths.



INFORMED SEARCH
STRATEGIES



INFORMED (HEURISTIC) SEARCH STRATEGIES
•Heuristic search strategies are like smart search methods that use special
knowledge or hints to find solutions more efficiently. It's similar to when we use
our intuition or common sense to solve a problem.
•An informed search algorithm uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc.
•This knowledge helps agents explore less of the search space and find the goal node more efficiently.
•The informed search algorithm is more useful for large search spaces. Informed search uses the idea of a heuristic, so it is also called heuristic search.



Heuristics function:
•A heuristic is a function used in informed search to find the most promising path.

•The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time.

•A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the pair of states. The value of the heuristic function is always positive.

•Admissibility of the heuristic function is given as:

h(n) ≤ h*(n)

Here h(n) is the heuristic cost, and h*(n) is the true cost. Hence the heuristic cost should be less than or equal to the true cost.



In the informed search we will discuss two main algorithms which are given
below:

•Best First Search Algorithm(Greedy search)

•A* Search Algorithm



Pure Heuristic Search:
•Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n).
•It maintains two lists, OPEN and CLOSED. In the CLOSED list it places those nodes which have already been expanded, and in the OPEN list it places nodes which have not yet been expanded.
•On each iteration, the node n with the lowest heuristic value is expanded; all its successors are generated, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.



Best First Search Algorithm (Greedy Search):
•Best-first search is a search algorithm which explores a graph by expanding the most promising node. It is called "best-first" because it greedily selects the path that appears best at that moment according to the heuristic.

•It is a combination of depth-first and breadth-first search. It uses a heuristic function to guide the search.

•In the best-first search algorithm, we expand the node which is closest to the goal node, where closeness is estimated by the heuristic function.

•The greedy best-first algorithm is implemented using a priority queue.



Algorithm
•Step 1: Place the starting node into the OPEN list.

•Step 2: If the OPEN list is empty, stop and return failure.

•Step 3: Remove the node n from the OPEN list which has the lowest value of f(n) (for greedy search, f(n) = h(n)), and place it in the CLOSED list.

•Step 4: Expand the node n, and generate the successors of node n.

•Step 5: Check each successor of node n to find whether any of them is a goal node. If any successor node is a goal node, then return success and terminate the search; else proceed to Step 6.

•Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks if the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.

•Step 7: Return to Step 2.
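A minimal Python sketch of greedy best-first search with a priority queue as the OPEN list (here `h` is a function mapping a state to its heuristic value; the representation is assumed for this example):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the open node with the lowest h(n)."""
    open_list = [(h(start), start)]
    closed = set()
    parent = {start: None}
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:
            path = []
            while node is not None:   # reconstruct the path via parent pointers
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        closed.add(node)
        for nxt in neighbors.get(node, []):
            if nxt not in closed and nxt not in parent:
                parent[nxt] = node
                heapq.heappush(open_list, (h(nxt), nxt))
    return None
```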



Example
In this search example (figure omitted), we use two lists, OPEN and CLOSED. Following are the iterations for traversing the example graph.

Expand the nodes of S and put them in the CLOSED list.

Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]; then Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]; then Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S -> B -> F -> G


Example 2 (graph figure omitted)

Heuristic values:
Node  h(n)
A     40
B     32
C     25
D     35
E     19
F     17
H     10
G     0

Resulting path: A -> C -> F -> G
Advantages:

• It can find a solution without exploring much of the state space.

• It uses less memory than other informed search methods like A* as it does not store all the generated nodes.

Disadvantages:

• It is not complete. In some cases, it may get stuck in an infinite loop.

• It is not optimal. It does not guarantee the shortest possible path will be found.

• It heavily depends on the accuracy of the heuristic.



Evaluating Efficiency of Best First Search
•Completeness: Greedy best-first search is incomplete, even if the given state space is finite.

•Optimality: The greedy best-first search algorithm is not optimal.

•Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

•Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.



A* Search Algorithm
•A* Search is an informed search algorithm which evaluates a node using a combination of the cost of the path from the start node to that node and a heuristic function that estimates the cost to reach the goal from the current node.

•It combines features of UCS and greedy best-first search, by which it solves problems efficiently.

•The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm provides optimal results faster.

•The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

•In the A* search algorithm, we use the heuristic as well as the cost to reach the node.



Algorithm
Step 1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function g + h. If node n is the goal node, then return success and stop; otherwise continue.

Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
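A minimal Python sketch of A* under the same weighted-graph representation used in the UCS example; the priority queue is ordered by f(n) = g(n) + h(n):

```python
import heapq

def a_star(start, goal, graph, h):
    """A* search sketch; `graph` maps a state to {neighbor: step_cost},
    and `h` is the heuristic function. Returns (cost, path) or None."""
    open_list = [(h(start), 0, start)]      # entries are (f, g, state)
    best_g = {start: 0}
    parent = {start: None}
    while open_list:
        _, g, node = heapq.heappop(open_list)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return g, list(reversed(path))
        if g > best_g.get(node, float("inf")):
            continue                        # stale entry; a cheaper path was found
        for nxt, cost in graph.get(node, {}).items():
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                parent[nxt] = node
                heapq.heappush(open_list, (new_g + h(nxt), new_g, nxt))
    return None
```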



Example
Step 01: We start with node A.

•Node B and Node F can be reached from node A.

A* Algorithm calculates f(B) and f(F).


•f(B) = 6 + 8 = 14
•f(F) = 3 + 6 = 9
Since f(F) < f(B), so it decides to go to node F.
Path A → F

Step 02: Node G and Node H can be reached from node F.

A* Algorithm calculates f(G) and f(H).


•f(G) = (3+1) + 5 = 9
•f(H) = (3+7) + 3 = 13
Since f(G) < f(H), so it decides to go to node G.

Path A → F → G



Step 03: Node I can be reached from node G.

A* Algorithm calculates f(I). f(I) = (3+1+3) + 1 = 8


It decides to go to node I.
Path: A → F → G → I

Step 04:

Node E, Node H and Node J can be reached from node I.

A* Algorithm calculates f(E), f(H) and f(J). f(E) = (3+1+3+5) + 3 = 15

•f(H) = (3+1+3+2) + 3 = 12
•f(J) = (3+1+3+3) + 0 = 10
Since f(J) is least, so it decides to go to node J.
Path: A → F → G → I → J
This is the required shortest path from node A to node J





Example
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the table (figure omitted), so we will calculate f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach a node from the start state.

Here we will use OPEN and CLOSED lists.



Advantages:
•The A* search algorithm performs better than other search algorithms.

•The A* search algorithm is optimal and complete.

•This algorithm can solve very complex problems.

Disadvantages:
•It does not always produce the shortest path, as it relies on heuristics and approximation.

•The A* search algorithm has some complexity issues.

•The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.



Evaluating Efficiency of A* search
•Completeness: A* Search is complete if the search space is finite and the heuristic
function is admissible (never overestimates the actual cost).

•Optimality: A* Search is optimal if the heuristic function is admissible and consistent (also known as monotonic).

•Time Complexity: The time complexity of A* Search depends on the heuristic function,
the branching factor, and the structure of the search space. In the worst case, it can be
exponential.

•Space Complexity: The space complexity of A* Search depends on the size of the priority
queue and the number of nodes stored in memory. In the worst case, it can be exponential.



Conditions for optimality: Admissible heuristics
• The first condition we require for optimality is that h(n) be an admissible heuristic.
•A heuristic function h(n) is admissible if for every node n, h(n)<=h*(n), where h*(n) is the
true cost to reach the goal state from n.
• An admissible heuristic h(n) is one that never overestimates the cost to reach the goal.
Admissible heuristics are by nature optimistic because they think the cost of solving the
problem is less than it actually is.
• For example, Straight-line distance is admissible because the shortest path between any
two points is a straight line, so the straight line cannot be an overestimate.
•If h(n) is admissible, A* using TREE-SEARCH is optimal.



Conditions for optimality: Consistent heuristics
• A second, slightly stronger condition called consistency (or monotonicity) is required only for
applications of A∗ to graph search.

• A heuristic h(n) is consistent if, for every node n and every successor n’ of n generated by any action
a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n’ plus
the estimated cost of reaching the goal from n’: h(n) ≤ c(n, a, n’) + h(n’)
This is a form of the general triangle inequality, which stipulates that each side of a triangle
cannot be longer than the sum of the other two sides

(Figure omitted: a triangle formed by n, its successor n', and the goal G, whose sides are c(n, a, n'), h(n), and h(n').)

Every consistent heuristic is also admissible.

If h(n) is consistent, A* using GRAPH-SEARCH is optimal.



Memory-Bounded Heuristic Search
Problem in A*: Huge memory requirement.

Solution: IDA*, Recursive Best First Search



Iterative deepening A*
The simplest way to reduce memory requirements for A∗ is to adapt the idea
of iterative deepening to the heuristic search context, resulting in the
iterative-deepening A∗ (IDA∗) algorithm.

The main difference between IDA∗ and standard iterative deepening is that
the cutoff used is the f-cost (g+h) rather than the depth; at each iteration, the
cutoff value is the smallest f-cost of any node that exceeded the cutoff on the
previous iteration.



The IDA* algorithm works by incrementally increasing the threshold based on the f-score of each
node, which is calculated using the formula:
f(n)=g(n)+h(n)

f(n)=Actual cost+Estimated cost

Where h is admissible.
Here,
● f(n) = Total cost evaluation function.
● g(n) = The actual cost from the initial node to the current node.
● h(n) = Heuristic estimated cost from the current node to the goal state; it is an approximation based on the problem characteristics.

The f-score is the evaluation function used to estimate the cost of reaching the goal state through a given state; it combines the actual cost g(n) with the heuristic estimate h(n).
Step-by-Step Process of the IDA* Algorithm

1. Initialization: Set the root node as the current node and compute its f-score.
2. Set Threshold: Initialize a threshold based on the f-score of the starting node.
3. Node Expansion: Expand the current node’s children and calculate their f-scores.
4. Pruning: If the f-score exceeds the threshold, prune the node and store it for future exploration.
5. Path Return: Once the goal node is found, return the path from the start node to the goal.
6. Update Threshold: If the goal is not found, increase the threshold based on the minimum pruned
value and repeat the process
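A minimal Python sketch of these steps (the recursive `search` helper and the graph representation are assumptions of this example):

```python
def ida_star(start, goal, graph, h):
    """IDA* sketch: depth-first contour search with an increasing f-cost cutoff."""
    def search(path, g, threshold):
        node = path[-1]
        f = g + h(node)
        if f > threshold:
            return f                    # pruned: report f for the next threshold
        if node == goal:
            return path
        minimum = float("inf")          # smallest f-value that exceeded the cutoff
        for nxt, cost in graph.get(node, {}).items():
            if nxt in path:             # avoid cycles along the current path
                continue
            result = search(path + [nxt], g + cost, threshold)
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
        return minimum

    threshold = h(start)                # initial threshold: f-score of the root
    while True:
        result = search([start], 0, threshold)
        if isinstance(result, list):
            return result               # goal path found
        if result == float("inf"):
            return None                 # no solution
        threshold = result              # raise cutoff to the minimum pruned value
```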



The goal path (from the omitted worked-example figures) is: 2 -> 5 -> 6 -> 8 -> 15



Advantages

1. Optimal Pathfinding: IDA* guarantees finding the optimal path, as it never overestimates the
cost to the goal.
2. Memory Efficient: It uses limited memory compared to A* by applying depth-first search
techniques.
3. Efficient with Large State Spaces: IDA* handles large graphs efficiently by pruning
unnecessary nodes.

Disadvantages

1. Repeated Node Exploration: The algorithm does not store visited nodes, leading to repeated
exploration.
2. Slower than A*: IDA* can be slower than algorithms like A* due to the repeated exploration of
nodes.



Recursive Best First Search
● Recursive best-first search (RBFS) is a simple recursive algorithm that attempts to mimic the
operation of standard best-first search, but using only linear space.
● Its structure is similar to that of a recursive depth-first search, but rather than continuing indefinitely down the current path, it uses an f_limit variable to keep track of the f-value of the best alternative path available from any ancestor of the current node.
● If the current node exceeds this limit, the recursion unwinds back to the alternative path.
As the recursion unwinds, RBFS replaces the f-value of each node along the path with a
backed-up value—the best f-value of its children.

In this way, RBFS remembers the f-value of the best leaf in the forgotten subtree and can therefore
decide whether it’s worth reexpanding the subtree at some later time.

RBFS is somewhat more efficient than IDA∗, but still suffers from excessive node regeneration



Example (three figures omitted): the slides trace RBFS on a search tree whose nodes are labeled with their f-values. The f_limit for each recursive call is shown above the current node. When the f-value of the best path exceeds the f_limit given by the best alternative, the recursion unwinds, the backed-up f-value (the best f-value of the node's children) replaces the node's stored value, and the search re-expands along the new best path.
Evaluating efficiency of Recursive Best First Search
Optimality: Like A∗ tree search, RBFS is an optimal algorithm if the heuristic
function h(n) is admissible.

Space Complexity: Its space complexity is linear in the depth of the deepest
optimal solution

Time Complexity: It depends both on the accuracy of the heuristic function and on how often the best path changes as nodes are expanded.



RBFS vs IDA*
RBFS is somewhat more efficient than IDA∗, but still suffers from excessive
node regeneration.

IDA∗ and RBFS suffer from using too little memory. Between iterations, IDA∗
retains only a single number: the current f-cost limit. RBFS retains more
information in memory, but it uses only linear space: even if more memory
were available, RBFS has no way to make use of it. Because they forget most
of what they have done, both algorithms may end up reexpanding the same
states many times over. Furthermore, they suffer the potentially exponential
increase in complexity associated with redundant paths in graphs.



Depth First - Branch and Bound
● Depth-first branch-and-bound search is a way to combine the space saving of depth-first
search with heuristic information. It is particularly applicable when many paths to a goal exist
and we want an optimal path.
● The idea of a branch-and-bound search is to maintain the lowest-cost path to a goal found so
far, and its cost. Suppose this cost is bound. If the search encounters a path p such that
cost(p)+h(p) ≥ bound, path p can be pruned. If a non-pruned path to a goal is found, it must be
better than the previous best path. This new solution is remembered and bound is set to the
cost of this new solution. It then keeps searching for a better solution.
● Branch-and-bound search generates a sequence of ever-improving solutions. Once it has
found a solution, it can keep improving it.
● Branch-and-bound search is typically used with depth-first search, where the space saving of
the depth-first search can be achieved. It can be implemented similarly to depth-bounded
search, but where the bound is in terms of path cost and reduces as shorter paths are found.
The algorithm remembers the lowest-cost path found and returns this path when the search
finishes.
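A minimal Python sketch of depth-first branch-and-bound following the pruning rule cost(p) + h(p) ≥ bound (the function names are illustrative):

```python
import math

def dfbnb(start, goal, graph, h):
    """Depth-first branch-and-bound sketch: keep the best solution cost found
    so far as `bound` and prune any partial path that cannot beat it."""
    best_path, bound = None, math.inf

    def explore(node, path, g):
        nonlocal best_path, bound
        if g + h(node) >= bound:
            return                      # prune: cannot improve on the best so far
        if node == goal:
            best_path, bound = list(path), g
            return                      # remember improved solution, tighten bound
        for nxt, cost in graph.get(node, {}).items():
            if nxt not in path:         # avoid cycles on the current path
                path.append(nxt)
                explore(nxt, path, g + cost)
                path.pop()

    explore(start, [start], 0)
    return best_path, bound
```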
Terms

Branch: A mechanism to generate branches when searching the solution space. We use a heuristic strategy to pick which one to try first.

Bound: A mechanism to generate a bound so that many branches can be terminated. It refers to ignoring partial solutions that cannot be better than the current best solution.

Pruning: Eliminating those parts of the search space which cannot contain a better solution.



Example



(Figures omitted: the search first finds a solution of cost 5, then improves it to a solution of cost 4.)
Advantages:
● As it keeps the minimum-cost path found so far rather than just the minimum-cost successor, there should not be any repetition.
● Its time complexity is less compared to other algorithms.

Disadvantages:
● The Branch and Bound algorithm is limited to small networks. For large networks, where the solution search space grows exponentially with the scale of the network, the approach becomes prohibitive.



PROBLEM REDUCTION:
•So far we have considered search strategies for OR graphs, through which we want to find a single path to a goal. Such a structure represents the fact that a goal can be reached along any one of the branches leaving a node.

AND-OR GRAPHS

•The AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved.

•This decomposition, or reduction, generates arcs that we call AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution.



AO*
The AO* algorithm is based on AND-OR graphs: it breaks complex problems into smaller ones and then solves them. The AND side of the graph represents those tasks that need to be done together to reach the goal, while the OR side represents a single alternative task.

Working of AO* algorithm:


The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = Actual cost + Estimated cost
here,
f(n) = The actual cost of traversal.
g(n) = the cost from the initial node to the current node.
h(n) = estimated cost from the current node to the goal state.



● AO* is designed to work with graphs or search spaces where the heuristic function may not always
be accurate or consistent.
● The key feature of AO* is its ability to handle inadmissible or inconsistent heuristic functions,
which means that the heuristic might overestimate or underestimate the actual cost to reach the goal.
To accommodate this, AO* employs a technique called “anytime optimization.” It continually
updates its search based on new information to refine its estimate of the optimal solution.
● The search process continues until certain termination conditions are met, such as:
○ A predefined time limit is reached.
○ A user request to stop and return the best solution found so far.
○ The discovery of an optimal solution that satisfies the problem constraints.



Example:
The cost of each edge is the same, and the heuristic cost to reach the goal node from each node of the graph is shown beside it (figure omitted).



The solution path is A -> C -> D.



Difference b/w A* Algorithm and AO* algorithm
● A* algorithm provides with the optimal solution, whereas AO* stops when
it finds any solution.
● AO* algorithm requires lesser memory compared to A* algorithm.
● AO* algorithm doesn't go into infinite loop whereas the A* algorithm can
go into an infinite loop.



Advantages of the AO* Algorithm
• It can efficiently solve problems with multiple paths due to its use of heuristics.

• It is optimal when the heuristic function is admissible (never overestimates the true cost).

Disadvantages of the AO* Algorithm
• It can consume a large amount of memory, similar to the A* algorithm.

• The performance of AO* is heavily dependent on the accuracy of the heuristic function. If
the heuristic function is not well-chosen, AO* could perform poorly.



Generate and Test
• Generate and Test Search is a heuristic search technique based on Depth First Search with
Backtracking which guarantees to find a solution if done systematically and there exists a
solution.

• In this technique, all the solutions are generated and tested for the best solution. It ensures that
the best solution is checked against all possible generated solutions.

• The potential solutions that need to be generated vary depending on the kind of problem. For some problems the possible solutions may be particular points in the problem space, and for some problems, paths from the start state.

• This approach is what is known as British Museum algorithm: finding an object in the British
Museum by wandering randomly.



•The evaluation is carried out by the heuristic function. All the solutions are generated systematically in the generate-and-test algorithm, but paths which are most unlikely to lead to a result are not considered. The heuristic does this by ranking all the alternatives and is often effective in doing so.

•Systematic Generate and Test may prove to be ineffective while solving complex problems. But
there is a technique to improve in complex cases as well by combining generate and test search with
other techniques so as to reduce the search space. For example in Artificial Intelligence Program
DENDRAL we make use of two techniques, the first one is Constraint Satisfaction Techniques followed by
Generate and Test Procedure to work on reduced search space i.e. yield an effective result by working
on a lesser number of lists generated in the very first step.



Properties of Good Generators:
The good generators need to have the following properties:

Complete: Good generators need to be complete, i.e., they should generate all the possible solutions and cover all the possible states. In this way, we can guarantee that our algorithm will converge to the correct solution at some point in time.
Non Redundant: Good Generators should not yield a duplicate solution at any point of time as it reduces the
efficiency of algorithm thereby increasing the time of search and making the time complexity exponential. In fact, it is
often said that if solutions appear several times in the depth-first search then it is better to modify the procedure to
traverse a graph rather than a tree.
Informed: Good Generators have the knowledge about the search space which they maintain in the form of an array of
knowledge. This can be used to search how far the agent is from the goal, calculate the path cost and even find a way to
reach the goal.



Algorithm
1.Generate a possible solution.

2.Test to see if this is the expected solution.

3.If the solution has been found quit else go to step 1.



Example
Traveling Salesman Problem (TSP)

• A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.
– The traveler needs to visit n cities.
– The distance between each pair of cities is known.
– We want the shortest route that visits all the cities once.

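A minimal generate-and-test sketch for TSP (assuming a symmetric `distance` dictionary keyed by ordered city pairs); it enumerates every round trip and keeps the best, which also illustrates why the approach becomes impractical as n grows:

```python
import itertools

def generate_and_test_tsp(cities, distance):
    """Enumerate every round trip (generate) and keep the shortest (test).
    `distance` is assumed symmetric and keyed by ordered pairs: distance[a, b]."""
    best_tour, best_length = None, float("inf")
    start, rest = cities[0], list(cities[1:])
    for perm in itertools.permutations(rest):      # generate a candidate tour
        tour = (start, *perm, start)
        length = sum(distance[a, b] for a, b in zip(tour, tour[1:]))
        if length < best_length:                   # test against the best so far
            best_tour, best_length = tour, length
    return best_tour, best_length
```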


Water-Jug Problem
Problem: You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marker on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?

Initial state is (0, 0).

The goal state is (2, n) for any value of n.

State Space Representation: we will represent a state of the problem as a tuple (x, y) where x represents the amount of water in
the 4-gallon jug and y represents the amount of water in the 3-gallon jug. Note that 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.

Assumptions:

• We can fill a jug from the pump.

• We can pour water out of a jug to the ground.

• We can pour water from one jug to another.

• There is no measuring device available.



Operators (Actions)

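The operator table itself was a figure and is omitted; the sketch below reconstructs the standard operators as a successor function and solves the problem with BFS (function names are illustrative):

```python
from collections import deque

def successors(state):
    """Water-jug successor function; state = (x, y), 0 <= x <= 4, 0 <= y <= 3."""
    x, y = state
    results = {
        (4, y),                       # fill the 4-gallon jug from the pump
        (x, 3),                       # fill the 3-gallon jug from the pump
        (0, y),                       # empty the 4-gallon jug on the ground
        (x, 0),                       # empty the 3-gallon jug on the ground
    }
    pour = min(x, 3 - y)              # pour the 4-gallon jug into the 3-gallon jug
    results.add((x - pour, y + pour))
    pour = min(y, 4 - x)              # pour the 3-gallon jug into the 4-gallon jug
    results.add((x + pour, y - pour))
    results.discard(state)
    return results

def solve(start=(0, 0)):
    """BFS over jug states until the 4-gallon jug holds exactly 2 gallons."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s[0] == 2:                 # goal test: (2, n) for any n
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for nxt in successors(s):
            if nxt not in parent:
                parent[nxt] = s
                frontier.append(nxt)
    return None
```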


Solution using DFS (figure omitted)



8 Puzzle problem
● Given a 3×3 board with 8 tiles (each numbered from 1 to 8) and one empty space, the objective is to
place the numbers to match the final configuration using the empty space.
● We can slide four adjacent tiles (left, right, above, and below) into the empty space.
● Actions: Left, right, above, below



Solution using Uninformed Search (BFS)

(Figure omitted: starting from the board

2 8 3
1 6 4
7 _ 5

BFS expands states level by level, generating every board reachable by sliding a tile into the blank, until the goal configuration is found.)
Solution using Informed Search (A*)

h(n) = number of misplaced tiles with respect to the goal state (figure omitted)


Local Search Algorithm
● The search algorithms that we have seen so far are designed to explore search spaces systematically. This systematicity is achieved by keeping one or more paths in memory and by recording which alternatives have been explored at each point along the path. When a goal is found, the path to that goal also constitutes a solution to the problem.
● In many problems, however, the path to the goal is irrelevant. For example, in the 8-queens problem what matters is the final configuration of queens, not the order in which they are added.
● If the path to the goal does not matter, we might consider a different class of algorithms, ones that do not worry about paths at all. Here we search in solution space, where each node is a solution, good or bad. Our task is to find a state which is an optimal solution to the problem.



Local Search and optimization
● Local search algorithms
○ Keep track of single current state/node(rather than multiple paths)
○ Move only to neighboring state/node.
○ Paths followed by the search are not retained.

● Local search algorithms are not systematic.


● Advantages:
○ They use very little memory—usually a constant amount;
○ They can often find reasonable solutions in large or infinite (continuous) state spaces for
which systematic algorithms are unsuitable.
● Pure Optimization Problems:
○ All states have an objective function (the function that says how good or bad a solution is)
○ Goal is to find state with max(min) objective value
○ Local search can do quite well on these problems.



Definition
Local search in AI refers to a family of optimization algorithms that are used to find the best possible solution within a given
search space. Unlike global search methods that explore the entire solution space, local search algorithms focus on making
incremental changes to improve a current solution until they reach a locally optimal or satisfactory solution. This approach
is useful in situations where the solution space is vast, making an exhaustive search impractical.



Example - N Queens
We have to formulate the N-Queens problem as an optimization problem.

State: in local search a state is a complete solution, good or bad. Here we place one queen in each column, so the number of possible states is 8^8.

Objective function: the number of pairs of queens attacking each other.

- worst possible state (queen arrangement) = C(8,2) = 28 attacking pairs
- best possible state (a real solution) has 0 attacking pairs

So the objective function here should minimize the number of attacks until it becomes 0.

Successor function / neighbourhood function:

- The intuition of a neighbourhood function is that two states in a neighbourhood are relatively similar solutions.
- Here: move a single queen to another square in the same column (a sketch of this objective function follows below).
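A short Python sketch of this objective function (the row-per-column state encoding is an illustrative assumption):

from itertools import combinations

def attacking_pairs(state):
    # state[c] is the row of the queen in column c;
    # two queens attack iff they share a row or a diagonal
    count = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            count += 1
    return count

print(attacking_pairs((0, 1, 2, 3, 4, 5, 6, 7)))  # all on one diagonal -> 28
print(attacking_pairs((2, 4, 1, 7, 0, 6, 3, 5)))  # a real solution -> 0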
Hill climbing
•Hill climbing is a local search algorithm which continuously moves in the direction of
increasing elevation/value to find the peak of the mountain or the best solution to the problem. It
terminates when it reaches a peak where no neighbor has a higher value.

•The algorithm does not maintain a search tree, so the data structure for the current node need only
record the state and the value of the objective function.

• Hill climbing is sometimes called greedy local search with no backtracking because it grabs a good
neighbor state without looking ahead beyond the immediate neighbors of the current state.

• Hill climbing often makes rapid progress toward a solution because it is usually quite easy to
improve a bad state.



Features of Hill Climbing:
Following are some main features of Hill Climbing Algorithm:
•Generate and Test variant: Hill climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps to
decide which direction to move in the search space.
•Greedy approach: Hill climbing search moves in the direction
which optimizes the cost.
•No backtracking: It does not backtrack in the search space, as it does not
remember previous states.



State space Diagram for Hill Climbing:
•The state space landscape is a graphical representation of
the hill climbing algorithm, showing a graph
between the various states of the algorithm and the objective
function/cost.

•On the Y axis we take the function, which can be an
objective function or a cost function, and the state space on the X
axis.

•If the function on the Y axis is cost, then the goal of the search is to
find the global minimum (without getting trapped at a local minimum).

•If the function on the Y axis is an objective function, then the goal
of the search is to find the global maximum (without getting stuck at a local
maximum).



Different regions in the state space landscape
•Local Maximum:Local maximum is a state which is better than its neighbor states, but
there is also another state which is higher than it.

•Global Maximum: Global maximum is the best possible state of the state space landscape. It
has the highest value of the objective function.

•Current state :It is a state in a landscape diagram where an agent is currently present.

•Flat local maximum:It is a flat space in the landscape where all the neighbor states of
current states have the same value.

•Shoulder: It is a plateau region which has an uphill edge.



Example
The successors of a state are all possible states generated by moving a single queen to
another square in the same column (so each state has 8×7=56 successors).

The heuristic cost function h is the number of pairs of queens that are attacking each
other, either directly or indirectly.

The global minimum of this function is zero, which occurs only at perfect solutions.

Figure shows a state with h=17. The figure also shows the values of all its successors, with the
best successors having h=12.

Hill-climbing algorithms typically choose randomly among the set of best successors if there
is more than one.

It takes just 5 steps from this state to reach the state in Figure 4.3(b), which has h=1 and is very nearly a
solution.

Unfortunately, hill climbing often gets stuck: starting from a randomly generated 8-queens state, it solves the problem only 14% of the time and gets stuck at a local minimum 86% of the time.

However, it is fast:
- It takes only 4 steps on average when it succeeds,
- and 3 on average when it gets stuck (in a state space with 8^8 ≈ 17 million states).



Simple Hill Climbing
•Simple hill climbing is the simplest way to implement a hill climbing algorithm.
It evaluates one neighbor node state at a time and selects the first one
which improves the current cost, setting it as the current state.
•It checks only one successor state, and if that successor is better than the current
state it moves there; otherwise it stays in the same state. This algorithm has the following
features:
•Less time consuming
•Less optimal: a good solution is not guaranteed



Algorithm
Step 1:Evaluate the initial state, if it is goal state then return success and Stop.

Step 2:Loop Until a solution is found or there is no new operator(successor function) left to apply.

Step 3:Select and apply an operator to the current state.

Step 4:Check new state:

If it is goal state, then return success and quit.

Else if it is better than the current state then assign new state as a current state.

Else if not better than the current state, then return to step 2.

Step 5: Exit.



Steepest Ascent hill climbing:
•The steepest ascent algorithm is a variation of the simple hill climbing algorithm.
This algorithm examines all the neighboring nodes of the current state and
selects the neighbor node which is closest to the goal state. It
consumes more time as it searches multiple neighbors.



Algorithm for Steepest Ascent hill climbing
•Step 1 Evaluate the initial state, if it is goal state then return success and stop, else make current state as initial state.

•Step 2 :Loop until a solution is found or the current state does not change.

•Let SUCC be a state such that any successor of the current state will be better than it.

•For each operator that applies to the current state:

•Apply the new operator and generate a new state.

•Evaluate the new state.

•If it is goal state, then return it and quit, else compare it to the SUCC.

•If it is better than SUCC, then set new state as SUCC.

•If the SUCC is better than the current state, then set current state to SUCC.

•Step 3 Exit
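A short Python sketch of steepest-ascent hill climbing (the helpers neighbors(state) and value(state) are assumed, problem-specific functions; for 8-queens, value(s) could be -attacking_pairs(s) from the earlier sketch):

def steepest_ascent(initial, neighbors, value):
    # repeatedly move to the best neighbor until no neighbor is better
    current = initial
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current        # peak or local optimum reached
        current = best

For 8-queens, neighbors(state) would return every state obtained by moving one queen to a different row within its own column (8 x 7 = 56 successors).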



Problems in Hill Climbing Algorithm:
•Local Maximum: A local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum. Hill-climbing
algorithms that reach the vicinity of a local maximum will be drawn upward
toward the peak but will then be stuck with nowhere else to go.



•Ridge : Ridges result in a sequence of local maxima that is very difficult for
greedy algorithms to navigate.



Plateau:

A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no
uphill exit exists, or a shoulder, from which progress is possible. A hill-climbing search might get lost
on the plateau. In each case, the algorithm reaches a point at which no progress is being made.

•Solution: We can allow sideways move in the hope that the plateau is really a shoulder. But,if we
always allow sideways moves when there are no uphill moves, an infinite loop will occur whenever the
algorithm reaches a flat local maximum that is not a shoulder. One common solution is to put a limit
on the number of consecutive sideways moves allowed.

For 8-queens, allowing sideways moves with a limit of 100:

- Raises the percentage of solved problem instances from 14% to 94%.
- However, it takes roughly 21 steps on average for each successful instance and 64 for each failure.



Variations of hill climbing to tackle its problems
- Tabu Search
- Local optima enforced Hill Climbing
- Stochastic Hill Climbing variations
- Hill climbing with random walk
- Random Restart hill climbing



Tabu Search
- If we allow sideways moves, the algorithm may get stuck in a loop, i.e., it may return
to the same state again and again.

- To prevent the algorithm from getting stuck in a loop or revisiting states, a list of previously
visited states can be maintained; these states are then avoided in future steps.

- This list is a fixed-length queue called the "tabu list": add the most recent state to the queue and drop the oldest.

- As the size of the tabu list grows, hill climbing asymptotically becomes "non-redundant" (it won't
visit the same state twice).

- In practice a reasonably sized tabu list (say 100) improves the performance of hill climbing on many problems; a sketch follows below.
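A minimal Python sketch of a tabu list (a bounded deque; states must be hashable, e.g. tuples; parameter values are illustrative assumptions):

from collections import deque

def tabu_hill_climbing(initial, neighbors, value, tabu_size=100, max_steps=1000):
    current, best = initial, initial
    tabu = deque([initial], maxlen=tabu_size)   # oldest entries drop off automatically
    for _ in range(max_steps):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=value)    # best non-tabu neighbor (may be sideways/downhill)
        tabu.append(current)
        if value(current) > value(best):
            best = current
    return best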



Enforced Hill Climbing
•To prevent getting stuck in local optima we can follow an approach where we perform breadth-first search (or any
systematic search algorithm) once we reach a local optimum.
- While performing BFS we can find a state with a better objective value (better than the local optimum); when we find a
better state we restart the local search from it, until we reach the next local optimum.
- This process continues until a termination condition (e.g., a time limit) is met, and then we select the best
optimum found as the solution.

•Summarizing in Enforced Hill Climbing we have:


–prolonged periods of exhaustive search
–bridged by relatively quick periods of hill-climbing

•Middle ground b/w local and systematic search



Stochastic hill climbing variations:
● When the state space landscape has local minima, any search that moves only in the greedy direction
cannot be complete. Stochastic hill climbing variations follow a hybrid approach: they act
greedily while also being asymptotically complete.
● In Stochastic hill climbing
○ Algorithm selects one neighbor node at random and decides whether to choose it as a
current state or examine another state.
○ The selection probability can vary with the steepness of the uphill move.
● To avoid getting stuck in local minima
○ Random-restart hill-climbing
○ Random-walk hill-climbing
○ Hill-climbing with both



Hill-climbing with random walk
•At each step do one of the two
–Greedy: With probability p move to the neighbor with largest value
–Random: With probability 1-p move to a random neighbor



Hill-climbing with random restarts
• When we get stuck at a local optimum, randomly sample a new start state and run hill climbing again,
always keeping the best solution found so far. This is called random-restart hill climbing and it increases the
chance of finding a global optimum.

•Different variations
–For each restart: run until termination vs. run for a fixed time
–Run a fixed number of restarts or run indefinitely
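A minimal Python sketch of the fixed-number-of-restarts variant (random_state is an assumed helper that samples a fresh start state; it reuses the steepest_ascent sketch above):

def random_restart(random_state, neighbors, value, restarts=25):
    best = None
    for _ in range(restarts):
        result = steepest_ascent(random_state(), neighbors, value)
        if best is None or value(result) > value(best):
            best = result
        if value(best) == 0:   # e.g. 0 attacking pairs in 8-queens when value = -attacks
            break              # goal reached: stop restarting early
    return best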



Hill-climbing with both
•At each step do one of the three
–Greedy: move to the neighbor with largest value
–Random Walk: move to a random neighbor
–Random Restart: Resample a new current state



Advantages of Hill Climbing:

• It's simple to understand and easy to implement.

• It requires less computational power compared to other search algorithms.

• If the heuristic is well chosen, it can find a solution in a reasonable time.

Disadvantages of Hill Climbing:

• It's not guaranteed to find the optimal solution.

• It's highly sensitive to the initial state and can get stuck in local optima.

• It does not maintain a search history, which can cause the algorithm to cycle or loop.

• It can't deal effectively with flat regions of the search space (plateaus) or regions that form a ridge.



Simulated Annealing
–instead of picking the best move, pick one randomly
–say the change in the objective function is δ
–if δ is positive, then move to that state
–otherwise:
•move to that state only with a probability that decreases exponentially with how bad the move is (e^(δ/T), as in the algorithm below)
•thus worse moves (very large negative δ) are executed less often
–however, there is always a chance of escaping from a local maximum
–over time, make it less likely to accept locally bad moves



Simulated Annealing
• In metallurgy, annealing is the process used to temper or harden metals and
glass by heating them to a high temperature and then gradually cooling them,
thus allowing the material to reach a low energy crystalline state.

• Inspired by metallurgy, SA permits bad moves to states with a lower value, which lets us escape states that lead only to a local optimum.



Algorithm
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
inputs: problem, a problem to solve
schedule, a mapping from time to “temperature” (controls exploration)

current ← MAKE-NODE(problem.INITIAL-STATE) # Start with the initial state of the problem

for t = 1 to ∞ do # Iterate indefinitely (or until stopping condition)


T ← schedule(t) # The temperature is initialized based on the schedule function, which decreases over time.
if T = 0 then return current # If temperature is 0, stop and return the current state
next ← a randomly selected successor of current # Pick a random neighboring state
ΔE ← next.VALUE – current.VALUE #Calculate the change in value of next state and current state (difference in objective
function)
if ΔE > 0 then # If the new state is better (positive ΔE):
current ← next # Move to the next state
else # If the new state is worse:
current ← next only with probability e^(ΔE / T) # the algorithm can still accept the worse state, but only with a probability that depends on the temperature: P = e^(ΔE / T)
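A runnable Python version of this pseudocode (a sketch: the geometric cooling schedule and the parameter values are illustrative assumptions):

import math
import random

def simulated_annealing(initial, neighbors, value, t0=1.0, decay=0.995, t_min=1e-4):
    current = initial
    t = t0
    while t > t_min:                              # T shrinking toward 0 plays the role of schedule(t)
        nxt = random.choice(neighbors(current))   # pick a random successor
        delta = value(nxt) - value(current)       # ΔE
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt                         # always accept better moves; worse ones with prob e^(ΔE/T)
        t *= decay                                # cool down
    return current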
Key Concepts in Simulated Annealing:
Exploration vs. Exploitation:

● When T is high, the probability of accepting a worse state is higher, allowing the algorithm to explore more freely. As T decreases,
the probability of accepting worse solutions decreases, focusing more on exploitation and less on exploration.
● In the early stages (high temperature), the algorithm explores a wide range of solutions, even accepting worse solutions to escape
local optima. In the later stages (low temperature), it focuses on refining the solution by accepting only better solutions.

Annealing Schedule:

● The schedule controls how the temperature T decreases over time. In the early stages, the temperature is high, allowing more
exploration (accepting worse solutions). Over time, as the temperature lowers, the algorithm becomes more focused on improving
the solution.When the temperature reaches zero, the search ends, and the algorithm returns the current state as the best-found
solution.
● The rate at which the temperature decreases is critical. A slow decrease (cooling schedule) allows more exploration, which may yield
better results but takes longer. A fast decrease leads to quicker convergence but increases the risk of getting stuck in a local
optimum.

Probabilistic Acceptance:

● Unlike greedy algorithms, simulated annealing sometimes accepts worse solutions, making it less likely to get trapped in local minima
and more likely to find a global optimum.



Advantages:
● Can escape local optima by accepting worse solutions early in the search.
● Simple to implement and widely applicable to a variety of optimization
problems.
Disadvantages:
● Performance depends on the cooling schedule, which may need to be
carefully tuned for specific problems.
● Does not guarantee finding the global optimum, though it often finds
good solutions.
Local beam search
•The local beam search algorithm keeps track of k states rather than just one.
It begins with k randomly generated states. At each step, all the successors of
all k states are generated. If any one is a goal, the algorithm halts. Otherwise,
it selects the k best successors from the complete list and repeats.
•In a local beam search, useful information is passed among the parallel
search threads. In effect, the states that generate the best successors say to
the others, “Come over here, the grass is greener!” The algorithm quickly
abandons unfruitful searches and moves its resources to where the most
progress is being made
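A minimal Python sketch of this idea (random_state, neighbors, and value are assumed problem-specific helpers; k and the step limit are illustrative):

import heapq

def local_beam_search(random_state, neighbors, value, k=10, steps=100):
    beam = [random_state() for _ in range(k)]
    for _ in range(steps):
        pool = [n for s in beam for n in neighbors(s)]   # successors of all k states
        if not pool:
            break
        beam = heapq.nlargest(k, pool, key=value)        # keep the k best successors overall
    return max(beam, key=value)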



•In its simplest form, local beam search can suffer from a lack of diversity among the k
states—they can quickly become concentrated in a small region of the state space, making
the search little more than an expensive version of hill climbing. A variant called stochastic
beam search, analogous to stochastic hill climbing, helps alleviate this problem.

•Instead of choosing the best k from the pool of candidate successors, stochastic beam
search chooses k successors at random, with the probability of choosing a given successor
being an increasing function of its value. Stochastic beam search bears some resemblance
to the process of natural selection, whereby the “successors” (offspring) of a “state”
(organism) populate the next generation according to its “value” (fitness).



Genetic Algorithm
A genetic algorithm (or GA) is a variant of stochastic beam search in which
successor states are generated by combining two parent states rather than by
modifying a single state.



Example
● Each state, or individual, is represented as a string over a finite alphabet. For example, an 8-queens state must specify the positions of
8 queens, each in a column of 8 squares.
● A fitness function should return higher values for better states, so, for the 8-queens problem we use the number of non attacking pairs
of queens, which has a value of 28 for a solution.
● The values of the four states are 24, 23, 20, and 11. In this particular variant of the genetic algorithm, the probability of being chosen
for reproducing is directly proportional to the fitness score, and the percentages are shown next to the raw scores.
● In (c), two pairs are selected at random for reproduction, in accordance with the probabilities in (b). Notice that one individual is
selected twice and one not at all.
● For each pair to be mated, a crossover point is chosen randomly from the positions in the string. In Figure 4.6, the crossover points are
after the third digit in the first pair and after the fifth digit in the second pair.
● In (d), the offspring themselves are created by crossing over the parent strings at the crossover point. For example, the first child of the
first pair gets the first three digits from the first parent and the remaining digits from the second parent, whereas the second child gets
the first three digits from the second parent and the rest from the first parent.



● The 8-queens states involved in this reproduction step are shown in Figure
● The example shows that when two parent states are quite different, the crossover
operation can produce a state that is a long way from either parent state.
● It is often the case that the population is quite diverse early on in the process, so
crossover (like simulated annealing) frequently takes large steps int the state space
early in the search process and smaller steps later on when most individuals are quite
similar.
● Finally, in (e), each location is subject to random mutation with a small independent
probability. One digit was mutated in the first, third, and fourth offspring.
● In the 8-queens problem, this corresponds to choosing a queen at random and moving
it to a random square in its column. Figure 4.8 describes an algorithm that implements
all these steps.
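The steps above translate into a short program; a minimal Python sketch for 8-queens (population size, mutation rate, and the fitness-plus-one selection weights are illustrative assumptions):

import random

def fitness(ind):
    # number of non-attacking pairs; 28 for a solution
    non_attacking = 28
    for c1 in range(8):
        for c2 in range(c1 + 1, 8):
            r1, r2 = ind[c1], ind[c2]
            if r1 == r2 or abs(r1 - r2) == c2 - c1:
                non_attacking -= 1
    return non_attacking

def reproduce(x, y):
    c = random.randrange(1, 8)                 # random crossover point
    return x[:c] + y[c:]

def mutate(ind, rate=0.1):
    if random.random() < rate:                 # move one random queen to a random row
        col = random.randrange(8)
        ind = ind[:col] + (random.randrange(8),) + ind[col + 1:]
    return ind

def genetic_algorithm(pop_size=100, generations=1000):
    population = [tuple(random.randrange(8) for _ in range(8)) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(p) for p in population]
        if max(scores) == 28:                  # a solution has appeared
            break
        weights = [s + 1 for s in scores]      # +1 avoids all-zero selection weights
        population = [mutate(reproduce(*random.choices(population, weights=weights, k=2)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)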



Defining sensorless problem
The physical problem P is defined by ACTIONS_P, RESULT_P, GOAL-TEST_P, and STEP-COST_P.

We can define sensorless problem by:

● Belief state

● Initial state

● Actions

● Transition model

● Goal Test

● Path Cost



Searching with Partial Observations
In partially observable environments, an agent's perceptions are insufficient to
determine the exact state.
The key concept required for solving partially observable problems is the
belief state.
Belief state represents the agent’s current belief about the possible physical
states it might be in, based on the actions it has taken and what it has sensed
so far.



Searching with no observation
When the agent’s percepts provide no information at all, we have what is called a sensorless problem or sometimes a
conformant problem.

Sensorless problems are quite often solvable, and they have advantages:

a) they don't rely on sensors working properly;
b) they avoid the high cost of sensing.



Example-Sensorless version of Vacuum world
● Assume that the agent knows the geography of its world, but
doesn’t know its location or the distribution of dirt.
● initial state could be any element of the set {1, 2, 3, 4, 5, 6, 7,
8}.
● Now, consider what happens if it tries the action Right.
● This will cause it to be in one of the states {2, 4, 6, 8}
● The agent now has more information, the action sequence
[Right,Suck] will always end up in one of the states {4, 8}.
● Finally, the sequence [Right,Suck,Left,Suck] is guaranteed to
reach the goal state 7 no matter what the start state.

We say that the agent can coerce the world into state 7.

To solve sensorless problems, we search in the space of belief states


rather than physical States.
● In belief-state space, the problem is fully observable because the agent always knows its own belief
state.
● The solution (if any) of any sensorless problem is always a sequence of actions.
1. Belief States:

● A belief state is a set of all possible physical states the agent might be in at a given time. Since the agent has no sensors to detect
its current state, it must consider multiple possibilities simultaneously.
● If the original problem has n physical states, the sensorless problem can have up to 2^n belief states (since the agent could be in any
subset of these physical states). Many belief states might be unreachable, depending on the problem.

2. Initial State:

● The agent’s starting belief state is usually the set of all possible states since it has no information about where it starts.
● In some problems, the agent might have more knowledge, so its initial belief state could be a smaller subset of possible states.

3. Actions:
● The set of actions the agent can take. In sensorless problems, the agent may be unsure which actions are legal in its current belief
state, since it doesn’t know its exact position.
● There are two possibilities:
○ If illegal actions have no effect: The agent can take the union of all possible actions from every state in the belief state.
This means the agent assumes it can perform any action that’s legal in any of the possible states.
○ If illegal actions are dangerous: The agent can take the intersection of actions, meaning it only performs actions that are
legal in all possible states within the belief state.



4. Transition Model (Result of Actions):
● This describes how the belief state changes after the agent takes an action. Since the agent doesn't know its exact state, it
computes the possible outcomes for each state in the belief state.
● How it works:
○ Deterministic actions: The result of an action is the set of states that can be reached by applying the action to all states in
the current belief state. The new belief state will be a subset of the current one.
○ Nondeterministic actions: If the action has multiple possible outcomes (nondeterministic), the new belief state can be
larger than the current one because the agent must consider more possibilities.

5. Goal Test:
● The agent achieves the goal if ALL the possible states in the belief state meet the goal condition.
● The goal is considered achieved only if, regardless of which state the agent is actually in, it has reached the goal. Even if the agent
accidentally reaches the goal in one state, it won’t know unless every possible state in its belief state has also reached the goal.

6. Path Cost:
● The cost of taking an action in a belief state. In sensorless problems, an action might have different costs in different states, which
complicates the calculation of path cost.
● How it works:
○ If the same action has different costs depending on the actual state, the agent needs to account for the range of possible costs.
○ To simplify, it’s often assumed that the action cost is the same across all states in the belief state, so the cost can be directly
transferred from the physical problem.



Incremental Belief-state search Algorithm
This algorithm is used to find the action sequence to reach the goal state from all initial belief states.

In sensorless vacuum world,

There are only 12 reachable belief states out of 2^8 = 256 possible belief states.

● Initial belief state is {1,2,3,4,5,6,7,8}


● Find an action sequence that works in all 8 states.
● First find a solution that work for state 1, then check if it work for state 2;
● If not go back and find a different solution for state 1 and so on

The incremental belief-state search must find one single solution that satisfies all states in
the belief state.
Figure: the physical state space vs. the belief-state space of the sensorless vacuum world.


Advantage of incremental approach
Early Detection of Failures: A key benefit of this incremental approach is that it can quickly identify when a belief state is
unsolvable. Typically, if a belief state cannot be solved, it’s likely that a small subset of that state is also unsolvable. This allows
the algorithm to avoid unnecessary explorations.

Efficiency Gains: This ability to prune the search space can significantly speed up the problem-solving process, especially when
dealing with large belief states.



Searching in partially observable environments

For example, we might define the local-sensing vacuum world, in which the agent has a position
sensor and a local dirt sensor but no sensor capable of detecting dirt in other
squares.

PERCEPT(s) function returns the percept received in a given state. For example, in
the local-sensing vacuum world, the PERCEPT in state 1 is [A, Dirty].

When observations are partial, it will usually be the case that several states could
have produced any given percept. For example, the percept [A, Dirty] is produced
by state 3 as well as by state 1. Hence, given this as the initial percept, the initial
belief state for the local-sensing vacuum world will be {1, 3}.

The ACTIONS, STEP-COST, and GOAL-TEST are constructed from the underlying
physical problem just as for sensorless problems, but the transition model is a bit
more complicated.



example of transitions in local-sensing
vacuum worlds.
(a) In the deterministic world, Right is
applied in the initial belief state, resulting in
a new belief state with two possible
physical states; for those states, the
possible percepts are [B, Dirty] and [B,
Clean], leading to two belief states, each of
which is a singleton.



We can think of transitions from one belief state to the next for a particular action as occurring in three stages, as
shown in Figure

• The prediction stage is the same as for sensorless problems: given the action a in belief state b, the predicted belief
state is b̂ = PREDICT(b, a).

• The observation prediction stage determines the set of percepts o that could be observed in the predicted belief
state:

POSSIBLE-PERCEPTS(b̂) = {o : o = PERCEPT(s) and s ∈ b̂}.

• The update stage determines, for each possible percept, the belief state that would result from the percept. The
new belief state b_o is just the set of states in b̂ that could have produced the percept:

b_o = UPDATE(b̂, o) = {s : o = PERCEPT(s) and s ∈ b̂}.

Each updated belief state b_o can be no larger than the predicted belief state b̂.

Putting these three stages together, we obtain the possible belief states resulting from a given action and the
subsequent possible percepts:

RESULTS(b, a) = {b_o : b_o = UPDATE(PREDICT(b, a), o) and o ∈ POSSIBLE-PERCEPTS(PREDICT(b, a))}.
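These three stages translate almost directly into code; a sketch, under the assumption that the physical problem supplies deterministic result(s, a) and percept(s) functions and that belief states are frozensets of physical states:

def predict(b, a, result):
    return frozenset(result(s, a) for s in b)

def possible_percepts(b_hat, percept):
    return {percept(s) for s in b_hat}

def update(b_hat, o, percept):
    # keep only the states that could have produced percept o
    return frozenset(s for s in b_hat if percept(s) == o)

def results(b, a, result, percept):
    # maps each possible percept to the belief state it would yield
    b_hat = predict(b, a, result)
    return {o: update(b_hat, o, percept) for o in possible_percepts(b_hat, percept)}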
Game playing - Adversarial Search
In multiagent environments, each agent needs to consider the actions of other agents
and how they affect its own welfare. Competitive environments, in which the agents’
goals are in conflict, give rise to adversarial search problems—often known as
games.

Why do AI researchers study game playing?


1. It’s a good reasoning problem, formal and nontrivial. Games, unlike most of the
toy problems, are interesting because they are too hard to solve.
2. Direct comparison with humans and other computer programs is easy.



Adversarial Search
What Kinds of Games?

Mainly games of strategy with the following characteristics:

● Sequence of moves to play


● Rules that specify possible moves
● Rules that specify a payment/reward for each move
● Objective is to maximize your payment/reward

In AI, the most common games are of a rather specialized kind—deterministic, turn-taking, two-player, zero-sum
games of perfect information (such as chess).

In our terminology, this means deterministic, fully observable environments in which two agents act alternately and in which
the utility values at the end of the game are always equal and opposite.

For example, if one player wins the game of chess (+1), the other player necessarily loses (−1). It is this opposition
between the agents' utility functions that makes the situation adversarial.



Games vs. Search Problems
Unlike Search problems , Adversarial Search Problems have:

•Unpredictable opponent -> the solution must specify a move for every possible opponent
reply

•Time limits -> unlikely to reach the goal, must approximate



Two Player Game

Figure: a game tree for a two-player game, in which the two players move alternately.


Formal Definition of Game
● We will consider games with two players, whom we will call MAX and MIN. MAX moves first, and then they take turns
moving until the game is over.

● At the end of the game, points are awarded to the winning player and penalties are given to the loser. A game can
be formally defined as a search problem with the following components:

○ S0: The initial state, which specifies how the game is set up at the start.
○ PLAYER(s): Defines which player has the move in a state.
○ ACTIONS(s): Returns the set of legal moves in a state.
○ RESULT(s, a): The transition model, which defines the result of a move.
○ TERMINAL-TEST(s): A terminal test, which is true when the game is over and false otherwise. States where the
game has ended are called terminal states.
○ UTILITY(s, p): A utility function (also called an objective function or payoff function) defines the final numeric
value for a game that ends in terminal state s for a player p. In chess, the outcome is a win, loss, or draw, with
values +1, 0, or ½.

A zero-sum game is defined as one where the total payoff to all players is the same for every instance of the game. Chess is
zero-sum because every game has payoff of either 0+1, 1+0, or ½+½.



Game Tree
● The initial state and legal moves for each side
define the game tree for the game.
● Figure shows the part of the game tree for tic-tac-toe
(noughts and crosses).
● From the initial state, MAX has nine possible moves.
● Play alternates between MAX’s placing an X and MIN’s
placing an O until we reach leaf nodes corresponding to
the terminal states such that one player has three in a
row or all the squares are filled.
● The number on each leaf node indicates the utility
value of the terminal state from the point of view of
MAX; high values are assumed to be good for MAX
and bad for MIN. It is the MAX’s job to use the search
tree (particularly the utility of terminal states) to
determine the best move.



Optimal Decisions in Games
● In normal search problem, the optimal solution would be a sequence of
move leading to a goal state – a terminal state that is a win.

● In a game, on the other hand, MIN has something to say about it. MAX
therefore must find a contingent strategy, which specifies MAX’s move
in the initial state, then MAX’s moves in the states resulting from every
possible response by MIN, then MAX’s moves in the states resulting from
every possible response by MIN to those moves, and so on.
● An optimal strategy leads to outcomes at least as good as any other
strategy when one is playing an infallible opponent.
Min-Max Terminology
•move: a move by both players
•ply: a half-move
•utility function:the function applied to leaf nodes
•backed-up value
–of a max-position : the value of its largest successor
–of a min-position : the value of its smallest successor
•minimax procedure: search down several levels; at the bottom level apply the utility
function, back-up values all the way up to the root node, and that node selects the move.



Example
● The possible moves for MAX at the root node are labeled a1, a2, and a3.The possible replies to a1 for MIN are b1, b2, b3, and so on.

● This particular game ends after one move each by MAX and MIN.

● The utilities of the terminal states in this game range from 2 to 14.

● The terminal nodes on the bottom level get their utility values from the game’s UTILITY function. The first MIN node, labeled B, has three successor states with values 3, 12, and 8, so its
minimax value is 3.

● Similarly, the other two MIN nodes have minimax value 2. The root node is a MAX node; its successor states have minimax values 3, 2, and 2; so it has a minimax value of 3. We can also
identify the minimax decision at the root: action a1 is the optimal choice for MAX because it leads to the state with the highest minimax value.



Minimax algorithm
● Minimax is a specialized search algorithm used in game playing to determine the optimal sequence of moves
for a player in a zero-sum game involving two players.

● The algorithm computes the minimax decision for the current state and uses a depth-first search algorithm for the
exploration of the complete game tree.

● It operates by recursively using backtracking to simulate all possible moves in the game, effectively searching
through a game tree.The recursion proceeds all the way down to the leaves of the tree, and then the minimax values
are backed up through the tree as the recursion unwinds.

● In the game tree, there are two types of nodes: MAX nodes, where the algorithm selects the move with the maximum
value, and MIN nodes, where the algorithm selects the move with the minimum value.

• Games: It's typically applied to perfect information games like chess, checkers, and tic-tac-toe.

● In the algorithm, the MAX player's best value is initialized to negative infinity (−∞) as the worst case, while MIN's is
initialized to positive infinity (+∞).
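A compact recursive sketch of the procedure in Python (the game interface follows the formal definition above; the method names are otherwise assumptions):

def minimax_value(game, state, maximizing):
    if game.terminal_test(state):
        return game.utility(state)
    if maximizing:                                    # MAX node: take the largest successor value
        return max(minimax_value(game, game.result(state, a), False)
                   for a in game.actions(state))
    else:                                             # MIN node: take the smallest successor value
        return min(minimax_value(game, game.result(state, a), True)
                   for a in game.actions(state))

def minimax_decision(game, state):
    # MAX chooses the action leading to the successor with the highest minimax value
    return max(game.actions(state),
               key=lambda a: minimax_value(game, game.result(state, a), False))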



Example

Consider a two-ply game tree: MAX moves at the root A; its children B and C are MIN nodes; B's children D, E and C's children F, G are MAX nodes whose leaves carry the utilities

D: -1, 8    E: -3, -1    F: 2, 1    G: -3, 4

Backing the values up the tree:

MAX level: D = max(-1, 8) = 8, E = max(-3, -1) = -1, F = max(2, 1) = 2, G = max(-3, 4) = 4
MIN level: B = min(8, -1) = -1, C = min(2, 4) = 2
Root (MAX): A = max(-1, 2) = 2

So the minimax value of A is 2, and MAX's optimal move at the root is the one leading to C.
Evaluating efficiency of Minimax
• Completeness: The algorithm is complete and will definitely find a solution if one exists(If tree is finite)

• Optimality: It guarantees to find the optimal strategy for both players. But is not able to exploit opponent
weakness against suboptimal opponent.

• Time Complexity: The time complexity is O(b^m), where b is the branching factor (the average number of
child nodes for each node in the tree) and m is the maximum depth of the tree.

• Space Complexity: The space complexity is O(bm), because depth-first exploration needs to store only a single path
from the root to a leaf, together with the unexpanded siblings at each level.



Limitations:
• The algorithm can be slow for complex games with many possible moves
(like chess).

• A game like chess can have around 35 possible moves at any point, leading
to a vast number of possible game states to evaluate.

For chess: average game length m ≈ 100, so the search space is b^m ≈ 35^100 ≈ 10^154.



Redundancy in Minimax Algorithm

The same position can often be reached by different sequences of moves, so plain minimax may evaluate identical subtrees repeatedly.
Alpha-Beta Pruning
● The problem with minimax search is that the number of game states it has to
examine is exponential in the number of moves.
● By performing pruning, we can eliminate large part of the tree from consideration.
● Alpha beta pruning, when applied to a minimax tree, returns the same move as
minimax would, but prunes away branches that cannot possibly influence the final
decision.
Definition: Alpha-beta pruning is an optimization technique for the minimax
algorithm. It significantly reduces the number of nodes that need to be evaluated in
the game tree without affecting the final result. Alpha-beta pruning skips the
evaluation of certain branches in the game tree by identifying moves that are evidently
worse than previously examined moves.



● Alpha Beta pruning gets its name from the following two parameters that describe bounds
○ α : the value of the best (i.e., highest-value) choice we have found so far at any choice
point along the path of MAX.
a lower bound on the value that a max node may ultimately be assigned
○ β: the value of best (i.e., lowest-value) choice we have found so far at any choice point
along the path of MIN.
an upper bound on the value that a minimizing node may ultimately be assigned
Alpha-beta search updates the values of α and β as it goes along and prunes the remaining branches
at a node (i.e., terminates the recursive call) as soon as the value of the current node is known to be
worse than the current α or β value for MAX or MIN, respectively.



Key points in Alpha-beta Pruning
• Initial Values: Alpha starts at negative infinity (-∞), and Beta starts at positive infinity (∞).

• When to Prune: A branch is pruned when the minimizer's best option (Beta) is less than the maximizer's best
option (Alpha) since the maximizer will never allow the game to go down a path that could lead to a worse outcome
than it has already found.

The condition for Alpha-beta Pruning is that α >= β.

• Each node has to keep track of its alpha and beta values. Alpha can be updated only when it’s MAX’s turn and,
similarly, beta can be updated only when it’s MIN’s chance.

• MAX will update only alpha values and MIN player will update only beta values.

• When backing up the tree, node values (not α and β values) are passed up to parent nodes.

• α and β values are only passed down to child nodes.
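A Python sketch of alpha-beta over the same hypothetical game interface used for the minimax sketch above; it returns the same value while pruning branches:

import math

def alphabeta(game, state, alpha=-math.inf, beta=math.inf, maximizing=True):
    if game.terminal_test(state):
        return game.utility(state)
    if maximizing:
        v = -math.inf
        for a in game.actions(state):
            v = max(v, alphabeta(game, game.result(state, a), alpha, beta, False))
            alpha = max(alpha, v)       # MAX updates only alpha
            if alpha >= beta:
                break                   # prune: MIN will never allow this branch
        return v
    else:
        v = math.inf
        for a in game.actions(state):
            v = min(v, alphabeta(game, game.result(state, a), alpha, beta, True))
            beta = min(beta, v)         # MIN updates only beta
            if alpha >= beta:
                break                   # prune
        return v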



Time complexity of Alpha-beta and benefits
Pruning does not affect final result. This means that it gets the exact same result as does full minimax.

Good move ordering improves effectiveness of pruning

The effectiveness of alpha-beta pruning is highly dependent on the order in which the successors are examined.

It might be worthwhile to try to examine first the successors that are likely to be best. In such a case, it turns out
that alpha-beta needs to examine only O(b^(d/2)) nodes to pick the best move, instead of O(b^d) for minimax. This means
that the effective branching factor becomes √b instead of b: for chess, about 6 instead of 35.

Benefits:

• Efficiency: Alpha-beta pruning can greatly reduce the number of nodes that are explored in the game tree, leading
to faster decision-making.

• Optimality: The final decision made by the alpha-beta pruning is the same as the decision that would be made by
the full minimax algorithm, ensuring optimality is not compromised.

• Widely Applicable: It can be applied to any game tree, not just two-player games, though the algorithm assumes
perfect play on both sides.
Constraint Satisfaction Problem
● Till now we explored the idea that problems can be solved by searching in a space of
states. These states can be evaluated by domain-specific heuristics and tested to see
whether they are goal states. From the point of view of the search algorithm, however,
each state is atomic, or indivisible—a black box with no internal structure.

● CSP describes a way to solve a wide variety of problems more efficiently. We use a
factored representation for each state: a set of variables, each of which has a value.
● A problem is solved when each variable has a value that satisfies all the constraints on the
variable. A problem described this way is called a constraint satisfaction problem.



Definition
A constraint satisfaction problem consists of three components, X,D, and C:

X is a set of variables, {X1, . . . ,Xn}.

D is a set of domains, {D1, . . . ,Dn}, one for each variable. Each domain Di consists of a set of allowable values, {v1, . . . ,
vk} for variable Xi.

C is a set of constraints that specify allowable combinations of values.

Each constraint Ci consists of a pair (scope, rel ) where scope is a tuple of variables that participate in the constraint and rel
is a relation that defines the values that those variables can take on.

A relation can be represented as an explicit list of all tuples of values that satisfy the constraint, or as an abstract relation that
supports two operations: testing if a tuple is a member of the relation and enumerating the members of the relation.

For example, if X1 and X2 both have the domain {A,B}, then the constraint saying the two variables must have different values
can be written as ⟨(X1,X2), {(A,B), (B,A)}⟩ or as ⟨(X1,X2), X1 ≠ X2⟩.



Assignment
● Each state in a CSP is defined by an assignment of values to some of the variables, {Xi = vi, Xj = vj, ...};

● An assignment that does not violate any constraints is called a consistent or legal assignment;

● A complete assignment is one in which every variable is assigned;

● A solution to a CSP is a consistent, complete assignment;

● A partial assignment is one that assigns values to only some of the variables.



Examples of CSPs
● Sudoku Puzzles: In Sudoku, the variables are the empty cells, the domains are numbers from 1
to 9, and the constraints ensure that no number is repeated in a row, column, or 3x3 subgrid.
● Scheduling Problems: In university course scheduling, variables might represent classes,
domains represent time slots, and constraints ensure that classes with overlapping students or
instructors cannot be scheduled simultaneously.
● Map Coloring: In the map coloring problem, variables represent regions or countries, domains
represent available colors, and constraints ensure that adjacent regions must have different
colors.



Formulating a CSP
Variables X={WA, NT, Q, NSW, V, SA, T}
Domains Di = {red, green, blue}
Constraints: adjacent regions must have different colors. e.g., WA ≠ NT



Solutions are complete and consistent assignments, e.g., WA = red, NT =
green, Q = red, NSW = green, V = red, SA = blue, T = green



Constraint graph
Constraint graph: nodes are variables, arcs show constraints

Binary CSP: each constraint relates at most two variables

Every non-binary CSP can be converted into an equivalent binary CSP.

General-purpose CSP algorithms use the graph structure to speed up search.


E.g., Tasmania is an independent subproblem!



Types of Variables
Discrete variables- Discrete variables are those that can only take on specific values from a finite set (also called the domain of
the variable)

A. Finite Domains CSPs - They have a limited number of possible values for each variable. The problem space is
manageable but can grow exponentially as the number of variables increases.

If you have n variables, each with a domain of size d, then the total number of possible assignments to all variables is
O(d^n). For example, if each variable can take one of 3 values, and there are 5 variables, there are 3^5 = 243 possible
assignments.

Eg- Map coloring: The regions of a map (variables) can be assigned a color (values) from a finite set like {Red, Green,
Blue}.

B. Infinite Domains - Some CSPs deal with variables that can take on values from an infinite set, such as all integers or
all strings. These require a constraint language to express relationships, since the domains cannot be enumerated.

● Eg Job Scheduling: The variables might be the start and end days of jobs, which can take on any integer value
representing a day. For instance, a constraint might be "Job 1 must start at least 5 days before Job 3.”

Continuous Variables- These variables can take on any value within a continuous range (e.g., real numbers).

● Eg- In scheduling tasks like Hubble Telescope observations, the start and end times might be represented as continuous
variables because time can be infinitely subdivided (e.g., down to seconds, milliseconds, etc.).



Types of constraints
Unary constraints involve a single variable,

e.g., SA != green

Binary constraints involve pairs of variables,

e.g., SA != WA

Higher-order constraints involve 3 or more variables,

e.g., cryptarithmetic column constraints

Preferences (soft constraints), e.g., red is better than green

often representable by a cost for each variable assignment -> constrained optimization problems



Example - Node Coloring
V = {1, 2, 3, 4}

D = {Red, Green, Blue}

C = {adjacent nodes should not be of the same color}

(Node 1 is adjacent to 2, 3, and 4; node 4 is also adjacent to 2 and 3.)

Remaining domains after each assignment:

Step      1       2       3       4
Initial   R,G,B   R,G,B   R,G,B   R,G,B
1=R       R       G,B     G,B     G,B
2=G       R       G       G,B     B
3=B       R       G       B       ERROR (empty domain)
3=G       R       G       G       B

Assigning 3=B leaves node 4 with no legal color, so we backtrack and try 3=G, which yields the solution 1=R, 2=G, 3=G, 4=B.
Example: Cryptarithmetic
Cryptarithmetic: is a type of constraint satisfaction problem in which each alphabet and symbol is associated with a unique
digit.

Eg. TO+GO=OUT, SOME+MORE= MONEY, CROSS+ROADS= DANGER, BASE+BALL= GAMES, EAT+THAT=APPLE

Constraints:

1. Each alphabet/variable stands for a unique digit.
2. Digits range from 0-9.
3. Each column sum can generate a carry of at most 1.
4. The puzzle can be solved from either side (left or right).
5. The column sums of digits must match the puzzle as shown.



TO+GO=OUT

  T O
+ G O
-----
O U T

1. The leading digit of the three-digit sum is the carry out of the tens column, so O = 1.
2. Units column: O + O = 1 + 1 = 2, so T = 2 with no carry.
3. Tens column: T + G must produce U plus a carry of 1 into the hundreds column (to make O = 1), so 2 + G = U + 10, i.e., G = U + 8. Hence G = 9 (giving U = 1) or G = 8 (giving U = 0).
4. U cannot be 1 because O = 1 already, so U = 0 and G = 8. Check: 21 + 81 = 102.

Ans: O=1, T=2, G=8, U=0
SEND+MORE=MONEY



  S E N D
+ M O R E
---------
M O N E Y

1. M is the leftmost digit of the sum, so M = 1 (it is the carry out of the thousands column).
2. S + M (+ carry) must generate a carry, so assume S = 9; with S = 9 and M = 1, the thousands column gives O = 0.
3. Since O = 0, the hundreds column must receive a carry of 1, so 1 + E = N ...(1)
   The tens column gives N + R (+1) = E + 10 ...(2)
   Substituting (1) into (2): R (+1) = 9. R cannot be 9 because S = 9, therefore R = 8, with a carry of 1 coming from the units column.
4. D + E must therefore generate a carry, so D + E ≥ 12 (and Y ≠ 0, 1). Assume Y = 2, so D + E = 12; D ≠ 8, 9, so try D = 7, E = 5.
5. From (1), N = 6. Check the tens column: 6 + 8 + 1 = 15, writing 5 and carrying 1.

Letter codes: S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2, i.e., 9567 + 1085 = 10652.


CSP as a Search Problem
● Initial state: {} - all variables are unassigned.
● Successor Function: a value is assigned to one of the unassigned variable
with no conflict.
● Goal test: a complete assignment.
● Path cost: a constant cost for each step.

Solution appears at depth n if there are n variables.



Standard search formulation (incremental)
Let's start with the straightforward approach:

States are defined by the values assigned so far

- Initial state: the empty assignment, {}


- Successor function: assign a value to an unassigned variable that does not conflict with current assignment.
- fail if no legal assignments (not fixable!)

- Goal test: the current assignment is complete

1) This is the same for all CSPs!

2) Every solution appears at depth n with n variables

-> use depth-first search

3) Path is irrelevant, so can also use complete-state formulation

4) b = (n − ℓ)d at depth ℓ, hence n!·d^n leaves!



Example
Imagine we have a small map with 3 regions: A, B, and C. We want to color the map using 3 colors: Red, Green, and Blue. The goal is to color the
regions such that no two neighboring regions share the same color.

Variables: A, B, C (regions)
Domains: {Red, Green, Blue} (possible colors for each region)
Constraints: A ≠ B, B ≠ C, A ≠ C (no two adjacent regions should have the same color)

DFS Algo
Step 1: A = Red
Step 2: B = Red (No constraint check yet, DFS continues)
Step 3: C = Red (Again, no constraint check)
At this point, DFS has reached a complete assignment: A = Red, B = Red, C = Red. But this violates the constraints (A ≠ B, B ≠ C, A ≠ C). <Backtrack>
Backtrack to C: C = Green, Still, A = Red, B = Red, C = Green violates the constraint A ≠ B.<Backtrack>
Backtrack again, now try a different value for B.
Exploring All Possible Assignments:
Eventually, after a lot of unnecessary exploration, this naive DFS will find valid solutions:
● A = Red, B = Green, C = Blue
● A = Green, B = Red, C = Blue
But notice how it wastes a lot of time trying invalid combinations and continues exploring even when constraints are violated early on. This is
inefficient



Backtracking search
Variable assignments are commutative, i.e., [WA=red then NT =green] same as [NT =green then WA=red]

Only need to consider assignments to a single variable at each node

- b = d and there are d^n leaves

Depth-first search for CSPs with single-variable assignments is called backtracking search

The term backtracking search is used for a depth-first search that chooses values for one variable at a time and backtracks
when a variable has no legal values left to assign.

It repeatedly chooses an unassigned variable, and then tries all values in the domain of that variable in turn, trying to find a
solution. If an inconsistency is detected, then returns failure, causing the previous call to try another value.

Backtracking search is the basic uninformed algorithm for CSPs

Can solve n-queens for n = 25



Algorithm
function BACKTRACKING-SEARCH(csp) returns solution or failure #main function that initiates the search process
return RECURSIVE-BACKTRACKING({}, csp)

function RECURSIVE-BACKTRACKING(assignment, csp) returns solution or failure # explore possible assignments of values to variables in a recursive
manner, checking for consistency at each step.

if assignment is complete then


return assignment

var ← SELECT-UNASSIGNED-VARIABLE(csp, assignment)

for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do #Order-Domain-Values function generates the sequence of values to try.
if value is consistent with assignment given constraints[csp] then
add {var = value} to assignment
result ← RECURSIVE-BACKTRACKING(assignment, csp)

if result ≠ failure then


return result

remove {var = value} from assignment #If no valid assignment can be found after trying all possible values for the selected variable, the algorithm
removes the most recent variable-value assignment (undoing the last step) and backtracks to try a different value or variable.

return failure
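A compact Python sketch of this recursive backtracking procedure, applied to the 3-region map-coloring CSP from the earlier example (helper names are illustrative):

def backtracking_search(variables, domains, conflicts):
    def recurse(assignment):
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)   # select an unassigned variable
        for value in domains[var]:
            if all(not conflicts(var, value, v2, x2) for v2, x2 in assignment.items()):
                assignment[var] = value
                result = recurse(assignment)
                if result is not None:
                    return result
                del assignment[var]                               # undo the assignment and backtrack
        return None
    return recurse({})

# Map coloring: A, B, C all pairwise adjacent.
adjacent = {('A', 'B'), ('B', 'A'), ('B', 'C'), ('C', 'B'), ('A', 'C'), ('C', 'A')}
conflicts = lambda v1, x1, v2, x2: (v1, v2) in adjacent and x1 == x2
solution = backtracking_search(['A', 'B', 'C'],
                               {v: ['Red', 'Green', 'Blue'] for v in 'ABC'},
                               conflicts)
print(solution)   # -> {'A': 'Red', 'B': 'Green', 'C': 'Blue'}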



Backtracking example

Figure: part of the search tree for the map-coloring problem, assigning regions one at a time.
Improving backtracking efficiency
General-purpose methods can give huge gains in speed:

1. Which variable should be assigned next?

2. In what order should its values be tried?

3. Can we detect inevitable failure early?

4. Can we take advantage of problem structure?



Minimum remaining values
● Always select the variable with the fewest remaining legal values (smallest domain) to assign next.
● It increases the likelihood of detecting a failure early by focusing on the most constrained variables first. This
reduces the chance of deeper backtracking.



Degree heuristic
● If there’s a tie in MRV, use the degree heuristic, which chooses the variable that is involved in the largest number of
constraints on other unassigned variables.
● This heuristic aims to reduce future constraints by focusing on the most “constraining” variables, making it easier to find a
valid solution later



Least constraining value
● When selecting a value for a variable, choose the value that rules out the fewest options for the other unassigned
variables.
● LCV reduces future conflict and keeps options open, making it more likely to find a solution without backtracking.



Forward checking

● One way to make better use of constraints during search is called forward checking. Whenever a variable X is assigned,
the forward checking process looks at each unassigned variable Y that is connected to X by a constraint and
deletes from Y ’s domain any value that is inconsistent with the value chosen for X.

● By detecting potential conflicts early, forward checking avoids exploring paths that would eventually fail, reducing
unnecessary search.



Constraint propagation: Inference in CSPs
Forward checking propagates information from assigned to unassigned variables but does not provide early
detection of failures.

Constraint propagation repeatedly enforces constraints locally by propagating the implications of a
constraint on one variable onto other variables. The idea is that as constraints are applied, the domains of
the variables become smaller, making the problem easier to solve.

In CSPs there is a choice: an algorithm can search (choose a new variable assignment from several possibilities)
or do a specific type of inference called constraint propagation: using the constraints to reduce the number
of legal values for a variable, which in turn can reduce the legal value for another variable, and so on.

Constraint propagation may be intertwined with search, or it may be done as a preprocessing step, before
search starts. Sometimes this preprocessing can solve the whole problem, so no search is required at all



Propagation Techniques:
Method to enforce Constraint Propagation:

● Arc Consistency: Ensures that for every value in the domain of a variable, there is some compatible value in the domain
of other variables it’s constrained with. If a value for a variable has no valid partner in another variable, that value is
removed from the domain.





K- Consistency
A CSP is k-consistent if, for any consistent assignment to k-1 variables, there is a consistent
assignment for the k-th variable
1-Consistency (node consistency): ensures that every individual variable satisfies its unary
constraints (constraints that apply to just one variable).
Each variable by itself is consistent (has a non-empty domain).
2-Consistency (arc consistency): every consistent assignment to one variable can be extended to a second variable.
3-Consistency (path consistency): any consistent assignment to a pair of adjacent variables can be extended to a third
variable, ensuring that ternary constraints can be satisfied.



Arc Consistency Algorithm
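A sketch of the standard AC-3 arc-consistency algorithm (the csp object exposing variables, domains as sets, neighbors, and constraints(Xi, x, Xj, y) -> bool is an assumed interface):

from collections import deque

def ac3(csp):
    queue = deque((xi, xj) for xi in csp.variables for xj in csp.neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(csp, xi, xj):
            if not csp.domains[xi]:
                return False                     # a domain emptied: inconsistency detected
            for xk in csp.neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))       # re-check arcs pointing into Xi
    return True

def revise(csp, xi, xj):
    # remove values of Xi that have no compatible value in Xj's domain
    revised = False
    for x in set(csp.domains[xi]):
        if not any(csp.constraints(xi, x, xj, y) for y in csp.domains[xj]):
            csp.domains[xi].remove(x)
            revised = True
    return revised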

