Unit 2: Constraint Propagation

The document discusses constraint propagation and backtracking search as techniques for solving computational problems. Constraint propagation reduces the domains of decision variables based on constraints until no further reductions can be made or a failure occurs, while backtracking explores possible solutions incrementally and eliminates those that do not satisfy constraints. Additionally, it covers the minimax algorithm and its optimization through alpha-beta pruning, along with applications and properties of these algorithms in artificial intelligence and game playing.


1. Constraint Propagation

Constraint propagation is the process of communicating the domain reduction of a decision variable
to all of the constraints that are stated over this variable. This process can result in more domain
reductions. These domain reductions, in turn, are communicated to the appropriate constraints. This
process continues until no more variable domains can be reduced, or until a domain becomes
empty and a failure occurs. An empty domain during the initial constraint propagation means
that the model has no solution.

Example for constraint propagation

For example, consider the decision variables y with an initial domain [0..10], z with an initial domain
[0..10] and t with an initial domain [0..1], and the constraints

y + 5*z <= 4
t != z
t != y

over these three variables.

Domain reduction for the constraint y + 5*z <= 4 reduces the domain of y to [0..4] and the
domain of z to [0]. The variable z is thus fixed to a single value. Constraint propagation then
invokes domain reduction for every constraint involving z. Domain reduction is invoked again
for the constraint y + 5*z <= 4, but the variable domains cannot be reduced further. Domain
reduction for the constraint t != z is then invoked, and because z is fixed to 0, the constraint
removes the value 0 from the domain of t. The variable t is now fixed to the value 1, and
constraint propagation invokes domain reduction for every constraint involving t, namely
t != z and t != y. The only constraint that can reduce domains further is t != y: domain
reduction removes the value 1 from the domain of y.

Constraint propagation is performed on the constraints involving y; however, no more domain
reduction can be achieved, and the final domains are:

 y = [0 2..4],
 z = [0] and
 t = [1].
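As a rough illustration (not taken from any particular solver), the propagation loop for this example can be sketched in Python. Domains are plain sets, and each constraint is a pruning function; the helper names prune_linear and prune_neq are invented for this sketch.

```python
# A minimal sketch of constraint propagation for the example above.
# Each constraint prunes the domains it involves and reports whether
# it changed anything; the loop repeats until a fixed point.

def prune_linear(doms):
    """y + 5*z <= 4: keep only values that can still satisfy it."""
    changed = False
    y_ok = {v for v in doms['y'] if any(v + 5 * w <= 4 for w in doms['z'])}
    z_ok = {w for w in doms['z'] if any(v + 5 * w <= 4 for v in doms['y'])}
    if y_ok != doms['y']:
        doms['y'], changed = y_ok, True
    if z_ok != doms['z']:
        doms['z'], changed = z_ok, True
    return changed

def prune_neq(a, b):
    """a != b: if one side is fixed, remove its value from the other."""
    def prune(doms):
        changed = False
        for x, other in ((a, b), (b, a)):
            if len(doms[x]) == 1:
                v = next(iter(doms[x]))
                if v in doms[other]:
                    doms[other] = doms[other] - {v}
                    changed = True
        return changed
    return prune

domains = {'y': set(range(11)), 'z': set(range(11)), 't': {0, 1}}
constraints = [prune_linear, prune_neq('t', 'z'), prune_neq('t', 'y')]

# Propagate until no constraint can reduce a domain any further.
while any(c(domains) for c in constraints):
    if any(not d for d in domains.values()):
        raise ValueError("empty domain: the model has no solution")

print(domains)  # y -> {0, 2, 3, 4}, z -> {0}, t -> {1}
```

Running the loop reproduces exactly the final domains derived above: y = [0 2..4], z = [0] and t = [1].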

2. Backtracking Search

Backtracking can be defined as a general algorithmic technique that systematically searches
every possible combination of choices in order to solve a computational problem.

Explanation of Backtracking search


Backtracking is an algorithmic technique for solving problems recursively by trying to build a
solution incrementally, one piece at a time, and removing those partial solutions that fail to
satisfy the constraints of the problem at any point (that is, at any level of the search tree).
Backtracking can be seen as an improvement on the brute-force approach. The idea is to search
for a solution among all the available options: we start from one possible option, and if the
problem is solved with that choice we return the solution; otherwise we backtrack and select
another option from those remaining. It may also happen that none of the options leads to a
solution, in which case backtracking reports that the problem has no solution. Backtracking is
a form of recursion, because the process of choosing among the available options is repeated
recursively until a solution is found or the final state is reached. In short, at every step
backtracking eliminates the choices that cannot lead to a solution and proceeds with the
choices that have the potential to reach one.

Backtracking algorithm

Backtrack(s)
    if s is not a valid partial solution
        return false
    if s is a new complete solution
        add s to the list of solutions
    backtrack(expand s)

The final algorithm is as follows:

 Step 1: Return success if the current point is a viable solution.


 Step 2: Otherwise, if all paths have been exhausted (i.e., the current point is an endpoint),
return failure because there is no feasible solution.

 Step 3: If the current point is not an endpoint, backtrack and explore other points, then repeat
the preceding steps.

State-Space Tree

A state-space tree is a tree that represents all of the possible states of the problem, from the
root as the initial state to the leaves as terminal states.

An Example of Backtracking Algorithm

You need to arrange the three letters x, y, and z so that z cannot be next to x.

According to the backtracking approach, you first construct a state-space tree, look for all
possible solutions, and compare them against the given constraint. You keep only the solutions
that satisfy the constraint.

The possible arrangements are: (x,y,z), (x,z,y), (y,x,z), (y,z,x), (z,x,y) and (z,y,x).
However, only the arrangements that satisfy the constraint are valid, which keeps only (x,y,z)
and (z,y,x) in the final solution set.
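This example can be sketched as a small Python backtracking routine (the function name arrangements is our own, chosen for illustration): build the arrangement one letter at a time, and abandon any branch as soon as z lands next to x.

```python
# Backtracking over letter arrangements: extend a partial tuple one
# letter at a time, pruning any branch where z is placed next to x.

def arrangements(letters, partial=()):
    if len(partial) == len(letters):
        yield partial
        return
    for ch in letters:
        if ch in partial:
            continue
        # Constraint check: reject the branch as soon as z is next to x.
        if partial and {partial[-1], ch} == {'x', 'z'}:
            continue  # backtrack: this branch cannot lead to a solution
        yield from arrangements(letters, partial + (ch,))

solutions = list(arrangements(('x', 'y', 'z')))
print(solutions)  # [('x', 'y', 'z'), ('z', 'y', 'x')]
```

Of the six possible arrangements, only the two that satisfy the constraint survive, matching the solution set derived above.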

Types of Backtracking Algorithm

Backtracking algorithms are classified into two types:

1. Algorithm for recursive backtracking

2. Non-recursive backtracking algorithm

Algorithm for Recursive Backtracking

Because backtracking performs a postorder traversal of the state-space tree, it is naturally
described recursively:

Algorithm Backtrack(s)
// This scheme describes the backtracking process using recursion.
// The first s-1 values z[1], z[2], ..., z[s-1] of the solution
// vector z[1:n] have already been assigned. z[] and n are global.
{
    for each z[s] ∈ T(z[1], ..., z[s-1]) do
    {
        if (Bk(z[1], z[2], ..., z[s]) != 0) then
        {
            if (z[1], z[2], ..., z[s] is a path to an answer node) then
                write(z[1:s]);
            if (s < n) then Backtrack(s + 1);
        }
    }
}

Non-Recursive Backtracking Algorithm

The non-recursive version replaces the recursion with an explicit loop over the position s in
the solution vector:

Algorithm Backtrack()
// This scheme describes the backtracking process iteratively.
// All solutions in z[1:n] are generated and printed.
{
    s = 1;
    while (s != 0) do
    {
        if (there remains an untried z[s] ∈ X(z[1], ..., z[s-1])
            and Bk(z[1], ..., z[s]) is true) then
        {
            if (z[1], ..., z[s] is a path to an answer node) then
                write(z[1:s]);
            s = s + 1;
        }
        else s = s - 1;   // backtrack to the previous position
    }
}

X() returns the set of all possible values that can be assigned to the next component z[s] of
the solution vector, and z[s] only takes values that satisfy the bounding function Bk.
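As an illustrative sketch of the iterative scheme (assuming, for concreteness, a bounding function that simply requires all chosen values to be distinct, so the algorithm enumerates permutations of 0..n-1), the while-loop can be rendered as runnable Python:

```python
# Iterative backtracking: the index s walks the solution vector z,
# advancing when an untried valid value exists at position s and
# stepping back (s -= 1) when that position is exhausted.

def iterative_backtrack(n):
    z = [-1] * n          # -1 marks "no value tried yet"
    solutions = []
    s = 0
    while s >= 0:
        z[s] += 1                      # try the next untried value
        if z[s] >= n:                  # no values left: backtrack
            z[s] = -1
            s -= 1
        elif z[s] not in z[:s]:        # bounding function Bk holds
            if s == n - 1:
                solutions.append(tuple(z))  # path to an answer node
            else:
                s += 1                 # extend the partial solution
    return solutions

print(len(iterative_backtrack(3)))  # 6 permutations of 0, 1, 2
```

The explicit stack of recursive calls is replaced by the single index s, exactly as in the scheme above.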

Applications of Backtracking Algorithm

1. To Find All Hamiltonian Paths Present in a Graph.


2. To Solve the N Queen Problem.

3. Maze Solving Problems

4. The Knight's Tour Problem
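As an illustrative sketch of application 2 (the names solve_n_queens, safe, and place are our own, not from a specific source), a recursive backtracking solver places one queen per row and backtracks as soon as a placement conflicts with an earlier queen:

```python
# Recursive backtracking for N-Queens: cols[r] is the column of the
# queen in row r; a branch is abandoned as soon as a conflict appears.

def solve_n_queens(n):
    solutions = []

    def safe(cols, col):
        """True if a queen at (len(cols), col) attacks no earlier queen."""
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)
                place(cols)       # explore this branch
                cols.pop()        # backtrack

    place([])
    return solutions

print(len(solve_n_queens(4)))  # 2 solutions for a 4x4 board
```

Note how the pop() after the recursive call undoes the choice, which is the defining move of backtracking.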

Game playing

Mini-Max Algorithm in Artificial Intelligence

o Mini-max algorithm is a recursive or backtracking algorithm used in decision-making and
game theory. It provides an optimal move for the player, assuming that the opponent also
plays optimally.

o Mini-Max algorithm uses recursion to search through the game tree.

o Min-Max algorithm is mostly used for game playing in AI, such as chess, checkers,
tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax
decision for the current state.

o In this algorithm two players play the game; one is called MAX and the other is called MIN.

o Each player tries to secure the maximum benefit for itself while leaving the opponent with
the minimum benefit.

o Both players are opponents of each other: MAX selects the maximized value and MIN selects
the minimized value.

o The minimax algorithm performs a depth-first search to explore the complete game tree.

o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then
backs the values up the tree as the recursion unwinds.

Working of Min-Max Algorithm:

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility
function to obtain the utility values for the terminal states. In the tree diagram below, let A
be the initial state of the tree. Suppose the maximizer takes the first turn, with a worst-case
initial value of -∞, and the minimizer takes the next turn, with a worst-case initial value
of +∞.

Step 2: First we find the utility values for the Maximizer. Its initial value is -∞, so each
terminal value is compared against the current maximum to determine the values of the nodes
one level up:

o For node D: max(-1, -∞) = -1, then max(-1, 4) = 4

o For node E: max(2, -∞) = 2, then max(2, 6) = 6

o For node F: max(-3, -∞) = -3, then max(-3, -5) = -3

o For node G: max(0, -∞) = 0, then max(0, 7) = 7


Step 3: In the next step it is the minimizer's turn, so it compares the child node values
against +∞ and determines the third-layer node values:

o For node B: min(4, 6) = 4

o For node C: min(-3, 7) = -3


Step 4: Now it is the Maximizer's turn again, and it chooses the maximum of its children's
values to obtain the value of the root node. In this game tree there are only four layers, so
we reach the root node immediately, but in real games there will be many more layers.

o For node A: max(4, -3) = 4
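The four steps above can be sketched as a short Python function (an illustrative implementation, not from a specific library), run on the same game tree with leaf utilities -1, 4, 2, 6, -3, -5, 0, 7:

```python
# Minimax on the example tree: internal nodes are lists of children,
# leaves are integer utility values, and the maximizing flag flips
# at each level of the recursion.

def minimax(node, maximizing):
    if isinstance(node, int):          # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A -> B, C; B -> D, E; C -> F, G; leaves hold the utility values.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))  # 4
```

The function reproduces the hand computation: D=4, E=6, F=-3, G=7, then B=4, C=-3, and finally A=4 at the root.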


Properties of Mini-Max algorithm:

o Complete- The Min-Max algorithm is complete: it will definitely find a solution (if one
exists) in a finite search tree.

o Optimal- The Min-Max algorithm is optimal if both opponents play optimally.

o Time complexity- As it performs a DFS over the game tree, the time complexity of the
Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is the
maximum depth of the tree.

o Space Complexity- The space complexity of the Mini-max algorithm is similar to that of DFS,
which is O(bm).

Advantages of Game Playing in Artificial Intelligence:


1. Advancement of AI: Game playing has been a driving force behind the development of
artificial intelligence and has led to the creation of new algorithms and techniques that
can be applied to other areas of AI.
2. Education and training: Game playing can be used to teach AI techniques and algorithms
to students and professionals, as well as to provide training for military and emergency
response personnel.
3. Research: Game playing is an active area of research in AI and provides an opportunity
to study and develop new techniques for decision-making and problem-solving.
4. Real-world applications: The techniques and algorithms developed for game playing can
be applied to real-world applications, such as robotics, autonomous systems, and
decision support systems.

Disadvantages of Game Playing in Artificial Intelligence:

1. Limited scope: The techniques and algorithms developed for game playing may not be
well-suited for other types of applications and may need to be adapted or modified for
different domains.
2. Computational cost: Game playing can be computationally expensive, especially for
complex games such as chess or Go, and may require powerful computers to achieve
real-time performance.
Alpha-Beta Pruning

Alpha-beta pruning is a modified version of the minimax algorithm: an optimization technique
for minimax.

Pruning is a technique that computes the correct minimax decision without checking every node
of the game tree. Because it involves the two threshold parameters alpha and beta, it is called
alpha-beta pruning. It is also known as the Alpha-Beta algorithm.

o Alpha-beta pruning can be applied at any depth of the tree, and sometimes it prunes not only
leaves but entire subtrees.

o The two parameters are defined as:

a. Alpha: the best (highest-value) choice found so far at any point along the path of the
Maximizer. The initial value of alpha is -∞.

b. Beta: the best (lowest-value) choice found so far at any point along the path of the
Minimizer. The initial value of beta is +∞.

o Alpha-beta pruning returns the same move as the standard minimax algorithm, but it removes
the nodes that do not affect the final decision and only slow the algorithm down. Pruning
these nodes makes the algorithm fast.

Condition for Alpha-beta pruning:

α>=β

Key points about alpha-beta pruning:

o The Max player only updates the value of alpha.

o The Min player only updates the value of beta.

o While backing values up the tree, the node values (not the alpha and beta values) are passed
to the parent nodes.

o The alpha and beta values are passed down only to the child nodes.

Pseudo-code for Alpha-beta Pruning:

function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then        // for Maximizer Player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth - 1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break
        return maxEva

    else                            // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth - 1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, minEva)
            if beta <= alpha then
                break
        return minEva
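The pseudo-code above can be sketched as runnable Python on the minimax example tree used earlier; the leaf counter `seen` and the nested-list tree encoding are illustrative choices, added to show how many leaves pruning actually skips:

```python
# Alpha-beta pruning on the earlier example tree. 'seen' records every
# leaf that is actually evaluated, so we can count the pruned ones.

def alphabeta(node, alpha, beta, maximizing, seen):
    if isinstance(node, int):           # terminal node
        seen.append(node)
        return node
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False, seen))
            alpha = max(alpha, best)
            if beta <= alpha:           # beta cut-off
                break
        return best
    else:
        best = float('inf')
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True, seen))
            beta = min(beta, best)
            if beta <= alpha:           # alpha cut-off
                break
        return best

tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
seen = []
value = alphabeta(tree, float('-inf'), float('inf'), True, seen)
print(value, len(seen))  # 4 6
```

The root value is 4, the same move as plain minimax, but only 6 of the 8 leaves are evaluated: once node C's value drops to -3 with alpha already at 4, the entire subtree under G is cut off.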

Move Ordering in Alpha-Beta pruning:

o Worst ordering: In some cases the alpha-beta pruning algorithm does not prune any leaves of
the tree and works exactly like the minimax algorithm; it then even consumes slightly more
time because of the bookkeeping for alpha and beta. This happens when the best move lies on
the right side of the tree, and is called worst ordering. The time complexity for such an
ordering is O(b^m).

o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning
happens and the best moves lie on the left side of the tree. Since we apply DFS, the left side
of the tree is searched first, and the algorithm can search twice as deep as minimax in the
same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Stochastic Games in Artificial Intelligence


Many unforeseeable external occurrences can place us in unforeseen circumstances in real
life. Many games, such as dice tossing, have a random element to reflect this unpredictability.
These are known as stochastic games. Backgammon is a classic game that mixes skill and
luck. The legal moves are determined by rolling the dice at the start of each player's turn.
White, for example, has rolled a 6–5 and has four alternative moves in the backgammon scenario
shown in the figure below.

This is a standard backgammon position. The object of the game is to get all of one’s pieces
off the board as quickly as possible. White moves in a clockwise direction toward 25, while
Black moves in a counterclockwise direction toward 0. Unless there are many opponent
pieces, a piece can advance to any position; if there is only one opponent, it is caught and
must start over. White has rolled a 6–5 and must pick between four valid moves: (5–10,5–11),
(5–11,19–24), (5–10,10–16), and (5–11,11–16), where the notation (5–11,11–16) denotes
moving one piece from position 5 to 11 and then another from 11 to 16.

Stochastic game tree for a backgammon position

White knows his or her own legal moves, but has no idea how Black will roll, and thus no idea
what Black's legal moves will be. That means White cannot build a normal game tree as in chess
or tic-tac-toe. In backgammon, in addition to MAX and MIN nodes, the game tree must include
chance nodes. The figure below depicts chance nodes as circles. The possible dice rolls are
indicated by the branches leading from each chance node; each branch is labelled with the roll
and its probability. There are 36 ways to roll two dice, each equally likely, yet only 21
distinct rolls, because a 6–5 is the same as a 5–6. Each of the six doubles (1–1 through 6–6)
has a probability of 1/36, so P(1–1) = 1/36. Each of the other 15 distinct rolls has a
probability of 1/18.

The following phase is to learn how to make good decisions. Obviously, we want to choose
the move that will put us in the best position. Positions, on the other hand, do not have
specific minimum and maximum values. Instead, we can only compute a position’s
anticipated value, which is the average of all potential outcomes of the chance nodes.
As a result, we can generalize the deterministic minimax value to an expected-minimax value for
games with chance nodes. Terminal nodes, and MAX and MIN nodes for which the dice roll is
already known, work exactly as before. For chance nodes we compute the expected value, which is
the sum of the values of all outcomes, weighted by the probability of each chance action.

EXPECTIMINIMAX(s) = Σr P(r) · EXPECTIMINIMAX(RESULT(s, r))    (at a chance node)

where r is a possible dice roll (or other random event) and RESULT(s, r) denotes the same
state as s, with the additional fact that the result of the dice roll is r.
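As a hedged sketch of the expected-minimax idea (a toy tree of our own, not the backgammon position above), chance nodes average their children's values weighted by probability, while MAX and MIN nodes behave exactly as in ordinary minimax:

```python
# Expectiminimax on a toy tree. Nodes are tagged tuples:
#   ('leaf', value), ('max', children), ('min', children),
#   ('chance', [(probability, subtree), ...]).

def expectiminimax(node):
    kind, rest = node
    if kind == 'leaf':
        return rest                              # utility value
    if kind == 'max':
        return max(expectiminimax(c) for c in rest)
    if kind == 'min':
        return min(expectiminimax(c) for c in rest)
    if kind == 'chance':                         # probability-weighted sum
        return sum(p * expectiminimax(c) for p, c in rest)

# MAX chooses between two chance nodes (e.g. two dice-dependent moves).
tree = ('max', [
    ('chance', [(0.5, ('leaf', 2)), (0.5, ('leaf', 4))]),   # expected 3.0
    ('chance', [(0.9, ('leaf', 1)), (0.1, ('leaf', 10))]),  # expected 1.9
])
print(expectiminimax(tree))  # 3.0
```

Even though the second move offers the single largest payoff (10), MAX prefers the first move because its expected value is higher, which is exactly the point of averaging at chance nodes.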
