Lecture 11 16
Program 2
• Board : A nine-element vector representing the board. Each element stores
2 (indicating Blank), 3 (indicating X), or 5 (indicating O).
• Turn : An integer indicating which move of the game is about to be
played. The 1st move is indicated by 1, the last by 9.
The Algorithm : The main algorithm uses three functions.
• Make_2 : Returns 5 if the center square of the board is blank, i.e.
if board[5] = 2. Otherwise, this function returns any blank non-corner square (2,
4, 6 or 8).
• Posswin(p) : Returns 0 if player p cannot win on his next move; otherwise,
it returns the number of the square that constitutes a winning move. This
function enables the program both to win and to block the opponent's win.
It operates by checking each of the rows, columns, and diagonals. By
multiplying together the values of the squares in an entire row (or column
or diagonal), the possibility of a win can be checked: if the product is
18 (3 x 3 x 2), then X can win; if the product is 50 (5 x 5 x 2), then O can
win. If a winning row (column or diagonal) is found, the blank square in it
is returned.
• Go(n) : Makes a move in square n. It sets Board[n] to 3 if Turn is odd (X's
move) or to 5 if Turn is even (O's move), and then increments Turn.
Program 2 : Pseudo Code
• Turn = 1 Go(1) (upper left corner).
• Turn = 2 If Board[5] is blank, Go(5), else Go(1).
• Turn = 3 If Board[9] is blank, Go(9), else Go(3).
• Turn = 4 If Posswin(X) is not 0, then Go(Posswin(X)) [i.e. block the
opponent's win], else Go(Make_2).
• Turn = 5 If Posswin(X) is not 0, then Go(Posswin(X)) [i.e. win], else
if Posswin(O) is not 0, then Go(Posswin(O)) [i.e. block the win], else if
Board[7] is blank, then Go(7), else Go(3) [to explore other
possibilities, if there are any].
• Turn = 6 If Posswin(O) is not 0, then Go(Posswin(O)), else if
Posswin(X) is not 0, then Go(Posswin(X)), else Go(Make_2).
• Turn = 7 If Posswin(X) is not 0, then Go(Posswin(X)), else
if Posswin(O) is not 0, then Go(Posswin(O)), else go anywhere that is
blank.
• Turn = 8 If Posswin(O) is not 0, then Go(Posswin(O)), else if
Posswin(X) is not 0, then Go(Posswin(X)), else go anywhere that is blank.
• Turn = 9 Same as Turn = 7.
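As a concrete illustration, the following is a minimal Python sketch of this strategy. The 1-indexed board list, the helper any_blank, and the assumption that X moves on odd turns are choices made for this sketch; the function names mirror the pseudocode above but the code is an illustrative reconstruction, not the original program.

# Hypothetical sketch of Program 2 (not the original implementation).
BLANK, X, O = 2, 3, 5
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),      # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),      # columns
         (1, 5, 9), (3, 5, 7)]                 # diagonals

def posswin(board, p):
    # Square where player p (X=3 or O=5) can win next move, or 0 if none.
    # Product trick: 18 = 3*3*2 means X one move from a win, 50 = 5*5*2 means O.
    target = p * p * BLANK
    for a, b, c in LINES:
        if board[a] * board[b] * board[c] == target:
            return next(s for s in (a, b, c) if board[s] == BLANK)
    return 0

def make_2(board):
    # Centre square if blank, otherwise some blank non-corner square.
    if board[5] == BLANK:
        return 5
    return next(s for s in (2, 4, 6, 8) if board[s] == BLANK)

def any_blank(board):
    return next(s for s in range(1, 10) if board[s] == BLANK)

def choose_square(board, turn):
    # Mirrors the turn-by-turn pseudocode above.
    if turn == 1:
        return 1
    if turn == 2:
        return 5 if board[5] == BLANK else 1
    if turn == 3:
        return 9 if board[9] == BLANK else 3
    if turn == 4:
        return posswin(board, X) or make_2(board)
    if turn == 5:
        return (posswin(board, X) or posswin(board, O)
                or (7 if board[7] == BLANK else 3))
    if turn == 6:
        return posswin(board, O) or posswin(board, X) or make_2(board)
    if turn in (7, 9):   # odd turns: X tries to win, then blocks
        return posswin(board, X) or posswin(board, O) or any_blank(board)
    return posswin(board, O) or posswin(board, X) or any_blank(board)  # turn 8

def go(board, turn, square):
    # Record the move and advance the turn counter (X on odd, O on even turns).
    board[square] = X if turn % 2 == 1 else O
    return turn + 1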
Iteration
• Turn 1 (Agent 1's turn): Agent 1 (O) checks lines 1, 2, 3, 4, 5, 6, 7, 8, 9, 1-5-9, and 3-5-7. No potential winning lines for O are found. Agent 1 chooses the center position, 5.
• Turn 2 (Agent 2's turn): Agent 2 (X) checks lines 1, 2, 3, 4, 5, 6, 7, 8, 9, 1-5-9, and 3-5-7. No potential winning lines for X are found. Agent 2 chooses position 8.
• Turn 3 (Agent 1's turn): Agent 1 (O) checks lines 1, 2, 3, 4, 5, 6, 7, 8, 9, 1-5-9, and 3-5-7. No potential winning lines for O are found. Agent 1 chooses position 1.
• Turn 4 (Agent 2's turn): Agent 2 (X) checks lines 1, 2, 3, 4, 5, 6, 7, 8, 9, 1-5-9, and 3-5-7. No potential winning lines for X are found. Agent 2 chooses position 3.
• Turn 5 (Agent 1's turn): Agent 1 (O) checks lines 1, 2, 3, 4, 5, 6, 7, 8, 9, 1-5-9, and 3-5-7. Potential winning line found for O: 1, 5, 9. Agent 1 chooses position 9.
[Figures: example boards labelled "With Ratings" and "With Posswin".]
Generate and Test Search
• Generate and Test Search is a heuristic search technique
based on Depth First Search with backtracking. It is guaranteed to
find a solution, if one exists, provided candidates are generated
systematically.
Algorithm
• Generate a possible solution.
• Test to see if this is actually a solution.
• Quit if a solution has been found.
• Otherwise, return to step 1.
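A small Python sketch of this loop follows; the generate_and_test helper and the dice-style toy problem are illustrative assumptions, not part of the original material.

from itertools import product

def generate_and_test(generate, is_solution):
    # Generate-and-test: enumerate candidates systematically and return
    # the first one that passes the goal test (None if no solution exists).
    for candidate in generate():
        if is_solution(candidate):
            return candidate
    return None

# Illustrative use: find three die values in strictly increasing order
# that sum to 15 (a toy stand-in for any systematically enumerable space).
candidates = lambda: product(range(1, 7), repeat=3)
goal = lambda c: sum(c) == 15 and c[0] < c[1] < c[2]
print(generate_and_test(candidates, goal))   # (4, 5, 6)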
Generate and Test Search
• In the generate-and-test algorithm, candidate solutions are generated
systematically and each one is evaluated, and this evaluation is carried
out by a heuristic function.
• Paths that are very unlikely to lead to a result are not considered
further.
• The heuristic does this by ranking the alternatives, and it is often
effective in doing so.
• In complex cases the method can be improved further by combining
generate-and-test with other techniques that reduce the search space.
• It is also known as British Museum Search Algorithm as
it’s like looking for an exhibit at random or finding an object in
the British Museum by wandering randomly.
• It is also known as brute-force search or exhaustive search, since in
the worst case it enumerates every candidate solution.
Key Characteristics of Generate and Test
• Dual Phases: Generation and testing are distinct but
interconnected phases.
Hill Climbing Search
• This algorithm is considered to be one of the simplest
procedures for implementing heuristic search.
• Hill climbing is a simple, yet effective local search algorithm
used in artificial intelligence and optimization problems in
mathematics.
• It is inspired by the idea of climbing a hill to reach the highest
point, where the goal is to find the best possible solution
within a given problem space.
• The hill climbing search algorithm is simply a loop that
continuously moves in the direction of increasing value. It
stops when it reaches a "peak" where no neighbour has a
higher value.
• This heuristic combines the advantages of both depth first and
breadth first searches into a single method.
Hill Climbing Algorithm
Basic Idea:
• Hill climbing is a heuristic search algorithm that starts from an initial
solution and iteratively moves to neighboring solutions that are better in
terms of the defined objective function. The algorithm keeps moving
uphill (towards better solutions) until it reaches a point where no better
neighbor exists. At that point, it considers the current solution as a local
optimum.
Algorithm Steps:
• Initialization: Start from an initial solution within the problem space.
This can be a randomly generated solution or a predefined one.
• Evaluation: Evaluate the current solution using an objective function.
The objective function defines the quality of the solution and is used to
determine whether a neighbor solution is better or worse.
• Neighbor Generation: Generate neighboring solutions by making
small modifications to the current solution. These modifications can
include small changes or swaps, depending on the problem domain.
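To make these steps concrete, here is a small Python sketch of steepest-ascent hill climbing; the helper names and the one-dimensional toy objective are assumptions of this sketch.

import random

def hill_climb(initial, neighbors, value, max_steps=10_000):
    # Steepest-ascent hill climbing: move to the best neighbour until
    # no neighbour improves the objective value.
    current = initial
    for _ in range(max_steps):
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current          # no better neighbour: local optimum
        current = best
    return current

# Toy example: maximise f(x) = -(x - 3)^2 over the integers, moving by +/- 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(random.randint(-10, 10), step, f))   # prints 3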
Flow Chart of Hill Climbing Algorithm
Types of Hill Climbing Algorithm
1. Simple Hill Climbing: This approach checks the neighbours one by
one to see whether any neighbour provides a better solution to the
problem. The first neighbour encountered that provides a better
value is chosen as the next current state; the search therefore looks
only at the current state and its next step.
2. Steepest Ascent Hill Climbing: This approach builds on the
former and checks all the neighbours for solutions. The neighbour
that provides the best answer, i.e. the steepest ascent, is taken as
the next current state. It considers all the successor nodes,
compares them, and chooses the node that is closest to the solution.
3.Stochastic Hill Climbing: This approach does not focus on all the
nodes. It selects a neighbour at random and checks it to see if it
provides a better solution. Then, it is compared with the current state
and depending on the differences in the values, it decides whether to
examine a different neighbour or continue with the one that was just
evaluated.
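These first three variants differ mainly in how the next state is selected from the neighbours. Below is a hypothetical sketch of just that selection step, assuming neighbors(state) and value(state) helpers like those in the previous sketch.

import random

def pick_simple(current, neighbors, value):
    # Simple hill climbing: take the first neighbour that improves on current.
    for n in neighbors(current):
        if value(n) > value(current):
            return n
    return current

def pick_steepest(current, neighbors, value):
    # Steepest ascent: examine all neighbours and take the best improving one.
    best = max(neighbors(current), key=value, default=current)
    return best if value(best) > value(current) else current

def pick_stochastic(current, neighbors, value):
    # Stochastic: examine one random neighbour and keep it only if it improves.
    n = random.choice(list(neighbors(current)))
    return n if value(n) > value(current) else current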
Types of Hill Climbing Algorithm
5.Random Mutation Hill Climbing: In this variant, the current
solution is randomly mutated multiple times to generate new
candidate solutions. The best among these candidates is chosen as
the next current solution. This approach introduces randomness and
diversity in the search process.
6.Simulated Annealing: Simulated annealing is a probabilistic hill
climbing technique. It allows for accepting solutions that are worse
than the current one with a certain probability, which decreases over
time. This probabilistic acceptance allows the algorithm to escape
local optima and explore the search space more effectively.
7.Hill Climbing with Restart: This technique involves periodically
restarting the hill climbing algorithm from a random or pre-defined
starting point. The idea is to escape local optima and explore
different parts of the search space by restarting the process.
8.Parallel Hill Climbing: In this approach, multiple instances of the
hill climbing algorithm run concurrently, each starting from a different
initial point, and the best solution found by any instance is kept.
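Variants 7 and 8 simply wrap a basic climb in an outer loop. Here is a brief, hypothetical sketch of hill climbing with restarts; the tiny one-dimensional climb and the toy objective are assumptions of this sketch.

import math
import random

def climb(x, value):
    # A tiny 1-D greedy climb over the integers, used only for illustration.
    while True:
        better = [n for n in (x - 1, x + 1) if value(n) > value(x)]
        if not better:
            return x                # local maximum reached
        x = max(better, key=value)

def random_restart(value, random_state, restarts=10):
    # Variant 7: run independent climbs from random starting points and keep
    # the best local optimum; running the climbs concurrently gives variant 8.
    return max((climb(random_state(), value) for _ in range(restarts)), key=value)

# Toy multi-modal objective with many local maxima; restarts help escape them.
f = lambda x: 10 * math.sin(x / 3.0) - 0.05 * abs(x)
print(random_restart(f, lambda: random.randint(-100, 100)))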
Problems with Hill Climbing Algorithm
1. Local Maximum is a state that is better than all of its
neighbouring states but is not the best state in the landscape.
If the hill climbing algorithm arrives at this point, this value
will be returned. The Global Maximum, in contrast, is the best
state possible in the landscape; it is the highest point in the
state-space landscape.
• To overcome the local maximum problem: utilize the
backtracking technique. Maintain a list of visited states; if the
search reaches an undesirable state, backtrack to an earlier
promising state and explore a different path.
Problems with Hill Climbing Algorithm
2. Flat Local Maximum/Plateau is an area where all
neighbouring states are at the same elevation, i.e. every solution
looks equally good, so any of these points could be chosen.
• A Shoulder is a plateau region that ends with an uphill edge.
Within the shoulder the landscape looks flat, so even though
higher points lie beyond the uphill edge, the algorithm may stop
searching before it reaches that edge and prematurely return a
point in the shoulder region as the solution, instead of
continuing to look for higher points.
• To overcome plateaus: Make a big jump. Randomly select a
state far away from the current state. Chances are that we will
land in a non-plateau region.
Pros & Cons of Hill Climbing Algorithm
Advantages –
• Hill Climbing is a simple and intuitive algorithm that is easy to
understand and implement.
• It can be used in a wide variety of optimization problems, including
those with a large search space and complex constraints.
• Hill Climbing is often very efficient in finding local optima, making it a
good choice for problems where a good solution is needed quickly.
• The algorithm can be easily modified and extended to include additional
heuristics or constraints.
Limitations –
• Hill Climbing can get stuck in local optima, meaning that it may not find
the global optimum of the problem.
• The algorithm is sensitive to the choice of initial solution, and a poor
initial solution may result in a poor final solution.
• Hill Climbing does not explore the search space very thoroughly, which
can limit its ability to find better solutions.
[Figure: state-space landscape for hill climbing, showing the objective function with its global maximum, the cost function with its global minimum, and the current state with its neighbours.]
Example – House & Hospital Distance
• The "House and Hospital" problem is a classic
optimization problem that involves finding the
optimal assignment of houses to hospitals based
on certain criteria. The problem statement
typically goes something like this:
Problem Statement:
• We have a set of houses and a set of hospitals placed on a
grid. Our objective is to position the hospitals so as to
minimize the distance from each house to a hospital.
• There are a number of ways we could calculate that distance;
a common choice is the Manhattan distance, i.e. the number of
rows plus the number of columns between a house and a hospital.
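The following is a hypothetical sketch of the cost function and a hill-climbing loop for this problem; the grid size, the coordinates, and the helper names are illustrative choices, not values from the slides.

def cost(houses, hospitals):
    # Sum over houses of the Manhattan distance to the nearest hospital.
    return sum(min(abs(hx - x) + abs(hy - y) for x, y in hospitals)
               for hx, hy in houses)

def neighbors(hospitals, width, height):
    # Every configuration reachable by moving one hospital one cell.
    for i, (x, y) in enumerate(hospitals):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height:
                yield hospitals[:i] + [(nx, ny)] + hospitals[i + 1:]

def place_hospitals(houses, hospitals, width, height):
    # Hill climbing on cost: keep taking the best neighbouring
    # configuration until the total cost stops decreasing.
    while True:
        best = min(neighbors(hospitals, width, height),
                   key=lambda h: cost(houses, h))
        if cost(houses, best) >= cost(houses, hospitals):
            return hospitals
        hospitals = best

houses = [(0, 0), (5, 1), (2, 4), (9, 6)]     # illustrative coordinates only
hospitals = [(4, 4), (7, 0)]
print(cost(houses, place_hospitals(houses, hospitals, width=10, height=7)))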
Example – House & Hospital Distance
[Figure sequence: successive hill-climbing moves of the hospitals reduce the total cost from 17 to 15, then 13, then 11.]
Temperature Reduction Rules
Simulated Annealing
Physical Annealing:
• Physical substances are melted and then gradually cooled until some solid state is reached.
• The goal is to produce a minimal-energy state.
Annealing schedule:
• If the temperature is lowered sufficiently slowly, then the goal will be attained.
• Nevertheless, there is some probability of a transition to a higher-energy state, and this probability decreases as the temperature is lowered.
Algorithm:
function SIMULATED-ANNEALING(problem, max):
    current = initial state of problem
    for t = 1 to max:
        T = TEMPERATURE(t)
        neighbor = random neighbor of current
        ΔE = how much better neighbor is than current
        if ΔE > 0:
            current = neighbor
        else, with probability e^(ΔE/T), set current = neighbor
    return current
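A minimal Python rendering of the pseudocode above follows; the exponential cooling schedule and the toy objective are assumptions of this sketch, not part of the slides.

import math
import random

def temperature(t, t0=10.0, decay=0.95):
    # One possible (assumed) schedule: exponentially decaying temperature.
    return t0 * (decay ** t)

def simulated_annealing(initial, neighbor, value, max_steps=1000):
    current = initial
    for t in range(1, max_steps + 1):
        T = temperature(t)
        candidate = neighbor(current)
        delta_e = value(candidate) - value(current)       # ΔE in the pseudocode
        # Always accept improvements; accept worse moves with probability e^(ΔE/T).
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = candidate
    return current

# Toy objective with several local maxima; annealing can cross the valleys.
f = lambda x: 10 * math.sin(x) - 0.1 * (x - 6) ** 2
print(simulated_annealing(0.0, lambda x: x + random.uniform(-1, 1), f))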
• In simple hill climbing, the search moves toward the upper state on the
right and gets stuck at the lower (local) maximum.
• In simulated annealing, the search may first move toward the lower state
on the left and then go on to reach the global maximum.
“Thank you for being such an engaged audience during my presentation.”
- Dr. Ashish Kumar