Lecture 11 16

Uploaded by

Pratham Agarwal

Tic-Tac-Toe Game

Program 2
• Board: A nine-element vector representing the board. Each square stores
2 (indicating Blank), 3 (indicating X), or 5 (indicating O).
• Turn: An integer indicating which move of the game is about to be
played. The 1st move is indicated by 1, the last by 9.
The Algorithm: The main algorithm uses three functions.
• Make_2: Returns 5 if the center square of the board is blank, i.e., if
board[5] = 2. Otherwise, this function returns any blank non-corner square
(2, 4, 6 or 8).
• Posswin(p): Returns 0 if player p cannot win on his next move; otherwise,
it returns the number of the square that constitutes a winning move. This
function enables the program both to win and to block the opponent's win.
It operates by checking each of the rows, columns, and diagonals: by
multiplying together the values of all the squares in a row (or column or
diagonal), the possibility of a win can be checked. If the product is 18
(3 × 3 × 2), then X can win. If the product is 50 (5 × 5 × 2), then O can
win. If a winning row (column or diagonal) is found, the blank square in it
is returned.
Program 2 : Pseudo Code
• Turn = 1: Go(1) (upper left corner).
• Turn = 2: If Board[5] is blank, Go(5), else Go(1).
• Turn = 3: If Board[9] is blank, Go(9), else Go(3).
• Turn = 4: If Posswin(X) is not 0, then Go(Posswin(X)) [i.e., block the
opponent's win], else Go(Make_2).
• Turn = 5: If Posswin(X) is not 0, then Go(Posswin(X)) [i.e., win], else
if Posswin(O) is not 0, then Go(Posswin(O)) [i.e., block the win], else if
Board[7] is blank, then Go(7), else Go(3) [to explore another possibility
if there is any].
• Turn = 6: If Posswin(O) is not 0, then Go(Posswin(O)), else if
Posswin(X) is not 0, then Go(Posswin(X)), else Go(Make_2).
• Turn = 7: If Posswin(X) is not 0, then Go(Posswin(X)), else if
Posswin(O) is not 0, then Go(Posswin(O)), else go anywhere that is blank.
• Turn = 8: If Posswin(O) is not 0, then Go(Posswin(O)), else if
Posswin(X) is not 0, then Go(Posswin(X)), else go anywhere that is blank.
• Turn = 9: Same as Turn 7.
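The 2/3/5 board encoding makes Posswin a simple product check. A minimal Python sketch of the two helper functions, with names following the slides (Make_2, Posswin); the LINES list and 1-indexed board layout are assumptions for illustration:

```python
# Board cells hold 2 (blank), 3 (X), or 5 (O); board[1..9], board[0] unused.
BLANK, X, O = 2, 3, 5
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),      # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),      # columns
         (1, 5, 9), (3, 5, 7)]                 # diagonals

def make_2(board):
    """Return the center square if blank, else any blank non-corner square."""
    if board[5] == BLANK:
        return 5
    for sq in (2, 4, 6, 8):
        if board[sq] == BLANK:
            return sq
    return 0

def posswin(board, p):
    """Return the square that wins for player p on this move, or 0."""
    target = p * p * BLANK          # 18 for X (3*3*2), 50 for O (5*5*2)
    for a, b, c in LINES:
        if board[a] * board[b] * board[c] == target:
            for sq in (a, b, c):
                if board[sq] == BLANK:
                    return sq
    return 0
```

The product trick works because 2, 3 and 5 are coprime: a product of 18 or 50 can only arise from two of the player's marks plus one blank in the line.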
Iteration
• Turn 1 (Agent 1's turn): Agent 1 (O) checks the lines 1-2-3, 4-5-6,
7-8-9, 1-4-7, 2-5-8, 3-6-9, 1-5-9, and 3-5-7. No potential winning lines
for O are found. Agent 1 chooses the center position, 5.
• Turn 2 (Agent 2's turn): Agent 2 (X) checks the same lines. No potential
winning lines for X are found. Agent 2 chooses position 8.
• Turn 3 (Agent 1's turn): Agent 1 (O) checks the lines again. No potential
winning lines for O are found. Agent 1 chooses position 1.
• Turn 4 (Agent 2's turn): Agent 2 (X) checks the lines. No potential
winning lines for X are found. Agent 2 chooses position 3.
• Turn 5 (Agent 1's turn): Agent 1 (O) checks the lines. A potential
winning line is found for O: 1-5-9. Agent 1 chooses position 9.

Agent 1 (O) wins!


Comments
• Not as efficient as the first one in terms of time.
• Several conditions are checked before each move.
• It is memory efficient.
• Easier to understand, and the complete strategy has been determined in
advance.
• Still cannot generalize to 3-D.
Implementations

With Ratings

With Magic Square

With Posswin

Learn Magic Square creation


Heuristic Search

Dr. Ashish Kumar


Associate Professor-CSE
Manipal University Jaipur
Heuristics
• Psychologists Daniel Kahneman and Amos Tversky developed the study of
heuristics in human decision-making in the 1970s and 1980s.
• However, the concept was first introduced by the Nobel Laureate
Herbert A. Simon, whose primary object of research was problem-solving.
• Heuristic: a "rule of thumb" used to help guide search; often something
learned experientially and recalled when needed.
• Heuristic Function: a function applied to a state in a search space to
indicate the likelihood of success if that state is selected.
• Heuristic functions provide estimates of how close a state is to the
goal. They guide search algorithms by evaluating potential states.
• Heuristic search methods are known as "weak methods" because of the
limited way in which they use domain-specific knowledge.
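As a concrete illustration of a heuristic function, a small sketch that ranks candidate states on a grid by estimated closeness to a goal cell; the grid, the goal cell, and the frontier states here are made-up assumptions:

```python
import math

def h(state, goal=(4, 4)):
    """Heuristic estimate of remaining cost: lower means likely closer to the goal."""
    return math.dist(state, goal)   # straight-line distance to the goal cell

# A search algorithm would expand the state the heuristic ranks best:
frontier = [(0, 0), (2, 3), (4, 3)]
best_state = min(frontier, key=h)   # (4, 3): closest to the goal by h
```

The estimate is only a likelihood, not a guarantee: the true path cost from (4, 3) could still be higher than from (2, 3) if obstacles intervene.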
Heuristics
• Heuristic search is an AI search technique that employs
heuristic for its moves.
• Heuristic is a rule of thumb that probably leads to a solution.
• Heuristics play a major role in search strategies because of the
exponential nature of most problems. Heuristics help to reduce the number
of alternatives from an exponential number to a polynomial number.
• In Artificial Intelligence, heuristic search has a general meaning and a
more specialized technical meaning.
• In a general sense, the term heuristic is used for any advice that is
often effective but is not guaranteed to work in every case.
• Within the heuristic search architecture, however, the term heuristic
usually refers to the special case of a heuristic evaluation function.
Heuristics
• Example: The Traveling Salesman Problem
• For a low number of cities, this problem could reasonably be
brute-forced. However, as the number of cities increases, it becomes
increasingly difficult to come to a solution.
• The nearest-neighbor (NN) heuristic tackles this problem nicely: the
computer always picks the nearest unvisited city as the next stop on the
path. NN does not always provide the best solution, but it is close enough
to the optimal one to be useful.
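The NN heuristic described above fits in a few lines of Python; the city coordinates and the function name are illustrative assumptions:

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedily visit the closest unvisited city each step; not guaranteed optimal."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        # pick the nearest unvisited city as the next stop
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# usage: four cities on a line are visited left to right from city 0
tour = nearest_neighbor_tour([(0, 0), (1, 0), (3, 0), (6, 0)])
```

Each step costs O(n), so the whole tour is built in O(n²) time, versus the (n-1)!/2 tours a brute-force search would examine.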
Applying Heuristics to Your Algorithms
• To apply heuristics to your algorithms, you need to know the
solution or goal you’re looking for ahead of time. If you
know your end goal, you can specify rules that can help you
achieve it.
• If the algorithm is being designed to find a sequence of moves by which
a knight visits every square of an 8x8 chessboard, it is possible to
create a heuristic that causes the knight to always choose the path with
the most available moves afterward.
• However, because we’re trying to create a specific path, it
may be better to create a heuristic that causes the knight to
choose the path with the fewest available moves afterward.
• Since the available decisions are much narrower, so too are
the available solutions, and so they are found more quickly.
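The "fewest available moves afterward" rule can be sketched as follows. This is a Warnsdorff-style heuristic; the board size, the move table, and the tie-breaking (first minimum found) are assumptions:

```python
# Knight move offsets (row, column)
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def onward_moves(pos, visited, n=8):
    """All unvisited on-board squares reachable from pos in one knight move."""
    r, c = pos
    return [(r + dr, c + dc) for dr, dc in MOVES
            if 0 <= r + dr < n and 0 <= c + dc < n and (r + dr, c + dc) not in visited]

def knights_tour(start=(0, 0), n=8):
    """Greedy tour: always step to the square with the fewest onward moves."""
    visited = {start}
    path = [start]
    while len(path) < n * n:
        options = onward_moves(path[-1], visited, n)
        if not options:
            return None   # the heuristic hit a dead end
        nxt = min(options, key=lambda p: len(onward_moves(p, visited, n)))
        visited.add(nxt)
        path.append(nxt)
    return path
```

In practice this narrow-first rule completes full tours on an 8x8 board far more often than choosing moves at random, though it carries no guarantee.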
Generate and Test Search

Generate and Test Search
• Generate and Test Search is a heuristic search technique based on
Depth First Search with Backtracking. Done systematically, it is
guaranteed to find a solution if one exists.
Algorithm
• Generate a possible solution.
• Test to see if this is actually a solution.
• Quit if a solution has been found.
• Otherwise, return to step 1.
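The four steps above map directly onto a loop. A sketch in which the candidate generator and the test predicate are illustrative assumptions:

```python
from itertools import permutations

def generate_and_test(candidates, is_solution):
    """Return the first candidate that passes the test, or None if exhausted."""
    for candidate in candidates:       # 1. generate a possible solution
        if is_solution(candidate):     # 2. test whether it is actually a solution
            return candidate           # 3. quit if a solution has been found
    return None                        # 4. otherwise, the loop generates the next one

# usage: systematically enumerate orderings of [3, 1, 2] until one is sorted
solution = generate_and_test(
    permutations([3, 1, 2]),
    lambda p: list(p) == sorted(p))
```

Passing a systematic enumerator (as here) gives the exhaustive variant; passing a stream of random candidates gives the random-generation variant described on the next slide.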
Generate and Test Search
• The evaluation is carried out by the heuristic function, as all the
solutions are generated systematically in the generate and test
algorithm.
• Paths that are most unlikely to lead to a result are not considered.
• The heuristic does this by ranking all the alternatives, and it is often
effective in doing so.
• Complex cases can be handled as well by combining generate and test
with other techniques so as to reduce the search space.
• It is also known as the British Museum Search Algorithm, as it is like
looking for an exhibit at random, or finding an object in the British
Museum by wandering randomly.
• It is also known as brute-force or exhaustive search.
Key Characteristics of Generate and Test
• Dual Phases: Generation and testing are distinct but interconnected
phases.
• Iterative Nature: Iteration allows for learning from previous attempts;
solutions are refined through repeated iterations.
• Evaluation Criteria: Solutions are evaluated based on predefined
criteria.
Generation Phase
• The Generate phase is where the creative process takes place. Potential
solutions are generated based on the problem's context and requirements.
Methods of Solution Generation
1. Systematic Enumeration: Solutions are generated systematically by
exploring all possible combinations.
2. Random Generation: Solutions are generated randomly, allowing for a
diverse range of possibilities.
Example: Generating Solutions
• For a puzzle-solving problem, systematic enumeration might involve
considering all possible sequences of moves.
• In contrast, random generation could create a variety of initial
configurations.
Testing Phase
• The Test phase is a critical step in the Generate and Test approach. Its
purpose is to evaluate the generated solutions for their validity and
suitability.
Criteria for Testing Solutions
• Solutions are tested against specific criteria or constraints.
• These criteria could involve performance measures, feasibility, or
adherence to predefined rules.
Example: Testing Solutions
• In a scheduling problem, solutions could be tested for meeting time
constraints and resource availability.
• For a design problem, solutions might be tested for structural integrity
and cost-effectiveness.
Pros & Cons of Generate and Test Search
Advantages –
• Simplicity and Flexibility: The concept of generating solutions
and testing them is straightforward. It can be adapted to a wide
range of problems across various domains.
• Exploration of Solution Space: The approach explores diverse
solution possibilities, even unconventional ones. This exploration
can lead to innovative and unexpected solutions.
• Applicability: Well-suited for complex problems with no direct
formulaic solution. Effective for problems where solution
approaches are not well-defined.
Limitations –
• Inefficient for Large Solution Spaces: In scenarios with a vast
solution space, exhaustive generation and testing can be time-
consuming and resource-intensive.
• Defining Testing Criteria: Establishing precise testing criteria can be
difficult for problems that are not well defined.
Hill Climbing Search

Hill Climbing Search
• This algorithm is considered to be one of the simplest
procedures for implementing heuristic search.
• Hill climbing is a simple, yet effective local search algorithm
used in artificial intelligence and optimization problems in
mathematics.
• It is inspired by the idea of climbing a hill to reach the highest
point, where the goal is to find the best possible solution
within a given problem space.
• The hill climbing search algorithm is simply a loop that continuously
moves in the direction of increasing value. It stops when it reaches a
"peak" where no neighbour has a higher value.
• This heuristic combines the advantages of both depth first and
breadth first searches into a single method.
Hill Climbing Algorithm
Basic Idea:
• Hill climbing is a heuristic search algorithm that starts from an initial
solution and iteratively moves to neighboring solutions that are better in
terms of the defined objective function. The algorithm keeps moving
uphill (towards better solutions) until it reaches a point where no better
neighbor exists. At that point, it considers the current solution as a local
optimum.
Algorithm Steps:
• Initialization: Start from an initial solution within the problem space.
This can be a randomly generated solution or a predefined one.
• Evaluation: Evaluate the current solution using an objective function.
The objective function defines the quality of the solution and is used to
determine whether a neighbor solution is better or worse.
• Neighbor Generation: Generate neighboring solutions by making small
modifications to the current solution. These modifications can include
small changes or swaps, depending on the problem domain.
• Selection: Move to the best improving neighbor; if no neighbor improves
the objective, stop and return the current solution as a local optimum.
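The steps above can be sketched as a short loop over a one-dimensional toy problem; the objective function, step size, and iteration cap are made-up assumptions:

```python
def hill_climb(start, objective, step=0.1, max_iters=1000):
    """Basic hill climbing: move to a better neighbor until none exists."""
    current = start                                   # initialization
    for _ in range(max_iters):
        neighbors = [current - step, current + step]  # neighbor generation
        best = max(neighbors, key=objective)          # evaluation
        if objective(best) <= objective(current):
            return current                            # local optimum reached
        current = best
    return current

# usage: a single-peak objective with its maximum at x = 3
peak = hill_climb(0.0, lambda x: -(x - 3) ** 2)
```

Because the objective here has a single peak, the climb always reaches it; on multi-peak objectives the same loop stops at whichever local optimum is uphill from the start.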
Flow Chart of Hill Climbing Algorithm
Types of Hill Climbing Algorithm
1. Simple Hill Climbing: This approach checks neighbours one by one to
see if any neighbour can provide a better solution to the problem. The
first encountered neighbour that provides a more optimal value is chosen
as the next current value. This search focuses only on the previous and
next step.
2.Steepest Ascent Hill Climbing: This approach builds up on the
former and checks all the neighbours for solutions. The neighbour
which provides the best answer i.e., the steepest hill is considered to
be the next current value. It considers all the successive nodes,
compares them, and choose the node which is closest to the solution.
3.Stochastic Hill Climbing: This approach does not focus on all the
nodes. It selects a neighbour at random and checks it to see if it
provides a better solution. Then, it is compared with the current state
and depending on the differences in the values, it decides whether to
examine a different neighbour or continue with the one that was just
evaluated.
Types of Hill Climbing Algorithm
4. Random Mutation Hill Climbing: In this variant, the current solution
is randomly mutated multiple times to generate new candidate solutions.
The best among these candidates is chosen as the next current solution.
This approach introduces randomness and diversity into the search process.
5. Simulated Annealing: Simulated annealing is a probabilistic hill
climbing technique. It allows for accepting solutions that are worse than
the current one with a certain probability, which decreases over time.
This probabilistic acceptance allows the algorithm to escape local optima
and explore the search space more effectively.
6. Hill Climbing with Restart: This technique involves periodically
restarting the hill climbing algorithm from a random or pre-defined
starting point. The idea is to escape local optima and explore different
parts of the search space by restarting the process.
7. Parallel Hill Climbing: In this approach, multiple instances of the
hill climbing algorithm run concurrently, each starting from a different
initial solution; the best result across the instances is kept.
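Restarting can be sketched by rerunning a simple climber from several random starting points and keeping the best local optimum found. The two-peak objective and all constants below are illustrative assumptions:

```python
import random

def climb(x, objective, step=0.05):
    """Tiny 1-D hill climber: step toward the better neighbor until stuck."""
    while True:
        up, down = x + step, x - step
        nxt = up if objective(up) >= objective(down) else down
        if objective(nxt) <= objective(x):
            return x                    # local optimum
        x = nxt

def random_restart(objective, n_restarts=20):
    """Run the climber from random starts; return the best local optimum."""
    random.seed(1)                      # fixed seed so the sketch is repeatable
    starts = [random.uniform(-3, 3) for _ in range(n_restarts)]
    return max((climb(s, objective) for s in starts), key=objective)

# usage: two peaks (near x = -1.37 and x = +1.44); the right one is higher
f = lambda x: -(x ** 4) + 4 * x ** 2 + x
best = random_restart(f)
```

A single climb from a negative start would be trapped on the lower left peak; with enough restarts, at least one positive start finds the higher right peak.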
Problems with Hill Climbing Algorithm
1. Local Maximum is a state that is better than all its neighbouring
states. If the hill climbing algorithm arrives at this point, this value
will be returned. The Global Maximum, by contrast, is the best state
possible in the landscape: the highest point in the state-space
landscape.
• To overcome the local maximum problem: utilize the backtracking
technique. Maintain a list of visited states; if the search reaches an
undesirable state, it can backtrack to an earlier configuration and
explore a new path.
Problems with Hill Climbing Algorithm
2. Flat Local Maximum/Plateau is an area where all neighbouring states
are at the same elevation, i.e., each solution is as optimal as the
others, so any of these points could be chosen.
• Shoulder is a plateau region that ends with an uphill edge. Here the
plateau acts like a flat region where all points are equally suitable
solutions; although the plateau ends with an uphill edge, the algorithm
may stop before reaching that edge and prematurely return a point on the
shoulder as the solution, instead of continuing to look for higher points.
• To overcome plateaus: make a big jump. Randomly select a state far away
from the current state; chances are that we will land in a non-plateau
region.
Pros & Cons of Hill Climbing Algorithm
Advantages –
• Hill Climbing is a simple and intuitive algorithm that is easy to
understand and implement.
• It can be used in a wide variety of optimization problems, including
those with a large search space and complex constraints.
• Hill Climbing is often very efficient in finding local optima, making it a
good choice for problems where a good solution is needed quickly.
• The algorithm can be easily modified and extended to include additional
heuristics or constraints.
Limitations –
• Hill Climbing can get stuck in local optima, meaning that it may not find
the global optimum of the problem.
• The algorithm is sensitive to the choice of initial solution, and a poor
initial solution may result in a poor final solution.
• Hill Climbing does not explore the search space very thoroughly, which
can limit its ability to find better solutions.
State-Space Landscape
[Figure: a state-space landscape plotting the objective (or cost)
function over states, marking the global maximum, the global minimum, the
current state, and its neighbors, with hill climbing moving uphill.]
Example – House & Hospital Distance
• The "House and Hospital" problem is a classic
optimization problem that involves finding the
optimal assignment of houses to hospitals based
on certain criteria. The problem statement
typically goes something like this:
Problem Statement:
• We have a set of houses and a set of hospitals in a world formatted as
a grid. Our objective is to minimize the distance of each house from a
hospital.
• There are a number of ways we could calculate that distance; a common
choice is the distance from each house to its nearest hospital on the
grid.
[Figure: a sequence of grid configurations as hill climbing moves the
hospitals, with costs 17 → 17 → 17 → 15 → 13 → 11.]
• Cost 11 is optimal as per the local maximum reached by the hill
climbing algorithm.
• Cost 9 is optimal as per the global maximum, but hill climbing will not
reach it.
Conclusion
• Remember that while hill climbing algorithms are relatively simple and
intuitive, they may struggle with complex optimization problems that have
many local optima or discontinuous search spaces.
• To address the limitations of the hill climbing algorithm, researchers
have developed various extensions and hybrid approaches that combine hill
climbing with other optimization methods, such as genetic algorithms,
simulated annealing, or particle swarm optimization.
• These combinations aim to leverage the strengths of different
techniques while mitigating their weaknesses.
Simulated Annealing
• A variation of hill climbing in which, at the beginning of the process,
some downhill moves may be made.
• The idea is to do enough exploration of the whole space early on that
the final solution is relatively insensitive to the starting state.
• This lowers the chances of getting caught at a local maximum, a
plateau, or a ridge.

[Figure: temperature reduction rules.]
Simulated Annealing
Physical Annealing:
• Physical substances are melted and then gradually cooled until some
solid state is reached.
• The goal is to produce a minimal-energy state.
• Annealing schedule: if the temperature is lowered sufficiently slowly,
then the goal will be attained.
• Nevertheless, there is some probability of a transition to a higher
energy state.

Algorithm:

    function SIMULATED-ANNEALING(problem, max):
        current = initial state of problem
        for t = 1 to max:
            T = TEMPERATURE(t)
            neighbor = random neighbor of current
            ΔE = how much better neighbor is than current
            if ΔE > 0:
                current = neighbor
            else, with probability e^(ΔE/T), set current = neighbor
        return current
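The pseudocode translates almost line for line into Python. In the sketch below, the toy objective, the neighbor move, and the linear cooling schedule are all illustrative assumptions:

```python
import math
import random

def simulated_annealing(start, objective, neighbor, max_iters=10000):
    current = start
    for t in range(1, max_iters + 1):
        T = max(0.01, 1.0 - t / max_iters)   # TEMPERATURE(t): linear cooling
        candidate = neighbor(current)
        delta_e = objective(candidate) - objective(current)
        # always accept improvements; accept worse moves with probability e^(dE/T)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = candidate
    return current

# usage: two peaks; the start sits near the lower left peak
random.seed(0)
f = lambda x: -(x ** 4) + 4 * x ** 2 + x
best = simulated_annealing(-1.4, f, lambda x: x + random.uniform(-0.5, 0.5))
```

Early on, T is near 1 and downhill moves are accepted often, letting the search cross the valley between the peaks; as T shrinks, the loop degenerates into ordinary hill climbing and settles on a peak.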
• In simple hill climbing, the search moves towards the upper state on
the right and gets stuck at the lower maximum.
• In simulated annealing, the search can move towards the lower state on
the left and then reach the global maximum.
“Thank you for being such an engaged audience during my presentation.”
- Dr. Ashish Kumar
