
Artificial Intelligence
LECTURE: 12
INSTRUCTOR: DR. HUSNAIN ASHFAQ
Search in Complex Environments
Previously, search was discussed in environments that are fully observable and
deterministic, where the goal is to find a sequence of actions leading to a goal state. This
works fine in well-structured problems like puzzles or navigation on a known map. But what
about situations where:
•The environment is huge or infinite?
•We don’t care about the path, only the end result?
•The system has incomplete knowledge or stochastic outcomes?
That’s where search in complex environments becomes important.
Why Move Beyond Classical Search?
Sometimes we care only about the final outcome, not the steps to get there.
Examples:
• 8-Queens Problem: Find a configuration of 8 queens where no two attack each other.
• Real-world applications: circuit design, factory layout, job scheduling, network optimization, crop planning, etc.

Let’s take a real-world example:


Suppose you are designing a microchip layout. There may be millions of possible configurations, and what matters
is:
• How well the chip performs,
• How cheap it is to manufacture, etc.
You’re not interested in how the layout came to be (i.e., the path), but what the layout is. So, maintaining a full
path or exploring every possibility is wasteful.
This shifts our focus from "How do I get there?" to "Where is the best place to be?"
Classical Search vs. Local Search

Classical Search | Local Search
Systematic exploration of the search space | Explores by moving from one solution to a neighboring solution
Keeps one or more paths in memory | Keeps only the current state in memory
The path to the goal is the solution to the problem | The goal state is the solution; the path is irrelevant
Local Search
These search algorithms keep a single "current" state and
move to neighboring states to try to improve it.
The solution path does not need to be maintained; hence the
search is local.
These search algorithms are suitable for problems in which
the path is not important: the goal itself is the solution.
Local Search and Optimization Problems
These algorithms:
•Work with a single current state.
•Move to neighboring states.
•Do not maintain a full path or visited states.
•Are not systematic (i.e., might miss some solutions).
•Use very little memory.
•Are effective even in very large or infinite state spaces.
Optimization Problems
Here, the goal is to maximize or minimize an
objective function.
•If you're trying to find the best state → you're solving
an optimization problem.
•These problems use objective functions (e.g., the
number of conflicts, total cost, etc.), as sketched below.
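As a concrete illustration, a minimal objective function for the 8-queens problem can simply count the number of attacking pairs, which the search then tries to minimize. This is a hedged sketch; the function name and the board encoding are illustrative assumptions, not taken from the slides.

```python
def queen_conflicts(board):
    """Objective function for N-queens: number of attacking pairs (to be minimized).

    board[i] is the row of the queen placed in column i.
    """
    conflicts = 0
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diagonal = abs(board[i] - board[j]) == abs(i - j)
            if same_row or same_diagonal:
                conflicts += 1
    return conflicts

# A state with 0 conflicts is a solution; local search drives this value down.
print(queen_conflicts([0, 4, 7, 5, 2, 6, 1, 3]))  # 0  -> a valid 8-queens placement
print(queen_conflicts([0, 1, 2, 3, 4, 5, 6, 7]))  # 28 -> every pair of queens attacks
```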
Examples of Local Search
1. Job scheduling: assign tasks to machines to minimize the
completion time.
2. Graph coloring: assign colors to the nodes so that no adjacent
nodes share the same color, using the minimum number of
colors.
3. N-Queens problem: place N queens on an N×N chessboard
without any conflicts.
4. Optimizers in ML: minimize cost functions in machine
learning.
Hill Climbing Search Algorithm
Hill climbing is a heuristic algorithm that continuously moves in the direction of increasing
value (toward the goal) to find the peak of the mountain, i.e., the best solution to the problem.
It keeps track of the current state and on each iteration moves to the neighboring state with the
highest value: that is, it heads in the direction that provides the steepest ascent.
When the algorithm reaches a peak where no neighbor has a higher value, it
terminates.
It is also called greedy local search because it looks only at its immediate neighbors and
not beyond them.
Hill climbing is mostly used when a good heuristic is available.
Hill climbing algorithm
Hill climbing is a straightforward local search algorithm that starts with an initial solution and iteratively
moves to the best neighboring solution that improves the objective function. Here's how it works:

● Initialization: Begin with an initial solution, often generated randomly or using a heuristic
method.
● Evaluation: Calculate the quality of the initial solution using an objective function or fitness
measure.
● Neighbor Generation: Generate neighboring solutions by making small changes (moves) to the
current solution.
● Selection: Choose the neighboring solution that results in the most significant improvement in the
objective function.
● Termination: Continue this process until a termination condition is met (e.g., reaching a
maximum number of iterations or finding a satisfactory solution).
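The steps above can be condensed into a short loop. The following Python sketch is illustrative only; the `neighbors` and `objective` functions are assumed to be supplied by the specific problem and do not appear in the slides.

```python
def hill_climbing(initial_state, neighbors, objective, max_iterations=1000):
    """Generic hill climbing: repeatedly move to the best improving neighbor.

    neighbors(state)  -> iterable of neighboring states (problem-specific).
    objective(state)  -> numeric value to maximize (problem-specific).
    """
    current = initial_state
    current_value = objective(current)
    for _ in range(max_iterations):
        # Evaluate the neighbors of the current state.
        scored = [(objective(s), s) for s in neighbors(current)]
        if not scored:
            break
        best_value, best_state = max(scored, key=lambda pair: pair[0])
        # Terminate at a peak: no neighbor has a higher value.
        if best_value <= current_value:
            break
        current, current_value = best_state, best_value
    return current, current_value
```

For example, pairing this loop with the `queen_conflicts` function from earlier (negated, so that higher is better) gives a simple 8-queens solver, although it may stop at a local maximum.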
State Space Landscape
Hill climbing algorithm
Different regions in the state space landscape:
Local maximum: a state that is better than its neighboring states, but not the highest
in the landscape; some other state is higher still.
Global maximum: the best possible state in the state space landscape. It has the highest
value of the objective function.
Current state: the state in the landscape diagram where the agent is currently present.
Flat local maximum: a flat region of the landscape where all the neighbors of the current state have
the same value.
Shoulder: a plateau region that has an uphill edge.
Characteristics of Hill Climbing

The main features of the algorithm are:

Employs a greedy approach: movement through the space of solutions always
goes in the direction that improves the objective function.
No backtracking: it works only with the current state; the history
of the search that led to that state is irrelevant.
Generate-and-test mechanism: neighbors are generated and tested to decide the
next current state.
Incremental change: the current state is improved through small incremental
changes.
Hill climbing algorithm

Advantages of Hill Climbing algorithm:

● Hill Climbing is a simple and intuitive algorithm that is easy to understand and implement.
● It can be used in a wide variety of optimization problems, including those with a large search space
and complex constraints.
● Hill Climbing is often very efficient in finding local optima, making it a good choice for problems
where a good solution is needed quickly.
● The algorithm can be easily modified and extended to include additional heuristics or constraints.
Hill climbing algorithm
Disadvantages of Hill Climbing algorithm:

● Hill Climbing can get stuck in local optima, meaning that it may not find the global optimum of the
problem.
● The algorithm is sensitive to the choice of initial solution, and a poor initial solution may result in a
poor final solution.
● Hill Climbing does not explore the search space very thoroughly, which can limit its ability to find
better solutions.
● It may be less effective than other optimization algorithms, such as genetic algorithms or simulated
annealing, for certain types of problems.
Hill climbing algorithm

Types of Hill Climbing Algorithm:

○ Simple hill climbing
○ Steepest-ascent hill climbing
○ Stochastic hill climbing
Hill climbing algorithm
1. Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one neighboring state at a time and
selects the first one that improves on the current state, making it the new current state. Because it checks only one successor at a
time, it moves as soon as it finds a better state; otherwise it stays in the same state. This algorithm has the following features:
Less time consuming
Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing (a code sketch follows the steps):
Step 1: Evaluate the initial state. If it is the goal state, return success and stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state:
If it is the goal state, return success and quit.
Else, if it is better than the current state, make the new state the current state.
Else, if it is not better than the current state, return to Step 2.
Step 5: Exit.
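A minimal Python sketch of simple hill climbing, following the steps above. The `neighbors`, `objective`, and `is_goal` helpers are assumed to be problem-specific and are not part of the slides.

```python
def simple_hill_climbing(initial_state, neighbors, objective, is_goal=None):
    """Simple hill climbing: accept the FIRST neighbor that beats the current state."""
    current = initial_state
    current_value = objective(current)
    while True:
        if is_goal is not None and is_goal(current):
            return current                          # Step 1 / Step 4: goal reached
        moved = False
        for candidate in neighbors(current):        # Step 3: apply operators one at a time
            candidate_value = objective(candidate)
            if candidate_value > current_value:     # Step 4: first improvement wins
                current, current_value = candidate, candidate_value
                moved = True
                break                               # ignore the remaining neighbors
        if not moved:                               # Step 2: no operator improves -> stop
            return current
```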
Simple Hill Climbing Example

(Worked example shown as a figure on the slides.)
Hill climbing algorithm
2. Steepest-Ascent Hill Climbing:
The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighboring nodes of the
current state and selects the neighbor that is closest to the goal state. This algorithm consumes more time because it evaluates
multiple neighbors before every move.
Algorithm for Steepest-Ascent Hill Climbing (a code sketch follows the steps):
Step 1: Evaluate the initial state. If it is the goal state, return success and stop; otherwise make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
Let SUCC be a state such that any successor of the current state will be better than it.
For each operator that applies to the current state:
Apply the operator and generate a new state.
Evaluate the new state.
If it is the goal state, return it and quit; otherwise compare it to SUCC.
If it is better than SUCC, set SUCC to the new state.
If SUCC is better than the current state, set the current state to SUCC.
Step 3: Exit.
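A matching Python sketch of steepest-ascent hill climbing (same assumed helpers as before). The only difference from simple hill climbing is that every neighbor is evaluated before a move is made.

```python
def steepest_ascent_hill_climbing(initial_state, neighbors, objective):
    """Steepest-ascent hill climbing: evaluate ALL neighbors, then move to the best one."""
    current = initial_state
    current_value = objective(current)
    while True:
        # SUCC starts as the current state; any strictly better successor replaces it.
        succ, succ_value = current, current_value
        for candidate in neighbors(current):
            candidate_value = objective(candidate)
            if candidate_value > succ_value:
                succ, succ_value = candidate, candidate_value
        # If SUCC is no better than the current state, we are at a peak.
        if succ_value <= current_value:
            return current
        current, current_value = succ, succ_value
```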
Feature | Simple Hill Climbing | Steepest-Ascent Hill Climbing
Neighbor evaluation | One at a time | All at once
Move decision | First better neighbor found | Best of all better neighbors
Speed | Faster (greedy) | Slower (thorough)
Risk of missing a better move | High | Lower
Risk of getting stuck | High (more likely to plateau) | Still possible, but lower
Hill climbing algorithm
3. Stochastic Hill Climbing:
Stochastic hill climbing does not examine all of its neighbors before moving. Instead, this search algorithm
selects one neighbor node at random and decides whether to make it the current state or examine
another state.
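A Python sketch of stochastic hill climbing (assumed helpers as before): a single neighbor is drawn at random and kept only if it improves the objective.

```python
import random

def stochastic_hill_climbing(initial_state, neighbors, objective, max_iterations=1000):
    """Stochastic hill climbing: examine ONE random neighbor per step, keep it only if better."""
    current = initial_state
    current_value = objective(current)
    for _ in range(max_iterations):
        candidates = list(neighbors(current))
        if not candidates:
            break
        candidate = random.choice(candidates)     # pick a single neighbor at random
        candidate_value = objective(candidate)
        if candidate_value > current_value:       # accept only uphill moves
            current, current_value = candidate, candidate_value
    return current
```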
Hill climbing algorithm

Problems in Hill Climbing Algorithm:


1. Local Maximum: A local maximum is a peak state in the landscape that is better than each of its
neighboring states, but there is another state in the landscape that is higher still.
Solution: A backtracking technique can be used to escape a local maximum in the state space landscape. Maintain a
list of promising states so that the algorithm can backtrack through the search space and explore other paths as
well; a sketch follows below.
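One way to realize the backtracking idea is to remember the improving neighbors that were not taken and jump back to one of them when the search gets stuck. The sketch below is an illustrative interpretation of the slide, not a standard algorithm from it; helpers are assumed as before.

```python
def hill_climbing_with_backtracking(initial_state, neighbors, objective, max_backtracks=20):
    """Hill climbing that records promising alternatives and backtracks at local maxima."""
    current = initial_state
    best = current
    promising = []                                  # stack of improving states not yet explored
    backtracks = 0
    while True:
        improving = sorted(
            (s for s in neighbors(current) if objective(s) > objective(current)),
            key=objective,
            reverse=True,
        )
        if improving:
            current = improving[0]                  # take the steepest improving move
            promising.extend(improving[1:])         # remember the other uphill options
            if objective(current) > objective(best):
                best = current
        elif promising and backtracks < max_backtracks:
            current = promising.pop()               # local maximum: backtrack to a saved state
            backtracks += 1
        else:
            return best
```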
Hill climbing algorithm

Problems in Hill Climbing Algorithm:


2. Plateau: A plateau is a flat area of the search space in which all the neighbors of the current state
have the same value; because of this, the algorithm cannot find a best direction to move in. A hill-climbing
search might get lost in the plateau area.

Solution: The remedy for a plateau is to take big steps (or occasionally very small steps) while searching.
Randomly select a state that is far away from the current state, so that the
algorithm may land in a non-plateau region; a sketch follows below.
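A small sketch of the random-jump remedy: when every neighbor has the same value as the current state, jump to a randomly chosen distant state. The helper `random_distant_state` is an assumption introduced for illustration.

```python
def escape_plateau(current, neighbors, objective, random_distant_state):
    """Plateau remedy: if all neighbors score the same as the current state, take a big random step."""
    candidates = list(neighbors(current))
    if candidates and all(objective(s) == objective(current) for s in candidates):
        return random_distant_state()               # big step: leave the flat region entirely
    return current                                  # not on a plateau, keep climbing normally
```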
Hill climbing algorithm

Problems in Hill Climbing Algorithm:


3. Ridges: A ridge is a special form of local maximum. It is an area that is higher than its surrounding
areas but that itself has a slope, so the peak cannot be reached in a single move.

Solution: Using bidirectional search, or moving in several different directions at once, can mitigate this
problem.
