Artificial Intelligence
LECTURE: 12
INSTRUCTOR: DR. HUSNAIN ASHFAQ
Search in Complex Environments
The searches discussed so far assumed environments that were fully observable and deterministic, where the goal was to find a sequence of actions leading to a goal state. This works well for well-structured problems such as puzzles or navigation on a known map. But what about situations where:
•The environment is huge or infinite?
•We don’t care about the path, only the end result?
•The system has incomplete knowledge or stochastic outcomes?
That’s where search in complex environments becomes important.
Why Move Beyond Classical Search
Sometimes we care only about the final outcome, not the steps to get there.
Examples:
• 8-Queens Problem: Find a configuration of 8 queens where no two attack each other.
• Real-world applications: circuit design, factory layout, job scheduling, network optimization, crop planning, etc.
● Initialization: Begin with an initial solution, often generated randomly or using a heuristic
method.
● Evaluation: Calculate the quality of the initial solution using an objective function or fitness
measure.
● Neighbor Generation: Generate neighboring solutions by making small changes (moves) to the
current solution.
● Selection: Choose the neighboring solution that results in the most significant improvement in the
objective function.
● Termination: Continue this process until a termination condition is met (e.g., reaching a
maximum number of iterations or finding a satisfactory solution).
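Taken together, these steps form the basic local-search (hill-climbing) loop. Below is a minimal Python sketch of that loop; the concrete objective function, neighbor generator, and one-dimensional example problem are illustrative assumptions, not part of the lecture.

```python
import random

def hill_climb(initial_state, objective, get_neighbors, max_iters=1000):
    # Initialization: start from a given (often randomly generated) solution
    current = initial_state
    current_value = objective(current)              # Evaluation
    for _ in range(max_iters):                      # Termination: iteration cap
        neighbors = get_neighbors(current)          # Neighbor generation
        best = max(neighbors, key=objective)        # Selection: biggest improvement
        if objective(best) <= current_value:        # no neighbor improves -> stop
            break
        current, current_value = best, objective(best)
    return current, current_value

# Illustrative (assumed) problem: maximize f(x) = -(x - 3)^2 over the reals.
f = lambda x: -(x - 3) ** 2
step_neighbors = lambda x: [x - 0.1, x + 0.1]
best_x, best_value = hill_climb(random.uniform(-10.0, 10.0), f, step_neighbors)
print(best_x, best_value)
```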
State Space Landscape
Hill climbing algorithm
Different regions in the state space landscape:
Local maximum: a state that is better than its neighboring states, but another state exists elsewhere in the landscape that is higher still.
Global maximum: the best possible state in the state space landscape; it has the highest value of the objective function.
Current state: the state in the landscape diagram where the agent is currently present.
Flat local maximum: a flat region of the landscape where all the neighbor states of the current state have the same value.
Shoulder: a plateau region that has an uphill edge, so progress is still possible.
Characteristics of Hill-Climbing:
● Hill Climbing is a simple and intuitive algorithm that is easy to understand and implement.
● It can be used in a wide variety of optimization problems, including those with a large search space
and complex constraints.
● Hill Climbing is often very efficient in finding local optima, making it a good choice for problems
where a good solution is needed quickly.
● The algorithm can be easily modified and extended to include additional heuristics or constraints.
Hill climbing algorithm
Disadvantages of Hill Climbing algorithm:
● Hill Climbing can get stuck in local optima, meaning that it may not find the global optimum of the
problem.
● The algorithm is sensitive to the choice of initial solution, and a poor initial solution may result in a
poor final solution.
● Hill Climbing does not explore the search space very thoroughly, which can limit its ability to find
better solutions.
● It may be less effective than other optimization algorithms, such as genetic algorithms or simulated
annealing, for certain types of problems.
Hill climbing algorithm
1. Simple hill climbing:
Simple hill climbing evaluates the neighboring nodes one at a time and moves to the first neighbor that improves on the current state, without examining the remaining neighbors.
2. Steepest-Ascent hill climbing:
Steepest-Ascent hill climbing is a variation of the simple hill climbing algorithm. It examines all the neighboring nodes of the current state and selects the one neighbor that is closest to the goal state. This algorithm takes more time because it evaluates multiple neighbors before each move.
Algorithm for Steepest-Ascent hill climbing:
Step 1: Evaluate the initial state. If it is the goal state, return success and stop; otherwise make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
  Let SUCC be a state such that any successor of the current state will be better than it.
  For each operator that applies to the current state:
    Apply the operator and generate a new state.
    Evaluate the new state. If it is the goal state, return it and quit; otherwise compare it to SUCC.
    If it is better than SUCC, then set the new state as SUCC.
  If SUCC is better than the current state, then set the current state to SUCC.
Step 3: Exit.
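The steps above map almost line-for-line onto code. The sketch below is one possible Python reading of this pseudocode; the objective function, goal test, and list of operators are assumed to be supplied by the problem and are not defined in the lecture.

```python
def steepest_ascent(initial_state, objective, is_goal, operators):
    # Step 1: if the initial state is already the goal, return it
    current = initial_state
    if is_goal(current):
        return current
    while True:                                   # Step 2: loop until no change
        succ = None                               # SUCC: best successor seen so far
        for op in operators:                      # each operator applicable to current
            new_state = op(current)               # apply operator, generate new state
            if is_goal(new_state):                # goal reached -> return and quit
                return new_state
            if succ is None or objective(new_state) > objective(succ):
                succ = new_state                  # better than SUCC -> new SUCC
        if succ is None or objective(succ) <= objective(current):
            return current                        # Step 3: exit; no better successor
        current = succ                            # move to the best successor
```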
Feature                | Simple Hill Climbing            | Steepest-Ascent Hill Climbing
Move decision          | First better neighbor found     | Best of all better neighbors
Risk of getting stuck  | High (more likely to plateau)   | Still possible, but less
Hill climbing algorithm
3. Stochastic hill climbing:
Stochastic hill climbing does not examine all of its neighbors before moving. Instead, the algorithm selects one neighbor node at random and decides whether to accept it as the current state or to examine another neighbor.
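A minimal sketch of this random-neighbor rule, assuming the same objective-and-neighbors interface used in the earlier sketches:

```python
import random

def stochastic_hill_climb(initial_state, objective, get_neighbors, max_iters=1000):
    current = initial_state
    for _ in range(max_iters):
        candidate = random.choice(get_neighbors(current))  # pick ONE neighbor at random
        if objective(candidate) > objective(current):      # accept it only if it improves
            current = candidate
        # otherwise keep the current state and try another random neighbor next iteration
    return current
```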
Hill climbing algorithm
Plateau: a flat region of the landscape where all neighboring states have the same value, so the algorithm cannot tell which direction leads uphill.
Solution: Take bigger steps (or occasionally very small steps) while searching. Randomly selecting a state far away from the current state makes it possible for the algorithm to land in a non-plateau region, as in the sketch below.
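One way to picture this remedy: when every neighbor has the same value as the current state (a plateau), take a large random jump instead of a small local step. The numeric one-dimensional state and the jump radius below are assumptions made for illustration.

```python
import random

def plateau_aware_move(current, objective, get_neighbors, jump_radius=5.0):
    """Take the normal uphill step, but jump far away when stuck on a plateau."""
    neighbors = get_neighbors(current)
    if all(objective(n) == objective(current) for n in neighbors):
        # plateau detected: randomly select a state far from the current one
        return current + random.uniform(-jump_radius, jump_radius)
    return max(neighbors, key=objective)  # otherwise move to the best neighbor
```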
Hill climbing algorithm
Ridge: a region that is higher than its surrounding states but slopes in a direction that single-step moves cannot follow, so the search cannot reach the top by moving along one axis at a time.
Solution: Use bidirectional search, or move in different directions at once, to get past this problem; a rough illustration follows.
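As a rough illustration of "moving in different directions", the neighbor generator below proposes moves along both axes and along the diagonals of a two-dimensional state, so the search is not limited to the single-axis steps that a ridge defeats. The 2-D state and step size are assumptions for the example.

```python
def ridge_aware_neighbors(state, step=0.1):
    """Neighbors of a 2-D state (x, y) that include diagonal moves,
    so a narrow ridge can still be climbed."""
    x, y = state
    deltas = [-step, 0.0, step]
    return [(x + dx, y + dy)
            for dx in deltas for dy in deltas
            if (dx, dy) != (0.0, 0.0)]
```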