What Is Hill Climbing Algorithm

The document discusses hill climbing algorithms, which are local search algorithms that continuously improve the current solution until reaching an optimal peak. It describes three main types of hill climbing algorithms: simple hill climbing, steepest ascent hill climbing, and stochastic hill climbing. For each type, it provides pseudocode describing the algorithm's steps. It also discusses features of hill climbing algorithms like employing a greedy approach and not backtracking, as well as challenges like getting stuck at local optima.

1. What is Hill Climbing Algorithm? Enumerate its Features, Illustrate and Explain its State Space.

Understanding Hill Climbing Algorithm in Artificial Intelligence

A hill-climbing algorithm is an Artificial Intelligence (AI) algorithm that continuously improves its current solution until it reaches a peak. It is used to optimize mathematical problems and in real-life applications such as marketing and job scheduling.

This article will improve our understanding of hill climbing in artificial intelligence. It discusses various
aspects such as features, problems, types, and algorithm steps of the hill-climbing algorithm. The article
will also highlight the applications of this algorithm.

Introduction to hill climbing algorithm

A hill-climbing algorithm is a local search algorithm that moves continuously upward (increasing) until
the best solution is attained. This algorithm comes to an end when the peak is reached.

This algorithm has a node that comprises two parts: state and value. It begins with a non-optimal state (the hill's base) and upgrades this state until a certain precondition is met. The heuristic function is used as the basis for this precondition. The process of continuously improving the current state of iteration is termed climbing, which explains why the algorithm is called a hill-climbing algorithm.

A hill-climbing algorithm’s objective is to attain an optimal state that is an upgrade of the existing state.
When the current state is improved, the algorithm will perform further incremental changes to the
improved state. This process will continue until a peak solution is achieved. The peak state cannot
undergo further improvements.

Features of a hill climbing algorithm

A hill-climbing algorithm has four main features:

It employs a greedy approach: This means that it moves in a direction in which the cost function is
optimized. The greedy approach enables the algorithm to establish local maxima or minima.

No Backtracking: A hill-climbing algorithm only works on the current state and succeeding states
(future). It does not look at the previous states.

Feedback mechanism: The algorithm has a feedback mechanism that helps it decide on the direction of
movement (whether up or down the hill). The feedback mechanism is enhanced through the generate-and-
test technique.

Incremental change: The algorithm improves the current solution by incremental changes.


State-space diagram analysis

A state-space diagram provides a graphical representation of states and the optimization function. If the
objective function is the y-axis, we aim to establish the local maximum and global maximum.

If the cost function represents this axis, we aim to establish the local minimum and global minimum.

The following diagram shows a simple state-space diagram. The objective function has been shown on
the y-axis, while the state-space represents the x-axis.

Image Source: Javatpoint

A state-space diagram consists of various regions that can be explained as follows:

Local maximum: A local maximum is a solution that surpasses other neighboring solutions or states but
is not the best possible solution.

Global maximum: This is the best possible solution achieved by the algorithm.

Current state: This is the existing or present state.

Flat local maximum: This is a flat region where the neighboring solutions attain the same value.

Shoulder: This is a plateau whose edge is stretching upwards.

Problems with hill climbing

There are three regions in which a hill-climbing algorithm cannot attain a global maximum or the optimal
solution: local maximum, ridge, and plateau.

Local maximum

At this point, the neighboring states have lower values than the current state. The greedy approach feature
will not move the algorithm to a worse off state. This will lead to the hill-climbing process’s termination,
even though this is not the best possible solution.

This problem can be solved using momentum. This technique adds a certain proportion (m) of the initial
weight to the current one. m is a value between 0 and 1. Momentum enables the hill-climbing algorithm
to take huge steps that will make it move past the local maximum.

Solution: The backtracking technique can also resolve a local maximum in the state-space landscape. Maintain a list of promising paths so that the algorithm can backtrack through the search space and explore other paths as well.
Plateau

In this region, the values attained by the neighboring states are the same. This makes it difficult for the
algorithm to choose the best direction.

This challenge can be overcome by taking a huge jump that will lead you to a non-plateau space.

Image Source: Tutorial and Example

Ridge

The hill-climbing algorithm may terminate itself when it reaches a ridge. This is because the peak of the
ridge is followed by downward movement rather than upward movement.

This impediment can be solved by going in different directions at once.


Image Source: VietMX Blog

Types of hill climbing algorithms

The following are the types of a hill-climbing algorithm:

Simple hill climbing

This is a simple form of hill climbing that evaluates the neighboring solutions. If the next neighbor state
has a higher value than the current state, the algorithm will move. The neighboring state will then be set
as the current one.

This algorithm consumes less time and requires little computational power. However, the solutions
produced by the algorithm are sub-optimal. In some cases, an optimal solution may not be guaranteed.

Algorithm

Evaluate the current state. If it is a goal state, stop the process and indicate success.

If step 1 did not establish a goal state, loop: apply an operator to the current state to obtain a new solution.

Evaluate the new solution. If the new state has a higher value than the current state, mark it as the current state.

Repeat steps 1 to 3 until a goal state is attained, then exit the process.
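The steps above can be sketched in Python. This is a minimal illustration on a toy one-dimensional problem; the objective function `f` and the `neighbors` helper are assumptions made for the example, not part of any particular library.

```python
def simple_hill_climbing(f, start, neighbors, max_iters=1000):
    """Move to the first neighbor that improves on the current state."""
    current = start
    for _ in range(max_iters):
        for candidate in neighbors(current):
            if f(candidate) > f(current):   # first improving neighbor wins
                current = candidate
                break
        else:
            return current                  # no neighbor improves: local peak
    return current

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
peak = simple_hill_climbing(f, start=0, neighbors=lambda x: [x - 1, x + 1])
print(peak)  # 3
```

Note that the search stops as soon as no neighbor is better, which is exactly how the algorithm gets trapped at local maxima on harder landscapes.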

Steepest – Ascent hill climbing

This algorithm is more advanced than the simple hill-climbing algorithm. It chooses the next node by
assessing the neighboring nodes. The algorithm moves to the node that is closest to the optimal or goal
state.

Algorithm
Evaluate the current state. If it is a goal state, stop the process and indicate success.

If step 1 did not establish a goal state, loop over the current state.

Set a state (X) such that all successors of the current state have higher values than it.

For each applicable operator, apply it and produce a new solution.

Evaluate this solution to establish whether it is a goal state. If so, exit the program; otherwise, compare it with the state (X).

If the new state has a higher value than the state (X), set it as X. If the state (X) has a higher value than the current state, set the current state to X.
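The procedure above can likewise be sketched in Python on a toy one-dimensional problem; the objective function `f` and the neighbour generator are illustrative assumptions, not part of the algorithm itself.

```python
def steepest_ascent(f, start, neighbors):
    """Examine all neighbours and move to the best one; stop when none improves."""
    current = start
    while True:
        best = max(neighbors(current), key=f)   # best neighbour by objective value
        if f(best) <= f(current):               # no neighbour beats the current state
            return current
        current = best

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
print(steepest_ascent(f, start=10, neighbors=lambda x: [x - 1, x + 1]))  # 3
```

Because every neighbour is evaluated at each step, this variant does more work per iteration than simple hill climbing, which matches the note that it consumes more time.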

Stochastic hill climbing

In this algorithm, the neighboring nodes are selected randomly. The selected node is assessed to establish
the level of improvement. The algorithm will move to this neighboring node if it has a higher value than
the current state.
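A minimal Python sketch of this variant, using an illustrative one-dimensional toy problem (the objective `f` and the neighbour list are assumptions for the example):

```python
import random

def stochastic_hill_climbing(f, start, neighbors, max_iters=10000):
    """Pick one random neighbour per step; move only if it improves the state."""
    current = start
    for _ in range(max_iters):
        candidate = random.choice(neighbors(current))
        if f(candidate) > f(current):
            current = candidate
    return current

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
print(stochastic_hill_climbing(f, start=0, neighbors=lambda x: [x - 1, x + 1]))
```

With enough iterations this converges to the peak at x = 3 on the toy problem; randomizing the neighbour choice makes each step cheap, at the cost of sometimes testing unhelpful moves.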

Applications of hill climbing algorithm

The hill-climbing algorithm can be applied in the following areas:

Marketing

A hill-climbing algorithm can help a marketing manager develop the best marketing plan. It is widely used in solving Traveling-Salesman problems, optimizing the distance covered and improving the travel time of sales team members by establishing local minima efficiently.

Robotics

Hill climbing is useful in the effective operation of robotics. It enhances the coordination of different
systems and components in robots.

Job Scheduling

The hill climbing algorithm has also been applied in job scheduling. This is a process in which system
resources are allocated to different tasks within a computer system. Job scheduling is achieved through
the migration of jobs from one node to a neighboring node. A hill-climbing technique helps establish the
right migration route.

Conclusion

Hill climbing is a very resourceful technique used in solving huge computational problems. It can help
establish the best solution for problems. This technique has the potential of revolutionizing optimization
within artificial intelligence.
In the future, technological advances to the hill climbing technique will help solve diverse and unique optimization problems.

2. What are the different types of Hill Climbing Algorithm? Write down the algorithms for each of them.

Features of Hill Climbing

It carries out a Heuristic search.

A heuristic function is one that ranks all the potential alternatives in a search algorithm based on the
information available. It helps the algorithm to select the best route to its solution.

This basically means that this search algorithm may not find the optimal solution to the problem, but it will give a good solution in a reasonable amount of time.

It is a variant of the generate-and-test algorithm.

The generate-and-test algorithm is as follows:

Step 1: Generate possible solutions.

Step 2: Evaluate to see if this is the expected solution.

Step 3: If the solution has been found, quit; else go back to Step 1.

Hill climbing takes the feedback from the test procedure, and the generator uses it to decide the next move in the search space. Hence, we call it a variant of the generate-and-test algorithm.
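The generate-and-test loop described above can be sketched as follows; the `generate` and `test` callables here are illustrative stand-ins, not part of a specific framework.

```python
import random

def generate_and_test(generate, test, max_tries=100000):
    """Generate candidate solutions until one passes the test."""
    for _ in range(max_tries):
        candidate = generate()
        if test(candidate):
            return candidate
    return None            # give up after max_tries candidates

# Toy example: search for the number in 0..10 whose square is 49.
found = generate_and_test(lambda: random.randint(0, 10), lambda x: x * x == 49)
print(found)  # 7
```

Hill climbing refines this loop: instead of generating candidates blindly, the generator uses feedback from the test to propose the next candidate near the current one.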

It uses the Greedy approach.  

At any point in state space, the search moves only in the direction that optimises the cost function, with the hope of finding the most optimal solution at the end.

State Space diagram for Hill Climbing

The state-space diagram is a graphical representation of the set of states (inputs) our search algorithm can reach vs. the value of our objective function (the function we intend to maximise/minimise). Here:

1. The X-axis denotes the state space, i.e. the states or configurations our algorithm may reach.
2. The Y-axis denotes the values of the objective function corresponding to a particular state.

The best solution will be the state where the objective function has its maximum value, the global maximum.

Following are the different regions in the State Space Diagram;

Local maxima: It is a state which is better than its neighbouring state however there exists a state which
is better than it (global maximum). This state is better because here the value of the objective function is
higher than its neighbours.

Global maxima: It is the best possible state in the state space diagram. This is because, at this state, the objective function has the highest value.

Plateau/flat local maxima: It is a flat region of state space where neighbouring states have the same
value.

Ridge: It is a region which is higher than its neighbour’s but itself has a slope. It is a special kind of local
maximum.

Current state: The region of state space diagram where we are currently present during the search.
(Denoted by the highlighted circle in the given image.)

Working of Hill Climbing Algorithm

Hill Climbing is the simplest implementation of a Genetic Algorithm. Instead of focusing on the ease of
implementation, it completely rids itself of concepts like population and crossover. It has faster iterations
compared to more traditional genetic algorithms, but in return, it is less thorough than the traditional ones.

Hill Climbing works in a very simple manner. So, we’ll begin by trying to print “Hello World”.
Even though it is not a challenging problem, it is still a pretty good introduction. 

So, here’s a basic skeleton of the solution.

best = generate_random_solution()
best_score = evaluate_solution(best)

while True:
    print('Best score so far', best_score, 'Solution', best)
    new_solution = copy_solution(best)
    mutate_solution(new_solution)

    score = evaluate_solution(new_solution)
    if score < best_score:
        best = new_solution
        best_score = score

Step1: Start with a random or an empty solution. This will be your “best solution”.

def generate_random_solution(length=13):
    return [random.choice(string.printable) for _ in range(length)]

This function needs to return a random solution. In a hill-climbing algorithm, making this a separate
function might be too much abstraction, but if you want to change the structure of your code to a
population-based genetic algorithm it will be helpful.

Step2: Evaluate.

def evaluate(solution):
    target = list("Hello, World!")
    diff = 0
    for i in range(len(target)):
        s = solution[i]
        t = target[i]
        diff += abs(ord(s) - ord(t))
    return diff

So our evaluation function is going to return a distance metric between two strings.

Step3: Make a copy of the solution and mutate it slightly.

def mutate_solution(solution):
    index = random.randint(0, len(solution) - 1)
    solution[index] = random.choice(string.printable)

Step4: Now, evaluate the new solution. If it’s better than the best solution, we replace the
best solution with this one. Go to step two and repeat steps 2 and 3.

Basically, to reach a solution to a problem, you’ll need to write three functions.

Creating a random solution.

Evaluating a solution and returning a result.

Mutating the solution in a random fashion.

Let’s get the code in a state that is ready to run.


import random
import string

def generate_random_solution(length=13):
    return [random.choice(string.printable) for _ in range(length)]

def evaluate(solution):
    target = list("Hello, World!")
    diff = 0
    for i in range(len(target)):
        s = solution[i]
        t = target[i]
        diff += abs(ord(s) - ord(t))
    return diff

def mutate_solution(solution):
    index = random.randint(0, len(solution) - 1)
    solution[index] = random.choice(string.printable)

best = generate_random_solution()
best_score = evaluate(best)

while True:
    print('Best score so far', best_score, 'Solution', "".join(best))

    if best_score == 0:
        break

    new_solution = list(best)
    mutate_solution(new_solution)

    score = evaluate(new_solution)
    if score < best_score:
        best = new_solution
        best_score = score

Testing the Code

Best score so far 392 Solution #KAKZ'yjrJo/5

Best score so far 392 Solution #KAKZ'yjrJo/5

Best score so far 390 Solution #KAKZ/yjrJo/5

Best score so far 347 Solution #KAKZ/yjrJon5

...

Best score so far 27 Solution Jojon,"osld

...

Best score so far 12 Solution H_mmn, Vosld

...

Best score so far 4 Solution Hemmo, Vorld

...

Best score so far 0 Solution Hello, World!

Types of Hill Climbing

1. Simple Hill Climbing

Simple hill climbing is the simplest way to implement a hill-climbing algorithm. It evaluates one neighbour node state at a time and selects the first one that improves the current cost, setting it as the current state. It checks only one successor state and, if that successor is better than the current state, moves to it; otherwise it stays in the same state. This algorithm has the following features:

Less time consuming

Less optimal solution

The solution is not guaranteed

Algorithm for Simple Hill Climbing

Step 1: Evaluate the initial state, if it is goal state then return success and Stop.

Step 2: Loop Until a solution is found or there is no new operator left to apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check new state:

 If it is goal state, then return success and quit.

else if it is better than the current state then assign new state as a current state.

else if not better than the current state, then return to step 2.

Step 5: Exit.

2. Steepest-Ascent hill climbing

The steepest-Ascent algorithm is a variation of the simple hill-climbing algorithm. This algorithm
examines all the neighboring nodes of the current state and selects one neighbor node which is closest to
the goal state. This algorithm consumes more time as it searches for multiple neighbours.

Algorithm for Steepest-Ascent hill climbing

Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make the current
state as your initial state.

Step 2: Loop until a solution is found or the current state does not change.

Let S be a state such that any successor of the current state will be better than it.

For each operator that applies to the current state;

Apply the new operator and generate a new state.


Evaluate the new state.

If it is goal state, then return it and quit, else compare it to the S.

If it is better than S, then set new state as S.

If the S is better than the current state, then set the current state to S.

Step 3: Exit.

3. Stochastic hill climbing

Stochastic hill climbing does not examine all its neighbours before moving. Instead, this search algorithm selects one neighbour node at random and evaluates it, deciding whether to move to that neighbour or to examine another.

Problems in different regions in Hill climbing

Hill climbing cannot reach the best possible state if it enters any of the following regions :

1. Local maximum: At a local maximum all neighbouring states have values which are worse than the
current state. Since hill-climbing uses a greedy approach, it will not move to the worse state and terminate
itself. The process will end even though a better solution may exist.
To overcome the local maximum problem: Utilise the backtracking technique. Maintain a list of
visited states. If the search reaches an undesirable state, it can backtrack to the previous configuration and
explore a new path.

2. Plateau: On the plateau, all neighbours have the same value. Hence, it is not possible to select the best
direction.

To overcome plateaus: Make a big jump. Randomly select a state far away from the current state. Chances are that we will land in a non-plateau region.

3. Ridge: Any point on a ridge can look like a peak because the movement in all possible directions is
downward. Hence, the algorithm stops when it reaches such a state.
To overcome Ridge: You could use two or more rules before testing. It implies moving in several
directions at once.

Simulated Annealing

A hill-climbing algorithm that never makes a move towards a lower value is guaranteed to be incomplete because it can get stuck on a local maximum. If the algorithm instead applies a random walk, moving to a randomly chosen successor, it may be complete but is not efficient.

Simulated Annealing is an algorithm which yields both efficiency and completeness. 

In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to settle into a low-energy crystalline state.

Simulated annealing applies the same idea: the algorithm picks a random move instead of the best move. If the random move improves the state, it is accepted and the search continues along that path. Otherwise, the algorithm accepts the downhill move with a probability of less than 1, allowing it to escape local maxima and explore other paths.
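The acceptance rule described above can be sketched in Python. The geometric cooling schedule, the starting temperature, and the toy objective below are illustrative assumptions; real applications tune these per problem.

```python
import math
import random

def simulated_annealing(f, start, neighbors, t0=10.0, cooling=0.99, t_min=1e-3):
    """Always accept improving moves; accept worse moves with prob exp(delta/T)."""
    current, t = start, t0
    while t > t_min:
        candidate = random.choice(neighbors(current))
        delta = f(candidate) - f(current)       # positive means improvement
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate                 # downhill moves allowed early on
        t *= cooling                            # temperature decays each step
    return current

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
print(simulated_annealing(f, start=0, neighbors=lambda x: [x - 1, x + 1]))
```

Early on, when `t` is high, `exp(delta / t)` is close to 1 even for worse moves, so the search explores freely; as `t` decays, the algorithm behaves more and more like ordinary hill climbing.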

3. Give some Problem Scenarios in which the Hill Climbing Algorithm will be of great significance.

Hill Climbing is a heuristic search used for mathematical optimization problems in the field of Artificial Intelligence. Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem. This solution may not be the global optimum.

In the above definition, mathematical optimization problems imply that hill climbing solves problems where we need to maximize or minimize a given real function by choosing values from the given inputs. Example: the Travelling Salesman Problem, where we need to minimize the distance traveled by the salesman.

‘Heuristic search’ means that this search algorithm may not find the optimal solution to the problem.
However, it will give a good solution in a reasonable time.

A heuristic function  is a function that will rank all the possible alternatives at any branching step in
the search algorithm based on the available information. It helps the algorithm to select the best route
out of possible routes.

Features of Hill Climbing

1. Variant of the generate-and-test algorithm:

It is a variant of the generate-and-test algorithm, which is as follows:

Generate possible solutions.

Test to see if this is the expected solution.

If the solution has been found, quit; else go to step 1.

Hence we call hill climbing a variant of the generate-and-test algorithm, as it takes the feedback from the test procedure. This feedback is then utilized by the generator in deciding the next move in the search space.

2. Uses the Greedy approach: 

At any point in state space, the search moves only in the direction that optimizes the cost function, with the hope of finding the optimal solution at the end.
Types of Hill Climbing

1. Simple Hill climbing:

It examines the neighboring nodes one by one and selects the first neighboring node which optimizes
the current cost as the next node. 

Algorithm for Simple Hill climbing :  

Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial
state as the current state. 

Loop until the solution state is found or there are no new operators present which can be applied to the
current state. 

Select an operator that has not yet been applied to the current state and apply it to produce a new state.

Evaluate the new state:

If the new state is a goal state, then stop and return success.

If it is better than the current state, then make it the current state and proceed further. 

If it is not better than the current state, then continue in the loop until a solution is found. 

Exit from the function.

2. Steepest-Ascent Hill climbing:

It first examines all the neighboring nodes and then selects the node closest to the solution state as the next node.
Algorithm for Steepest Ascent Hill climbing :

Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial
state as the current state. 

Repeat these steps until a solution is found or the current state does not change 

Select an operator that has not yet been applied to the current state.

Initialize a new 'best state' equal to the current state, then apply the operator to produce a new state.

Evaluate the new state:

If the new state is a goal state, then stop and return success.

If it is better than the best state, then make it the best state else continue the loop with another new
state.

Make the best state as the current state and go to Step 2 of the second point.

Exit from the function.

3. Stochastic hill climbing:  

It does not examine all the neighboring nodes before deciding which node to select. It just selects a
neighboring node at random and decides (based on the amount of improvement in that neighbor)
whether to move to that neighbor or to examine another. 

Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial
state the current state. 

Repeat these steps until a solution is found or the current state does not change.

Apply the successor function to the current state and generate all the neighbor states.

Among the generated neighbor states which are better than the current state choose a state randomly (or
based on some probability function). 

If the chosen state is the goal state, then return success, else make it the current state and repeat step 2
of the second point.

Exit from the function.


4. Basics of Means End Analysis (MEA) in Artificial Intelligence (AI)

April 7, 2021

Means end analysis (MEA) is an important concept in artificial intelligence (AI) because it enhances
problem resolution. MEA solves problems by defining the goal and establishing the right action plan.
This technique is used in AI programs to limit search.

This article explains how MEA works and provides the algorithm steps used to implement it. It also
provides an example of how a problem is solved using means end analysis. This article also explains how
this technique is used in real-life applications.

Introduction to MEA and problem-solving in AI

Problem-solving in artificial intelligence is the application of heuristics, root cause analysis, and
algorithms to provide solutions to AI problems.

It is an effective way of reaching a target goal from a problematic state. This process begins with the
collection of data relating to the problem. This data is then analyzed to establish a suitable solution.

Means end analysis is a technique used to solve problems in AI programs. This technique combines
forward and backward strategies to solve complex problems. With these mixed strategies, complex
problems can be tackled first, followed by smaller ones.
In this technique, the system evaluates the differences between the current state or position and the target
or goal state. It then decides the best action to be undertaken to reach the end goal.

How MEA works

Means end analysis uses the following processes to achieve its objectives:

First, the system evaluates the current state to establish whether there is a problem. If a problem is
identified, then it means that an action should be taken to correct it.

The second step involves defining the target or desired goal that needs to be achieved.

The target goal is split into sub-goals, which are further split into other smaller goals.

This step involves establishing the actions or operations that will be carried out to achieve the end state.

In this step, all the sub-goals are linked with corresponding executable actions (operations).

After that is done, intermediate steps are undertaken to solve the problems in the current state. The chosen
operators will be applied to reduce the differences between the current state and the end state.

This step involves tracking all the changes made to the actual state. Changes are made until the target
state is achieved.

The following image shows how the target goal is divided into sub-goals, which are then linked with executable actions.


Algorithm steps for Means End Analysis


The following are the algorithmic steps for means end analysis:

Conduct a study to assess the status of the current state. This can be done at a macro or micro level.

Capture the problems in the current state and define the target state. This can also be done at a macro or
micro level.

Make a comparison between the current state and the end state that you defined. If these states are the
same, then perform no further action. This is an indication that the problem has been tackled. If the two
states are not the same, then move to step 4.

Record the differences between the two states at the two aforementioned levels (macro and micro).

Transform these differences into adjustments to the current state.

Determine the right action for implementing the adjustments in step 5.

Execute the changes and compare the results with the target goal.

If there are still some differences between the current state and the target state, perform course correction
until the end goal is achieved.
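The steps above can be sketched as a small difference-reduction loop. The number-line states, the `differences` function, and the operator table below are illustrative assumptions for the example, not a general MEA implementation.

```python
def means_end_analysis(current, goal, differences, operators, max_steps=100):
    """Repeatedly apply an operator that reduces the current-goal difference."""
    for _ in range(max_steps):
        diffs = differences(current, goal)
        if not diffs:
            return current                     # no difference left: goal reached
        for diff in diffs:
            if diff in operators:              # link the difference to an action
                current = operators[diff](current)
                break
        else:
            raise RuntimeError("no operator reduces the remaining differences")
    return current

# Toy example: reach a goal position on a number line.
differences = lambda cur, goal: (["too_low"] if cur < goal
                                 else ["too_high"] if cur > goal else [])
operators = {"too_low": lambda s: s + 1, "too_high": lambda s: s - 1}
print(means_end_analysis(0, 5, differences, operators))  # 5
```

Each iteration mirrors the algorithm's comparison step: compute the differences between the current and target states, pick an operator linked to one of them, apply it, and repeat until no differences remain.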

Cannot recover from failure: Hill-climbing strategies expand the current state of the search and
evaluate its children. The best child is selected for further expansion; neither its siblings nor its parent are
retained. Because it keeps no history, the algorithm cannot recover from failures of its strategy.
Stuck at local maxima: Hill-climbing strategies tend to become stuck at local maxima. If the algorithm
reaches a state that has a better evaluation than any of its children, it halts.

If this state is not a goal, but just a local maximum, the algorithm may fail to find the best solution.

That is, performance might well improve in a limited setting, but because of the shape of the entire space,
it may never reach the overall best.

An example of local maxima occurs in the 8-puzzle. Often, in order to move a particular tile to
its destination, other tiles that are already in their goal positions must be moved out of the way. This is
necessary to solve the puzzle but temporarily worsens the board state. Because "better" need not be "best"
in an absolute sense, search methods without backtracking or some other recovery mechanism cannot
distinguish between local and global maxima.
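The failure mode just described can be seen in a few lines of code. The following is a minimal sketch of steepest-ascent hill climbing; the one-dimensional landscape and its scores are invented for illustration and are not from the text:

```python
def hill_climb(state, neighbors, score):
    """Steepest-ascent hill climbing: move to the best-scoring neighbor
    until no neighbor scores higher than the current state."""
    while True:
        candidates = neighbors(state)
        if not candidates:
            return state
        best = max(candidates, key=score)
        if score(best) <= score(state):
            return state  # local maximum: no child improves on the parent
        state = best

# Toy 1-D landscape: a local maximum at index 2, the global maximum at index 9.
values = [0, 3, 5, 4, 2, 1, 4, 6, 8, 10]

def neighbors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n < len(values)]

print(hill_climb(0, neighbors, values.__getitem__))  # 2 -- stuck, never reaches 9
```

Starting from index 0, the climber stops at the local maximum (index 2, value 5) because both neighbors score lower, even though the global maximum (value 10) lies further along; with no history kept, it cannot back up and try another route.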

Multiple local maxima: Hill-climbing functions can have multiple local maxima, which frustrates hill-
climbing methods. For example, in the 8-puzzle there are initial states from which every applicable rule
lowers the value of the hill-climbing function; such an initial state description is a local (but not a global)
maximum of the function.
Stuck on plateaus and ridges: The hill-climbing algorithm may also get stuck on plateaus and ridges. A
plateau is a flat region of the search space in which neighboring states all have the same evaluation, so
there is no uphill direction to follow; a ridge is a region in which reaching the peak requires a sequence of
moves, none of which individually improves the evaluation.

End-bunching: Consider the Instant Insanity puzzle, where the goal is to arrange four cubes one on top
of the other so that they form a stack four cubes high, with each of the four vertical sides showing exactly
one red, one blue, one green, and one white face.

The goal state can be characterized exactly as having each of the four colours represented on each of the
four vertical sides, so we may consider the goal state to have the evaluation vector (4, 4, 4, 4). The four
dimensions can be combined into a one-dimensional evaluation function simply by summing the four
components; in taking this sum it is natural to give equal weight to each component, since each dimension
has the same range of values and an analogous meaning. Hill climbing here greatly reduces the search
space, but the method still leaves a very large number of alternatives to investigate. There are many
equivalently valued options at each of the four nonterminal nodes of the state-action tree for Instant
Insanity, so hill climbing with this evaluation function rarely yields the answer in a single series of four
choices. The difficulty with this evaluation function is that it is much harder to increase it by the required
amount at the last (fourth) choice node than at earlier nodes. At most fourth nodes, no action achieves the
goal, even though the solver is at a node with the evaluation (3, 3, 3, 3). Whether or not the problem can
be solved is determined by the existence of such an action at the fourth node, but the evaluation of the
states reachable at earlier nodes gives very inadequate information about the "correct" fourth node at
which to be. That is, there are many fourth nodes with the evaluation (3, 3, 3, 3), and very few of them
have any action that leads to a terminal node with the evaluation (4, 4, 4, 4). There are many problems
like this, in which the restrictions bunch up at the end of the problem. It is as if there were many easy
trails up most of the mountain, but the summit were attainable from only a few of them, the rest running
into unscalable precipices. Hill climbing is often not a very good method in such cases, though it may
considerably reduce the amount of trial-and-error search. This end-bunching of restrictions is a difficulty
somewhat analogous to the local-maximum difficulty.
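The evaluation function described for Instant Insanity can be sketched as follows. The cube encoding (a dict mapping each visible side to a colour) is an assumption introduced for illustration, not part of the original text:

```python
SIDES = ("front", "back", "left", "right")

def evaluate(stack):
    """For each visible side of the stack, count the distinct colours showing.
    A solved four-cube stack evaluates to (4, 4, 4, 4)."""
    return tuple(len({cube[side] for cube in stack}) for side in SIDES)

def total(stack):
    # The one-dimensional hill-climbing function: the sum of the components.
    return sum(evaluate(stack))

# A solved stack: cube k shows the four colours rotated by k positions,
# so every vertical side of the stack displays all four colours.
colours = ("red", "blue", "green", "white")
stack = [{side: colours[(k + i) % 4] for i, side in enumerate(SIDES)}
         for k in range(4)]
print(evaluate(stack), total(stack))  # (4, 4, 4, 4) 16
```

The end-bunching problem is visible in this scoring: many three-cube prefixes reach (3, 3, 3, 3), but only a few of them admit a fourth cube that lifts every component to 4.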

Detours and circling: Problems with multiple equivalently valued paths at the early nodes can be
difficult to solve with hill climbing, but perhaps the greatest frustration in using the method comes in
detour problems, where at some node you must actually choose an action that decreases the evaluation.
Somewhat less difficulty is encountered in what might be called circling problems, where at one or more
nodes you must take actions that leave the evaluation unchanged. If the nodes where you must detour or
circle offer no better choices (that is, no choices that increase the evaluation), then you are more likely to
try detouring or circling than if those critical nodes do have better choices. When better choices are
available, you tend simply to take them and go on without considering the possibility of detouring or
circling. If the path you choose does not lead to the goal, you might go back and investigate alternative
paths, but the first ones to be investigated will be those that were equivalent or almost equivalent at some
previous node. Only after all of this fails are you likely to try detouring, that is, choosing an action at
some node that produces a state with a lower evaluation than the previous state.

The missionaries-and-cannibals problem is a famous example of the difficulties encountered by hill
climbing in detour problems.

Inference problems: The description of the problem state must generally be considered to include the
entire set of expressions given or derived up to that point. Since the goal is usually a single expression, it
is generally much more difficult to define an evaluation function, useful for hill climbing, that compares
the current state with the goal state. Another reason for the greater difficulty in using hill climbing in
inference problems is that the non-destructive operations frequently found in such problems are often not
one-to-one operations, that is, operations that take one expression as input and produce one expression as
output. Such one-to-one operations do exist, of course. In addition, however, inference problems usually
contain a variety of two-to-one, three-to-one, or even more complex operations, that is, operations that
take two, three, or more expressions as input and produce one expression as output (the inferred
expression).


5. How does MEA work? Write down the algorithm of MEA and illustrate this algorithm by solving a
particular example.

Basics of Means End Analysis (MEA) in Artificial Intelligence (AI)

April 7, 2021

Means end analysis (MEA) is an important concept in artificial intelligence (AI) because it enhances
problem resolution. MEA solves problems by defining the goal and establishing the right action plan.
This technique is used in AI programs to limit search.

This article explains how MEA works, provides the algorithm steps used to implement it, walks through
an example of a problem solved using means end analysis, and shows how the technique is used in
real-life applications.
Introduction to MEA and problem-solving in AI

Problem-solving in artificial intelligence is the application of heuristics, root cause analysis, and
algorithms to provide solutions to AI problems.

It is an effective way of reaching a target goal from a problematic state. This process begins with the
collection of data relating to the problem. This data is then analyzed to establish a suitable solution.

Means end analysis is a technique used to solve problems in AI programs. This technique combines
forward and backward strategies to solve complex problems. With these mixed strategies, complex
problems can be tackled first, followed by smaller ones.

In this technique, the system evaluates the differences between the current state or position and the target
or goal state. It then decides the best action to be undertaken to reach the end goal.

How MEA works

Means end analysis uses the following processes to achieve its objectives:

First, the system evaluates the current state to establish whether there is a problem. If a problem is
identified, then it means that an action should be taken to correct it.

The second step involves defining the target or desired goal that needs to be achieved.

The target goal is split into sub-goals, which are further split into smaller goals.

This step involves establishing the actions or operations that will be carried out to achieve the end state.

In this step, all the sub-goals are linked with corresponding executable actions (operations).

After that is done, intermediate steps are undertaken to solve the problems in the current state. The chosen
operators will be applied to reduce the differences between the current state and the end state.

This step involves tracking all the changes made to the actual state. Changes are made until the target
state is achieved.

The following image shows how the target goal is divided into sub-goals, which are then linked with
executable actions.

[Image: the target goal decomposed into sub-goals linked to executable actions]

Algorithm steps for Means End Analysis

The following are the algorithmic steps for means end analysis:

Conduct a study to assess the status of the current state. This can be done at a macro or micro level.

Capture the problems in the current state and define the target state. This can also be done at a macro or
micro level.

Make a comparison between the current state and the end state that you defined. If these states are the
same, perform no further action: the problem has been solved. If the two states are not the same, move to
step 4.

Record the differences between the two states at the two aforementioned levels (macro and micro).

Transform these differences into adjustments to the current state.

Determine the right action for implementing the adjustments in step 5.

Execute the changes and compare the results with the target goal.

If there are still some differences between the current state and the target state, perform course correction
until the end goal is achieved.
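The steps above can be condensed into a small search loop. This is a simplified, hypothetical sketch: states are encoded as dicts of features, operators are plain functions, and an operator is accepted whenever it reduces the number of remaining differences. A fuller MEA implementation would index operators by the specific difference each one reduces:

```python
def means_end_analysis(state, goal, operators, max_steps=100):
    """Repeatedly apply an operator that reduces the difference
    between the current state and the goal state."""
    def differences(s):
        return {k for k in goal if s.get(k) != goal[k]}

    for _ in range(max_steps):
        diff = differences(state)
        if not diff:
            return state            # current state matches the goal: done
        for op in operators:
            candidate = op(state)
            if len(differences(candidate)) < len(diff):
                state = candidate   # this operator narrowed the gap
                break
        else:
            return None             # no operator helps: stuck

    return None

# Hypothetical features mirroring a simple adjustment problem.
start = {"dot": True, "pos": "inside_circle", "size": "small"}
goal = {"dot": False, "pos": "top_right", "size": "large"}
operators = [lambda s: {**s, "dot": False},        # delete
             lambda s: {**s, "pos": "top_right"},  # move
             lambda s: {**s, "size": "large"}]     # expand
print(means_end_analysis(start, goal, operators) == goal)  # True
```

Each pass through the loop performs steps 3 through 7 of the algorithm: compare, record the differences, pick an action that shrinks them, execute it, and re-compare until no differences remain.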

Example of problem-solving in Means End Analysis

Let’s assume that we have the following initial state.


[Image: the initial state]

We want to apply the concept of means end analysis to establish whether there are any adjustments
needed. The first step is to evaluate the initial state, and compare it with the end goal to establish whether
there are any differences between the two states.

The following image shows a comparison between the initial state and the target state.

[Image: comparison of the initial state and the target state]

The image above shows that there is a difference between the current state and the target state. This
indicates that there is a need to make adjustments to the current state to reach the end goal.

The goal can be divided into sub-goals that are linked with executable actions or operations.

The following are the three operators that can be used to solve the problem.

1. Delete operator: The dot symbol at the top right corner in the initial state does not exist in the goal
state. The dot symbol can be removed by applying the delete operator.
[Image: the state after applying the delete operator]

2. Move operator: We will then compare the new state with the end state. The green diamond in the new
state is inside the circle while the green diamond in the end state is at the top right corner. We will move
this diamond symbol to the right position by applying the move operator.

[Image: the state after applying the move operator]

3. Expand operator: After evaluating the new state generated in step 2, we find that the diamond symbol
is smaller than the one in the end state. We can increase the size of this symbol by applying the expand
operator.

[Image: the state after applying the expand operator]

After applying the three operators above, we will find that the state in step 3 is the same as the end state.
There are no differences between these two states, which means that the problem has been solved.
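The three operators of this worked example can be modelled as simple state transformations. The feature-based encoding of the figures (dot present, diamond position, diamond size) is an assumption introduced for illustration:

```python
# Initial and goal states, encoded as features of the figures in the example.
initial = {"dot": True, "diamond_pos": "inside_circle", "diamond_size": "small"}
goal = {"dot": False, "diamond_pos": "top_right", "diamond_size": "large"}

def delete_dot(state):       # 1. Delete operator: remove the stray dot
    return {**state, "dot": False}

def move_diamond(state):     # 2. Move operator: relocate the diamond
    return {**state, "diamond_pos": "top_right"}

def expand_diamond(state):   # 3. Expand operator: enlarge the diamond
    return {**state, "diamond_size": "large"}

state = expand_diamond(move_diamond(delete_dot(initial)))
print(state == goal)  # True -- no differences remain; the problem is solved
```

Each operator removes exactly one difference between the current state and the goal, so applying all three in sequence leaves no differences, matching the conclusion of the worked example.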

Applications of Means End Analysis

Means end analysis can be applied in the following fields:

Organizational planning

Means end analysis is used in organizations to facilitate general management. It helps organizational
managers to conduct planning to achieve the objectives of the organization. The management reaches the
desired goal by dividing the main goals into sub-goals that are linked with actionable tasks.

Business transformation
This technique is used to implement transformation projects. If there are any desired changes in the
current state of a business project, means end analysis is applied to establish the new processes to be
implemented. The processes are split into sub-processes to enhance effective implementation.

Gap analysis

Gap analysis is the comparison between the current performance and the required performance. Means
end analysis is applied in this field to compare the existing technology and the desired technology in
organizations. Various operations are applied to fill the existing gap in technology.

Conclusion

This article has provided an overview of means end analysis and how it works. This is an important
technique that makes it possible to solve complex problems in AI programs.

To summarize:

We have gained an overview of problem-solving in artificial intelligence. We have also gained an
understanding of means end analysis.

We have learned the various steps taken in means end analysis to reach the desired state.

We have gained an overview of the algorithm steps for means end analysis.

We have learned an example of problem-solving in means end analysis.

We have gone through some of the applications of means end analysis.

Happy learning!
