What Is Hill Climbing Algorithm
A hill-climbing algorithm is an Artificial Intelligence (AI) search algorithm that moves to states of increasing value until it reaches a peak solution. The algorithm is used to optimize mathematical problems and in other real-life applications like marketing and job scheduling.
This article will improve our understanding of hill climbing in artificial intelligence. It discusses various
aspects such as features, problems, types, and algorithm steps of the hill-climbing algorithm. The article
will also highlight the applications of this algorithm.
A hill-climbing algorithm is a local search algorithm that moves continuously upward (increasing) until
the best solution is attained. This algorithm comes to an end when the peak is reached.
This algorithm has a node that comprises two parts: state and value. It begins with a non-optimal state
(the hill’s base) and upgrades this state until a certain precondition is met. The heuristic function is used
as the basis for this precondition. The process of continuously improving the current state over iterations can be termed climbing, which explains why the algorithm is called a hill-climbing algorithm.
A hill-climbing algorithm’s objective is to attain an optimal state that is an upgrade of the existing state.
When the current state is improved, the algorithm will perform further incremental changes to the
improved state. This process will continue until a peak solution is achieved. The peak state cannot
undergo further improvements.
It employs a greedy approach: This means that it moves in a direction in which the cost function is
optimized. The greedy approach enables the algorithm to establish local maxima or minima.
No Backtracking: A hill-climbing algorithm only works on the current state and succeeding states
(future). It does not look at the previous states.
Feedback mechanism: The algorithm has a feedback mechanism that helps it decide on the direction of movement (whether up or down the hill). The feedback comes from the generate-and-test technique: candidate states are generated and tested, and the outcome guides the next move, as the sketch below illustrates.
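To make these features concrete, here is a minimal generate-and-test loop in Python. The hill_climb function, its neighbours helper, and the objective f are illustrative assumptions, not part of any standard library:

def hill_climb(start, neighbours, f):
    # Greedy search: always take the best neighbour, never backtrack.
    current = start
    while True:
        # Generate candidate successors, then test them (generate-and-test).
        best_next = max(neighbours(current), key=f, default=None)
        if best_next is None or f(best_next) <= f(current):
            return current  # no uphill move exists: a (possibly local) peak
        current = best_next  # previous states are forgotten (no backtracking)

# Example: maximize f(x) = -(x - 3)**2 over integer states.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))  # prints 3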
A state-space diagram provides a graphical representation of states and the optimization function. If the
objective function is the y-axis, we aim to establish the local maximum and global maximum.
If the cost function represents this axis, we aim to establish the local minimum and global minimum.
The following diagram shows a simple state-space diagram. The objective function has been shown on
the y-axis, while the state-space represents the x-axis.
Local maximum: A local maximum is a solution that surpasses other neighboring solutions or states but
is not the best possible solution.
Flat local maximum: This is a flat region where the neighboring solutions attain the same value.
There are three regions in which a hill-climbing algorithm cannot attain a global maximum or the optimal
solution: local maximum, ridge, and plateau.
Local maximum
At this point, the neighboring states have lower values than the current state. The greedy approach feature
will not move the algorithm to a worse off state. This will lead to the hill-climbing process’s termination,
even though this is not the best possible solution.
This problem can be solved using momentum. This technique adds a certain proportion (m) of the previous step to the current one, where m is a value between 0 and 1. Momentum enables the hill-climbing algorithm to take bigger steps that can carry it past the local maximum.
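As a rough numeric sketch of this idea (the update rule, step size, and momentum value here are illustrative assumptions, not a standard formulation):

def climb_with_momentum(f, x, step=0.1, m=0.9, iters=500):
    # 1-D hill climbing that carries a proportion m of the previous
    # move into the current one, so the search can roll past a
    # small local maximum instead of stopping at it.
    velocity = 0.0
    for _ in range(iters):
        direction = step if f(x + step) >= f(x - step) else -step
        velocity = m * velocity + direction  # momentum update
        x += velocity
    return x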
Plateau
In this region, the values attained by the neighboring states are the same, which makes it difficult for the algorithm to choose the best direction.
This challenge can be overcome by taking a big jump that lands the search in a non-plateau region.
Ridge
The hill-climbing algorithm may terminate itself when it reaches a ridge. This is because the peak of the
ridge is followed by downward movement rather than upward movement.
Types of hill climbing algorithms
Simple hill climbing
This is a simple form of hill climbing that evaluates the neighboring solutions. If the next neighbor state has a higher value than the current state, the algorithm moves there and sets that neighbor as the current state.
This algorithm consumes less time and requires little computational power. However, the solutions
produced by the algorithm are sub-optimal. In some cases, an optimal solution may not be guaranteed.
Algorithm
Step 1: Assess the current state. Stop and indicate success if it is a goal state.
Step 2: If the assessment in step 1 did not establish a goal state, loop over the successors of the current state.
Step 3: Assess each new solution. If a new state has a higher value than the current state, mark it as the current state.
Step 4: Continue steps 1 to 3 until a goal state is attained, then exit the process.
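A minimal Python rendering of these steps might look as follows; the neighbours, value, and is_goal helpers are assumed to be supplied by the problem:

def simple_hill_climbing(current, neighbours, value, is_goal):
    # Move to the first neighbour that improves on the current state.
    while not is_goal(current):
        for candidate in neighbours(current):
            if value(candidate) > value(current):
                current = candidate  # first improving successor wins
                break
        else:
            return current  # no neighbour is better: stop at this peak
    return current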
Steepest-ascent hill climbing
This algorithm is more advanced than the simple hill-climbing algorithm. It chooses the next node by assessing all the neighboring nodes and moves to the node closest to the optimal or goal state.
Algorithm
Step 1: Assess the current state. Stop and indicate success if it is a goal state.
Step 2: If the assessment in step 1 did not establish a goal state, loop over the successors of the current state.
Step 3: Set a state (X) such that all successors of the current state have higher values than it.
Step 4: Assess each new solution to establish whether it is a goal state. If so, exit the program. Otherwise, compare it with the state (X); if the new state has a higher value, set it as (X).
Step 5: If the state (X) has a higher value than the current state, set the current state to (X) and repeat from step 1.
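The same assumed helpers give a compact sketch of steepest ascent; note that all neighbours are examined before a move is made:

def steepest_ascent(current, neighbours, value, is_goal):
    # Examine every neighbour and move to the best one, if it improves.
    while not is_goal(current):
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # no successor beats the current state
        current = best
    return current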
Stochastic hill climbing
In this algorithm, the neighboring nodes are selected randomly. The selected node is assessed to establish its level of improvement. The algorithm moves to this neighboring node if it has a higher value than the current state, as the sketch below shows.
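A sketch of the stochastic variant, assuming neighbours returns a list of candidate states:

import random

def stochastic_hill_climbing(current, neighbours, value, steps=10000):
    # Evaluate one randomly chosen neighbour at a time; move only on improvement.
    for _ in range(steps):
        candidate = random.choice(neighbours(current))
        if value(candidate) > value(current):
            current = candidate
    return current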
Marketing
A hill-climbing algorithm can help a marketing manager develop the best marketing plans. The algorithm is widely used for solving Travelling Salesman Problems: it can optimize the distance covered and improve the travel time of sales team members, establishing good local minima efficiently.
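As an illustration, a small hill climber for a made-up travelling-salesman instance can repeatedly swap two cities in the tour and keep any swap that shortens the route (the city coordinates below are invented for the example):

import itertools
import math
import random

def tour_length(tour, coords):
    # Total length of the closed tour, including the return leg.
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def tsp_hill_climb(coords):
    tour = list(range(len(coords)))
    random.shuffle(tour)
    best = tour_length(tour, coords)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            tour[i], tour[j] = tour[j], tour[i]  # try swapping two cities
            length = tour_length(tour, coords)
            if length < best:
                best, improved = length, True  # keep the shorter tour
            else:
                tour[i], tour[j] = tour[j], tour[i]  # undo the swap
    return tour, best

cities = [(0, 0), (2, 1), (5, 0), (3, 4), (1, 3)]
print(tsp_hill_climb(cities))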
Robotics
Hill climbing is useful in the effective operation of robotics. It enhances the coordination of different
systems and components in robots.
Job Scheduling
The hill climbing algorithm has also been applied in job scheduling. This is a process in which system
resources are allocated to different tasks within a computer system. Job scheduling is achieved through
the migration of jobs from one node to a neighboring node. A hill-climbing technique helps establish the
right migration route.
Conclusion
Hill climbing is a very resourceful technique for solving huge computational problems. It can help establish the best solution for such problems and has the potential to revolutionize optimization within artificial intelligence.
In the future, advances in the hill climbing technique will help solve diverse and unique optimization problems.
A heuristic function is one that ranks all the potential alternatives in a search algorithm based on the
information available. It helps the algorithm to select the best route to its solution.
This basically means that this search algorithm may not find the optimal solution to the problem but it
will give the best possible solution in a reasonable amount of time.
The generate-and-test algorithm works as follows:
Step 1: Generate a possible solution.
Step 2: Test to see if this is the expected solution.
Step 3: If the solution has been found, quit; else go back to step 1.
Hill climbing takes the feedback from the test procedure, and the generator uses it in deciding the next move in the search space. Hence, we call it a variant of the generate-and-test algorithm.
At any point in state space, the search moves only in the direction that optimizes the cost function, with the hope of finding the most optimal solution at the end.
Local maxima: It is a state which is better than its neighbouring states; however, there exists a state which is better than it (the global maximum). This state is better because here the value of the objective function is higher than its neighbours.
Global maxima: It is the best possible state in the state space diagram. This is because, at this state, the objective function has the highest value.
Plateau/flat local maxima: It is a flat region of state space where neighbouring states have the same
value.
Ridge: It is a region which is higher than its neighbours but itself has a slope. It is a special kind of local maximum.
Current state: The region of the state space diagram where we are currently present during the search.
Hill Climbing can be viewed as the simplest implementation of a genetic algorithm: it completely rids itself of concepts like population and crossover. It has faster iterations compared to more traditional genetic algorithms, but in return, it is less thorough than the traditional ones.
Hill Climbing works in a very simple manner. So, we'll begin by trying to evolve the string "Hello, World!". Even though it is not a challenging problem, it is still a pretty good introduction.
best = generate_random_solution()
best_score = evaluate(best)

while True:
    new_solution = copy_solution(best)
    mutate_solution(new_solution)

    score = evaluate(new_solution)
    if score < best_score:  # lower score = closer to the target string
        best = new_solution
        best_score = score
Step 1: Start with a random or an empty solution. This will be your "best solution".

def generate_random_solution(length=13):
    return [random.choice(string.printable) for _ in range(length)]
This function needs to return a random solution. In a hill-climbing algorithm, making this a separate
function might be too much abstraction, but if you want to change the structure of your code to a
population-based genetic algorithm it will be helpful.
Step 2: Evaluate.

def evaluate(solution):
    target = list("Hello, World!")
    diff = 0
    for i in range(len(target)):
        s = solution[i]
        t = target[i]
        diff += abs(ord(s) - ord(t))
    return diff
So our evaluation function is going to return a distance metric between two strings.
Step 3: Make a copy of the best solution and mutate it.

def mutate_solution(solution):
    index = random.randint(0, len(solution) - 1)
    solution[index] = random.choice(string.printable)
Step 4: Now, evaluate the new solution. If it is better (a lower score) than the best solution, replace the best solution with it, then repeat steps 2 and 3. The full program:
import random
import string

def generate_random_solution(length=13):
    return [random.choice(string.printable) for _ in range(length)]

def evaluate(solution):
    target = list("Hello, World!")
    diff = 0
    for i in range(len(target)):
        s = solution[i]
        t = target[i]
        diff += abs(ord(s) - ord(t))
    return diff

def mutate_solution(solution):
    index = random.randint(0, len(solution) - 1)
    solution[index] = random.choice(string.printable)

best = generate_random_solution()
best_score = evaluate(best)

while True:
    if best_score == 0:
        break

    new_solution = list(best)
    mutate_solution(new_solution)

    score = evaluate(new_solution)
    if score < best_score:
        best = new_solution
        best_score = score
Simple hill climbing
Simple hill climbing is the simplest way to implement a hill-climbing algorithm. It evaluates one neighbour node state at a time and selects the first one that optimizes the current cost, setting it as the current state. In other words, it checks only one successor state, and if that successor is better than the current state, it moves there; otherwise, it stays in the same state. The algorithm proceeds as follows:
Step 1: Evaluate the initial state; if it is a goal state, then return success and stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply a new operator to the current state.
Step 4: Evaluate the new state: if it is a goal state, then return success and quit; else if it is better than the current state, then assign the new state as the current state; else if it is not better than the current state, then return to step 2.
Step 5: Exit.
Steepest-ascent hill climbing
The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm. This algorithm examines all the neighboring nodes of the current state and selects the neighbor node closest to the goal state. It consumes more time, as it searches multiple neighbours.
Step 1: Evaluate the initial state; if it is a goal state, then return success and stop. Otherwise, make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
Step 3: Let S be a state such that any successor of the current state will be better than it. For each operator that applies to the current state, produce a new state and evaluate it; if it is a goal state, return it and quit, otherwise compare it with S, and if it is better than S, set it as S.
Step 4: If S is better than the current state, then set the current state to S.
Step 5: Exit.
Stochastic hill climbing
Stochastic hill climbing does not examine all of its neighbours before moving. Rather, this search algorithm selects one neighbour node at random, evaluates it, and then decides whether to move to it or to examine another.
Hill climbing cannot reach the best possible state if it enters any of the following regions:
1. Local maximum: At a local maximum all neighbouring states have values which are worse than the
current state. Since hill-climbing uses a greedy approach, it will not move to the worse state and terminate
itself. The process will end even though a better solution may exist.
To overcome the local maximum problem: Utilise the backtracking technique. Maintain a list of
visited states. If the search reaches an undesirable state, it can backtrack to the previous configuration and
explore a new path.
2. Plateau: On the plateau, all neighbours have the same value. Hence, it is not possible to select the best
direction.
To overcome plateaus: Make a big jump. Randomly select a state far away from the current state. Chances are that we will land in a non-plateau region (the random-restart sketch below builds on this idea).
3. Ridge: Any point on a ridge can look like a peak because the movement in all possible directions is
downward. Hence, the algorithm stops when it reaches such a state.
To overcome ridges: Apply two or more rules before testing. This implies moving in several directions at once.
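The "big jump" remedy is essentially what random-restart hill climbing does: run the basic climber from several random starting states and keep the best peak found. A minimal sketch, assuming a hill_climb routine and helpers like those used earlier:

def random_restart(hill_climb, random_state, value, restarts=20):
    # Each restart is a big random jump; with enough of them, at least
    # one run is likely to avoid a given local maximum or plateau.
    best = hill_climb(random_state())
    for _ in range(restarts - 1):
        peak = hill_climb(random_state())
        if value(peak) > value(best):
            best = peak
    return best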
Simulated Annealing
A hill-climbing algorithm that never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. And if the algorithm applies a pure random walk, moving to a successor chosen at random, it may be complete but is not efficient.
Mechanically, the term annealing refers to the process of hardening a metal or glass by heating it to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state.
The same idea is used in simulated annealing, in which the algorithm picks a random move instead of the best move. If the random move improves the state, the algorithm follows that path. Otherwise, it accepts the downhill move with some probability less than 1, which lets it escape local maxima; see the sketch below.
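A minimal simulated-annealing sketch in Python; the exponential cooling schedule and starting temperature are common choices rather than fixed parts of the algorithm, and the neighbour and value helpers are assumptions:

import math
import random

def simulated_annealing(state, neighbour, value, t0=1.0, cooling=0.995, t_min=1e-4):
    t = t0
    while t > t_min:
        candidate = neighbour(state)
        delta = value(candidate) - value(state)
        # Always accept uphill moves; accept downhill moves with
        # probability exp(delta / t), which shrinks as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            state = candidate
        t *= cooling
    return state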
Hill Climbing is a heuristic search used for mathematical optimization problems in the field of
Artificial Intelligence.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem. This solution may not be the global maximum.
In the above definition, mathematical optimization problems imply that hill-climbing solves the
problems where we need to maximize or minimize a given real function by choosing values from the
given inputs. Example: the Travelling Salesman Problem, where we need to minimize the distance traveled by the salesman.
‘Heuristic search’ means that this search algorithm may not find the optimal solution to the problem.
However, it will give a good solution in a reasonable time.
A heuristic function is a function that will rank all the possible alternatives at any branching step in
the search algorithm based on the available information. It helps the algorithm to select the best route
out of possible routes.
It is a variant of the generate-and-test algorithm, which is as follows:
1. Generate a possible solution.
2. Test to see if this is the expected solution.
3. If the solution has been found, quit; else go to step 1.
Hence we call hill climbing a variant of the generate-and-test algorithm, as it takes the feedback from the test procedure. This feedback is then utilized by the generator in deciding the next move in the search space.
At any point in state space, the search moves only in the direction that optimizes the cost function, with the hope of finding the optimal solution at the end.
Types of Hill Climbing
Simple hill climbing: It examines the neighboring nodes one by one and selects the first neighboring node which optimizes the current cost as the next node. Algorithm for simple hill climbing:
Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial
state as the current state.
Loop until the solution state is found or there are no new operators present which can be applied to the
current state.
Select an operator that has not yet been applied to the current state and apply it to produce a new state. Then evaluate the new state:
If the new state is a goal state, then stop and return success.
If it is better than the current state, then make it the current state and proceed further.
If it is not better than the current state, then continue in the loop until a solution is found.
Steepest-ascent hill climbing: It first examines all the neighboring nodes and then selects the node closest to the solution state as the next node.
Algorithm for steepest-ascent hill climbing:
Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial
state as the current state.
Repeat these steps until a solution is found or the current state does not change:
Initialize a 'best state' equal to the current state.
Select an operator that has not yet been applied to the current state and apply it to produce a new state.
If the new state is a goal state, then stop and return success.
If it is better than the best state, then make it the best state; else continue the loop with another new state.
Make the best state the current state and go to step 2.
Stochastic hill climbing: It does not examine all the neighboring nodes before deciding which node to select. It just selects a neighboring node at random and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another. Algorithm for stochastic hill climbing:
Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial
state the current state.
Repeat these steps until a solution is found or the current state does not change:
Apply the successor function to the current state and generate all the neighbor states.
Among the generated neighbor states which are better than the current state, choose one at random (or based on some probability function).
If the chosen state is the goal state, then return success; else make it the current state and repeat step 2.
The state-space diagram is a graphical representation of the set of states our search algorithm can reach versus the value of our objective function (the function we wish to maximize).
X-axis: denotes the state space, i.e., the states or configurations our algorithm may reach.
Y-axis: denotes the values of the objective function corresponding to a particular state.
The best solution will be that state space where the objective function has the maximum value (global maximum).
Shoulder: It is a plateau that has an uphill edge.
Limitations of hill climbing
Cannot recover from failure: Hill-climbing strategies expand the current state of the search and evaluate its children. The best child is selected for further expansion; neither its siblings nor its parent are retained. Because it keeps no history, the algorithm cannot recover from failures of its strategy.
Stuck at local maxima: Hill-climbing strategies have a tendency to become stuck at local maxima. If they reach a state that has a better evaluation than any of its children, the algorithm halts. If this state is not a goal, but just a local maximum, the algorithm may fail to find the best solution.
That is, performance might well improve in a limited setting, but because of the shape of the entire space,
it may never reach the overall best.
An example of local maxima in games occurs in the 8-puzzle. Often, in order to move a particular tile to its destination, other tiles that are already in their goal positions need to be moved out of the way. This is necessary to solve the puzzle but temporarily worsens the board state. Because "better" need not be "best" in an absolute sense, search methods without backtracking or some other recovery mechanism are unable to distinguish between local and global maxima.
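The effect is easy to see with a misplaced-tiles style evaluation; the board layout below is invented for the example:

def tiles_in_place(board, goal):
    # Number of tiles already in their goal position (higher is better).
    # 0 marks the blank square and is not counted.
    return sum(1 for b, g in zip(board, goal) if b == g and b != 0)

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
board = (1, 2, 3, 4, 5, 6, 7, 0, 8)  # one slide away from the goal
print(tiles_in_place(board, goal))  # 7
# From many other positions, the only way forward first *reduces* this
# count, so a pure hill climber refuses the necessary move.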
Multiple local maxima: Hill-climbing functions can have multiple local maxima, which frustrates hill-climbing methods. For example, in the 8-puzzle there are pairs of goal and initial states for which any applicable rule applied to the initial state description lowers the value of our hill-climbing function. In such a case, the initial state description is a local (but not a global) maximum of the function.
Stuck on plateaus and ridges: The hill-climbing algorithm may also get stuck on plateaus and ridges.
End-bunching: Consider the Instant Insanity puzzle, where the goal is to arrange four cubes one on top of the other so that they form a stack four cubes high, with each of the four sides showing exactly one red, one blue, one green, and one white face.
The goal state can be characterized exactly as having each of the four colours represented on each of the four vertical sides. Hence, we may consider the goal state to have the evaluation vector (4, 4, 4, 4). The four dimensions can rather naturally be combined into a one-dimensional evaluation function simply by summing the four components. In obtaining this sum, it seems natural to give equal weight to each component, since each dimension has the same range of values and an analogous meaning.
Hill climbing here greatly reduces the search space, but the method still leaves a very large number of alternatives to investigate. There are many equivalent options at each of the four nonterminal nodes of the state-action tree for Instant Insanity, so hill climbing with this evaluation function hardly yields the answer with a single series of four choices. The difficulty with this state evaluation function applied to this problem is that it is much harder to increase the evaluation function by the required amount at the last (fourth) choice node than at earlier nodes. At most of the last nodes, no action will achieve the goal, even though the solver is currently at a node that has the evaluation (3, 3, 3, 3). Whether or not you can solve the problem is determined by the existence of such an action at the fourth node, but the evaluation function for the states that could be achieved at earlier nodes gives very inadequate information concerning the "correct" fourth node at which to be. That is, there are many fourth nodes with the evaluation (3, 3, 3, 3), and very few of them have any action that leads to a terminal node with the evaluation (4, 4, 4, 4).
There are many problems like this, where the restrictions bunch up at the end of the problem. It is as if you had many easy trails to climb most of the way up a mountain, but the summit was attainable from only a few of these trails, with the rest running into unscalable precipices. Hill climbing is often not a very good method to use in such cases, though it may considerably reduce the amount of trial-and-error search. The end-bunching of restrictions is a difficulty with hill climbing that is somewhat analogous to the local maximum difficulty.
Detours and circling: Problems with multiple equivalently valued paths at the early nodes can be difficult to solve with hill climbing, but perhaps the greatest frustration in using the method comes in detour problems, where at some node you must actually choose an action that decreases the evaluation. Somewhat less difficulty is encountered in what might be called circling problems, where at one or more nodes you must take actions that do not increase the evaluation. If the nodes where you must detour or circle have no better choices (that is, no choices that increase the evaluation), then you are more likely to try detouring or circling than if the critical nodes have better choices. When better choices are available, you tend to just choose them and go on without considering the possibility of detouring or circling. If the path you choose does not lead to the goal, you might go back and investigate alternative paths, but the first ones to be investigated will be those that were equivalent or almost equivalent at some previous node. Only after all of this fails should you try detouring, that is, choosing an action at some node that produces a state with a lower evaluation than the previous state had.
Inference problems: The description of the problem state must generally be considered to include the entire set of expressions given or derived up to that point. Since the goal is usually a single expression, it is generally much more difficult to define an evaluation function, useful for hill climbing, that compares the current state with the goal state. Another reason for the greater difficulty in using hill climbing in inference problems is that the non-destructive operations frequently found in such problems are often not one-to-one operations, that is, operations that take one expression as input and produce one expression as output. There are such one-to-one operations, of course. However, in addition, inference problems usually contain a variety of two-to-one, three-to-one, or even more complex operations, that is, operations that take two or more expressions as input and produce one expression as output (the inferred expression).
Means end analysis (MEA) is an important concept in artificial intelligence (AI) because it enhances
problem resolution. MEA solves problems by defining the goal and establishing the right action plan.
This technique is used in AI programs to limit search.
This article explains how MEA works and provides the algorithm steps used to implement it. It also
provides an example of how a problem is solved using means end analysis. This article also explains how
this technique is used in real-life applications.
Introduction to MEA and problem-solving in AI
Problem-solving in artificial intelligence is the application of heuristics, root cause analysis, and
algorithms to provide solutions to AI problems.
It is an effective way of reaching a target goal from a problematic state. This process begins with the
collection of data relating to the problem. This data is then analyzed to establish a suitable solution.
Means end analysis is a technique used to solve problems in AI programs. This technique combines
forward and backward strategies to solve complex problems. With these mixed strategies, complex
problems can be tackled first, followed by smaller ones.
In this technique, the system evaluates the differences between the current state or position and the target
or goal state. It then decides the best action to be undertaken to reach the end goal.
Means end analysis uses the following processes to achieve its objectives:
First, the system evaluates the current state to establish whether there is a problem. If a problem is
identified, then it means that an action should be taken to correct it.
The second step involves defining the target or desired goal that needs to be achieved.
The target goal is split into sub-goals, which are further split into other smaller goals.
This step involves establishing the actions or operations that will be carried out to achieve the end state.
In this step, all the sub-goals are linked with corresponding executable actions (operations).
After that is done, intermediate steps are undertaken to solve the problems in the current state. The chosen
operators will be applied to reduce the differences between the current state and the end state.
This step involves tracking all the changes made to the actual state. Changes are made until the target
state is achieved.
The following image shows how the target goal is divided into sub-goals, which are then linked with executable actions.
The following are the algorithmic steps for means end analysis:
Step 1: Conduct a study to assess the status of the current state. This can be done at a macro or micro level.
Step 2: Capture the problems in the current state and define the target state. This can also be done at a macro or micro level.
Step 3: Make a comparison between the current state and the end state that you defined. If these states are the same, then perform no further action; the problem has been tackled. If the two states are not the same, then move to step 4.
Step 4: Record the differences between the two states at the two aforementioned levels (macro and micro).
Step 5: Execute the changes and compare the results with the target goal.
Step 6: If there are still some differences between the current state and the target state, perform course correction until the end goal is achieved.
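A toy rendering of this difference-reduction loop in Python. The set-based states, the operator encoding, and the example below are invented for illustration; real MEA systems use far richer difference tables:

def means_end_analysis(state, goal, operators):
    # Repeatedly choose an operator that reduces the difference
    # between the current state and the goal state.
    plan = []
    while state != goal:
        missing, extra = goal - state, state - goal
        # Pick any operator that adds a missing property or removes an extra one.
        op = next((o for o in operators
                   if (o["adds"] & missing) or (o["removes"] & extra)), None)
        if op is None:
            return None  # stuck: nothing reduces the remaining differences
        state = (state | op["adds"]) - op["removes"]
        plan.append(op["name"])
    return plan

# Hypothetical encoding of part of the article's example:
initial = frozenset({"dot", "small-diamond"})
goal = frozenset({"large-diamond"})
operators = [
    {"name": "delete", "adds": frozenset(), "removes": frozenset({"dot"})},
    {"name": "expand", "adds": frozenset({"large-diamond"}),
     "removes": frozenset({"small-diamond"})},
]
print(means_end_analysis(initial, goal, operators))  # ['delete', 'expand']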
We want to apply the concept of means end analysis to establish whether there are any adjustments
needed. The first step is to evaluate the initial state, and compare it with the end goal to establish whether
there are any differences between the two states.
The following image shows a comparison between the initial state and the target state.
The image above shows that there is a difference between the current state and the target state. This
indicates that there is a need to make adjustments to the current state to reach the end goal.
The goal can be divided into sub-goals that are linked with executable actions or operations.
The following are the three operators that can be used to solve the problem.
1. Delete operator: The dot symbol at the top right corner in the initial state does not exist in the goal
state. The dot symbol can be removed by applying the delete operator.
2. Move operator: We will then compare the new state with the end state. The green diamond in the new
state is inside the circle while the green diamond in the end state is at the top right corner. We will move
this diamond symbol to the right position by applying the move operator.
3. Expand operator: After evaluating the new state generated in step 2, we find that the diamond symbol
is smaller than the one in the end state. We can increase the size of this symbol by applying the expand
operator.
After applying the three operators above, we will find that the state in step 3 is the same as the end state.
There are no differences between these two states, which means that the problem has been solved.
Organizational planning
Means end analysis is used in organizations to facilitate general management. It helps organizational
managers to conduct planning to achieve the objectives of the organization. The management reaches the
desired goal by dividing the main goals into sub-goals that are linked with actionable tasks.
Business transformation
This technique is used to implement transformation projects. If there are any desired changes in the
current state of a business project, means end analysis is applied to establish the new processes to be
implemented. The processes are split into sub-processes to enhance effective implementation.
Gap analysis
Gap analysis is the comparison between the current performance and the required performance. Means
end analysis is applied in this field to compare the existing technology and the desired technology in
organizations. Various operations are applied to fill the existing gap in technology.
Conclusion
This article has provided an overview of means end analysis and how it works. This is an important
technique that makes it possible to solve complex problems in AI programs.
To summarize:
We have learned the various steps taken in means end analysis to reach the desired state.
We have gained an overview of the algorithm steps for means end analysis.
Happy learning!