AI Unit-1
1) We start from A. From A there are direct paths to node B (with a heuristic
value of 32), to C (with a heuristic value of 25), and to D (with a heuristic
value of 35).
2) As per the best-first search algorithm, we choose the path with the lowest
heuristic value; currently C has the lowest value among these nodes, so we go
from A to C.
3) From C we have direct paths to F (with a heuristic value of 17) and to E
(with a heuristic value of 19), so we go from C to F.
4) From F we have a direct path to the goal node G (with a heuristic value of
0), so we go from F to G.
5) The goal node G has now been reached, and the path followed is A-
>C->F->G.
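The walkthrough above can be sketched in Python with a priority queue ordered by heuristic value. The graph and heuristic values are taken from the example; h(A) is not given in the text, so 0 is used as a placeholder (the start node is expanded first regardless of its own heuristic).

```python
import heapq

# Heuristic values from the example above; h('A') is a placeholder,
# since the start node's own heuristic never influences the search.
h = {'A': 0, 'B': 32, 'C': 25, 'D': 35, 'E': 19, 'F': 17, 'G': 0}
graph = {'A': ['B', 'C', 'D'], 'B': [], 'C': ['E', 'F'],
         'D': [], 'E': [], 'F': ['G'], 'G': []}

def greedy_best_first(start, goal):
    # Frontier is a min-heap ordered purely by h(n), ignoring path cost.
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first('A', 'G'))   # ['A', 'C', 'F', 'G']
```

Because the queue is ordered only by h(n), the search pops C (25) before B (32) and D (35), then F (17) before E (19), reproducing the path A->C->F->G.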
Advantages of Greedy Best-First Search:
Simple and Easy to Implement: Greedy Best-First Search is a relatively
straightforward algorithm, making it easy to implement.
Fast and Efficient: Greedy Best-First Search is a very fast algorithm,
making it ideal for applications where speed is essential.
Low Memory Requirements: Greedy Best-First Search requires only a
small amount of memory, making it suitable for applications with limited
memory.
Flexible: Greedy Best-First Search can be adapted to different types of
problems and can be easily extended to more complex problems.
Efficiency: If the heuristic function gives a good estimate of how close a
node is to the solution, this algorithm can be very efficient and find a
solution quickly, even in large search spaces.
Disadvantages of Greedy Best-First Search:
Inaccurate Results: Greedy Best-First Search is not always guaranteed to
find the optimal solution, as it is only concerned with finding the most
promising path.
Local Optima: Greedy Best-First Search can get stuck in local optima,
meaning that the path chosen may not be the best possible path.
Heuristic Function: Greedy Best-First Search requires a heuristic function
in order to work, which adds complexity to the algorithm.
Lack of Completeness: Greedy Best-First Search is not a complete
algorithm, meaning it may not always find a solution if one exists. This
can happen if the algorithm gets stuck in a cycle or if the search space is
too complex.
Applications of Greedy Best-First Search:
Pathfinding: Greedy Best-First Search is used to find a path between two
points in a graph quickly (though not necessarily the shortest one). It is
used in many applications such as video games, robotics, and navigation
systems.
Machine Learning: Greedy Best-First Search can be used in machine
learning algorithms to find the most promising path through a search
space.
Optimization: Greedy Best-First Search can be used to optimize the
parameters of a system in order to achieve the desired result.
Game AI: Greedy Best-First Search can be used in game AI to evaluate
potential moves and choose the best one.
Navigation: Greedy Best-First Search can be used in navigation systems
to find a route between two locations.
Natural Language Processing: Greedy Best-First Search can be used in
natural language processing tasks such as language translation or speech
recognition to generate the most likely sequence of words.
Image Processing: Greedy Best-First Search can be used in image
processing to segment an image into regions of interest.
A* Search
A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) together with g(n), the cost of the path from the initial
state to node n. By combining features of UCS and greedy best-first search, it
solves the problem effectively: using the heuristic function, the A* search
method finds the shortest route through the search space, producing optimal
results quickly while expanding the search tree comparatively little. In
contrast to UCS, the A* algorithm evaluates nodes by g(n)+h(n) instead of g(n)
alone.
In the A* search method we use both the search heuristic and the cost to reach
the node. Adding the two costs gives the evaluation function, sometimes known
as the fitness number:
f(n) = g(n) + h(n)
Example: Given the heuristic values and distances between nodes, let’s use the
A* algorithm to find the optimal path from node S to node G.
Here's the table of nodes and their heuristic values (the distances between
nodes are given on the edges of the accompanying graph):
Node   Heuristic Value
S      5
A      3
B      4
C      2
D      6
G      0
Initialization:
Start with the initial state, which is node S.
Create an open list and add S to it with a cost of 0 (initial cost) and a heuristic
value of 5 (estimated cost to reach G).
Node Expansion:
The algorithm selects the node from the open list with the lowest cost +
heuristic value. In this case, it’s node S with a cost of 0 + 5 = 5.
Expand node S and generate its successor nodes A and G.
Calculate the cost to reach A and G and add them to the open list.
Continued Expansion:
The algorithm selects node A from the open list with a cost of 1 (cost to reach
A) + 3 (heuristic value of A) = 4.
Expand node A and generate its successor node C.
Calculate the cost to reach C and add it to the open list.
Goal Reached:
The algorithm terminates upon reaching the goal state, which is node G with a
cost of 10 (cost to reach G) + 0 (heuristic value of G).
Result:
The A* algorithm finds the optimal path from node S to node G: S -> A -> C ->
G.
This path has the lowest cost among all possible paths, considering both the
actual distance and the heuristic values.
In this example, the A* algorithm efficiently finds the optimal solution, and the
optimal path is indeed S -> A -> C -> G with a cost of 10.
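The worked example can be sketched in Python. The heuristic values come from the table above; the edge costs are not fully stated in the text, so the values below are assumptions chosen to be consistent with the walkthrough (cost 1 to reach A, total path cost 10).

```python
import heapq

# Heuristic values from the table above; edge costs are assumptions
# consistent with the walkthrough (g(A) = 1, total path cost 10).
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
graph = {'S': [('A', 1), ('G', 12)], 'A': [('C', 4)],
         'C': [('G', 5)], 'G': []}

def a_star(start, goal):
    # Frontier entries: (f = g + h, g, node, path)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')

path, cost = a_star('S', 'G')
print(path, cost)   # ['S', 'A', 'C', 'G'] 10
```

Note how the direct edge S->G (assumed cost 12) is pushed onto the frontier but never wins: the path through A and C has f = 10 at the goal, so A* pops it first.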
HEURISTIC FUNCTIONS
Heuristic functions are strategies or methods that guide the search process in AI
algorithms by providing estimates of the most promising path to a solution.
They are often used in scenarios where finding an exact solution is
computationally infeasible.
Instead, heuristics provide a practical approach by narrowing down the search
space, leading to faster and more efficient problem-solving.
Heuristic functions transform complex problems into more manageable
subproblems by providing estimates that guide the search process.
This approach is particularly effective in AI planning, where the goal is to
sequence actions that lead to a desired outcome.
Role of Heuristic Functions in AI
Heuristic functions are essential in AI for several reasons:
Efficiency: They reduce the search space, leading to faster solution times.
Guidance: They provide a sense of direction in large problem spaces,
avoiding unnecessary exploration.
Practicality: They offer practical solutions in situations where exact
methods are computationally prohibitive.
Common Problem Types for Heuristic Functions
Heuristic functions are particularly useful in various problem types, including:
1. Pathfinding Problems: Pathfinding problems, such as navigating a maze
or finding the shortest route on a map, benefit greatly from heuristic
functions that estimate the distance to the goal.
2. Constraint Satisfaction Problems: In constraint satisfaction problems,
such as scheduling and puzzle-solving, heuristics help in selecting the most
promising variables and values to explore.
3. Optimization Problems: Optimization problems, like the traveling
salesman problem, use heuristics to find near-optimal solutions within a
reasonable time frame.
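As a concrete illustration of a pathfinding heuristic, the Manhattan distance is commonly used on grids with 4-directional movement; it never overestimates the true cost, so it is admissible:

```python
def manhattan(node, goal):
    # Sum of the horizontal and vertical offsets between two grid cells.
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))   # 7
```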
Different Categories of Heuristic Search Techniques in AI
Direct heuristic search techniques are also called blind control strategy, blind
search, or uninformed search.
Consider the traveling salesman problem: given a list of cities and the
distances between each pair of them, what is the shortest route that visits
every city and returns to the starting point?
This problem can be solved by brute force for a small number of cities, but as
the number of cities grows, finding a solution becomes far more challenging.
Search Engine
People have been interested in SEO for as long as there have been search
engines. Users want to discover the information they need quickly when using a
search engine. Because such a staggering amount of data is available, search
engines use heuristics to speed up the search process. A heuristic may
initially attempt each alternative at each stage, but as the search progresses
it can quit at any point if the current possibility is inferior to the best
solution already found. In this way the search engine's accuracy and speed can
be improved.
Local search:
In Hill Climbing, the algorithm starts with an initial solution and then
iteratively makes small changes to it in order to improve the solution.
These changes are based on a heuristic function that evaluates the quality
of the solution. The algorithm continues to make these small changes
until it reaches a local maximum, meaning that no further improvement
can be made with the current set of moves.
There are several variations of Hill Climbing, including steepest ascent
Hill Climbing, first-choice Hill Climbing, and simulated annealing. In
steepest ascent Hill Climbing, the algorithm evaluates all the possible
moves from the current solution and selects the one that leads to the best
improvement. In first-choice Hill Climbing, the algorithm randomly
selects a move and accepts it if it leads to an improvement, regardless of
whether it is the best move. Simulated annealing is a probabilistic
variation of Hill Climbing that allows the algorithm to occasionally
accept worse moves in order to avoid getting stuck in local maxima.
Hill Climbing can be useful in a variety of optimization problems, such as
scheduling, route planning, and resource allocation. However, it has some
limitations, such as the tendency to get stuck in local maxima and the lack
of diversity in the search space. Therefore, it is often combined with other
optimization techniques, such as genetic algorithms or simulated
annealing, to overcome these limitations and improve the search results.
Limitations of Hill Climbing:
1. Hill Climbing can get stuck in local optima, meaning that it may not find
the global optimum of the problem.
2. The algorithm is sensitive to the choice of initial solution, and a poor
initial solution may result in a poor final solution.
3. Hill Climbing does not explore the search space very thoroughly, which
can limit its ability to find better solutions.
4. It may be less effective than other optimization algorithms, such as
genetic algorithms or simulated annealing, for certain types of problems.
Features of Hill Climbing:
o Generate and Test variant: Hill Climbing is a variant of the Generate and
Test method. The Generate and Test method produces feedback which
helps to decide which direction to move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not
remember the previous states.
o Deterministic Nature:
Hill Climbing is a deterministic optimization algorithm, which means that
given the same initial conditions and the same problem, it will always
produce the same result. There is no randomness or uncertainty in its
operation.
o Local Neighborhood:
Hill Climbing is a technique that operates within a small area around the
current solution. It explores solutions that are closely related to the
current state by making small, gradual changes. This approach allows it
to find a solution that is better than the current one although it may not be
the global optimum.
Flat local maximum: It is a flat region of the landscape where all the
neighbor states of the current state have the same value.
1. Simple hill climbing:
Step 1: Evaluate the initial state; if it is the goal state, then return
success and stop.
Step 2: Loop until a solution is found or there is no new operator left to
apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state:
If it is the goal state, then return success and quit.
Else if it is better than the current state, then assign the new state as the
current state.
Else if it is not better than the current state, then return to Step 2.
Step 5: Exit.
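The steps above can be sketched as a first-improvement loop: accept the first neighbor that beats the current state, and stop at a local maximum. The objective function and neighbor function in the demo are hypothetical toy choices, not from the text.

```python
def simple_hill_climbing(initial, objective, neighbors, max_steps=1000):
    """First-improvement hill climbing: move to the first neighbor that
    is better than the current state; stop when none improves."""
    current = initial
    for _ in range(max_steps):
        improved = False
        for nxt in neighbors(current):
            if objective(nxt) > objective(current):
                current = nxt          # accept the first better neighbor
                improved = True
                break
        if not improved:               # local maximum reached
            return current
    return current

# Toy example (assumed): maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, f, step))   # 3
```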
2. Steepest-Ascent hill climbing:
o Step 1: Evaluate the initial state, if it is goal state then return success and
stop, else make current state as initial state.
o Step 2: Loop until a solution is found or the current state does not
change.
o Let SUCC be a state such that any successor of the current state
will be better than it.
o For each operator that applies to the current state:
o Apply the new operator and generate a new state.
o Evaluate the new state.
o If it is goal state, then return it and quit, else compare it to
the SUCC.
o If it is better than SUCC, then set new state as SUCC.
o If the SUCC is better than the current state, then set current
state to SUCC.
o Step 3: Exit.
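These steps amount to examining every successor and keeping the best one (SUCC). A minimal sketch, with an assumed toy objective:

```python
def steepest_ascent(initial, objective, neighbors):
    """Examine ALL neighbors each iteration and move to the best one
    (SUCC), stopping when no neighbor beats the current state."""
    current = initial
    while True:
        succ = max(neighbors(current), key=objective, default=current)
        if objective(succ) <= objective(current):
            return current             # no neighbor is better: local maximum
        current = succ

# Toy example (assumed): maximize f(x) = -(x - 5)^2 over the integers.
f = lambda x: -(x - 5) ** 2
print(steepest_ascent(0, f, lambda x: [x - 1, x + 1]))   # 5
```

Unlike simple hill climbing, which accepts the first improving move, this variant pays the cost of evaluating every successor in exchange for always taking the steepest step.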
Simple hill climbing (alternative formulation):
Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make
the initial state the current state.
Loop until the solution state is found or there are no new operators present which can be
applied to the current state.
o Select a state that has not been yet applied to the current state and apply it to produce
a new state.
o Perform these to evaluate the new state.
o If the current state is a goal state, then stop and return success.
o If it is better than the current state, then make it the current state and proceed
further.
o If it is not better than the current state, then continue in the loop until a
solution is found.
Exit from the function.
Steepest-ascent hill climbing (alternative formulation):
Evaluate the initial state. If it is a goal state then stop and return success.
Otherwise, make the initial state the current state.
Repeat these steps until a solution is found or the current state does not
change
o Select a state that has not been yet applied to the current state.
o Initialize a new ‘best state’ equal to the current state and apply it to
produce a new state.
o Perform these to evaluate the new state
o If the current state is a goal state, then stop and return success.
o If it is better than the best state, then make it the best state else
continue the loop with another new state.
o Make the best state as the current state and go to Step 2 of the second
point.
Exit from the function.
Stochastic hill climbing:
o Evaluate the initial state. If it is a goal state then stop and return
success. Otherwise, make the initial state the current state.
o Repeat these steps until a solution is found or the current state does
not change.
o Select a state that has not been yet applied to the current state.
o Apply the successor function to the current state and generate all the
neighbor states.
o Among the generated neighbor states which are better than the
current state choose a state randomly (or based on some probability
function).
o If the chosen state is the goal state, then return success, else make it
the current state and repeat step 2 of the second point.
o Exit from the function.
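The random-choice variant described above can be sketched as follows; the objective and neighbor functions are assumed toy examples:

```python
import random

def stochastic_hill_climbing(initial, objective, neighbors,
                             max_steps=1000):
    """Choose uniformly at random among the neighbors that improve on
    the current state; stop when none do."""
    current = initial
    for _ in range(max_steps):
        better = [n for n in neighbors(current)
                  if objective(n) > objective(current)]
        if not better:
            return current             # local maximum: no improving neighbor
        current = random.choice(better)
    return current

# Toy example (assumed): maximize f(x) = -(x - 4)^2 over the integers.
f = lambda x: -(x - 4) ** 2
print(stochastic_hill_climbing(0, f, lambda x: [x - 1, x + 1]))   # 4
```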
Simulated Annealing:
The algorithm starts with an initial solution and a high "temperature," which
gradually decreases over time. Here's a step-by-step breakdown of how the
algorithm works:
1. Start with an initial solution and a high temperature T.
2. Generate a neighboring solution by making a small random change.
3. If the neighbor is better, accept it; if it is worse, accept it with
probability e^(-ΔE/T), where ΔE is how much worse it is.
4. Decrease the temperature according to a cooling schedule.
5. Repeat steps 2-4 until the temperature is very low or a stopping condition
is met.
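A minimal sketch of this loop, assuming a geometric cooling schedule and a toy objective (neither is specified in the text):

```python
import math
import random

def simulated_annealing(initial, objective, neighbor,
                        t0=10.0, cooling=0.99, t_min=1e-3):
    """Accept improving moves always; accept worse moves with
    probability exp(delta / T), where delta <= 0 and the temperature T
    shrinks geometrically each step."""
    current = initial
    t = t0
    while t > t_min:
        nxt = neighbor(current)
        delta = objective(nxt) - objective(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling
    return current

# Toy example (assumed): maximize f(x) = -x^2 starting from x = 8.
random.seed(0)
f = lambda x: -x * x
result = simulated_annealing(8, f, lambda x: x + random.choice([-1, 1]))
print(result)
```

Early on, the high temperature lets the search accept worse moves and escape local maxima; as T falls, the acceptance probability for bad moves vanishes and the search behaves like ordinary hill climbing, settling near x = 0.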
Simulated Annealing has found widespread use in various fields due to its
versatility and effectiveness in solving complex optimization problems. Some
notable applications include the traveling salesman problem, job-shop
scheduling, and VLSI circuit placement.
Local search in continuous spaces can proceed in two ways:
i) Discretization: discretize the neighborhood of each state (for example,
onto a fixed grid of points) so that discrete local search methods can be
applied.
ii) Gradient function: use the gradient of the objective function to move
directly in the direction of the steepest slope.
Example:
With the gradient function approach, the vector ∇f gives the magnitude and
direction of the steepest slope. ∇f for the objective function
f(x1, y1, x2, y2, x3, y3) is:
∇f = (∂f/∂x1, ∂f/∂y1, ∂f/∂x2, ∂f/∂y2, ∂f/∂x3, ∂f/∂y3)
Applying local search algorithms in continuous state spaces may encounter problems like local
maxima, ridges, and plateaus. In such cases, random restarts and simulated annealing are often
more helpful than plain local search. Methods like stochastic hill climbing and simulated
annealing can also be applied directly to continuous spaces.
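As an illustration of the gradient approach, here is a short gradient-ascent sketch on an assumed quadratic objective f(x, y) = -(x - 1)^2 - (y + 2)^2, whose maximum is at (1, -2):

```python
def grad_ascent(x, y, lr=0.1, steps=200):
    # Repeatedly step along the gradient (the steepest upward slope).
    for _ in range(steps):
        dx = -2 * (x - 1)      # ∂f/∂x
        dy = -2 * (y + 2)      # ∂f/∂y
        x, y = x + lr * dx, y + lr * dy
    return x, y

x, y = grad_ascent(0.0, 0.0)
print(round(x, 3), round(y, 3))   # 1.0 -2.0
```

Each update moves the point a fraction (the learning rate) of the way along ∇f, so the iterate converges geometrically to the maximizer of this smooth objective.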