Module 3: Search Algorithms (05-08-2024)


Search algorithms

Framing rules for a problem

Rule 1:  (x, y) if x < 4 → (4, y)        Fill the 4-litre jug
Rule 2:  (x, y) if y < 3 → (x, 3)        Fill the 3-litre jug
Rule 3:  (x, y) if x > 0 → (x − d, y)    Pour some water out of the 4-litre jug
Rule 4:  (x, y) if y > 0 → (x, y − d)    Pour some water out of the 3-litre jug
Rule 5:  (x, y) if x > 0 → (0, y)        Empty the 4-litre jug
Rule 6:  (x, y) if y > 0 → (x, 0)        Empty the 3-litre jug
Rule 7:  (x, y) if x + y ≥ 4 and y > 0 → (4, y − (4 − x))   Pour water from the 3-litre jug into the 4-litre jug until it is full
Framing rules for a problem

Rule 8:  (x, y) if x + y ≤ 4 and y > 0 → (x + y, 0)         Pour all the water from the 3-litre jug into the 4-litre jug
Rule 9:  (x, y) if x + y ≥ 3 and x > 0 → (x − (3 − y), 3)   Pour water from the 4-litre jug into the 3-litre jug until it is full
Rule 10: (x, y) if x + y ≤ 3 and x > 0 → (0, x + y)         Pour all the water from the 4-litre jug into the 3-litre jug
Rule 11: (0, 2) → (2, 0)                                    Pour the 2 litres from the 3-litre jug into the 4-litre jug
Rule 12: (x, y) → (0, y)                                    Pour the water in the 4-litre jug out onto the ground
Framing rules for a problem – a solution trace

x (4-litre jug)   y (3-litre jug)   Rule applied
0                 0                 Initial state
0                 3                 Rule 2
3                 0                 Rule 8
3                 3                 Rule 2
4                 2                 Rule 7
0                 2                 Rule 12
2                 0                 Rule 11
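As a rough illustration (not part of the original slides), the production rules above can be encoded as condition/action pairs and replayed to reproduce this trace. The rule numbering follows the tables above; rules 3 and 4, which pour out an arbitrary amount d, are omitted from the sketch.

```python
# Sketch: the water-jug production rules as (condition, action) pairs.
# A state is (x, y): litres currently in the 4-litre and 3-litre jugs.
RULES = {
    1:  (lambda x, y: x < 4,                lambda x, y: (4, y)),            # fill the 4-litre jug
    2:  (lambda x, y: y < 3,                lambda x, y: (x, 3)),            # fill the 3-litre jug
    5:  (lambda x, y: x > 0,                lambda x, y: (0, y)),            # empty the 4-litre jug
    6:  (lambda x, y: y > 0,                lambda x, y: (x, 0)),            # empty the 3-litre jug
    7:  (lambda x, y: x + y >= 4 and y > 0, lambda x, y: (4, y - (4 - x))),  # top up 4-litre from 3-litre
    8:  (lambda x, y: x + y <= 4 and y > 0, lambda x, y: (x + y, 0)),        # pour all of 3-litre into 4-litre
    9:  (lambda x, y: x + y >= 3 and x > 0, lambda x, y: (x - (3 - y), 3)),  # top up 3-litre from 4-litre
    10: (lambda x, y: x + y <= 3 and x > 0, lambda x, y: (0, x + y)),        # pour all of 4-litre into 3-litre
    11: (lambda x, y: (x, y) == (0, 2),     lambda x, y: (2, 0)),            # move the 2 litres into the 4-litre jug
    12: (lambda x, y: x > 0,                lambda x, y: (0, y)),            # empty the 4-litre jug onto the ground
}

def apply_rule(state, rule_no):
    """Apply one production rule if its condition holds; return None otherwise."""
    condition, action = RULES[rule_no]
    return action(*state) if condition(*state) else None

# Replay the solution trace from the table above: rules 2, 8, 2, 7, 12, 11.
state = (0, 0)
for r in (2, 8, 2, 7, 12, 11):
    state = apply_rule(state, r)
    print(r, state)      # ends at (2, 0): 2 litres measured in the 4-litre jug
```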
Breadth-first search
• Construct a tree with the initial state as its
root.
• Generate all offspring of the root by applying
each applicable rule to the initial state.
• For each leaf node, generate all its successors by applying every applicable rule.
• Continue this process until some rule, at some level, produces the goal state.
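Below is a minimal sketch (not part of the original slides) of this procedure applied to the water-jug problem. The successors function enumerates all states reachable by one applicable rule, a visited set is added as a common refinement to avoid re-expanding repeated states, and the goal is taken to be 2 litres in the 4-litre jug.

```python
from collections import deque

def successors(state):
    """States reachable from (x, y) by one production rule.

    Rules 3 and 4 (pour out an arbitrary amount) are skipped, and rules 11
    and 12 are already covered by rules 8 and 5 respectively.
    """
    x, y = state
    nxt = set()
    if x < 4: nxt.add((4, y))                            # fill the 4-litre jug
    if y < 3: nxt.add((x, 3))                            # fill the 3-litre jug
    if x > 0: nxt.add((0, y))                            # empty the 4-litre jug
    if y > 0: nxt.add((x, 0))                            # empty the 3-litre jug
    if x + y >= 4 and y > 0: nxt.add((4, y - (4 - x)))   # top up 4-litre from 3-litre
    if x + y <= 4 and y > 0: nxt.add((x + y, 0))         # pour all of 3-litre into 4-litre
    if x + y >= 3 and x > 0: nxt.add((x - (3 - y), 3))   # top up 3-litre from 4-litre
    if x + y <= 3 and x > 0: nxt.add((0, x + y))         # pour all of 4-litre into 3-litre
    nxt.discard(state)                                   # drop rules that change nothing
    return nxt

def breadth_first_search(start, is_goal):
    """Expand the tree level by level and return the shallowest solution path."""
    frontier = deque([[start]])              # each queue entry is a path from the root
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for s in successors(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None

print(breadth_first_search((0, 0), lambda s: s[0] == 2))
# one shortest solution, e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
```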
Breadth-first search

The first two levels of the search tree for the water-jug problem:

(0,0)
├── Rule 1 → (4,0)
│     ├── Rule 2  → (4,3)
│     ├── Rule 12 → (0,0)
│     └── Rule 9  → (1,3)
└── Rule 2 → (0,3)
      ├── Rule 1 → (4,3)
      ├── Rule 6 → (0,0)
      └── Rule 8 → (3,0)
Breadth-first search - Advantages

• It will not get trapped exploring a blind alley.


• If a solution exists, the breadth-first algorithm is guaranteed to find it.
• If there are multiple solutions, a minimal solution (one that requires the minimum number of steps) will be found.
Breadth-first search - Complexity
• Two categories
– Time complexity
– Space complexity
• Big-O notation (also called Landau or Bachmann–Landau notation; asymptotic notation):
  f(x) = O(g(x)) as x → ∞
  if |f(x)| ≤ M·g(x) for all x > x₀, where M is a positive real number and x₀ is a real number.
• Example: if the time taken by an algorithm is T(n) = 4n² − 2n + 2, then for large values of n the terms 2n and 2 are negligible, so T(n) = O(n²).
• The time and space complexity of the breadth-first algorithm is O(b^(d+1)), where b is the branching factor and d is the depth of the shallowest goal.
Depth-first search
• Start from the initial state.
• If the initial state is the goal state, quit and return success.
• Otherwise, generate a successor E of the initial state. If there are no more successors, return failure.
• Apply the procedure recursively with E as the initial state.
• Continue in this loop: if the branch through E fails, try the next successor.
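A recursive sketch of the same idea (again for the water-jug problem, not part of the original slides). A check against revisiting states on the current path is added so that the recursion terminates; the successor generation repeats the rules from the earlier tables.

```python
def successors(state):
    """Results of the applicable production rules from state (x, y)."""
    x, y = state
    candidates = [
        (4, y) if x < 4 else None,                            # fill the 4-litre jug
        (x, 3) if y < 3 else None,                            # fill the 3-litre jug
        (0, y) if x > 0 else None,                            # empty the 4-litre jug
        (x, 0) if y > 0 else None,                            # empty the 3-litre jug
        (4, y - (4 - x)) if x + y >= 4 and y > 0 else None,   # top up 4-litre from 3-litre
        (x + y, 0) if x + y <= 4 and y > 0 else None,         # pour all of 3-litre into 4-litre
        (x - (3 - y), 3) if x + y >= 3 and x > 0 else None,   # top up 3-litre from 4-litre
        (0, x + y) if x + y <= 3 and x > 0 else None,         # pour all of 4-litre into 3-litre
    ]
    return [s for s in candidates if s is not None and s != state]

def depth_first_search(state, is_goal, path=None):
    """Follow one successor as deep as possible; backtrack when a branch fails."""
    path = (path or []) + [state]
    if is_goal(state):
        return path                          # success: return the path to the goal
    for s in successors(state):
        if s not in path:                    # do not revisit states on the current path
            result = depth_first_search(s, is_goal, path)
            if result is not None:
                return result                # a deeper call succeeded
    return None                              # every successor failed

print(depth_first_search((0, 0), lambda s: s[0] == 2))
```

Unlike breadth-first search, the path returned here is simply the first one found along a deep branch and need not be the shortest.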
Depth-first search

[Tree diagram: starting at (0,0), depth-first search generates one successor at a time and follows that branch as deep as possible before backtracking. For example, Rule 1 gives (4,0) and Rule 2 then gives (4,3), and so on; the alternative (0,3) produced by Rule 2 from the root is considered only if the first branch fails. States such as (0,0) and (3,0) appear further down or on other branches.]
Depth-first search

(0,0)
├── (4,0)                  ← expanded first
│     ├── (4,3)            ← the search continues down this branch
│     ├── (0,0)
│     └── (1,3)
└── (0,3)                  ← explored only if the (4,0) branch is exhausted
Depth-first search
• Requires less memory, since only the nodes on the current path are stored.
• It may find a solution quickly, without examining much of the search space, if a solution happens to lie along an early path.
• The depth-first algorithm stops as soon as it finds any one solution, even if many solutions exist.
• Complexity measures:
– The branching factor 'b': the maximum number of successors of any node.
– 'd': the depth of the shallowest goal node.
– 'm': the maximum length of any path in the state space.
Heuristic Search algorithms
• These algorithms use additional information (knowledge) about the problem domain to guide the search.
• Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based on an evaluation function,
f(n).
• Most best-first algorithms include as a component of f a heuristic function, denoted
h(n).
• Classifications:
– Greedy best-first search algorithm
– A* Search
– Memory bounded heuristic search
Greedy best-first Search algorithm
The Greedy Best-First Search algorithm is a way of navigating through a graph or a
search space towards a goal. In simple terms, it always chooses the path that seems
best at the moment, without considering the future consequences.
1. Start at the initial state: Begin at the starting point of your search space.

2. Select the best next step: Look at all possible next steps from your current position and
choose the one that appears to be the most promising based on a heuristic or evaluation
function. The heuristic provides an estimate of how close a given state is to the goal.
3. Move to the selected state: Take the chosen step and move to the next state.
4. Repeat: Keep repeating these steps until you reach the goal state.
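A minimal sketch of these steps (not from the slides): the frontier is a priority queue ordered only by the heuristic value h(n). The graph, node names, and heuristic values below are made-up illustrations.

```python
import heapq

def greedy_best_first_search(start, goal, neighbors, h):
    """Always expand the node that looks closest to the goal according to h(n)."""
    frontier = [(h(start), start, [start])]        # entries are (h-value, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)    # most promising node by h alone
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors(node):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Hypothetical toy graph with straight-line-distance style heuristic values.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['G'], 'G': []}
h_values = {'A': 6, 'B': 4, 'C': 5, 'D': 2, 'G': 0}
print(greedy_best_first_search('A', 'G', lambda n: graph[n], lambda n: h_values[n]))
# ['A', 'B', 'D', 'G']
```

Because the queue ignores the cost already incurred, the search simply follows whatever looks closest at each step, which is the locally optimal behaviour described below.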
Greedy best-first Search algorithm
• The key aspect of the Greedy Best-First Search is that it makes the locally optimal
choice at each step.
• It doesn't necessarily guarantee finding the globally optimal solution, as it might get
stuck in a suboptimal path.
• The algorithm is focused on immediate gains without considering the long-term
consequences.
• In summary, the Greedy Best-First Search is like choosing the path that looks most
promising at each step based on a heuristic, without thinking ahead about the overall
optimal solution.
Greedy best-first Search – Example
• Imagine you're using a navigation app to find the shortest route from your current location to a destination.
The road network is represented as a graph, where each intersection is a node, and each road between
intersections is an edge.
• In this example:
• Current State: Your current location.
• Goal State: Your destination.
The Greedy Best-First Search algorithm, in this context, might work like this:
• Start at the Initial State: Begin at your current location.
• Select the Best Next Step based on Heuristic: Evaluate all the neighboring intersections (nodes) based on a
heuristic. The heuristic could be the straight-line distance to the destination (as the crow flies) or the estimated
time to reach the destination. Choose the intersection that looks most promising according to the heuristic.
• Move to the Selected State: Follow the road to the chosen intersection.
• Repeat Steps 2-3: Repeat the process at the new intersection, considering the heuristic and choosing the next
intersection that seems most promising.
• Reach the Goal State: Keep navigating this way until you reach your destination.
Greedy best-first Search – Example
• Let's say the heuristic is based on the straight-line distance to the destination. The
algorithm might guide you to take the path that appears closest to a straight line,
favoring shorter distances.
• However, because it's greedy, it doesn't consider potential traffic or road closures that
might make a different route more optimal in the long run.
• While this approach may find a reasonably short path, it may not always guarantee
the absolute shortest path due to its local, myopic decision-making.
• Nevertheless, it is computationally efficient and works well in certain scenarios.
A* Search algorithm
• The A* (pronounced "A-star") search algorithm is a popular and efficient method used to find the shortest
path from a starting point to a goal in a graph or grid. It combines the principles of both Dijkstra's algorithm
and Greedy Best-First Search.
Here's a simple explanation of how A* works:
• Start at the Initial State: Begin at your current location, which is the starting point.
• Evaluate Nodes with Two Costs:
– Actual Cost (g-cost): The cost of getting from the starting point to the current node.
– Heuristic Cost (h-cost): An estimated cost from the current node to the goal. This is a heuristic function that provides a
guess of how far away the goal is.
• Calculate the Total Cost (f-cost): Combine the actual cost and heuristic cost for each neighboring node. f(n) =
g(n) + h(n)
• Select the Node with the Lowest Total Cost: Choose the node with the lowest total cost as the next step.
• Move to the Selected Node: Go to the node selected in step 4.
• Repeat Steps 2-5 Until Goal is Reached: Keep evaluating and moving to nodes until you reach the goal.
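A rough sketch of A* (not from the slides): the frontier is ordered by f(n) = g(n) + h(n), and a node is re-queued whenever a cheaper path to it is found. The road network, edge costs, and heuristic values below are illustrative assumptions.

```python
import heapq

def a_star_search(start, goal, neighbors, h):
    """Expand the node with the lowest total cost f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]     # entries are (f, g, node, path)
    best_g = {start: 0}                            # cheapest known g-cost per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                         # the path and its actual cost
        for nxt, step_cost in neighbors(node):
            new_g = g + step_cost                  # actual cost so far (g-cost)
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float('inf')

# Hypothetical road network: neighbors(n) yields (next_intersection, road_length) pairs.
graph = {'A': [('B', 2), ('C', 1)], 'B': [('G', 5)], 'C': [('D', 4)],
         'D': [('G', 1)], 'G': []}
h_values = {'A': 5, 'B': 4, 'C': 4, 'D': 1, 'G': 0}   # estimates that never overshoot
path, cost = a_star_search('A', 'G', lambda n: graph[n], lambda n: h_values[n])
print(path, cost)    # ['A', 'C', 'D', 'G'] 6  (the cheapest route in this toy graph)
```

With a heuristic that never overestimates the remaining cost, this returns an optimal path, which is the Dijkstra-like guarantee referred to above.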
A* Search algorithm
• The brilliance of A* lies in its ability to prioritize nodes with lower total costs,
combining the efficiency of Greedy Best-First Search with the optimality of Dijkstra's
algorithm.
• By considering both the actual cost and a heuristic estimate of the remaining cost to
the goal, A* tends to explore paths that are both efficient and likely to lead to the
shortest path.
• In summary, A* is like choosing the path that looks promising based on both the
cost incurred so far and an estimate of the remaining cost to reach the goal.
• It strikes a balance between efficiency and optimality, making it widely used in
pathfinding and graph traversal applications.
A* Search – Example
• Imagine you are using a GPS navigation app to find the shortest path from your current location to a destination.
The road network is represented as a graph, where each intersection is a node, and each road between intersections
is an edge
Here's how A* might work in this scenario:
• Start at the Initial State: Begin at your current location.
• Evaluate Nodes with Two Costs:
– Actual Cost (g-cost): The distance traveled from the starting point to the current intersection.
– Heuristic Cost (h-cost): An estimate of the remaining distance from the current intersection to the destination. This could be based on the
straight-line distance or the expected travel time.

• Calculate the Total Cost (f-cost): Combine the actual cost and heuristic cost for each neighboring intersection. f(n)
= g(n) + h(n)
• Select the Node with the Lowest Total Cost: Choose the intersection with the lowest total cost as the next step.
• Move to the Selected Node: Follow the road to the chosen intersection.
• Repeat Steps 2-5 Until Goal is Reached: Keep evaluating and moving to nodes until you reach the destination.
A* Search – Example
• For example, if A* is used in a navigation app, it might consider both the
distance already traveled and an estimate of the remaining distance to the
destination. This helps the algorithm prioritize paths that are not only short in
distance but also likely to lead to the overall shortest path.
• A* is widely used in real-time applications like GPS navigation, video games,
and robotics where finding an optimal path is essential for efficiency and quick
decision-making.
Memory-bounded search
• Memory-bounded heuristic search algorithms are designed to efficiently
explore large search spaces while keeping memory usage within predefined
limits.
• One such algorithm that fits this description is the Memory-Bounded A*
(MA*) algorithm.
Memory-bounded search
Let's break down the key concepts of this algorithm in simple terms:
• Start at the Initial State: Begin at the starting point of your search problem.
• Evaluate Nodes with Costs: Similar to A*, MA* evaluates nodes based on two costs:
– Actual Cost (g-cost): The cost of reaching the current node from the starting point.
– Heuristic Cost (h-cost): An estimated cost from the current node to the goal.
• Calculate Total Cost and Store in Memory: Combine the actual cost and heuristic cost for each neighboring
node. Store the total cost of each node in memory.
• Memory Limit Check: Regularly check the memory usage against a predefined limit.
• Prune Nodes to Manage Memory:
– If the memory usage exceeds the limit, prune (discard) nodes with the highest total cost from memory.
– Pruning helps to focus on the most promising paths and prevents the algorithm from running out of memory.
• Select Node with Lowest Total Cost: - Choose the node with the lowest total cost from the remaining nodes
in memory.
• Move to the Selected Node: Move to the selected node and continue the search.
• Repeat Until Goal is Reached: Continue evaluating, storing, pruning, and moving to nodes until the goal is reached.
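The sketch below (not the full MA*/SMA* algorithm from the literature) illustrates the idea in a highly simplified form: an A*-style search whose frontier is capped at a fixed number of entries, pruning the highest-cost ones when the cap is exceeded. Real SMA* also backs a pruned node's cost up into its parent, which is omitted here; the cap, graph, and heuristic values are illustrative assumptions.

```python
import heapq

def memory_bounded_search(start, goal, neighbors, h, max_nodes=8):
    """A*-style search that keeps at most max_nodes entries in memory.

    When the frontier grows beyond the limit, the entries with the highest
    f = g + h are pruned, so memory is spent only on the most promising paths.
    """
    frontier = [(h(start), 0, start, [start])]       # entries are (f, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # lowest total cost first
        if node == goal:
            return path, g
        for nxt, step_cost in neighbors(node):
            if nxt in path:                          # avoid cycling within one path
                continue
            new_g = g + step_cost
            heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
        if len(frontier) > max_nodes:                # memory limit check
            frontier = heapq.nsmallest(max_nodes, frontier)   # prune the worst entries
            heapq.heapify(frontier)
    return None, float('inf')

# Same illustrative graph as the A* sketch; this tiny graph never actually hits the cap,
# but on a large search space the pruning step is what keeps memory bounded.
graph = {'A': [('B', 2), ('C', 1)], 'B': [('G', 5)], 'C': [('D', 4)],
         'D': [('G', 1)], 'G': []}
h_values = {'A': 5, 'B': 4, 'C': 4, 'D': 1, 'G': 0}
print(memory_bounded_search('A', 'G', lambda n: graph[n], lambda n: h_values[n]))
```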
Memory-bounded search
• The key idea behind MA* is to balance the exploration of the search space
with memory constraints. By regularly pruning high-cost nodes, the algorithm
ensures that it focuses on the most promising paths while avoiding an excessive
increase in memory usage.
• In summary, Memory-Bounded A* is like A* search, but with an added
memory management strategy to handle large search spaces efficiently and
prevent memory overflow. It's particularly useful in situations where available
memory is limited.
Memory-bounded search - example
• A practical example where Memory-Bounded Heuristic Search algorithms are commonly used is in
route planning for autonomous vehicles or drones. Consider a scenario where a drone needs to find
the optimal path from its current location to a destination while avoiding obstacles and minimizing
travel time. Memory-Bounded A* (MA*) or similar algorithms could be employed in such a case.
Here's how it could work:
• Start at Initial State: The drone begins at its current location.
• Evaluate Nodes with Costs: Nodes represent possible locations or waypoints in the environment.
– Actual Cost (g-cost) may be the distance traveled.
– Heuristic Cost (h-cost) may be based on the estimated straight-line distance to the destination.
• Calculate Total Cost and Store in Memory: The total cost of each evaluated node (combination of
g-cost and h-cost) is stored in memory.
• Memory Limit Check: Regularly check the memory usage against the available memory limit on the
drone.
• Prune Nodes to Manage Memory: If memory usage exceeds the limit, the algorithm prunes
(discards) nodes from memory, prioritizing those with higher total costs.
Memory-bounded search - example
• Select Node with Lowest Total Cost: Choose the node with the lowest total cost
from the remaining nodes in memory.
• Move to the Selected Node: The drone navigates to the selected location, updating
its position.
• Repeat Until Goal is Reached: Continue the process iteratively, evaluating, storing,
pruning, and moving to nodes until the destination is reached.
• In this context, the Memory-Bounded Heuristic Search helps the drone efficiently
explore the possible paths while keeping memory usage within limits. The algorithm
adapts to the real-time constraints of the drone, ensuring that it doesn't run out of
memory and can make decisions promptly.
• This approach is relevant not only for drones but also for autonomous vehicles or
any systems where route planning with limited onboard memory is a crucial
consideration.
Hill-climbing search algorithm
• Hill Climbing is a simple local search algorithm that starts from an initial solution and
iteratively moves towards a better solution, taking small steps in the direction of
improvement. In simpler terms, it's like climbing a hill to reach the highest point.
Here's how it could work:
• Start at Initial State: Begin at a particular point or state, representing a potential solution
to the problem.
• Evaluate the current state: Measure how good or bad the current solution is using an
evaluation function or heuristic.
• Generate neighbor states: Identify neighboring solutions by making small modifications to
the current solution.
• Evaluate neighbor states: Measure the quality of each neighboring solution.
• Move to the best neighbor: Choose the neighboring solution that improves the evaluation
the most.
Hill-climbing search algorithm
• Repeat steps 3-5: Continue this process iteratively, moving to the best neighbor at each step.
• Stop when No Better Neighbors are Found or Reach a Stopping Criterion: If no
neighbors offer a better solution, or a predefined stopping criterion is met, the algorithm
stops.
• The term "hill climbing" comes from the analogy of climbing a hill where you aim to reach
the peak. The algorithm makes incremental steps upward, always choosing the direction
that leads to the highest point in the immediate vicinity.
• However, it's essential to note that Hill Climbing is a local search algorithm, and it might
get stuck at a local optimum (the top of a hill) without reaching the global optimum (the
highest peak). It lacks the ability to explore the entire solution space systematically.
• In summary, Hill Climbing is like taking steps uphill, always choosing the path that seems
to lead to a higher point in the solution space. It's a straightforward and intuitive algorithm,
but its limitation lies in its tendency to get stuck in local optima.
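A minimal sketch of steepest-ascent hill climbing (not from the slides), maximizing a made-up one-dimensional objective; the objective function and the neighbourhood (x - 1 and x + 1) are illustrative assumptions.

```python
def hill_climbing(start, objective, neighbors):
    """Move to the best neighbour until no neighbour improves the objective."""
    current = start
    while True:
        best = max(neighbors(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current                   # local optimum: no better neighbour exists
        current = best                       # take the step that improves the most

# Illustrative problem: maximize f(x) = -(x - 7)**2 over the integers,
# where the "small modifications" of a state x are x - 1 and x + 1.
f = lambda x: -(x - 7) ** 2
print(hill_climbing(0, f, lambda x: [x - 1, x + 1]))   # 7, the single peak
```

On a landscape with several peaks, the same loop can stop on a lower one, which is the local-optimum limitation noted above.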
Hill-climbing search - example
• Imagine you are using a navigation app to find the quickest route from your current
location to a destination. The road network is represented as a graph, where each
intersection is a node, and each road between intersections is an edge.
Here's how Hill Climbing might work in this scenario:
• Start at the Initial State (Current Route): Begin with the initial route suggested by
the navigation app.
• Evaluate the Current State (Route): Measure the current route's estimated time or
distance to the destination.
• Generate Neighbor States (Alternative Routes): Identify neighboring routes by
making small modifications to the current route. This could involve considering
alternative roads or turns.
• Evaluate Neighbor States (Evaluate Alternative Routes): Measure the estimated
time or distance for each alternative route.
Hill-climbing search - example
• Move to the Best Neighbor (Select the Shortest Route): Choose the neighboring route that
minimizes the estimated time or distance to the destination.
• Repeat Steps 3-5: Iteratively explore alternative routes, choosing the one that seems to offer
the quickest path.
• Stop when No Better Neighbors are Found or Reach a Stopping Criterion: If no
neighboring route offers a shorter estimated time, or a predefined stopping criterion (such
as a maximum number of iterations) is met, the algorithm stops.
• In this example, the Hill Climbing algorithm is akin to adjusting your route step by step,
always choosing the immediate improvement in travel time. The algorithm keeps making
adjustments until it converges to a route where no small changes lead to a faster arrival time.
• However, it's important to note that Hill Climbing, as a local search algorithm, might not
always find the globally optimal route, especially if there are hidden, shorter paths that
require going temporarily in the opposite direction before reaching the destination.
