Uninformed Search

3.4 Uninformed search strategies
■ Uninformed search
– no information about the number of steps or the path cost from the current state to the goal
– searches the state space blindly
■ Informed search, or heuristic search
– a cleverer strategy that searches toward the goal
– strategies that know whether one non-goal state is more promising than another
– based on problem-specific information gathered during the search so far
Uninformed search strategies
■ Breadth-first search
■ Uniform cost search
■ Depth-first search
– Depth-limited search
– Iterative deepening search
■ Bidirectional search
Breadth-first search
■ The root node is expanded first (FIFO)
■ All the nodes generated by the root node are then expanded
■ All the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded
• Initialize a node with the initial state.
• Check whether the initial state is already the goal state. If so, return the solution.
• Use a FIFO (first-in, first-out) queue (the frontier) to store nodes to be explored.
• Loop:
– If the frontier is empty, return failure (no solution found).
– Pop the shallowest node (the first in the queue).
– Add the node's state to the explored set.
– For each possible action from the current state, generate a child node.
– If the child state has not been explored and is not in the frontier, check whether it is the goal state. If it is, return the solution.
– Otherwise, add the child to the frontier to explore further (a code sketch follows below).
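The steps above map directly onto a short program. Below is a minimal Python sketch, assuming states are hashable and that successors(state) returns the child states reachable by one action (both names are illustrative, not from the slides):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS following the steps above; returns a path of states or None."""
    if start == goal:                      # goal test on the initial state
        return [start]
    frontier = deque([[start]])            # FIFO queue of paths to explore
    frontier_states = {start}              # states currently in the frontier
    explored = set()                       # the explored set
    while frontier:
        path = frontier.popleft()          # pop the shallowest node
        state = path[-1]
        frontier_states.discard(state)
        explored.add(state)
        for child in successors(state):    # generate a child per action
            if child not in explored and child not in frontier_states:
                if child == goal:          # goal test when the child is generated
                    return path + [child]
                frontier.append(path + [child])
                frontier_states.add(child)
    return None                            # frontier empty: failure
```

For example, with graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['E'], 'B': [], 'E': []} and successors = lambda s: graph[s], breadth_first_search('S', 'E', successors) returns ['S', 'D', 'E'].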
Breadth-first search
[Figure: an example search tree rooted at S, expanded level by level: S; then A, D; then B, D, A, E; then C, E, E, B, B, F; and so on. The numbers next to deeper nodes (11, 14, 17, 15, 15, 13, 19, 19, 17, 25) are path costs; BFS reaches a goal node G level by level, regardless of cost.]
Breadth-First Strategy
– New nodes are inserted at the end of FRINGE
[Figure: a binary tree with root 1, children 2 and 3, and grandchildren 4, 5, 6, 7. As nodes are expanded in FIFO order, the fringe evolves:]
Fringe = (1)
Fringe = (2, 3)
Fringe = (3, 4, 5)
Fringe = (4, 5, 6, 7)
Breadth-first search (Analysis)
■ Breadth-first search
– Complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after expanding all shallower nodes
– Optimal, if the step cost is 1
■ The disadvantage
– if the branching factor of a node is large, the space complexity and the time complexity are enormous
Properties of breadth-first search
■ Complete? Yes (if b is finite)

■ Time? 1 + b + b^2 + b^3 + ... + b^d = O(b^d) (worst case)
– if the goal test is applied when a node is selected for expansion rather than when it is generated, this becomes O(b^(d+1))

■ Explored set: O(b^(d-1))

■ Space? O(b^(d+1)) (keeps every node in memory)

■ Optimal? Yes (if cost = 1 per step)

■ Space is the bigger problem (more than time)

Breadth-first search (Analysis)
■ Assuming 1,000 nodes can be processed per second, each requiring 1,000 bytes of storage, both time and memory grow exponentially with the solution depth (see the illustrative calculation below).
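As a rough, illustrative calculation under those assumptions (the branching factor b = 10 and depth d = 5 are made up for the example; only the 1,000 nodes/second and 1,000 bytes/node figures come from the slide):

```python
# Illustrative only: BFS resource estimate at 1,000 nodes/second and
# 1,000 bytes/node, with an assumed branching factor and depth.
b, d = 10, 5
nodes = sum(b**i for i in range(d + 1))   # 1 + b + b^2 + ... + b^d
print(nodes)                              # 111111 nodes generated
print(nodes / 1000, "seconds")            # ~111 seconds of processing
print(nodes * 1000 / 1e6, "megabytes")    # ~111 MB kept in memory
```

One level deeper (d = 6) multiplies both figures by roughly b = 10, which is why memory runs out long before patience does.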
Disadvantages of breadth-first search
■ The memory requirements are a bigger problem for breadth-first search than is the execution time.
■ Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
Uniform-Cost Search (UCS)
■ UCS expands the node n with the lowest path cost g(n), where g(n) is the cumulative cost from the start node to n.
■ It is optimal for any step-cost function, provided all step costs are non-negative and c > 0 (the smallest positive step cost).
■ UCS ensures that the first goal node selected for expansion is the one with the lowest cost.
■ UCS is complete if the cost of every step is c > 0, meaning no step has zero cost (this prevents infinite loops, such as a "NoOp" action).
■ If there is a zero-cost loop (e.g., an action that returns to the same state), UCS can get stuck. This is why step costs must be strictly positive.
Uniform-Cost Search
• Like breadth-first search, but expands the node with the lowest path cost g(n) (see the sketch below).

Properties:
– Complete (if every step cost is at least ε > 0 and b is finite)
– Optimal (since path cost never decreases along a path)
– Time and space are O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the smallest step cost
– Can be worse than breadth-first search
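A minimal sketch of UCS with a binary heap, assuming successors(state) yields (child, step_cost) pairs with strictly positive costs (this encoding is an assumption, not from the slides). Note that the goal test is applied when a node is selected for expansion, which is exactly what makes the first goal popped the cheapest one:

```python
import heapq
import itertools

def uniform_cost_search(start, goal, successors):
    """Expands the frontier node with the lowest path cost g(n)."""
    counter = itertools.count()            # tie-breaker so states need not be comparable
    frontier = [(0, next(counter), start, [start])]   # heap ordered by g(n)
    best_g = {start: 0}                    # cheapest known cost to each state
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if state == goal:                  # goal test on selection, not generation:
            return g, path                 # the first goal popped is the cheapest
        if g > best_g.get(state, float("inf")):
            continue                       # stale heap entry; a cheaper path exists
        for child, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, next(counter), child, path + [child]))
    return None                            # frontier exhausted: failure
```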
Uniform-Cost Snapshot
[Figure: a snapshot of UCS on a binary tree with nodes 1-31 and labeled edge costs. The legend distinguishes Initial, Visited, Fringe, Current, Visible, and Goal nodes. At the point shown, the fringe (as node(cost)) is:]
Fringe: [27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 15(16), 21(18)] + [22(16), 23(15)]
Uniform Cost Fringe Trace
1. [1(0)]
2. [3(3), 2(4)]
3. [2(4), 6(5), 7(7)]
4. [6(5), 5(6), 7(7), 4(11)]
5. [5(6), 7(7), 13(8), 12(9), 4(11)]
6. [7(7), 13(8), 12(9), 10(10), 11(10), 4(11)]
7. [13(8), 12(9), 10(10), 11(10), 4(11), 14(13), 15(16)]
8. [12(9), 10(10), 11(10), 27(10), 4(11), 26(12), 14(13), 15(16)]
Assumption: new nodes with the same cost as existing nodes are added after the existing nodes.
9. [10(10), 11(10), 27(10), 4(11), 26(12), 25(12), 14(13), 24(13), 15(16)]
10. [11(10), 27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 15(16), 21(18)]
11. [27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 23(15), 15(16), 22(16), 21(18)]
12. [4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 23(15), 15(16), 22(16), 21(18)]
13. [25(12), 26(12), 14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 22(16), 9(16), 21(18)]
14. [26(12), 14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 22(16), 9(16), 21(18)]
15. [14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 22(16), 9(16), 21(18)]
16. [24(13), 8(13), 20(14), 23(15), 15(16), 22(16), 9(16), 29(16), 21(18), 28(21)]
17. Goal reached!
Depth-first search
■ Always expands one of the nodes at the deepest level of the tree
■ Only when the search hits a dead end does it
– go back and expand nodes at shallower levels
– (a dead end is a leaf node that is not the goal)
■ Backtracking search
– only one successor is generated on expansion, rather than all successors
– uses even less memory
Depth-first search
■ Expand the deepest unexpanded node
■ Implementation:
– fringe = LIFO queue (a stack), i.e., put successors at the front
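A minimal sketch of this LIFO-fringe implementation; the check against states already on the current path is an added safeguard, since plain tree-search DFS can otherwise loop forever:

```python
def depth_first_search(start, goal, successors):
    """DFS with an explicit LIFO fringe (a Python list used as a stack)."""
    fringe = [[start]]                     # stack of paths; top holds the deepest node
    while fringe:
        path = fringe.pop()                # expand the deepest unexpanded node
        state = path[-1]
        if state == goal:
            return path
        for child in successors(state):
            if child not in path:          # skip states already on this path (loop check)
                fringe.append(path + [child])
    return None                            # every branch ended in a dead end
```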
Depth-first search
[Figure: the same search tree rooted at S as in the breadth-first example. Depth-first search follows a single branch (S, A, B, C, ...) down to its deepest node before backtracking; the numbers shown (11, 14, 17, 15, 15, 13, 19, 19, 17, 25) are path costs.]
Depth-first search (Analysis)
■ Not complete
– a path may be infinite or contain a loop
– the search then never fails on that path, so it never goes back to try another option
■ Not optimal
– it does not guarantee the best (shallowest) solution
■ Its advantage
– it overcomes the space problem of breadth-first search (and, when solutions are dense, can also be faster)
Properties of depth-first search
■ Complete? No: fails in infinite-depth spaces and in spaces with loops
– but it is complete in finite spaces

■ Time? O(b^m): terrible if m is much larger than d
– but if solutions are dense, may be much faster than breadth-first
– b is the branching factor (the maximum number of children each node can have)
– m is the maximum depth of the search tree

■ Space? O(bm), i.e., linear space!

■ Optimal? No
Depth-Limited Search (DLS)
• Like depth-first search, but with a depth limit L.

Properties:
– Incomplete (if L < d)
– Not optimal (if L > d)
– Time complexity is O(b^L)
– Space complexity is O(bL)
■ Completeness: DLS is incomplete if the depth limit L is less than the actual depth of the solution d. This means it might miss solutions that exist below the limit.
■ Optimality: DLS is not optimal if the depth limit L is greater than d. In such cases, it may find a solution, but it might not be the optimal (i.e., shallowest) one.
■ Time Complexity: The time complexity of DLS is O(b^L), where b is the branching factor and L is the depth limit. This represents the maximum number of nodes that could be visited, as the search explores down to the limit L but no further.
■ Space Complexity: The space complexity is O(bL), as the algorithm only needs to store up to L levels of the search tree in the recursion stack at any time. This makes it more memory-efficient than traditional DFS in deep trees.
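A recursive sketch of DLS; the "cutoff" sentinel, which distinguishes "stopped by the depth limit" from "no solution exists at all", is an illustrative convention:

```python
def depth_limited_search(state, goal, successors, limit):
    """Returns a path to goal, None on failure, or 'cutoff' if the limit was hit."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                    # the depth limit, not a dead end, stopped us
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result        # prepend this state to the found path
    return "cutoff" if cutoff_occurred else None
```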
Iterative Deepening Search (IDS)
• A combination of depth-first and breadth-first search.
• Runs depth-limited search with limit L = 0, 1, 2, ...; if no solution is found within the current limit, the algorithm increases L by 1 and repeats the search.
• Commonly used when the depth of the solution is unknown.
• Low memory requirements: IDS combines the space efficiency of DFS with the completeness and optimality of BFS.
Iterative Deepening Characteristics
■ Completeness:
IDS is complete because it will eventually reach any depth where the goal resides.
■ Optimality:
It finds the optimal solution when step costs are uniform (like BFS).
■ Time Complexity:
Similar to BFS: O(b^d), where b is the branching factor and d is the depth of the shallowest goal.
■ Space Complexity:
Like DFS: O(bd), where d is the depth of the goal.
■ Problem: find a path from "Arad" to "Bucharest" in the map of Romania using IDS.
■ Steps:
– Set depth limit L = 0. Explore depth 0: only "Arad" is checked. Goal not found.
– Increment L to 1: explore "Arad" and its direct successors. Goal not found.
– Increment L to 2: explore "Arad" and the successors of its direct successors.
– Continue until "Bucharest" is reached (a code sketch follows below).
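A minimal sketch of IDS, reusing the depth_limited_search sketch above (the max_depth cap is an illustrative safeguard against unsolvable problems, not part of the algorithm):

```python
def iterative_deepening_search(start, goal, successors, max_depth=50):
    """Runs depth-limited search with limits L = 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit)
        if result != "cutoff":
            return result                  # a solution path, or None for failure
    return None
```

Although shallow levels are re-generated on every iteration, most nodes of a tree live at the deepest level, so the total work remains O(b^d).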
Bidirectional Search
• Run two searches: one forward from the initial state and one backward from the goal.
• Stop when the two searches meet.

Motivation:
– Time complexity: O(b^(d/2) + b^(d/2)) < O(b^d)
– Searching backwards is not easy.
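A sketch of bidirectional breadth-first search. It assumes neighbors(state) can be followed in both directions (e.g., an undirected graph), which is precisely the "searching backwards" difficulty the slide notes:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Two BFS frontiers, one from each end; stops when they meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Expand one full layer of the forward search, then of the backward search.
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            for _ in range(len(frontier)):
                state = frontier.popleft()
                for child in neighbors(state):
                    if child not in parents:
                        parents[child] = state
                        if child in other:          # the two searches have met
                            return _join(child, parents_f, parents_b)
                        frontier.append(child)
    return None

def _join(meet, parents_f, parents_b):
    """Stitch the two half-paths together through the meeting node."""
    path, node = [], meet
    while node is not None:                # walk back to the start...
        path.append(node)
        node = parents_f[node]
    path.reverse()                         # ...so the path runs start -> meet
    node = parents_b[meet]
    while node is not None:                # then walk forward to the goal
        path.append(node)
        node = parents_b[node]
    return path
```

Since each frontier only reaches depth about d/2, the two searches together touch roughly 2·b^(d/2) nodes instead of b^d.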


Repeated States/Looping
■ Avoiding redundant state expansions enhances efficiency. In a state space structured like a tree, where each state can be reached by only a single path, redundancy is naturally limited.
■ However, where a state space has cycles or allows multiple paths to the same state (as in problems like route-finding or puzzles), failing to track visited states can lead to an overwhelming number of expansions.
■ The solution often involves employing a "closed list" in an approach called GRAPH-SEARCH, where every expanded state is stored and checked before further expansion. This method greatly reduces the number of repeated state explorations.
■ However, maintaining a closed list increases memory demands, especially for depth-first search and iterative deepening search.
■ This space-time tradeoff underscores a core challenge in search algorithms: balancing the memory required to store state histories against the computational time saved by avoiding redundant searches.
Avoiding Looping & Repeated States
• Keep lists of expanded and non-expanded states (open and closed lists); a sketch follows below.
• Use domain-specific knowledge.
• Use sophisticated data structures to find already-visited states more quickly.
• Note: checking for repeated states can be quite expensive and can slow down the search algorithm.
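A sketch of the open/closed-list idea from the bullets above; the depth_first flag is only an illustrative way of showing that the closed list is independent of the fringe discipline:

```python
from collections import deque

def graph_search(start, goal, successors, depth_first=False):
    """GRAPH-SEARCH skeleton: an open list (the fringe) plus a closed
    list of already-expanded states, preventing repeated expansions."""
    fringe = deque([[start]])              # open list of paths awaiting expansion
    closed = set()                         # closed list of expanded states
    while fringe:
        path = fringe.pop() if depth_first else fringe.popleft()
        state = path[-1]
        if state in closed:
            continue                       # repeated state: do not expand again
        closed.add(state)
        if state == goal:
            return path
        for child in successors(state):
            if child not in closed:
                fringe.append(path + [child])
    return None
```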
SEARCHING WITH PARTIAL INFORMATION
■ Sensorless problems (also called conformant problems): If the agent
has no sensors at all, then (as far as it knows) it could be in one of several
possible initial states, and each action might therefore lead to one of several
possible successor states.
■ Contingency problems: If the environment is partially observable or if
actions are uncertain, then the agent's percepts provide new information
after each action. Each possible percept defines a contingency that must be
planned for. A problem is called adversarial if the uncertainty is caused by
the actions of another agent.
■ Exploration problems: When the states and actions of the environment are
unknown, the agent must act to discover them. Exploration problems can be
viewed as an extreme case of contingency problems.
■ A sensorless vacuum agent operates without any sensory input, relying solely
on its knowledge of action effects.
■ To solve sensorless problems, we search in the space of belief states rather
than physical states. The initial state is a belief state, and each action maps
from a belief state to another belief state.
■ Initial Belief State: The vacuum agent knows it could start in any state
within a set of possible states (e.g., {1,2,3,4,5,6,7,8}).
■ Action-Based State Calculation: Despite lacking sensors, the agent can
determine which states it could be in after certain actions, e.g., the action
sequence [Right, Suck, Left, Suck] always leads to the goal state 7, regardless
of the initial state.
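As a hedged illustration of that calculation, here is a minimal belief-state predictor for the two-square vacuum world. The (location, dirty-squares) encoding of a physical state is an assumption chosen for readability rather than the textbook's 1-8 numbering:

```python
def result(state, action):
    """Deterministic transition model for the two-square vacuum world.
    A state is (agent_location, frozenset_of_dirty_squares), with the
    squares named 'L' and 'R' (an illustrative encoding)."""
    loc, dirt = state
    if action == 'Left':
        return ('L', dirt)
    if action == 'Right':
        return ('R', dirt)
    if action == 'Suck':
        return (loc, dirt - {loc})         # sucking cleans the agent's square
    raise ValueError(action)

def predict(belief, action):
    """An action maps a belief state to the set of all possible results."""
    return {result(s, action) for s in belief}

# Initial belief: all 8 physical states (2 locations x 4 dirt configurations).
belief = {(loc, frozenset(d))
          for loc in 'LR'
          for d in [(), ('L',), ('R',), ('L', 'R')]}

for action in ['Right', 'Suck', 'Left', 'Suck']:
    belief = predict(belief, action)

print(belief)   # {('L', frozenset())}: agent on the left, both squares clean
```

The final singleton belief state corresponds to the slide's goal state 7: after the four actions the agent knows exactly where it is, despite never having sensed anything.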
■ Belief State Space Complexity: for a space with S physical states, the belief state space contains 2^S possible belief states. In the sensorless vacuum example, there are only 12 reachable belief states out of a possible 2^8 = 256.
■ Deterministic vs. Nondeterministic Environments: In
a deterministic environment, actions lead to predictable
belief state changes. In a nondeterministic environment
(e.g., Murphy's Law), where actions may have multiple
outcomes, belief states become unions of possible
outcomes.
■ Reasoning in Partially Observable Worlds: Without full
observability, the agent reasons over sets of possible
states (belief states), unlike fully observable cases where
each belief state maps to a single physical state.
Contingency problems
■ In a contingency problem, an agent must adapt its actions based on new
information received through its sensors after each step. This leads to
decision-making paths that can change depending on the environment.
■ Example - Murphy’s Law World: An agent in Murphy’s Law world, with
limited sensory information (like a position sensor and a local dirt sensor),
faces unpredictability. For instance, it may take an action like “Suck” to clean
dirt, but due to Murphy’s Law, dirt might reappear or other unintended
changes might occur, requiring the agent to adjust its actions dynamically.
■ A fixed action sequence may fail because of contingencies (e.g., unexpected dirt reappearing). Instead, agents need conditional plans like [Suck, Right, if [R, Dirty] then Suck], allowing them to adjust actions based on current conditions (a toy sketch appears below).
■ Contingency problems are common in real-world scenarios where precise
predictions are impossible, such as in driving or walking, where constant
adjustment based on new sensory input is necessary.
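A toy sketch of how such a conditional plan could be represented and executed; execute and percept are hypothetical stand-ins for the agent's actuators and sensors:

```python
def run_plan(plan, execute, percept):
    """Executes a plan whose steps are plain actions or
    ('if', condition, subplan) branches checked against the current percept."""
    for step in plan:
        if isinstance(step, tuple) and step[0] == 'if':
            _, condition, subplan = step
            if condition(percept()):       # branch on what the sensors report now
                run_plan(subplan, execute, percept)
        else:
            execute(step)                  # an unconditional action

# The slide's plan [Suck, Right, if [R, Dirty] then Suck] in this encoding:
plan = ['Suck', 'Right', ('if', lambda p: p == 'Dirty', ['Suck'])]
```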
Contingency problems
■ Sequential Solutions in Fully Observable Environments:
In fully observable environments (where all state information is
visible), some problems may allow sequential solutions without
conditional branching, simplifying the solution.
■ Interleaving Search and Execution: In complex
environments, an agent might begin executing actions even
without a fully guaranteed plan. This interleaving approach
helps by allowing the agent to respond to arising contingencies
in real-time, which is also applicable in areas like exploration
and game playing.
