
PROBLEM‐SOLVING

Uninformed Search Strategies


INTRODUCTION
■ “In which we see how an agent can find a sequence of actions that
achieves its goals when no single action will do”.
■ Reflex Agents: These are simple agents that act based on a direct
mapping from states to actions. However, they are limited because
they struggle in complex environments where the state-action
mapping would be too large to store or learn efficiently.
■ Goal-Based Agents: These agents take future actions and their
potential outcomes into consideration. Unlike reflex agents, they aim
to achieve specific goals by evaluating the desirability of different
outcomes.
■ Problem-Solving Agents: A specific type of goal-based agent,
problem-solving agents work with atomic representations, meaning
they consider states of the world as indivisible wholes without
internal structure. They solve problems by finding a sequence of
actions that lead to a desired goal.
Search Algorithms:
■ This chapter introduces several search algorithms, both uninformed
and informed:
■ Uninformed Search Algorithms: These algorithms have no
additional information about the problem besides its definition. While
they can solve any solvable problem, they are generally inefficient.
■ Informed Search Algorithms: These algorithms are more efficient
as they are guided by additional information that helps them find
solutions more effectively.
■ Task Environments: The focus is on simple task environments where
the solution to a problem is a fixed sequence of actions. More complex
environments, where actions may vary based on future perceptions,
are covered in later chapters.
■ Concepts: The chapter also discusses important concepts like
asymptotic complexity (using O() notation) and NP-completeness,
which are critical for understanding the efficiency and feasibility of
problem-solving algorithms.
Problem-Solving Agents
■ Intelligent agents are supposed to maximize
their performance measure.
■ Agent Example: Touring Holiday in Arad,
Romania
■ Scenario: An agent is on a holiday in Arad
with various objectives (e.g., sightseeing,
improving Romanian, enjoying nightlife).
■ Complex Decision: The agent faces a
complex decision with multiple factors to
consider.
■ Goal Setting: With a flight from Bucharest
the next day, the agent sets a clear goal: Get
to Bucharest.
■ Impact of Goal: Simplifies decision-making
by focusing only on actions that lead to
Bucharest.
Problem-Solving Agents
■ Problem Formulation
■ Goal Definition: A goal is a set of states where the goal is achieved.
■ Decision Process: The agent must decide which actions and states
to consider to reach the goal.
■ Action Level: Instead of detailed actions (e.g., "move foot"), the
agent considers higher-level actions (e.g., "drive to the next town").
■ Using a Map for Decision-Making
■ Map as a Tool: Provides information on possible routes and states.
■ Route Planning: The agent examines possible routes on the map to
Bucharest, selecting the best one to achieve the goal.
■ Execution: Once the best route is found, the agent follows the
planned actions to reach Bucharest.
Search and Execution in Problem-Solving Agents

■ Environment Assumptions
■ Observable: The agent always knows its current state (e.g., city
signs on the map).
■ Discrete: Limited actions at each state (e.g., cities connected by a
few roads).
■ Known: The agent knows the outcome of each action (e.g., accurate
map).
■ Deterministic: Each action leads to a single outcome (e.g., driving
from Arad to Sibiu).
Problem-Solving Process
1. Search:
• Objective: Find a sequence of actions that leads to the goal.
• Algorithm: Takes the problem and returns a solution (action
sequence).
2. Execution:
• Implementation: The agent follows the action sequence without
reconsidering (open-loop).
• Certainty: The agent must be certain of outcomes, ignoring percepts
during execution.
Problem-Solving Process
■ Key Design: Formulate, Search, Execute
• Formulate: Define the goal and the problem.
• Search: Find the solution (a sequence of actions).
• Execute: Carry out the solution step by step, as sketched below.
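A minimal sketch of this formulate-search-execute loop in Python, in the spirit of the SIMPLE-PROBLEM-SOLVING-AGENT pseudocode; the four helper methods are placeholders to be supplied for a concrete agent, not any particular library's API:

class SimpleProblemSolvingAgent:
    def __init__(self):
        self.seq = []                      # remaining actions in the current plan

    def __call__(self, percept):
        state = self.update_state(percept)
        if not self.seq:                   # no plan yet: formulate and search
            goal = self.formulate_goal(state)
            problem = self.formulate_problem(state, goal)
            self.seq = self.search(problem) or []
            if not self.seq:
                return None                # search failed: return a null action
        return self.seq.pop(0)             # execute the next step, open-loop

    # Placeholders: fill these in for a concrete environment.
    def update_state(self, percept): raise NotImplementedError
    def formulate_goal(self, state): raise NotImplementedError
    def formulate_problem(self, state, goal): raise NotImplementedError
    def search(self, problem): raise NotImplementedError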
3.1.1 Well-defined problems and solutions
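A well-defined problem is given by five components: the initial state, the actions, the transition model, the goal test, and the path cost. A minimal Python sketch of that interface, with names chosen for illustration rather than taken from a specific library; the toy problems later in this section can be written as subclasses:

class Problem:
    """Abstract five-component problem definition."""
    def __init__(self, initial, goal=None):
        self.initial = initial        # initial state
        self.goal = goal              # goal state (when there is a single one)

    def actions(self, state):
        """Actions applicable in state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing action in state."""
        raise NotImplementedError

    def goal_test(self, state):
        """Goal test: is state a goal?"""
        return state == self.goal

    def step_cost(self, state, action, result):
        """Cost of one step; 1 by default, as in the toy problems below."""
        return 1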
3.1.2 Formulating problems
■ In the preceding section we proposed a formulation of the problem of
getting to Bucharest in terms of the initial state, actions, transition
model, goal test, and path cost.
■ Abstract Model:
■ Simplified State: Example: "In(Arad)" ignores details like weather,
scenery, or travel companions.
■ Simplified Actions: Focus only on location changes (e.g., driving),
ignoring minor actions (e.g., turning on the radio).
■ Abstraction:
■ Definition: The process of removing irrelevant details from a problem
representation.
■ Purpose: Makes the problem manageable by focusing only on
relevant factors.
Validity and Usefulness of Abstraction
■ Validity: The abstract solution should be expandable into a detailed
solution (e.g., driving from Arad to Sibiu corresponds to many possible
detailed trips).
■ Usefulness: Abstract actions should be simple enough to execute
without needing further planning.
■ Key Point: Abstraction allows intelligent agents to solve complex
problems without being overwhelmed by real-world details
3.2 Example problems
■ The problem-solving approach has been applied to a vast array of task
environments.
■ We list some of the best known here, distinguishing between toy and
real-world problems.
■ A toy problem is intended to illustrate or exercise various problem-
solving methods. It can be given a concise, exact description and
hence is usable by different researchers to compare the performance
of algorithms.
■ A real-world problem is one whose solutions people actually care
about. Such problems tend not to have a single agreed-upon
description, but we can give the general flavor of their formulations.
3.2.1 Toy problems
• The two-square vacuum world can be formulated as a problem as follows:
• States: The state is determined by both the agent
location and the dirt locations. The agent is in one
of two locations, each of which might or might not
contain dirt. Thus, there are 2 × 2^2 = 8 possible
world states. A larger environment with n locations
has n × 2^n states.
• Initial state: Any state can be designated as the
initial state.
• Actions: In this simple environment, each state has
just three actions: Left, Right, and Suck. Larger
environments might also include Up and Down.
• Transition model: The actions have their expected
effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and
Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are
clean.
• Path cost: Each step costs 1, so the path cost is the
number of steps in the path.
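A hedged sketch of this vacuum-world formulation, reusing the Problem interface sketched in Section 3.1.1; encoding a state as (agent_location, frozenset_of_dirty_squares) is one reasonable choice, not the only one:

class VacuumProblem(Problem):
    def __init__(self):
        # Initial state: agent in the left square (0), both squares dirty.
        super().__init__(initial=(0, frozenset({0, 1})))

    def actions(self, state):
        return ['Left', 'Right', 'Suck']

    def result(self, state, action):
        loc, dirt = state
        if action == 'Left':
            return (max(loc - 1, 0), dirt)   # no effect in the leftmost square
        if action == 'Right':
            return (min(loc + 1, 1), dirt)   # no effect in the rightmost square
        return (loc, dirt - {loc})           # Suck cleans the current square

    def goal_test(self, state):
        return not state[1]                  # goal: no dirty squares remain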
8-Puzzle
■ States: A state description specifies the location of
each of the eight tiles and the blank in one of the
nine squares.
• Initial state: Any state can be designated as the
initial state. Note that any given goal can be
reached from exactly half of the possible initial
states.
• Actions: The simplest formulation defines the
actions as movements of the blank space Left,
Right, Up, or Down. Different subsets of these are
possible depending on where the blank is.
• Transition model: Given a state and action, this
returns the resulting state; for example, if we apply
Left to the start state, the resulting state has the 5
and the blank switched.
• Goal test: This checks whether the state matches
the goal configuration.
• Path cost: Each step costs 1, so the path cost is the
number of steps in the path.
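A sketch of just the 8-puzzle transition model, assuming a state is encoded as a tuple of nine entries read row by row with 0 standing for the blank; the encoding is an illustrative assumption, not prescribed by the text:

def puzzle_result(state, action):
    """Return the state reached by moving the blank Left, Right, Up, or Down."""
    blank = state.index(0)
    row = blank // 3
    delta = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}[action]
    target = blank + delta
    # Reject moves that leave the board or wrap around a row edge.
    if not 0 <= target < 9 or (delta in (-1, 1) and target // 3 != row):
        return state                         # inapplicable action: no effect
    board = list(state)
    board[blank], board[target] = board[target], board[blank]
    return tuple(board)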
8-queens problem
The goal of the 8-queens problem is to place eight queens on a chessboard such that
no queen attacks any other.

• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
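A sketch of this incremental formulation, with one simplifying assumption of ours: queens are added one row at a time (a standard refinement of "add a queen to any empty square" that shrinks the state space). A state is a tuple of column positions, one per already-filled row:

def queens_actions(state):
    """Columns in the next row where a queen can be added without attack."""
    row = len(state)
    return [col for col in range(8)
            if all(col != c and abs(col - c) != row - r   # column / diagonal attack
                   for r, c in enumerate(state))]

def queens_result(state, col):
    """Transition model: the board with a queen added in the next row."""
    return state + (col,)

def queens_goal_test(state):
    """8 queens are on the board, none attacked (by construction)."""
    return len(state) == 8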
3.2.2 Real-world problems
■ Route-finding algorithms are used in a variety of applications. Some,
such as Web sites and in-car systems that provide driving directions,
are relatively straightforward extensions of the Romania example.
■ Others, such as routing video streams in computer networks, military
operations planning, and airline travel-planning systems, involve
much more complex specifications.
■ Consider the airline travel problems that must be solved by a travel-
planning Web site:
■ States: Each state obviously includes a location (e.g., an airport) and
the current time. Furthermore, because the cost of an action (a flight
segment) may depend on previous segments, their fare bases, and
their status as domestic or international, the state must record extra
information about these “historical” aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat class, leaving
after the current time, leaving enough time for within-airport transfer if
needed.
• Transition model: The state resulting from taking a flight will have the flight’s
destination as the current location and the flight’s arrival time as the current
time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs
and immigration procedures, seat quality, time of day, type of airplane,
frequent-flyer mileage awards, and so on.
Other real-world examples:
• Touring problems
• Traveling salesperson problem
• VLSI layout
• Robot navigation
• Automatic assembly sequencing
3.3 Searching for solutions
■ Having formulated some problems, we now need to solve them.
■ A solution is an action sequence, so search algorithms work by
considering various possible action sequences.
■ The possible action sequences starting at the initial state form a
search tree with the initial state at the root; the branches are actions
and the nodes correspond to states in the state space of the problem.
Search tree
■ The root node of the tree corresponds to the initial state, In(Arad)
■ The first step is to test whether this is a goal state. Then we need to
consider taking various actions. We do this by expanding the current
state; that is, applying each legal action to the current state, thereby
generating a new set of states. In this case, we add three branches from
the parent node In(Arad) leading to three new child nodes: In(Sibiu),
In(Timisoara), and In(Zerind). Now we must choose which of these three
possibilities to consider further.
■ Suppose we choose Sibiu first. We check to see whether it is a goal state
(it is not) and then expand it to get In(Arad), In(Fagaras), In(Oradea), and
In(RimnicuVilcea). We can then choose any of these four or go back and
choose Timisoara or Zerind. Each of these six nodes is a leaf node, that
is, a node with no children in the tree.
■ The set of all leaf nodes available for expansion at any given point is
called the frontier.
■ The process of expanding nodes on the frontier continues until either a
solution is found or there are no more states to expand.
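A minimal sketch of this expand-the-frontier loop; Node, child_node, and solution are the bookkeeping helpers sketched in Section 3.3.1 below, and choose stands for whatever strategy picks the next leaf to consider:

def tree_search(problem, choose):
    frontier = [Node(problem.initial)]       # leaf nodes available for expansion
    while frontier:                          # frontier empty: no more states
        node = choose(frontier)              # strategy picks a leaf to expand
        frontier.remove(node)
        if node.state is not None and problem.goal_test(node.state):
            return solution(node)
        frontier.extend(child_node(problem, node, a)   # expand: each legal action
                        for a in problem.actions(node.state))
    return None                              # failure: no solution found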
3.3.1 Infrastructure for search algorithms
■ Search algorithms require a data structure to keep track of the search
tree that is being constructed. For each node n of the tree, we have a
structure that contains four components
■ n.STATE: the state in the state space to which the node corresponds;
■ n.PARENT: the node in the search tree that generated this node;
■ n.ACTION: the action that was applied to the parent to generate the
node;
■ n.PATH-COST: the cost, traditionally denoted by g(n), of the path from
the initial state to the node, as indicated by the parent pointers.
■ Node Data Structure
■ PARENT Pointers:
– String nodes into a tree structure.
– Allow extraction of the solution path by following pointers back to the root.
■ SOLUTION Function:
– Returns the sequence of actions from the root to a goal node, obtained by following PARENT pointers back from the goal and reversing.
■ Nodes vs. States:
– Node: A bookkeeping structure representing the search tree.
– State: A configuration of the world.
– Different nodes can contain the same state if reached by different search paths.
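A minimal sketch of this node structure, together with child-node construction and solution extraction via the PARENT pointers; the names follow the text, not a specific library:

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST, traditionally g(n)

def child_node(problem, parent, action):
    """The node generated by applying action to parent."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    """Actions along the path from the root to this (goal) node."""
    actions = []
    while node.parent is not None:    # follow PARENT pointers back to the root
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))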
■ Queue Operations
■ Purpose: Stores the frontier to choose the next node to expand.
■ Operations:
– EMPTY?(queue): Returns true if the queue is empty.
– POP(queue): Removes and returns the first element of the
queue.
– INSERT(element, queue): Inserts an element into the queue
and returns the updated queue.
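These three operations map directly onto Python's collections.deque; a small sketch (FIFO behavior shown; a LIFO queue would pop from the same end it appends to):

from collections import deque

def is_empty(queue):                  # EMPTY?(queue)
    return not queue

def pop(queue):                       # POP(queue): remove and return the first element
    return queue.popleft()

def insert(element, queue):           # INSERT(element, queue): returns the updated queue
    queue.append(element)
    return queue

frontier = insert(Node('In(Arad)'), deque())   # e.g., seed the frontier with the root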
3.3.2 Measuring problem-solving performance
■ We can evaluate an algorithm’s performance in four ways:
■ Completeness: Is the algorithm guaranteed to find a solution when
there is one?
■ Optimality: Does the strategy find the optimal solution?
■ Time complexity: How long does it take to find a solution?
■ Space complexity: How much memory is needed to perform the
search?
3.4 Uninformed search strategies
■ The term means that the strategies have no additional information
about states beyond that provided in the problem definition.
■ All they can do is generate successors and distinguish a goal state
from a non-goal state. All search strategies are distinguished by the
order in which nodes are expanded.
■ Strategies that know whether one non-goal state is “more promising”
than another are called informed search or heuristic search strategies
3.4.1 Breadth-first search
■ Breadth-first search is a simple strategy in which the root node is
expanded first, then all the successors of the root node are expanded
next, then their successors, and so on.
■ In general, all the nodes are expanded at a given depth in the search
tree before any nodes at the next level are expanded.
■ Breadth-first search is an instance of the general graph-search
algorithm, in which the shallowest unexpanded node is chosen for
expansion.
■ This is achieved very simply by using a FIFO queue for the frontier.
Thus, new nodes go to the back of the queue, and old nodes, which
are shallower than the new nodes, get expanded first.
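A sketch of breadth-first graph search following this description, reusing the Problem and Node helpers sketched earlier; testing the goal when a node is generated, rather than when it is expanded, is the usual efficiency tweak:

from collections import deque

def breadth_first_search(problem):
    node = Node(problem.initial)
    if problem.goal_test(node.state):
        return solution(node)
    frontier = deque([node])                  # FIFO queue: new nodes go to the back
    explored = set()
    while frontier:
        node = frontier.popleft()             # shallowest unexpanded node first
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in explored and \
               all(child.state != n.state for n in frontier):
                if problem.goal_test(child.state):   # test at generation time
                    return solution(child)
                frontier.append(child)
    return None                               # failure

For instance, breadth_first_search(VacuumProblem()) from the earlier sketch returns the three-step plan ['Suck', 'Right', 'Suck'].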
3.4.2 Uniform-cost search
■ When all step costs are equal, breadth-first search is optimal because
it always expands the shallowest unexpanded node.
■ Instead of expanding the shallowest node, uniform-cost search
expands the node n with the lowest path cost g(n). This is done by
storing the frontier as a priority queue ordered by g.
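A sketch of uniform-cost search with the frontier as a priority queue ordered by g, via Python's heapq; the counter is a tie-breaker so the heap never compares Node objects directly, and testing the goal at expansion (not generation) preserves optimality:

import heapq
from itertools import count

def uniform_cost_search(problem):
    tiebreak = count()
    root = Node(problem.initial)
    frontier = [(root.path_cost, next(tiebreak), root)]   # priority queue on g(n)
    best_g = {root.state: 0}
    while frontier:
        g, _, node = heapq.heappop(frontier)              # lowest path cost first
        if problem.goal_test(node.state):                 # test at expansion time
            return solution(node)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.path_cost < best_g.get(child.state, float('inf')):
                best_g[child.state] = child.path_cost     # better path found
                heapq.heappush(frontier,
                               (child.path_cost, next(tiebreak), child))
    return None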
3.4.3 Depth-first search
■ Depth-first search always expands the deepest node in the current
frontier of the search tree. The search proceeds immediately to the
deepest level of the search tree, where the nodes have no successors.
■ As those nodes are expanded, they are dropped from the frontier, so
then the search “backs up” to the next deepest node that still has
unexplored successors.
■ The depth-first search algorithm is an instance of the graph-search
algorithm: whereas breadth-first search uses a FIFO queue, depth-first
search uses a LIFO queue. A LIFO queue means that the most recently
generated node is chosen for expansion.
■ As an alternative to the GRAPH-SEARCH-style implementation, it is
common to implement depth-first search with a recursive function that
calls itself on each of its children in turn, as sketched below.
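A sketch of that recursive variant; the depth limit is a safeguard we add against infinite descent (strictly speaking it belongs to depth-limited search rather than plain depth-first search):

def depth_first_search(problem, node=None, limit=50):
    node = node or Node(problem.initial)
    if problem.goal_test(node.state):
        return solution(node)
    if limit == 0:
        return None                               # cutoff reached
    for action in problem.actions(node.state):    # recurse on each child in turn
        found = depth_first_search(problem,
                                   child_node(problem, node, action),
                                   limit - 1)
        if found is not None:
            return found
    return None                                   # no solution below this node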
