Unit-2.1 Prob Solving Methods - Search Strategies

This document discusses problem-solving methods in artificial intelligence and machine learning. It covers uninformed and informed search strategies, heuristics, local search algorithms, searching with partial observations, constraint satisfaction problems, constraint propagation, backtracking search, game playing, and optimal decisions in games, including alpha-beta pruning and stochastic games. Examples of problem formulations include the vacuum world and the 8-puzzle sliding tile puzzle. Problem solving involves defining the search space, deciding the start and goal states, and finding a path from start to goal.


CSA2001 - FUNDAMENTALS IN AI and ML

Unit II Problem Solving Methods


Problem solving Methods - Search Strategies - Uninformed - Informed -
Heuristics - Local Search Algorithms and Optimization Problems -
Searching with Partial Observations – Constraint Satisfaction Problems –
Constraint Propagation - Backtracking Search - Game Playing – Optimal
Decisions in Games – Alpha - Beta Pruning - Stochastic Games.

Prepared by
Dr M.MANIMARAN
Senior Associate Professor (Grade-1)
School of Computing Science and Engineering
VIT Bhopal University
Unit II Problem Solving Methods / Dr M.Manimaran
Problem Solving Introduction
■ In which we see how an agent can find a sequence of actions that
achieves its goals, when no single action will do.
■ The method of solving a problem through AI involves:
■ defining the search space
■ deciding the start and goal states
■ finding a path from the start state to the goal state through the search space.
■ State space search is a process used in computer science,
including artificial intelligence (AI), in which successive configurations
or states of an instance are considered, with the goal of finding a goal
state with a desired property.

Problem Solving Introduction
■ This is one kind of goal-based agent, called a problem-solving agent.
■ Problem-solving agents use atomic representations: states of the
world are considered as wholes, with no internal structure visible to
the problem-solving algorithms.

Problem Solving Introduction
■ Goal-based agents that use more advanced factored or structured representations
are usually called planning agents.
■ Problem solving begins with precise definitions of problems and their solutions;
we give several examples to illustrate these definitions.
■ We then describe several general-purpose search algorithms that can be used to
solve these problems. We will see several uninformed search algorithms—
algorithms that are given no information about the problem other than its definition.
■ Although some of these algorithms can solve any solvable problem, none of them
can do so efficiently.
■ Informed search algorithms, on the other hand, can do quite well given some
guidance on where to look for solutions.

Problem Solving Agents

■ Intelligent agents are supposed to maximize their
performance measure.
■ Problem-solving agents find a sequence of actions that achieves their
goals.
■ In this section we will use a map as an example; at a quick
look you can see that each node represents a city, and the
cost to travel from one city to another is the number
on the edge connecting the nodes of those two cities.

Problem Solving Agents
■ To solve a problem, the agent must pass through two phases of formulation:
Goal Formulation:
■ Problem solving is about having a goal we want to reach (e.g. I want to
travel from ‘A’ to ‘E’).
■ Goals have the advantage of limiting the objectives the agent is trying to
achieve.
■ We can say that a goal is the set of environment states in which our goal is
satisfied.

Problem Formulation:
■ Problem formulation is about deciding what actions and states to
consider; we will come to this point shortly.
■ We will describe our states as “in(CITYNAME)”,
■ where CITYNAME is the name of the city we are currently in.

Problem Solving Agents

■ Once our agent has found the sequence of cities it should pass through
to reach its goal, it should start following this sequence.
■ The process of finding such a sequence is called search.
■ A search algorithm is like a black box that takes a problem as
input and returns a solution.
■ Once a solution is found, the sequence of actions it
recommends is carried out; this is called the execution
phase.
■ We now have a simple (formulate, search, execute) design for our
problem-solving agent, so let’s find out precisely how to formulate
a problem.

Formulating Problems
A problem can be defined formally by 4 components:
■ Initial State:
– the state from which our agent starts solving the problem {e.g. in(A)}.
■ State Description:
– a description of the possible actions available to the agent; it is common to describe it
by means of a successor function: given a state x, SUCCESSOR-FN(x) returns a set of
ordered pairs <action, successor>, where action is a legal action from state x and
successor is the state reached by applying that action.
– The initial state and the successor function together define what is called the state space,
which is the set of all possible states reachable from the initial state {e.g. in(A), in(B),
in(C), in(D), in(E)}.

Formulating Problems
■ Functions used in problem solving:
■ MoveGen()
■ GoalTest()
■ Goal Test: we should be able to decide whether the current state is a goal state {e.g. is
the current state in(E)?}.
■ Path cost:
– a function that assigns a numeric value to each path. Each step we take in solving the
problem should be somehow weighted, so if we travel from A to E our agent will pass through
many cities, and the cost to travel between two consecutive cities should have some cost
measure {e.g. traveling from ‘A’ to ‘B’ costs 20 km, which can be written as c(A, 20, B)}.
■ A solution to a problem is a path from the initial state to a goal state; solution
quality is measured by the path cost, and the optimal solution has the lowest path
cost among all possible solutions.
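The four components above can be turned into a small Python sketch. The road map below (cities A–E and their edge costs) is an illustrative assumption; only the 20 km A→B cost comes from the example above, and the class and function names are chosen here for illustration.

```python
# A sketch of the 4-component problem definition. The ROADS map is
# illustrative; only the A->B cost of 20 km comes from the example above.

ROADS = {  # successor map: city -> {neighbour: step cost}
    "A": {"B": 20, "C": 30},
    "B": {"D": 25},
    "C": {"D": 15},
    "D": {"E": 10},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial          # initial state, e.g. in(A)
        self.goal = goal                # goal state, e.g. in(E)

    def successor_fn(self, state):
        """SUCCESSOR-FN(x): the <action, successor> pairs legal in state x."""
        return [(f"go({city})", city) for city in ROADS.get(state, {})]

    def goal_test(self, state):
        """Is the current state a goal state?"""
        return state == self.goal

    def path_cost(self, path):
        """Sum of step costs along a path such as ['A', 'B', 'D', 'E']."""
        return sum(ROADS[a][b] for a, b in zip(path, path[1:]))

problem = RouteProblem("A", "E")
print(problem.successor_fn("A"))                # [('go(B)', 'B'), ('go(C)', 'C')]
print(problem.path_cost(["A", "B", "D", "E"]))  # 20 + 25 + 10 = 55
```

The state space (all cities reachable from A) is implicit in ROADS plus the initial state, matching the definition above.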

Mathematical Representation of State Space Search Problem

■ Sp: <S, A, Action(s), Result(s,a), Cost(s,a)>
■ S - set of states
■ A - set of actions
■ Action(s) - actions applicable in state s
■ Result(s,a) - the resulting state after performing action a in state s
■ Cost(s,a) - cost of the action performed
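As a concrete toy instance of this 5-tuple, the sketch below uses a counter world with four states; all names and values here are illustrative assumptions, not part of the formal definition above.

```python
# Toy instance of Sp = <S, A, Action(s), Result(s,a), Cost(s,a)>:
# a counter that can be incremented or decremented within 0..3.

S = {0, 1, 2, 3}          # set of states
A = {"inc", "dec"}        # set of actions

def result(s, a):         # Result(s, a): the resulting state post action
    return s + 1 if a == "inc" else s - 1

def actions(s):           # Action(s): actions applicable in state s
    return {a for a in A if result(s, a) in S}

def cost(s, a):           # Cost(s, a): cost of the action performed
    return 1

print(actions(0))         # {'inc'}  ('dec' would leave the state space)
print(result(1, "inc"))   # 2
```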

Problem solving agents are goal-directed agents
■ 1. Goal Formulation: Set of one or more (desirable) world states (e.g. checkmate in
chess).
■ 2. Problem formulation: What actions and states to consider given a goal and an initial
state.
■ 3. Search for solution: Given the problem, search for a solution --- a sequence of actions
to achieve the goal starting from the initial state.
■ 4. Execution of the solution

■ Note: The formulation feels somewhat “contrived,” but it is meant to model a very general
(human) problem-solving process.

Problem solving agents are goal-directed agents

Formulate goal:
– destination state
Formulate problem:
– action: drive between a pair of
connected cities (direct road)
Find solution:
– sequence of cities leading from the start to the
goal state, e.g., Arad, Sibiu, Fagaras,
Bucharest
Execution:
– drive from Arad to Bucharest according
to the solution

Example: Vacuum world state space graph

Vacuum World
– Initial state:
■ Our vacuum can be in any of the 8 states shown in the picture.
– State description:
■ The successor function generates the legal states resulting from applying the
three actions {Left, Right, and Suck}.
■ The state space is shown in the picture; there are 8 world states.
– Goal test:
■ Checks whether all squares are clean.
– Path cost:
■ Each step costs 1, so the path cost is the number of steps in the path.
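A sketch of this formulation for the two-square world; encoding the state as (agent location, set of dirty squares) is an assumption made here for illustration.

```python
# Vacuum world: state = (loc, dirty) with loc in {'A', 'B'} and dirty the
# set of dirty squares; 2 locations x 4 dirt combinations = 8 states.

def successor_fn(state):
    """Legal states resulting from the three actions Left, Right, Suck."""
    loc, dirty = state
    return [
        ("Left",  ("A", dirty)),            # moving left always ends in A
        ("Right", ("B", dirty)),            # moving right always ends in B
        ("Suck",  (loc, dirty - {loc})),    # cleans the current square
    ]

def goal_test(state):
    return not state[1]                     # goal: no dirty squares left

start = ("A", frozenset({"A", "B"}))        # agent in A, both squares dirty
for action, nxt in successor_fn(start):
    print(action, nxt)
```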

Example: The 8-puzzle “sliding tile puzzle”
■ Initial state:
– The board can start in any reachable configuration.
■ State description:
– The successor function generates the legal states resulting from applying the four
actions {move blank Up, Down, Left, or Right}.
– The state description specifies the location of each of the eight tiles and the blank.
■ Goal test:
– Checks whether the state matches the goal configuration shown
in the picture.
■ Path cost:
– Each step costs 1, so the path cost is the number of steps in the path.
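A sketch of the successor function, representing the board as a flat tuple of 9 entries with 0 for the blank; both this encoding and the goal configuration below are assumptions chosen for illustration.

```python
# 8-puzzle: moving the blank Up/Down/Left/Right on a 3x3 board stored as a
# flat tuple indexed 0..8, with 0 denoting the blank.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def successor_fn(board):
    """Yield the <action, successor> pairs legal on this board."""
    b = board.index(0)
    row, col = divmod(b, 3)
    for action, delta in MOVES.items():
        if (action == "Up" and row == 0) or (action == "Down" and row == 2):
            continue
        if (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue
        nxt = list(board)
        t = b + delta
        nxt[b], nxt[t] = nxt[t], nxt[b]     # slide the neighbouring tile
        yield action, tuple(nxt)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)          # assumed goal configuration

def goal_test(board):
    return board == GOAL

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)         # one move away from the goal
print([a for a, s in successor_fn(start) if goal_test(s)])   # ['Right']
```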

Example: Robotic assembly
• states?: real-valued coordinates of the
robot joint angles and of the parts of the object to be
assembled
• actions?: continuous motions of robot
joints
• goal test?: complete assembly
• path cost?: time to execute

Other example search tasks
• VLSI layout: positioning millions of components and connections on a chip to
minimize area, circuit delays, etc.
• Robot navigation / planning
• Automatic assembly of complex objects
• Protein design: a sequence of amino acids that will fold into the 3-dimensional
protein with the right properties.
• Literally thousands of combinatorial search / reasoning / parsing / matching
problems can be formulated as search problems in exponential size state
spaces.

Real World Problems
Airline Travelling Problem
■ States: Each is represented by a location and the current time.
■ Initial State: This is specified by the problem.
■ Successor Function: This returns the states resulting from taking any scheduled
flight, leaving later than the current time plus the within airport transit time, from
the current airport to another.
■ Goal Test: Are we at the destination by some pre-specified time?
■ Path Cost: This depends on the monetary cost, waiting time, flight time, customs
and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer
mileage awards, and so on.
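The formulation above can be sketched as code; all flight data, airport codes, and the one-hour transit time below are invented for illustration, not taken from a real schedule.

```python
# Airline travel: a state is (airport, time); the flight table and the
# transit-time constant are illustrative assumptions.

TRANSIT = 1  # assumed within-airport transit time, in hours

FLIGHTS = [  # (origin, departure hour, destination, arrival hour)
    ("MAA", 9,  "BOM", 11),
    ("BOM", 14, "DEL", 16),
    ("MAA", 10, "DEL", 13),
]

def successor_fn(state):
    """Scheduled flights leaving after the current time plus transit time."""
    airport, now = state
    return [(f"fly({o}->{d})", (d, arr))
            for (o, dep, d, arr) in FLIGHTS
            if o == airport and dep >= now + TRANSIT]

def goal_test(state, destination="DEL", deadline=18):
    """Are we at the destination by some pre-specified time?"""
    return state[0] == destination and state[1] <= deadline

print(successor_fn(("MAA", 8)))
# [('fly(MAA->BOM)', ('BOM', 11)), ('fly(MAA->DEL)', ('DEL', 13))]
```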

Search Techniques

Searching for a (shortest / least-cost) path to goal state(s): search through the state space.

We will consider search techniques that use an explicit search tree that is generated by the
initial state + successor function.

initialize (initial node)
Loop:
    choose a node for expansion according to strategy
    goal node? → done
    expand node with successor function
Tree-search algorithms

• Basic idea:
• simulated exploration of the state space by generating successors of already-explored states
(a.k.a. expanding states)

Note: 1) Here we only check a node for possibly being a goal state after we select the
node for expansion.
2) A “node” is a data structure containing the state + additional info (parent node, etc.).
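The expansion loop above can be sketched as code. A FIFO frontier (breadth-first order) is used here as one possible strategy, and the toy road map is an illustrative assumption; as noted above, the goal test is applied when a node is selected for expansion.

```python
from collections import deque

def tree_search(problem):
    """TREE-SEARCH with a FIFO frontier (breadth-first strategy)."""
    frontier = deque([(problem.initial, [])])      # node = (state, action path)
    while frontier:
        state, path = frontier.popleft()           # choose a node per strategy
        if problem.goal_test(state):               # goal checked at selection
            return path
        for action, succ in problem.successor_fn(state):
            frontier.append((succ, path + [action]))   # expand: add successors
    return None                                    # no solution found

class TinyMap:  # illustrative problem instance
    initial = "A"
    roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
    def goal_test(self, s):
        return s == "E"
    def successor_fn(self, s):
        return [(f"go({c})", c) for c in self.roads.get(s, [])]

print(tree_search(TinyMap()))   # ['go(B)', 'go(D)', 'go(E)']
```

Because tree search never remembers visited states, state D is generated twice here; this is exactly the kind of repetition that Graph-Search avoids with an explored list.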

Tree search example
– A node is selected for expansion.
– Its successors are added to the tree.
– The next node is selected for expansion, and its successors are added to the tree.

Note: Arad is added (again) to the tree, since it is reachable from Sibiu.
This is not necessarily a problem, but in Graph-Search we will avoid it by
maintaining an “explored” list.
Graph-search

Note:
1) Uses an “explored” set to avoid visiting already-explored states.
2) Uses a “frontier” set to store states that remain to be explored and expanded.
3) However, with e.g. uniform-cost search, we need to make a special check when a
node (i.e. state) is on the frontier. Details later.
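A sketch of the same breadth-first loop extended with the “explored” and “frontier” sets described in the notes; the toy map is an illustrative assumption, and the frontier-membership check here simply skips duplicates (uniform-cost search would instead compare path costs, the special case deferred above).

```python
from collections import deque

def graph_search(problem):
    """GRAPH-SEARCH: tree search plus explored/frontier bookkeeping."""
    frontier = deque([(problem.initial, [])])
    in_frontier = {problem.initial}                # states currently on frontier
    explored = set()                               # states already expanded
    while frontier:
        state, path = frontier.popleft()
        in_frontier.discard(state)
        if problem.goal_test(state):
            return path
        explored.add(state)
        for action, succ in problem.successor_fn(state):
            if succ not in explored and succ not in in_frontier:
                frontier.append((succ, path + [action]))
                in_frontier.add(succ)
    return None

class TinyMap:  # illustrative problem instance
    initial = "A"
    roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
    def goal_test(self, s):
        return s == "E"
    def successor_fn(self, s):
        return [(f"go({c})", c) for c in self.roads.get(s, [])]

print(graph_search(TinyMap()))   # ['go(B)', 'go(D)', 'go(E)']
```

Unlike tree search on the same map, state D is added to the frontier only once.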
Implementation: states vs. nodes
• A state is a representation of a physical configuration.

• A node is a data structure constituting part of a search tree; it includes the state, the tree
parent node, the action (applied to the parent), the path cost g(x) (from the initial state to the node), and the depth.

• The Expand function creates new nodes, filling in the various fields and using the
SuccessorFn of the problem to create the corresponding states.
The fringe is the collection of nodes that have been generated but not (yet)
expanded. Each node of the fringe is a leaf node.

Implementation: general tree search

Search strategies

• A search strategy is defined by picking the order of node expansion.

• Strategies are evaluated along the following dimensions:
• completeness: does it always find a solution if one exists?
• time complexity: number of nodes generated
• space complexity: maximum number of nodes in memory
• optimality: does it always find a least-cost solution?

• Time and space complexity are measured in terms of
• b: maximum branching factor of the search tree
• d: depth of the least-cost solution
• m: maximum depth of the state space (may be ∞)
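To see how b and d drive time complexity: the worst-case number of nodes generated by a breadth-first strategy grows geometrically, roughly b + b² + … + b^d = O(b^d). A quick sketch (the sample numbers are illustrative):

```python
def nodes_generated(b, d):
    """Worst-case nodes generated down to depth d: b + b^2 + ... + b^d."""
    return sum(b ** i for i in range(1, d + 1))

print(nodes_generated(10, 3))   # 10 + 100 + 1000 = 1110
```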
