
Artificial Intelligence

UNIT - II
UNIT – II (Contents)

Problem Solving Agents
Example Problems
Searching for Solutions
Uninformed Search Strategies
Informed Search Strategies
Heuristic Functions
Beyond Classical Search
Local Search Algorithms and Optimization Problems
Local Search in Continuous Spaces
Searching with Nondeterministic Actions
Searching with Partial Observations
Online Search Agents and Unknown Environments
Problem Solving Agents
• Imagine an agent in the city of Arad, Romania, enjoying a touring holiday.
• Suppose the agent has a nonrefundable ticket to fly out of Bucharest the following day. In that case, it makes sense for the agent to adopt the goal of getting to Bucharest.
• Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.
Problem Solving Agents
• The agent’s task is to find out how to act, now and in the future, so that it reaches a goal state. Before it can do this, it needs to decide what sorts of actions and states it should consider.
• Problem formulation is the process of deciding what actions and states to consider, given a goal.
Problem Solving Agents
• Our agent has now adopted the goal of driving to Bucharest and is considering where to go from Arad. Three roads lead out of Arad, one toward Sibiu, one to Timisoara, and one to Zerind. None of these achieves the goal.
• The process of looking for a sequence of actions that reaches the goal is called search.
• A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
• Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
Problem Solving Agents
Well-defined problems and solutions:
• A problem can be defined formally by five components:
1. Initial state
2. Actions
3. Successor (transition model)
4. Goal Test
5. Path Cost
• The initial state is the state that the agent starts in. For example, the initial state for our agent in Romania might be described as In(Arad).
Problem Solving Agents
• Actions: Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions is applicable in s.
• For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
Problem Solving Agents
• Transition model: A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s.
• For example, RESULT(In(Arad), Go(Zerind)) = In(Zerind).
Problem Solving Agents
• Goal test: Determines whether a given state is a goal state. The agent’s goal in Romania is the singleton set {In(Bucharest)}.
Problem Solving Agents
• Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states. For example, in chess, the goal is to reach a state called “checkmate,” where the opponent’s king is under attack and can’t escape.
Problem Solving Agents
• Path cost: A path cost function assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
• For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometers.
Problem Solving Agents
• The step cost of taking action a in state s to reach state s′ is denoted by c(s, a, s′).
• A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
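
These five components map directly onto code. Below is a minimal Python sketch of the Romania route-finding problem; the class layout and method names are illustrative, and only a fragment of the road map (with the standard AIMA distances) is included.

```python
# A minimal sketch of the five-component problem definition for the
# Romania route-finding example. Only a fragment of the map is shown.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial            # 1. initial state, e.g. "Arad"
        self.goal = goal

    def actions(self, s):                 # 2. ACTIONS(s): applicable actions
        return [("Go", city) for city in ROADS[s]]

    def result(self, s, a):               # 3. transition model RESULT(s, a)
        return a[1]                       # ("Go", city) leads to that city

    def goal_test(self, s):               # 4. goal test
        return s == self.goal

    def step_cost(self, s, a, s2):        # 5. step cost c(s, a, s')
        return ROADS[s][s2]

problem = RouteProblem("Arad", "Bucharest")
print(problem.actions("Arad"))  # [('Go', 'Sibiu'), ('Go', 'Timisoara'), ('Go', 'Zerind')]
```

The search sketches later in this unit are written against this same small interface (initial, actions, result, goal_test, step_cost).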
Example problems
• The problem-solving approach has been applied to a vast array of task environments. We list some of the best known here, distinguishing between toy and real-world problems.
• A toy problem is intended to illustrate or exercise various problem-solving methods.
• A real-world problem is one whose solutions people care about. Such problems tend not to have a single agreed-upon description.
Example problems
Toy Problems:
Vacuum world (Example 1)
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
Example problems
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
Example problems
The 8-puzzle (Example 2)
• Consists of a 3×3 board with eight
numbered tiles and a blank space.
• A tile adjacent to the blank space can
slide into the space.
• The object is to reach a specified goal
state, such as the one shown on the
right of the figure.
Example problems
• States: A state description specifies the
location of each of the eight tiles and
the blank in one of the nine squares.
• Initial state: Any state can be
designated as the initial state.
• Actions: Movements of the blank space: Left, Right, Up, or Down.
• Transition model: Given a state and
action, this returns the resulting state.
Example problems
• Goal test: This checks whether the state matches the goal state.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
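
This formulation fits in a few lines of code. A minimal sketch, assuming a state is a 9-tuple read row by row with 0 standing for the blank (the helper names are illustrative):

```python
# A sketch of the 8-puzzle actions and transition model. A state is a
# 9-tuple read row by row, with 0 standing for the blank square.
def actions(state):
    i = state.index(0)                  # position of the blank
    moves = []
    if i % 3 > 0: moves.append("Left")  # blank can move left
    if i % 3 < 2: moves.append("Right")
    if i >= 3:    moves.append("Up")
    if i < 6:     moves.append("Down")
    return moves

def result(state, action):
    i = state.index(0)
    j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]             # slide the adjacent tile into the blank
    return tuple(s)

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(result(goal, "Right"))            # (1, 0, 2, 3, 4, 5, 6, 7, 8)
```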
Example problems
• The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.
• The 15-puzzle (on a 4×4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms.
Example problems
The 8-Queens Problem (Example 3)
• The goal of the 8-queens problem is to
place eight queens on a chessboard
such that no queen attacks any other.
• Although efficient special-purpose
algorithms exist for this problem and
for the whole n-queens family, it
remains a useful test problem for
search algorithms.
Example problems
• An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state.
Example problems
• A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts.
Example problems
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
Example problems
• In this formulation we have 64 · 63 · · · 57 ≈ 1.8 × 10^14 possible sequences to investigate.
• A better formulation would prohibit placing a queen in any square that is already attacked. This reduces the 8-queens state space from 1.8 × 10^14 to just 2,057, and solutions are easy to find.
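
To make the improvement concrete, here is a sketch of the better formulation, assuming queens are placed column by column and a state is a tuple of row indices. Counting the reachable states this way should reproduce the 2,057 figure quoted above.

```python
# A sketch of the improved incremental formulation: place queens column
# by column, only in squares not attacked by the queens already placed.
# A state is a tuple of row indices, one per filled column.
def attacked(state, row):
    col = len(state)                     # the next column to fill
    return any(r == row or abs(r - row) == col - c
               for c, r in enumerate(state))

def actions(state):
    return [r for r in range(8) if not attacked(state, r)]

def count_states(state=()):
    """Count all reachable states, including the empty board."""
    if len(state) == 8:
        return 1
    return 1 + sum(count_states(state + (r,)) for r in actions(state))

print(count_states())                    # expected: 2057
```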
Example problems
Real-world Problems:
• Route-finding problem
• Touring problems
• Traveling salesperson problem
• VLSI layout
• Robot navigation
• Automatic assembly sequencing
Searching for Solutions

Measuring problem-solving performance:
We can evaluate an algorithm’s performance in four ways:
1. Completeness: Is the algorithm guaranteed to find a solution when there is one?
2. Optimality: Does the strategy find the optimal solution?
3. Time complexity: How long does it take to find a solution?
4. Space complexity: How much memory is needed to perform the search?
Uninformed Search Strategies
• Also called blind search.
• The term means that the strategies have no additional information about states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-goal state.
Different search methods:
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening depth-first search
• Bidirectional search
Uninformed Search Strategies
Breadth-first search:
• Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
• In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.
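A hedged sketch of breadth-first search against the RouteProblem interface from earlier (the helper names are illustrative):

```python
from collections import deque

# Breadth-first search: expand shallowest nodes first using a FIFO queue.
# Works against the RouteProblem interface sketched earlier.
def breadth_first_search(problem):
    if problem.goal_test(problem.initial):
        return []
    frontier = deque([(problem.initial, [])])   # (state, actions so far)
    explored = {problem.initial}
    while frontier:
        state, path = frontier.popleft()        # FIFO: shallowest first
        for a in problem.actions(state):
            child = problem.result(state, a)
            if child not in explored:
                if problem.goal_test(child):    # test at generation time
                    return path + [a]
                explored.add(child)
                frontier.append((child, path + [a]))
    return None
```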
Uninformed Search Strategies
• Imagine searching a uniform tree where every state has b successors.
• The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level.
• Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on.
• Now suppose that the solution is at depth d. Then the total number of nodes generated is b + b^2 + b^3 + · · · + b^d = O(b^d).
Uninformed Search Strategies
Uniform-cost search:
• Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n).
• This is done by storing the frontier as a priority queue ordered by g.
Uninformed Search Strategies
• The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99, respectively.
• The least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80 + 97 = 177.
• The least-cost node is now Fagaras, so it is expanded, adding Bucharest with cost 99 + 211 = 310.
• Uniform-cost search does not stop here: it next expands Pitesti, adding a second path to Bucharest with cost 80 + 97 + 101 = 278, which replaces the more expensive path and is returned as the solution.
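A hedged uniform-cost search sketch against the RouteProblem interface from earlier; calling it on RouteProblem("Sibiu", "Bucharest") should reproduce the trace above, returning cost 278.

```python
import heapq

# Uniform-cost search: the frontier is a priority queue ordered by the
# path cost g(n); the goal test is applied at expansion time, so the
# first goal popped is reached by a cheapest path.
def uniform_cost_search(problem):
    frontier = [(0, problem.initial, [])]    # (g, state, actions so far)
    best_g = {problem.initial: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return g, path
        for a in problem.actions(state):
            child = problem.result(state, a)
            g2 = g + problem.step_cost(state, a, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2, child, path + [a]))
    return None

print(uniform_cost_search(RouteProblem("Sibiu", "Bucharest")))
# (278, [('Go', 'Rimnicu Vilcea'), ('Go', 'Pitesti'), ('Go', 'Bucharest')])
```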
Uninformed Search Strategies
Depth-first search:
• Depth-first search always expands the deepest node in the current frontier of the search tree.
Uninformed Search Strategies
Depth-limited search:
• The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit ℓ.
• That is, nodes at depth ℓ are treated as if they have no successors. This approach is called depth-limited search.
Uninformed Search Strategies
Iterative deepening depth-first search:
• Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit.
• It does this by gradually increasing the limit—first 0, then 1, then 2, and so on—until a goal is found.
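
A hedged sketch of depth-limited and iterative deepening search over the same RouteProblem interface; the cutoff handling follows the usual AIMA pseudocode, and the names are illustrative.

```python
# Depth-limited DFS that distinguishes "no solution" from "cutoff",
# plus iterative deepening, which retries with limits 0, 1, 2, ...
def depth_limited_search(problem, state, limit, path=()):
    if problem.goal_test(state):
        return list(path)
    if limit == 0:
        return "cutoff"                     # the depth limit was reached
    cutoff = False
    for a in problem.actions(state):
        child = problem.result(state, a)
        r = depth_limited_search(problem, child, limit - 1, path + (a,))
        if r == "cutoff":
            cutoff = True
        elif r is not None:
            return r                        # found a solution below here
    return "cutoff" if cutoff else None

def iterative_deepening_search(problem, max_depth=50):
    for limit in range(max_depth):
        r = depth_limited_search(problem, problem.initial, limit)
        if r != "cutoff":
            return r                        # a solution, or None if proven absent
    return None
```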
Uninformed Search Strategies
Bidirectional search:
• The idea behind bidirectional search is to run two simultaneous searches—one forward from the initial state and the other backward from the goal—hoping that the two searches meet in the middle.
• The motivation is that b^(d/2) + b^(d/2) is much less than b^d.
Informed search algorithms
Outline
• Heuristics
• Best-first search
• Greedy best-first search
• A* search
Best-first search
• Idea: use an evaluation function f(n) for each node
• f(n) provides an estimate for the total cost.
→ Expand the node n with smallest f(n).

• Implementation: order the nodes in the fringe in increasing order of cost.

• Special cases:
• greedy best-first search
• A* search
Greedy best-first search
• f(n) = h(n) = estimate of cost from n to the goal
• e.g., h_SLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal.
Greedy best-first search example
Properties of greedy best-first search
• Complete? No – it can get stuck in loops.
• Time? O(b^m), but a good heuristic can give remarkable improvement.
• Space? O(b^m) – keeps all nodes in memory.
• Optimal? No.
e.g., Arad→Sibiu→Rimnicu Vilcea→Pitesti→Bucharest is shorter!
A* search
• Idea: avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to goal
• Greedy best-first search has f(n) = h(n).
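
A hedged A* sketch on the Romania fragment from earlier; the h_SLD values are the standard straight-line distances to Bucharest from the AIMA map, and the function names are illustrative. Passing h(n) = 0 would turn this into uniform-cost search, while ignoring g would give greedy best-first search.

```python
import heapq

# A* search on the Romania fragment: f(n) = g(n) + h(n), with h the
# straight-line distance to Bucharest (values from the AIMA map).
H_SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
         "Bucharest": 0}

def astar_search(problem, h):
    frontier = [(h(problem.initial), 0, problem.initial, [])]
    best_g = {problem.initial: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)   # smallest f first
        if problem.goal_test(state):
            return g, path
        for a in problem.actions(state):
            child = problem.result(state, a)
            g2 = g + problem.step_cost(state, a, child)
            if g2 < best_g.get(child, float("inf")):  # keep cheapest path
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [a]))
    return None

print(astar_search(RouteProblem("Arad", "Bucharest"), H_SLD.get))
# (418, [('Go', 'Sibiu'), ('Go', 'Rimnicu Vilcea'), ('Go', 'Pitesti'), ('Go', 'Bucharest')])
```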
A* search example
Properties of A*
• Complete? Yes.
• Time/Space? Exponential in the worst case: O(b^d).
• Optimal? Yes (with an admissible heuristic).
• Optimally efficient? Yes.
Heuristic Functions
• The 8-puzzle was one of the earliest heuristic search problems.
• The average solution cost for a randomly generated 8-puzzle instance is about 22 steps.
• The branching factor is about 3, so an exhaustive search to depth 22 would examine approximately 3^22 ≈ 3.1 × 10^10 states.
Heuristic Functions
• If we want to find the shortest solutions by using A∗, we need a heuristic function that never overestimates the number of steps to the goal.
Heuristic Functions
First Heuristic:
• h1 = the number of misplaced tiles.
• In the diagram, all eight tiles are out of position, so the start state would have h1 = 8.
• h1 is an admissible heuristic because any tile that is out of place must be moved at least once.
Heuristic Functions
Second Heuristic:
• h2 = the sum of the distances of the tiles from their goal positions.
• Because tiles cannot move along diagonals, the distance we count is the sum of the horizontal and vertical distances.
Heuristic Functions
• This is sometimes called the city block distance or Manhattan distance.
• Tiles 1 to 8 in the start state give a Manhattan distance of h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18.
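
Both heuristics are a few lines each. A minimal sketch, assuming the 9-tuple state representation used earlier (0 is the blank, and the goal places tile i at index i):

```python
# h1: misplaced tiles; h2: sum of Manhattan distances. The blank (0) is
# not counted by either heuristic.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    total = 0
    for i, t in enumerate(state):
        if t != 0:
            g = GOAL.index(t)           # goal square of tile t
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)     # the AIMA start state
print(h1(start), h2(start))             # 8 18
```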
Heuristic Functions
The effect of heuristic accuracy on performance:
• If the total number of nodes generated by A∗ for a particular problem is N and the solution depth is d, then b∗ is the effective branching factor: the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes. Thus,
N + 1 = 1 + b∗ + (b∗)^2 + · · · + (b∗)^d
• Example: if d = 5 and N = 52, then b∗ = 1.92.
• The effective branching factor can vary across problem instances, but experimental measurements of b∗ on a small set of problems can provide a good guide to the heuristic’s overall usefulness.
Heuristic Functions
Generating admissible heuristics from subproblems: pattern databases:
• For example, the diagram shows a subproblem that involves getting tiles 1, 2, 3, and 4 into their correct positions.
• Clearly, the cost of the optimal solution of this subproblem is a lower bound on the cost of the complete problem.
Local Search Algorithms and Optimization Problems
• The search algorithms so far are designed to explore search spaces systematically, by keeping one or more paths in memory and by recording which alternatives have been explored at each point along the path.
• When a goal is found, the path to that goal also constitutes a solution to the problem.
• In many problems, however, the path to the goal is irrelevant. For example, in the 8-queens problem the final configuration of queens is what matters, not the order in which they are added.
Local Search Algorithms and Optimization Problems
• Local search algorithms operate using a single current node and generally move only to neighbors of that node. Typically, the paths followed by the search are not retained.
• Although local search algorithms are not systematic, they have two key advantages:
1. They use very little memory.
2. They can often find reasonable solutions.
• Local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.
Local Search Algorithms and Optimization Problems
• To understand local search, we find it useful to consider the state-space landscape.
• A landscape has both “location” (defined by the state) and “elevation” (defined by the value of the heuristic cost function or objective function).
Local Search Algorithms and Optimization Problems
• If elevation corresponds to cost, then the aim is to find the lowest valley—a global minimum.
• If elevation corresponds to an objective function, then the aim is to find the highest peak—a global maximum.
Local Search Algorithms and Optimization Problems
Hill-climbing search:
• It is simply a loop that continually moves in the direction of increasing value—that is, uphill.
• It terminates when it reaches a “peak” where no neighbor has a higher value.
Local Search Algorithms and Optimization Problems
Hill-climbing search:
• The algorithm does not maintain a search tree, so the current node need only record the state and the value of the objective function.
• Hill climbing does not look ahead beyond the immediate neighbors of the current state.
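
A minimal steepest-ascent sketch for 8-queens in the complete-state formulation (one queen per column; a state is a tuple of 8 row indices). The objective here counts attacking pairs, to be minimized; the names are illustrative.

```python
import random

# Steepest-ascent hill climbing for 8-queens: repeatedly move to the
# best neighbor (changing one queen's row) until no neighbor is better.
def conflicts(state):
    return sum(1 for a in range(8) for b in range(a + 1, 8)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(state):
    while True:
        neighbors = [state[:c] + (r,) + state[c + 1:]
                     for c in range(8) for r in range(8) if r != state[c]]
        best = min(neighbors, key=conflicts)
        if conflicts(best) >= conflicts(state):
            return state                 # a peak: possibly only a local optimum
        state = best

state = hill_climb(tuple(random.randrange(8) for _ in range(8)))
print(state, conflicts(state))           # conflicts == 0 means a solution
```

Because it can halt at a local optimum, a loop like this is usually wrapped in random restarts.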
Local Search Algorithms and Optimization Problems
Simulated annealing:
• A hill-climbing algorithm that never makes “downhill” moves toward states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum.
• Therefore, it seems reasonable to try to combine hill climbing with a random walk in some way that yields both efficiency and completeness.
• Simulated annealing is such an algorithm.
Local Search Algorithms and Optimization Problems
Simulated annealing:
• Gradient descent analogy: imagine the task of getting a ping-pong ball into the deepest crevice in a bumpy surface.
• If we just let the ball roll, it will come to rest at a local minimum.
• If we shake the surface, we can bounce the ball out of the local minimum.
Local Search Algorithms and Optimization Problems
Simulated annealing:
• Instead of picking the best move, simulated annealing picks a random move. If the move improves the situation, it is always accepted.
• The probability of making a “bad” move decreases exponentially as the quality of the move worsens, as quantified by ΔE.
Local Search Algorithms and Optimization Problems
Simulated annealing:
• The probability also decreases as the “temperature” parameter T decreases: a bad move is accepted with probability e^(ΔE/T).
• At higher values of T, “bad” moves are more likely to be allowed, but as T decreases, these moves become increasingly unlikely to be chosen.
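
A minimal sketch of the acceptance rule, assuming the caller supplies value, neighbor, and cooling-schedule functions (all illustrative placeholders):

```python
import math, random

# Simulated annealing: accept improving moves always; accept worsening
# moves with probability e^(dE/T), which shrinks as T cools.
def simulated_annealing(state, value, neighbor, schedule, steps=100_000):
    for t in range(1, steps + 1):
        T = schedule(t)
        if T <= 0:
            break                        # frozen: stop annealing
        nxt = neighbor(state)
        dE = value(nxt) - value(state)   # positive means an improvement
        if dE > 0 or random.random() < math.exp(dE / T):
            state = nxt
    return state

# Example wiring for 8-queens, reusing conflicts() from the previous
# sketch (the value is negated so that higher is better):
# best = simulated_annealing(
#     tuple(random.randrange(8) for _ in range(8)),
#     value=lambda s: -conflicts(s),
#     neighbor=lambda s: (lambda c: s[:c] + (random.randrange(8),) + s[c + 1:])(random.randrange(8)),
#     schedule=lambda t: 2.0 * 0.999 ** t)
```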
Local Search Algorithms and Optimization Problems
Local beam search:
• The local beam search algorithm keeps track of k states rather than just one.
• At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts.
• Otherwise, it selects the k best successors from the complete list and repeats.
• Local beam search with k states might seem like running k random restarts in parallel instead of in sequence. In fact, the two algorithms are quite different: in a random-restart search, each search process runs independently of the others, whereas in local beam search useful information is passed among the parallel search threads.
Local Search Algorithms and Optimization Problems
• The states that generate the best successors say to the others, “Come over here, the grass is greener!”
• The algorithm quickly abandons unfruitful searches and moves its resources to where the most progress is being made.
Local Search Algorithms and Optimization Problems
Genetic algorithms:
• In a genetic algorithm (or GA), successor states are generated by combining two parent states rather than by modifying a single state.
• GAs begin with a set of k randomly generated states, called the population.
• Each state is represented as a string over a finite alphabet—most commonly, a string of 0s and 1s.
Local Search Algorithms and Optimization Problems
Example:
• For example, an 8-queens state must specify the positions of 8 queens, each in a column of 8 squares, and so requires 8 × log2 8 = 24 bits.
• Alternatively, the state could be represented as 8 digits, each in the range from 1 to 8.
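
A compact GA sketch using the 8-digit representation; fitness counts non-attacking pairs (28 means a solution). Population size, mutation scheme, and selection here are illustrative choices, not prescribed by the slides.

```python
import random

# Genetic algorithm for 8-queens: states are 8-tuples of rows 1-8.
def fitness(s):
    attacks = sum(1 for a in range(8) for b in range(a + 1, 8)
                  if s[a] == s[b] or abs(s[a] - s[b]) == b - a)
    return 28 - attacks                  # 28 non-attacking pairs = solved

def reproduce(x, y):
    c = random.randrange(1, 8)           # single-point crossover
    return x[:c] + y[c:]

def mutate(s):
    c = random.randrange(8)               # re-randomize one queen's row
    return s[:c] + (random.randint(1, 8),) + s[c + 1:]

def genetic_algorithm(k=20, generations=1000):
    pop = [tuple(random.randint(1, 8) for _ in range(8)) for _ in range(k)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        if fitness(best) == 28:
            return best
        # Fitness-proportionate selection (+1 avoids all-zero weights).
        parents = random.choices(pop, weights=[fitness(p) + 1 for p in pop],
                                 k=2 * k)
        pop = [mutate(reproduce(parents[2 * i], parents[2 * i + 1]))
               for i in range(k)]
    return max(pop, key=fitness)

print(genetic_algorithm())
```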
Local Search in Continuous Spaces
Example: place three new airports anywhere in Romania
• Most real-world environments are continuous.
• Continuous state and action spaces have infinite branching factors.
• Suppose we want to place three new airports anywhere in Romania, such that the sum of squared distances from each city on the map to its nearest airport is minimized.
Local Search in Continuous Spaces
• The state space is then defined by the coordinates of the airports: (x1, y1), (x2, y2), and (x3, y3).
• Actions move one or more of the airports on the map.
• The objective function f(x1, y1, x2, y2, x3, y3) is relatively easy to compute for any state once we compute the closest cities.
Local Search in Continuous Spaces
• Let Ci be the set of cities whose closest airport (in the current state) is airport i.
• Then, in the neighborhood of the current state, where the Ci remain constant, we have
f(x1, y1, x2, y2, x3, y3) = Σ_{i=1}^{3} Σ_{c ∈ Ci} [(xi − xc)^2 + (yi − yc)^2]
Local Search in Continuous Spaces
• This expression is correct locally, but not globally, because the sets Ci are (discontinuous) functions of the state.
• One way to avoid continuous problems is simply to discretize the neighborhood of each state.
• For example, we can move only one airport at a time in either the x or y direction by a fixed amount ±δ. With 6 variables, this gives 12 possible successors for each state.
• We can then apply any of the local search algorithms described previously.
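
A sketch of that discretization, with made-up city coordinates (placeholders, not the real map) and the hill-climber pattern from earlier:

```python
# Discretized local search for airport placement: a state is a tuple of
# three (x, y) airports; successors move one airport by ±delta along one
# axis (3 airports x 2 axes x 2 signs = 12 successors).
CITIES = [(91, 492), (400, 327), (253, 288), (165, 299), (375, 270)]

def f(airports):
    # Sum over cities of the squared distance to the nearest airport.
    return sum(min((x - cx) ** 2 + (y - cy) ** 2 for (x, y) in airports)
               for (cx, cy) in CITIES)

def successors(airports, delta=10):
    for i in range(3):
        for dx, dy in ((delta, 0), (-delta, 0), (0, delta), (0, -delta)):
            s = list(airports)
            s[i] = (s[i][0] + dx, s[i][1] + dy)
            yield tuple(s)

def descend(airports):
    while True:
        best = min(successors(airports), key=f)
        if f(best) >= f(airports):
            return airports              # a local minimum of f
        airports = best

print(descend(((100, 100), (300, 300), (200, 400))))
```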
Searching with Nondeterministic Actions
• In partially observable environments, agents lack complete information about the world and rely on percepts to narrow down the set of possible states, aiding decision-making.
• In nondeterministic environments, the outcomes of actions are uncertain, and percepts inform the agent about the actual outcome, guiding adaptive strategies.
• In both cases, the future percepts cannot be determined in advance, and the agent’s future actions will depend on those future percepts.
• Agents therefore use contingency plans, which specify actions based on the percepts received.
Searching with Nondeterministic Actions
The deterministic vacuum world
• If the environment is observable, deterministic, and completely known, the solution is an action sequence.
• For example, if the initial state is 1, then the action sequence [Suck, Right, Suck] will reach a goal state, 8.
Searching with Nondeterministic Actions
The erratic vacuum world
• In the erratic vacuum world, the Suck
action works as follows:
1. Applied to a dirty square the action
cleans the square and sometimes
cleans up dirt in an adjacent square,
too.
2. Applied to a clean square the action
sometimes deposits dirt on the
carpet.
Searching with Nondeterministic Actions
The erratic vacuum world
• The Suck action in state 1 leads to a state in the set {5, 7}—the dirt in the right-hand square may or may not be vacuumed up.
• This requires generalizing the notion of a solution to the problem: instead of a single action sequence, a solution is a contingency plan, e.g., [Suck, if State = 5 then [Right, Suck] else [ ]].
Searching with Nondeterministic Actions
AND–OR search trees
• In a deterministic environment, the only branching is introduced by the agent’s own choices in each state. We call these nodes OR nodes. In the vacuum world, for example, at an OR node the agent chooses Left or Right or Suck.
• In a nondeterministic environment, branching is also introduced by the environment’s choice of outcome for each action. We call these nodes AND nodes.
• For example, the Suck action in state 1 leads to a state in the set {5, 7}, so the agent would need to find a plan for state 5 and for state 7.
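
A hedged sketch of AND–OR search, patterned on the usual AIMA pseudocode. It assumes the problem exposes results(s, a) returning the set of possible outcome states; the returned plan is nested: an action, then a sub-plan per possible outcome.

```python
# AND-OR graph search for nondeterministic problems. A solution plan is
# [] at a goal, or [action, {outcome_state: sub_plan, ...}] otherwise.
def and_or_search(problem):
    return or_search(problem, problem.initial, [])

def or_search(problem, state, path):
    if problem.goal_test(state):
        return []                        # empty plan: already at a goal
    if state in path:
        return None                      # a loop: fail on this branch
    for a in problem.actions(state):
        plan = and_search(problem, problem.results(state, a), [state] + path)
        if plan is not None:
            return [a, plan]             # OR node: one workable action suffices
    return None

def and_search(problem, states, path):
    plans = {}
    for s in states:                     # AND node: every outcome needs a plan
        plan = or_search(problem, s, path)
        if plan is None:
            return None
        plans[s] = plan
    return plans
```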
Searching with Nondeterministic Actions
Try, try again
• Consider the slippery vacuum world, which is identical to the ordinary (non-erratic) vacuum world except that movement actions sometimes fail, leaving the agent in the same location.
• For example, moving Right in state 1 leads to the state set {1, 2}.
Searching with partial observations

Searching with no observation:


• Assume that the agent knows the
geography of its world but doesn’t know
its location or the distribution of dirt. In
that case, its initial state could be any
element of the set {1, 2, 3, 4, 5, 6, 7, 8}.
Searching with partial observations

Searching with no observation:


• Now, consider what happens if it tries
the action Right. This will cause it to be in
one of the states {2, 4, 6, 8}—the agent
now has more information! Furthermore,
the action sequence [Right, Suck] will
always end up in one of the states {4, 8}.
Searching with partial observations

Searching with no observation:


• Finally, the sequence [Right, Suck, Left,
Suck] is guaranteed to reach the goal
state 7 no matter what the start state.
We say that the agent can coerce the
world into state 7.
Searching with partial observations
• Suppose the underlying physical problem P is defined by ACTIONSP, RESULTP, GOAL-TESTP, and STEP-COSTP. Then we can define the corresponding sensorless problem as follows:
• Belief states: The entire belief-state space contains every possible set of physical states. If P has N states, the sensorless problem has up to 2^N belief states.
Searching with partial observations
• Initial state: Typically, the set of all states in P, although in some cases the agent will have more knowledge than this.
• Actions: This is slightly tricky. Suppose the agent is in belief state b = {s1, s2}, but ACTIONSP(s1) ≠ ACTIONSP(s2); then the agent is unsure which actions are legal, and can take either the union or the intersection of the actions legal in the individual states.
Searching with partial observations
• Transition model: The process of generating the new belief state after the action is called the prediction step; the notation b′ = PREDICTP(b, a) will come in handy. For deterministic actions, b′ = {RESULTP(s, a) : s ∈ b}.
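
For deterministic actions the prediction step is a one-liner; a sketch, assuming result implements RESULTP:

```python
# Prediction step: the new belief state collects RESULT(s, a) for every
# physical state s consistent with the current belief b.
def predict(b, a, result):
    return frozenset(result(s, a) for s in b)
```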
Searching with partial observations
• Goal test: The agent wants a plan that is sure to work, which means that a belief state satisfies the goal only if all the physical states in it satisfy GOAL-TESTP. The agent may accidentally achieve the goal earlier, but it won’t know that it has done so.
• Path cost: This is also tricky. If the same action can have different costs in different states, then the cost of taking an action in a given belief state could be one of several values.
Online search agents and unknown environments
• Offline search agents compute a complete solution before setting foot in the real world and then execute the solution.
• An online search agent interleaves computation and action: first it takes an action, then it observes the environment and computes the next action.
• Online search is a necessary idea for unknown environments, where the agent does not know what states exist or what its actions do.
Online search agents and unknown environments
Examples:
1. A robot placed in a new building must explore it to build a map that it can use for getting from A to B.
2. Methods for escaping from mazes (required knowledge for aspiring heroes of antiquity).
Online search agents and unknown environments
Examples:
3. Wumpus world: a simple grid-based puzzle game in which an agent navigates a grid containing pits and a Wumpus (a dangerous creature) while searching for hidden gold. The goal is to find the gold and exit the grid safely.
