AI Unit-2
UNIT – II
Problem Solving Agents
Well-defined problems and solutions:
• A problem can be defined formally by five components:
1. Initial State
2. Actions
3. Transition Model
4. Goal Test
5. Path Cost
• Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states.
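To make the five components concrete, here is a minimal sketch of a problem formulation in Python for the two-square vacuum world; the class and method names are illustrative assumptions, not prescribed code from this unit.

```python
# A minimal sketch of a formally defined problem (hypothetical names for illustration).
class VacuumProblem:
    """Two-location vacuum world: state = (agent_location, dirt_at_A, dirt_at_B)."""

    def initial_state(self):
        return ("A", True, True)            # agent at A, both squares dirty

    def actions(self, state):
        return ["Left", "Right", "Suck"]    # applicable actions in every state

    def result(self, state, action):
        loc, dirt_a, dirt_b = state         # transition model: RESULT(s, a)
        if action == "Left":
            return ("A", dirt_a, dirt_b)
        if action == "Right":
            return ("B", dirt_a, dirt_b)
        return (loc, dirt_a and loc != "A", dirt_b and loc != "B")   # Suck cleans current square

    def goal_test(self, state):
        return not state[1] and not state[2]   # goal: no dirt anywhere

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1                  # each step costs 1
```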
Uninformed Search Strategies
Different search methods:
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening depth-first search
• Bidirectional search
Breadth-first search:
• The root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
• In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.
• Time complexity: suppose the root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level.
• Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on.
• Now suppose that the solution is at depth d.
• Then the total number of nodes generated is b + b^2 + b^3 + ... + b^d = O(b^d).
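The following is a minimal breadth-first search sketch in Python; the graph and goal used here are made-up illustrations. The FIFO frontier is what produces the level-by-level expansion described above.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """graph: dict mapping a node to a list of its successors."""
    frontier = deque([[start]])          # FIFO queue of paths; shallowest node first
    explored = {start}
    while frontier:
        path = frontier.popleft()        # expand the shallowest unexpanded node
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in explored:
                explored.add(successor)
                frontier.append(path + [successor])
    return None                          # no solution

# Example (hypothetical graph): every node generates b = 2 successors.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(breadth_first_search(tree, "A", "F"))   # ['A', 'C', 'F']
```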
Uniform-cost search:
• Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost g(n).
• This is done by storing the frontier as a priority queue ordered by g.
• Example (route finding from Sibiu to Bucharest on the Romania map): the successors of Sibiu are Rimnicu Vilcea and Fagaras, with path costs 80 and 99, respectively; the least-cost node, Rimnicu Vilcea, is expanded next.
• Implementation: order the nodes in the fringe in increasing order of path cost.
Best-first search:
• Chooses the node to expand using an evaluation function f(n); uniform-cost search is the special case f(n) = g(n).
• Special cases:
• greedy best-first search
• A* search
Greedy best-first search:
• Expands the node that appears to be closest to the goal, evaluating nodes by the heuristic alone: f(n) = h(n).
• Optimal? No.
A* search:
• Evaluates nodes by f(n) = g(n) + h(n).
• Optimal? Yes (with an admissible heuristic).
• Optimally efficient? Yes.
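To make the "special cases" concrete, here is a hedged best-first search sketch in Python, parameterized by the evaluation function f. The graph fragment and straight-line distances to Bucharest follow the textbook's Romania map; the function and variable names are illustrative assumptions.

```python
import heapq

# Fragment of the Romania map (step costs) and straight-line distances to Bucharest.
graph = {
    "Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}
h_sld = {"Sibiu": 253, "Rimnicu Vilcea": 193, "Fagaras": 176, "Pitesti": 100, "Bucharest": 0}

def best_first_search(start, goal, f):
    """Generic best-first search: frontier is a priority queue ordered by f(g, n)."""
    frontier = [(f(0, start), 0, start, [start])]     # (priority, g, node, path)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, step in graph[node].items():
            g2 = g + step
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (f(g2, succ), g2, succ, path + [succ]))
    return None, float("inf")

# Special cases of best-first search:
print(best_first_search("Sibiu", "Bucharest", lambda g, n: g))              # uniform-cost: f = g, cost 278
print(best_first_search("Sibiu", "Bucharest", lambda g, n: h_sld[n]))       # greedy: f = h, cost 310 (suboptimal)
print(best_first_search("Sibiu", "Bucharest", lambda g, n: g + h_sld[n]))   # A*: f = g + h, cost 278
```

The run illustrates the properties listed above: greedy best-first follows Fagaras (lower h) and finds the 310 route, while uniform-cost and A* find the optimal 278 route via Rimnicu Vilcea and Pitesti.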
Heuristic Functions
Second heuristic (for the 8-puzzle):
• h2 = the sum of the distances of the tiles from their goal positions.
• Because tiles cannot move along diagonals, the distance we count is the sum of the horizontal and vertical distances (the city-block or Manhattan distance).
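A small sketch of h2 in Python. The state encoding (a tuple of 9 entries in row-major order, with 0 for the blank) is an assumption made for illustration; the start configuration is the textbook's example, for which h2 = 18.

```python
def h2_manhattan(state, goal):
    """Sum of horizontal + vertical distances of each tile from its goal position.
    state, goal: tuples of length 9 in row-major order; 0 denotes the blank."""
    total = 0
    for tile in range(1, 9):                      # the blank does not count
        i, j = divmod(state.index(tile), 3)       # current row, column
        gi, gj = divmod(goal.index(tile), 3)      # goal row, column
        total += abs(i - gi) + abs(j - gj)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h2_manhattan(start, goal))                  # 18 for this start state
```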
Local Search Algorithms and Optimization Problems
Hill-climbing search:
• It is simply a loop that continually
moves in the direction of increasing
value—that is, uphill.
• It terminates when it reaches a “peak”
where no neighbor has a higher value.
• The algorithm does not maintain a search tree; the current node data structure need only record the state and the value of the objective function.
• Hill climbing does not look ahead
beyond the immediate neighbors.
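A minimal hill-climbing sketch in Python; the neighbors and value functions below are illustrative assumptions. It shows the loop that keeps only the current state and stops at the first peak.

```python
import random

def hill_climbing(state, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until no neighbor is better."""
    current = state
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):      # reached a peak (possibly only a local maximum)
            return current
        current = best                         # no search tree: keep only the current state

# Toy example: maximize f(x) = -(x - 7)**2 over integers, moving by +/- 1.
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(random.randint(0, 20), neighbors, value))   # 7
```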
Simulated annealing:
• Gradient descent: Imagine the task of
getting a ping-pong ball into the
deepest crevice in a bumpy surface.
• If we just let the ball roll, it will come
to rest at a local minimum.
• If we shake the surface, we can bounce
the ball out of the local minimum.
• Instead of picking the best move, simulated annealing picks a random move. If the move improves the situation, it is always accepted.
• Otherwise, the move is accepted with some probability less than 1; the probability of making a "bad" move decreases exponentially with the amount ΔE by which the move worsens the evaluation.
• The acceptance probability for a worsening move, e^(ΔE/T), also decreases as the "temperature" parameter T decreases.
• At higher values of T, "bad" moves are more likely to be allowed, but as T decreases they become increasingly unlikely to be chosen.
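A simulated annealing sketch in Python following the acceptance rule above; the toy objective, neighbor function, and cooling schedule are illustrative assumptions.

```python
import math
import random

def simulated_annealing(state, neighbors, value, schedule):
    """Accept improving moves always; accept worsening moves with probability e^(ΔE/T)."""
    current = state
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))       # pick a random move, not the best one
        delta_e = value(nxt) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt                             # bad moves are allowed more often when T is high
        t += 1

# Toy example: maximize f(x) = -(x - 7)**2 with a simple geometric cooling schedule.
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
schedule = lambda t: 0 if t > 2000 else 10 * (0.99 ** t)
print(simulated_annealing(0, neighbors, value, schedule))   # usually ends close to 7
```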
Genetic algorithms:
• In a genetic algorithm (or GA), successor states are generated by combining two parent states rather than by modifying a single state.
• A GA begins with a set of k randomly generated states, called the population. Each state (individual) is represented as a string over a finite alphabet.
Example:
• An 8-queens state must specify the positions of 8 queens, each in a column of 8 squares, and so requires 8 × log2 8 = 24 bits.
• Alternatively, the state could be represented as 8 digits, each in the range from 1 to 8.
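A compact GA sketch in Python using the 8-digit representation above. The fitness function, fitness-proportional selection, single-point crossover, and always-on mutation are illustrative choices, not the unit's prescribed implementation; the function returns the best individual found if no perfect solution appears within the generation limit.

```python
import random

def fitness(state):
    """Number of non-attacking pairs of queens (maximum 28 for 8 queens)."""
    n = len(state)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return 28 - attacks

def reproduce(x, y):
    c = random.randint(1, 7)                          # single crossover point
    return x[:c] + y[c:]

def mutate(state):
    i = random.randrange(8)                           # change one digit to a random value
    return state[:i] + [random.randint(1, 8)] + state[i + 1:]

def genetic_algorithm(pop_size=100, generations=1000):
    population = [[random.randint(1, 8) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if fitness(best) == 28:
            return best                               # solution: no attacking pairs
        weights = [fitness(s) + 1 for s in population]   # +1 avoids an all-zero weight edge case
        population = [mutate(reproduce(*random.choices(population, weights, k=2)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

print(genetic_algorithm())
```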
Local Search in Continuous Spaces
Example: place three new airports anywhere in Romania.
• Most real-world environments are continuous.
• Continuous state and action spaces have infinite branching factors.
• Suppose we want to place three new airports anywhere in Romania, such that the sum of squared distances from each city on the map to its nearest airport is minimized.
• The state space is then defined by the coordinates of the airports: (x1, y1), (x2, y2), and (x3, y3).
• An action moves one or more of the airports on the map.
• The objective function f(x1, y1, x2, y2, x3, y3) is relatively easy to compute for any state once we compute the closest cities.
• Let Ci be the set of cities whose closest airport (in the current state) is airport i.
• Then, in the neighborhood of the current state, where the Ci remain constant, we have
f(x1, y1, x2, y2, x3, y3) = Σ over i Σ over c in Ci of [(xi - xc)^2 + (yi - yc)^2].
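A sketch of this objective and a few gradient-descent steps in Python. The city coordinates are made up for illustration (the real map would supply them), and the gradient expression follows from differentiating the sum of squared distances while the sets Ci stay fixed.

```python
# Hypothetical city coordinates (x, y); a real Romania map would supply these.
cities = [(9.0, 4.5), (2.0, 7.0), (6.5, 1.0), (4.0, 4.0), (8.0, 8.0)]

def objective(airports):
    """f = sum over cities of the squared distance to the nearest airport."""
    return sum(min((cx - ax) ** 2 + (cy - ay) ** 2 for ax, ay in airports)
               for cx, cy in cities)

def gradient_step(airports, alpha=0.1):
    """One gradient-descent step: df/dx_i = 2 * sum over c in Ci of (x_i - x_c)."""
    new = []
    for i, (ax, ay) in enumerate(airports):
        # Ci: cities whose nearest airport (in the current state) is airport i.
        ci = [c for c in cities
              if min(range(len(airports)),
                     key=lambda j: (c[0] - airports[j][0]) ** 2 + (c[1] - airports[j][1]) ** 2) == i]
        gx = 2 * sum(ax - cx for cx, cy in ci)
        gy = 2 * sum(ay - cy for cx, cy in ci)
        new.append((ax - alpha * gx, ay - alpha * gy))   # move downhill (minimization)
    return new

airports = [(1.0, 1.0), (5.0, 5.0), (9.0, 9.0)]
for _ in range(50):
    airports = gradient_step(airports)
print(round(objective(airports), 2), [(round(x, 2), round(y, 2)) for x, y in airports])
```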
Searching with Nondeterministic Actions and Partial Observations
• In the vacuum world, for example, at an OR node the agent chooses Left or Right; OR nodes represent the agent's own choices.
• For example, the Suck action in state 1 leads to a state in the set {5, 7}, so the agent needs a plan that works for both outcomes; such environment branching is represented by an AND node.
Goal test:
• The agent wants a plan that is sure to work, which means that a belief state satisfies the goal only if all the physical states in it satisfy the goal.
• The agent may accidentally achieve the goal earlier, but it will not know that it has done so.
Path cost:
• This is also tricky. If the same action can have different costs in different states, then the cost of taking an action in a given belief state could be one of several values.
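To illustrate the OR/AND branching, here is a minimal AND-OR search sketch in Python, tried on a simplified erratic vacuum world (Suck always cleans the current square and sometimes the other one too; the variant where Suck can deposit dirt is omitted). The problem interface (actions, results returning a set of outcomes, goal_test) is an assumption for illustration.

```python
class ErraticVacuum:
    """State = (location, dirt_left, dirt_right); location is 'A' or 'B'."""
    initial = ("A", True, True)                       # both squares dirty, agent at A

    def actions(self, state):
        return ["Suck", "Left", "Right"]

    def results(self, state, action):
        loc, dl, dr = state
        if action == "Left":
            return {("A", dl, dr)}
        if action == "Right":
            return {("B", dl, dr)}
        # Erratic Suck: cleans the current square, and sometimes the other one as well.
        if loc == "A":
            return {("A", False, dr), ("A", False, False)}
        return {("B", dl, False), ("B", False, False)}

    def goal_test(self, state):
        return not state[1] and not state[2]          # no dirt anywhere

def or_search(state, problem, path):
    # OR node: the agent picks one action whose outcomes all lead to a plan.
    if problem.goal_test(state):
        return []                                     # empty plan: already at a goal
    if state in path:
        return None                                   # avoid loops
    for action in problem.actions(state):
        plan = and_search(problem.results(state, action), problem, [state] + path)
        if plan is not None:
            return [action, plan]                     # conditional plan: do action, then branch
    return None

def and_search(states, problem, path):
    # AND node: the plan must handle *every* possible outcome state.
    plans = {}
    for s in states:
        plan = or_search(s, problem, path)
        if plan is None:
            return None
        plans[s] = plan
    return plans                                      # "if the outcome is s, follow plans[s]"

problem = ErraticVacuum()
print(or_search(problem.initial, problem, []))
```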
Online search agents and unknown environments
Examples:
1. A robot that is placed in a new building and must explore it to build a map that it can use for getting from A to B.
2. Methods for escaping from mazes (required knowledge for aspiring heroes of antiquity).
3. Wumpus world: a simple grid-based puzzle in which an agent navigates a grid containing pits and a Wumpus (a dangerous creature) while searching for hidden gold. The goal is to find the gold and exit the grid safely.