AI UNIT 1 BEC Part 2
The initial state that the agent starts in. For example, the initial
state for our agent in Romania might be described as In(Arad).
A description of the possible actions available to the agent. The most common
formulation uses a successor function.
For example, from the state In(Arad), the successor function for the Romania
problem would return
{⟨Go(Sibiu), In(Sibiu)⟩, ⟨Go(Timisoara), In(Timisoara)⟩, ⟨Go(Zerind), In(Zerind)⟩}.
The problem-solving agent chooses a cost function that reflects its own
performance measure.
For the agent trying to get to Bucharest, time is of the essence, so the
cost of a path might be its length in kilometres.
The cost of a path can be described as the sum of the costs of the
individual actions along the path.
The step costs for Romania are shown in Figure 3.2 as route
distances.
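The route distances can be represented as a simple adjacency dict. The sketch below covers only a fragment of the map (distances in kilometres, as given in Figure 3.2); the helper name `path_cost` is illustrative, not from the book's pseudocode.

```python
# A fragment of the Romania road map as an adjacency dict;
# values are step costs (route distances in kilometres).
romania_map = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

def path_cost(path):
    """Sum of the step costs of the individual actions along a path."""
    return sum(romania_map[a][b] for a, b in zip(path, path[1:]))

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450
```

The cost of the solution path Arad, Sibiu, Fagaras, Bucharest is thus 140 + 99 + 211 = 450 km.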
A solution to a problem is a path from the initial state to a goal
state.
Find solution:
– a sequence of cities leading from the start state to the goal state, e.g., Arad,
Sibiu, Fagaras, Bucharest
Execution:
– drive from Arad to Bucharest according to the solution
Environment: fully observable (map), deterministic, and the agent knows the
effects of each action. Is this really the case?
Searching for Solutions
Having formulated some problems, we now need to solve them.
This is done by a search through the state space.
Figure 3.6 shows some of the expansions in the search tree for
finding a route from Arad to Bucharest. The root of the search tree
is a search node corresponding to the initial state, In(Arad).
Figure 3.7 An informal description of the general tree-search and graph-search
algorithms.
Difference between state and node
Figure 3.8 Nodes are the data structures from which the search tree is
constructed. Each has a parent, a state, and various bookkeeping fields. Arrows
point from child to parent.
The frontier needs to be stored in such a way that the search algorithm
can easily choose the next node to expand according to its preferred
strategy. The appropriate data structure for this is a queue. The
operations on a queue are as follows:
• EMPTY?(queue) returns true only if there are no more elements in the queue.
• POP(queue) removes the first element of the queue and returns it.
• INSERT(element, queue) inserts an element and returns the resulting queue.
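The queue discipline is what distinguishes the basic search strategies. A minimal sketch of the three variants using the Python standard library (the variable names here are illustrative):

```python
from collections import deque
import heapq

frontier = deque()          # FIFO queue -> breadth-first search
frontier.append("A")
frontier.append("B")
assert frontier.popleft() == "A"   # oldest element served first

stack = []                  # LIFO queue (stack) -> depth-first search
stack.append("A")
stack.append("B")
assert stack.pop() == "B"          # newest element served first

pq = []                     # priority queue -> uniform-cost / best-first
heapq.heappush(pq, (140, "Sibiu"))
heapq.heappush(pq, (75, "Zerind"))
assert heapq.heappop(pq) == (75, "Zerind")  # lowest cost served first
```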
Search strategies are evaluated along the following dimensions:
• Completeness:
• is the strategy guaranteed to find a solution when there is one?
• Optimality:
• does the strategy find the highest-quality solution when there are
several different solutions?
• Time complexity:
• how long does it take to find a solution?
• Space complexity:
• how much memory is needed to perform the search?
Uninformed Search Strategies
• Uninformed search (also called blind search).
• no information about the number of steps or the path cost from
the current state to the goal.
• search the state space blindly
• Informed search, or heuristic search
• a cleverer strategy that searches toward the goal, based on the
information gathered about the states explored so far.
[Example search tree (nodes A–G with accumulated path costs); diagram lost in extraction.]
Optimal solution found? Yes (if all step costs are identical).
b: maximum branching factor of the search tree
d: depth of the least-cost solution
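Breadth-first search can be sketched with a FIFO frontier; the adjacency-dict representation and the tiny example graph below are assumptions for illustration:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Sketch of breadth-first search on an adjacency dict.
    Returns a path from start to goal, or None. It finds a
    shallowest goal, which is optimal if all step costs are equal."""
    frontier = deque([[start]])   # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # ['S', 'B', 'G']
```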
Uniform-cost search
Note that if all step costs are equal, this is identical to breadth-first
search.
Uniform-cost search does not care about the number of steps a path
has, but only about their total cost.
A priority queue is an abstract data type similar to a queue in which
every element has an associated priority value. The priority of the
elements in a priority queue determines the order in which they are
served (i.e., the order in which they are removed). If elements have the
same priority, they are served according to their order in the queue.
Example of UNIFORM COST
Instead, let C* be the cost of the optimal solution, and assume that every
action costs at least ε.
Then the algorithm's worst-case time and space complexity is
O(b^(1+⌊C*/ε⌋)).
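Uniform-cost search can be sketched with a priority queue ordered by g(n), the path cost so far. The map fragment below is an assumption drawn from the Romania distances; note that it finds the 278 km route via Rimnicu Vilcea and Pitesti rather than the 310 km route via Fagaras:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Sketch of uniform-cost search: expand the frontier node with the
    lowest path cost g(n). graph maps node -> {neighbour: step_cost}."""
    frontier = [(0, start, [start])]          # (g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                      # goal test on expansion,
            return g, path                    # so the returned cost is optimal
        for child, step in graph.get(node, {}).items():
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None

# Illustrative fragment of the Romania map (distances from Figure 3.2):
graph = {
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
}
print(uniform_cost_search(graph, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```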
For example, in Figure 3.16, depth-first search will explore the entire left
subtree even if node C is a goal node.
If node J were also a goal node, then depth-first search would return it as
a solution instead of C, which would be a better solution; hence, depth-
first search is not optimal.
Depth-first search
[Depth-first search example tree rooted at S (nodes A–G with path costs); diagram lost in extraction.]
iterative deepening depth-first search
[Iterative deepening example tree (nodes A–G with path costs); diagram lost in extraction.]
Bidirectional Strategy
For example, in Romania, one might estimate the cost of the cheapest
path from Arad to Bucharest via the straight-line distance from Arad to
Bucharest.
hSLD is correlated with actual road distances and is, therefore, a useful
heuristic.
For example, hSLD(In(Arad)) = 366. Notice that the values of hSLD cannot be computed
from the problem description itself.
Figure 3.23 shows the progress of a greedy best-first search using hSLD to find a path
from Arad to Bucharest. The first node to be expanded from Arad will be Sibiu
because it is closer to Bucharest than either Zerind or Timisoara. The next node to be
expanded will be Fagaras because it is closest. Fagaras in turn generates Bucharest,
which is the goal.
It is not optimal, however: the path via Sibiu and Fagaras to
Bucharest is 32 kilometers longer than the path through Rimnicu
Vilcea and Pitesti.
The worst-case time and space complexity for the tree version is O(b^m), where m
is the maximum depth of the search space.
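Greedy best-first search orders the frontier by h alone. The sketch below uses an assumed fragment of the Romania map with the straight-line distances to Bucharest from Figure 3.22; it finds the Fagaras route (310 km), illustrating why the strategy is not optimal:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Sketch of greedy best-first search: always expand the node that
    appears closest to the goal according to the heuristic h alone."""
    frontier = [(h(start), start, [start])]   # ordered by h(n) only
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, {}):
            if child not in explored:
                heapq.heappush(frontier, (h(child), child, path + [child]))
    return None

graph = {
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
}
h_sld = {"Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}
print(greedy_best_first(graph, h_sld.get, "Sibiu", "Bucharest"))
# ['Sibiu', 'Fagaras', 'Bucharest']  (310 km, not the optimal 278 km route)
```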
Optimal? No.
A* search: Minimizing the total estimated solution cost
The most widely-known form of best-first search is called A* search .
g(n) gives the path cost from the start node to node n.
h(n) is the estimated cost of the cheapest path from n to the goal.
f(n) = g(n) + h(n) is the estimated cost of the cheapest solution through n;
A* expands the node with the lowest f(n).
A heuristic h(n) is consistent if, for every node n and every successor n′
of n generated by any action a, the estimated cost of reaching the goal
from n is no greater than the step cost of getting to n′ plus the estimated
cost of reaching the goal from n′; that is, h(n) ≤ c(n, a, n′) + h(n′).
Optimality of A*
The tree-search version of A∗ is optimal if h(n) is admissible, while the
graph-search version is optimal if h(n) is consistent.
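A* can be sketched as uniform-cost search with the frontier ordered by f(n) = g(n) + h(n). The map fragment and heuristic table are assumptions taken from the Romania example; with the admissible straight-line heuristic it recovers the optimal 278 km route:

```python
import heapq

def a_star(graph, h, start, goal):
    """Sketch of A* search: expand the node minimizing f(n) = g(n) + h(n).
    The tree version is optimal when h never overestimates the true cost."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, {}).items():
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h(child), new_g, child, path + [child]))
    return None

graph = {
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
}
h_sld = {"Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}
print(a_star(graph, h_sld.get, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```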
To find a good heuristic function: if we want to find the shortest solutions
using A∗, we need a heuristic function that never overestimates the number
of steps to the goal.
h1 = the number of misplaced tiles. For Figure 3.28, all of the eight tiles are out of
position, so the start state would have h1 = 8.
h2 = the sum of the distances of the tiles from their goal positions. Because tiles
cannot move along diagonals, the distance we will count is the sum of the
horizontal and vertical distances. This is sometimes called the city block distance
or Manhattan distance. h2 is also admissible because all any move can do is move
one tile one step closer to the goal.
Tiles 1 to 8 in the start state give a Manhattan distance of
h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18. As expected, neither of these
overestimates the true solution cost, which is 26.
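Both heuristics are short to compute. The sketch below assumes the start and goal configurations of Figure 3.28, with states as 3×3 tuples and 0 for the blank:

```python
# Sketch of the two 8-puzzle heuristics; 0 denotes the blank.
START = ((7, 2, 4), (5, 0, 6), (8, 3, 1))
GOAL  = ((0, 1, 2), (3, 4, 5), (6, 7, 8))

def goal_pos(tile):
    """Row and column of a tile in the goal configuration."""
    return divmod(tile, 3)   # tile t belongs at (t // 3, t % 3)

def h1(state):
    """Number of misplaced tiles (the blank does not count)."""
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and (r, c) != goal_pos(state[r][c]))

def h2(state):
    """Sum of the tiles' Manhattan (city-block) distances to their goals."""
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:
                gr, gc = goal_pos(tile)
                total += abs(r - gr) + abs(c - gc)
    return total

print(h1(START), h2(START))  # 8 18
```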
LOCAL SEARCH ALGORITHMS AND OPTIMIZATION PROBLEMS
Introduction
Typically, the paths followed by the search are not retained. Although
local search algorithms are not systematic, they have two key
advantages:
If elevation corresponds to cost, then the aim is to find the lowest valley
(a global minimum); if elevation corresponds to an objective function, then
the aim is to find the highest peak (a global maximum).
Problem: depending on initial state, can get stuck in local optimum (here
maximum)
1) Hill-climbing search
The successor function returns all possible states generated by moving a single
queen to another square in the same column (so each state has 8 x 7 = 56
successors).
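Steepest-ascent hill climbing for n-queens can be sketched as follows. The state encoding (one queen per column, `state[c]` = row of the queen in column c) and function names are assumptions; the inner loops enumerate exactly the 8 × 7 = 56 successors described above:

```python
import random

def conflicts(state):
    """Number of attacking queen pairs; state[c] = row of queen in column c."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(state):
    """Sketch of steepest-ascent hill climbing: repeatedly move one queen
    within its column to the best neighbouring state; stops at a local
    minimum of conflicts, which may not be a solution."""
    while True:
        best, best_h = state, conflicts(state)
        for col in range(len(state)):
            for row in range(len(state)):
                if row != state[col]:
                    neighbour = state[:col] + (row,) + state[col + 1:]
                    h = conflicts(neighbour)
                    if h < best_h:
                        best, best_h = neighbour, h
        if best == state:          # no neighbour improves: local minimum
            return state
        state = best

random.seed(0)  # illustrative fixed seed
start = tuple(random.randrange(8) for _ in range(8))
result = hill_climb(start)
print(conflicts(start), "->", conflicts(result))
```

Depending on the initial state, the run may end with zero conflicts or get stuck at a local minimum, exactly the failure mode discussed below.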
1) Local maxima:
2) Ridges:
3) Plateaux:
1) Local maxima: a local maximum is a peak that is higher than each
of its neighboring states, but lower than the global maximum.
2) Ridges: a ridge is shown in Figure 4.4. A ridge is a long, narrow hilltop
or mountain range.
3) Plateaux: a plateau is an area of the state space landscape where the
evaluation function is flat. It can be a flat local maximum, from
which no uphill exit exists, or a shoulder, from which it is possible to
make progress. (See Figure 4.10.)
A hill-climbing search might be unable to find its way off the
plateau.
A plateau is a flat, elevated landform that rises sharply above the surrounding
area on at least one side
Stochastic hill climbing: 1) it does not examine all of its neighbours;
2) it selects one neighbour node at random and decides whether to
take it as the current state.
Coalesce = meld.
If the move improves the situation, it is always accepted. Otherwise, the algorithm
accepts the move with some probability less than 1.
The probability of accepting a bad move is e^(ΔE/T): it decreases
exponentially with the badness of the move (the amount ΔE by which the
evaluation is worsened), and it also decreases as the temperature T goes down.
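Simulated annealing can be sketched as follows. The toy one-dimensional landscape and the exponential cooling schedule are assumptions for illustration, not part of the original algorithm statement:

```python
import math
import random

def simulated_annealing(value, neighbour, start, schedule, steps=10_000):
    """Sketch of simulated annealing for maximization. An improving move
    is always accepted; a worsening move (delta < 0) is accepted with
    probability e^(delta / T), which shrinks as T cools."""
    current = start
    for t in range(1, steps):
        T = schedule(t)
        if T <= 0:
            return current
        nxt = neighbour(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
    return current

# Hypothetical landscape: maximize -(x - 3)^2 over the integers.
random.seed(1)  # illustrative fixed seed
best = simulated_annealing(
    value=lambda x: -(x - 3) ** 2,
    neighbour=lambda x: x + random.choice((-1, 1)),
    start=50,
    schedule=lambda t: 100 * 0.95 ** t,   # assumed exponential cooling
)
print(best)
```

Early on, the high temperature lets the walk accept many bad moves; as T decreases, the search becomes effectively greedy and settles near the global maximum at x = 3.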