Searching Algorithms - Problem Solving in AI
Setting goals helps the agent organize its behavior by limiting the
objectives it is trying to achieve, and hence the actions it needs to
consider. This goal formulation, based on the current situation and the
agent's performance measure, is the first step in problem solving.
Once a goal and a problem have been formulated, the agent searches for a
sequence of actions that achieves the goal. After this search phase, the
agent has to carry out the actions recommended by the search algorithm.
This final phase is called the execution phase.
Initial State
The initial state is the state that the agent starts in. For example, if a taxi agent
needs to get to location(B) but the taxi is currently at location(A), then location(A)
is the initial state.
Actions
A description of the possible actions available to the agent in a given state.
Transition Model
A description of what each action does, i.e. the state that results from performing
an action in a given state.
Goal Test
The goal test determines whether a given state is a goal state or not.
Sometimes there is an explicit set of possible goal states and the test
simply checks whether the given state is one of them. Sometimes the
goal is specified by an abstract property rather than an explicitly
enumerated set of states.
Path Cost
A path cost function assigns a numeric cost to each path, typically computed as the
sum of the costs of the individual steps along the path.
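Putting these five components together, the sketch below shows one way a problem definition for the taxi example could look in code. The class name RouteProblem, the toy road map, and the method names are illustrative assumptions, not part of any standard library.

```python
# Illustrative problem formulation for a toy route-finding task.
# The road map, class name and method names are assumptions for this sketch.
ROAD_MAP = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "D": 5},
    "C": {"A": 2, "D": 8},
    "D": {"B": 5, "C": 8},
}

class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial = initial        # initial state, e.g. location "A"
        self.goal = goal              # goal state, e.g. location "B"
        self.graph = graph

    def actions(self, state):
        # Actions: drive to any adjacent location.
        return list(self.graph[state])

    def result(self, state, action):
        # Transition model: "drive to X" puts the taxi at X.
        return action

    def goal_test(self, state):
        # Goal test: explicit comparison with a single goal state.
        return state == self.goal

    def step_cost(self, state, action, next_state):
        # Path cost: the sum of step costs (here, road distances).
        return self.graph[state][next_state]

problem = RouteProblem("A", "D", ROAD_MAP)
print(problem.actions("A"))     # ['B', 'C']
print(problem.goal_test("D"))   # True
```

The breadth-first search sketch further below reuses this interface.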
The following uninformed (blind) search strategies are commonly used:
1. Breadth-first search
2. Depth-first search
3. Depth-limited search
4. Iterative deepening depth-first search
5. Bidirectional search
6. Uniform cost search
Breadth-First Search
It starts from the root node, explores all the neighbouring nodes first, and then
moves on to the neighbours at the next level. It generates the tree one level at a
time until a solution is found, and can be implemented using a FIFO queue data
structure. Breadth-first search finds the shallowest solution, which is the shortest
path when all step costs are equal.
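A minimal sketch of breadth-first search over the hypothetical RouteProblem interface sketched earlier; the function name bfs and the list-of-states return format are assumptions.

```python
from collections import deque

def bfs(problem):
    # FIFO frontier of (state, path-so-far) pairs.
    frontier = deque([(problem.initial, [problem.initial])])
    explored = {problem.initial}
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:
                explored.add(child)
                frontier.append((child, path + [child]))
    return None  # no solution found

# For the toy road map above: bfs(problem) -> ['A', 'B', 'D']
```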
Bidirectional Search
It searches forward from the initial state and backward from the goal state until
the two searches meet at a common state. The path from the initial state is then
concatenated with the inverse of the path from the goal state; each search explores
only about half of the total path.
In effect, it executes two simultaneous searches, a forward search and a backward
search, and the graph is divided into two smaller sub-graphs: in one, the search
starts from the initial state, and in the other it starts from the goal state. When
the two frontiers intersect, the search terminates.
In the example, the start state is E and the goal state is G. In one sub-graph the
search starts from E and in the other it starts from G; E will go to B, and the two
searches continue until they meet at a common node.
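The idea can be sketched as two breadth-first searches expanded alternately until they share a state. The adjacency-list graph at the end, the function names, and the alternation scheme are illustrative assumptions, not the graph from the example above.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    # Two frontiers and two parent maps, one per search direction.
    if start == goal:
        return [start]
    frontiers = {start: deque([start]), goal: deque([goal])}
    parents = {start: {start: None}, goal: {goal: None}}
    while frontiers[start] and frontiers[goal]:
        for side, other in ((start, goal), (goal, start)):
            state = frontiers[side].popleft()
            for child in graph[state]:
                if child in parents[side]:
                    continue
                parents[side][child] = state
                frontiers[side].append(child)
                if child in parents[other]:   # the two searches meet here
                    return _join(parents, start, goal, child)
    return None

def _join(parents, start, goal, meeting):
    # Path from the start state to the meeting state...
    path = []
    node = meeting
    while node is not None:
        path.append(node)
        node = parents[start][node]
    path.reverse()
    # ...concatenated with the inverse path from the meeting state to the goal.
    node = parents[goal][meeting]
    while node is not None:
        path.append(node)
        node = parents[goal][node]
    return path

graph = {"E": ["B", "F"], "B": ["E", "A", "C"], "F": ["E", "G"],
         "A": ["B"], "C": ["B"], "G": ["F"]}
print(bidirectional_search(graph, "E", "G"))   # ['E', 'F', 'G']
```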
Greedy Best-First Search
Greedy best-first search uses properties of both depth-first search and
breadth-first search. It traverses the nodes by selecting the path that appears
best at the moment; the closest path is selected using a heuristic function,
evaluating each node by f(n) = h(n).
In the example, greedy best-first search first starts with A and then examines the
neighbours B and C. Here, the heuristic value of B is 12 and that of C is 4, so the
best path at the moment is the one through C.
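A sketch of greedy best-first search using a priority queue ordered purely by the heuristic value h(n). The small graph below is invented for illustration; only the heuristic values 12 for B and 4 for C are taken from the example above.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    # Frontier ordered only by the heuristic value of each state.
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for child in graph[state]:
            if child not in visited:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 10, "B": 12, "C": 4, "D": 0}
print(greedy_best_first(graph, h, "A", "D"))   # ['A', 'C', 'D']
```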
A* Search
A* search combines features of uniform-cost search and greedy best-first search
algorithms, and it uses the advantages of both with better memory usage. It uses a
heuristic function to find the shortest path: A* evaluates a node by the sum of the
path cost to reach it and the heuristic value of the node, f(n) = g(n) + h(n), and
always extends the path with the lowest f value.
For the example graph (figure not shown):
- From A to H, f = 7 + 0 = 7.
- The lowest cost is 4, so the path from A to B is chosen; the other paths are put on hold.
- From A to B to E, f = 1 + 6 + 6 = 13.
- The lowest cost is 7, so the path from A to B to D is chosen and compared with the other paths on hold.
- The lowest cost is 5, which is also lower than the other paths that are on hold.
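A sketch of A* over a weighted graph, ordering the frontier by f(n) = g(n) + h(n). The graph, the heuristic values, and the function name below are assumptions made for illustration, not the exact graph from the example above.

```python
import heapq

def a_star(graph, h, start, goal):
    # Frontier entries are (f, g, state, path); f = g + h orders expansion.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for child, step_cost in graph[state].items():
            new_g = g + step_cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h[child], new_g, child, path + [child]))
    return None, float("inf")

# Made-up weighted graph with heuristic values that never overestimate.
graph = {
    "A": {"B": 1, "H": 7},
    "B": {"D": 3, "E": 6},
    "D": {"G": 1},
    "E": {"G": 2},
    "H": {"G": 4},
    "G": {},
}
h = {"A": 5, "B": 4, "D": 1, "E": 2, "H": 4, "G": 0}
print(a_star(graph, h, "A", "G"))   # (['A', 'B', 'D', 'G'], 5)
```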
Heuristic Functions
Heuristics are typically rules of thumb or approximate strategies that guide the
search for a solution. They provide a way to assess the desirability of different
options without exhaustively exploring every possibility.
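As a concrete example, one common heuristic for sliding-tile puzzles like the one discussed next is the Manhattan distance of each tile from its goal position. The 3x3 tuple encoding (0 marks the empty space) and the function name are assumptions for this sketch.

```python
def manhattan_distance(state, goal):
    # state and goal are tuples of 9 values; 0 marks the empty space.
    total = 0
    for tile in state:
        if tile == 0:
            continue  # the empty space does not count towards the heuristic
        r1, c1 = divmod(state.index(tile), 3)
        r2, c2 = divmod(goal.index(tile), 3)
        total += abs(r1 - r2) + abs(c1 - c2)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(manhattan_distance(start, goal))   # 2: tiles 5 and 8 are each one step away
```

Because no tile can reach its goal square in fewer moves than its Manhattan distance, this heuristic never overestimates the true cost.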
In the puzzle used as the example here, the empty space can be moved in one of
four directions:
1. Up
2. Down
3. Right
4. Left
The empty space cannot move diagonally and can take only one
step at a time (i.e. move the empty space one position at a time).
We first move the empty space in all the possible directions in the
start state and calculate the f-score for each state. This is called
expanding the current state.
After expanding the current state, it is pushed onto the closed list and the newly
generated states are pushed onto the open list. The state with the least f-score is
then selected and expanded again. This process continues until the goal state
becomes the current state.
Essentially, the f-score gives the algorithm a measure for choosing its actions:
it chooses the best possible action and proceeds along that path. This also
addresses the issue of generating redundant child states, as the algorithm only
expands the node with the least f-score.
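Finally, a sketch of the open/closed-list loop described above, applied to a 3x3 board encoded as a tuple with 0 for the empty space. The expand and a_star_puzzle names, the move cost of 1 per slide, and the reuse of the Manhattan-distance heuristic are illustrative assumptions.

```python
import heapq

def manhattan_distance(state, goal):
    # Same Manhattan-distance heuristic as in the earlier sketch.
    return sum(abs(state.index(t) // 3 - goal.index(t) // 3) +
               abs(state.index(t) % 3 - goal.index(t) % 3)
               for t in state if t != 0)

def expand(state):
    # Generate the states reachable by sliding the empty space (0)
    # up, down, left or right; no diagonal moves, one step at a time.
    i = state.index(0)
    row, col = divmod(i, 3)
    children = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            child = list(state)
            child[i], child[j] = child[j], child[i]
            children.append(tuple(child))
    return children

def a_star_puzzle(start, goal):
    # Open list ordered by f = g + h; closed set of already expanded states.
    open_list = [(manhattan_distance(start, goal), 0, start, [start])]
    closed = set()
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return path
        if state in closed:
            continue
        closed.add(state)
        for child in expand(state):
            if child not in closed:
                h = manhattan_distance(child, goal)
                heapq.heappush(open_list, (g + 1 + h, g + 1, child, path + [child]))
    return None

solution = a_star_puzzle((1, 2, 3, 4, 0, 6, 7, 5, 8), (1, 2, 3, 4, 5, 6, 7, 8, 0))
print(len(solution) - 1)   # 2 moves for this easy start configuration
```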