
HAWASSA UNIVERSITY BENSA DAYE CAMPUS

INTRODUCTION TO ARTIFICIAL INTELLIGENCE

INDIVIDUAL ASSIGNMENT

Name: Ayano Boresa

Id: 0l0472/14
1. What is an informed search algorithm? Discuss some informed search algorithms in artificial
intelligence.

Informed Search Algorithms


 So far we have talked about the uninformed search algorithms, which looked through the search space for all possible solutions of the problem without having any additional knowledge about the search space. An informed search algorithm, by contrast, uses additional knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.
 The informed search algorithm is more useful for large search spaces. Because informed search algorithms use the idea of a heuristic, they are also called heuristic search.
 Heuristic function: A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between a pair of states. The value of the heuristic function is always positive.

Admissibility of the heuristic function is given as:

1. h(n) <= h*(n)

Here h(n) is the heuristic (estimated) cost and h*(n) is the actual cost of an optimal path from n to the goal. Hence the heuristic cost should be less than or equal to the actual cost.
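
As a small illustration (an assumption made here, not part of the original text), the Python sketch below uses the Manhattan distance on a grid as h(n). Since each move changes one coordinate by exactly one, this heuristic never overestimates the true number of moves, so h(n) <= h*(n) holds and the heuristic is admissible.

# Minimal sketch: Manhattan distance as a heuristic h(n) on a grid where the
# agent moves one cell up/down/left/right per step. The goal cell is assumed
# for this example only.

GOAL = (4, 4)

def h(state):
    """Estimated cost (always >= 0) from 'state', a (row, col) pair, to GOAL."""
    row, col = state
    return abs(row - GOAL[0]) + abs(col - GOAL[1])

print(h((1, 2)))   # 5: at least 5 moves are needed, so h((1, 2)) = 5 <= h*((1, 2))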

Pure Heuristic Search:


 Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN list and a CLOSED list. In the CLOSED list it places the nodes which have already been expanded, and in the OPEN list it places the nodes which have not yet been expanded.
 On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated, and n is placed on the CLOSED list. The algorithm continues until a goal state is found.

For informed search we will discuss the two main algorithms given below:

o Best First Search Algorithm (Greedy Search)


o A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):


 The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of the depth-first and breadth-first search algorithms, and it uses the heuristic function to guide the search. Best-first search allows us to take the advantages of both algorithms: at each step, we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where closeness is estimated by the heuristic function, i.e.

1. f(n) = h(n).

Where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented with a priority queue.

Best first search algorithm:

o Step 1: Place the starting node into the OPEN list.

o Step 2: If the OPEN list is empty, stop and return failure.
o Step 3: Remove from the OPEN list the node n which has the lowest value of h(n), and place it in the CLOSED list.
o Step 4: Expand node n and generate its successors.
o Step 5: Check each successor of node n to see whether any of them is a goal node. If any successor is a goal node, return success and terminate the search; else proceed to Step 6.
o Step 6: For each successor node, the algorithm evaluates the function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.
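
A minimal Python sketch of the steps above, assuming the search space is given as a dictionary of successors and a table of heuristic values. The graph and h values used here are assumptions chosen to reproduce the traversal order of the worked example further below; they are not taken from the original table.

import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with the lowest h(n)."""
    open_list = [(h[start], start, [start])]    # priority queue ordered by h(n)
    closed = set()
    while open_list:                            # Step 2: stop on an empty OPEN list
        _, n, path = heapq.heappop(open_list)   # Step 3: node with lowest h(n)
        closed.add(n)
        for succ in graph.get(n, []):           # Step 4: generate successors
            if succ == goal:                    # Step 5: goal test
                return path + [succ]
            if succ not in closed and all(succ != s for _, s, _ in open_list):
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))   # Step 6
    return None                                 # failure

# Illustrative graph and heuristic values (assumed, not from the original table):
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'G': 0, 'I': 9}
print(greedy_best_first_search(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']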

Advantages:

o Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
o This algorithm is more efficient than the BFS and DFS algorithms.

Disadvantages:

o It can behave as an unguided depth-first search in the worst case.

o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.

Example:
Consider the below search problem, which we will traverse using greedy best-first search. At each
iteration, each node is expanded using the evaluation function f(n) = h(n), whose values are given in
the below table.

In this search example, we use two lists, the OPEN and CLOSED lists. Following are the iterations for
traversing the above example.

Expand node S and put it in the CLOSED list:

Initialization: Open [A, B], Closed [S]


Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]


: Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]


: Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S----> B----->F----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m
is the maximum depth of the search space and b is the branching factor.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: Greedy best-first search is not optimal.

2.) A* Search Algorithm:


A* search is the most commonly known form of best-first search. It uses the heuristic function h(n)
together with the cost to reach node n from the start state, g(n). It combines the features of UCS and
greedy best-first search, and thereby solves the problem efficiently. The A* search algorithm finds the
shortest path through the search space using the heuristic function. This search algorithm expands
fewer nodes of the search tree and provides an optimal result faster. The A* algorithm is similar to
UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we
can combine both costs as follows, and this sum is called the fitness number:

1. f(n) = g(n) + h(n)

 At each point in the search space, only the node which has the lowest value of f(n) is expanded, and
the algorithm terminates when the goal node is found.

Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty or not; if the list is empty, return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function
(g + h). If node n is the goal node, return success and stop; otherwise:

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each
successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the
evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back
pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
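
A minimal Python sketch of these steps, using a priority queue ordered by f(n) = g(n) + h(n). The weighted graph and heuristic values below are assumptions chosen to be consistent with the f(n) values of the worked example further below; the graph itself is invented for illustration.

import heapq

def a_star_search(graph, h, start, goal):
    """A* search: expand the OPEN node with the smallest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]       # entries are (f, g, node, path)
    best_g = {start: 0}                               # cheapest known cost to each node
    while open_list:                                  # Step 2: stop on an empty OPEN list
        f, g, n, path = heapq.heappop(open_list)      # Step 3: smallest g + h
        if n == goal:
            return path, g                            # optimal path and its cost
        for succ, cost in graph.get(n, []):           # Step 4: expand n
            new_g = g + cost
            if succ not in best_g or new_g < best_g[succ]:    # Step 5: keep the cheaper path
                best_g[succ] = new_g
                heapq.heappush(open_list, (new_g + h[succ], new_g, succ, path + [succ]))
    return None, float('inf')                         # failure

# Illustrative weighted graph and heuristic values (assumed for this sketch):
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star_search(graph, h, 'S', 'G'))   # (['S', 'A', 'C', 'G'], 6)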

Advantages:

o The A* search algorithm performs better than other search algorithms.

o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:

o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it
is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all
states is given in the below table, so we will calculate f(n) for each state using the formula
f(n) = g(n) + h(n), where g(n) is the cost to reach the node from the start state.
Here we will use the OPEN and CLOSED lists.

Solution:

Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}


Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 will give the final result: S--->A--->C--->G, which is the optimal path with cost 6.

Points to remember:

o The A* algorithm returns the path which is found first, and it does not search for all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the
optimal solution.

Complete: The A* algorithm is complete as long as:

o The branching factor is finite.

o The cost of every action is fixed.

Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

o Admissible: The first condition required for optimality is that h(n) should be an admissible
heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: The second required condition, consistency, is needed only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function,
and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity
is O(b^d), where b is the branching factor.

Constraint Satisfaction Search


 A constraint satisfaction search does not refer to any specific search algorithm but to a layer of complexity added to existing algorithms that limits the possible solution set. Heuristics and acquired knowledge can be combined to produce the desired result. A constraint satisfaction problem (CSP) is a special kind of search problem in which states are defined by the values of a set of variables, and the goal state specifies a set of constraints that those values must obey. There are many problems in AI in which the goal state is not specified in the problem and has to be discovered according to some specific constraints. Examples of constraint satisfaction search include design problems, labelling graphs, robot path planning, cryptarithmetic problems, etc.
 The search space of CSPs is often exponential. Therefore a number of different approaches have been proposed to reduce the search space and find a feasible solution in a reasonable time. Based on how the search space is explored and on variable-selection heuristics, different algorithms can be developed for a CSP. These algorithms can be divided into two major categories: complete and incomplete algorithms.

 Complete algorithms seek a solution or all solutions of a CSP, or they try to prove that no solution exists. They fall into categories such as constraint propagation techniques, which try to eliminate values that are inconsistent with some constraint, and systematic search techniques, which explore the whole search space systematically. Incomplete search methods, on the other hand, do not explore the whole search space. They search the space either non-systematically or in a systematic manner but with a limit on some resource.

 They may not provide a solution, but their computational time is reasonably reduced. They cannot be applied to find all solutions or to prove that no solution exists. Let us look at an algorithm to solve a constraint satisfaction problem.
Algorithm:
1) Open all objects that must be assigned values in a complete solution.
2) Repeat until all objects are assigned valid values.
3) Select an object and strengthen as much as possible the set of constraints that apply to that object.
4) If this set of constraints is different from the previous set, then open all objects that share any of
these constraints. Remove the selected object.
5) If the union of the constraints discovered above defines a solution, return the solution.
6) If the union of the constraints discovered above defines a contradiction, return failure.
7) Make a guess in order to proceed. Repeat until a solution is found.
8) Select an object whose value is not yet assigned and try to strengthen its constraints.
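
The procedure above is abstract. As a concrete but much simpler sketch of constraint satisfaction as search, the Python code below solves a tiny map-colouring CSP by backtracking: it guesses a value for an unassigned variable, checks the constraints, and undoes the guess on a contradiction. The variables, domains, and constraints are assumptions invented for this illustration.

def backtracking_search(variables, domains, constraints, assignment=None):
    """Assign a value to every variable so that all binary constraints hold."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):         # every object has a valid value
        return assignment
    var = next(v for v in variables if v not in assignment)   # pick an unassigned object
    for value in domains[var]:
        consistent = all(constraints(var, value, other, assignment[other])
                         for other in assignment)
        if consistent:                             # the guess does not contradict anything
            assignment[var] = value
            result = backtracking_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]                    # undo the guess and try another value
    return None                                    # contradiction: no value works

# Illustrative map-colouring CSP (assumed): adjacent regions need different colours.
variables = ['WA', 'NT', 'SA', 'Q']
domains = {v: ['red', 'green', 'blue'] for v in variables}
adjacent = {('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'), ('SA', 'Q')}

def constraints(a, val_a, b, val_b):
    """Constraint: regions that share a border must not share a colour."""
    if (a, b) in adjacent or (b, a) in adjacent:
        return val_a != val_b
    return True

print(backtracking_search(variables, domains, constraints))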

The Repeated States


Avoiding Repeated States
When we are finding a solution by searching a state-space tree, some states can be explored
repeatedly. This increases the total cost, hence we need to avoid repeated states.
In some problems we need to explore the repeated states again, as they may lead to the goal state.
For example:
Problems where the actions are reversible, such as route-finding problems and sliding-block puzzles.
The search trees for these problems are infinite, but if we cut off some repeated states, we can
generate only the portion of the tree that maps onto the state-space graph.
In some problems repeated states are not required to be explored again, as these states will surely
not lead to the goal state.
In a state-space graph we can drop multiple paths and construct a tree in which there is only one
path to each state.
For example -
Consider the 8-queens problem. Here each state can be reached through only one path.
If we formulate the 8-queens problem such that a queen can be placed in any column, then each
state with n queens can be reached by n! different paths.

Serious Problems Caused by Repeated States


1. Because of repeated states, a solvable problem can become unsolvable if the algorithm is unable
to detect repeated states.
2. Because of repeated states, looping paths can be generated, which can lead to an infinite search;
this is not practical.
3. Because of repeated states, memory is unnecessarily wasted, as the same state is maintained
multiple times.
In the search tree shown in Fig. 3.6.3, A is the root node. On a grid each state has four successors,
therefore the tree including repeated states has 4^d leaves. But only about 2d^2 distinct states lie
within d steps of any given state.
Note: If a depth limit is specified for the state-space tree, then it is easy to reduce repeated states,
which leads to an exponential reduction in search cost.

Algorithms that can Avoid Repeated States (Algorithms that forget their
history are sure to repeat the states.)
 If an algorithm remembers every state that it has visited, then it can be viewed as exploring the
state-space graph directly.
 We can devise a new algorithm called the graph-search algorithm, which is more efficient than the
earlier tree-search algorithm. A graph-search algorithm maintains two data structures, a closed list
and an open list, to avoid the exploration of nodes which have already been explored (i.e. to avoid
repeated states).
 Closed list: A closed list is a data structure maintained by the algorithm which stores every expanded
node. The algorithm discards the current node if it matches a node on the closed list.
 Open list: An open list is a data structure maintained by the algorithm which stores the fringe of
unexpanded nodes.
Graph-Search Algorithm
[Data structures required]:
- Node n
  - state description
  - parent (may be a backpointer) (if needed)
  - operator used to generate n (optional)
  - depth of n (optional)
  - path cost from s to n (if available)
- Open list
  • initialization: {s}
  • node insertion/removal depends on the specific search strategy
- Closed list
  • initialization: { }
  • organized by backpointers to construct a solution path
[Algorithm]:
open := {s};
closed := {};
repeat
    n := select(open);    /* select one node from open for expansion */
    if n is a goal
        then exit with success;    /* delayed goal testing */
    expand(n);
    /* generate all children of n, put the newly generated nodes in open
       (checking for duplicates), and put n in closed (checking for duplicates) */
until open = {};
exit with failure;
 For the above algorithm the worst-case time and space requirements are proportional to the size of
the state space. This may be much smaller than O(b^d).
 A repeated state is detected when the algorithm has found two paths to the same state. The newly
discovered path is discarded even if it is shorter than the original one. Hence graph search can miss
an optimal solution, because it can discard the path that leads to the optimal state.
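
A minimal Python sketch of this graph-search scheme. A FIFO open list is assumed for the selection strategy (the pseudocode above leaves it open), and the closed set is what prevents repeated states from being expanded twice; the grid successor function is invented for the illustration.

from collections import deque

def graph_search(successors, start, goal_test):
    """Graph search: a closed set stops repeated states from being re-expanded."""
    open_list = deque([(start, [start])])     # fringe of unexpanded nodes
    closed = set()                            # every node that has been expanded
    while open_list:
        n, path = open_list.popleft()         # FIFO selection (one possible strategy)
        if goal_test(n):
            return path                       # delayed goal testing
        if n in closed:
            continue                          # repeated state: discard this node
        closed.add(n)
        for succ in successors(n):
            if succ not in closed:
                open_list.append((succ, path + [succ]))
    return None                               # open list exhausted: failure

# Illustration on a grid where actions are reversible, so repeats are common:
def grid_successors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# Finds a shortest path from (0, 0) to (2, 1), e.g. [(0, 0), (1, 0), (2, 0), (2, 1)].
print(graph_search(grid_successors, (0, 0), lambda c: c == (2, 1)))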

Adversarial Search
Adversarial search is a search in which we examine the problems that arise when we try to plan ahead
in a world where other agents are planning against us.

o In previous topics, we have studied search strategies which are associated with only a single agent
that aims to find a solution, often expressed in the form of a sequence of actions.
o But there might be situations where more than one agent is searching for the solution in the same
search space; this situation usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each agent
is an opponent of the other agents and plays against them. Each agent needs to consider the actions
of the other agents and the effect of those actions on its own performance.
o So, searches in which two or more players with conflicting goals are trying to explore the same
search space for the solution are called adversarial searches, often known as games.
o Games are modeled as a search problem together with a heuristic evaluation function, and these are
the two main factors which help to model and solve games in AI.

Types of Games in AI:


                          Deterministic                      Chance moves

Perfect information       Chess, Checkers, Go, Othello       Backgammon, Monopoly

Imperfect information     Battleship, Blind Tic-Tac-Toe      Bridge, Poker, Scrabble, Nuclear war

o Perfect information: A game with perfect information is one in which agents can see the complete
board. The agents have all the information about the game, and they can see each other's moves as
well. Examples are Chess, Checkers, Go, etc.
o Imperfect information: If in a game the agents do not have all the information about the game and are
not aware of what is going on, such games are called games with imperfect information, for example
Blind Tic-Tac-Toe, Battleship, Bridge, etc.
o Deterministic games: Deterministic games are those which follow a strict pattern and set of rules, and
there is no randomness associated with them. Examples are Chess, Checkers, Go, Tic-Tac-Toe, etc.
o Non-deterministic games: Non-deterministic games are those which have various unpredictable
events and a factor of chance or luck. This factor of chance or luck is introduced by either dice or
cards. These games are random, and each action's response is not fixed. Such games are also called
stochastic games.
Example: Backgammon, Monopoly, Poker, etc.

Note: In this topic, we will discuss deterministic, fully observable, zero-sum games in which the agents
act alternately.

Zero-Sum Game
o Zero-sum games are adversarial searches which involve pure competition.
o In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of
utility of the other agent.
o One player of the game tries to maximize one single value, while the other player tries to minimize it.
o Each move by one player in the game is called a ply.
o Chess and Tic-Tac-Toe are examples of zero-sum games.

Zero-sum game: Embedded thinking


The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out:

o What to do.
o How to decide on the move.
o He also needs to think about his opponent.
o The opponent, likewise, thinks about what to do.

Each of the players is trying to find out the response of his opponent to their actions. This requires
embedded thinking or backward reasoning to solve game problems in AI.

Formalization of the problem:


A game can be defined as a type of search in AI which can be formalized with the following
elements:

o Initial state: It specifies how the game is set up at the start.

o Player(s): It specifies which player has the move in state s.
o Actions(s): It returns the set of legal moves in state s.
o Result(s, a): It is the transition model, which specifies the result of a move a in state s.
o Terminal-Test(s): The terminal test is true if the game is over, otherwise it is false. The states where
the game ends are called terminal states.
o Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s
for player p. It is also called the payoff function. For Chess, the outcomes are a win, a loss, or a draw,
with payoff values +1, 0, and ½. For Tic-Tac-Toe, the utility values are +1, -1, and 0.
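
These elements can be written down directly as an interface. The sketch below is a hypothetical Python skeleton (the class and method names are chosen here, not taken from the text) that a concrete game such as Tic-Tac-Toe would fill in.

class Game:
    """Skeleton of a game formalized by the elements listed above."""

    def initial_state(self):
        """How the game is set up at the start."""
        raise NotImplementedError

    def player(self, s):
        """Which player has the move in state s (e.g. 'MAX' or 'MIN')."""
        raise NotImplementedError

    def actions(self, s):
        """The set of legal moves in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """Transition model: the state that results from move a in state s."""
        raise NotImplementedError

    def terminal_test(self, s):
        """True if the game is over in state s, otherwise False."""
        raise NotImplementedError

    def utility(self, s, p):
        """Payoff for player p in terminal state s (e.g. +1, -1, 0 for Tic-Tac-Toe)."""
        raise NotImplementedError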

Game tree:
A game tree is a tree in which the nodes are game states and the edges are the moves made by the
players. A game tree involves the initial state, the actions function, and the result function.

Example: Tic-Tac-Toe game tree:

The following figure shows part of the game tree for Tic-Tac-Toe. Following are some key points of
the game:

o There are two players, MAX and MIN.

o The players take alternate turns, starting with MAX.
o MAX maximizes the result of the game tree.
o MIN minimizes the result.
Example explanation:

o From the initial state, MAX has 9 possible moves, as he starts first. MAX places x and MIN places o,
and both players play alternately until we reach a leaf node where one player has three in a row or
all squares are filled.
o Both players compute the minimax value of each node, which is the best achievable utility against an
optimal adversary.
o Suppose both players know Tic-Tac-Toe well and play their best. Each player does his best to prevent
the other from winning; MIN acts against MAX in the game.
o So in the game tree, we have a layer for MAX and a layer for MIN, and each layer is called a ply. MAX
places x, then MIN places o to prevent MAX from winning, and this game continues until a terminal
node is reached.
o In the end either MIN wins, MAX wins, or it is a draw. This game tree is the whole search space of
possibilities when MIN and MAX play Tic-Tac-Toe taking turns alternately.

Hence adversarial search with the minimax procedure works as follows:

o It aims to find the optimal strategy for MAX to win the game.
o It follows the approach of depth-first search.
o In the game tree, the optimal leaf node could appear at any depth of the tree.
o The minimax values are propagated up the tree once the terminal nodes are reached.

In a given game tree, the optimal strategy can be determined from the minimax value of each node,
which can be written as MINIMAX(n). MAX prefers to move to a state of maximum value and MIN
prefers to move to a state of minimum value, so:

MINIMAX(s) = UTILITY(s)                                     if TERMINAL-TEST(s)
MINIMAX(s) = max over actions a of MINIMAX(RESULT(s, a))    if it is MAX's move in s
MINIMAX(s) = min over actions a of MINIMAX(RESULT(s, a))    if it is MIN's move in s
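
A minimal recursive sketch of this minimax computation over a hand-made game tree; the tree and its leaf utilities are assumptions for the illustration, not a Tic-Tac-Toe tree.

def minimax(node, tree, utilities, is_max):
    """Return MINIMAX(n): the best achievable utility against an optimal adversary."""
    if node not in tree:                        # terminal node: return its utility
        return utilities[node]
    values = [minimax(child, tree, utilities, not is_max) for child in tree[node]]
    return max(values) if is_max else min(values)    # MAX maximizes, MIN minimizes

# A tiny hand-made game tree: MAX moves at the root, MIN at the next ply.
tree = {'A': ['B', 'C'], 'B': ['b1', 'b2'], 'C': ['c1', 'c2']}
utilities = {'b1': 3, 'b2': 12, 'c1': 2, 'c2': 8}

print(minimax('A', tree, utilities, is_max=True))   # 3: MAX picks B, then MIN picks b1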
