
Artificial Intelligence

UNIT-II

Problem Solving Agents:


 Intelligent agents are supposed to maximize their performance measure. Achieving this
is sometimes simplified if the agent can adopt a goal and aim at satisfying it.
 Goals help organize behaviour by limiting the objectives that the agent is trying to
achieve and hence the actions it needs to consider. Goal formulation, based on the
current situation and the agent’s performance measure, is the first step in problem
solving.
 Problem formulation is the process of deciding what actions and states to consider,
given a goal.
Well-defined problems and solutions
A problem can be defined formally by five components:
 The initial state that the agent starts in. For example, the initial state for our agent in
Romania might be described as In(Arad).
 A description of the possible actions available to the agent. Given a particular
state s, ACTIONS(s) returns the set of actions that can be executed in s.
 A description of what each action does; the formal name for this is the transition
model, specified by a function RESULT(s, a) that returns the state that results from
doing action a in state s.
 The goal test, which determines whether a given state is a goal state. Sometimes there
is an explicit set of possible goal states, and the test simply checks whether the given
state is one of them.
 A path cost function that assigns a numeric cost to each path. The problem-
solving agent chooses a cost function that reflects its own performance measure.
The agent follows a simple “formulate, search, execute” design, as shown in Figure 3.1.

EXAMPLE PROBLEMS
Vacuum world:
This can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent is
in one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2
= 8 possible world states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck.
Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no
effect. The complete state space is shown in Figure 3.3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.

The Figure 3.3 gives a clear idea.
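The formulation above can be sketched in code for the two-location world (the square names "A" and "B" and the function names are illustrative, not part of the original formulation):

```python
# A minimal sketch of the two-location vacuum world.
# A state is (agent_location, dirt), where dirt is a frozenset of dirty squares.
def result(state, action):
    """Transition model RESULT(s, a) for the vacuum world."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)          # moving Left in the leftmost square: no effect
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})  # sucking in a clean square: no effect
    return state

def goal_test(state):
    return not state[1]             # goal: no dirty squares remain

s = ("A", frozenset({"A", "B"}))
s = result(result(result(s, "Suck"), "Right"), "Suck")
print(s, goal_test(s))              # ('B', frozenset()) True
```

Each step costs 1, so the path cost of this three-action solution is 3.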

8-Puzzle Problem:
8-Puzzle problem consists of a 3×3 board with eight numbered tiles and a blank space. A tile
adjacent to the blank space can slide into the space. The object is to reach a specified goal state, such
as the one shown on the right of the figure. The standard formulation is as follows:
• States: A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be
reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space: Left,
Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting state;
• Goal test: This checks whether the state matches the goal configuration shown in Figure
3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The Figure 3.4 gives clear idea.
8-Queens Problem:
The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen
attacks any other. (A queen attacks any piece in the same row, column or diagonal.) Figure 3.5
shows an attempted solution that fails: the queen in the rightmost column is attacked by the
queen at the top left.
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
The Figure 3.5 gives clear idea

Route Finding Problem:


• States: Each state obviously includes a location (e.g., an airport) and the current time. Furthermore,
because the cost of an action (a flight segment) may depend on previous segments, their fare bases, and
their status as domestic or international, the state must record extra information about these “historical”
aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat class, leaving after the current time,
leaving enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight’s destination as the
current location and the flight’s arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

An example of a route finding problem is shown in the Figure 3.2


UNINFORMED SEARCH STRATEGIES:
This section covers several search strategies that come under the heading of uninformed
search (also called blind search).
The term means that the strategies have no additional information about states beyond that
provided in the problem definition.
All they can do is generate successors and distinguish a goal state from a non-goal state.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first
search.
o BFS algorithm starts searching from the root node of the tree and expands all successor
node at the current level before moving to nodes of next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search implemented using FIFO queue data structure.

Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, then BFS will provide the
minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
o It requires lots of memory, since each level of the tree must be saved into memory to
expand the next level.
o BFS needs lots of time if the solution is far away from the root node.

Example:

In the below tree structure, we have shown the traversal of the tree using the BFS algorithm
from the root node S to goal node K. The BFS algorithm traverses in layers, so it will follow
the path shown by the dotted arrow, and the traversed path will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K

Time Complexity: The time complexity of BFS is given by the number of nodes traversed
until the shallowest goal node, where d is the depth of the shallowest solution and b is the
branching factor:
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
Space Complexity: The space complexity of BFS is given by the memory size of the
frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
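The layer-by-layer expansion described above can be sketched with a FIFO queue (the graph below is an illustrative adjacency list, not the figure's tree):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search on an adjacency-list graph using a FIFO queue.
    Returns a path with the fewest steps, or None if no path exists."""
    frontier = deque([[start]])      # queue of partial paths
    explored = {start}
    while frontier:
        path = frontier.popleft()    # FIFO: shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in explored:
                explored.add(succ)
                frontier.append(path + [succ])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": ["G"], "D": ["G"]}
print(bfs(graph, "S", "G"))  # ['S', 'A', 'C', 'G']
```

Because nodes are expanded level by level, the first path returned is always a shallowest one.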

2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and follows each
path to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
Advantage:
o DFS requires much less memory, as it only needs to store the stack of nodes on the path
from the root node to the current node.
o It can take less time to reach the goal node than BFS (if it happens to traverse the
right path).
Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no guarantee of
finding the solution.
o DFS goes deep down into the search, and sometimes it may enter an infinite loop.

Example:

In the below search tree, we have shown the flow of depth-first search, and it will follow the
order as:

Root node--->Left node ----> right node.

It will start searching from root node S and traverse A, then B, then D and E; after traversing
E, it will backtrack, as E has no other successor and the goal node has not yet been found. After
backtracking it will traverse node C and then G, where it terminates, as it has found the goal node.

Completeness: DFS search algorithm is complete within finite state space as it will expand
every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed
by the algorithm. It is given by:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m is the maximum depth of any node, which can be much larger than d (the shallowest
solution depth).

Space Complexity: DFS needs to store only a single path from the root node, hence the space
complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Optimal: DFS search algorithm is non-optimal, as it may generate a large number of steps or
high cost to reach to the goal node.
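The stack-based traversal described above can be sketched as follows (the graph is an illustrative adjacency list; DFS commits to the leftmost branch first and backtracks when it dead-ends):

```python
def dfs(graph, start, goal):
    """Depth-first search using an explicit stack (LIFO); follows one path
    to its greatest depth before backtracking. Not guaranteed optimal."""
    stack = [[start]]
    while stack:
        path = stack.pop()           # LIFO: deepest path first
        node = path[-1]
        if node == goal:
            return path
        # push successors in reverse so the leftmost child is expanded first
        for succ in reversed(graph.get(node, [])):
            if succ not in path:     # avoid cycles along the current path
                stack.append(path + [succ])
    return None

graph = {"S": ["A", "C"], "A": ["B"], "B": [], "C": ["G"]}
print(dfs(graph, "S", "G"))  # ['S', 'C', 'G']
```

Note that the A branch is explored fully (and abandoned) before the C branch that contains the goal.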

3. Depth-Limited Search Algorithm:


A depth-limited search algorithm is similar to depth-first search with a predetermined depth
limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In
this algorithm, a node at the depth limit is treated as if it has no further successor nodes.
Depth-limited search can be terminated with two Conditions of failure:
o Standard failure value: It indicates that problem does not have any solution.
o Cutoff failure value: It defines no solution for the problem within a given depth limit.
Advantages:
Depth-limited search is Memory efficient.
Disadvantages:
o Depth-limited search also has a disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
Example:
Completeness: DLS is complete if the solution lies within the depth limit.
Time Complexity: The time complexity of DLS is O(b^ℓ).
Space Complexity: The space complexity of DLS is O(b×ℓ).
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also
not optimal, even if ℓ > d.
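The two failure conditions above can be sketched recursively; returning the string "cutoff" distinguishes "no solution within the limit" from the standard failure value None (the graph is illustrative):

```python
def dls(graph, node, goal, limit):
    """Recursive depth-limited search. Returns a path to the goal,
    'cutoff' if the depth limit was reached, or None (standard failure)."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"              # limit reached: treat as having no successors
    cutoff = False
    for succ in graph.get(node, []):
        result = dls(graph, succ, goal, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff else None

graph = {"S": ["A"], "A": ["G"]}
print(dls(graph, "S", "G", 1))  # prints cutoff (limit too shallow)
print(dls(graph, "S", "G", 2))  # ['S', 'A', 'G']
```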

4. Uniform-cost Search Algorithm:


Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This
algorithm comes into play when a different cost is available for each edge.
The primary goal of the uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost.
A uniform-cost search algorithm is implemented with a priority queue.
Uniform-cost search is equivalent to BFS if the path cost of all edges is the same.
Advantages:
o Uniform cost search is optimal because at every state the path with the least cost is
chosen.
Disadvantages:
o It does not care about the number of steps involve in searching and only concerned
about path cost. Due to which this algorithm may be stuck in an infinite loop.

Example:

Completeness:

Uniform-cost search is complete: if there is a solution, UCS will find it.

Time Complexity:
Let C* be the cost of the optimal solution, and ε the minimum cost of a single step toward the
goal node. Then the number of steps is at most C*/ε + 1; we add +1 because we start from
state 0 and end at C*/ε.

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Space Complexity:

By the same logic, the worst-case space complexity of uniform-cost search
is O(b^(1 + ⌊C*/ε⌋)).

Optimal:

Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
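The lowest-cumulative-cost expansion described above can be sketched with a priority queue (the weighted graph below is illustrative; note UCS correctly prefers the cheap two-step route over the expensive direct edge):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest cumulative path cost, using a priority queue (heap)."""
    frontier = [(0, start, [start])]        # (path_cost, node, path)
    best = {start: 0}                       # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for succ, step in graph.get(node, []):
            new_cost = cost + step
            if succ not in best or new_cost < best[succ]:
                best[succ] = new_cost
                heapq.heappush(frontier, (new_cost, succ, path + [succ]))
    return None

graph = {"S": [("A", 1), ("G", 10)], "A": [("G", 3)]}
print(ucs(graph, "S", "G"))  # (4, ['S', 'A', 'G'])
```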

5. Iterative deepening depth-first Search:


 The iterative deepening algorithm is a combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit and does it by gradually increasing the
limit until a goal is found.
 This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.
 This search algorithm combines the completeness of breadth-first search with
depth-first search's memory efficiency.
 The iterative deepening algorithm is a useful uninformed search when the search space
is large and the depth of the goal node is unknown.
Advantages:

o It combines the benefits of the BFS and DFS search algorithms in terms of fast search
and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example:

The following tree structure shows iterative deepening depth-first search. The IDDFS
algorithm performs several iterations until it finds the goal node. The iterations performed by
the algorithm are given as:

Completeness:
This algorithm is complete if the branching factor is finite.
Time Complexity:
Suppose b is the branching factor and d is the depth; then the worst-case time complexity
is O(b^d).
Space Complexity: The space complexity of IDDFS is O(bd).
Optimal: IDDFS is optimal if path cost is a non-decreasing function of the depth of
the node.
1st Iteration: A
2nd Iteration: A, B, C
3rd Iteration: A, B, D, E, C, F, G
4th Iteration: A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
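The repeated deepening described above can be sketched by wrapping a depth-limited search in a loop over increasing limits (the tree below is an illustrative adjacency list loosely modeled on the example's node names):

```python
def iddfs(graph, start, goal, max_depth=10):
    """Iterative deepening DFS: repeat depth-limited search with an
    increasing limit until the goal is found."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None              # depth limit reached on this branch
        for succ in graph.get(node, []):
            found = dls(succ, limit - 1)
            if found:
                return [node] + found
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit)   # re-does shallow work each iteration
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "F": ["K"]}
print(iddfs(graph, "A", "K"))  # ['A', 'C', 'F', 'K']
```

Re-expanding the upper levels on every iteration is the drawback noted above, but since most nodes of a tree live at the deepest level, the repeated work costs only a constant factor.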

6. Bidirectional Search Algorithm:

 The bidirectional search algorithm runs two simultaneous searches, one from the initial
state, called the forward search, and the other from the goal node, called the backward
search, to find the goal node.
 Bidirectional search replaces one single search graph with two small subgraphs, in
which one starts the search from the initial vertex and the other starts from the goal vertex.
 The search stops when these two graphs intersect each other.
 Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:

o Bidirectional search is fast.


o Bidirectional search requires less memory

Disadvantages:

o Implementation of the bidirectional search tree is difficult.


o In bidirectional search, one should know the goal state in advance.
Example:In the below search tree, bidirectional search algorithm is applied. This algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward
direction and starts from goal node 16 in the backward direction.

The algorithm terminates at node 9 where two searches meet.

Completeness: Bidirectional Search is complete if we use BFS in both searches.


Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
Optimal: Bidirectional search is Optimal.
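The meet-in-the-middle idea above can be sketched with two BFS frontiers (a minimal sketch assuming an undirected graph; the numeric node names echo the 1-to-16 example):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Bidirectional BFS: run one breadth-first search forward from the
    start and one backward from the goal; stop where the frontiers meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other):
        node = frontier.popleft()
        for succ in graph.get(node, []):
            if succ not in parents:
                parents[succ] = node
                frontier.append(succ)
                if succ in other:        # the two searches have met
                    return succ
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet:
            path, n = [], meet
            while n is not None:         # walk back to the start
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:         # walk forward to the goal
                path.append(n)
                n = parents_b[n]
            return path
    return None

graph = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(bidirectional_search(graph, 1, 5))  # [1, 2, 4, 5]
```

Stitching the two parent chains together at the meeting node is the fiddly part alluded to in the disadvantages above.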

Pseudo codes for uninformed search strategies are:


1) BFS
2) Depth Limited Search

3) Iterative Deepening Search:

Some more examples for uninformed search strategies:


BFS:

Uniform cost search:


DFS:

Iterative Deepening Depth First Search:


Question. Which solution would BFS find to move from node S to node G if run on the graph
below?

Solution.

Path: S -> D -> G


Question. Which solution would DFS find to move from node S to node G if run on the graph
above?

Path: S -> A -> B -> C -> G


Comparing uninformed search strategies
Figure 3.21 compares search strategies in terms of the four evaluation criteria
INFORMED (HEURISTIC) SEARCH STRATEGIES
 An informed search strategy—one that uses problem-specific knowledge beyond the
definition of the problem itself—can find solutions more efficiently than can an
uninformed strategy.
 Informed search algorithms contain additional knowledge such as how far we are from
the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents
explore less of the search space and find the goal node more efficiently.
 Informed search algorithms are more useful for large search spaces. Informed search
uses the idea of a heuristic, so it is also called heuristic search.

1.) Best-first Search Algorithm (Greedy Search):

The greedy best-first search algorithm always selects the path which appears best at that
moment. It combines aspects of depth-first search and breadth-first search, using a heuristic
function to guide the search. Best-first search allows us to take the advantages of both
algorithms.

With the help of best-first search, at each step we can choose the most promising node. In the
greedy best-first search algorithm, we expand the node which is closest to the goal node, where
closeness is estimated by the heuristic function, i.e. f(n) = h(n), with
h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented with a priority queue.

Best first search algorithm:

o Step 1: Place the starting node into the OPEN list.


o Step 2: If the OPEN list is empty, Stop and return failure.
o Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and
place it in the CLOSED list.
o Step 4: Expand the node n, and generate the successors of node n.
o Step 5: Check each successor of node n, and find whether any node is a goal node or
not. If any successor node is goal node, then return success and terminate the search,
else proceed to Step 6.
o Step 6: For each successor node, the algorithm computes the evaluation function f(n) and
then checks whether the node is already in either the OPEN or CLOSED list. If the node
is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.

Advantages:

o Best first search can switch between BFS and DFS by gaining the advantages of both
the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:

o It can behave as an unguided depth-first search in the worst case scenario.


o It can get stuck in a loop as DFS.
o This algorithm is not optimal.

Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
             Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
             Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S----> B----->F----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m),
where m is the maximum depth of the search space.
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m).
Complete: Greedy best-first search is incomplete, even if the given state space is finite.
Optimal: The greedy best-first search algorithm is not optimal.
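The h(n)-ordered expansion described above can be sketched as follows (the graph and heuristic values are illustrative, chosen so the trace mirrors the S → B → F → G solution path):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the lowest
    heuristic estimate h(n); path cost g(n) is ignored, so the result
    need not be optimal."""
    frontier = [(h[start], start, [start])]   # priority queue keyed on h(n)
    closed = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"]}
h = {"S": 13, "A": 12, "B": 4, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'B', 'F', 'G']
```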

2.) A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses heuristic function
h(n), and cost to reach the node n from the start state g(n).

It has combined features of UCS and greedy best-first search, by which it solves the problem
efficiently. The A* search algorithm finds the shortest path through the search space using the
heuristic function.

This search algorithm expands a smaller search tree and provides an optimal result faster. The
A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

Algorithm of A* search:
Step1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure and stops.
Step 3: Select the node from the OPEN list which has the smallest value of evaluation function
(g+h), if node n is goal node then return success and stop, otherwise
Step 4: Expand node n and generate all of its successors, and put n into the closed list. For each
successor n', check whether n' is already in the OPEN or CLOSED list, if not then compute
evaluation function for n' and place into Open list.
Step 5: Else if node n' is already in OPEN and CLOSED, then it should be attached to the back
pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Advantages:
The A* search algorithm performs better than the other search algorithms discussed here.
A* search algorithm is optimal and complete.
This algorithm can solve very complex problems.
Disadvantages:
It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
A* search algorithm has some complexity issues.
The main drawback of A* is memory requirement as it keeps all generated nodes in the
memory, so it is not practical for various large-scale problems.
Example:
Solution:

Initialization: {(S, 5)}


Iteration1: {(S--> A, 4), (S-->G, 10)}
Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}
Iteration 4 will give the final result, as S--->A--->C--->G it provides the optimal path with cost
6.
Complete: A* algorithm is complete
Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:
Admissibility: The first condition required for optimality is that h(n) should be an admissible
heuristic for A* tree search. An admissible heuristic is optimistic in nature: it never
overestimates the true cost.
Consistency: The second condition, consistency, is required only for A* graph search.
If the heuristic function is admissible, then A* tree search will always find the least-cost path.
Time Complexity: The time complexity of A* search algorithm depends on heuristic function,
and the number of nodes expanded is exponential to the depth of solution d. So the time
complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of A* search algorithm is O(b^d)
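A* can be sketched by replacing the UCS priority with f(n) = g(n) + h(n) (the graph and admissible heuristic below are illustrative, not the figure's values):

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n), where g(n) is
    the cost so far and h(n) is an admissible heuristic estimate."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, step in graph.get(node, []):
            new_g = g + step
            if succ not in best_g or new_g < best_g[succ]:
                best_g[succ] = new_g
                heapq.heappush(frontier,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

graph = {"S": [("A", 1), ("G", 10)],
         "A": [("B", 2), ("C", 1)],
         "C": [("D", 4), ("G", 3)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(astar(graph, h, "S", "G"))  # (5, ['S', 'A', 'C', 'G'])
```

The direct S → G edge (cost 10) is never taken: its f-value stays above the f-value of the cheaper route found through A and C.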

Memory bounded Heuristic search or RBFS (Recursive Best First Search):


 Recursive best-first search (RBFS) is a simple recursive algorithm that
attempts to mimic the operation of standard best-first search, but using only linear
space.
 RBFS replaces the f-value of each node along the path with a backed-up value—the
best f-value of its children. In this way, RBFS remembers the f-value of the best leaf in
the forgotten subtree and can therefore decide whether it’s worth reexpanding the
subtree at some later time.
An Example: Figure 3.27 shows how RBFS reaches Goal node Bucharest.
Heuristic Functions in Artificial Intelligence:
Heuristics function: Heuristic is a function which is used in Informed Search, and it finds
the most promising path. It takes the current state of the agent as its input and produces the
estimation of how close agent is from the goal
A good heuristic function is judged by its efficiency: the more information it uses about the
problem, the better it guides the search, but the more processing time it may take.
Properties of a Heuristic search Algorithm
Use of heuristic function in a heuristic search algorithm leads to following properties of a
heuristic search algorithm:
 Admissible Condition: An algorithm is said to be admissible, if it returns an optimal
solution. If a heuristic returns an optimal solution then it is said to be an admissible
heuristic.
 Completeness: An algorithm is said to be complete, if it terminates with a solution (if
the solution exists).
 Dominance Property: If there are two admissible heuristic
algorithms A1 and A2 having h1 and h2 heuristic functions, then A1 is said to
dominate A2 if h1 is better than h2 for all the values of node n.
 Optimality Property: If an algorithm is complete, admissible,
and dominating other algorithms, it will be the best one and will definitely give an
optimal solution.
An Example:

A heuristic function for the 8-puzzle problem is defined below:


h(n)=Number of tiles out of position.
So, there is a total of three tiles out of position, i.e., 6, 5 and 4 (do not count the empty tile
present in the goal state), i.e. h(n) = 3. Now, we need to minimize the value of h(n) to 0.
We can construct a state-space tree to minimize the h(n) value to 0, as shown below

The heuristic information can be related to the nature of the state, the cost of transforming from
one state to another, goal node characteristics, etc., and is expressed as a heuristic function.
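The misplaced-tiles heuristic defined above can be sketched as follows (states here are 9-tuples read row by row, with 0 for the blank; the example boards are illustrative, not the figure's):

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles out of position; the blank (0) is not
    counted. An admissible heuristic for the 8-puzzle."""
    return sum(1 for s, g in zip(state, goal)
               if s != 0 and s != g)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 0, 7, 8, 6)   # one tile (6) out of position
print(misplaced_tiles(state, goal))   # 1
```

The heuristic is admissible because every misplaced tile needs at least one move, so h(n) never overestimates the true solution cost.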
LOCAL SEARCH ALGORITHMS AND OPTIMIZATION PROBLEMS:
 If the path to the goal does not matter, we might consider a different class of algorithms,
ones that do not worry about paths at all.
 Local search algorithms operate using a single current node (rather than multiple
paths) and generally move only to neighbors of that node.

 Although local search algorithms are not systematic, they have two key advantages:
o they use very little memory—usually a constant amount; and
o they can often find reasonable solutions in large or infinite (continuous) state
spaces for which systematic algorithms are unsuitable.
Local search algorithms are useful for solving pure optimization problems, in which the aim
is to find the best state according to an objective function.

1) Hill-Climbing Search:
o Hill climbing algorithm is a technique which is used for optimizing the
mathematical problems. One of the widely discussed examples of Hill climbing
algorithm is Traveling-salesman Problem in which we need to minimize the
distance travelled by the salesman.
o It is also called greedy local search, as it only looks to its good immediate
neighbor state and not beyond that. A node of the hill climbing algorithm has two
components: state and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph
as it only keeps a single current state.

The state-space landscape, shown in Figure 4.1, is a graphical representation of the terrain
explored by the hill-climbing algorithm.

Local Maximum: Local maximum is a state which is better than its neighbor states, but there
is also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of objective function.
Current state: It is a state in a landscape diagram where an agent is currently present.
Flat local maximum: It is a flat space in the landscape where all the neighbor states
of current states have the same value.
Shoulder: It is a plateau region which has an uphill edge.

Algorithm for Simple Hill Climbing:

Step 1: Evaluate the initial state, if it is goal state then return success and Stop.

o Step 2: Loop Until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check new state:
a. If it is goal state, then return success and quit.
b. Else if it is better than the current state then assign new state as a current state.
c. Else if not better than the current state, then return to step2.
o Step 5: Exit.

Example:

Path: S-A-C-I-M
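The loop in the steps above can be sketched as follows (a minimal sketch; the 1-D objective function is an illustrative example, not from the notes):

```python
def hill_climbing(value, neighbors, start):
    """Simple hill climbing: repeatedly move to the best neighbor while it
    improves on the current state; stops at a local (not necessarily
    global) maximum. No search tree is kept, only the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current           # no uphill move left: a maximum
        current = best

# Illustrative 1-D objective: maximize -(x - 3)^2 over the integers.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(value, neighbors, 0))  # 3
```

On this single-peaked landscape the algorithm always reaches the global maximum; on a landscape with several peaks it would stop at whichever local maximum it climbs first.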

2) Simulated Annealing:
 A hill-climbing algorithm which never makes a move towards a lower value is guaranteed
to be incomplete, because it can get stuck on a local maximum.
 If the algorithm instead applies a pure random walk, moving to a random successor, it
may be complete but not efficient.
 Simulated annealing is an algorithm which yields both efficiency and completeness.
 In metallurgy, annealing is the process of heating a metal or glass to a high
temperature and then cooling it gradually, which allows the material to reach a
low-energy crystalline state.
 The same idea is used in simulated annealing, in which the algorithm picks a random
move instead of picking the best move. If the random move improves the state, then it
follows the same path.
 Otherwise, the algorithm accepts the downhill move only with some probability less
than 1, a probability that decreases as the temperature is lowered.
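The accept-with-probability rule described above can be sketched as follows (a minimal sketch; the cooling schedule, parameter values, and toy objective are illustrative choices, not part of the notes):

```python
import math
import random

def simulated_annealing(value, neighbor, start, t0=1.0, cooling=0.95,
                        steps=1000):
    """Simulated annealing: pick a random move; always accept an
    improvement, and accept a downhill move with probability e^(dE/T),
    where the temperature T cools gradually."""
    current, t = start, t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t = max(t * cooling, 1e-9)   # geometric cooling schedule
    return current

random.seed(0)
value = lambda x: -(x - 3) ** 2      # same toy objective as hill climbing
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(value, neighbor, 0))  # converges to 3
```

Early on (high T) the walk explores freely; as T falls the downhill-acceptance probability shrinks toward zero and the search behaves like hill climbing.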

3) Genetic Algorithms:
A genetic algorithm (or GA) is a variant of stochastic beam search in which successor states
are generated by combining two parent states rather than by modifying a single state.

Example: 8-queens problem

Graphical Illustration:

GAs begin with a set of k randomly generated states, called the population. Each state, or
individual, is represented as a string over a finite alphabet.

The production of the next generation of states is shown in Figure 4.6(b)–(e). In (b), each
state is rated by the objective function, or (in GA terminology) the fitness function. A fitness
function should return higher values for better states.
In (c), two pairs are selected at random for reproduction, in accordance with the probabilities
in (b). Notice that one individual is selected twice and one not at all. For each pair to be mated,
a crossover point is chosen randomly from the positions in the string. In Figure 4.6, the
crossover points are after the third digit in the first pair and after the fifth digit in the second
pair.

In (d), the offspring themselves are created by crossing over the parent strings at the crossover
point. For example, the first child of the first pair gets the first three digits from the first parent
and the remaining digits from the second parent, whereas the second child gets the first three
digits from the second parent and the rest from the first parent.

Finally, in (e), each location is subject to random mutation with a small independent
probability. One digit was mutated in the first, third, and fourth offspring.

The algorithm is shown in Figure 4.8
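The selection–crossover–mutation cycle described above can be sketched for 8-queens (a compact illustrative sketch: states are tuples of row positions, one per column, and the population size, generation count, and mutation rate are arbitrary choices):

```python
import random

def fitness(state):
    """Number of non-attacking queen pairs; 28 means a full solution."""
    n, clashes = len(state), 0
    for i in range(n):
        for j in range(i + 1, n):
            if state[i] == state[j] or abs(state[i] - state[j]) == j - i:
                clashes += 1                 # same row or same diagonal
    return n * (n - 1) // 2 - clashes

def genetic_algorithm(pop_size=100, generations=200, p_mutate=0.1):
    """A small GA for 8-queens: fitness-proportional selection, one-point
    crossover, and random mutation, as in Figure 4.6."""
    random.seed(1)
    pop = [tuple(random.randrange(8) for _ in range(8))
           for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        if fitness(best) == 28:
            return best                      # found a solution
        weights = [fitness(s) for s in pop]
        new_pop = []
        for _ in range(pop_size):
            x, y = random.choices(pop, weights=weights, k=2)  # selection
            c = random.randrange(1, 8)                        # crossover point
            child = list(x[:c] + y[c:])
            if random.random() < p_mutate:                    # mutation
                child[random.randrange(8)] = random.randrange(8)
            new_pop.append(tuple(child))
        pop = new_pop
    return max(pop, key=fitness)
```

A known 8-queens solution such as (0, 4, 7, 5, 2, 6, 1, 3) scores the maximum fitness of 28; the GA typically drives the population's best fitness close to that value within a few hundred generations.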

Constraint Satisfaction Problem:


Constraint satisfaction is a technique where a problem is solved when its values satisfy certain
constraints or rules of the problem. Such a technique leads to a deeper understanding of
the problem structure as well as its complexity.
A constraint satisfaction problem consists of three components, X,D, and C:
X is a set of variables, {X1, . . . ,Xn}.
D is a set of domains, {D1, . . . ,Dn}, one for each variable.
C is a set of constraints that specify allowable combinations of values.
Each domain Di consists of a set of allowable values, {v1, …, vk}, for variable Xi. Each
constraint Ci consists of a pair ⟨scope, rel⟩, where scope is a tuple of variables that participate
in the constraint and rel is a relation that defines the values that those variables can take on.
Example Problem: Map Coloring
Consider a map of Australia showing each of its states and territories (Figure 6.1(a)).

We are given the task of coloring each region either red, green, or blue in such a way that no
neighboring regions have the same color. To formulate this as a CSP, we define the variables
to be the regions.

A constraint graph is shown in Figure 6.1(b).

Cryptarithmetic:
Another example is provided by cryptarithmetic puzzles. (See Figure 6.2(a).) Each letter in a
cryptarithmetic puzzle represents a different digit. For the case in Figure 6.2(a), this would be
represented as the global constraint Alldiff (F, T, U, W, R, O). The addition constraints on the
four columns of the puzzle can be written as the following n-ary constraints:
where C10, C100, and C1000 are auxiliary variables representing the digit carried over into
the tens, hundreds, or thousands column.
These constraints can be represented in a constraint hypergraph, such as the one shown in Figure
6.2(b).
A hypergraph consists of ordinary nodes (the circles in the figure) and hypernodes (the
squares), which represent n-ary constraints.

Some more examples:

Greedy Best First Search:


A* Search:
RBFS:
