1.1 Intelligent System: AI - UNIT - 1
Understanding of AI
AI requires an understanding of related terms such as intelligence, knowledge, reasoning, cognition, and learning, along with a number of other computer-related terms. Its successes rest on complex problems for which general principles do not help much, though there are a few useful general principles.
The first view of AI is that it is about duplicating what the human brain does.
The second view is that AI is about duplicating what the human brain should do, that is, doing things logically or rationally.
ELIZA
Its main characteristics are briefly mentioned here:
Simulation of Intelligence
Quality of response
Coherence
Semantics
ELIZA is a program which makes conversation with the user in English, much like Siri on iPhones.
Categorization of intelligent systems
In order to design intelligent systems, it is important to categorize them. There are four possible categories of such systems.
1. Systems that think like humans: This is an area of cognitive science. Stimuli are converted into mental representations, and cognitive processes manipulate these representations to build new representations that are used to generate actions.
2. Systems that act like humans: This requires that the overall behavior of the system be human-like, which could be verified by observation. The Turing test is an example.
3. Systems that think rationally: Thinking rationally or logically; logical formulae and theories are used for synthesizing outcomes.
4. Systems that act rationally: This is the final category of intelligent systems, where by rational behavior we mean doing the right thing.
Components of an AI program
Any AI program should have a knowledge base and navigational capability, which contains a control strategy and an inference mechanism.
Knowledge base: The program should be learning in nature and update its knowledge accordingly. The knowledge base consists of facts and rules.
Control strategy: It determines which rule is to be applied. Some heuristics or rules of thumb based on the problem domain may be used.
Inference mechanism: It requires searching through the knowledge base and deriving new knowledge from existing knowledge with the help of inference rules.
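The interplay of the three components can be sketched in a few lines. This is a minimal illustration only: the facts and rules below (borrowed from the humid/rain example used later in these notes) are assumptions, and a first-match rule order stands in for a real control strategy.

```python
# Knowledge base: facts plus if-then rules (premises -> conclusion).
facts = {"humid"}
rules = [
    ({"humid"}, "will_rain"),           # if it is humid, it will rain
    ({"will_rain"}, "carry_umbrella"),  # if it will rain, carry an umbrella
]

def forward_chain(facts, rules):
    """Inference mechanism: fire any rule whose premises are all known facts,
    repeating until no new knowledge can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:   # control strategy: first match wins
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)        # derive new knowledge
                changed = True
    return facts

print(sorted(forward_chain(facts, rules)))
```

Starting from the single fact "humid", the two rules fire in turn and the knowledge base grows to include the derived facts.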
1.2 Foundation of AI
Commonly used techniques and theories are rule-based systems, fuzzy logic, neural networks, decision theory, statistics, probability theory, genetic algorithms, etc. Since AI is interdisciplinary in nature, it has its foundation in various fields such as:
Mathematics: AI systems use formal logical methods and Boolean logic, analysis of the limits to what can be computed, probability theory and uncertainty (which form the basis for most modern approaches to AI), fuzzy logic, etc.
Neuroscience: This science of medicine helps in studying the functions of the brain. Researchers are working to understand how to build a mechanical brain; such systems will require computing, mapping, and interconnections to a large extent.
Control theory: Machines can modify their behavior in response to the environment.
Data mining
Expert problem solving
Neural networks, AI tools
Web agents
1.4 Applications
AI finds applications in almost all areas of real life. Broadly speaking, business, engineering, medicine, education and manufacturing are the main areas.
Business: Financial strategies, give advice
Engineering: Check designs, offer suggestions to create new products, expert systems for all
engineering applications
Manufacturing: Assembly, inspection and maintenance
Medicine: Monitoring, diagnosing and prescribing
Education: Teaching
Fraud detection
Object identification
Space shuttle scheduling
Information retrieval
2.1 General Problem Solving
Production system:
A production system is one of the formulations that helps AI programs carry out the search process more conveniently. The system comprises the start (initial) state and the goal state of the problem, along with one or more databases consisting of suitable and necessary information for the particular task. Knowledge representation schemes are used to structure information in the database. A production rule has a left side that determines the applicability of the rule and a right side that describes the action to be performed when the rule is applied.
Example (water jug problem): given a 5-gallon jug and a 3-gallon jug, measure out exactly 4 gallons. One solution path is traced below.

STEP NO | RULE APPLIED | 5G JUG | 3G JUG
START   | -            | 0      | 0
1       | 3            | 0      | 3
2       | 5            | 3      | 0
3       | 3            | 3      | 3
4       | 7            | 5      | 1
5       | 2            | 0      | 1
6       | 5            | 1      | 0
7       | 3            | 1      | 3
8       | 5            | 4      | 0
GOAL    | -            | 4      | -
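The production-system view of this puzzle can be sketched as a state-space search: states are (5G jug, 3G jug) pairs, and the production rules are the usual fill, empty, and pour operations. The rule numbering of the table is not reproduced here, and breadth-first search is used as an assumed control strategy.

```python
from collections import deque

def successors(state):
    """Production rules: fill a jug, empty a jug, or pour one into the other."""
    x, y = state  # x: 5-gallon jug, y: 3-gallon jug
    return {
        (5, y), (x, 3),                          # fill a jug
        (0, y), (x, 0),                          # empty a jug
        (min(5, x + y), max(0, y - (5 - x))),    # pour 3G into 5G
        (max(0, x - (3 - y)), min(3, x + y)),    # pour 5G into 3G
    }

def solve(start=(0, 0), goal_amount=4):
    """BFS as control strategy: find a state with goal_amount in the 5G jug."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state[0] == goal_amount:
            path = []
            while state is not None:     # reconstruct the solution path
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

print(solve())
```

Because BFS explores level by level, the path it returns uses the fewest rule applications, which may differ from the particular path traced in the table above.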
8 puzzle problem:
Problem statement: The 8-puzzle has a 3 x 3 grid with 8 tiles, numbered 1 to 8, randomly arranged on it, with one empty cell. At any point, an adjacent tile can move into the empty cell, creating a new empty cell. Solving this problem involves arranging the tiles so that we reach the goal state from the starting state.
One solution path is shown below (the blank cell is written as _, and each move names the direction in which the blank moves):

initial state
3 7 6
5 1 2
4 _ 8

up
3 7 6
5 _ 2
4 1 8

up
3 _ 6
5 7 2
4 1 8

left
_ 3 6
5 7 2
4 1 8

down
5 3 6
_ 7 2
4 1 8

right
5 3 6
7 _ 2
4 1 8   (goal state)
Control strategies:
The control strategy is one of the most important components of problem solving. It describes the order of application of the rules to the current state. A control strategy should cause motion towards the solution, and it should explore the solution space in a systematic manner. If we select a control strategy in which a rule is chosen randomly from the applicable rules, it definitely causes motion and will eventually lead to the solution, but it is not systematic.
Decomposability of a problem: Divide the problem into a set of independent smaller sub-problems, solve them, and combine the solutions to get the final solution. The process of dividing sub-problems continues until we get the smallest sub-problems, for which a small collection of specific rules is used. Divide and conquer is a commonly used method for solving such problems, and it can exploit a parallel processing environment.
Role of knowledge: Knowledge plays an important role in solving any problem. Knowledge could be in the form of rules and facts, which help in generating the search space for finding the solution.
Consistency of the knowledge base used in solving the problem: We should ensure that the knowledge base used to solve the problem is consistent; an inconsistent knowledge base will lead to wrong solutions. For example: if it is humid it will rain; if it is sunny then it is daytime.
Requirement of solutions: We should analyze the problem to determine whether the solution required is absolute or relative. We call a solution absolute if we have to find the exact solution, whereas it is relative if we need a reasonably good, approximate solution.
Depth-first search: DFS goes as deep as possible before backing up and trying alternatives. It works by always generating a descendant of the most recently expanded node until some depth limit is reached, and then backtracking to the next most recently expanded node to generate one of its descendants. DFS is memory efficient, as it only stores a single path from the root to a leaf node, along with the remaining unexpanded siblings for each node on the path.
The path is obtained from the list stored in CLOSED. The solution path is
(0,0)->(5,0)->(5,3)->(0,3)->(3,0)->(3,3)->(5,1)->(0,1)->(1,0)->(1,3)->(4,0)
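The DFS behaviour described above can be sketched on a small explicit graph. The graph below is an illustrative assumption; the recursion stack plays the role of the stored root-to-leaf path.

```python
# An explicit tree/graph as an adjacency list (illustrative assumption).
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": [], "G": [],
}

def dfs(node, goal, path=None):
    """Depth-first search: go deep along one path before trying alternatives."""
    path = (path or []) + [node]
    if node == goal:
        return path
    for child in graph[node]:             # descend from the most recently
        result = dfs(child, goal, path)   # expanded node
        if result:
            return result
    return None                           # back up and try a sibling

print(dfs("A", "G"))  # ['A', 'B', 'E', 'G']
```

Note that only the current path and the not-yet-tried siblings are held at any moment, which is exactly the memory advantage claimed above.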
Comparisons:
1. BFS is effective when the search tree has a low branching factor
2. BFS can work even in trees that are infinitely deep
3. BFS requires a lot of memory, as the number of nodes in a level of the tree increases
exponentially
4. BFS is superior when the goal exists in the upper right portion of search tree.
5. BFS gives optimal solution
6. DFS is effective when there are a few sub trees in the search tree that have only one
connection point to the rest of the states
7. DFS is best when the goal exists in the lower left portion of the search tree
8. DFS can be dangerous when a path close to the start but far from the goal has been
chosen
9. DFS is memory efficient, as only the path from the start to the current node is stored. Each
node should contain the state and its parent
10. DFS may not give optimal solutions.
DEPTH-FIRST ITERATIVE DEEPENING
Since depth-first iterative deepening expands all nodes at a given depth before expanding any nodes at a greater depth, it is guaranteed to find a shortest-path or optimal solution from the start state to the goal state. The working of the algorithm is shown in the figure below.
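The scheme can be sketched as a depth-limited DFS repeated with limits 0, 1, 2, ...; the graph below is an illustrative assumption.

```python
# An explicit graph as an adjacency list (illustrative assumption).
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": ["F"], "E": [], "F": [],
}

def depth_limited(node, goal, limit, path):
    """DFS that refuses to descend more than `limit` levels."""
    path.append(node)
    if node == goal:
        return True
    if limit > 0:
        for child in graph[node]:
            if depth_limited(child, goal, limit - 1, path):
                return True
    path.pop()
    return False

def iterative_deepening(start, goal, max_depth=10):
    # Expand all nodes at depth d before any node at depth d + 1,
    # so the shallowest goal is found first.
    for limit in range(max_depth + 1):
        path = []
        if depth_limited(start, goal, limit, path):
            return path
    return None

print(iterative_deepening("A", "E"))  # ['A', 'C', 'E']
```

Shallow levels are re-expanded on every iteration, but since the number of nodes grows exponentially with depth, the repeated work is a small fraction of the total.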
BIDIRECTIONAL SEARCH
Bidirectional search is a graph search algorithm that runs two simultaneous searches: one search moves forward from the start state, the other moves backward from the goal state, and the search stops when the two meet in the middle. It is useful for those problems which have a single start state and a single goal state. Iterative deepening can be applied to bidirectional search: for k = 1, 2, 3, ..., iteration k consists of generating states in the forward direction from the start state up to depth k using BFS, and searching backward from the goal state using DFS, to depth k and to depth k + 1, not storing states but simply matching against the stored states generated in the forward direction. The reason for this approach is that each of the searches has time complexity O(b^(d/2)).
The trace of finding the path from node 1 to node 16 using bidirectional search is given in the above figure. We can clearly see that the path obtained is: 1 2 6 11 14 16.
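A simple variant of the idea, with a BFS frontier grown from each end, can be sketched as follows. The undirected graph below is an illustrative assumption (not the 16-node graph of the figure).

```python
from collections import deque

# An undirected graph as an adjacency list (illustrative assumption).
graph = {
    1: [2, 3], 2: [1, 4, 5], 3: [1, 6], 4: [2], 5: [2, 7],
    6: [3, 7], 7: [5, 6, 8], 8: [7],
}

def bidirectional(start, goal):
    """Grow one BFS frontier from each end; stop when they meet."""
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd, bwd = deque([start]), deque([goal])

    def build(meet):
        left, n = [], meet
        while n is not None:                 # walk back to the start
            left.append(n); n = fwd_parent[n]
        left = left[::-1]
        n = bwd_parent[meet]
        while n is not None:                 # walk forward to the goal
            left.append(n); n = bwd_parent[n]
        return left

    while fwd and bwd:
        for queue, parent, other in ((fwd, fwd_parent, bwd_parent),
                                     (bwd, bwd_parent, fwd_parent)):
            node = queue.popleft()
            if node in other:                # the two searches meet
                return build(node)
            for nbr in graph[node]:
                if nbr not in parent:
                    parent[nbr] = node
                    queue.append(nbr)
    return None

print(bidirectional(1, 8))  # [1, 2, 5, 7, 8]
```

Each frontier only needs to reach roughly half the solution depth, which is where the O(b^(d/2)) saving comes from.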
In the branch and bound method, if g(X) = 1 for all operators, then it degenerates to simple BFS, which from the AI point of view is as bad as depth-first or breadth-first search. It can be improved by augmenting it with dynamic programming, that is, by deleting those partial paths which are redundant.
Hill climbing
It is an optimization technique that belongs to the family of local search. It is a relatively simple technique to implement, and so is a popular first choice. Hill climbing is used on problems that have many solutions, but where some solutions are better than others. The search proceeds through the tree in depth-first order, but the choices are ordered according to some heuristic value.
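The core loop of hill climbing can be sketched independently of any particular problem; the objective function and neighbour step below are illustrative assumptions. Note that this local search can get stuck on a local maximum.

```python
def hill_climb(start, value, neighbours):
    """Move to the best neighbour while it improves the heuristic value;
    stop at a (possibly only local) peak."""
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):
            return current          # no neighbour improves: a peak
        current = best

# Maximize f(x) = -(x - 3)^2 over integers, stepping by +/- 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, f, step))  # 3
```

Here each move is the greedy best among the neighbours, which matches the heuristic ordering of choices described above.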
Beam Search
Beam search is a heuristic search algorithm in which W number of best nodes at each level is
always expanded. It progresses level by level and moves downward only from the best W nodes
at each level. Beam search uses breadth-first search to build its search tree. At each level of the
tree, it generates all successors of the states at the current level and sorts them in order of increasing
heuristic cost. However, it only considers W states at each level; other nodes are
ignored. The best nodes are decided by the heuristic cost associated with each node. Here W is called the
width of the beam search. If B is the branching factor, there will be only W * B nodes under
consideration at any depth, but only W nodes will be selected. The smaller the beam width, the more
states are pruned. If W = 1, then it becomes hill climbing search, where the best node is always
chosen from the successor nodes. If the beam width is infinite, then no states are pruned and beam
search is identical to breadth-first search.
Output: Yes or No
Method:
• NODE = Root_node; FOUND = false;
• if NODE is the goal node, then FOUND = true, else find SUCCs of NODE, if any, with their estimated costs and store them in OPEN list;
• while (FOUND = false and able to proceed further) do
{
• sort OPEN list;
• select top W elements from OPEN list, put them in W_OPEN list, and empty OPEN list;
• for each NODE from W_OPEN list
{
• if NODE = goal state, then FOUND = true, else find SUCCs of NODE, if any, with their estimated costs and store them in OPEN list;
}
} /* end while */
• if FOUND = true, then return Yes, otherwise return No;
• Stop
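The method above can be sketched concretely. The explicit tree and the heuristic values assigned to its nodes are illustrative assumptions.

```python
import heapq

# An explicit tree and per-node heuristic costs (illustrative assumptions).
tree = {
    "A": ["B", "C", "D"],
    "B": ["E"], "C": ["F", "G"], "D": ["H"],
    "E": [], "F": [], "G": [], "H": [],
}
h = {"A": 5, "B": 3, "C": 2, "D": 4, "E": 6, "F": 0, "G": 1, "H": 2}

def beam_search(root, goal, W):
    """Level-by-level search keeping only the W cheapest nodes per level."""
    level = [root]
    while level:
        if goal in level:
            return True
        successors = [child for node in level for child in tree[node]]
        # keep only the W successors with the smallest heuristic cost
        level = heapq.nsmallest(W, successors, key=h.get)
    return False

print(beam_search("A", "F", W=2))  # True
```

With W = 2 the goal F survives the pruning; with W = 1 the search behaves like hill climbing and can miss parts of the tree entirely.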
A* Algorithm
It is a combination of the 'branch and bound' and 'best-first search' methods, combined with the dynamic programming principle. It uses a heuristic evaluation function, usually denoted by f(X), to determine the order in which the search visits nodes in the tree. The heuristic function for a node N is defined as follows:
f(N)=g(N)+h(N)
The function g is a measure of the cost of getting from the start node to the current node N; it is the sum of the costs of the rules that were applied along the best path to the current node.
The function h is an estimate of additional cost of getting from the current node N to the goal
node.
Generally, the A* algorithm is called an OR graph/tree search algorithm. A* incrementally searches all the routes starting from the start node until it finds the shortest path to a goal. Starting with a given node, the algorithm expands the node with the lowest f(X) value. It maintains a set of partial solutions.
Let us consider the example of the 8-puzzle again and solve it using A*. The simple evaluation function f(x) is defined as follows:
f(x) = g(x) + h(x), where
h(x) = the number of tiles not in their goal position in a given state x
g(x) = depth of node x in the search tree
Given:

start state    goal state
3 7 6          5 3 6
5 1 2          7 _ 2
4 _ 8          4 1 8

In the start state, f = 0 + 4 (g = level = 0; h = 4, the number of misplaced tiles, i.e. 3, 7, 5, 1).
A better estimate of the h function might be as follows (the function g may remain the same):
h(X) = the sum of the distances of the tiles (1 to 8) from their goal positions in the given state X
For tiles 1 to 8 these distances in the start state are 1, 0, 1, 0, 1, 0, 2, 0 respectively, so h(start state) = 1+0+1+0+1+0+2+0 = 5.
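A* with this distance-sum heuristic can be sketched on the very instance above. The flat 9-tuple state encoding (0 for the blank) and other implementation details are our own assumptions; the start and goal states are the ones given.

```python
import heapq

START = (3, 7, 6, 5, 1, 2, 4, 0, 8)   # 0 denotes the blank
GOAL  = (5, 3, 6, 7, 0, 2, 4, 1, 8)

def h(state):
    """Sum of Manhattan distances of tiles 1-8 from their goal positions."""
    total = 0
    for i, tile in enumerate(state):
        if tile:
            j = GOAL.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

def neighbours(state):
    i = state.index(0)                  # blank position
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]     # slide the adjacent tile
            yield tuple(s)

def astar(start, goal):
    """Expand the frontier node with the lowest f = g + h."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbours(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

path = astar(START, GOAL)
print(len(path) - 1)  # 5 moves
```

With h(start) = 5 and the heuristic never overestimating, A* returns the optimal five-move solution traced earlier.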
A* ensures finding an optimal path to a goal, if one exists, provided that h for each node X is underestimated, i.e., the heuristic value is less than the actual cost from node X to the goal node. In Fig. 2.11, start node A is expanded to B, C and D with f values 4, 5 and 6 respectively (assuming, for the sake of simplicity, that the cost of every arc is 1). Node B has the minimum f value, so we expand it to E, which has f value 5. C is also 5; we resolve the tie in favor of E, the path we are currently expanding. Node E is then expanded to node F with f value 6. Clearly, expansion below F is stopped, as the f value of C is now the smallest. Thus, we see that by underestimating the heuristic value we may do some extra work, but we still find the optimal path.
Overestimation
Let us consider another situation, in which we are overestimating the heuristic value of each node in the graph/tree. We expand B to E, E to F, and F to G, for a solution path of length 4. But assume that there is a direct path from D to a solution giving a path of length 2; as the h value of D is also overestimated, we will never find it. We may find some other, worse solution without ever expanding D. So, by overestimating h, we cannot be guaranteed to find the shortest path. (Figure 2.12: Example Search Graph for Overestimation)
Admissibility of A*
A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the start state to the goal state, if such a path exists. We have seen earlier that if the heuristic function h underestimates the actual cost from the current state to the goal state, then it is bound to give an optimal solution, and hence it is called an admissible function. So, we can say that A* always terminates with the optimal path in case h is an admissible heuristic function.
Monotonic Function
A heuristic function h is monotone if, for every node N and every successor N' of N, h(N) <= c(N, N') + h(N'), where c(N, N') is the cost of the arc from N to N', and h(goal) = 0.
2.5 Iterative-Deepening A*
Iterative-Deepening A* (IDA*) is a combination of the depth-first iterative deepening and A*
algorithm. Here the successive iterations correspond to increasing values of the total cost of a path, rather than to increasing depth of the search. The algorithm works as follows:
• For each iteration, perform a DFS pruning off a branch when its total cost (g + h) exceeds a
given threshold.
• The initial threshold is the estimated cost of the start state and increases for each iteration
of the algorithm.
• The threshold used for the next iteration is the minimum of all cost values that exceeded the current
threshold.
The IDA* algorithm is shown in Fig. 2.13. Initially, the threshold value is the estimated cost of the start node. In the first iteration, threshold = 5. Now we generate all the successors of the start node and compute their estimated values as 6, 8, 4, 8, and 9. The successors having values greater than 5 are pruned. For the next iteration, we take the threshold to be the minimum of the pruned nodes' values, that is, threshold = 6, and the node with value 6, along with the node with value 4, is retained for further expansion.
IDA* will find a least-cost or optimal solution (if one exists), provided an admissible, monotonic cost function is used. IDA* not only finds a cheapest path to a solution but also uses far less space than A*, and it expands approximately the same number of nodes as A* in a tree search. An additional benefit of IDA* over A* is that it is simpler to implement, as there are no open and closed lists to be maintained. A simple recursion performs DFS inside an outer loop to handle the iterations.
In some problems, we have to find a solution that satisfies a given set of constraints, instead of finding an optimal path to the solution. Such problems are called Constraint Satisfaction (CS) problems. For example, some simple constraint satisfaction problems are cryptarithmetic puzzles, the n-Queen problem, map coloring, crossword puzzles, etc.
A cryptarithmetic problem: A number puzzle in which a group of arithmetical operations has some or all of its digits replaced by letters, and the original digits must be found. In such a puzzle, each letter represents a unique digit. Let us consider the following problem, in which we have to replace each letter by a distinct digit (0-9) so that the resulting sum is correct.
B A S E
+ B A L L
G A M E S
The n-Queen problem: The condition is that no two queens attack each other, i.e. no two queens share the same row, column, or diagonal.
A map coloring problem: Given a map, color the regions of the map using three colors (blue, red, and black) such that no two neighboring countries have the same color.
• Operator: assigns a value to any unassigned variable, provided that it does not conflict with previously assigned variables.
Example 1
Statement: Solve the following puzzle by assigning numerals (0-9) in such a way that each letter is assigned a unique digit which satisfies the following addition:
B A S E
+ B A L L
G A M E S
• Constraints: No two letters have the same value, and the sums must satisfy the constraints of arithmetic.
•Initial Problem State
G=? ;A=?;M=?;E=?;S=?;B=?;L=?;
•Apply constraint inference rules to generate the relevant new constraints.
• Apply the letter assignment rules to perform all assignments required by the current set of
constraints. Then choose other rules to generate an additional assignment, which will, in turn,
generate new constraints at the next cycle.
•At each cycle, there may be several choices of rules to apply.
• A useful heuristic can help to select the best rule to apply first. For example, if one letter has only two possible values and another has six possible values, then there is a better chance of guessing right on the first than on the second.
  C4 C3 C2 C1   <- carries
      B  A  S  E
   +  B  A  L  L
   G  A  M  E  S

We can easily see that G has a non-zero value, so the carry C4 must be 1; hence G = 1.
Now consider the units column, with carry C1, and the tens column, with carry C2:
E + L = S + 10*C1   ... (1)
S + L + C1 = E + 10*C2   ... (2)
Adding (1) and (2) gives 2L + C1 = 10*C1 + 10*C2, i.e. 2L = 9*C1 + 10*C2. Since 2L is even, C1 must be 0; and L cannot be 0 (otherwise (1) would give E = S), so C2 = 1 and 2L = 10, giving
L = 5
CHARACTER | VALUE
G         | 1
B         |
A         |
M         |
S         |
L         | 5
E         |
From (1) with C1 = 0, E + L = S, so S - E = L = 5.
Now consider the possible pairs of values of S and E with S - E = 5. For the hundreds column we have C2 + A + A = M (with a possible carry C3). A trial such as A = 3 gives 1 + 3 + 3 = 7 for M, which conflicts with another letter, so we keep trying until we find values that do not conflict with any other alphabet. The final state of the alphabets is:
CHARACTER | VALUE
G         | 1
B         | 7
A         | 4
M         | 9
S         | 8
L         | 5
E         | 3

Verification (carries C4 = 1, C2 = 1):
     7 4 8 3
  +  7 4 5 5
  ----------
   1 4 9 3 8
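The assignment can be checked mechanically by brute force over the seven letters. This sketch simply enumerates digit assignments; it is not the constraint-propagation procedure described above, but it confirms the derived facts G = 1 and L = 5.

```python
from itertools import permutations

def solve():
    """Enumerate assignments of distinct digits to B A S E L G M
    with non-zero leading digits, keeping those where BASE + BALL = GAMES."""
    solutions = []
    for B, A, S, E, L, G, M in permutations(range(10), 7):
        if B == 0 or G == 0:
            continue
        base = 1000 * B + 100 * A + 10 * S + E
        ball = 1000 * B + 100 * A + 10 * L + L
        games = 10000 * G + 1000 * A + 100 * M + 10 * E + S
        if base + ball == games:
            solutions.append({"B": B, "A": A, "S": S, "E": E,
                              "L": L, "G": G, "M": M})
    return solutions

sols = solve()
print(len(sols), sols[0] if sols else None)
```

Every solution found has G = 1 and L = 5, matching the pencil-and-paper derivation; the worked assignment 7483 + 7455 = 14938 is among them.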
Example 2
  C3 C2 C1   <- carries
     T  W  O
  +  T  W  O
  F  O  U  R
3 Game Playing
Since the beginning of the AI paradigm, game playing has been considered a major topic of AI, as it requires intelligence and has certain well-defined states and rules. A game is defined as a sequence of choices, where each choice is made from a number of discrete alternatives. Each sequence ends in a certain outcome, and every outcome has a definite value for the opening player. Playing games against computers has always held a great fascination for human experts. We will consider only two-player games in which both players have exactly opposite goals.
Games can be classified into two types: perfect information games and imperfect information games. Perfect information games are those in which both players have access to the same information about the game in progress; for example, Checkers, Tic-Tac-Toe, Chess, Go, etc. On the other hand, in imperfect information games, players do not have access to complete information about the game; for example, games involving the use of cards (such as Bridge) and dice. We will restrict our study to discrete, perfect information games. A game is said to be discrete if it contains a finite number of states or configurations.
The moves available to the opponent are the AND nodes. Therefore, in a game tree, one level is treated as an OR node level and the other as an AND node level from one player's point of view. On the other hand, in a general AND-OR tree, both types of nodes may be on the same level.
Game theory is based on the philosophy of minimizing the maximum possible loss and maximizing the minimum gain. In game playing involving computers, one player is assumed to be the computer, while the other is a human. During a game, two types of nodes are encountered, namely MAX and MIN. The MAX node will try to maximize its own game, while minimizing the opponent's (MIN's) game. Either of the two players, MAX and MIN, can play as the first player. We will assign the computer to be the MAX player and the opponent to be the MIN player. Our aim is to make the computer win the game by always making the best possible move at its turn. For this, we have to look ahead at all possible moves in the game by generating the complete game tree and then decide which move is the best for MAX. As a part of game playing, game tree levels labeled MAX and MIN are generated alternately.
Solving a game tree implies labeling the root node with one of the labels WIN (W), LOSS (L), or DRAW (D). There is an optimal playing strategy associated with each root label, which tells how that label can be guaranteed regardless of the way MIN plays. An optimal strategy for MAX is a sub-tree in which all nodes, starting from the first MAX move, lead to WIN.
Let us consider, for explaining the concept, a game with a single pile of 7 matchsticks for the sake of simplicity. Each player in a particular move can pick up at most half the number of matchsticks in the pile at that point in time. We will develop the complete game tree with either MAX or MIN playing as the first player.
STRATEGY
If, at the time of MAX's turn, there are N matchsticks in the pile, then MAX can force a win by leaving M matchsticks for the MIN player to play, where M is in {1, 3, 7, 15, 31, 63, ...}, using the rules of the game. The sequence {1, 3, 7, 15, 31, 63, ...} consists of the numbers of the form 2^k - 1 and can be generated using the formula M(k+1) = 2*M(k) + 1, with M(1) = 1. The desired number of sticks to pick up can be computed in two ways:
1. The first method is to look in the sequence {1, 3, 7, 15, 31, ...} and find the closest
number less than the given number N of matchsticks in the pile. The difference between
N and that number gives the desired number of sticks that have to be picked up.
For example, if N = 45, the closest number to 45 in the sequence is 31, so we obtain the
desired number of matchsticks to be picked up as 14, on subtracting 31 from 45. In this
case we have to maintain the sequence {1, 3, 7, 15, 31, ...}.
2. The desired number is obtained by removing the most significant digit from the binary
representation of N and adding its value (1) back at the least significant digit position.
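Both methods can be sketched and checked against each other (assuming N is larger than 1 and not itself of the form 2^k - 1):

```python
def by_sequence(n):
    """Method 1: subtract the largest number of the form 2^k - 1 below n."""
    m = 1
    while 2 * m + 1 <= n:       # generate 1, 3, 7, 15, 31, ... up to n
        m = 2 * m + 1
    return n - m

def by_binary(n):
    """Method 2: drop the most significant bit of n's binary representation
    and add its value (1) back at the least significant position."""
    bits = bin(n)[2:]
    return int(bits[1:], 2) + 1

print(by_sequence(45), by_binary(45))  # 14 14
```

Both methods agree because removing the most significant bit subtracts 2^k and adding 1 back yields N - (2^k - 1), exactly the difference used in the first method.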
Case 1: MAX is the first player; consider a pile of N = 29 sticks. MAX plays first and wins.
Case 2: MAX is the second player; consider a pile of N = 15 sticks. MAX wins.
If MAX is the first player, then MAX can force a win provided it can leave a number from the sequence {3, 7, 15, 31, 63, ...} for MIN; MAX might lose only in the case of itself getting a number from the sequence (for example, 7).
Case 3: If MAX is the first player, then MAX can force a win using the strategy mentioned above in all cases except when MAX gets a number from the sequence at its turn.
Case 4: If MAX is the second player, then MAX can force a win using the strategy mentioned above in all cases except when MAX gets a number from the sequence at its turn.
If a player can develop the game tree to a limited extent before deciding on the move, then the player is looking ahead; this is called a look-ahead strategy. If a player looks ahead n levels before making a move, then the strategy is called n-move look-ahead. The evaluation function provides a numerical assessment of how favorable a game state is for MAX. The general strategy for MAX is to play in such a manner that it maximizes its winning chances while simultaneously minimizing the chances of the opponent. The node which offers the best path is then chosen for the move. For the sake of convenience, let us assume the root node to be MAX.
Consider the one-ply and two-ply games as shown; the scores of the leaf nodes are assumed to be calculated using the evaluation function.
The procedure through which the scoring information travels up the game tree is called the MINIMAX procedure. This procedure represents a recursive algorithm for choosing the next move in a two-player game.
MINIMAX Procedure
The player hoping to achieve a positive number is called the maximizing player, while the
opponent is called the minimizing player. At each move, the MAX player will try to take a
path that leads to a large positive number; on the other hand, the opponent will try to force
the game towards situations with strongly negative static evaluations.
The algorithmic steps of this procedure may be written as follows:
• Keep generating the tree till the limit, say depth d of the tree, has been reached.
• Compute the static values of the leaf nodes at depth d from the current position of the game
tree using the evaluation function.
• Propagate the values up to the current position on the basis of the MINIMAX strategy.
MINIMAX Strategy
• Depth(Pos, Depth): It is a Boolean function that returns true if the search has reached the maximum depth from the current position; otherwise it returns false.
The MINIMAX function returns a structure consisting of Val field containing heuristic value of
the current state obtained by EVAL function and Path field containing the entire path from the
current state. This path is constructed backwards starting from the last element to the first
element because of recursion.
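The value-propagation part of MINIMAX can be sketched on an explicit tree. This minimal version returns only the backed-up value (the Val/Path structure described above is omitted), and the tree with its leaf evaluations is an illustrative assumption.

```python
def minimax(node, maximizing):
    """Back up values: leaves return their static evaluation; MAX levels take
    the maximum of their children, MIN levels the minimum."""
    if isinstance(node, int):           # leaf: static evaluation value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A two-ply game: MAX to move at the root, MIN at the next level.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # 3
```

The MIN nodes back up 3, 2, and 0, and MAX at the root picks the largest, 3, so the first move is best for MAX under perfect opposition.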
Tic-tac-toe Game
Tic-tac-toe is a two-player game in which the players take turns marking the cells of a 3 x 3 grid using appropriate symbols. One player uses 'o' and the other uses 'x'. The player who succeeds in placing three of their symbols in a horizontal, vertical, or diagonal row wins the game.
Let us define the static evaluation function f for a position P of the grid (board) as follows:
• If P is a win for MAX, then
f(P) = n (where n is a very large positive number)
• If P is a win for MIN, then
f(P) = -n
• If P is not a winning position for either player, then
f(P) = (total number of rows, columns, and diagonals that are still open for MAX)
- (total number of rows, columns, and diagonals that are still open for MIN)
For the board position shown in the figure:
• Total number of rows, columns, and diagonals still open for MAX (thick lines) = 6
• Total number of rows, columns, and diagonals still open for MIN (dotted lines) = 4
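The open-lines evaluation can be sketched directly; the board position below is an illustrative assumption (not the figure's position), with 'x' as MAX and 'o' as MIN.

```python
# All 8 lines of a 3x3 board: 3 rows, 3 columns, 2 diagonals.
LINES = (
    [[(r, c) for c in range(3)] for r in range(3)]
    + [[(r, c) for r in range(3)] for c in range(3)]
    + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
)

def open_lines(board, player):
    """A line is still open for a player if the opponent has no mark in it."""
    opponent = "o" if player == "x" else "x"
    return sum(all(board[r][c] != opponent for r, c in line)
               for line in LINES)

def f(board):
    return open_lines(board, "x") - open_lines(board, "o")

board = [["x", " ", " "],
         [" ", "o", " "],
         [" ", " ", " "]]
print(open_lines(board, "x"), open_lines(board, "o"), f(board))  # 4 5 -1
```

Here 'o' in the center blocks both diagonals and the middle row and column for 'x', so MIN is slightly ahead by this measure.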
The alpha-beta pruning procedure requires the maintenance of two threshold values: one representing a lower bound (alpha) on the value that a maximizing node may ultimately be assigned, and another representing an upper bound (beta) on the value that a minimizing node may be assigned.
Assume that the evaluation function generates the value 2 for state D (a MAX node). At this point, the upper bound becomes 2 at the MIN state B, shown as <= 2.
After the first step, we have to backtrack and generate another state E from B in the second step, as shown in Fig. 3.23.
The state E gets the value 7, and since there is no further successor of B (assumed), the value at state B becomes equal to 2. Once the value is fixed at the MIN level, the lower bound alpha = 2 gets propagated to state A as >= 2.
In the third step, we expand A to another successor C, and then expand C's successor F. From Fig. 3.24 we note that the value at state C is <= 1, and the value of the root A cannot be less than 2; the path from A through C is not useful, and thus further expansion of C is pruned. Therefore, there is no need to explore the right side of the tree fully, as the result is not going to alter the move decision. Since there are no further successors of A (assumed), the value of the root is fixed as 2.
The MINIMAX procedure still has some problems even with all the refinements discussed above: it is based on the assumption that the opponent will always choose an optimal move. In a winning situation, this assumption is acceptable, but in a losing situation, one may try other options and gain some benefit in case the opponent makes a mistake (Rich & Knight, 2003).
Suppose we have to choose one move out of two possible moves, both of which may lead to bad situations for us if the opponent plays perfectly. The MINIMAX procedure will always choose the less bad of the two; however, we might instead prefer the other option, on the assumption that it could lead to a good situation for us if the opponent makes a single mistake. Similarly, there might be a situation when one move appears to be only slightly more advantageous than the other; then it might be better to choose the less advantageous move. To implement such systems, we should have a model of the individual playing styles of opponents.
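The alpha-beta cut-off described in the trace above can be sketched on an explicit tree. The tree below is an illustrative assumption shaped like that trace: once the left MIN subtree fixes alpha = 2, the second leaf of the right MIN subtree is never evaluated.

```python
def alphabeta(node, alpha, beta, maximizing, visited):
    """MINIMAX with alpha-beta pruning; `visited` records evaluated leaves."""
    if isinstance(node, int):
        visited.append(node)            # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # beta cut-off
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, visited))
        beta = min(beta, value)
        if alpha >= beta:
            break                       # alpha cut-off: MAX already has better
    return value

# MAX root with two MIN children; the leaf 9 is pruned after 1 is seen.
tree = [[2, 7], [1, 9]]
visited = []
print(alphabeta(tree, float("-inf"), float("inf"), True, visited), visited)
```

The leaves are evaluated in the order 2, 7, 1; as soon as the right MIN node's bound (beta = 1) drops below alpha = 2, the remaining leaf 9 is skipped without changing the root value of 2.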
Iterative Deepening
Rather than searching to a fixed depth in a given game tree, it is advisable to first search one ply, then apply MINIMAX to two plies, then three, and so on until the final goal state is reached. There is a good reason why iterative deepening is popular in the case of games such as chess. In competitions, there is an average amount of time allowed per move. The idea that enables us to work within this constraint is to do as much look-ahead as can be done in the available time. If we use iterative deepening, we can keep on increasing the look-ahead depth until we run out of time, and we can arrange to have a record of the best move for a given look-ahead even if we have to interrupt our attempt to go one level deeper. This could not be done using (unbounded) depth-first search. With effective move ordering, the alpha-beta pruning MINIMAX algorithm can prune many branches, and the total search time can be decreased.