B Section Mod 1
Since the invention of computers and machines, their capability to perform various tasks has grown
exponentially. Humans have developed the power of computer systems in terms of their diverse working
domains, their increasing speed, and their reducing size with respect to time.
Artificial Intelligence, a branch of Computer Science, pursues creating computers or machines
as intelligent as human beings.
According to the father of Artificial Intelligence John McCarthy, it is “The science and engineering of making
intelligent machines, especially intelligent computer programs”.
AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while
trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent
software and systems.
Philosophy of AI
While exploiting the power of computer systems, human curiosity led us to wonder, "Can a
machine think and behave as humans do?"
Thus, the development of AI started with the intention of creating in machines an intelligence similar to
the one we find, and regard highly, in humans.
Goals of AI
• To Create Expert Systems: Systems which exhibit intelligent behavior, learn, demonstrate, explain,
and advise their users.
• To Implement Human Intelligence in Machines: Creating systems that
understand, think, learn, and behave like humans.
One or more of the following areas can contribute to building an intelligent system.
Programming Without and With AI
Programming without and with AI differs in the following ways:
What is AI Technique?
In the real world, knowledge has some unwelcome properties:
AI techniques elevate the speed of execution of the complex programs they are equipped with.
Applications of AI
AI has been dominant in various fields such as:
• Gaming
AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where the machine can
think of a large number of possible positions based on heuristic knowledge.
• Natural Language Processing
It is possible to interact with a computer that understands natural language spoken by
humans.
• Expert Systems
There are some applications which integrate machine, software, and special information to
impart reasoning and advising. They provide explanation and advice to the users.
• Vision Systems
These systems understand, interpret, and comprehend visual input on the computer. For
example,
o A spying aeroplane takes photographs, which are used to figure out spatial information or a map of
the area.
o Doctors use a clinical expert system to diagnose the patient.
o Police use computer software that can recognize the face of a criminal against the stored portrait
made by a forensic artist.
• Speech Recognition
Some intelligent systems are capable of hearing and comprehending language in terms of
sentences and their meanings while a human talks to them. They can handle different accents, slang
words, noise in the background, change in a human's voice due to cold, etc.
• Handwriting Recognition
The handwriting recognition software reads the text written on paper by a pen or on screen by a
stylus. It can recognize the shapes of the letters and convert them into editable text.
• Intelligent Robots
Robots are able to perform the tasks given by a human. They have sensors to detect physical
data from the real world such as light, heat, temperature, movement, sound, bump, and
pressure. They have efficient processors, multiple sensors and huge memory, to exhibit
intelligence. In addition, they are capable of learning from their mistakes and they can adapt to
the new environment.
History of AI
Here is the history of AI during the 20th century:

1923: Karel Čapek’s play "Rossum's Universal Robots" (R.U.R.) opens in London; first use of the word
"robot" in English.
1943: Foundations for neural networks laid.
1945: Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
1950: Alan Turing introduced the Turing Test for the evaluation of intelligence and published "Computing
Machinery and Intelligence". Claude Shannon published a detailed analysis of chess playing as search.
1956: John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program
at Carnegie Mellon University.
1964: Danny Bobrow's dissertation at MIT showed that computers can understand natural language
well enough to solve algebra word problems correctly.
1965: Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in
English.
1969: Scientists at Stanford Research Institute developed Shakey, a robot equipped with locomotion,
perception, and problem solving.
1973: The Assembly Robotics group at Edinburgh University built Freddy, the Famous Scottish Robot,
capable of using vision to locate and assemble models.
1979: The first computer-controlled autonomous vehicle, the Stanford Cart, was built.
1985: Harold Cohen created and demonstrated the drawing program Aaron.
1997: The Deep Blue chess program beat the then world chess champion, Garry Kasparov.
2000: Interactive robot pets become commercially available. MIT displays Kismet, a robot with a face
that expresses emotions. The robot Nomad explores remote regions of Antarctica and locates
meteorites.
While studying artificial intelligence, you need to know what intelligence is. This chapter covers the idea of
intelligence and the types and components of intelligence.
What is Intelligence?
Intelligence is the ability of a system to calculate, reason, perceive relationships and analogies, learn from
experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use
natural language fluently, classify, generalize, and adapt to new situations.
Types of Intelligence
As described by Howard Gardner, an American developmental psychologist, intelligence comes in
multiple forms:

Linguistic intelligence: The ability to speak, recognize, and use mechanisms of phonology
(speech sounds), syntax (grammar), and semantics (meaning). Examples: narrators, orators.
Logical-mathematical intelligence: The ability to use and understand relationships in the absence of
action or objects, and to understand complex and abstract ideas. Examples: mathematicians, scientists.
You can say a machine or a system is artificially intelligent when it is equipped with at least one, and at most
all, of these intelligences. Intelligence is composed of:
1. Reasoning
2. Learning
3. Problem Solving
4. Perception
5. Linguistic Intelligence
Inductive reasoning allows the conclusion to be false even if all of the premises in a statement are true.
Deductive reasoning holds that if something is true of a class of things in general, it is also true for all
members of that class.
1. Intelligent Agents:
2.1 Agents and Environments:
2.1.1 Agent:
An Agent is anything that can be viewed as perceiving its environment through sensors and acting upon that
environment through actuators.
✓ A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, andother body parts
foractuators.
✓ A robotic agent might have cameras and infrared range finders for sensors and variousmotors foractuators.
✓ A software agent receives keystrokes, file contents, and network packets as sensory inputs, and acts on
the environment by displaying output on the screen, writing files, and sending network packets.
An AI system is composed of an agent and its environment. The agents act in their environment.
The environment may contain other agents.
A human agent has sensory organs such as eyes, ears, nose, tongue, and skin, which serve as sensors, and
other organs such as hands, legs, and mouth, which serve as effectors.
A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for
effectors.
A software agent has encoded bit strings as its programs and actions.
Agents Terminology
• Performance Measure of Agent: The criteria which determine how successful an agent is.
• Behavior of Agent: The action that the agent performs after any given sequence of percepts.
• Percept: The agent's perceptual inputs at a given instance.
• Percept Sequence: The history of all that the agent has perceived till date.
• Agent Function: A map from the percept sequence to an action.
Rationality
Rationality is nothing but the status of being reasonable, sensible, and having a good sense of judgment.
Rationality is concerned with expected actions and results, depending upon what the agent has perceived.
Performing actions with the aim of obtaining useful information is an important part of rationality.
A rational agent always performs the right action, where the right action is the one that causes the
agent to be most successful for the given percept sequence. The problem an agent solves is characterized
by its Performance measure, Environment, Actuators, and Sensors (PEAS).
Simple Reflex Agents
• They choose actions only based on the current percept.
• They are rational only if a correct decision can be made on the basis of the current percept alone.
• Their environment must be completely observable.
Internal State: It is a representation of unobserved aspects of current state depending on percept history.
Goal-Based Agents
They choose their actions in order to achieve goals. The goal-based approach is more flexible than the reflex
approach, since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications.
Utility-Based Agents
They choose actions based on a preference (utility) for each state.
The Nature of Environments
Some programs operate in an entirely artificial environment confined to keyboard input, a database,
computer file systems, and character output on a screen.
In contrast, some software agents (software robots or softbots) exist in rich, unlimited domains.
Such a simulator has a very detailed, complex environment, and the software agent needs to choose from a
long array of actions in real time. A softbot designed to scan the online preferences of a customer and
show interesting items to the customer works in a real as well as an artificial environment.
The most famous artificial environment is the Turing Test environment, in which a real agent and an
artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult
for a software agent to perform as well as a human.
Turing Test
The success of a system's intelligent behavior can be measured with the Turing Test.
Two persons and the machine to be evaluated participate in the test. One of the two persons plays the
role of the tester. Each of them sits in a different room. The tester is unaware of who is the machine and
who is the human. The tester interrogates both by typing questions and sending them, and receives typed
responses from each.
The test aims at fooling the tester. If the tester fails to distinguish the machine's responses from the
human's responses, then the machine is said to be intelligent.
Properties of Environment
The environment has multifold properties:
• Discrete / Continuous: If there are a limited number of distinct, clearly defined, states of the environment,
the environment is discrete (For example, chess); otherwise it is continuous (For example, driving).
• Observable / Partially Observable: If it is possible to determine the complete state of the environment at
each time point from the percepts, it is observable; otherwise it is only partially observable.
• Static / Dynamic: If the environment does not change while an agent is acting, then it is static; otherwise it
is dynamic.
• Single agent / Multiple agents: The environment may contain other agents which may be of the same or
different kind as that of the agent.
• Accessible vs. inaccessible: If the agent’s sensory apparatus can have access to the complete state of the
environment, then the environment is accessible to that agent.
• Deterministic vs. Non-deterministic: If the next state of the environment is completely determined by the
current state and the actions of the agent, then the environment is deterministic; otherwise it is non-
deterministic.
• Episodic vs. Non-episodic: In an episodic environment, each episode consists of the agent perceiving and
then acting. The quality of its action depends just on the episode itself. Subsequent episodes do not
depend on the actions in the previous episodes. Episodic environments are much simpler because the
agent does not need to think ahead.
5. Popular Search Algorithms
Searching is the universal technique of problem solving in AI. There are some single-player games such as
tile games, Sudoku, crossword, etc. The search algorithms help you to search for a particular position in
such games.
The other examples of single agent pathfinding problems are Travelling Salesman Problem, Rubik’s Cube,
and Theorem Proving.
Search Terminology
Problem Space: The environment in which the search takes place (a set of states and a set of
operators to change those states).
Problem Instance: Initial state + Goal state.
Problem Space Graph: It represents the problem states. States are shown by nodes and operators are shown
by edges.
Depth of a problem: Length of the shortest path, or shortest sequence of operators, from the initial state to
the goal state.
Space Complexity: The maximum number of nodes that are stored in memory.
Time Complexity: The maximum number of nodes that are created.
Admissibility: A property of an algorithm to always find an optimal solution.
Branching Factor: The average number of child nodes in the problem space graph.
Depth: Length of the shortest path from the initial state to the goal state.
Brute-Force Search Strategies
They are the simplest, as they do not need any domain-specific knowledge. They work fine with a small
number of possible states.
Requirements:
• State description
• A set of valid operators
• Initial state
• Goal state description
Breadth-First Search
It starts from the root node, explores the neighboring nodes first, and moves towards the next-level
neighbors. It generates one tree at a time until the solution is found. It can be implemented using a FIFO
queue data structure. This method provides the shortest path to the solution.
If branching factor (average number of child nodes for a given node) = b and depth = d, then the number of
nodes at level d = b^d.
Disadvantage: Since each level of nodes is saved for creating the next one, it consumes a lot of memory
space. The space requirement to store the nodes is exponential.
Its complexity depends on the number of nodes. It can check duplicate nodes.
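The description above can be sketched as a short Python function. The small graph, state names, and goal are illustrative assumptions, not from the text; note how the visited set implements the duplicate check and the FIFO queue yields the shortest path.

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Return the shortest path from start to goal, or None."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}                    # BFS can check duplicate nodes
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['G']}
print(breadth_first_search('A', 'G', graph))  # ['A', 'C', 'G']
```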
Depth-First Search
It is implemented in recursion with a LIFO stack data structure. It creates the same set of nodes as the
Breadth-First method, only in a different order.
As only the nodes on the single path from root to leaf node are stored in each iteration, the space
requirement to store nodes is linear: with branching factor b and depth m, the storage space is O(bm).
Disadvantage: This algorithm may not terminate and may go on infinitely along one path. The solution to this
issue is to choose a cut-off depth. If the ideal cut-off is d, and the chosen cut-off is less than d, then this
algorithm may fail. If the chosen cut-off is more than d, then execution time increases.
Its complexity depends on the number of paths. It cannot check duplicate nodes.
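A recursive sketch of depth-first search with the cut-off depth mentioned above; the graph and cut-off value are assumptions for illustration. Only the current path is kept on the call stack, which is where the linear space requirement comes from.

```python
def depth_first_search(state, goal, successors, cutoff):
    """DFS with a cut-off depth to avoid running infinitely along one path."""
    if state == goal:
        return [state]
    if cutoff == 0:
        return None                      # cut-off reached: give up on this path
    for nxt in successors.get(state, []):
        sub = depth_first_search(nxt, goal, successors, cutoff - 1)
        if sub is not None:
            return [state] + sub         # only the current path is stored
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['G']}
print(depth_first_search('A', 'G', graph, cutoff=3))  # ['A', 'C', 'G']
```

A cut-off smaller than the ideal depth makes the search fail, exactly as the text warns.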
Bidirectional Search
It searches forward from initial state and backward from goal state till both meet to identify a common
state.
The path from initial state is concatenated with the inverse path from the goal state. Each search is done
only up to half of the total path.
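A minimal sketch of bidirectional search, assuming an undirected graph so edges can be followed backward from the goal; the example graph is hypothetical. Each side runs a BFS, and the forward path is concatenated with the inverse backward path at the meeting state.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Search forward from start and backward from goal until the frontiers meet."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parent, other_parent):
        state = frontier.popleft()
        for nxt in neighbors.get(state, []):
            if nxt not in parent:
                parent[nxt] = state
                if nxt in other_parent:      # frontiers meet at a common state
                    return nxt
                frontier.append(nxt)
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parent, bwd_parent) \
               or expand(bwd_frontier, bwd_parent, fwd_parent)
        if meet:
            path, s = [], meet               # forward half, reversed below
            while s is not None:
                path.append(s)
                s = fwd_parent[s]
            path.reverse()
            s = bwd_parent[meet]             # concatenate the inverse backward half
            while s is not None:
                path.append(s)
                s = bwd_parent[s]
            return path
    return None

graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(bidirectional_search('A', 'D', graph))  # ['A', 'B', 'C', 'D']
```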
Uniform Cost Search
Disadvantage: There can be multiple long paths with cost ≤ C*. Uniform Cost Search must explore
them all.

Iterative Deepening Depth-First Search
It never creates a node until all lower nodes are generated. It only saves a stack of nodes. The algorithm
ends when it finds a solution at depth d. The number of nodes created at depth d is b^d and at depth d-1 is
b^(d-1).
Informed (Heuristic) Search Strategies
To solve large problems with a large number of possible states, problem-specific knowledge needs to be
added to increase the efficiency of search algorithms.
Pure Heuristic Search
In each iteration, the node with the minimum heuristic value is expanded and placed in the closed list;
all its child nodes are created, the heuristic function is applied to them, and they are placed in
the open list according to their heuristic value. The shorter paths are saved and the longer ones are
disposed of.
A* Search
It is the best-known form of Best-First search. It avoids expanding paths that are already expensive, and
expands the most promising paths first.
• f(n) = g(n) + h(n), where g(n) is the cost (so far) to reach node n, h(n) is the estimated cost from node n
to the goal, and f(n) is the estimated total cost of the path through n to the goal. It is implemented using a
priority queue ordered by increasing f(n).
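A sketch of A* using a priority queue ordered by increasing f(n) = g(n) + h(n). The tiny weighted graph and the heuristic values are illustrative assumptions; with an admissible heuristic, the cheaper path through C is preferred over the shorter-looking path through B.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: always expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbors.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # skip already-expensive paths
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

graph = {'A': [('B', 1), ('C', 4)], 'B': [('G', 5)], 'C': [('G', 1)]}
h = {'A': 3, 'B': 4, 'C': 1, 'G': 0}.get
print(a_star('A', 'G', graph, h))  # (['A', 'C', 'G'], 5)
```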
Local Search Algorithms
They start from a prospective solution and then move to a neighboring solution. They can return a valid
solution even if they are interrupted at any time before they end.
Hill-Climbing Search
It is an iterative algorithm that starts with an arbitrary solution to a problem and attempts to find a better
solution by incrementally changing a single element of the solution. If the change produces a better
solution, the incremental change is taken as the new solution. This process is repeated until there are no
further improvements.
procedure HILL-CLIMBING(problem) returns a state that is a local maximum
    current ← MAKE-NODE(problem.INITIAL-STATE)
    loop do
        neighbor ← a highest-valued successor of current
        if VALUE(neighbor) ≤ VALUE(current) then return current.STATE
        current ← neighbor
    end
Disadvantage: This algorithm is neither complete, nor optimal.
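A runnable sketch of hill climbing; the objective function (maximize -(x - 3)^2, which peaks at x = 3) and the integer neighborhood are assumptions chosen for illustration. On this single-peaked function it reaches the maximum, but in general it can stop at a local maximum, which is why the algorithm is neither complete nor optimal.

```python
def hill_climbing(initial, value, neighbors):
    """Move to a better neighboring solution until no neighbor improves."""
    current = initial
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):     # no improving neighbor: stop
            return current
        current = best                        # incremental change becomes the new solution

value = lambda x: -(x - 3) ** 2               # hypothetical objective, peak at x = 3
neighbors = lambda x: [x - 1, x + 1]          # change a single element incrementally
print(hill_climbing(0, value, neighbors))  # 3
```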
Local Beam Search
In this algorithm, k states are held at any given time. At the start, these states are generated randomly,
and the successors of these k states are computed with the help of an objective function. If any of these
successors is the maximum of the objective function, the algorithm stops.
Otherwise the (initial k states and k successors of those states = 2k) states are placed in a pool. The
pool is then sorted numerically. The highest k states are selected as new initial states. This process
continues until a maximum value is reached.
loop
    if any of the states = solution, then return the state
    else select the k best successors
end
Simulated Annealing
Annealing is the process of heating and cooling a metal to change its internal structure and thereby modify
its physical properties. When the metal cools, its new structure is fixed, and the metal retains its newly
obtained properties. In the simulated annealing process, the temperature is kept variable.
We initially set the temperature high and then allow it to ‘cool' slowly as the algorithm proceeds. When
the temperature is high, the algorithm is allowed to accept worse solutions with high frequency.
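A sketch of simulated annealing under stated assumptions: the objective function, the geometric cooling schedule, and the acceptance rule exp(delta/T) for worse moves are all illustrative choices, not taken from the text.

```python
import math
import random

def simulated_annealing(initial, value, neighbor,
                        temp=10.0, cooling=0.95, steps=500):
    """Accept worse solutions with a probability that shrinks as we 'cool'."""
    random.seed(0)                        # deterministic run for the example
    current = initial
    for _ in range(steps):
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        # always accept improvements; accept worse moves with prob e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp = max(temp * cooling, 1e-9)  # cool slowly
    return current

value = lambda x: -(x - 3) ** 2           # hypothetical objective, maximum at x = 3
neighbor = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, value, neighbor))
```

Early on, the high temperature lets the search escape local maxima; late in the run it behaves like hill climbing.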
Travelling Salesman Problem
In this algorithm, the objective is to find a low-cost tour that starts from a city, visits all cities en route
exactly once, and ends at the same starting city.
Start
Find out all (n-1)! possible tours, where n is the total number of cities.
Determine the minimum cost by finding out the cost of each of these (n-1)! tours.
Finally, keep the tour with the minimum cost.
End
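The brute-force procedure above can be written directly with `itertools.permutations`; the 4-city cost matrix is a made-up example. Enumerating all (n-1)! tours is only feasible for very small n, which is why the informed methods earlier in the chapter matter.

```python
from itertools import permutations

def tsp_brute_force(cost, start=0):
    """Enumerate all (n-1)! tours from `start` and return the cheapest one."""
    n = len(cost)
    others = [c for c in range(n) if c != start]
    best_tour, best_cost = None, float('inf')
    for perm in permutations(others):            # (n-1)! candidate tours
        tour = (start,) + perm + (start,)        # return to the starting city
        c = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
        if c < best_cost:
            best_tour, best_cost = tour, c
    return best_tour, best_cost

cost = [[0, 1, 9, 9],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [9, 9, 1, 0]]
print(tsp_brute_force(cost))  # ((0, 1, 2, 3, 0), 12)
```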
6.1.2 Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.
6.1.4 Agent Program:
Internally, the agent function for an artificial agent will be implemented by an agent program. It is
important to keep these two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running on the agent architecture.
To illustrate these ideas, we will use a very simple example-the vacuum-cleaner world shown in Fig
2.1.5. This particular world has just two locations: squares A and B. The vacuum agent perceives
which square it is in and whether there is dirt in the square. It can choose to move left, move right,
suck up the dirt, or do nothing. One very simple agent function is the following: if the current
square is dirty, then suck, otherwise move to the other square. A partial tabulation of this agent
function is shown in Fig 2.1.6.
Fig 2.1.6: Partial tabulation of a simple agent function for the example: vacuum-cleaner
world shown in the Fig 2.1.5
Fig 2.1.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept (location, status) and
returns an action each time.
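The simple agent function described above (suck if the current square is dirty, otherwise move to the other square) can be written directly; the concrete action names 'Suck', 'Left', and 'Right' are assumptions about the tabulation's labels.

```python
def reflex_vacuum_agent(percept):
    """Agent program for the two-square vacuum world (squares A and B)."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'                  # if the current square is dirty, suck
    return 'Right' if location == 'A' else 'Left'   # else move to the other square

# Partial tabulation of the agent function, cf. Fig 2.1.6
for percept in [('A', 'Clean'), ('A', 'Dirty'), ('B', 'Clean'), ('B', 'Dirty')]:
    print(percept, '->', reflex_vacuum_agent(percept))
```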
Strategies of Solving Tic-Tac-Toe Game Playing
Tic-Tac-Toe is a simple and yet an interesting board game. Researchers have used various approaches to
study the Tic-Tac-Toe game. For example, Fok and Ong and Grim et al. have used artificial neural
network based strategies to play it. Citrenbaum and Yakowitz discuss games like Go-Moku,
Hex and Bridg-It which share some similarities with Tic-Tac-Toe.
Fig 1.
The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the form of a 3x3 matrix (Fig.
1). The game is played by 2 players and either of them can start. Each of the two players is assigned a
unique symbol (generally 0 and X). Each player alternately gets a turn to make a move. Making a move is
compulsory and cannot be deferred. In each move a player places the symbol assigned to him/her in a
hitherto blank cell.
Let a track be defined as any row, column or diagonal on the board. Since the board is a square
matrix with 9 cells, all rows, columns and diagonals have exactly 3 cells. It can be easily observed that
there are 3 rows, 3 columns and 2 diagonals, and hence a total of 8 tracks on the board (Fig. 1). The goal
of the game is to fill all the three cells of any track on the board with the symbol assigned to one before
the opponent does the same with the symbol assigned to him/her. At any point of the game, if
there exists a track whose three cells have all been marked by the same symbol, then the player
to whom that symbol has been assigned wins and the game terminates. If there exists no track
whose cells have all been marked by the same symbol when there is no more blank cell on the board, then
the game is drawn.
Let the priority of a cell be defined as the number of tracks passing through it. The priorities of the
nine cells on the board according to this definition are tabulated in Table 1. Alternatively, let the
priority of a track be defined as the sum of the priorities of its three cells. The priorities of the eight
tracks on the board according to this definition are tabulated in Table 2. The prioritization of the cells
and the tracks lays the foundation of the heuristics to be used in this study. These heuristics are
somewhat similar to those proposed by Rich and Knight.
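The cell priorities of Table 1 can be recomputed from the definition above (priority of a cell = number of tracks passing through it); the code below is a reconstruction, since the table itself is not reproduced here.

```python
def cell_priorities():
    """Priority of each cell = number of tracks (rows, columns, diagonals) through it."""
    tracks = ([[(r, c) for c in range(3)] for r in range(3)] +              # 3 rows
              [[(r, c) for r in range(3)] for c in range(3)] +              # 3 columns
              [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]) # 2 diagonals
    prio = [[0] * 3 for _ in range(3)]
    for track in tracks:
        for r, c in track:
            prio[r][c] += 1
    return prio

for row in cell_priorities():
    print(row)
# corners lie on 3 tracks, edge cells on 2, and the center on 4
```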
Strategy 1:
Algorithm:
1. View the board vector as a ternary number and convert it to its decimal equivalent.
2. Use the computed number as an index into the Move-Table and access the vector stored there.
Procedure:
1) Elements of vector:
0: Empty
1: X
2: O
b) Element = A vector which describes the most suitable move from the current position.
Comments:
1. It takes a lot of space to store the Move-Table.
3. Difficult to extend.
Strategy 2:
Data Structure:
5: O
3) Turn of move: indexed by an integer (1, 2, 3, etc.)
Function Library:
1. Make2:
IF (board[5] = 2)
RETURN 5; //the center cell.
ELSE
RETURN any cell that is not at the board’s corner;
// (cell: 2,4,6,8)
Algorithm:
1. Turn = 1: (X moves)
Go(1) //make a move at the left-top cell
2. Turn = 2: (O moves)
IF board[5] is empty THEN
Go(5)
ELSE
Go(1)
3. Turn = 3: (X moves)
IF board[9] is empty THEN
Go(9)
ELSE
Go(3).
4. Turn = 4: (O moves)
IF Posswin (X) <> 0 THEN
Go (Posswin (X))
//Prevent the opponent from winning
ELSE Go (Make2)
5. Turn = 5: (X moves)
IF Posswin(X) <> 0 THEN
Go(Posswin(X))
//Win for X.
ELSE IF Posswin(O) <> 0 THEN
Go(Posswin(O))
//Prevent the opponent from winning
ELSE IF board[7] is empty THEN
Go(7)
ELSE Go(3).
Comments:
1. Not efficient in time, as it has to check several conditions before making each
move.
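A runnable sketch of the Posswin and Make2 helpers used by the algorithm above, assuming the nine-cell board is indexed 1 to 9 with the encoding from the data structure section (2 = blank, 3 = X, 5 = O). Rich and Knight test the product of a track's three values (e.g. 3 x 3 x 2 = 18 means X can complete that track); the count-based scan below is an equivalent, more readable check.

```python
BLANK, X, O = 2, 3, 5                       # cell encoding from the data structure

TRACKS = [(1, 2, 3), (4, 5, 6), (7, 8, 9),  # 3 rows
          (1, 4, 7), (2, 5, 8), (3, 6, 9),  # 3 columns
          (1, 5, 9), (3, 5, 7)]             # 2 diagonals

def posswin(board, p):
    """Return the cell where player p can complete a track this move, else 0."""
    for track in TRACKS:
        values = [board[c] for c in track]
        if values.count(p) == 2 and values.count(BLANK) == 1:
            return track[values.index(BLANK)]
    return 0

def make2(board):
    """Play the centre if it is blank, else a non-corner cell (2, 4, 6, 8)."""
    if board[5] == BLANK:
        return 5
    return next(c for c in (2, 4, 6, 8) if board[c] == BLANK)

board = {i: BLANK for i in range(1, 10)}
board[1] = board[2] = X                     # X threatens the top row
print(posswin(board, X))  # 3
print(make2(board))       # 5
```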
Searching Solutions:
To build a system to solve a problem:
1. Define the problem precisely
2. Analyze the problem
3. Isolate and represent the task knowledge that is necessary to solve the problem
4. Choose the best problem-solving technique and apply it to the particular problem.
Defining the problem as State Space Search:
The state space representation forms the basis of most of the AI methods.
• Formulate a problem as a state space search by showing the legal problem states, the legal
operators, and the initial and goal states.
• A state is defined by the specification of the values of all attributes of interest in the world
• An operator changes one state into the other; it has a precondition which is the value of certain
attributes prior to the application of the operator, and a set of effects, which are the attributes
altered by the operator
• The initial state is where you start
• The goal state is the partial description of the solution
Which search algorithm one should use will generally depend on the problem domain.
There are four important factors to consider:
1. Completeness – Is the technique guaranteed to find a solution if one exists?
2. Optimality – Is the solution found guaranteed to be the best (or lowest-cost) solution if there exists
more than one solution?
3. Time Complexity – The upper bound on the time required to find a solution, as a function of the
complexity of the problem.
4. Space Complexity – The upper bound on the storage space (memory) required at any point during the
search, as a function of the complexity of the problem.
Systematic Control Strategies (Blind searches):
Let us discuss these strategies using the water jug problem. These may be applied to any search problem.
Generate all the offspring of the root by applying each of the applicable rules to the initial state.
Now for each leaf node, generate all its successors by applying all the rules that are appropriate.
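The rule-by-rule generation of successors can be sketched concretely for the classic water jug problem; the 4- and 3-gallon capacities and the goal of 2 gallons in the larger jug are the usual formulation, assumed here since the text does not restate them. Breadth-first generation then finds a shortest sequence of states.

```python
from collections import deque

def water_jug(cap_a=4, cap_b=3, goal=2):
    """BFS over states (a, b); returns a shortest state sequence leaving
    `goal` gallons in the larger jug."""
    def successors(state):                     # the applicable rules
        a, b = state
        yield (cap_a, b)                       # fill A
        yield (a, cap_b)                       # fill B
        yield (0, b)                           # empty A
        yield (a, 0)                           # empty B
        pour = min(a, cap_b - b)               # pour A into B
        yield (a - pour, b + pour)
        pour = min(b, cap_a - a)               # pour B into A
        yield (a + pour, b - pour)

    start = (0, 0)
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(water_jug())   # a shortest solution takes 6 moves (7 states)
```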
8 Puzzle Problem.
The 8 puzzle consists of eight numbered, movable tiles set in a 3x3 frame. One cell of the frame is always
empty thus making it possible to move an adjacent numbered tile into the empty cell. Such a puzzle is
illustrated in following diagram.
The program is to change the initial configuration into the goal configuration. A solution to the problem
is an appropriate sequence of moves, such as "move tile 5 to the right, move tile 7 to the left, move tile
6 down, etc.".
Solution:
To solve a problem using a production system, we must specify the global database, the rules, and the
control strategy. For the 8-puzzle problem these three components correspond to the problem states,
moves, and goal. In this problem each tile configuration is a state. The set of all configurations is the
space of problem states, or the problem space; there are only 362,880 different configurations of the
8 tiles and the blank space. Once the problem states have been conceptually identified, we must construct
a computer representation, or description, of them. This description is then used as the database of a
production system. For the 8-puzzle, a straightforward description is a 3x3 array or matrix of numbers.
The initial global database is this description of the initial problem state. Virtually any kind of data
structure can be used to describe states.
A move transforms one problem state into another state. The 8-puzzle is conveniently interpreted as
having the following four moves: move the empty space (blank) to the left, move the blank up, move the
blank to the right, and move the blank down. These moves are modeled by production rules that operate
on the state descriptions in the appropriate manner.
The rules each have preconditions that must be satisfied by a state description in order for them to be
applicable to that state description. Thus the precondition for the rule associated with “move blank up”
is derived from the requirement that the blank space must not already be in the top row.
The problem goal condition forms the basis for the termination condition of the production system. The
control strategy repeatedly applies rules to state descriptions until a description of a goal state is
produced. It also keeps track of the rules that have been applied, so that it can compose them into a
sequence representing the problem solution. A solution to the 8-puzzle problem is given in the following
figure.
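The production-system view above can be sketched in Python: the four blank-moving rules with their preconditions, and a blind breadth-first control strategy. The goal layout is one common convention and is an assumption, since the text's figure is not reproduced here.

```python
from collections import deque

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # an assumed goal layout; 0 is the blank

def moves(state):
    """Production rules: slide the blank (0) up, down, left, or right."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:      # precondition: the move stays on the board
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]          # swap the blank with the adjacent tile
            yield tuple(s)

def solve(start, goal=GOAL):
    """Blind breadth-first control strategy over the problem space."""
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in moves(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

one_away = (1, 0, 3, 8, 2, 4, 7, 6, 5)       # a state one move from the goal
print(len(solve(one_away)) - 1)  # 1
```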
Example: Depth-First-Search traversal and Breadth-First-Search traversal
Search is the systematic examination of states to find a path from the start/root state to the goal state.
Many traditional search algorithms are used in AI applications. For complex problems, the traditional
algorithms are unable to find the solution within some practical time and space limits. Consequently,
many special techniques are developed; using heuristic functions. The algorithms that use heuristic
functions are called heuristic algorithms. Heuristic algorithms are not really intelligent; they appear to
be intelligent because they achieve better performance.
Heuristic algorithms are more efficient because they take advantage of feedback from the data to direct
the search path.
Uninformed Search:
Also called blind, exhaustive, or brute-force search; it uses no information about the problem to guide the
search and therefore may not be very efficient.
Informed Search:
Also called heuristic or intelligent search; it uses information about the problem to guide the search,
usually guesses the distance to a goal state, and is therefore more efficient, but the search may not always
be possible.
Breadth-First Search (BFS)
• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state to the end of NODE-LIST
BFS illustrated:
Step 1: Initially fringe contains only one node corresponding to the source state A.
Figure 1
FRINGE: A
Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated.
They are placed at the back of fringe.
Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put
at the back of fringe.
Figure 3
FRINGE: C D E
Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the
back of fringe.
Figure 4
FRINGE: D E D G
Step 5: Node D is removed from fringe. Its children C and F are generated and added to the back
of fringe.
Figure 5
FRINGE: E D G C F
Step 6: Node E is removed from fringe. It has no children, so nothing is added.
Figure 6
FRINGE: D G C F
Step 7: Node D is removed from fringe. Its children B and F are added to the back of fringe.
Figure 7
FRINGE: G C F B F
Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns the
path A C G by following the parent pointers of the node corresponding to G. The algorithm
terminates.
Depth-First Search (DFS)
• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state in front of NODE-LIST
DFS illustrated:
Step 1: Initially fringe contains only the node corresponding to the source state A.
Figure 1
FRINGE: A
Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of
fringe.
Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.
Figure 3
FRINGE: D E C
Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.
Figure 4
FRINGE: C F E C
Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.
Figure 5
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.
Figure 6
FRINGE: G F E C
Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the
search tree has infinite depth, the algorithm may not terminate. This can happen if the search space is
infinite. It can also happen if the search space contains cycles. The latter case can be handled by
checking for cycles in the algorithm. Thus Depth First Search is not complete.
Iterative Deepening Search (IDS)
Description:
• It is a search strategy resulting from combining BFS and DFS, taking the completeness and
optimality of BFS and the modest memory requirements of DFS.
• IDS works by looking for the best search depth d: it starts with depth limit 0 and runs a depth-limited
search; if the search fails, it increases the depth limit by 1 and tries again with depth 1, and so
on (first d = 0, then 1, then 2, and so on) until a depth d is reached where a goal is found.
Algorithm:
procedure IDDFS(root)
    for depth from 0 to ∞
        found ← DLS(root, depth)
        if found ≠ null
            return found

procedure DLS(node, depth)
    if depth = 0 and node is a goal
        return node
    else if depth > 0
        foreach child of node
            found ← DLS(child, depth - 1)
            if found ≠ null
                return found
    return null
Performance Measure:
o Completeness: IDS, like BFS, is complete when the branching factor b is finite.
o Optimality: IDS, like BFS, is optimal when the steps are of the same cost.
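The IDDFS/DLS pseudocode above translates almost line for line into Python; the example graph and the finite `max_depth` bound (standing in for "0 to infinity") are assumptions. Note that, exactly as in the pseudocode, DLS tests for the goal only when the depth counter reaches 0, so IDDFS reports the goal at the shallowest depth where it occurs.

```python
def dls(node, depth, goal, successors):
    """Depth-limited search, following the pseudocode's structure."""
    if depth == 0 and node == goal:       # goal is tested only at the depth limit
        return node
    if depth > 0:
        for child in successors.get(node, []):
            found = dls(child, depth - 1, goal, successors)
            if found is not None:
                return found
    return None

def iddfs(root, goal, successors, max_depth=20):
    """Try depth limits 0, 1, 2, ... until the goal is found."""
    for depth in range(max_depth + 1):
        found = dls(root, depth, goal, successors)
        if found is not None:
            return found, depth
    return None, None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['G']}
print(iddfs('A', 'G', graph))  # ('G', 2)
```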