

DEPARTMENT OF COMPUTER APPLICATIONS

UNIT I INTELLIGENT AGENTS AND UNINFORMED SEARCH

Introduction - Foundations of AI - History of AI - The state of the art - Risks and Benefits of AI - Intelligent Agents - Nature of Environment - Structure of Agent - Problem Solving Agents - Formulating Problems - Uninformed Search - Breadth First Search - Dijkstra's algorithm or uniform-cost search - Depth First Search - Depth Limited Search

What is Artificial Intelligence (AI)?

In today's world, technology is growing very fast, and we come into contact with new technologies every day.

One of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us. It is at work in a variety of subfields, ranging from the general to the specific: self-driving cars, playing chess, proving theorems, playing music, painting, and so on.

The definitions of AI given in textbooks fall into four approaches, summarized in the table below:

 Systems that think like humans
 Systems that act like humans
 Systems that think rationally
 Systems that act rationally

So, we can define AI as:

"It is a branch of computer science by which we can create intelligent machines that behave like humans, think like humans, and are able to make decisions."

The Foundations of Artificial Intelligence

Several disciplines contributed ideas, viewpoints, and techniques to AI. Any brief survey is forced to concentrate on a small number of people, events, and ideas and to ignore others that were also important. Each discipline is introduced here through the questions it asks.

1. Philosophy

• Can formal rules be used to draw valid conclusions?

• How does the mind arise from a physical brain?

• Where does knowledge come from?

• How does knowledge lead to action?

✒️ Rationalism, Dualism, Materialism, Empiricism, Induction, Logical Positivism, Confirmation Theory.

2. Mathematics

 What are the formal rules to draw valid conclusions?

 What can be computed?

 How do we reason with uncertain information?

The main three fundamental areas are logic, computation and probability.

✒️ Algorithm, incompleteness theorem, computability, tractability, NP-completeness, nondeterministic polynomial time, and probability.

3. Economics

 How should we make decisions so as to maximize payoff?

 How should we do this when others may not go along?

 How should we do this when the payoff may be far in the future?

✒️ Utility, Decision Theory, Game Theory, Operations Research.

4. Neuroscience

 How do brains process information?

Neuroscience is the study of the nervous system, especially the brain. We are still a long way from understanding how cognitive processes actually work. The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness; in other words, brains cause minds. The only real alternative theory is mysticism: that minds operate in some mystical realm that is beyond physical science.

5. Psychology

 How do humans and animals think and act?

✒️ Behaviourism, Cognitive psychology.

 The three key steps of a knowledge-based agent:

I. The stimulus must be translated into an internal representation.

II. The representation is manipulated by cognitive processes to derive new internal representations.

III. These are in turn retranslated back into action.

6. Computer engineering

 How can we build an efficient computer?

✒️ Operational computer and operational programmable computer

AI has pioneered many ideas that have made their way back to mainstream computer science,

including time sharing, interactive interpreters, personal computers with windows and mice, rapid

development environments, the linked list data type, automatic storage management, and key

concepts of symbolic, functional, declarative, and object-oriented programming.

7. Linguistics

 How does language relate to thought?

 Verbal Behavior — behaviorist approach to language learning

✒️ Computational linguistics or natural language processing and knowledge representation.

8. Control theory and cybernetics

 How can artifacts operate under their own control?

✒️ Control theory, homeostasis, and objective function.

-------------------------------------------------------------------------------------------------

History of AI

Artificial Intelligence is not a new word and not a new technology for researchers. This technology is much older than you would imagine; there are even myths of mechanical men in ancient Greek and Egyptian mythology. The following milestones in the history of AI trace the journey from AI's beginnings to its development today.

1950 Alan Turing publishes "Computing Machinery and Intelligence"

1952 Arthur Samuel develops a self-learning program to play checkers

1956 The term "Artificial Intelligence" is coined by John McCarthy at the Dartmouth Conference

1957 First programming language for numeric and scientific computing (FORTRAN)

1958 First AI programming language (LISP)

1959 Arthur Samuel used the term Machine Learning

1959 John McCarthy and Marvin Minsky founded the MIT Artificial Intelligence Project

1961 First industrial Robot (Unimate) on the assembly line at General Motors

1965 ELIZA by Joseph Weizenbaum was the first program that could communicate on
any topic

1972 First logic programming language (PROLOG)

1991 U.S. forces use DART (automated logistics planning and scheduling) in the Gulf War

1997 Deep Blue (IBM) beats the world champion in chess

2002 The first robot cleaner (Roomba)

2005 Self-driving car (STANLEY) wins DARPA

2008 Breakthrough in speech recognition (Google)

2011 A neural network wins over humans in traffic sign recognition (99.46% vs 99.22%)

2011 Apple Siri

2011 Watson (IBM) wins Jeopardy!

2014 Amazon Alexa

2014 Microsoft Cortana

2014 Self-driving car (Google) passes a state driving test

2015 Google AlphaGo defeated various human champions in the board game Go

2016 The humanoid robot Sophia by Hanson Robotics

-----------------------------------------------------------------------------------------------------------
The State of the Art or The Applications of AI
ROBOTIC VEHICLES:
The history of robotic vehicles stretches back to radio-controlled cars of the 1920s, but
the first demonstrations of autonomous road driving without special guides
occurred in the 1980s. STANLEY, a driverless robotic car, was outfitted with
cameras, radar, and laser rangefinders to sense the environment, and with onboard
software to command the steering, braking, and acceleration.
Speech recognition: A traveler calling United Airlines to book a flight can have the
entire conversation guided by an automated speech recognition and dialog
management system.
Autonomous planning and scheduling: A hundred million miles from Earth, NASA’s
Remote Agent program became the first on-board autonomous planning program to
control the scheduling of operations for a spacecraft . REMOTE AGENT generated plans
from high-level goals specified from the ground and monitored the execution of those
plans—detecting, diagnosing, and recovering from problems as they occurred.
Successor program MAPGEN plans the daily operations for NASA's Mars Exploration
Rovers, and MEXAR2 did mission planning—both logistics and science planning—for the
European Space Agency's Mars Express mission in 2008.
Game playing: IBM's DEEP BLUE became the first computer program to defeat the world
champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an
exhibition match.
Spam fighting: Each day, learning algorithms classify over a billion messages as spam,
saving the recipient from having to waste time deleting what, for many users, could
comprise 80% or 90% of all messages, if not classified away by algorithms. Because the
spammers are continually updating their tactics, it is difficult for a static programmed
approach to keep up, so learning algorithms work best.
Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART to do automated logistics planning and
scheduling for transportation.
Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum
cleaners for home use. The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify
the location of snipers.
Machine Translation: A computer program automatically translates one language to another.
-----------------------------------------------------------------------------------------------------------


Risk and benefits of AI

The advantages range from streamlining, saving time, eliminating biases, and automating
repetitive tasks, just to name a few. The disadvantages are things like costly implementation,
potential human job loss, and lack of emotion and creativity.
Artificial Intelligence (AI) has now become a part of our day-to-day lives. The scope for
innovation and development in AI is enormous and will continue changing the world in diverse
ways in the future.As with everything else, there are positives and negatives that come with AI.
Here are some remarkable benefits and risks of Artificial Intelligence that continue reshaping the
world of today.

Benefits

 Automation
 Smart Decision Making
 Enhanced Customer Experience
 Data Analysis
 Managing Repetitive Tasks
Risks
 Job Loss
 Privacy Violation
 Malicious Use of AI
 Safety Considerations
 Fairness and Bias Concerns
-----------------------------------------------------------------------------------------------------------

1.2 Agents and Environments

An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators. This simple idea is
illustrated in the figure below.

Figure: Agents interact with environments through sensors and actuators.

 A human agent has eyes, ears, and other organs for sensors and hands, legs,
vocal tract, and so on for actuators.
 A robotic agent might have cameras and infrared range finders for sensors and
various motors for actuators.
 A software agent receives file contents, network packets, and human input
(keyboard/mouse/touchscreen/voice) as sensory inputs and acts on the
environment by writing files, sending network packets, and displaying
information or generating sounds.

The term percept refers to the content an agent's sensors are perceiving. An agent's
percept sequence is the complete history of everything the agent has ever perceived.
An agent's behavior is described by the agent function, which maps any given percept
sequence to an action.

Consider a simple example—the vacuum-cleaner world, which consists of a robotic
vacuum-cleaning agent in a world of squares that can be either dirty or clean.

 The vacuum agent perceives which square it is in and whether there is dirt in the square.
The agent starts in square A. The available actions are to move to the right, move to the
left, suck up the dirt, or do nothing.
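The vacuum world above can be sketched as a small Python function that maps a percept directly to an action (the square names A and B and the action names come from the description; the percept representation as a tuple is an assumption):

```python
# Simple reflex vacuum agent: the action depends only on the current percept.
def vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ("A", "Dirty")."""
    location, status = percept
    if status == "Dirty":
        return "Suck"          # clean the current square first
    elif location == "A":
        return "Right"         # move to the other square
    else:
        return "Left"

print(vacuum_agent(("A", "Dirty")))   # Suck
print(vacuum_agent(("A", "Clean")))   # Right
```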

This leads to a definition of a rational agent: for each possible percept sequence, a
rational agent should select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence and whatever built-in
knowledge the agent has. Related notions are omniscience, learning, and autonomy.
---------------------------------------------------------------------------------------------------------
The Nature of Environments

Task environments are essentially the "problems" to which rational agents are the
"solutions." The nature of the task environment directly affects the appropriate design
for the agent program.

Specifying the task environment
When designing an agent, the first step is to specify the task environment as fully as possible using
PEAS (Performance measure, Environment, Actuators, Sensors).

Example: PEAS description of the task environment for an automated taxi driver.

Performance measure
Desirable qualities include getting to the correct destination; minimizing fuel
consumption and wear and tear; minimizing the trip time or cost; minimizing violations
of traffic laws and disturbances to other drivers; maximizing safety and passenger
comfort; maximizing profits.
Environment
Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban
alleys to 12- lane freeways. The roads contain other traffic, pedestrians, stray animals,
road works, police cars, puddles, and potholes.

Actuators
The actuators for an automated taxi include those available to a human driver: control
over the engine through the accelerator and control over steering and braking. In
addition, it will need output to a display screen or voice synthesizer to talk back to the
passengers, and perhaps some way to communicate with other vehicles, politely or
otherwise.

Sensors
The basic sensors for the taxi will include one or more video cameras and ultrasound
sensors to detect distances to other cars and obstacles.

Figure: PEAS description of the task environment for an automated taxi.

Figure: Examples of agent types and their PEAS descriptions

Properties of task environments(Types of environment )

(i)FULLY OBSERVABLE VS. PARTIALLY OBSERVABLE:


 If an agent's sensors give it access to the complete state of the environment at
each point in time, then the task environment is fully observable.
 Fully observable environments are convenient because the agent need not
maintain any internal state to keep track of the world.
 An environment might be partially observable because of noisy and inaccurate
sensors or because parts of the state are simply missing from the sensor data.
 For example, a vacuum agent with only a local dirt sensor cannot tell whether
there is dirt in other squares.
 If the agent has no sensors at all, then the environment is unobservable.

(ii) SINGLE-AGENT VS. MULTIAGENT:

In a single-agent environment there is a single well-defined agent that takes the decisions.
For example, an agent solving a crossword puzzle by itself is clearly in a single-agent
environment. In a multiagent environment a group of agents are involved in taking the
decisions; for example, an agent playing chess is in a two-agent environment.

Multiagent environments are of two types:
 Competitive and
 Cooperative

(iii) DETERMINISTIC VS. NONDETERMINISTIC.


 If the next state of the environment is completely determined by the current state
and the action executed by the agent(s), then the environment is deterministic;
otherwise, it is nondeterministic. Taxi driving is clearly nondeterministic in this
sense, because one can never predict the behavior of traffic exactly;
 The vacuum world is deterministic, but variations can include nondeterministic
elements such as randomly appearing dirt and an unreliable suction mechanism.
 The word stochastic is sometimes used as a synonym for "nondeterministic," but we
make a distinction between the two terms: a model of the environment is stochastic
if it explicitly deals with probabilities (e.g., "there's a 25% chance of rain tomorrow")
and "nondeterministic" if the possibilities are listed without being quantified (e.g.,
"there's a chance of rain tomorrow").

(iv) EPISODIC VS. SEQUENTIAL:


 In an episodic task environment, the agent’s experience is divided into atomic
episodes. In each episode the agent receives a percept and then performs a single
action.
 The next episode does not depend on the actions taken in previous episodes.
 Many classification tasks are episodic. For example, an agent that has to spot
defective parts on an assembly line bases each decision on the current part,
regardless of previous decisions; moreover, the current decision doesn't affect
whether the next part is defective.
 In sequential environments, on the other hand, the current decision could affect
all future decisions.
 Chess and taxi driving are sequential: in both cases, short-term actions can have
long-term consequences.

 Episodic environments are much simpler than sequential environments because
the agent does not need to think ahead.

(v) STATIC VS. DYNAMIC:


 If the environment can change while an agent is deliberating, then the
environment is dynamic for that agent; otherwise, it is static.
 Static environments are easy to deal with because the agent need not keep
looking at the world while it is deciding on an action, nor need it worry about the
passage of time.
 Dynamic environments, on the other hand, are continuously asking the agent
what it wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.
If the environment itself does not change with the passage of time but the
agent’s performance score does, then the environment is semidynamic.
 Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving
while the driving algorithm dithers about what to do next. Chess, when played
with a clock, is semidynamic. Crossword puzzles are static.

(vi) DISCRETE VS. CONTINUOUS:


 A discrete environment has a finite number of distinct states over time. Each state
has associated percepts and actions on the agent. Eg: Chess has a discrete set of
percepts and actions.
 In a continuous environment, states and time are not divided into a fixed set of
distinct values but vary smoothly. Eg: Taxi driving is a continuous-state and
continuous-time problem: the speed and location of the taxi and of the other
vehicles sweep through a range of continuous values.

(vii) KNOWN VS. UNKNOWN:


 In a known environment, the outcomes (or outcome probabilities if the
environment is nondeterministic) for all actions are given. Obviously, if the
environment is unknown, the agent will have to learn how it works in order to
make good decisions.

Figure: Examples of task environments and their characteristics.

-----------------------------------------------------------------------------------------------------------

1.3 The Structure of Agents


The job of AI is to design an agent program that implements the agent function—the
mapping from percepts to actions.

The agent program runs on some sort of computing device with physical sensors and
actuators; this is called the architecture.
In general, the architecture makes the percepts from the sensors available to the
program, runs the program, and feeds the program's action choices to the actuators as
they are generated.

agent = architecture + program

Agent programs
 The agent programs all have the same skeleton: they take the current
percept as input from the sensors and return an action to the actuators.

 The difference between the two is that the agent program takes the current
percept as input, whereas the agent function may depend on the entire percept
history.

 The agent program has to take just the current percept as input because nothing
more is available from the environment; if the agent's actions need to depend on
the entire percept sequence, the agent will have to remember the percepts.

 The agent programs are described using a simple pseudocode language

 For example, the figure shows a trivial agent program that keeps track of the
percept sequence and then uses it to index into a table of actions to
decide what to do.
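That table-driven agent can be sketched as follows; the table entries below are illustrative, not taken from the figure (a real table would need one entry for every possible percept sequence, which is why the design is impractical):

```python
# Table-driven agent: the entire percept sequence indexes into a table of actions.
def make_table_driven_agent(table):
    percepts = []                       # percept sequence observed so far
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))   # look up the whole history
    return agent

# Toy table for the two-square vacuum world (illustrative entries only).
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right
```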

The four basic kinds of agent programs that embody the principles underlying
almost all intelligent systems:
 Simple reflex agents;
 Model-based reflex agents;
 Goal-based agents; and
 Utility-based agents.

Problem Solving Agents

When the correct action to take is not immediately obvious, an agent may need to
plan ahead: to consider a sequence of actions that form a path to a goal state. Such an
agent is called a problem-solving agent, and the computational process it undertakes
is called search.

The agent can follow this four-phase problem-solving process:


GOAL FORMULATION: Goals organize behavior by limiting the objectives and
hence the actions to be considered.

PROBLEM FORMULATION: The agent devises a description of the states and actions
necessary to reach the goal—an abstract model of the relevant part of the world.

SEARCH: Before taking any action in the real world, the agent simulates sequences of
actions in its model, searching until it finds a sequence of actions that reaches the goal.
Such a sequence is called a solution. The agent might have to simulate multiple
sequences that do not reach the goal, but eventually it will find a solution (such as going
from Arad to Sibiu to Fagaras to Bucharest), or it will find that no solution is possible.

EXECUTION: The agent can now execute the actions in the solution, one at a time.
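The phases above rest on an explicit problem model. A minimal sketch of such a model in Python, using the Arad-to-Bucharest route-finding example mentioned under SEARCH (the road distances and the class and method names are assumptions for illustration):

```python
# Assumed subset of the Romania road map used in the route-finding example.
ROADS = {
    "Arad": {"Sibiu": 140, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Bucharest": {"Fagaras": 211},
    "Zerind": {"Arad": 75},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial, self.goal = initial, goal   # goal formulation
    def actions(self, state):                     # actions available in a state
        return list(ROADS[state])
    def result(self, state, action):              # transition model
        return action                             # driving to a city = being there
    def is_goal(self, state):
        return state == self.goal
    def action_cost(self, state, action):
        return ROADS[state][action]

p = RouteProblem("Arad", "Bucharest")
print(p.actions("Arad"))   # ['Sibiu', 'Zerind']
```

A search algorithm then works only through this interface, which is what makes the SEARCH phase a simulation in the model rather than action in the real world.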

It is an important property that in a fully observable, deterministic, known environment,
the solution to any problem is a fixed sequence of actions. If the model is correct, then
once the agent has found a solution, it can ignore its percepts while it is executing the
actions, because the solution is guaranteed to lead to the goal. Control theorists call
this an open-loop system: ignoring the percepts breaks the loop between agent and
environment. If there is a chance that the model is incorrect, or the environment is
nondeterministic, then the agent would be safer using a closed-loop approach that
monitors the percepts.

-----------------------------------------------------------------------------------------------------------
Formulating problems

A problem formulation is a model—an abstract mathematical description—and not the
real thing. The process of removing detail from a representation is called abstraction. A
good problem formulation has the right level of detail. The abstraction is valid if any
abstract solution can be elaborated into a solution in the more detailed world; the
abstraction is useful if carrying out each of the actions in the solution is easier than
solving the original problem. The choice of a good abstraction thus involves removing as
much detail as possible while retaining validity and ensuring that the abstract actions
are easy to carry out.
Uninformed Search Strategies (Blind Search)
The term blind means that the strategies have no additional information about states
beyond what is provided in the problem definition. All they can do is generate successors
and distinguish a goal state from a non-goal state. All search strategies are distinguished
by the order in which nodes are expanded. Strategies that know whether one non-goal
state is "more promising" than another are called informed search or heuristic search
strategies.
The uninformed search strategies covered here are:
o Breadth-first search
o Uniform-cost search
o Depth-first search
o Depth-limited search

Breadth-First Search Algorithm

Breadth-First Search, or BFS, is one of the most widely used traversal methods.

BFS is a graph traversal approach in which you start at a source node and move through
the graph layer by layer, first analyzing the nodes directly related to the source node. Then,
in BFS traversal, you move on to the next-level neighbor nodes.

According to the BFS, you must traverse the graph in a breadthwise direction:

 To begin, move horizontally and visit all the current layer's nodes.

 Continue to the next layer.


Breadth-First Search uses a queue data structure to store each node and mark it as
"visited" until it has marked all the neighboring vertices directly related to it. The queue
operates on the First In First Out (FIFO) principle, so a node's neighbors are processed in
the order in which they were inserted, starting with the node that was inserted first.

Following the definition of breadth-first search, you will next look at how the BFS
algorithm works.

How Does the BFS Algorithm Work?

Breadth-First Search uses a queue data structure to store the vertices. The queue
follows the First In First Out (FIFO) principle, which means that the neighbors of a node
are processed beginning with the node that was put in first.

The BFS traversal classifies nodes in two ways:

 Visited nodes

 Not-visited nodes

How Does the Algorithm Operate?

 Start with the source node.

 Add that node to the queue and to the visited list.

 Mark the unvisited nodes adjacent to that vertex as visited and enqueue them.

 Dequeue each node once it has been processed.

 Repeat these actions until the queue is empty.

Pseudocode

BFS(G, s)                       // G is the graph and s is the source node
    let Q be a queue
    Q.enqueue(s)                // insert s in the queue
    mark s as visited
    while Q is not empty
        v = Q.dequeue()         // remove the vertex whose neighbours will be visited now
        for all neighbours w of v in graph G
            if w is not visited
                Q.enqueue(w)    // store w in Q to visit its neighbours later
                mark w as visited
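The pseudocode above can be turned into a runnable Python sketch (the adjacency-dict representation and the sample graph are illustrative):

```python
from collections import deque

def bfs(graph, source):
    """Return the nodes of `graph` in BFS visit order from `source`."""
    visited = {source}
    order = []
    queue = deque([source])         # FIFO queue
    while queue:
        v = queue.popleft()         # oldest node first
        order.append(v)
        for w in graph[v]:          # neighbours of v
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": [], "F": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D', 'E', 'F']
```

Note that the traversal proceeds layer by layer: first A, then its neighbors B and C, then their neighbors.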

----------------------------------------------------------------------------------------------------------

Dijkstra’s algorithm or uniform-cost search

Dijkstra's algorithm finds the shortest path from one vertex to all other vertices.

It does so by repeatedly selecting the nearest unvisited vertex and calculating the distance to all
the unvisited neighboring vertices.

Dijkstra's algorithm is often considered the most straightforward algorithm for solving the
shortest path problem.

Dijkstra's algorithm is used for solving single-source shortest path problems on directed or
undirected graphs. Single-source means that one vertex is chosen to be the start, and the
algorithm will find the shortest path from that vertex to all other vertices.

Dijkstra's algorithm does not work for graphs with negative edges. For graphs with negative
edges, the Bellman-Ford algorithm can be used instead.

To find the shortest path, Dijkstra's algorithm needs to know which vertex is the source, it needs a
way to mark vertices as visited, and it needs an overview of the current shortest distance to each
vertex as it works its way through the graph, updating these distances when a shorter distance is
found.

Algorithm for Dijkstra’s Algorithm:


1. Mark the source node with a current distance of 0 and all other nodes with infinity.
2. Set the unvisited node with the smallest current distance as the current node.
3. For each neighbour N of the current node, add the current distance of the current node
to the weight of the edge connecting it to N. If the sum is smaller than the current
distance of N, set it as the new current distance of N.
4. Mark the current node as visited.
5. Go to step 2 if any nodes remain unvisited.
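The steps above can be sketched in Python. This version uses a binary heap as the priority queue rather than the linear scan implied by step 2 (the sample graph is illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Return shortest distances from `source` to every vertex."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0                        # step 1
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)          # step 2: nearest unvisited vertex
        if u in visited:
            continue                        # stale heap entry
        visited.add(u)                      # step 4
        for v, w in graph[u].items():       # step 3: relax each neighbour
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"A": {"B": 4, "C": 1}, "B": {"D": 1},
         "C": {"B": 2, "D": 6}, "D": {}}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

With all edge costs equal to 1, this uniform-cost search behaves like breadth-first search.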

-----------------------------------------------------------------------------------------------------------

Depth-first search and the problem of memory


Depth-First Search in AI
Depth-first search is a traversing algorithm used in tree and graph-like data structures. It
always expands the deepest node in the frontier: starting at the root node, the algorithm
proceeds to the deepest level of the search tree, until nodes with no successors are
reached. When all of a node's successors have been expanded, the search backtracks to
the next deepest node with unexplored alternative paths.

Depth-first search (DFS) explores a graph by selecting a path and traversing it as deeply as
possible before backtracking.
 It starts at the root node, then expands along one branch until it reaches a dead end,
then backtracks to the most recent unexplored node, repeating until all nodes are visited
or a specific condition is met. (For example, starting from node A, DFS explores its
successor B, then proceeds to B's descendants until reaching a dead end at node D. It
then backtracks to node B and explores its remaining successor, E.)
 This systematic exploration continues until all nodes are visited or the search terminates.
(In this example, after exploring all the descendants of B, DFS explores the right-side
node C, then F, then G. After node G is explored, all the nodes have been visited and the
search terminates.)

Step-wise DFS Pseudocode Explanation


1. Initialize an empty stack and an empty set for visited vertices.
2. Push the start vertex onto the stack.
3. While the stack is not empty:
 Pop the current vertex.
 If it’s the goal vertex, return “Path found”.
 Add the current vertex to the visited set.
 Get the neighbors of the current vertex.
 For each neighbor not visited, push it onto the stack.
4. If the loop completes without finding the goal, return “Path not found”.
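The stack-based steps above can be sketched in Python (the sample graph is illustrative; the return strings follow the pseudocode):

```python
def dfs(graph, start, goal):
    """Stack-based DFS that reports whether a path to `goal` exists."""
    stack = [start]                    # step 1-2: stack seeded with the start vertex
    visited = set()
    while stack:                       # step 3
        v = stack.pop()                # pop the current (most recent) vertex
        if v == goal:
            return "Path found"
        if v in visited:
            continue
        visited.add(v)
        for w in graph[v]:             # push unvisited neighbours
            if w not in visited:
                stack.append(w)
    return "Path not found"            # step 4

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": [], "F": ["G"], "G": []}
print(dfs(graph, "A", "G"))   # Path found
print(dfs(graph, "A", "Z"))   # Path not found
```

Because the stack is LIFO, the most recently discovered neighbour is expanded first, which is what drives the search deep before it backtracks.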
-------------------------------------------------------------------------------------------------

Depth-Limited Search Algorithm:

A depth-limited search algorithm is a depth-first search with a predetermined depth limit.

Depth-limited search overcomes the drawback of infinite paths in depth-first search. In this
algorithm, a node at the depth limit is treated as if it has no successor nodes.

Depth-limited search can terminate with two conditions of failure:

o Standard failure value: indicates that the problem does not have any solution.
o Cutoff failure value: indicates that there is no solution for the problem within the given depth limit.

Advantages:

o Depth-Limited Search restricts the search depth of the tree, so the algorithm requires
less memory than BFS (Breadth-First Search) and even IDDFS (Iterative Deepening
Depth-First Search). Because of the depth restriction, DLS avoids having to hold the
entire search tree in memory, which makes it a memory-efficient choice for certain
kinds of problems.
o When a node sits at the maximum allowed depth, its children are not generated, and
the node is simply discarded from the stack.
o Depth-Limited Search avoids the infinite paths that can arise in classical depth-first
search when the graph contains cycles.

Disadvantages:

o Depth-limited search is incomplete: if the shallowest goal lies beyond the depth limit,
it will not be found.
o It may not be optimal if the problem has more than one solution.
o The effectiveness of the Depth-Limited Search (DLS) algorithm depends largely on the
depth limit specified. If the depth limit is set too low, the algorithm may fail to find
the solution altogether.


Completeness: The DLS algorithm is complete if the solution lies within the depth limit.

Time Complexity: The time complexity of DLS is O(b^ℓ), where b is the branching factor of
the search tree and ℓ is the depth limit.

Space Complexity: The space complexity of DLS is O(b×ℓ), where b is the branching factor
of the search tree and ℓ is the depth limit.

Optimality: Depth-limited search can be viewed as a special case of DFS, and it is likewise
not optimal, even when ℓ > d.
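A recursive Python sketch of depth-limited search, returning the cutoff and failure values described above (the sample tree and all names are illustrative):

```python
def dls(graph, node, goal, limit):
    """Depth-limited search: returns a path, "cutoff", or "failure"."""
    if node == goal:
        return [node]                        # solution path found
    if limit == 0:
        return "cutoff"                      # depth limit reached; no successors generated
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result != "failure":
            return [node] + result           # propagate the solution path upward
    # "cutoff": there may be a solution deeper down; "failure": there is none.
    return "cutoff" if cutoff_occurred else "failure"

tree = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dls(tree, "A", "D", 1))   # cutoff  (D lies below the limit)
print(dls(tree, "A", "D", 2))   # ['A', 'B', 'D']
```

Running the same search with increasing limits 0, 1, 2, ... is exactly iterative deepening (IDDFS).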
