AI Unit-II
Problem Solving
• Solving Problem by Searching
• Problem Solving Agents
• Example Problems
• Search Algorithms
• Uninformed Search Strategies
• Informed (Heuristic) Search Strategies
• Heuristic Functions
• Search in Complex Environment
• Local Search and Optimization Problems
In artificial intelligence, problem-solving refers to techniques such as designing efficient algorithms, applying heuristics, and performing root cause analysis to find desirable solutions.
The basic purpose of artificial intelligence is to solve problems just like humans do.
1.Perception: Problem-solving agents typically have the ability to perceive or sense their
environment to gather information about the current state of the world, often through sensors,
cameras, or other data sources.
2.Knowledge Base: These agents often possess some form of knowledge of the problem
domain. This knowledge can be encoded in various ways, such as rules, facts, or models,
depending on the specific problem.
5.Actuation: After determining the best course of action, problem-solving agents take actions to
interact with their environment. This can involve physical actions in the case of robotics or
making decisions in more abstract problem-solving domains.
6.Feedback: Problem-solving agents often receive feedback from their environment, which
they use to adjust their actions and refine their problem-solving strategies.
•Breadth-first search
•Depth-first search
•Depth-limited search
•Iterative deepening depth-first search
•Bidirectional search
•Uniform cost search
1. Breadth-first search
It is one of the most common search strategies. It starts from the root node, examines the neighbor nodes, and then moves to the next level. It uses a First-In First-Out (FIFO) queue, so in an unweighted graph it finds the path with the fewest edges to the solution.
BFS is used where the given problem is small and space complexity is not a concern.
Now, consider the following tree.
Here, let’s take node A as the start state and node F as the goal state.
The BFS algorithm starts with the start state and then visits the nodes level by level until it reaches the goal state.
In this example, it starts from A, travels to the next level to visit B and C, and then travels to the next level to visit D, E, F and G. Here, the goal state is defined as F, so the traversal stops at F.
• Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.
• Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the
node.
• Time Complexity: The time complexity of BFS is given by the number of nodes traversed until the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor:
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
• Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Disadvantages of BFS
•BFS stores all the nodes of the current level before going to the next level, so it requires a lot of memory.
•BFS takes more time to reach a goal state that is far away.
Applications:-
• In AI, BFS is used in traversing a game tree to find the best move.
• It can be used to find the paths between two vertices.
• Breadth-first search can be used in the implementation of web crawlers to explore the links
on a website with depth or level of a tree limited.
• Shortest Path finding for unweighted graph: In an unweighted graph, With Breadth First,
we always reach a vertex from a given source using the minimum number of edges.
• GPS Navigation systems: Breadth First Search is used to find all neighboring locations.
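The level-by-level FIFO behaviour described above can be sketched in Python. The example tree is shown in the notes only as a figure, so the adjacency list below is reconstructed from the traversal described (A's children are B and C; B's are D and E; C's are F and G):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explore level by level using a FIFO queue."""
    frontier = deque([[start]])          # queue of paths, oldest first
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: take the oldest path
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                          # goal not reachable

# The example tree from the notes: A is the start state, F the goal state.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(tree, "A", "F"))   # ['A', 'C', 'F']
```

Because the queue is FIFO, nodes are expanded in the order A, B, C, D, E, F, matching the level-order traversal in the notes.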
2. Depth-first search
The depth-first search uses Last-in, First-out (LIFO) strategy and hence it can be
implemented by using stack. DFS uses backtracking. That is, it starts from the initial state
and explores each path to its greatest depth before it moves to the next path.
DFS will follow
Root node —-> Left node —-> Right node
Now, consider the same example tree mentioned above.
Here, it starts from the start state A, travels to B, and then goes down to D. After reaching D, it backtracks to B and explores B's next child, E. After E it backtracks to B again, and since B is now fully explored, it goes back to A. It then goes to C and on to F. F is the goal state, so the search stops there.
• Completeness: DFS search algorithm is complete within finite state space as it will expand
every node within a limited search tree.
• Optimal: DFS search algorithm is non-optimal, as it may generate a large number of steps or
high cost to reach to the goal node.
• Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(n) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
• where m is the maximum depth of the search tree, which can be much larger than d (the depth of the shallowest solution).
• Space Complexity: DFS needs to store only a single path from the root node, so its space complexity is equivalent to the size of the fringe set, which is O(bm) (branching factor times maximum depth).
Disadvantages of DFS
•DFS does not always guarantee to give a solution.
•As DFS goes deep down, it may get trapped in an infinite loop.
Applications:-
•It can be used to find the paths between two vertices.
• Depth-first search can be used in the implementation of web crawlers to explore the links on a
website.
•Backtracking: DFS can be used for backtracking in algorithms like the N-Queens problem or
Sudoku.
•Solving puzzles: DFS can be used to solve puzzles such as mazes, where the goal is to find a
path from the start to the end.
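The LIFO behaviour can be sketched with an explicit stack; the adjacency list again assumes the example tree reconstructed from the notes:

```python
def dfs(graph, start, goal):
    """Depth-first search: explore one path to full depth using a LIFO stack."""
    stack = [[start]]                    # stack of paths, newest on top
    visited = set()
    while stack:
        path = stack.pop()               # LIFO: take the most recent path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push the right child first so the left child is explored first
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in visited:
                stack.append(path + [neighbor])
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs(tree, "A", "F"))   # ['A', 'C', 'F']
```

This visits nodes in the order A, B, D, E, C, F, matching the backtracking trace in the notes.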
3. Depth-limited search
Depth-limited search works similarly to depth-first search. The difference is that depth-limited search has a pre-defined limit up to which it can traverse the nodes. Depth-limited search solves one of the drawbacks of DFS, as it does not follow an infinitely deep path.
DLS ends its traversal if either of the following conditions exists.
Standard failure value: it denotes that the given problem does not have any solution.
Cutoff failure value: it denotes that there is no solution within the given depth limit.
• Completeness: DLS is complete if the shallowest solution lies within the depth limit.
Disadvantages of DLS
•DLS may not offer an optimal solution if the problem has more than one solution.
•DLS also encounters incompleteness.
Applications:-
• The algorithm can be used in the large search space to limit the depth of exploration in a
search tree, reducing the search space and improving the search time.
• Game Playing: DLS can be used in game playing to limit the search depth while evaluating
game states(Tree)to limit the depth of exploration and improve the search time.
• Natural Language Processing: DLS can be used in natural language processing, for the best
parse tree or translation, reducing the search space and improving the search time.
• Web Crawling: DLS can be used in web crawling to limit the number of pages crawled,
improving the crawling speed and reducing the load on the web server.
• Robotics: DLS can be used in robotics for path planning and obstacle avoidance, when
searching for the best path, reducing the search space and improving the path planning time.
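The cutoff behaviour can be sketched as a recursive DFS that distinguishes failure (no solution at all) from cutoff (the limit was hit, so a solution may still exist deeper):

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that stops expanding below a preset depth limit.
    Returns a path, or 'cutoff' if the limit was hit, or None on failure."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"              # a solution may exist below the limit
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(depth_limited_search(tree, "A", "F", 2))   # ['A', 'C', 'F']
print(depth_limited_search(tree, "A", "F", 1))   # 'cutoff'
```

With limit 1 the goal F (at depth 2) is unreachable, so the search reports a cutoff rather than outright failure.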
4. Iterative deepening depth-first search
Iterative deepening depth-first search is a combination of depth-first search and breadth-first search. IDDFS finds the best depth limit by gradually increasing the limit until the defined goal state is reached.
Let us try to explain this with the same example tree.
Consider, A as the start node and E as the goal node. Let the maximum depth be 2.
The algorithm starts with A and searches the first level for E. Not finding it there, it increases the depth limit, goes to the next level, and finds E.
• Optimal: IDDFS algorithm is optimal if path cost is a non- decreasing function of the
depth of the node.
• Time Complexity: Suppose b is the branching factor and d is the depth; then the worst-case time complexity is O(b^d).
Disadvantages of IDDFS
•It repeats all the work of the previous stages again and again.
Application
IDDFS may not be used directly in many applications of computer science, but it is useful for searching very large (effectively infinite) state spaces by incrementing the depth limit iteratively. This is quite useful and has applications in AI and the emerging data science industry.
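The "gradually increase the limit" idea above can be sketched as a loop of depth-limited searches; the example reproduces the notes' scenario with start A, goal E, found at depth 2:

```python
def iddfs(graph, start, goal, max_depth=10):
    """Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ..."""
    def dls(node, path, limit):
        if node == goal:
            return path
        if limit == 0:
            return None
        for child in graph.get(node, []):
            result = dls(child, path + [child], limit - 1)
            if result:
                return result
        return None

    for limit in range(max_depth + 1):   # gradually increase the depth limit
        result = dls(start, [start], limit)
        if result:
            return result
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iddfs(tree, "A", "E"))   # ['A', 'B', 'E']
```

Note how the limit-0 and limit-1 passes repeat work that the limit-2 pass does again; this is exactly the disadvantage listed above.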
5. Bidirectional search
The bidirectional search algorithm is quite different from the other search strategies. It executes two simultaneous searches, a forward search and a backward search, to reach the goal state. Here, the graph is divided into two smaller sub-graphs. In one, the search starts from the initial start state, and in the other, the search starts from the goal state (for example, with BFS). When the two frontiers intersect, the search terminates.
Bidirectional search requires both the start and goal states to be well defined, and the branching factor to be the same in the two directions.
Consider the below graph.
Here, the start state is E and the goal state is G. In one sub-graph, the search starts from E
and in the other, the search starts from G. E will go to B and then A. G will go to C and
then A. Here, both the traversal meets at A and hence the traversal ends.
Application:-
Bidirectional search can be combined with other search algorithms, such as Breadth-First
Search (BFS), Depth-First Search (DFS), or A* search. Combining bidirectional search
with these algorithms can lead to faster and more efficient searches, as it can leverage the
benefits of both search techniques.
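A minimal sketch of bidirectional BFS follows; the undirected graph below reproduces the E-B-A-C-G example from the notes, where the two frontiers meet at A:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Run two BFS frontiers, one from each end, until they meet."""
    if start == goal:
        return [start]
    # parent maps for path reconstruction in each direction
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in parents:
                parents[neighbor] = node
                if neighbor in other_parents:   # frontiers intersect here
                    return neighbor
                frontier.append(neighbor)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

graph = {"E": ["B"], "B": ["E", "A"], "A": ["B", "C"],
         "C": ["A", "G"], "G": ["C"]}
print(bidirectional_search(graph, "E", "G"))   # ['E', 'B', 'A', 'C', 'G']
```

Each frontier only has to cover about half the distance, which is the source of the speedup mentioned above.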
6. Uniform cost search
Uniform cost search is considered the best search algorithm for a weighted graph or graph
with costs. It searches the graph by giving maximum priority to the lowest cumulative cost.
Uniform cost search can be implemented using a priority queue.
Consider the below graph where each node has a pre-defined cost.
Here, S is the start node and G is the goal node.
From S, G can be reached in the following ways.
S, A, E, F, G -> 19
S, B, E, F, G -> 18
S, B, D, F, G -> 19
S, C, D, F, G -> 23
Here, the path with the least cost is S, B, E, F, G.
Performance evaluation of UCS
•Completeness: UCS is complete if the branching factor b is finite.
•Optimality: Uniform-cost search is always optimal as it only selects a path with the lowest
path cost.
Time complexity:
Let C* be the cost of the optimal solution, and ε the minimum cost of a single step toward the goal node. Then the number of steps is C*/ε + 1 (we add 1 because we start from state 0 and end at C*/ε).
Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
•Space complexity:
By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Advantages of UCS
•This algorithm is optimal as the selection of paths is based on the lowest cost.
Disadvantages of UCS
•The algorithm does not consider the number of steps it takes to reach the lowest-cost path, which may result in an infinite loop.
Applications:-
•This algorithm is mainly used when the step costs are not the same but the optimal solution to the goal state is needed. In such cases, uniform cost search finds the goal and the path, including the cumulative cost of expanding each node from the root node to the goal node.
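A priority-queue sketch of UCS follows. The notes' graph is shown only as a figure, so the edge costs below are illustrative choices that reproduce the four path costs listed above (19, 18, 19, 23), making S-B-E-F-G the cheapest route at cost 18:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always expand the frontier node with the lowest cumulative path cost."""
    frontier = [(0, start, [start])]     # priority queue of (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbor, path + [neighbor]))
    return None

# Illustrative edge costs (the notes' figure is not reproduced here).
graph = {
    "S": [("A", 5), ("B", 2), ("C", 6)],
    "A": [("E", 4)], "B": [("E", 6), ("D", 7)], "C": [("D", 7)],
    "E": [("F", 5)], "D": [("F", 5)],
    "F": [("G", 5)],
}
print(uniform_cost_search(graph, "S", "G"))   # (18, ['S', 'B', 'E', 'F', 'G'])
```

Goal testing happens when a node is popped, not when it is generated; this is what makes UCS optimal for non-negative step costs.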
What is the Heuristic Function?
A heuristic is a function that determines how near a state is to the desired state. The majority
of AI problems revolve around a large amount of information, data, and constraints, and the
task is to find a way to reach the goal state.
OR
A heuristic function, also simply called a heuristic, is a function that ranks alternatives in
search algorithms at each branching step based on available information to decide which
branch to follow. For example, it may approximate the exact solution.
1. Greedy best-first search
Greedy best-first search always expands the node that appears closest to the goal, i.e. the node with the lowest heuristic value h(n).
• Completeness: Greedy best-first search is incomplete, even if the given state space is finite.
• Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
• Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m).
Where, m is the maximum depth of the search tree.
Advantages of Greedy best-first search
•Greedy best-first search is more efficient compared with breadth-first search and depth-first
search.
Applications
•Pathfinding: Greedy BF Search is used to find the shortest path between two points in a graph.
•Game AI: To evaluate potential moves and choose the best one.
•Navigation: To find the shortest path between two locations.
•Machine Learning: Greedy Best-First Search can be used in machine learning algorithms to
find the most promising path through a search space.
•Natural Language Processing: For translation or speech recognition to generate the most
likely sequence of words.
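Greedy best-first search can be sketched with a priority queue ordered by h(n) alone; the tiny graph and heuristic values below are hypothetical, for illustration only:

```python
import heapq

def greedy_best_first(graph, heuristic, start, goal):
    """Expand the node that *looks* closest to the goal, i.e. the one with
    the smallest heuristic value h(n); path cost so far is ignored."""
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (heuristic[neighbor], neighbor, path + [neighbor]))
    return None

# Hypothetical graph and h values for illustration.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 2, "B": 3, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))   # ['S', 'A', 'G']
```

Note that only h(n) drives the queue; since actual step costs are never consulted, the path found is not guaranteed to be the cheapest one.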
2. A* search algorithm
A* search algorithm is a combination of both uniform cost search and greedy best-first search
algorithms. It uses the advantages of both with better memory usage. It uses a heuristic
function to find the shortest path. A* search algorithm uses the sum of both the cost and
heuristic of the node to find the best path.
Consider the following graph with the heuristics values as follows.
Let A be the start node and H be the goal node.
Applications
The A* algorithm is widely used in various domains for pathfinding and optimization
problems. It has applications in robotics, video games, route planning, logistics, and artificial
intelligence. In robotics, A* helps robots navigate obstacles and find optimal paths.
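The f(n) = g(n) + h(n) rule can be sketched as follows. Since the notes' figure for nodes A through H is not reproduced here, the graph, costs, and heuristic values below are hypothetical (chosen to be admissible):

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """Expand by f(n) = g(n) + h(n): actual cost so far plus estimated
    cost to the goal, combining UCS and greedy best-first search."""
    frontier = [(heuristic[start], 0, start, [start])]   # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                g2 = g + step_cost
                heapq.heappush(frontier,
                               (g2 + heuristic[neighbor], g2,
                                neighbor, path + [neighbor]))
    return None

# Hypothetical graph and heuristic values for illustration only.
graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 3)],
         "C": [("D", 1)], "D": [("H", 2)]}
h = {"A": 5, "B": 4, "C": 2, "D": 2, "H": 0}
print(a_star(graph, h, "A", "H"))   # (6, ['A', 'B', 'D', 'H'])
```

Because h never overestimates the true remaining cost here, A* returns the optimal path A-B-D-H at cost 6 rather than A-C-D-H at cost 7.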
Uninformed Search (Blind Search) vs Informed Search (Heuristic Search)
• Uninformed search has no information about the path cost from the current state to the goal state, and does not use domain-specific knowledge for the searching process. Informed search estimates the path cost from the current state to the goal state in order to select the minimum-cost path as the next state, and uses domain-specific knowledge for the searching process.
• Uninformed search finds a solution slowly compared to informed search, which finds a solution more quickly.
• Uninformed search gives no suggestion regarding the solution; the problem must be solved with the given information only. Informed search provides direction regarding the solution, and additional information can be added as assumptions to solve the problem.
Travelling Salesman Problem
The travelling salesman problem is a graph computational problem in which the salesman must visit every city in a list exactly once (cities are represented by nodes in a graph), and the distances between all these cities are known (represented by edges in the graph). The required solution is the shortest possible route of minimum cost in which the salesman visits all the cities and returns to the origin city.
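For small instances the problem can be solved by exhaustive search over all orderings; the 4-city distance matrix below is hypothetical, for illustration:

```python
from itertools import permutations

def tsp_brute_force(dist, start=0):
    """Try every ordering of the remaining cities and keep the cheapest
    tour that returns to the start. Feasible only for small n, since
    the number of orderings grows as O(n!)."""
    cities = [c for c in range(len(dist)) if c != start]
    best_cost, best_tour = float("inf"), None
    for perm in permutations(cities):
        tour = (start,) + perm + (start,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Symmetric distance matrix for 4 hypothetical cities.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))   # (80, (0, 1, 3, 2, 0))
```

The factorial growth is exactly why heuristic and local search methods, discussed later in this unit, matter for larger instances.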
8-Puzzle Problem
A solution to the problem is an appropriate sequence of moves, such as "move tile 5 to the right, move tile 7 to the left, move tile 6 down", etc. In this problem each tile configuration is a state. A move transforms one problem state into another, with the aim of reaching the goal state in the minimum number of moves. The goal condition forms the basis for termination.
The control strategy repeatedly applies rules to state descriptions until a description of a goal state is produced.
It also keeps track of the rules that have been applied, so that it can compose them into a sequence representing the problem solution.
Tower of Hanoi Problem
For N = 3 disks on tower A, with C as the destination and B as the auxiliary tower (only one disk can be moved among the towers at any given time):
1. Shift the top N-1 disks from A to B, using C.
2. Shift the last (largest) disk from A to C.
3. Shift the N-1 disks from B to C, using A.
The Tower of Hanoi is solved from tower A to tower C in the minimum number of steps, i.e. 2^N - 1 = 7 for N = 3.
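The recursive decomposition above translates directly into code; each call moves N-1 disks aside, moves the largest disk, then moves the N-1 disks on top of it:

```python
def hanoi(n, source, target, auxiliary, moves=None):
    """Move n disks from source to target using the auxiliary peg.
    Only one disk moves at a time, never onto a smaller disk."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
        return moves
    hanoi(n - 1, source, auxiliary, target, moves)   # step 1: N-1 disks A -> B
    moves.append((source, target))                   # step 2: largest disk A -> C
    hanoi(n - 1, auxiliary, target, source, moves)   # step 3: N-1 disks B -> C
    return moves

moves = hanoi(3, "A", "C", "B")
print(len(moves))   # 7 moves, i.e. 2**3 - 1
```

The recurrence T(N) = 2·T(N-1) + 1 with T(1) = 1 gives exactly the 2^N - 1 move count stated above.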
Water Jug Problem
The Water Jug Problem is a classic puzzle in artificial intelligence involving two jugs, one
with a capacity of ‘x’ liters and the other ‘y’ liters, and a water source. The goal is to
measure a specific ‘z’ liters of water using these jugs, with no volume markings. It’s a test of
problem-solving and state space search, where the initial state is both jugs empty and the
goal is to reach a state where one jug holds ‘z’ liters. Various operations like filling,
emptying, and pouring between jugs are used to find an efficient sequence of steps to
achieve the desired water measurement.
Using State Space Search
State space search is a fundamental concept in AI that involves exploring the possible states of a problem to reach a desired goal state. Each state represents a specific configuration of water in the jugs. The initial state is when both jugs are empty, and the goal state is when one of the jugs holds 'z' liters of water. The search algorithm explores different states by applying various operations such as filling a jug, emptying it, or pouring water from one jug into the other.
Production Rules for Water Jug Problem
In AI, production rules are often used to represent knowledge and make decisions. In the
case of the Water Jug Problem, production rules define the set of operations that can be
applied to transition from one state to another. These rules include:
Fill Jug X: Fill jug X to its full capacity.
Fill Jug Y: Fill jug Y to its full capacity.
Empty Jug X: Empty the jug X.
Empty Jug Y: Empty the Jug Y.
Pour from X to Y: Pour water from jug X into jug Y until either jug X is empty or jug Y is full.
Pour from Y to X: Pour water from jug Y to jug X until either jug Y is empty or jug X
is full.
The above listed 6 production rules are extended to 10 production rules in the tabular form given below, to reach any amount of water as a goal in either jug, X or Y.
S.No. | Initial State | Condition | Final State | Description of action taken
1. | (x, y) | if x < 4 | (4, y) | Fill the 4-gallon jug completely
2. | (x, y) | if y < 3 | (x, 3) | Fill the 3-gallon jug completely
3. | (x, y) | if x > 0 | (0, y) | Empty the 4-gallon jug
4. | (x, y) | if y > 0 | (x, 0) | Empty the 3-gallon jug
5. | (x, y) | if x + y >= 4 and y > 0 | (4, y - (4 - x)) | Pour water from the 3-gallon jug into the 4-gallon jug until it is full
6. | (x, y) | if x + y >= 3 and x > 0 | (x - (3 - y), 3) | Pour water from the 4-gallon jug into the 3-gallon jug until it is full
7. | (x, y) | if x > 0 | (x - d, y) | Pour some part (d gallons) out of the 4-gallon jug
8. | (x, y) | if y > 0 | (x, y - d) | Pour some part (d gallons) out of the 3-gallon jug
9. | (x, y) | if x + y <= 4 and y > 0 | (x + y, 0) | Pour all water from the 3-gallon jug into the 4-gallon jug
10. | (x, y) | if x + y <= 3 and x > 0 | (0, x + y) | Pour all water from the 4-gallon jug into the 3-gallon jug
Example:-In the water jug problem, there are two water jugs: one having the capacity X to
hold 4 gallons of water and the other has the capacity Y to hold 3 gallons of water. There is no
other measuring equipment available and the jugs also do not have any kind of marking on
them. The agent’s task here is to fill the 4-gallon jug with 2 gallons of water by using only
these two jugs and no other material. Initially, both our jugs are empty, (X=0,Y=0).
The listed production rules contain all the actions that could be performed by the agent in transferring the contents of the jugs. But to solve the water jug problem in a minimum number of moves, the following sequence of rules should be performed:
Solution of the water jug problem according to the production rules:
Step | 4-gallon jug contents (X jug) | 3-gallon jug contents (Y jug) | Action taken (rule applied)
1. | 0 gallons | 0 gallons | Initial state
2. | 0 gallons | 3 gallons | Fill the 3-gallon jug completely (Rule 2)
3. | 3 gallons | 0 gallons | Pour all water from the 3-gallon jug into the 4-gallon jug (Rule 9)
4. | 3 gallons | 3 gallons | Fill the 3-gallon jug completely (Rule 2)
5. | 4 gallons | 2 gallons | Pour water from the 3-gallon jug into the 4-gallon jug until it is full (Rule 5)
6. | 0 gallons | 2 gallons | Empty the 4-gallon jug (Rule 3)
7. | 2 gallons | 0 gallons | Pour all water from the 3-gallon jug into the 4-gallon jug (Rule 9)
Finally, 2 gallons of water is in the X jug, which is the final goal to be reached. (The rules are written just for understanding from the previous table; only the states need to be mentioned.)
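The whole search can be sketched as a BFS over (x, y) states, applying the pour and fill rules above; BFS guarantees the solution found uses a minimum number of actions:

```python
from collections import deque

def water_jug(x_cap=4, y_cap=3, goal=2):
    """Breadth-first search over (x, y) states; returns a shortest
    sequence of actions leaving `goal` gallons in the x (4-gallon) jug."""
    start = (0, 0)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), actions = frontier.popleft()
        if x == goal:
            return actions
        pour_yx = min(x + y, x_cap)      # pour Y -> X until X full or Y empty
        pour_xy = min(x + y, y_cap)      # pour X -> Y until Y full or X empty
        successors = [
            ((x_cap, y), "fill X"),
            ((x, y_cap), "fill Y"),
            ((0, y), "empty X"),
            ((x, 0), "empty Y"),
            ((pour_yx, x + y - pour_yx), "pour Y->X"),
            ((x + y - pour_xy, pour_xy), "pour X->Y"),
        ]
        for state, action in successors:
            if state not in visited:
                visited.add(state)
                frontier.append((state, actions + [action]))
    return None

print(water_jug())   # a shortest solution: 6 actions
```

For the (4, 3, 2) instance the shortest solution takes 6 actions, matching the 6 pours and fills in the solution table above (the exact sequence returned depends on successor ordering, since several 6-step solutions exist).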
• Search in Complex Environments
The problems addressed earlier assumed fully observable, deterministic, static, known environments where the solution is a sequence of actions. In this section those constraints are relaxed. We begin with the problem of finding a good state without worrying about the path to get there, covering both discrete and continuous states. In a nondeterministic world, the agent will need a conditional plan and will carry out different actions depending on what it observes: for example, stopping if the light is red and going if it is green. With partial observability, the agent will also need to keep track of the possible states it might be in.
The basic working principle of a local search algorithm involves the following steps:
•Initialization: Start with an initial solution, which can be generated randomly or through
some heuristic method.
•Evaluation: Evaluate the quality of the initial solution using an objective function or a
fitness measure. This function quantifies how close the solution is to the desired outcome.
•Neighbor Generation: Generate a set of neighboring solutions by making minor changes to
the current solution. These changes are typically referred to as "moves."
•Selection: Choose one of the neighboring solutions based on a criterion, such as the
improvement in the objective function value. This step determines the direction in which the
search proceeds.
•Termination: Continue the process iteratively, moving to the selected neighboring solution,
and repeating steps 2 to 4 until a termination condition is met. This condition could be a
maximum number of iterations, reaching a predefined threshold, or finding a satisfactory
solution.
Local Search Algorithms
Several local search algorithms are commonly used in AI and optimization problems.
1. Hill Climbing
Hill climbing is a straightforward local search algorithm that starts with an initial solution and
iteratively moves to the best neighboring solution that improves the objective function. Here's
how it works:
•Initialization: Begin with an initial solution, often generated randomly or using a heuristic
method.
•Evaluation: Calculate the quality of the initial solution using an objective function or fitness
measure.
•Neighbor Generation: Generate neighboring solutions by making small changes (moves) to
the current solution.
•Selection: Choose the neighboring solution that results in the most significant improvement in
the objective function.
•Termination: Continue this process until a termination condition is met (e.g., reaching a
maximum number of iterations or finding a satisfactory solution).
Hill-climbing search
Consider the states of a problem laid out in a state-space landscape, as shown in Figure below.
Each point (state) in the landscape has an “elevation,” defined by the value of the objective
function. If elevation corresponds to an objective function, then the aim is to find the highest
peak—a global maximum—and we call the process hill climbing. If elevation corresponds to
cost, then the aim is to find the lowest valley—a global minimum—and we call it gradient
descent.
The hill-climbing search algorithm keeps track of one current state and on each iteration moves to the neighboring state with the highest value, terminating when it reaches a "peak" where no neighbor has a higher value. One way to use hill-climbing search is to use the negative of a heuristic cost function as the objective function; that will climb locally to the state with the smallest heuristic distance to the goal.
[Figure: A one-dimensional state-space landscape in which elevation corresponds to the objective function. The aim is to find the global maximum.]
• Local maxima: A local maximum is a peak that is higher than each of its neighboring states
but lower than the global maximum.
• Ridges: A ridge is a narrow, ascending region of the landscape. Ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.
• Plateaus: A plateau is a flat area of the state-space landscape. It can be a flat local
maximum, from which no uphill exit exists, or a shoulder, from which progress is possible.
Hill-climbing search is the most basic local search technique. At each step the current node is replaced by the best neighbor.
Hill climbing algorithm for the 8-queens problem.
We will use a complete-state formulation, which means that every state has all the components
of a solution, but they might not all be in the right place. In this case every state has 8 queens
on the board, one per column. The initial state is chosen at random, and the successors of a state
are all possible states generated by moving a single queen to another square in the same column
(so each state has 8×7=56 successors). The heuristic cost function ℎ is the number of pairs of
queens that are attacking each other; this will be zero only for solutions. (It counts as an attack
if two pieces are in the same line, even if there is an intervening piece between them.) Figure
(b) shows a state that has ℎ=17. The figure also shows the ℎ values of all its successors.
(a) The 8-queens problem: place 8
queens on a chess board so that no
queen attacks another. (A queen
attacks any piece in the same row,
column, or diagonal.) This position
is almost a solution, except for the
two queens in the fourth and seventh
columns that attack each other along
the diagonal.
(b) An 8-queens state with heuristic
cost estimate ℎ=17. The board shows
the value of ℎ for each possible
successor obtained by moving a
queen within its column. There are 8
moves that are tied for best,
with ℎ=12. The hill climbing-
algorithm will pick one of these.
Board before playing any one of the best moves (heuristic value ℎ=12), and the board after such a move is played (ℎ=12).
Hill climbing can make rapid progress toward a solution because it is usually quite easy to improve a bad state. For example, from the state in Figure (b), it takes just five steps to reach the state in Figure (a), which has ℎ=1 and is very nearly a solution.
Hill climbing has a limitation in that it can get stuck in local optima, which are solutions that
are better than their neighbors but not necessarily the best overall solution. To overcome this
limitation, variations of hill climbing algorithms have been developed, such as stochastic hill
climbing and simulated annealing.
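The steepest-ascent variant described above can be sketched for the 8-queens problem, using the complete-state formulation from the notes (h = number of attacking pairs; each state has 8×7 = 56 successors). The random-restart loop at the end is one standard way to escape local optima:

```python
import random

def conflicts(state):
    """h = number of pairs of queens attacking each other.
    state[c] is the row of the queen in column c."""
    n = len(state)
    return sum(1
               for c1 in range(n) for c2 in range(c1 + 1, n)
               if state[c1] == state[c2]
               or abs(state[c1] - state[c2]) == c2 - c1)

def hill_climb(n=8):
    """Steepest-ascent hill climbing from a random complete state:
    repeatedly move one queen within its column to the successor with
    the lowest h; stop when no successor improves on the current state."""
    state = [random.randrange(n) for _ in range(n)]
    while True:
        current_h = conflicts(state)
        best_h, best_move = current_h, None
        for col in range(n):
            for row in range(n):
                if row != state[col]:
                    neighbor = state[:]
                    neighbor[col] = row
                    h = conflicts(neighbor)
                    if h < best_h:
                        best_h, best_move = h, (col, row)
        if best_move is None:
            return state, current_h    # peak: local or global optimum
        state[best_move[0]] = best_move[1]

# Plain hill climbing often gets stuck at a local optimum (h > 0);
# random restarts keep trying fresh starts until a solution (h = 0) appears.
state, h = hill_climb()
while h != 0:
    state, h = hill_climb()
print(state, conflicts(state))   # a solution with 0 attacking pairs
```

A single run frequently returns a local optimum with h > 0, which illustrates the limitation described above; restarting from new random states is the simplest remedy, while stochastic hill climbing and simulated annealing are more principled alternatives.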