AI Unit-II

The document discusses problem-solving techniques in artificial intelligence, focusing on various search algorithms such as uninformed and informed search strategies, including breadth-first search, depth-first search, and uniform cost search. It outlines the characteristics of problem-solving agents, the properties of search algorithms, and performance evaluations based on completeness, optimality, time, and space complexity. Additionally, it provides examples of AI problems and applications of different search techniques in real-world scenarios.

Uploaded by Amit Pujari

UNIT – II

Problem Solving
• Solving Problem by Searching
• Problem Solving Agents
• Example Problems
• Search Algorithms
• Uninformed Search Strategies
• Informed (Heuristic) Search Strategies
• Heuristic Functions
• Search in Complex Environment
• Local Search and Optimization Problems
In artificial intelligence, problem solving refers to techniques such as designing efficient algorithms, applying heuristics, and performing root-cause analysis to find desirable solutions.
The basic purpose of artificial intelligence is to solve problems just as humans do.

Examples of Problems in Artificial Intelligence


Artificial intelligence techniques are widely used to automate systems so that resources and time are used efficiently. Some well-known problems from everyday life, such as games and puzzles, that are solved by AI include:
•Travelling Salesman Problem
•Tower of Hanoi Problem
•Water-Jug Problem
•N-Queen Problem
•Chess
•Sudoku
•Crypt-arithmetic Problems
•Magic Squares
•Logical Puzzles and so on.
• Problem Solving Agents
In artificial intelligence, a problem-solving agent refers to a type of intelligent agent designed to
address and solve complex problems in its environment. These agents are a fundamental
concept in AI and are used in various applications, from game-playing algorithms to robotics
and decision-making systems. The important characteristics and components of a problem-solving agent are:

1.Perception: Problem-solving agents typically have the ability to perceive or sense their
environment to gather information about the current state of the world, often through sensors,
cameras, or other data sources.

2.Knowledge Base: These agents often possess some form of knowledge of the problem
domain. This knowledge can be encoded in various ways, such as rules, facts, or models,
depending on the specific problem.

3.Reasoning: Problem-solving agents employ reasoning mechanisms to make decisions and select actions based on their perception and knowledge. This involves processing information, making inferences, and selecting the best course of action.
4.Planning: For many complex problems, problem-solving agents engage in planning. They
consider different sequences of actions to achieve their goals and decide on the most suitable
action plan.

5.Actuation: After determining the best course of action, problem-solving agents take actions to
interact with their environment. This can involve physical actions in the case of robotics or
making decisions in more abstract problem-solving domains.

6.Feedback: Problem-solving agents often receive feedback from their environment, which
they use to adjust their actions and refine their problem-solving strategies.

7.Learning: Some problem-solving agents incorporate machine learning techniques to improve their performance over time. They can learn from experience, adapt their strategies, and become more efficient at solving similar problems in the future.
Problem Solving Techniques

In artificial intelligence, problems can be solved by using searching algorithms, evolutionary computation, knowledge representation, etc.
In this article, I am going to discuss the various searching techniques that are used to solve a problem.

In general, searching refers to finding the information one needs.
The process of problem-solving using searching consists of the following steps.

•Define the problem
•Analyze the problem
•Identification of possible solutions
•Choosing the optimal solution
•Implementation
Properties of search algorithms (Performance Evaluation)

• Completeness:-A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for the given input.
• Optimality:-If a solution found is best (lowest path cost) among all the solutions identified,
then that solution is said to be an optimal one.
• Time complexity:-The time taken by an algorithm to complete its task is called time
complexity. If the algorithm completes a task in a lesser amount of time, then it is an
efficient one.
• Space complexity:-It is the maximum storage or memory taken by the algorithm at any time
while searching.

Important parameters to be considered before evaluating a search technique

1.Maximum number of successors of any state of the search tree, i.e., the branching factor b of the search tree.
2.Minimal length of the path in the state space between the initial and a goal state.
3.Depth d of the shallowest goal node in the search tree.
4.Maximum depth m of the state space.
Types of search algorithms
Based on the search problems, we can classify the search algorithm as
•Uninformed search
•Informed search

Uninformed search algorithms


The uninformed search algorithm does not use any domain knowledge, such as the closeness or location of the goal state. It only knows how to traverse the given tree and how to recognize the goal state. This algorithm is also known as the blind search algorithm.

The uninformed search strategies are of six types.

•Breadth-first search
•Depth-first search
•Depth-limited search
•Iterative deepening depth-first search
•Bidirectional search
•Uniform cost search
1. Breadth-first search
It is one of the most common search strategies. It generally starts from the root node, examines the neighbouring nodes, and then moves to the next level. It uses a First-In First-Out (FIFO) queue, and in an unweighted graph it gives the shortest path to the solution.

BFS is used where the given problem is very small and space complexity is not considered.
Now, consider the following tree.
Here, let’s take node A as the start state and node F as the goal state.
The BFS algorithm starts with the start state and then visits the nodes level by level until it reaches the goal state.
In this example, it starts from A, travels to the next level and visits B and C, and then travels to the next level and visits D, E, F and G. Here, the goal state is defined as F, so the traversal stops at F.

The path of traversal is:


A —-> B —-> C —-> D —-> E —-> F
Performance evaluation of BFS

• Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.

• Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the
node.

• Time Complexity: The time complexity of the BFS algorithm is obtained from the number of nodes traversed until the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor:
T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

• Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).

• Counted in terms of generated nodes, time and space complexity is O(b^(d+1)).

• In terms of vertices V and edges E, time and space complexity is O(|V| + |E|).


Advantages of BFS
•BFS will never be trapped in unwanted branches of the tree.
•If the graph has more than one solution, then BFS will return the optimal solution, i.e., the path with the fewest edges.

Disadvantages of BFS
•BFS stores all the nodes of the current level before going to the next level, so it requires a lot of memory.
•BFS takes more time to reach a goal state that is far away.

Applications:-
• In AI, BFS is used in traversing a game tree to find the best move.
• It can be used to find the paths between two vertices.
• Breadth-first search can be used in the implementation of web crawlers to explore the links
on a website with depth or level of a tree limited.
• Shortest Path finding for unweighted graph: In an unweighted graph, With Breadth First,
we always reach a vertex from a given source using the minimum number of edges.
• GPS Navigation systems: Breadth First Search is used to find all neighboring locations.
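The level-by-level expansion described above can be sketched in Python. The adjacency list below encodes the example tree from the text (A at the root, F the goal); note that BFS visits nodes in the order A, B, C, D, E, F, while the function returns only the start-to-goal path.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search using a FIFO queue of partial paths."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()          # FIFO: oldest path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:    # avoid revisiting nodes
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Example tree from the text
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(tree, "A", "F"))  # ['A', 'C', 'F']
```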
2. Depth-first search
The depth-first search uses Last-in, First-out (LIFO) strategy and hence it can be
implemented by using stack. DFS uses backtracking. That is, it starts from the initial state
and explores each path to its greatest depth before it moves to the next path.
DFS will follow
Root node —-> Left node —-> Right node
Now, consider the same example tree mentioned above.
Here, it starts from the start state A, travels to B, and then goes to D. After reaching D (a leaf), it backtracks to B and explores the next child in depth, E. E is also a leaf, so it backtracks to B and then to A. From A, it goes to C and then to F. F is our goal state, so it stops there.

The path of traversal is:


A —-> B —-> D —-> E —-> C —-> F
Performance evaluation of DFS

• Completeness: The DFS algorithm is complete within a finite state space, as it will eventually expand every node in the search tree; in an infinite state space it may fail to terminate.

• Optimal: DFS search algorithm is non-optimal, as it may generate a large number of steps or
high cost to reach to the goal node.

• Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(n) = 1 + b + b^2 + ... + b^m = O(b^m)
where m is the maximum depth of the search tree, which can be much larger than d (the shallowest solution depth).

• Space Complexity: The DFS algorithm needs to store only a single path from the root node, so its space complexity is equivalent to the size of the fringe set, which is O(bm) (b successors stored at each of m levels).

• In terms of vertices V and edges E, time and space complexity is O(|V| + |E|).


Advantages of DFS
•It takes less memory compared to BFS.
•Its time complexity is lower than that of BFS when the goal lies deep along an early branch.
•DFS may reach a goal without exploring much of the search space.

Disadvantages of DFS
•DFS does not always guarantee to give a solution.
•As DFS goes deep down, it may get trapped in an infinite loop.

Applications:-
•It can be used to find the paths between two vertices.
• Depth-first search can be used in the implementation of web crawlers to explore the links on a
website.
•Backtracking: DFS can be used for backtracking in algorithms like the N-Queens problem or
Sudoku.
•Solving puzzles: DFS can be used to solve puzzles such as mazes, where the goal is to find a
path from the start to the end.
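A minimal recursive sketch of the strategy above, using the same example tree (visiting order A, B, D, E, C, F):

```python
def dfs(graph, node, goal, path=None, visited=None):
    """Depth-first search: follow one branch to its deepest node,
    then backtrack and try the next branch."""
    if path is None:
        path, visited = [node], {node}
    if node == goal:
        return path
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            visited.add(neighbor)
            result = dfs(graph, neighbor, goal, path + [neighbor], visited)
            if result is not None:
                return result              # goal found down this branch
    return None                            # dead end: backtrack

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs(tree, "A", "F"))  # ['A', 'C', 'F']
```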
3. Depth-limited search
Depth-limited works similarly to depth-first search. The difference here is that depth-limited
search has a pre-defined limit up to which it can traverse the nodes. Depth-limited search
solves one of the drawbacks of DFS as it does not go to an infinite path.
DLS ends its traversal if either of the following conditions occurs.

Standard failure
It denotes that the given problem does not have any solution.

Cutoff failure
It indicates that there is no solution for the problem within the given depth limit.
Now, consider the same example.
Let’s take A as the start node and C as the goal state and limit as 1.
The traversal first starts with node A and then goes to level 1, where the goal state C is found, so the traversal stops.

The path of traversal is:


A —-> C
If we give C as the goal node and the limit as 0, the algorithm will not return any path as the
goal node is not available within the given limit.
If we give the goal node as F and limit as 2, the path will be A, C, F.
Performance evaluation of DLS

• Completeness: The DLS algorithm is complete if the shallowest solution lies within the depth limit.

• Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.

• Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

• Space Complexity: Space complexity of DLS algorithm is O(b×ℓ).

where ℓ= depth limit


Advantages of DLS
•It takes less memory compared to other search techniques.

Disadvantages of DLS
•DLS may not offer an optimal solution if the problem has more than one solution.
•DLS also encounters incompleteness.

Applications:-
• The algorithm can be used in the large search space to limit the depth of exploration in a
search tree, reducing the search space and improving the search time.
• Game Playing: DLS can be used in game playing to limit the search depth while evaluating
game states(Tree)to limit the depth of exploration and improve the search time.
• Natural Language Processing: DLS can be used in natural language processing, for the best
parse tree or translation, reducing the search space and improving the search time.
• Web Crawling: DLS can be used in web crawling to limit the number of pages crawled,
improving the crawling speed and reducing the load on the web server.
• Robotics: DLS can be used in robotics for path planning and obstacle avoidance, when
searching for the best path, reducing the search space and improving the path planning time.
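The cutoff behaviour can be sketched as a DFS that refuses to expand below a given limit; the three queries mirror the examples in the text:

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited search: DFS that stops expanding at depth `limit`."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit <= 0:
        return None                        # cutoff reached
    for neighbor in graph.get(node, []):
        result = dls(graph, neighbor, goal, limit - 1, path + [neighbor])
        if result is not None:
            return result
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dls(tree, "A", "C", 1))  # ['A', 'C']
print(dls(tree, "A", "C", 0))  # None -- goal lies below the limit
print(dls(tree, "A", "F", 2))  # ['A', 'C', 'F']
```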
4. Iterative deepening depth-first search
Iterative deepening depth-first search is a combination of depth-first search and breadth-first search. IDDFS finds the best depth limit by gradually increasing the limit until the defined goal state is reached.
Let us try to explain this with the same example tree.
Consider, A as the start node and E as the goal node. Let the maximum depth be 2.
The algorithm starts with A and goes to the next level and searches for E. If not found, it
goes to the next level and finds E.

The path of traversal is


A —-> B —-> E
Performance evaluation of IDDFS

• Completeness: This algorithm is complete if the branching factor is finite.

• Optimal: IDDFS algorithm is optimal if path cost is a non- decreasing function of the
depth of the node.

• Time Complexity: If b is the branching factor and d the depth of the shallowest solution, then the worst-case time complexity is O(b^d).

• Space Complexity: The space complexity of IDDFS is O(bd), i.e., linear in b×d.


Advantages of IDDFS
•IDDFS has the advantages of both BFS and DFS.
•It offers fast search and uses memory efficiently.

Disadvantages of IDDFS
•It repeats all the work of the previous stages in every iteration.

Application
IDDFS might not be used directly in many computer-science applications, but it is used for searching very large or unbounded spaces by progressively incrementing the depth limit. This is quite useful and has applications in AI and the emerging data science industry.
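A self-contained sketch combines a depth-limited DFS with an outer loop that raises the limit by one each round; the example mirrors the text (start A, goal E, maximum depth 2):

```python
def iddfs(graph, start, goal, max_depth):
    """Iterative deepening DFS: repeat a depth-limited DFS,
    raising the limit by one each round, until the goal is found."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                    # cutoff for this round
        for neighbor in graph.get(node, []):
            result = dls(neighbor, limit - 1, path + [neighbor])
            if result is not None:
                return result
        return None

    for limit in range(max_depth + 1):     # limits 0, 1, 2, ...
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iddfs(tree, "A", "E", 2))  # ['A', 'B', 'E']
```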
5. Bidirectional search
The bidirectional search algorithm is completely different from all other search strategies.
It executes two simultaneous searches called forward-search and backwards-search and
reaches the goal state. Here, the graph is divided into two smaller sub-graphs. In one
graph, the search is started from the initial start state and in the other graph, the search is
started from the goal state with BFS search. When these two nodes intersect each other,
the search will be terminated.
Bidirectional search requires both the start and goal states to be well defined and the branching factor to be the same in the two directions.
Consider the below graph.
Here, the start state is E and the goal state is G. In one sub-graph, the search starts from E
and in the other, the search starts from G. E will go to B and then A. G will go to C and
then A. Here, both the traversal meets at A and hence the traversal ends.

The path of traversal is


E —-> B —-> A —-> C —-> G
Performance evaluation of BDS

• Completeness: Bidirectional search is complete if we use BFS in both searches.

• Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).

• Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

• Optimal: Bidirectional search is optimal when BFS is used in both directions and all step costs are identical.


Advantages of bidirectional search
•This algorithm searches the graph fast.
•It requires less memory to complete its action.

Disadvantages of bidirectional search
•The goal state should be known in advance.
•The algorithm is quite difficult to implement.

Application:-

Bidirectional search can be combined with other search algorithms, such as Breadth-First
Search (BFS), Depth-First Search (DFS), or A* search. Combining bidirectional search
with these algorithms can lead to faster and more efficient searches, as it can leverage the
benefits of both search techniques.
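A sketch of bidirectional BFS on the example from the text (start E, goal G, frontiers meeting at A). The figure's exact edge list is not reproduced in the text, so the undirected adjacency below is an assumption consistent with the described traversal:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two BFS frontiers, one from each end; stop where they meet."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd, bwd = deque([start]), deque([goal])
    while fwd and bwd:
        # advance each frontier by one node per round
        for frontier, parent, other in ((fwd, fwd_parent, bwd_parent),
                                        (bwd, bwd_parent, fwd_parent)):
            if not frontier:
                continue
            node = frontier.popleft()
            for neighbor in graph.get(node, []):
                if neighbor not in parent:
                    parent[neighbor] = node
                    frontier.append(neighbor)
                if neighbor in other:      # the two searches meet here
                    return _join(neighbor, fwd_parent, bwd_parent)
    return None

def _join(meet, fwd_parent, bwd_parent):
    """Stitch the two half-paths together at the meeting node."""
    left, n = [], meet
    while n is not None:
        left.append(n)
        n = fwd_parent[n]
    left.reverse()
    n = bwd_parent[meet]
    while n is not None:
        left.append(n)
        n = bwd_parent[n]
    return left

# Assumed undirected adjacency matching the text's walkthrough
graph = {"E": ["B"], "B": ["E", "A"], "A": ["B", "C"],
         "C": ["A", "G"], "G": ["C"]}
print(bidirectional_search(graph, "E", "G"))  # ['E', 'B', 'A', 'C', 'G']
```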
6. Uniform cost search
Uniform cost search is considered the best search algorithm for a weighted graph or graph
with costs. It searches the graph by giving maximum priority to the lowest cumulative cost.
Uniform cost search can be implemented using a priority queue.
Consider the below graph where each node has a pre-defined cost.
Here, S is the start node and G is the goal node.
From S, G can be reached in the following ways.
S, A, E, F, G -> 19
S, B, E, F, G -> 18
S, B, D, F, G -> 19
S, C, D, F, G -> 23
Here, the path with the least cost is S, B, E, F, G.
Performance evaluation of UCS
•Completeness: UCS is complete if the branching factor b is finite and every step cost exceeds some small positive constant ε.
•Optimality: Uniform-cost search is always optimal as it only selects a path with the lowest
path cost.

Time complexity:
Let C* be the cost of the optimal solution, and let ε be the smallest step cost toward the goal. Then the number of steps is at most ⌊C*/ε⌋ + 1 (the +1 because we start counting from depth 0).
Hence, the worst-case time complexity of uniform cost search is O(b^(1 + ⌊C*/ε⌋)).
•Space complexity:
By the same logic, the worst-case space complexity of uniform cost search is O(b^(1 + ⌊C*/ε⌋)).
Advantages of UCS
•This algorithm is optimal as the selection of paths is based on the lowest cost.

Disadvantages of UCS
•The algorithm does not consider how many steps it takes to reach the lowest-cost path; if zero-cost edges exist, it may even get stuck in an infinite loop.

Applications
•This algorithm is mainly used when the step costs are not the same but we need the optimal solution to the goal state. In such cases, we use uniform cost search to find the goal and the path, including the cumulative cost to expand each node from the root node to the goal node.
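A priority-queue sketch of UCS. The figure's individual edge costs are not reproduced in the text, so the weights below are assumptions chosen to reproduce the four path totals listed above (19, 18, 19, 23):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest cumulative cost g(n)."""
    frontier = [(0, start, [start])]       # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path              # cheapest path to the goal
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            heapq.heappush(frontier,
                           (cost + step, neighbor, path + [neighbor]))
    return None

# Assumed edge costs reproducing the totals in the text
graph = {"S": [("A", 4), ("B", 5), ("C", 7)],
         "A": [("E", 8)], "B": [("E", 6), ("D", 6)], "C": [("D", 8)],
         "E": [("F", 4)], "D": [("F", 5)], "F": [("G", 3)]}
print(uniform_cost_search(graph, "S", "G"))  # (18, ['S', 'B', 'E', 'F', 'G'])
```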
What is the Heuristic Function?
A heuristic is a function that determines how near a state is to the desired state. The majority
of AI problems revolve around a large amount of information, data, and constraints, and the
task is to find a way to reach the goal state.

OR

A heuristic function, also simply called a heuristic, is a function that ranks alternatives in
search algorithms at each branching step based on available information to decide which
branch to follow. For example, it may approximate the exact solution.

Heuristics is a method of problem-solving whose goal is to come up with a workable solution in a feasible amount of time. Heuristic techniques aim for a rapid solution that stays within an acceptable accuracy range, rather than a perfect solution.
What are the types of heuristic functions?
Search techniques in AI that rely on heuristic functions include A* search, simulated annealing, hill climbing, best-first search, and beam search.

What is the heuristic function formula?

f(n) = g(n) + h(n), where
f(n) = estimated total cost of the cheapest solution through node n
g(n) = actual cost to reach node n from the start state
h(n) = estimated cost to reach the goal from node n

Where is the heuristic function used?


Informed search uses a heuristic function to identify the most feasible path. The function estimates how far the agent is from the goal, taking the agent's current state as input.

What is a heuristic function to give an example?


A heuristic function estimates the approximate cost of solving a task. Determining the shortest
driving distance to a particular location can be one example.
Informed search algorithms
The informed search algorithm is also called heuristic search or directed search. In
contrast to uninformed search algorithms, informed search algorithms require details
such as distance to reach the goal, steps to reach the goal, cost of the paths which makes
this algorithm more efficient.
Here, the goal state can be achieved by using the heuristic function.
The heuristic function is used to achieve the goal state with the lowest cost possible.
This function estimates how close a state is to the goal.

1. Greedy best-first search algorithm


Greedy best-first search uses the properties of both depth-first search and breadth-first
search. Greedy best-first search traverses the node by selecting the path which appears
best at the moment. The closest path is selected by using the heuristic function.
Consider the below graph with the heuristic values.

Here, A is the start node and H is the goal node.


Greedy best-first search starts with A and then examines the neighbours B and C. Here, the heuristic of B is 12 and that of C is 4. The best option at the moment is C, so it goes to C. From C, it explores the neighbours F and G; the heuristic of F is 8 and that of G is 2. Hence it goes to G. From G, it goes to H, whose heuristic is 0 and which is also our goal state.
The path of traversal is
A —-> C —-> G —-> H
Performance evaluation of Greedy best-first search

• Complete: Greedy best-first (tree) search is incomplete, even if the given state space is finite, as it can get stuck in loops.

• Optimal: Greedy best first search algorithm is not optimal.

• Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

• Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search tree.
Advantages of Greedy best-first search
•Greedy best-first search is more efficient compared with breadth-first search and depth-first
search.

Disadvantages of Greedy best-first search

•In the worst-case scenario, the greedy best-first search algorithm may behave like an unguided DFS.
•Greedy best-first search may get trapped in an infinite loop.
•The algorithm is not optimal.

Applications
•Pathfinding: Greedy BF Search is used to find the shortest path between two points in a graph.
•Game AI: To evaluate potential moves and choose the best one.
•Navigation: To find the shortest path between two locations.
•Machine Learning: Greedy Best-First Search can be used in machine learning algorithms to
find the most promising path through a search space.
•Natural Language Processing: For translation or speech recognition to generate the most
likely sequence of words.
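A sketch of the strategy on the example above. The heuristic value of the start node A is not given in the text, so the value below is assumed; it does not affect the result, since A is expanded first:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node with the smallest heuristic
    h(n); the path cost accumulated so far is ignored."""
    frontier = [(h[start], start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier,
                               (h[neighbor], neighbor, path + [neighbor]))
    return None

# Heuristic values from the text; h["A"] is an assumed placeholder
graph = {"A": ["B", "C"], "C": ["F", "G"], "G": ["H"]}
h = {"A": 13, "B": 12, "C": 4, "F": 8, "G": 2, "H": 0}
print(greedy_best_first(graph, h, "A", "H"))  # ['A', 'C', 'G', 'H']
```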
2. A* search algorithm
A* search algorithm is a combination of both uniform cost search and greedy best-first search
algorithms. It uses the advantages of both with better memory usage. It uses a heuristic
function to find the shortest path. A* search algorithm uses the sum of both the cost and
heuristic of the node to find the best path.
Consider the following graph with the heuristics values as follows.
Let A be the start node and H be the goal node.

First, the algorithm will start with A. From A, it can go to B, C, H.


Note the point that A* search uses the sum of path cost and heuristics value to determine the
path.
Here, from A to B, the sum of cost and heuristics is 1 + 3 = 4.
From A to C, it is 2 + 4 = 6.
From A to H, it is 7 + 0 = 7.
Here, the lowest cost is 4 and the path A to B is chosen. The other paths will be on hold.
Now, from B, it can go to D or E.
From A to B to D, the cost is 1 + 4 + 2 = 7.
From A to B to E, it is 1 + 6 + 6 = 13.
The cost 7 of path A to B to D is compared with the other paths on hold.
Among all the open paths, A to C now has the lowest cost, namely 6.
Hence, A to C is chosen for expansion and the other paths are kept on hold.
From C, it can now go to F or G.
From A to C to F, the cost is 2 + 3 + 3 = 8.
From A to C to G, the cost is 2 + 2 + 1 = 5.
The lowest cost is 5, which is also less than the cost of the other paths on hold. Hence, path A to C to G is chosen.
From G, it can go to H whose cost is 2 + 2 + 2 + 0 = 6.
Here, 6 is lesser than other paths cost which is on hold.
Also, H is our goal state. The algorithm will terminate here.

The path of traversal is


A —-> C —-> G —-> H
Performance evaluation of A* algorithm
Complete: The A* algorithm is complete as long as:
• The branching factor is finite.
• Every action cost is bounded below by a positive constant.
Optimal: A* search algorithm is optimal if it follows below two conditions:
• Admissible: the first condition requires for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
• Consistency: Second required condition is consistency for only A* graph-search.
If the heuristic function is admissible, then A* tree search will always find the least cost
path.
Time Complexity: The time complexity is O(b^d), where b is the branching factor and d the solution depth.
Space Complexity: The space complexity of the A* search algorithm is O(b^d).
Advantages of A* search algorithm
•Among the algorithms discussed, this one generally performs best.
•This algorithm can be used to solve very complex problems, and it is optimal.

Disadvantages of A* search algorithm

•The A* search is based on heuristics and cost; if the heuristic is not admissible, it may not produce the shortest path.
•Memory usage is high, as it keeps all generated nodes in memory.

Applications
The A* algorithm is widely used in various domains for pathfinding and optimization
problems. It has applications in robotics, video games, route planning, logistics, and artificial
intelligence. In robotics, A* helps robots navigate obstacles and find optimal paths.
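The worked example above can be reproduced with a small A* sketch; the edge costs and heuristic values are read off the step-by-step sums in the text (h(A) is not stated and is assumed to be 0):

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n),
    where g is the path cost so far and h the heuristic estimate."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):  # better route
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h[neighbor], g2, neighbor,
                                path + [neighbor]))
    return None

# Edge costs and heuristics from the worked example; h["A"] assumed
graph = {"A": [("B", 1), ("C", 2), ("H", 7)],
         "B": [("D", 4), ("E", 6)],
         "C": [("F", 3), ("G", 2)],
         "G": [("H", 2)]}
h = {"A": 0, "B": 3, "C": 4, "D": 2, "E": 6, "F": 3, "G": 1, "H": 0}
print(a_star(graph, h, "A", "H"))  # (6, ['A', 'C', 'G', 'H'])
```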
Uninformed Search (Blind Search) vs Informed Search (Heuristic Search)

• Information: Uninformed search has no information about the path or cost from the current state to the goal state and does not use domain-specific knowledge. Informed search estimates the path cost from the current state to the goal state in order to select the minimum-cost path as the next state, using domain-specific knowledge.

• Speed: Uninformed search finds a solution slowly; informed search finds a solution more quickly.

• Efficiency: Uninformed search is less efficient; informed search is more efficient.

• Cost: Uninformed search is costly; informed search has a low cost.

• Guidance: Uninformed search gives no suggestion regarding the solution, so the problem must be solved with the given information only. Informed search provides direction toward the solution, and additional information can be added as assumptions.

• Examples: Uninformed — depth-first search, breadth-first search, depth-limited search, iterative deepening DFS, bidirectional search. Informed — best-first search, greedy search, A* search.
Advantages and limitations of search algorithms in AI
 An approach that is methodical and well-organized: Search algorithms offer a structured
way of quickly looking through enormous volumes of data, ensuring a methodical retrieval
procedure.
 Effective and precise retrieval: These algorithms make it possible to identify pertinent
information quickly, enhancing the effectiveness and precision of information retrieval.
 Handling complex search spaces: Search algorithms have the ability to handle complex
search spaces, which enables them to travel through complex data structures or issue
domains.
 Adaptability to different challenges: Search algorithms are flexible and adaptable in
numerous AI disciplines since they may be used to solve different kinds of problems.
AI search algorithm limitations
 Performance problems with overly large or complex search spaces: When the search
space is too big or complicated, search algorithms may run into performance problems,
which might slow down retrieval or demand more computation.
 Dependence on heuristics and input data quality: The effectiveness of the search results
is significantly influenced by the precision of the used heuristics and the input data quality
 Non-guarantee of optimal solutions: In dynamic or incomplete search spaces, search
algorithms may not always guarantee finding the optimal solution.
Applications of search algorithms in AI
 Natural language processing: Search algorithms are used for information extraction, query
resolution, sentiment analysis, and other language processing tasks in natural language using
AI.
 Image recognition: These algorithms help in the search for pertinent photos using user
queries or matching visual cues.
 Recommendation systems: Search engines make personalized suggestions possible by
matching user preferences with products that are appropriate.
 Robotics: To ensure effective exploration and effective mobility, search algorithms are
essential to robot path planning and navigation.
 Data mining: These methods make it easier to perform tasks such as clustering, classification, and anomaly detection by extracting useful insights and patterns from vast datasets.
Travelling Salesman Problem

The travelling salesman problem is a graph computational problem where the salesman
needs to visit all cities (represented using nodes in a graph) in a list just once and the
distances (represented using edges in the graph) between all these cities are known. The
solution that is needed to be found for this problem is the shortest possible route with
minimum cost in which the salesman visits all the cities and returns to the origin city.

Travelling Salesperson Algorithm


As the greedy approach dictates, we pick the best local option at each step in order to build up a global solution. The inputs taken by the algorithm are the graph G {V, E}, where V is the set of vertices and E is the set of edges. The output is the shortest path through graph G that starts from one vertex and returns to the same vertex.
Algorithm
•The travelling salesman problem takes a graph G {V, E} as input and declares another graph (say G') as output, which records the path the salesman takes from one node to another.
•The algorithm begins by sorting all the edges of the input graph G from least cost to greatest.
•The first edge selected is the least-cost edge incident to the origin node (say A), connecting it to some vertex (say B).
•Then, among the edges adjacent to the newly added node (B), excluding those leading back to visited nodes, find the least-cost edge and add it to the output graph.
•Continue the process with further nodes, making sure there are no cycles (repeated nodes) in the output graph and that the path eventually reaches back to the origin node A.
•However, if an origin is specified in the given problem, the solution must always start from that node. Let us look at an example problem to understand this better.
Example
Consider the following graph with six cities and the distances between them −

From the given graph, since the origin is already mentioned, the solution must always start from that node. Among the edges leading from A, A → B has the shortest distance.
Then, B → C is the shortest (and only) edge onward from B, therefore it is included in the output graph.

There’s only one edge between C → D,


therefore it is added to the output graph.

There are two outward edges from D.


Even though, D → B has lower distance
than D → E, B is already visited once
and it would form a cycle if added to the
output graph. Therefore, D → E is added
into the output graph.
There’s only one edge from e, that is E → F.
Therefore, it is added into the output graph.

Again, even though F → C has lower distance


than F → A, F → A is added into the output
graph in order to avoid the cycle that would
form and C is already visited once.
The shortest path that originates and ends at A
is A → B → C → D → E → F → A
The cost of the path is: 16 + 21 + 12 + 15 +
16 + 34 = 114.
8 puzzle problem
The 8 puzzle consists of eight numbered, movable tiles set in a 3x3 frame. One cell of the frame is always empty, thus making it possible to move an adjacent numbered tile into the empty cell. Such a puzzle is illustrated in the following diagram.
The program is to change the initial configuration into the goal configuration.

A solution to the problem is an appropriate sequence of moves, such as “move tile 5 to the right, move tile 7 to the left, move tile 6 down”, etc. In this problem each tile configuration is a state. A move transforms one problem state into another, with the aim of reaching the goal state in the minimum number of moves. The goal condition forms the basis for termination.

The control strategy repeatedly applies rules to state descriptions until a description of a goal
state is produced.

It also keeps track of the rules that have been applied so that it can compose them into a sequence
representing the problem solution.
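The state and move structure described above can be sketched as a successor function. This is a minimal Python sketch, assuming states are encoded as 9-tuples read row by row with 0 standing for the empty cell (the encoding is an assumption, not fixed by the text):

```python
def successors(state):
    """Generate all states reachable by sliding one adjacent tile
    into the blank cell (0). A state is a tuple of 9 numbers,
    read row by row."""
    i = state.index(0)               # position of the blank cell
    r, c = divmod(i, 3)
    moves = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]  # slide the adjacent tile into the blank
            moves.append(tuple(s))
    return moves

# A blank in a corner has 2 possible moves; a blank in the center has 4
corner = (0, 1, 2, 3, 4, 5, 6, 7, 8)
center = (1, 2, 3, 4, 0, 5, 6, 7, 8)
print(len(successors(corner)), len(successors(center)))   # 2 4
```

A control strategy such as breadth-first search or A* would repeatedly expand these successors until a goal configuration is produced.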

A solution to the 8-puzzle problem is given in the figure shown.


Figure. Solution of the 8-puzzle problem
What is Tower of Hanoi
Tower of Hanoi is a mathematical puzzle which consists of three towers and more than one ring, as shown.

These rings are of different sizes and stacked in ascending order, i.e. the smaller one sits over the larger one. There are other variations of the puzzle where the number of disks increases, but the tower count remains the same.
Rules
The mission is to move all the disks to another tower without violating the sequence of
arrangement. A few rules to be followed for Tower of Hanoi are:
 Only one disk can be moved among the towers at any given time.
 Only the "top" disk can be removed.
 No large disk can sit over a small disk.

A Tower of Hanoi puzzle with N disks can be solved in a minimum of 2^N − 1 steps. This presentation shows that a puzzle with 3 disks takes 2^3 − 1 = 7 steps.
Example
Tower of Hanoi using Recursion:
The idea is to use the helper tower to reach the destination using recursion. Below is the
pattern for this problem:
 Shift ‘N-1’ disks from ‘A’ to ‘B’, using C.
 Shift last disk from ‘A’ to ‘C’.
 Shift ‘N-1’ disks from ‘B’ to ‘C’, using A.
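The three-step pattern above translates directly into recursive code. A minimal Python sketch (recording each single-disk move as a (source, destination) pair is an assumption for illustration):

```python
def hanoi(n, src, dst, helper, moves):
    """Move n disks from src to dst via helper, recording every move."""
    if n == 0:
        return
    hanoi(n - 1, src, helper, dst, moves)   # shift N-1 disks from src to helper
    moves.append((src, dst))                # shift the last disk from src to dst
    hanoi(n - 1, helper, dst, src, moves)   # shift N-1 disks from helper to dst

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)   # 7 single-disk moves for N = 3
```

For N disks this produces exactly 2^N − 1 single-disk moves, matching the minimum step count stated above.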

Step 0: The original Hanoi tower for N = 3, with disk N−2 = 1 (orange), disk N−1 = 2 (white), and disk N = 3 (green).
 Shift ‘N-1’ disks from ‘A’ to ‘B’, using C. Since only one disk can be moved among the towers at any given time, this step is itself carried out one disk at a time, applying the same pattern recursively.

 Shift the last disk from ‘A’ to ‘C’.

 Shift ‘N-1’ disks from ‘B’ to ‘C’, using A. Again, only one disk can be moved at a time, so this step is also carried out recursively; its final move shifts the first (smallest) disk from ‘A’ to ‘C’.
The Tower of Hanoi is constructed from tower A to tower C with the minimum number of steps,
i.e. 2^N − 1 = 7 steps for N = 3.
Water Jug Problem
The Water Jug Problem is a classic puzzle in artificial intelligence involving two jugs, one
with a capacity of ‘x’ liters and the other ‘y’ liters, and a water source. The goal is to
measure a specific ‘z’ liters of water using these jugs, with no volume markings. It’s a test of
problem-solving and state space search, where the initial state is both jugs empty and the
goal is to reach a state where one jug holds ‘z’ liters. Various operations like filling,
emptying, and pouring between jugs are used to find an efficient sequence of steps to
achieve the desired water measurement.
Using State Space Search
State space search is a fundamental concept in AI that involves
exploring possible states of a problem to reach a desired goal
state. Each state represents a specific configuration of water in
the jugs. The initial state is when both jugs are empty, and the
goal state is when you have ‘z’ liters of water in one of the
jugs. The search algorithm explores different states by applying
various operations like filling a jug, emptying it, or pouring
water from one jug into the other.
Production Rules for Water Jug Problem
In AI, production rules are often used to represent knowledge and make decisions. In the
case of the Water Jug Problem, production rules define the set of operations that can be
applied to transition from one state to another. These rules include:
 Fill Jug X: Fill jug X to its full capacity.
 Fill Jug Y: Fill jug Y to its full capacity.
 Empty Jug X: Empty the jug X.
 Empty Jug Y: Empty the Jug Y.
 Pour from X to Y: Pour water from jug X to jug Y until either jug X is empty or jug Y
is full.
 Pour from Y to X: Pour water from jug Y to jug X until either jug Y is empty or jug X
is full.

The above-listed 6 production rules are extended to 10 production rules in the table below, which can be used to get any amount of water as a goal into either jug, X or Y.

S.No.  Initial State                 Final State    Description of action taken
1.     (x,y) if x<4                  (4,y)          Fill the 4-gallon jug completely
2.     (x,y) if y<3                  (x,3)          Fill the 3-gallon jug completely
3.     (x,y) if x>0                  (0,y)          Empty the 4-gallon jug
4.     (x,y) if y>0                  (x,0)          Empty the 3-gallon jug
5.     (x,y) if (x+y)>=4 and y>0    (4, y-[4-x])   Pour water from the 3-gallon jug to fill the 4-gallon jug
6.     (x,y) if (x+y)>=3 and x>0    (x-[3-y], 3)   Pour water from the 4-gallon jug to fill the 3-gallon jug
7.     (x,y) if x>0                  (x-d, y)       Pour some part (d gallons) out of the 4-gallon jug
8.     (x,y) if y>0                  (x, y-d)       Pour some part (d gallons) out of the 3-gallon jug
9.     (x,y) if (x+y)<=4 and y>0    (x+y, 0)       Pour all water from the 3-gallon jug into the 4-gallon jug
10.    (x,y) if (x+y)<=3 and x>0    (0, x+y)       Pour all water from the 4-gallon jug into the 3-gallon jug
Example: In the water jug problem, there are two water jugs: one (X) with the capacity to hold 4 gallons of water and the other (Y) with the capacity to hold 3 gallons. There is no other measuring equipment available, and the jugs do not have any kind of markings on them. The agent’s task here is to fill the 4-gallon jug with 2 gallons of water using only these two jugs and no other material. Initially, both jugs are empty, (X=0, Y=0).

The listed production rules contain all the actions that could be performed by the agent in
transferring the contents of jugs. But, to solve the water jug problem in a minimum number of
moves, following set of rules in the given sequence should be performed:
Solution of water jug problem according to the production rules
Sr. No.  4-gallon jug contents (X jug)  3-gallon jug contents (Y jug)  Action taken to reach the goal
1.       0 gallons                      0 gallons                      Initial state
2.       0 gallons                      3 gallons                      Fill the 3-gallon jug completely (Rule 2)
3.       3 gallons                      0 gallons                      Pour all water from the 3-gallon jug into the 4-gallon jug (Rule 9)
4.       3 gallons                      3 gallons                      Fill the 3-gallon jug completely (Rule 2)
5.       4 gallons                      2 gallons                      Pour water from the 3-gallon jug to fill the 4-gallon jug (Rule 5)
6.       0 gallons                      2 gallons                      Empty the 4-gallon jug (Rule 3)
7.       2 gallons                      0 gallons                      Pour all water from the 3-gallon jug into the 4-gallon jug (Rule 9)

Finally, 2 gallons of water are in the X jug, i.e. the final goal has been reached. (The rule numbers are given only for understanding, with reference to the previous table; in an answer, only the states need to be mentioned.)
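The state-space search for the water jug problem can be sketched as a breadth-first search over (x, y) states. This is a minimal sketch using only the fill, empty, and pour rules; the partial-pour rules that remove an arbitrary amount d are omitted here, since d cannot be measured on unmarked jugs:

```python
from collections import deque

def water_jug(cap_x, cap_y, goal):
    """Breadth-first search over (x, y) states; returns the shortest
    state sequence from (0, 0) until jug X holds the goal amount."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:
            path, s = [], (x, y)     # reconstruct the path back to (0, 0)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_xy = min(x, cap_y - y)  # amount movable from X into Y
        pour_yx = min(y, cap_x - x)  # amount movable from Y into X
        for nxt in [(cap_x, y), (x, cap_y), (0, y), (x, 0),
                    (x - pour_xy, y + pour_xy), (x + pour_yx, y - pour_yx)]:
            if nxt not in parent:
                parent[nxt] = (x, y)
                queue.append(nxt)
    return None

print(water_jug(4, 3, 2))   # a six-action sequence ending with 2 gallons in X
```

Breadth-first search may return a different minimal sequence than the one tabulated above; both take six actions, since several shortest solutions exist.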
• Search in Complex Environments
The problems addressed earlier assumed fully observable, deterministic, static, known environments, where the solution is a sequence of actions. In this section, those constraints are relaxed. We begin with the problem of finding a good state without worrying about the path to get there, covering both discrete and continuous states. In a nondeterministic world, the agent will need a conditional plan and will carry out different actions depending on what it observes, for example stopping if the light is red and going if it is green. With partial observability, the agent will also need to keep track of the possible states it might be in.

• Local Search and Optimization Problems


In computer science, local search is a heuristic method for solving computationally hard
optimization problems. Local search can be used on problems that can be formulated as finding
a solution maximizing a criterion among a number of candidate solutions.
What is Local Search in AI?
Local search in AI refers to a family of optimization algorithms that are used to find the best
possible solution within a given search space. Unlike global search methods that explore the
entire solution space, local search algorithms focus on making incremental changes to improve
a current solution until they reach a locally optimal or satisfactory solution.
Working of a Local Search Algorithm

The basic working principle of a local search algorithm involves the following steps:

•Initialization: Start with an initial solution, which can be generated randomly or through
some heuristic method.
•Evaluation: Evaluate the quality of the initial solution using an objective function or a
fitness measure. This function quantifies how close the solution is to the desired outcome.
•Neighbor Generation: Generate a set of neighboring solutions by making minor changes to
the current solution. These changes are typically referred to as "moves."
•Selection: Choose one of the neighboring solutions based on a criterion, such as the
improvement in the objective function value. This step determines the direction in which the
search proceeds.
•Termination: Continue the process iteratively, moving to the selected neighboring solution,
and repeating steps 2 to 4 until a termination condition is met. This condition could be a
maximum number of iterations, reaching a predefined threshold, or finding a satisfactory
solution.
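The five steps above can be sketched as a single generic loop. A minimal Python sketch with a toy one-dimensional objective; the objective function and neighbor-generation function below are assumptions for illustration:

```python
def local_search(initial, neighbors, score, max_iters=1000):
    """Generic local search: repeatedly move to the best-scoring
    neighbor; stop when no neighbor improves (a local optimum)."""
    current = initial
    for _ in range(max_iters):
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current          # termination: local optimum reached
        current = best              # selection: move to the best neighbor
    return current

# Toy objective: maximize f(x) = -(x - 7)^2 over the integers,
# generating neighbors by stepping one unit left or right.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(local_search(0, step, f))   # climbs from 0 up to the peak at 7
```

Each iteration performs the evaluation, neighbor-generation, and selection steps listed above; only the initialization and termination condition vary between concrete algorithms.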
Local Search Algorithms
Several local search algorithms are commonly used in AI and optimization problems.

1. Hill Climbing
Hill climbing is a straightforward local search algorithm that starts with an initial solution and
iteratively moves to the best neighboring solution that improves the objective function. Here's
how it works:
•Initialization: Begin with an initial solution, often generated randomly or using a heuristic
method.
•Evaluation: Calculate the quality of the initial solution using an objective function or fitness
measure.
•Neighbor Generation: Generate neighboring solutions by making small changes (moves) to
the current solution.
•Selection: Choose the neighboring solution that results in the most significant improvement in
the objective function.
•Termination: Continue this process until a termination condition is met (e.g., reaching a
maximum number of iterations or finding a satisfactory solution).
Hill-climbing search
Consider the states of a problem laid out in a state-space landscape, as shown in Figure below.
Each point (state) in the landscape has an “elevation,” defined by the value of the objective
function. If elevation corresponds to an objective function, then the aim is to find the highest
peak—a global maximum—and we call the process hill climbing. If elevation corresponds to
cost, then the aim is to find the lowest valley—a global minimum—and we call it gradient
descent.
The hill-climbing search algorithm keeps track of one current state, on each iteration moves to the neighboring state with the highest value, and terminates when it reaches a “peak” where no neighbor has a higher value. One way to use hill-climbing search is to use the negative of a heuristic cost function as the objective function; that will climb locally to the state with the smallest heuristic distance to the goal.

Figure: A one-dimensional state-space landscape in which elevation corresponds to the objective function. The aim is to find the global maximum.
• Local maxima: A local maximum is a peak that is higher than each of its neighboring states
but lower than the global maximum.
• Ridges: A ridge is an ascending sequence of local maxima that is very difficult for greedy
algorithms to navigate.
• Plateaus: A plateau is a flat area of the state-space landscape. It can be a flat local
maximum, from which no uphill exit exists, or a shoulder, from which progress is possible.

Hill-climbing search is the most basic local search technique: at each step the current node is replaced by the best neighbor.
Hill climbing algorithm for the 8-queens problem.

We will use a complete-state formulation, which means that every state has all the components
of a solution, but they might not all be in the right place. In this case every state has 8 queens
on the board, one per column. The initial state is chosen at random, and the successors of a state
are all possible states generated by moving a single queen to another square in the same column
(so each state has 8×7=56 successors). The heuristic cost function ℎ is the number of pairs of
queens that are attacking each other; this will be zero only for solutions. (It counts as an attack
if two pieces are in the same line, even if there is an intervening piece between them.) Figure
(b) shows a state that has ℎ=17. The figure also shows the ℎ values of all its successors.
(a) The 8-queens problem: place 8
queens on a chess board so that no
queen attacks another. (A queen
attacks any piece in the same row,
column, or diagonal.) This position
is almost a solution, except for the
two queens in the fourth and seventh
columns that attack each other along
the diagonal.
(b) An 8-queens state with heuristic
cost estimate ℎ=17. The board shows
the value of ℎ for each possible
successor obtained by moving a
queen within its column. There are 8
moves that are tied for best,
with ℎ=12. The hill climbing-
algorithm will pick one of these.
Left: the board before playing any one of the best moves, with heuristic value h=12. Right: the board after such a move has been played, with heuristic value h=12.
Hill climbing can make rapid progress toward a solution because it is usually quite easy to
improve a bad state. For example, from the state in Figure (b), it takes just five steps to reach
the state in Figure(a), which has ℎ=1 and is very nearly a solution.
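The complete-state formulation above can be sketched in Python. This is a minimal steepest-descent hill climber on the h = attacking-pairs heuristic; encoding a state as a list where state[c] is the row of the queen in column c is an assumption for illustration:

```python
import random

def attacking_pairs(state):
    """h = number of queen pairs attacking each other; state[c] is the
    row of the queen in column c (one queen per column)."""
    h, n = 0, len(state)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diag = abs(state[c1] - state[c2]) == c2 - c1
            if same_row or same_diag:
                h += 1
    return h

def hill_climb(n=8):
    """Steepest-descent hill climbing: move one queen within its
    column to the successor with the lowest h; stop at a solution
    (h = 0) or at a local minimum."""
    state = [random.randrange(n) for _ in range(n)]   # random initial state
    while True:
        h = attacking_pairs(state)
        if h == 0:
            return state, h                  # solution found
        best_h, best_state = h, state
        for col in range(n):                 # all 8 x 7 = 56 successors
            for row in range(n):
                if row != state[col]:
                    s = state[:col] + [row] + state[col + 1:]
                    if attacking_pairs(s) < best_h:
                        best_h, best_state = attacking_pairs(s), s
        if best_h == h:
            return state, h                  # stuck at a local minimum
        state = best_state

random.seed(0)
state, h = hill_climb()
print(state, h)
```

As the next paragraph notes, a single run may halt at a local minimum with h > 0; random restarts (rerunning from fresh random states until h = 0) are the usual remedy.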
Hill climbing has a limitation in that it can get stuck in local optima, which are solutions that
are better than their neighbors but not necessarily the best overall solution. To overcome this
limitation, variations of hill climbing algorithms have been developed, such as stochastic hill
climbing and simulated annealing.

You might also like