AI - Unit 02
Implementation
import java.util.*;

// Depth-first search that returns the list of actions from the start state to a goal state.
public class DepthFirstSearch {

    public List<String> search(Problem problem) {
        Set<State> visited = new HashSet<>();
        Deque<State> stack = new ArrayDeque<>();
        stack.push(problem.getInitialState());
        while (!stack.isEmpty()) {
            State state = stack.pop();
            if (visited.contains(state)) {
                continue; // a state may be pushed more than once; expand it only once
            }
            visited.add(state);
            if (problem.isGoalState(state)) {
                return getPath(state);
            }
            for (Action action : problem.getActions(state)) {
                State successor = problem.getResult(state, action);
                if (!visited.contains(successor)) {
                    stack.push(successor);
                }
            }
        }
        return null; // no goal state is reachable
    }

    // Helper that reconstructs the list of actions from the start state to the given
    // state by following parent links backwards.
    private List<String> getPath(State state) {
        List<String> path = new ArrayList<>();
        while (state.getParent() != null) {
            path.add(state.getAction());
            state = state.getParent();
        }
        Collections.reverse(path);
        return path;
    }
}
In this implementation, we have a search method that takes a Problem object as input and
returns a list of actions from the start state to the goal state. The algorithm maintains a set
of visited states and a stack of unexplored states. It starts by pushing the initial state onto
the stack and entering a loop that continues until either the stack is empty or a goal state is
found. In each iteration of the loop, the algorithm pops a state from the stack, adds it to the
set of visited states, and checks if it is a goal state. If it is, it calls a helper method getPath to
retrieve the list of actions that lead to this state. Otherwise, it generates all successor states
of the current state using the getActions and getResult methods of the Problem object, and
pushes those that have not been visited onto the stack. If no goal state is found, the
algorithm returns null.
The getPath method is a helper method that takes a goal state as input and returns a list of
actions that lead to this state from the start state. It does this by following the parent field
of the input state backwards until the start state is reached, and adding the action that was
taken to reach each state to a list. The list is then reversed to obtain the correct order of
actions.
Note that this implementation assumes that the Problem class has the following methods:
getInitialState: returns the start state of the problem
isGoalState: takes a state as input and returns true if it is a goal state
getActions: takes a state as input and returns a list of possible actions from that state
getResult: takes a state and an action as input and returns the resulting state after taking
that action.
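The Problem, State, and Action types themselves are not defined in these notes; a minimal sketch of what such interfaces might look like (the method names follow the list above, everything else is an illustrative assumption) is:

```java
import java.util.List;

// Hypothetical interfaces matching the methods the DFS code relies on.
// Action is left as a marker interface; a concrete problem would define it.
interface Action { }

interface State {
    State getParent();   // the state this one was generated from (null for the start state)
    String getAction();  // the action that produced this state from its parent
}

interface Problem {
    State getInitialState();                       // the start state of the problem
    boolean isGoalState(State state);              // true if the state satisfies the goal test
    List<Action> getActions(State state);          // actions applicable in the given state
    State getResult(State state, Action action);   // state reached by applying the action
}
```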
06. Breadth-first search :- Breadth-First Search (BFS) is a search algorithm that explores
the search space level by level. It starts from the root node and visits all the nodes at the
current level before moving on to the next level. BFS is often used to find the shortest path
between two nodes in a graph.
The algorithm works as follows:
1. Start at the root node and mark it as visited.
2. Add the root node to the queue.
3. While the queue is not empty:
Dequeue the first node from the queue.
Visit all the unvisited neighbors of the dequeued node.
Mark each visited neighbor as visited and add it to the queue.
4. If the goal node is found during the search, stop the algorithm and return the path to
the goal node.
5. If the queue becomes empty and the goal node has not been found, the algorithm
fails.
BFS guarantees that it will find a shortest path (in number of edges) between the starting node and the goal node if such a path exists. However, it may not be the most efficient algorithm in terms of time or space complexity.
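The steps above can be sketched as follows for an unweighted graph stored as an adjacency map (an illustrative sketch, with nodes as plain strings); marking nodes as visited when they are enqueued is what keeps each node on the queue at most once:

```java
import java.util.*;

public class BreadthFirstSearch {

    // Returns a shortest path from start to goal as a list of nodes,
    // or null if the goal is unreachable. The graph maps each node to its neighbors.
    public static List<String> bfs(Map<String, List<String>> graph, String start, String goal) {
        Map<String, String> parent = new HashMap<>(); // also serves as the visited set
        Queue<String> queue = new ArrayDeque<>();
        parent.put(start, null);
        queue.add(start);
        while (!queue.isEmpty()) {
            String node = queue.remove();
            if (node.equals(goal)) {
                // Reconstruct the path by following parent links back to the start.
                List<String> path = new ArrayList<>();
                for (String n = goal; n != null; n = parent.get(n)) path.add(n);
                Collections.reverse(path);
                return path;
            }
            for (String next : graph.getOrDefault(node, List.of())) {
                if (!parent.containsKey(next)) { // mark on enqueue, not on dequeue
                    parent.put(next, node);
                    queue.add(next);
                }
            }
        }
        return null;
    }
}
```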
Additionally, IDDFS ensures that the shortest path to the goal node is found, even if the goal
node is deep in the search tree. However, IDDFS may repeat some of the search work
between iterations, which can slow down the algorithm for large trees.
11. Bi-directional search and Informed (Heuristic) Search Strategies :-
Bi-directional search and Informed (Heuristic) Search Strategies are two different search
techniques in AI:
1. Bi-Directional Search:
Bi-directional search is a search technique that starts the search from both the start
and the goal nodes and works towards the middle.
It can be applied to both uninformed and informed search algorithms such as
Breadth-First Search, Depth-First Search, and A* Search.
Bi-directional search is useful when the search space is large and the shortest path
between the start and goal nodes is not known.
It can reduce the search space and improve the efficiency of the search algorithm.
2. Informed (Heuristic) Search Strategies:
Informed search strategies use heuristic functions to guide the search towards the
goal node.
They are typically more efficient than uninformed search strategies such as Breadth-First Search and Depth-First Search because they use additional information to guide the search.
The most commonly used informed search algorithm is A* Search, which uses both
the actual cost and the estimated cost to the goal node to evaluate the nodes in the
search space.
The heuristic function used in A* Search should be admissible, meaning it never overestimates the actual cost to the goal node, and consistent, meaning that a node's estimated cost to the goal is never greater than the step cost of moving to a neighboring node plus that neighbor's estimated cost to the goal (the triangle inequality: h(n) ≤ c(n, n') + h(n')).
In summary, bi-directional search and informed search strategies are two different
techniques in AI that can be used to improve the efficiency and effectiveness of the search
algorithms.
Bi-directional search starts the search from both the start and goal nodes, while informed
search strategies use heuristic functions to guide the search towards the goal node. A*
Search is a commonly used informed search algorithm that uses both actual cost and
estimated cost to evaluate the nodes in the search space.
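A bidirectional variant of BFS can be sketched as follows (an illustrative sketch for undirected graphs; expanding the smaller frontier first is one common balancing choice, not the only option):

```java
import java.util.*;

public class BidirectionalSearch {

    // Returns the number of edges on a shortest path between start and goal in an
    // undirected graph, or -1 if they are not connected. Two BFS frontiers are
    // expanded alternately, level by level, until they meet in the middle.
    public static int shortestPathLength(Map<String, List<String>> graph, String start, String goal) {
        if (start.equals(goal)) return 0;
        Map<String, Integer> distFromStart = new HashMap<>(Map.of(start, 0));
        Map<String, Integer> distFromGoal  = new HashMap<>(Map.of(goal, 0));
        Queue<String> frontierStart = new ArrayDeque<>(List.of(start));
        Queue<String> frontierGoal  = new ArrayDeque<>(List.of(goal));
        while (!frontierStart.isEmpty() && !frontierGoal.isEmpty()) {
            // Expand the smaller frontier to keep the two searches balanced.
            int meet = frontierStart.size() <= frontierGoal.size()
                    ? expand(graph, frontierStart, distFromStart, distFromGoal)
                    : expand(graph, frontierGoal, distFromGoal, distFromStart);
            if (meet >= 0) return meet;
        }
        return -1;
    }

    // Expands one full level of the given frontier; returns the total path length
    // if the two searches meet, or -1 otherwise.
    private static int expand(Map<String, List<String>> graph, Queue<String> frontier,
                              Map<String, Integer> dist, Map<String, Integer> otherDist) {
        for (int i = frontier.size(); i > 0; i--) {
            String node = frontier.remove();
            for (String next : graph.getOrDefault(node, List.of())) {
                if (otherDist.containsKey(next)) return dist.get(node) + 1 + otherDist.get(next);
                if (!dist.containsKey(next)) {
                    dist.put(next, dist.get(node) + 1);
                    frontier.add(next);
                }
            }
        }
        return -1;
    }
}
```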
12. Greedy best-first search and A* search :- Greedy Best-First Search and A* Search are
two informed search algorithms in AI:
1. Greedy Best-First Search:
Greedy Best-First Search is an informed search algorithm that uses heuristic
functions to guide the search towards the goal node.
It evaluates the nodes in the search space based on the estimated cost to the goal
node, without considering the actual cost of reaching the node.
Greedy Best-First Search is an example of a greedy algorithm, which means it
chooses the node that appears to be the best choice at each step without
considering the consequences of that choice.
Greedy Best-First Search can be efficient for small search spaces, but it may not find
the optimal path to the goal node.
2. A* Search:
A* Search is an informed search algorithm that uses both the actual cost and the
estimated cost to the goal node to evaluate the nodes in the search space.
It combines the benefits of both uniform-cost search and greedy best-first search, pairing the completeness and optimality of the former with the heuristic guidance of the latter.
A* Search uses a heuristic function that estimates the cost of reaching the goal node
from each node in the search space, and it adds this estimated cost to the actual cost
of getting to each node to evaluate its priority.
The heuristic function used in A* Search should be admissible, meaning it never overestimates the actual cost to the goal node, and consistent, meaning that a node's estimated cost to the goal is never greater than the step cost of moving to a neighboring node plus that neighbor's estimated cost to the goal (h(n) ≤ c(n, n') + h(n')).
A* Search is guaranteed to find the optimal path to the goal node if the heuristic function used is admissible (and, when repeated states are pruned in graph search, also consistent).
In summary, Greedy Best-First Search and A* Search are two informed search algorithms that use heuristic functions to guide the search towards the goal node. Greedy Best-First Search evaluates the nodes based only on the estimated cost to the goal node, while A* Search combines the actual cost and the estimated cost. Greedy Best-First Search is often faster, but only A* Search is guaranteed to find the optimal path if the heuristic function used is admissible.
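The evaluation rule f(n) = g(n) + h(n) can be sketched as follows on a weighted graph; replacing g(n) + h(n) with h(n) alone in the comparator would turn the same code into Greedy Best-First Search. The Edge record and the graph/heuristic maps are illustrative assumptions:

```java
import java.util.*;

public class AStarSearch {

    // Edge in a weighted graph: neighbor node plus the step cost to reach it.
    record Edge(String to, double cost) { }

    // Returns the cost of the cheapest path from start to goal, or -1 if unreachable.
    // h maps each node to its estimated cost to the goal; with an admissible h the
    // returned cost is optimal.
    public static double aStar(Map<String, List<Edge>> graph, Map<String, Double> h,
                               String start, String goal) {
        Map<String, Double> g = new HashMap<>(); // best known cost from the start
        PriorityQueue<String> open = new PriorityQueue<>(
                Comparator.comparingDouble((String n) -> g.get(n) + h.getOrDefault(n, 0.0)));
        g.put(start, 0.0);
        open.add(start);
        Set<String> closed = new HashSet<>();
        while (!open.isEmpty()) {
            String node = open.poll();
            if (node.equals(goal)) return g.get(node);
            if (!closed.add(node)) continue; // already expanded with its best g
            for (Edge e : graph.getOrDefault(node, List.of())) {
                double tentative = g.get(node) + e.cost();
                if (tentative < g.getOrDefault(e.to(), Double.POSITIVE_INFINITY)) {
                    g.put(e.to(), tentative);
                    open.remove(e.to()); // reinsert so its priority reflects the new g
                    open.add(e.to());
                }
            }
        }
        return -1;
    }
}
```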
14. Conditions for optimality :- Optimality in AI refers to finding the best possible
solution to a problem. The following conditions must be met for a search algorithm to be
optimal:
1. Admissibility: An admissible heuristic never overestimates the actual cost of
reaching the goal node. If the heuristic is admissible, then A* (as a tree search) will
always find the optimal solution.
2. Consistency: A consistent heuristic satisfies the triangle inequality. If a heuristic is
consistent, then the search algorithm is guaranteed to find the optimal solution.
3. Monotonicity: Monotonicity is another name for consistency: the cost of reaching a
node from its parent plus the estimated cost of reaching the goal node from that
node is never less than the estimated cost of reaching the goal node from the
parent.
4. Optimal Path Property: The optimal path property states that if a search algorithm
finds a path to a node n that has a total estimated cost f(n), then the total estimated
cost of the optimal path to n is not less than f(n).
5. Complete: A search algorithm is complete if it is guaranteed to find a solution if one
exists.
Overall, a search algorithm that satisfies these conditions is guaranteed to find the optimal
solution to a problem.
15. Admissibility and consistency :- Admissibility and consistency are two important
properties of heuristic functions used in informed (heuristic) search algorithms in artificial
intelligence. Here are their definitions and key characteristics:
1. Admissibility: A heuristic function is admissible if it never overestimates the actual
cost of reaching the goal state from the current state.
Key characteristics of admissible heuristics:
They are always optimistic (i.e., they never overestimate the cost of reaching the
goal state).
They may be less informed (i.e., give lower estimates) than other heuristics, but they
guarantee that A* returns an optimal solution.
Examples of admissible heuristics include the number of misplaced tiles in the 8-
puzzle game, and the straight-line distance between two cities on a map (since any
route is at least as long as the straight line).
2. Consistency (or monotonicity): A heuristic function is consistent if its estimate for the
current state is never greater than the cost of getting from the current state to a
neighboring state plus that neighbor's estimate, i.e. h(n) ≤ c(n, n') + h(n'). In other
words, the heuristic function satisfies the triangle inequality. Consistent heuristics
are important because, with graph search, they guarantee that the search algorithm
expands each state at most once and still finds the optimal solution.
Key characteristics of consistent heuristics:
They are always admissible (i.e., they never overestimate the cost of reaching the
goal state), although not every admissible heuristic is consistent.
They allow repeated states to be discarded safely, which can make graph search
considerably cheaper.
Examples of consistent heuristics include the Euclidean distance between two points
on a map, and the Manhattan distance between two points on a grid.
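These two example heuristics can be written directly; on an obstacle-free 4-connected grid with unit-cost moves, the true cost equals the Manhattan distance, so both functions below are admissible there (a small illustrative sketch):

```java
public class Heuristics {

    // Manhattan distance: admissible (and consistent) for 4-connected grid movement
    // with unit step costs, since each move changes one coordinate by exactly one.
    public static int manhattan(int x1, int y1, int x2, int y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }

    // Euclidean (straight-line) distance: admissible whenever every step costs at
    // least the straight-line distance it covers, e.g. points on a map.
    public static double euclidean(double x1, double y1, double x2, double y2) {
        return Math.hypot(x1 - x2, y1 - y2);
    }
}
```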
16. Optimality of A* :- Here are the key points that explain the optimality of A*:
1. Completeness: A* is a complete algorithm, meaning that it is guaranteed to find a
solution if one exists in a finite search space.
2. Admissibility: A* uses a heuristic function that is admissible, meaning that it never
overestimates the cost of reaching the goal state from the current state. This
ensures that A* will always find the optimal solution to the problem, as long as the
search space is finite.
3. Consistency: When A* is run as a graph search (discarding repeated states), the
heuristic function should also be consistent, meaning that it satisfies the triangle
inequality. This ensures that A* never needs to re-expand a state and still finds the
optimal solution.
4. Optimality: Because A* uses an admissible and consistent heuristic function, it is
guaranteed to find the optimal solution to the problem by exploring the search
space in a way that is both efficient and effective.
5. Time and space complexity: The time and space complexity of A* depends on the
quality of the heuristic function used. In the best case scenario, where the heuristic
function is perfect, A* has a time complexity of O(d), where d is the depth of the
optimal solution. In the worst case scenario, where the heuristic function is poor, A*
may have to explore all nodes in the search space and has a time complexity of
O(b^d), where b is the branching factor of the search tree.
Overall, the optimality of A* is based on its ability to combine information about the
cost of the path from the start state to the current state and the estimated cost of the
path from the current state to the goal state to efficiently explore the search space and
find the optimal solution.
17. Memory-bounded heuristic search :- Here are the key points that explain the
memory-bounded heuristic search:
1. Memory constraints: Memory-bounded heuristic search is designed to work within a
specified memory constraint, which limits the amount of memory that can be used
to store the search tree.
2. Best-first search: Memory-bounded heuristic search is based on the best-first search
algorithm, which explores the most promising nodes in the search space first.
3. Heuristic function: Memory-bounded heuristic search uses a heuristic function to
estimate the cost of the path from the current state to the goal state.
4. Limited-depth search: In order to stay within the memory constraint, memory-
bounded heuristic search performs a limited-depth search, which means that it only
explores nodes up to a certain depth in the search tree.
5. Priority queue: Memory-bounded heuristic search uses a priority queue to store the
nodes in the search tree, ordered by their estimated cost to the goal state.
6. Iterative deepening: If a solution is not found within the memory constraint and
limited-depth search, memory-bounded heuristic search can use iterative deepening
to gradually increase the depth of the search until a solution is found or the memory
constraint is exceeded.
7. Trade-off between memory and optimality: Memory-bounded heuristic search
trades off optimality for memory usage. The search may not always find the optimal
solution to the problem, but it will find the best solution possible within the memory
constraint.
Overall, memory-bounded heuristic search is a useful approach when memory usage is a
concern, and the search must be able to operate within a specified memory constraint.
The trade-off between optimality and memory usage is an important consideration
when choosing between different search algorithms.
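One simplified way to realize points 1, 2, and 5 is a best-first search whose frontier never holds more than a fixed number of nodes, discarding the least promising entries when the cap is exceeded. This is only a sketch in the spirit of beam search; real memory-bounded algorithms such as SMA* back up information from dropped nodes instead of forgetting them entirely:

```java
import java.util.*;

public class BoundedBestFirst {

    // Best-first search over an implicit tree whose frontier never exceeds maxNodes
    // entries; when the cap is exceeded the worst node (largest h) is dropped.
    // Dropping nodes sacrifices completeness and optimality for bounded memory.
    public static String search(Map<String, List<String>> children,
                                Map<String, Double> h, String start,
                                String goal, int maxNodes) {
        // Frontier ordered by estimated cost to the goal (smallest h first),
        // with node names breaking ties so distinct nodes are never merged.
        TreeSet<String> frontier = new TreeSet<>(
                Comparator.comparingDouble((String n) -> h.getOrDefault(n, Double.MAX_VALUE))
                          .thenComparing(Comparator.naturalOrder()));
        frontier.add(start);
        while (!frontier.isEmpty()) {
            String node = frontier.pollFirst(); // most promising node
            if (node.equals(goal)) return node;
            frontier.addAll(children.getOrDefault(node, List.of()));
            while (frontier.size() > maxNodes) {
                frontier.pollLast(); // forget the least promising node
            }
        }
        return null; // goal was dropped or is unreachable
    }
}
```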
22. Beyond Classical Search :- Classical search algorithms are effective for solving
many types of problems, but they have limitations in certain situations. Here are some
ways that search algorithms can be extended beyond classical search:
1. Adversarial Search: Adversarial search is used for two-player games, such as chess
or go, where each player takes turns making moves. Minimax algorithm with alpha-
beta pruning is a popular adversarial search algorithm.
2. Stochastic Search: Stochastic search algorithms are used when the outcomes of
actions are uncertain, such as in robotics or financial planning. Examples of
stochastic search algorithms include Monte Carlo Tree Search and particle swarm
optimization.
3. Evolutionary Search: Evolutionary search algorithms are inspired by natural
selection and genetics. These algorithms use population-based methods to evolve a
set of solutions to a problem, such as genetic algorithms and genetic programming.
4. Heuristic Search: Heuristic search algorithms use domain-specific knowledge to
guide the search process, rather than relying solely on the structure of the search
space. Examples include A* search and beam search.
5. Reinforcement Learning: Reinforcement learning is a type of machine learning that
involves an agent learning to make decisions based on feedback in the form of
rewards or penalties. This approach is commonly used for problems involving
decision-making, such as game playing or robotics.
6. Online Search: Online search algorithms are designed to operate in dynamic
environments where the search space changes over time. Examples include
incremental search and real-time search.
7. Hierarchical Search: Hierarchical search algorithms decompose a problem into a
hierarchy of sub-problems, allowing for more efficient search by exploiting the
structure of the problem. Examples include hierarchical task networks and subgoal
graphs.
Overall, extending search algorithms beyond classical search involves adapting the
algorithm to the specific characteristics of the problem being solved. By choosing the
appropriate search algorithm, it is possible to significantly improve the efficiency and
effectiveness of the search process.
25. Local beam search :- Local beam search is a local search algorithm that is used to
solve optimization problems by iteratively improving the quality of a set of candidate
solutions. Here is a point-wise explanation of how local beam search works:
1. Initialization: The algorithm starts with k randomly generated candidate solutions,
also known as the beam, where k is a user-defined parameter.
2. Evaluation: The objective function is evaluated for each candidate solution in the
beam. The solutions are then sorted in descending order of their objective function
values.
3. Generation of neighbors: For each solution in the beam, a set of neighboring
solutions is generated by applying some transformation, such as flipping a bit or
swapping two elements. These neighbors are then evaluated using the objective
function.
4. Selection: The k best solutions are selected from the union of the current beam and
the generated neighbors, based on their objective function values.
5. Termination: If the best solution in the beam satisfies the goal criterion (or no
neighbor improves on it), the algorithm terminates. Otherwise, the beam is updated
with the new set of solutions, and the process is repeated from step 2 until a
termination criterion is met.
Some key points about local beam search include:
The algorithm maintains a set of k candidate solutions, known as the beam, at each
iteration.
The beam is updated by selecting the k best solutions from the set of neighbors
generated for each solution in the current beam.
Local beam search is a variant of beam search, a tree search algorithm that explores
a fixed number of candidate solutions, known as the beam width, at each level of the
search.
Local beam search is a stochastic algorithm, and the quality of the solutions it
produces depends on the initial beam, the transformation used to generate
neighbors, and the selection criterion used to update the beam.
In summary, local beam search is a local search algorithm that maintains a set of candidate
solutions, known as the beam, and iteratively improves them by generating and evaluating
neighboring solutions. The algorithm is simple to implement and can be effective for solving
optimization problems when the search space is large and the optimal solution is not known
in advance.
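The five steps can be sketched for bit-string candidates, where neighbors are single-bit flips and the objective is supplied by the caller (an illustrative sketch; a fixed iteration budget stands in for a real termination criterion):

```java
import java.util.*;

public class LocalBeamSearch {

    // Local beam search over fixed-length bit strings, maximizing the given
    // objective. The search keeps the k best candidates from the union of the
    // current beam and all single-bit-flip neighbors at each iteration.
    public static String search(int length, int k, int iterations,
                                java.util.function.ToIntFunction<String> objective,
                                Random random) {
        // 1. Initialization: k random candidate solutions.
        List<String> beam = new ArrayList<>();
        for (int i = 0; i < k; i++) beam.add(randomBits(length, random));
        for (int iter = 0; iter < iterations; iter++) {
            // 3. Generation of neighbors: all single-bit flips of every beam member.
            Set<String> pool = new TreeSet<>(beam);
            for (String s : beam) {
                for (int b = 0; b < length; b++) {
                    char[] c = s.toCharArray();
                    c[b] = c[b] == '0' ? '1' : '0';
                    pool.add(new String(c));
                }
            }
            // 2 + 4. Evaluation and selection: keep the k best of beam plus neighbors.
            beam = pool.stream()
                       .sorted(Comparator.comparingInt(objective).reversed())
                       .limit(k)
                       .collect(java.util.stream.Collectors.toList());
        }
        // 5. Termination: return the best candidate after the iteration budget.
        return beam.get(0);
    }

    private static String randomBits(int length, Random random) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++) sb.append(random.nextBoolean() ? '1' : '0');
        return sb.toString();
    }
}
```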
26. Genetic algorithms :- Genetic algorithms are a class of optimization algorithms that
use principles inspired by natural evolution to find high-quality solutions to optimization
problems. Here is a point-wise explanation of how genetic algorithms work:
1. Initialization: The algorithm starts with a population of randomly generated
candidate solutions, where each solution is represented as a string of genes.
2. Evaluation: The objective function is evaluated for each candidate solution in the
population, and the solutions are sorted in descending order of their objective
function values.
3. Selection: A subset of the population is selected for reproduction based on their
fitness, which is a measure of their objective function value. Common selection
methods include tournament selection, roulette wheel selection, and rank-based
selection.
4. Reproduction: The selected solutions are used to create offspring solutions through
a combination of crossover and mutation operations. Crossover involves exchanging
genetic material between two parent solutions to create a new offspring solution,
while mutation involves randomly changing a gene in a single solution to create a
new offspring solution.
5. Replacement: The offspring solutions are evaluated using the objective function and
added to the population, replacing some of the existing solutions. This ensures that
the population size remains constant.
6. Termination: The algorithm terminates when a stopping criterion is met, such as a
maximum number of generations or a satisfactory level of solution quality.
Some key points about genetic algorithms include:
The algorithm maintains a population of candidate solutions, which evolves over
time through selection, reproduction, and replacement.
The genetic operators of crossover and mutation are used to create new solutions by
combining the genetic material of existing solutions.
Genetic algorithms are stochastic algorithms, and the quality of the solutions they
produce depends on the initial population, the selection criterion, and the genetic
operators used.
Genetic algorithms can be effective for solving optimization problems when the
search space is large and the optimal solution is not known in advance.
In summary, genetic algorithms are a class of optimization algorithms that use principles
inspired by natural evolution to find high-quality solutions to optimization problems. The
algorithm maintains a population of candidate solutions that evolves over time through
selection, reproduction, and replacement, and uses genetic operators such as crossover and
mutation to create new solutions.
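The six steps can be sketched on the classic OneMax toy problem, where each gene is a bit and fitness counts the 1-bits (an illustrative sketch; the tournament size, mutation rate, and generational replacement scheme are arbitrary choices):

```java
import java.util.*;

public class GeneticAlgorithm {

    // Minimal genetic algorithm maximizing the number of 1-bits in a bit string.
    public static boolean[] run(int geneCount, int populationSize,
                                int generations, Random random) {
        // 1. Initialization: random population.
        boolean[][] population = new boolean[populationSize][geneCount];
        for (boolean[] ind : population)
            for (int g = 0; g < geneCount; g++) ind[g] = random.nextBoolean();
        for (int gen = 0; gen < generations; gen++) {
            boolean[][] next = new boolean[populationSize][];
            for (int i = 0; i < populationSize; i++) {
                // 3. Selection: a binary tournament picks each parent.
                boolean[] p1 = tournament(population, random);
                boolean[] p2 = tournament(population, random);
                // 4. Reproduction: one-point crossover, then point mutation.
                int cut = random.nextInt(geneCount);
                boolean[] child = new boolean[geneCount];
                for (int g = 0; g < geneCount; g++) child[g] = g < cut ? p1[g] : p2[g];
                if (random.nextDouble() < 0.1) child[random.nextInt(geneCount)] ^= true;
                next[i] = child;
            }
            population = next; // 5. Replacement: generational, constant population size.
        }
        // 6. Termination after a fixed number of generations; return the fittest.
        boolean[] best = population[0];
        for (boolean[] ind : population) if (fitness(ind) > fitness(best)) best = ind;
        return best;
    }

    private static boolean[] tournament(boolean[][] population, Random random) {
        boolean[] a = population[random.nextInt(population.length)];
        boolean[] b = population[random.nextInt(population.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    // 2. Evaluation: the objective function counts the 1-bits.
    static int fitness(boolean[] individual) {
        int count = 0;
        for (boolean gene : individual) if (gene) count++;
        return count;
    }
}
```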
29. AND-OR search trees :- AND-OR search trees are a type of search tree used in
artificial intelligence to represent and solve problems that involve both conjunctions (AND)
and disjunctions (OR).
In an AND-OR search tree, each node represents a proposition and is either an AND node or
an OR node. An AND node represents a proposition that is true only if all its children are
true, while an OR node represents a proposition that is true if any of its children is true.
To search for a solution in an AND-OR search tree, we start at the root node and recursively
explore the tree by applying the following rules:
If the current node is an AND node, we evaluate all of its children, and if all of them
are true, we continue exploring the tree from each child node. Otherwise, we
backtrack and continue exploring from the parent node.
If the current node is an OR node, we evaluate its children one at a time, and if any
of them is true, we continue exploring the tree from that child node. Otherwise, we
backtrack and continue exploring from the parent node.
The goal of the search is to find a path from the root node to a leaf node that represents a
solution to the problem being solved. The leaf nodes of an AND-OR search tree represent
the possible solutions to the problem.
AND-OR search trees are commonly used in game-playing algorithms, natural language
processing, and other areas of artificial intelligence.
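The evaluation rules above can be sketched as a small recursive procedure over an explicit tree (the Node representation is an illustrative assumption):

```java
import java.util.List;

public class AndOrTree {

    // A node is either a leaf with a fixed truth value, or an internal AND / OR
    // node whose value is derived from its children.
    enum Kind { AND, OR, LEAF }

    record Node(Kind kind, boolean value, List<Node> children) {
        static Node leaf(boolean value) { return new Node(Kind.LEAF, value, List.of()); }
        static Node and(Node... children) { return new Node(Kind.AND, false, List.of(children)); }
        static Node or(Node... children)  { return new Node(Kind.OR, false, List.of(children)); }
    }

    // Recursive evaluation: an AND node is true only if all children are true,
    // an OR node is true if at least one child is true.
    public static boolean evaluate(Node node) {
        switch (node.kind()) {
            case LEAF: return node.value();
            case AND:  return node.children().stream().allMatch(AndOrTree::evaluate);
            default:   return node.children().stream().anyMatch(AndOrTree::evaluate);
        }
    }
}
```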
30. Searching with Partial Observations :- Searching with partial observations refers
to the process of searching for a solution in a problem where not all information is available
at all times. Here are some key points about searching with partial observations:
Partial observations are common in real-world problems, where not all information
is available at all times. Examples include navigation, robotics, and sensor networks.
In such problems, the agent may have to make decisions based on incomplete
information, which can lead to uncertainty and risk.
One approach to searching with partial observations is to use probabilistic models,
such as Bayesian networks or Markov decision processes, which can incorporate
uncertain and incomplete information into the search process.
Another approach is to use heuristic search algorithms, such as A* or beam search,
which can use partial information to guide the search towards a solution.
In some cases, it may be necessary to use techniques such as filtering or smoothing
to infer missing information from partial observations.
Searching with partial observations can be computationally expensive, as the search
space can be very large and the uncertainty can lead to many possible paths.
Therefore, it is important to use efficient algorithms and representations to manage
the complexity of the search.
Finally, searching with partial observations requires careful consideration of the
trade-offs between exploration and exploitation, as the agent must balance the need
for new information with the need to make decisions based on existing information.