Problem solving
• 1. Ignorable Problems
• These are problems or errors that have minimal or no impact on the overall performance of the AI system. They
are minor and can be safely ignored without significantly affecting the outcome.
• Examples:
• Slight inaccuracies in predictions that do not affect the larger goal (e.g., small variance in image pixel values
during image classification).
• Minor data preprocessing errors that don’t alter the results significantly.
• Handling: These problems often don’t require intervention and can be overlooked in real-time systems without
adverse effects.
• 2. Recoverable Problems
• Recoverable problems are those where the AI system encounters an issue, but it can recover from the error,
either through manual intervention or built-in mechanisms, such as error-handling functions.
• Examples:
• Incorrect or biased training data that can be corrected, with the model retrained during the process.
• 3. Irrecoverable Problems
• Description: These are critical problems that lead to permanent failure or incorrect outcomes in AI systems. Once
encountered, the system cannot recover, and these problems can cause significant damage or severely degraded performance.
• Examples:
• Complete corruption of the training dataset leading to irreversible bias or poor performance.
• Security vulnerabilities in AI models that allow for adversarial attacks, rendering the system untrustworthy.
• Overfitting to the extent that the model cannot generalize to new data.
• Handling: These problems often require a complete overhaul or redesign of the system, including retraining the
model, rebuilding the dataset, or addressing fundamental issues in the AI architecture.
• Steps of Problem Solving in AI
1. Problem Definition: This initial step involves clearly specifying the inputs and acceptable solutions for the
system. A well-defined problem lays the groundwork for effective analysis and resolution.
2. Problem Analysis: In this step, the problem is thoroughly examined to understand its components, constraints,
and implications. This analysis is crucial for identifying viable solutions.
3. Knowledge Representation: This involves gathering detailed information about the problem and defining all
potential techniques that can be applied. Knowledge representation is essential for understanding the problem’s
context and available resources.
4. Problem Solving: The selection of the best techniques to address the problem is made in this step. It often
involves comparing various algorithms and approaches to determine the most effective method.
• Components of Problem Solving in AI
• Initial State: This represents the starting point for the AI agent,
establishing the context in which the problem is addressed. The initial state
may also involve initializing methods for problem-solving.
• Action: This stage involves selecting functions associated with the initial
state and identifying all possible actions. Each action influences the
progression toward the desired goal.
• Transition: This component integrates the actions from the previous stage,
leading to the next state in the problem-solving process. Transition modeling
helps visualize how actions affect outcomes.
• Goal Test: This stage verifies whether the specified goal has been achieved
through the integrated transition model. If the goal is met, the action
ceases, and the focus shifts to evaluating the cost of achieving that goal.
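The four components above can be expressed as a small problem class. The sketch below is illustrative only; the RouteProblem name, the toy map, and its costs are assumptions rather than material from the slides.

```python
# Minimal sketch (not from the slides) of the four problem-solving components.
class RouteProblem:
    """Initial state, actions, transition model, and goal test for a toy map."""

    def __init__(self, initial, goal, graph):
        self.initial = initial          # Initial State: where the agent starts
        self.goal = goal                # state that the Goal Test checks for
        self.graph = graph              # adjacency map: state -> {next state: step cost}

    def actions(self, state):
        # Action: all moves available from the current state
        return list(self.graph.get(state, {}))

    def result(self, state, action):
        # Transition: applying an action leads to the next state
        return action                   # here an action is simply "move to that node"

    def goal_test(self, state):
        # Goal Test: has the specified goal been achieved?
        return state == self.goal

    def step_cost(self, state, action):
        # the cost of achieving the goal is the sum of these step costs
        return self.graph[state][action]


# Hypothetical usage with made-up states S, A, G
problem = RouteProblem(initial="S", goal="G",
                       graph={"S": {"A": 1, "G": 12}, "A": {"G": 5}})
print(problem.actions("S"), problem.goal_test("G"))
```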
• 1. Search Algorithms
• A search problem consists of:
• A State Space. The set of all possible states you can be in.
• A Start State. The state from where the search begins.
• A Goal Test. A function that looks at the current state and returns whether or not it is the goal state.
• The solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
• The search algorithms in this section have no additional information about the goal node other than what is
provided in the problem definition. The plans to reach the goal state from the start state differ only in the order
and/or length of actions. Uninformed search is also called blind search. These algorithms can only generate
the successors and differentiate between goal and non-goal states.
• To carry out such a search, the algorithm needs:
• A problem graph, containing the start node S and the goal node G.
• A strategy, describing the manner in which the graph will be traversed to get to G.
• A fringe, which is a data structure used to store all the possible states (nodes) that you can reach from the current state.
Example:
Question. Which solution would DFS find to move from node S to node G if run on the graph below?
Time complexity of DFS: T(n) = 1 + n^2 + n^3 + ... + n^d = O(n^d), where d = the depth of the search tree = the number of levels of the search tree, and n^i = the number of nodes in level i.
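As a rough illustration of how an uninformed search like DFS works with a LIFO fringe, here is a minimal sketch; the adjacency dictionary g is a hypothetical stand-in, since the lecture's graph figure is not reproduced here.

```python
# Minimal DFS sketch (assumption: the graph is an adjacency dict of hashable states).
def dfs(graph, start, goal):
    """Return a plan (list of nodes) from start to goal, or None if none exists."""
    fringe = [[start]]                      # LIFO stack of partial plans
    visited = set()
    while fringe:
        path = fringe.pop()                 # most recently added plan first
        node = path[-1]
        if node == goal:                    # goal test
            return path
        if node in visited:
            continue
        visited.add(node)
        for successor in graph.get(node, []):   # generate successors
            fringe.append(path + [successor])
    return None

# Hypothetical graph; the lecture's figure is not reproduced here.
g = {"S": ["A", "G"], "A": ["B", "C"], "B": ["D"], "C": ["D", "G"], "D": ["G"]}
print(dfs(g, "S", "G"))
```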
Example:
Question. Which solution would BFS find to move from
node S to node G if run on the graph below?
• Solution. The equivalent search tree for the above graph is as follows. As BFS traverses the tree “shallowest node
first”, it would always pick the shallower branch until it reaches the solution (or it runs out of nodes, and goes to the
next branch). The traversal is shown in blue arrows.
Optimality: BFS is optimal as long as the costs of all edges are equal.
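The sketch below is the BFS counterpart, assuming the same hypothetical adjacency-dictionary graph; swapping the LIFO stack for a FIFO queue makes the search expand the shallowest node first, which is why BFS is optimal only when all edge costs are equal.

```python
# Minimal BFS sketch (assumption: adjacency-dict graph as in the DFS sketch above).
from collections import deque

def bfs(graph, start, goal):
    """Return the shallowest plan from start to goal, or None if none exists."""
    fringe = deque([[start]])               # FIFO queue of partial plans
    visited = {start}
    while fringe:
        path = fringe.popleft()             # shallowest plan first
        node = path[-1]
        if node == goal:                    # goal test
            return path
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                fringe.append(path + [successor])
    return None

# With the same hypothetical graph, BFS returns the fewest-actions plan S -> G.
g = {"S": ["A", "G"], "A": ["B", "C"], "B": ["D"], "C": ["D", "G"], "D": ["G"]}
print(bfs(g, "S", "G"))
```

Note that BFS counts actions, not edge costs, so the returned plan is only cost-optimal when every edge has the same cost, matching the optimality remark above.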
• Uniform Cost Search Algorithm (UCS)
• Uniform Cost Search (UCS) explores graphs by
expanding nodes from the start to the goal based on
edge costs. It finds the lowest-cost path, which is essential for optimal solutions when step costs vary.
• In the above figure, consider S to be the start node and G to be
the goal state.
• From node S we look for a node to expand, and we have nodes A
and G, but since it’s a uniform cost search, it’s expanding the node
with the lowest step cost, so node A becomes the successor rather
than our required goal node G.
• From A we look at its child nodes B and C. Since C has the lower step cost, the search traverses through node C.
• Then we look at the successors of C, i.e., D and G. Since the cost to D is lower, we expand along node D.
• Since D has only one child, G, which is our required goal state, we finally reach the goal state G by applying the UCS algorithm.
• If we traverse this way, our total path cost from S to G is just 6 even after passing through many nodes, rather
than going to G directly where the cost is 12, and 6 < 12 in terms of step cost. However, this may not be the case
for every graph.
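A minimal UCS sketch follows. The edge costs are assumptions (the lecture's figure is not reproduced); they are chosen so that, as in the walk-through above, the cheapest path S → A → C → D → G costs 6 while the direct edge S → G costs 12.

```python
# Minimal UCS sketch using a priority queue (heapq). The edge costs below are
# assumptions chosen to match the walk-through, not taken from the lecture figure.
import heapq

def ucs(graph, start, goal):
    """Return (cost, plan) of the cheapest path from start to goal, or None."""
    frontier = [(0, start, [start])]        # (path cost so far, node, plan)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # expand cheapest node first
        if node == goal:
            return cost, path
        for successor, step in graph.get(node, {}).items():
            new_cost = cost + step
            if new_cost < best_cost.get(successor, float("inf")):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None

g = {"S": {"A": 1, "G": 12}, "A": {"B": 4, "C": 1},
     "C": {"D": 1, "G": 5}, "D": {"G": 3}}
print(ucs(g, "S", "G"))   # expected: (6, ['S', 'A', 'C', 'D', 'G'])
```

Because the frontier always pops the lowest accumulated path cost, UCS returns the cheapest plan rather than the first plan it happens to find.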
• Depth-Limited Search Algorithm:
• A depth-limited search algorithm is similar to depth-first
search with a predetermined depth limit. Depth-limited search
can solve the drawback of the infinite path in depth-first
search. In this algorithm, the node at the depth limit is
treated as if it has no further successor nodes.
• Depth-limited search can terminate with two
conditions of failure:
• Standard failure value: it indicates that the problem does
not have any solution.
• Cutoff failure value: it indicates that there is no solution for the
problem within the given depth limit.
• Advantages
• 1. Prevents infinite loops in infinite or cyclic graphs: by limiting the depth, DLS avoids getting stuck in the
infinite loops that can occur in graphs with cycles or infinite branching.
• 2. Memory efficiency: like standard DFS, DLS uses less memory than breadth-first search (BFS), since it only
needs to store the nodes along the current path rather than all the nodes at a given level.
• 3. Targeted exploration: if the solution lies within the depth limit, it can be found faster, because DLS prioritizes
depth over breadth and avoids exploring paths deeper than the limit.
• Disadvantages
• 1. Incomplete for deep solutions: if the solution lies beyond the specified depth limit, DLS will fail to find it,
making it incomplete in such cases.
• 2. Risk of missing optimal solutions: since DLS prioritizes depth over breadth, it may not find the shallowest
solution within the depth limit, and it cannot guarantee the optimality of the solution found.
Completeness: The DLS algorithm is complete if the solution lies within the depth limit.
Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ), where b is the
branching factor of the search tree and ℓ is the depth limit.
Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ), where b is the
branching factor of the search tree and ℓ is the depth limit.
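The sketch below shows one way depth-limited search can be written, assuming the same hypothetical adjacency-dictionary graph as before; cycle checking is omitted for brevity, and the two return values mirror the standard/cutoff failure distinction above.

```python
# Minimal recursive depth-limited search sketch (assumption: adjacency-dict graph).
def dls(graph, node, goal, limit, path=None):
    """Return a plan to goal within `limit` steps, 'cutoff' if the limit was hit,
    or None (standard failure) if no solution exists from this node."""
    path = [node] if path is None else path + [node]
    if node == goal:
        return path                          # goal test
    if limit == 0:
        return "cutoff"                      # cutoff failure: depth limit reached
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = dls(graph, successor, goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None   # standard failure: no solution

g = {"S": ["A", "G"], "A": ["B", "C"], "B": ["D"], "C": ["D", "G"], "D": ["G"]}
print(dls(g, "S", "G", limit=1))   # plan within depth 1 (the direct edge S -> G)
print(dls(g, "S", "G", limit=0))   # 'cutoff': no solution within depth 0
```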
• Bidirectional Search Algorithm:
• The bidirectional search algorithm runs two simultaneous searches, one from the
initial state, called the forward search, and the other from the goal node, called the
backward search, to find the goal node. Bidirectional search replaces one
single search graph with two small subgraphs, in which one starts the search
from the initial vertex and the other starts from the goal vertex. The search stops
when these two graphs intersect each other.
• Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
• Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory.
• Bidirectional search can be extremely helpful when the graph is very large in size and there is no way to
make it smaller; in such cases, this approach becomes particularly useful.
• The cost of expanding nodes can be high in certain cases. In such scenarios, using this
approach can help reduce the number of nodes that need to be expanded.
• Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
• Finding an efficient way to check whether a match exists between the two search trees can be tricky,
which can increase the time it takes to complete the search.
• Example:
• In the below search tree, the bidirectional search algorithm is
applied. This algorithm divides one graph/tree into two sub-
graphs. It starts traversing from node 1 in the forward direction
and from goal node 16 in the backward direction.
• The algorithm terminates at node 9, where the two searches meet.
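As an illustrative sketch only: the lecture's 16-node tree is not reproduced here, so the bidirectional search below runs two level-by-level BFS frontiers over a small made-up undirected graph and stops when the frontiers intersect.

```python
# Minimal bidirectional BFS sketch (assumption: undirected adjacency-dict graph;
# the graph below is made up and stands in for the lecture's 16-node figure).
from collections import deque

def bidirectional_search(graph, start, goal):
    """Return the node where the forward and backward searches meet, or None."""
    if start == goal:
        return start
    forward, backward = {start}, {goal}
    f_frontier, b_frontier = deque([start]), deque([goal])
    while f_frontier and b_frontier:
        # expand one level of the forward search
        for _ in range(len(f_frontier)):
            node = f_frontier.popleft()
            for nbr in graph.get(node, []):
                if nbr in backward:          # frontiers intersect: searches meet
                    return nbr
                if nbr not in forward:
                    forward.add(nbr)
                    f_frontier.append(nbr)
        # expand one level of the backward search
        for _ in range(len(b_frontier)):
            node = b_frontier.popleft()
            for nbr in graph.get(node, []):
                if nbr in forward:
                    return nbr
                if nbr not in backward:
                    backward.add(nbr)
                    b_frontier.append(nbr)
    return None

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4, 6], 6: [5]}
print(bidirectional_search(g, 1, 6))   # meeting node somewhere in the middle
```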