Unit 3 Problem Solving and Search Algorithms
Problem solving is a systematic search through a range of possible actions in order to reach some predefined goal or
solution.
For problem solving, a kind of goal-based agent called a problem-solving agent is used.
This agent first formulates a goal and a problem, searches for a sequence of actions that would solve the problem,
and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.
– Intelligent agents maximize their performance measure by adopting a goal and aiming to
satisfy it.
– Goals help organize behavior by limiting the objectives that the agent is trying to achieve and
hence the actions it needs to consider.
– A goal is the set of world states in which the goal is satisfied.
– Therefore, the goal formulation step specifies which world states count as successful.
– Problem formulation is the process of deciding what actions and states to consider, given a
goal.
– Therefore, the agent’s task is to find out how to act, now and in the future, so that it reaches
a goal state.
– Before it can do this, it needs to decide (or we need to decide on its behalf) what sorts of
actions and states it should consider.
Search for a solution:
– The process of looking for a sequence of actions that reaches the goal is called search.
– A search algorithm takes a problem as input and returns a solution in the form of an action
sequence.
AI-chapter-3: problem Solving by searching Sandesh Shiwakoti 9815945474
Execution:
– Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
– Once a solution has been executed, the agent will formulate a new goal.
Problem: This is the task or challenge that needs to be solved. It could be anything from finding the shortest path between
two points to playing a game optimally.
State Space: A state space is a mathematical representation of a problem that defines all possible states that the
problem can be in.
Search: Searching the state space involves exploring different states to find a solution to the problem.
Problem Formulation
– Initial state
– Actions
– Transition model
– Goal Test
– Path Cost
• Initial state: The state in which the agent starts.
• Actions: A description of the possible actions available to the agent. During problem
formulation we should specify all possible actions available in each state s.
• Transition model: A description of what each action does is called the transition model. To
formulate the transition model, we take a state s and an action a applicable in that state,
and specify the resulting state s'.
• Goal Test: Determines whether a given state is a goal state or not.
• Path Cost: The sum of the costs of each step along the path from the initial state to the given state.
Well-defined Problems
A well-defined problem is a specific type of problem where the initial state or starting position, the allowable
operations, and the goal state are clearly specified, and a unique solution can be shown to exist.
•Initial state
•Operator or successor function - for any state x returns s(x), the set of states reachable from x with one action
•State space - all states reachable from initial by any sequence of actions
•Path cost - function that assigns a cost to a path. Cost of a path is the sum of costs of individual actions along the
path
For example, imagine a maze solving problem. The state could be the current location of the
agent in the maze. The operator function would then tell you which neighboring cells the agent
can move to based on the possible actions (like move up, down, left, or right).
Search strategies in Artificial Intelligence (AI) are methods used to navigate through the problem space to find the
most efficient path from the initial state to the goal state.
Each of these strategies has its own strengths and weaknesses, and the choice of which one to use will depend on
the specific problem at hand.
In a broader sense, search strategies in AI can be categorized into two main types:
1. Uninformed Search Strategies (Blind Search): These strategies do not have any additional information about
states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal
state from a non-goal state. Examples include Breadth-First Search, Depth-First Search, and Uniform Cost Search.
2. Informed Search Strategies (Heuristic Search): These strategies have some additional information about the
problem. They use heuristic functions to estimate how close a state is to a goal. The heuristic function helps the
algorithm make decisions about which path to follow. Examples include Greedy Best-First Search and A* Search.
– Completeness: An algorithm is said to be complete if it definitely finds a solution to the problem, if one exists.
– Time complexity: How long does it take to find a solution? Usually measured in terms of the number of nodes expanded
during the search.
– Space Complexity: How much space is used by the algorithm? Usually measured in terms of the maximum number of nodes
in memory at a time
– Optimality/Admissibility: If a solution is found, is it guaranteed to be an optimal one? For example, is it the one with
minimum cost?
• Blind search does not use any information about states beyond the problem definition to search for a solution
in the search space.
• Common uninformed strategies include:
1. Breadth-First Search
2. Depth-First Search (including its depth-limited and iterative deepening variants)
3. Uniform Cost Search
4. Bidirectional Search
• Breadth-First Search is a simple strategy in which the root node is expanded first, then all the successors of the
root node are expanded next, then their successors, and so on.
• In general, all nodes at a given depth in the search tree are expanded before any nodes at the next level,
until the goal is reached. – I.e., expand the shallowest unexpanded node.
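A minimal Python sketch of this strategy, using a FIFO queue of paths as the frontier; the example graph and node names are hypothetical, not taken from the text:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand the shallowest unexpanded node first (FIFO frontier)."""
    frontier = deque([[start]])          # queue of paths, shallowest first
    explored = set()
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node not in explored:
            explored.add(node)
            for successor in graph.get(node, []):
                frontier.append(path + [successor])
    return None

# Hypothetical example graph given as adjacency lists
graph = {'A': ['B', 'C', 'D'], 'B': ['E'], 'C': ['F'], 'D': ['G'],
         'E': [], 'F': [], 'G': []}
print(breadth_first_search(graph, 'A', 'G'))  # ['A', 'D', 'G']
```

Because whole levels are expanded in order, the first path returned is the shallowest one.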
• Uniform cost search begins at the root node and continues by visiting the node with the least total
cost from the root node. Nodes are visited in this manner until a goal is reached.
• Once a goal node has been generated, uniform cost search keeps going, choosing for expansion any node
whose total cost from the root is less than the cost of the goal path already found,
and thereby adding a second path.
• The algorithm then checks whether this new path is better than the old one; if so, the old one is
discarded, the new one is selected, and the solution is returned.
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This
algorithm comes into play when a different cost is available for each edge. The primary goal of
the uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.
Uniform-cost search expands nodes according to their path costs from the root node. It can be
used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search
algorithm is implemented by the priority queue. It gives maximum priority to the lowest
cumulative cost. Uniform cost search is equivalent to BFS algorithm if the path cost of all edges is
the same.
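The priority-queue implementation described above can be sketched as follows; the weighted graph, node names, and step costs are illustrative assumptions, not taken from the text:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest cumulative path cost g(n)."""
    frontier = [(0, start, [start])]     # priority queue keyed on path cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # goal test on expansion => optimal path
        for successor, step_cost in graph.get(node, []):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(successor, float('inf')):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None

# Hypothetical weighted graph: node -> [(successor, step cost), ...]
graph = {'S': [('A', 1), ('G', 12)], 'A': [('B', 3), ('C', 1)],
         'B': [('D', 3)], 'C': [('D', 1), ('G', 2)], 'D': [('G', 3)]}
print(uniform_cost_search(graph, 'S', 'G'))  # (4, ['S', 'A', 'C', 'G'])
```

Note that the direct edge S-G (cost 12) is generated early but only returned if no cheaper path emerges, exactly as described above.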
• The graph below shows the step-costs for different paths going from the start (S) to the goal (G).
• Use uniform cost search to find the optimal path to the goal.
Depth First Search:
• Looks for the goal node among all the descendants of the current node before considering the siblings
of this node.
• I.e., expand the deepest unexpanded node (expand the most recently generated deepest node first).
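A sketch of this strategy with a LIFO stack as the frontier; the example tree is a hypothetical assumption:

```python
def depth_first_search(graph, start, goal):
    """Expand the deepest (most recently generated) node first (LIFO frontier)."""
    frontier = [[start]]                 # stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        if node not in explored:
            explored.add(node)
            # push successors in reverse so the first listed child is expanded first
            for successor in reversed(graph.get(node, [])):
                frontier.append(path + [successor])
    return None

# Hypothetical example tree
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': [], 'E': [], 'F': [], 'G': []}
print(depth_first_search(graph, 'A', 'G'))  # ['A', 'C', 'G']
```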
• Depth-limited search solves the infinite-path problem of DFS by placing a limit L on the depth.
• Yet it introduces another problem if we are unable to find a good guess for the limit L. Let d be the depth of the
shallowest solution:
– If L < d, the search is incomplete, because no solution lies within the depth limit.
– If L > d, the search is not optimal.
Suppose we do not know the depth of the goal node M; then iterative deepening search proceeds as follows:
Iterative deepening search runs depth-limited search with l = 0, then l = 1, l = 2, and so on, until a solution is found.
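The iterative-deepening procedure can be sketched as a depth-limited DFS called with increasing limits; the graph below is a hypothetical example:

```python
def depth_limited_search(graph, node, goal, limit, path):
    """Recursive DFS that refuses to go deeper than `limit`."""
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff reached, no solution here
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1,
                                      path + [successor])
        if result is not None:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    """Run depth-limited search with l = 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(iterative_deepening_search(graph, 'A', 'E'))  # ['A', 'C', 'E']
```

Each iteration re-expands the shallow nodes, but since most nodes of a tree lie at the deepest level, the repeated work is modest.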
Bidirectional Search:
• This search is used when a problem has a single goal state that is given explicitly and all the
node-generation operators have inverses.
• It is used to find the shortest path from an initial node to the goal node, along with that path,
rather than just the goal itself.
• It works by searching forward from the initial node and backward from the goal node
simultaneously, hoping that the two searches meet in the middle.
• Check at each stage whether the nodes generated by one search have also been generated by the
other, i.e., whether they meet in the middle.
• If so, the concatenation of the two paths is the solution.
– Only a slight modification of DFS and BFS is needed to perform this search.
– It is theoretically more efficient than unidirectional search.
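A hedged sketch of bidirectional search using two BFS waves over a hypothetical undirected graph; the helper names (`expand`, `fwd_parent`, `bwd_parent`) are my own:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Alternate BFS levels from start and goal until the frontiers meet."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}   # for path reconstruction
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parent, other_parent):
        for _ in range(len(frontier)):   # expand exactly one BFS level
            node = frontier.popleft()
            for successor in graph.get(node, []):
                if successor not in parent:
                    parent[successor] = node
                    if successor in other_parent:
                        return successor  # the two searches meet here
                    frontier.append(successor)
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parent, bwd_parent) \
            or expand(bwd_frontier, bwd_parent, fwd_parent)
        if meet:
            # concatenate the forward half-path and the reversed backward half-path
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

# Hypothetical undirected chain, given as symmetric adjacency lists
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}
print(bidirectional_search(graph, 'A', 'E'))  # ['A', 'B', 'C', 'D', 'E']
```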
Disadvantages of uninformed search:
Inefficiency in complex problems: Uninformed searches explore all possibilities blindly, which can be very
time-consuming for problems with vast search spaces.
Not guaranteed to find optimal solution: Since uninformed searches don't consider the "cost" of moving
between states, they might find a solution, but not necessarily the most efficient one.
Getting stuck in loops: Certain uninformed search algorithms, like depth-first search, can get stuck
in infinite loops if the problem has cycles or dead ends.
Heuristic Search: Uses domain-dependent (heuristic) information beyond the definition of the problem itself in
order to search the space more efficiently. Heuristic information can be used in:
– Deciding which node to expand next, instead of doing the expansion in a strictly breadth-first or depth-
first order;
– In the course of expanding a node, deciding which successor or successors to generate, instead of
blindly generating all possible successors at one time;
– Deciding that certain nodes should be discarded, or pruned, from the search space.
• Informed search defines a heuristic function, h(n), that estimates the "goodness" of a node n.
• The heuristic function is an estimate, based on domain-specific information computable from the
current state description, of how close we are to a goal.
• Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from state n to a goal state.
– A key component of f(n) is a heuristic function, h(n), which encodes additional knowledge of the problem.
– Based on the evaluation function, best-first search can be categorized into greedy best-first search and A* search.
Greedy Best-First Search:
– The evaluation function, based on the heuristic function alone, is used to estimate which node is closest to the goal node.
– Therefore, evaluation function f(n) = heuristic function h(n) = estimated cost of the path from node n to the goal
node.
– E.g., hSLD(n) = straight-line distance from n to the goal. – Note: g(root) = 0 and h(goal) = 0.
1. Initialize a tree with the root node (the start node) in the open list.
2. If the open list is empty, return failure; otherwise, add the current node to the closed list.
3. Remove the node with the lowest h(n) value from the open list for exploration.
4. If a child node is the target, return success. Otherwise, if the node is in neither the open nor the closed list,
add it to the open list for exploration.
f(n) = h(n), where h(n) <= h*(n) (the actual cost to the goal).
Performance of greedy best-first search:
– Completeness: No
– Optimality: No
– Time complexity: O(b^m)
– Space complexity: O(b^m)
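The greedy strategy above, which orders the frontier by h(n) alone, can be sketched as follows; the graph and heuristic values are hypothetical:

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Always expand the frontier node with the lowest heuristic value h(n)."""
    frontier = [(h[start], start, [start])]  # priority queue keyed on h(n) only
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node not in explored:
            explored.add(node)
            for successor in graph.get(node, []):
                heapq.heappush(frontier,
                               (h[successor], successor, path + [successor]))
    return None

# Hypothetical graph and heuristic values (note h(goal) = 0)
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 3, 'B': 2, 'G': 0}
print(greedy_best_first_search(graph, h, 'S', 'G'))  # ['S', 'B', 'G']
```

Since path cost g(n) is ignored, the returned path is the one that *looked* closest at each step, which need not be optimal.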
A* Search:
– It evaluates nodes using the evaluation function f(n) = g(n) + h(n) = estimated cost of the cheapest
solution through n.
• Where g(n) is the actual cost of the shortest path traveled from the initial node to the current node; it helps
avoid expanding paths that are already expensive.
• And h(n) is the estimated ("heuristic") distance from the current node to the goal; it estimates which node
is closest to the goal node.
– Nodes are visited in this manner until a goal is reached.
• Here, the candidate nodes for expansion are {E, C and G}.
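A sketch of A* under the evaluation function f(n) = g(n) + h(n); the weighted graph and the (admissible) heuristic values below are illustrative assumptions:

```python
import heapq

def a_star_search(graph, h, start, goal):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for successor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(successor, float('inf')):
                best_g[successor] = new_g
                heapq.heappush(frontier, (new_g + h[successor], new_g,
                                          successor, path + [successor]))
    return None

# Hypothetical weighted graph and admissible heuristic (h(n) <= true cost to goal)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 5)], 'G': []}
h = {'S': 7, 'A': 6, 'B': 2, 'G': 0}
print(a_star_search(graph, h, 'S', 'G'))  # (8, ['S', 'A', 'B', 'G'])
```

With an admissible h(n), the first goal popped from the queue carries an optimal cost, so A* is both complete and optimal here.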
Local Search Algorithms:
• Focus on current state: They operate on a single current state or a few states at a time.
• Neighborhood exploration: They explore neighboring states, which are valid configurations with
small changes from the current state.
• Iterative improvement: They keep searching for better states in the neighborhood until a
stopping condition is met.
• No path retention: The paths taken during the search are not stored or analyzed.
• Low memory usage: Due to their focus on the current state and its neighbors, they require
minimal memory compared to algorithms that explore the entire state space.
• Hill Climbing Search (local search, greedy approach, no backtracking):
– It is a local search algorithm; it has knowledge only about the local neighborhood, not the complete global
domain.
– It stops if it cannot find a better move.
– If it does not find a better move, it cannot backtrack.
– Hill climbing can be used to solve problems that have many solutions, some of which are
better than others.
– It starts with a random (potentially poor) solution, and iteratively makes small changes
to the solution, each time improving it a little. When the algorithm cannot see any
improvement anymore, it terminates.
– Ideally, at that point the current solution is close to optimal, but it is not guaranteed that
hill climbing will ever come close to the optimal solution.
1. Evaluate the initial state; if it is a goal state, terminate. Otherwise, make the initial state the
current state.
2. Select a new operator for this state and generate a new state.
3. If the current state is a goal state or no new operators are available, terminate. Otherwise, repeat
from step 2.
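The steps above amount to repeatedly moving to the best neighbor while it improves; a minimal sketch on a hypothetical one-dimensional objective:

```python
def hill_climbing(start, neighbors, value):
    """Greedy local search: move to the best neighbor while it improves."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current               # local maximum: no neighbor is better
        current = best

# Hypothetical example: maximize f(x) = -(x - 3)^2 over integer states,
# where the neighbors of x are x - 1 and x + 1
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbors, value))  # 3
```

On this smooth objective the climb reaches the global maximum; on objectives with local maxima, plateaus, or ridges it can stop early, as discussed below.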
(Figure: 8-puzzle hill-climbing example — successor boards shown with their heuristic values.)
• Hill climbing suffers from the following problems:
• The Local Maximum problem:
– A local maximum is a state that is better than all of its neighbors but not better than some other
states farther away.
– Solution: Backtrack to some earlier node and try a different direction.
• The Plateau problem:
– Plateau is a flat area of the search space in which a whole set of neighboring states have the same value.
– On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.
– Solution: Make a big jump in some direction to try to get a new section of the search space .
• The Ridge problem:
– A ridge is an area of the search space that is higher than the surrounding areas and that itself has a slope.
– Due to the steep slopes, the search direction is not towards the top but towards the side (it oscillates from
side to side).
– Solution: Apply two or more rules (e.g., bidirectional moves) before doing the test.
In previous topics, we studied search strategies associated with only a single agent that aims to
find the solution.
An environment with more than one agent is termed a multi-agent environment, in which
each agent is an opponent of the others and they play against each other.
Competitive Scenarios with Conflicting Goals:
•Each agent strives to maximize their own utility or minimize their own loss.
•The actions of one agent directly influence the outcomes and goals of other agents.
•The agents may have incomplete information about each other's strategies, leading to strategic
uncertainty.
Minimax Search:
• The minimax search algorithm is a game-search algorithm that applies the DFS procedure.
• It assumes:
• Both players play optimally from the current state to the end of the game.
• A suitable static evaluation (utility) value is available at the leaf nodes.
• Given the values of the terminal nodes, the value of each node (MAX and MIN) is determined by backing up
the values of its children until a value is computed for the root node.
– Minimax search traverses the entire search tree, but this is not always feasible due to time
limitations.
Performance Measure:
• Complete: The minimax algorithm is complete; it will definitely find a solution (if one exists) in a finite
search tree.
• Optimal: The minimax algorithm is optimal if both opponents play optimally.
• Time complexity: As it performs DFS on the game tree, the time complexity of minimax is O(b^m),
where b is the branching factor of the game tree and m is the maximum depth of the tree.
• Space complexity: The space complexity of minimax is, like that of DFS, O(bm).
• We can improve the performance of the minimax algorithm through alpha-beta pruning.
• Basic idea: if a move is determined to be worse than another move already examined, then there
is no need for further examination of the node.
– MAX player:
• Is val > alpha? (val is the value backed up from the children of the MAX node)
– If so, update alpha.
• Is alpha >= beta?
– If so, prune the remaining children (called a beta cutoff).
• Return alpha.
– MIN player:
• Is val < beta? (val is the value backed up from the children of the MIN node)
– If so, update beta.
• Is alpha >= beta?
– If so, prune the remaining children (called an alpha cutoff).
• Return beta.
Constraint Satisfaction Problems (CSPs):
A CSP is usually represented as an undirected graph, called a constraint graph, where
the nodes are the variables and the edges are the binary constraints.
(Figure: constraint graph with nodes A, B, C, D.)
X = {A, B, C, D}
D = {R, G, B}
C = {A != B, A != C, A != D, B != D, C != D}
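A hedged sketch of solving this coloring CSP by simple backtracking; the function name, recursive formulation, and variable ordering are my own choices, while the variables, domain, and constraints follow the example above:

```python
def backtracking_coloring(variables, domain, neighbors, assignment=None):
    """Assign colors one variable at a time; backtrack on a constraint violation."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                # every variable consistently assigned
    var = next(v for v in variables if v not in assignment)
    for color in domain:
        # binary constraint: adjacent variables must get different colors
        if all(assignment.get(n) != color for n in neighbors[var]):
            assignment[var] = color
            result = backtracking_coloring(variables, domain, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]          # undo and try the next color
    return None

# Constraint graph from the example: edges A-B, A-C, A-D, B-D, C-D
variables = ['A', 'B', 'C', 'D']
domain = ['R', 'G', 'B']
neighbors = {'A': ['B', 'C', 'D'], 'B': ['A', 'D'],
             'C': ['A', 'D'], 'D': ['A', 'B', 'C']}
solution = backtracking_coloring(variables, domain, neighbors)
print(solution)  # {'A': 'R', 'B': 'G', 'C': 'G', 'D': 'B'}
```

Note that B and C may share a color because there is no edge (constraint) between them.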