Problem Solving
INTRODUCTION
In this module we see how an agent can find a sequence of actions that
achieves its goals when no single action will do.
A problem is an issue encountered by a system.
A solution is a method used to solve a particular problem.
Technically, a problem is defined by its components and the
relationships among them.
To provide a formal description of a problem, we need to understand
the following terms:
4. Can a computer that is simply given the problem return the solution,
or will solving the problem require interaction between the computer
and a person?
5. Is a large amount of knowledge absolutely required to solve the
problem, or is knowledge important only for the constraints?
6. Is a good solution to the problem universally predictable?
PROBLEM-SOLVING
Decomposition:
a. Break Down: Divide the problem into smaller subproblems or
components.
b. Hierarchy: Create a hierarchy of elements (e.g., functions, modules,
data structures).
iii. Patterns and Relationships:
a. Patterns: Look for recurring patterns, similarities, or regularities.
b. Dependencies: Understand how different components depend on each
other.
Steps in Problem Solving AI
3. IDENTIFICATION OF SOLUTIONS
3. Creativity:
Think outside the box.
Consider unconventional approaches.
4. Evaluation:
Assess feasibility, effectiveness, and trade-offs.
Prioritize based on impact and resources.
4. CHOOSING THE SOLUTION
Choosing the right solution is a critical step in problem-solving.
a. Evaluate Options:
i. Consider all potential solutions generated during brainstorming.
ii. Evaluate each option based on feasibility, effectiveness, and alignment
with goals.
b. Trade-offs:
i. Identify trade-offs associated with each solution.
ii. Weigh pros and cons. Some solutions may be faster but less accurate,
while others may require more resources.
5. IMPLEMENTATION:
Implementation refers to the process of putting a solution or plan into action.
It involves translating theoretical concepts or designs into practical code,
systems, or tangible outcomes.
b. Testing:
i. Test the implementation thoroughly.
ii. Use test cases to verify correctness and identify issues.
c. Integration:
i. Integrate the solution into the existing system or environment.
ii. Ensure compatibility with other components.
d. Documentation:
i. Document the implementation details (code comments, user manuals).
ii. Explain how to use and maintain the solution.
e. Deployment:
i. Deploy the solution to production or the intended environment.
ii. Monitor its performance and address any issues.
PROBLEM SOLVING AGENTS
A problem-solving agent refers to a type of intelligent agent designed
to address and solve complex problems or tasks in its environment.
These agents are fundamental in AI and are used in various
applications, from game-playing algorithms to robotics and
decision-making systems.
A problem solving agent is a kind of goal-based agent.
Problem-solving agents use atomic representations.
Key Characteristics and Components of Problem- Solving Agents
1. Perception:-
Problem-solving agents typically can perceive or sense their
environment. They gather information about the current state of the
world through sensors, cameras, or other data sources.
2. Knowledge Base:-
These agents often possess some form of knowledge representation
of the problem domain. This knowledge can be encoded in various ways
such as rules, facts, or models, depending on the specific problem.
3. Reasoning:-
Problem-solving agents employ reasoning mechanisms to make
decisions and select actions based on their perception and knowledge.
This involves processing information, making inferences, and selecting
the best course of action.
4. Planning:-
For many complex problems, problem-solving agents engage in
planning. They consider different sequences of actions to achieve their
goals and decide on the most suitable action plan.
5. Actuation:-
After determining the best course of action, problem-solving agents take
actions to interact with their environment.
6. Feedback:-
Problem-solving agents often receive feedback from their environment,
which they use to adjust their actions and refine their problem-solving strategies.
7. Learning:-
Agents may incorporate machine learning techniques to improve their
performance over time by learning from experience.
Goal-based agents that use more advanced factored or structured
representations are usually called planning agents.
Goal formulation, based on the current situation and the
performance measure, is the first step in problem solving.
A goal can also be considered as a set of world states: exactly those
states in which the goal is satisfied.
The task is to find out how to act, so that it reaches a goal state.
Problem formulation is the process of deciding what actions and states
to consider, given a goal.
The process of looking for a sequence of actions that reaches the goal is
called search.
A search algorithm takes a problem as input and returns a solution in the
form of an action sequence.
Once a solution is found, the actions it recommends can be carried out. This
is called the execution phase.
After formulating a goal and a problem to solve, the agent calls a search
procedure to solve it. It then uses the solution to guide its actions, doing
whatever the solution recommends as the next thing to do typically, the
first action of the sequence and then removing that step from the sequence.
Once the solution has been executed, the agent will formulate a new goal.
Well-defined problems and solutions
A description of what each action does; the formal name for this is the
transition model, specified by a function RESULT(s, a) that returns
the state that results from doing action a in state s. Successor refers to
any state reachable from a given state by a single action.
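The RESULT(s, a) transition model can be sketched as a plain function. The two-location world and the action names below are illustrative conventions, not a problem defined in the text.

```python
# Minimal sketch of a transition model RESULT(s, a) for a toy
# two-location world (locations "A" and "B" are illustrative).
def result(state, action):
    """Return the state reached by doing `action` in `state`."""
    if action == "GoRight" and state == "A":
        return "B"
    if action == "GoLeft" and state == "B":
        return "A"
    return state  # actions with no effect leave the state unchanged

def successors(state, actions=("GoLeft", "GoRight")):
    """All states reachable from `state` by a single action."""
    return {result(state, a) for a in actions}
```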
Together, the initial state, actions, and transition model implicitly
define the state space of the problem: the set of all states reachable
from the initial state by any sequence of actions. The state space
forms a directed network or graph in which the nodes are states and
the links between nodes are actions.
The map of Romania shown in Figure 3.2 can be interpreted as a
state-space graph if we view each road as standing for two driving
actions, one in each direction.
A path in the state space is a sequence of states connected by a
sequence of actions.
The goal test, which determines whether a given state is a goal
state. Sometimes there is an explicit set of possible goal states,
and the test simply checks whether the given state is one of them.
For example, in chess, the goal is to reach a state called
checkmate, where the king is under attack and cannot escape.
A path cost function that assigns a numeric cost to each path.
The problem-solving agent chooses a cost function that reflects
its own performance measure.
For the agent trying to get to Bucharest, time is of the essence, so the cost of
a path might be its length in kilometers. The cost of a path can be described
as the sum of the costs of the individual actions along the path. The step cost of
taking action a in state s to reach state s′ is denoted by c(s, a, s′).
The preceding elements define a problem and can be gathered into a single
data structure that is given as input to a problem-solving algorithm.
A solution to a problem is an action sequence that leads from the initial state
to a goal state.
Solution quality is measured by the path cost function, and an optimal
solution has the lowest path cost among all solutions.
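As a small worked example, the two Arad-to-Sibiu routes discussed later (140 km direct, 297 km via Zerind and Oradea) can be computed as sums of step costs. The intermediate distances (75, 71, and 151 km) are assumed from the standard Romania map, not stated in this text.

```python
# Path cost as the sum of step costs c(s, a, s'); distances in km.
step_cost = {
    ("Arad", "Sibiu"): 140,
    ("Arad", "Zerind"): 75,
    ("Zerind", "Oradea"): 71,
    ("Oradea", "Sibiu"): 151,
}

def path_cost(path):
    """Sum the step costs along a sequence of states."""
    return sum(step_cost[(a, b)] for a, b in zip(path, path[1:]))

direct = path_cost(["Arad", "Sibiu"])                      # 140
detour = path_cost(["Arad", "Zerind", "Oradea", "Sibiu"])  # 297
```

The direct route has the lower path cost, so among these two it is the optimal solution.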
Formulating problems
Toy problems
4. Transition model: The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost
square, and Sucking in a clean square have no effect.
5. Goal test: This checks whether all the squares are clean.
6. Path cost: Each step costs 1, so the path cost is the number of steps in
the path.
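The formulation above can be sketched in code. The state encoding (location plus the set of dirty squares) and the square names A and B are my own conventions for the two-square vacuum world.

```python
# Sketch of the two-square vacuum world: a state is
# (location, frozenset_of_dirty_squares).
def result(state, action):
    loc, dirty = state
    if action == "Left":
        loc = "A"               # Left in the leftmost square: no effect
    elif action == "Right":
        loc = "B"               # Right in the rightmost square: no effect
    elif action == "Suck":
        dirty = dirty - {loc}   # Suck in a clean square: no effect
    return (loc, frozenset(dirty))

def goal_test(state):
    """All squares are clean."""
    return not state[1]
```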
EXAMPLE PROBLEMS
Toy problems
Second example is 8-puzzle. An instance of 8-puzzle is shown in Figure 3.4
A tile adjacent to the blank space can slide into the space. The object is to
reach a specified goal state, such as the one shown on the right of the figure.
1. States: A state description specifies the location of each of the eight tiles
and the blank in one of the nine squares.
2. Initial state: Any state can be designated as the initial state. Note that
any given goal can be reached from exactly half of the possible initial
states.
3. Actions: The simplest formulation defines the actions as movements of the
blank space Left, Right, Up, or Down. Different subsets of these are possible
depending on where the blank is.
4. Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure 3.4, the resulting state has the
5 and the blank switched.
5. Goal test: This checks whether the state matches the goal configuration shown
in Figure 3.4.
6. Path cost: Each step costs 1, so the path cost is the number of steps in the path.
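A minimal sketch of this 8-puzzle formulation, assuming a state is a 9-tuple read row by row on the 3x3 board, with 0 marking the blank:

```python
def actions(state):
    """Legal blank moves depend on where the blank is."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, action):
    """Slide the blank; the displaced tile moves the other way."""
    i = state.index(0)
    delta = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def goal_test(state):
    return state == goal
```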
The 8-puzzle belongs to the family of sliding-block puzzles. This family is
known to be NP-complete.
The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The
15-puzzle (on a 4×4 board) has around 1.3 trillion states, and random
instances can be solved optimally in a few milliseconds by the best search
algorithms.
The 24-puzzle (on a 5×5 board) has around 10^25 states, and random
instances take several hours to solve optimally.
Our next example is the 8-queens problem.
The goal of the 8-queens problem is to place eight queens on a chessboard
such that no queen attacks any other.
A queen attacks any piece in the same row, column or diagonal.
Figure 3.5 shows an attempted solution that fails: the queen in the rightmost
column is attacked by the queen at the top left.
There are two main kinds of formulation.
An incremental formulation involves operators that augment the state
description, starting with an empty state; for the 8- queens problem, this
means that each action adds a queen to the state.
A complete state formulation starts with all 8 queens on the board and moves
them around.
States: Any arrangement of 0 to 8 queens on the board is a state.
Initial state: No queens on the board.
Actions: Add a queen to any empty square.
Transition model: Returns the board with a queen added to the specified
square.
Goal test: 8 queens are on the board, none attacked.
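The incremental formulation can be sketched as follows. Placing queens row by row (rather than on any empty square) is a common refinement that shrinks the state space; it is an assumption beyond the formulation above.

```python
# Incremental 8-queens: a state is a tuple of column indices,
# one per filled row (queens are added row by row).
def attacks(col1, row_gap, col2):
    """Same column or same diagonal."""
    return col1 == col2 or abs(col1 - col2) == row_gap

def actions(state):
    """Columns where a queen can be added in the next row."""
    row = len(state)
    return [c for c in range(8)
            if not any(attacks(q, row - r, c) for r, q in enumerate(state))]

def goal_test(state):
    return len(state) == 8

def solve(state=()):
    """Depth-first enumeration of the state space."""
    if goal_test(state):
        return state
    for c in actions(state):
        sol = solve(state + (c,))
        if sol:
            return sol
    return None
```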
Our final toy problem was devised by Donald Knuth and illustrates how
infinite state spaces can arise.
Knuth claimed that, starting with the number 4, a sequence of factorial,
square root, and floor operations will reach any desired positive integer.
For example, we can reach 5 from 4 as follows: floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5.
The problem definition is very simple:
States: Positive numbers.
Initial state: 4.
Actions: Apply factorial, square root, or floor operation (factorial for
integers only).
Transition model: As given by the mathematical definitions of the
operations.
Goal test: State is the desired positive integer.
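The example of reaching 5 from 4 can be checked numerically: apply factorial twice, take the square root five times, then floor.

```python
import math

# Knuth's example: reach 5 from 4 using only factorial, sqrt, floor.
n = math.factorial(math.factorial(4))   # (4!)! = 24!
for _ in range(5):
    n = math.sqrt(n)                    # five square roots
five = math.floor(n)                    # floor(sqrt^5((4!)!)) = 5
```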
Real-world problems
Route-finding algorithms are used in a variety of applications. Consider the
airline travel problems that must be solved by a travel-planning Web site:
States: Each state obviously includes a location (e.g., an airport) and the
current time.
Initial state: This is specified by the query.
Actions: Take any flight from the current location, in any seat class,
leaving after the current time, leaving enough time for within-airport
transfer if needed.
Transition model: The state resulting from taking a flight will have the
destination as the current location and the arrival time as
the current time.
VLSI layout
A VLSI layout problem requires positioning millions of components and
connections on a chip to minimize area, minimize circuit delays,
minimize stray capacitances, and maximize manufacturing yield.
The layout problem comes after the logical design phase and is usually
split into two parts: cell layout and channel routing.
In cell layout, the primitive components of the circuit are grouped into
cells, each of which performs some recognized function.
Each cell has a fixed footprint (size and shape) and requires a certain
number of connections to each of the other cells.
The aim is to place the cells on the chip so that they do not overlap and
so that there is room for the connecting wires to be placed between the
cells.
Channel routing finds a specific route for each wire through the gaps
between the cells.
These search problems are extremely complex, but definitely worth
solving.
Robot navigation
Robot navigation is a generalization of the route-finding problem described earlier.
Rather than following a discrete set of routes, a robot can move in a
continuous space with (in principle) an infinite set of possible actions
and states.
For a circular robot moving on a flat surface, the space is essentially
two-dimensional.
When the robot has arms and legs or wheels that must also be
controlled, the search space becomes many-dimensional.
Advanced techniques are required just to make the search space finite.
Automatic assembly sequencing
Automatic assembly sequencing of complex objects by a robot was first
demonstrated by Freddy.
In assembly problems, the aim is to find an order in which to assemble the parts
of some object.
If the wrong order is chosen, there will be no way to add some part later in the
sequence without undoing some of the work already done.
Checking a step in the sequence for feasibility is a difficult geometrical search
problem closely related to robot navigation.
Thus, the generation of legal actions is the expensive part of assembly
sequencing.
Another important assembly problem is protein design, in which the goal is to
find a sequence of amino acids that will fold into a three-dimensional protein
with the right properties to cure some diseases.
Any practical algorithm must avoid exploring all but a tiny fraction of the
state space.
Searching for solution
A solution is an action sequence, so search algorithms work by considering
various possible action sequences.
The possible action sequences starting at the initial state form a search tree
with the initial state at the root
The branches are actions and the nodes correspond to states in the state space
of the problem.
The root node of the tree corresponds to the initial state.
The first step is to test whether this is a goal state. Then we need to consider
taking various actions.
We do this by expanding the current state: applying each legal action to the
current state, thereby generating a new set of states.
Loopy path: a path from Arad to Sibiu and back to Arad again. We say that
In(Arad) is a repeated state in the search tree, generated in this case by a
loopy path.
Considering such loopy paths means that the complete search tree for
Romania is infinite, because there is no limit to how often one can traverse a
loop. Loops can cause certain algorithms to fail, making otherwise solvable
problems unsolvable. Fortunately, there is no need to consider loopy paths.
Redundant paths: exist whenever there is more than one way to get from
one state to another, e.g., the paths Arad to Sibiu (140 km long) and Arad
to Zerind to Oradea to Sibiu (297 km long).
TREE-SEARCH algorithm
We augment the TREE-SEARCH algorithm with a data structure called the
explored set (also known as the closed list), which remembers every expanded node.
Newly generated nodes that match previously generated ones (in the explored
set or the frontier) can be discarded instead of being added to the frontier.
GRAPH-SEARCH algorithm
Each state appears in the state-space graph only once, but it may appear in
the search tree multiple times. The GRAPH-SEARCH frontier contains at most
one copy of each state, so we can think of the algorithm as growing a tree
directly on the state-space graph: it never adds a node if its state has already
been expanded or if a node pointing to the same state is already in the frontier.
The frontier separates the explored region from the unexplored region, so that
every path from the initial state to an unexplored state has to pass
through a state in the frontier.
As every step moves a state from the frontier into the explored region while
moving some states from the unexplored region into the frontier, we see that
the algorithm is systematically examining the states in the state space, one
by one, until it finds a solution.
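A minimal graph-search skeleton along these lines, assuming a problem object that exposes initial_state, actions, result, and goal_test (hypothetical names, not a fixed API):

```python
from collections import deque

def graph_search(problem):
    """Sketch of GRAPH-SEARCH with an explored set."""
    frontier = deque([problem.initial_state])
    explored = set()
    while frontier:
        state = frontier.popleft()
        if problem.goal_test(state):
            return state
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            # discard nodes whose state is already expanded or queued
            if child not in explored and child not in frontier:
                frontier.append(child)
    return None
```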
Infrastructure for search algorithms
Search algorithms require a data structure to keep track of the search
tree that is being constructed. For each node n of the tree, we have a
structure that contains four components:
n.STATE: the state in the state space to which the node corresponds;
n.PARENT: the node in the search tree that generated this node;
n.ACTION: the action that was applied to the parent to generate the
node;
n.PATH-COST: the cost, traditionally denoted by g(n), of the path
from the initial state to the node, as indicated by the parent pointers.
Infrastructure for search algorithms
Given the components for a parent node, it is easy to see how to
compute the necessary components for a child node. The function
CHILD-NODE takes a parent node and an action and returns the
resulting child node.
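A sketch of the node structure and CHILD-NODE, assuming the problem provides result(s, a) and a per-action step cost (the name step_cost is an assumption):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # n.STATE
    parent: Optional["Node"] = None   # n.PARENT
    action: Any = None                # n.ACTION
    path_cost: float = 0.0            # n.PATH-COST, i.e. g(n)

def child_node(problem, parent, action):
    """Build the child node reached by applying `action` to `parent`."""
    state = problem.result(parent.state, action)
    return Node(state, parent, action,
                parent.path_cost + problem.step_cost(parent.state, action))

def solution(node):
    """Follow parent pointers to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```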
Search Algorithm using Queue Data Structure
The frontier needs to be stored in such a way that the search algorithm can
easily choose the next node to expand according to its preferred strategy.
The operations on a queue are as follows:
EMPTY?(queue) returns true only if there are no more elements in the
queue.
POP(queue) removes the first element of the queue and returns it.
INSERT(element, queue) inserts an element and returns the resulting queue.
Three common variants of the queue are:
1. first-in, first-out or FIFO queue, which pops the oldest element of the
queue;
2. last-in, first-out or LIFO queue (also known as a stack), which pops the
newest element of the queue.
3. priority queue, which pops the element of the queue with the highest
priority according to some ordering function.
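The three variants map directly onto Python's standard library; a quick sketch:

```python
from collections import deque
import heapq

fifo = deque()                 # FIFO queue: append right, pop left
fifo.append("a"); fifo.append("b")
oldest = fifo.popleft()        # pops the oldest element, "a"

lifo = []                      # LIFO queue (stack): append and pop the end
lifo.append("a"); lifo.append("b")
newest = lifo.pop()            # pops the newest element, "b"

pq = []                        # priority queue: pop the smallest key
heapq.heappush(pq, (3, "low"))
heapq.heappush(pq, (1, "high"))
best = heapq.heappop(pq)[1]    # pops the highest-priority element, "high"
```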
Measuring problem-solving performance
Time and space complexity are always expressed in terms of three quantities:
b, the branching factor or maximum number of successors of any node; d, the
depth of the shallowest goal node (i.e., the number of steps along the path
from the root); and m, the maximum length of any path in the state space.
Time is often measured in terms of the number of nodes generated during the
search, and space in terms of the maximum number of nodes stored in
memory. To assess the effectiveness of a search algorithm, we can consider
just the search cost (which typically depends on the time complexity but can
also include a term for memory usage), or we can use the total cost, which
combines the search cost and the path cost of the solution found.
UNINFORMED SEARCH STRATEGIES
Uninformed search (also known as blind search) means that the strategies
have no additional information about states beyond that provided in the
problem definition.
All they can do is generate successors and distinguish a goal state from a
non-goal state.
All search strategies are distinguished by the order in which nodes are
expanded.
Strategies that know whether one non-goal state is more promising than
another are called informed search or heuristic search strategies.
Breadth-first search
Space complexity: For any kind of graph search, which stores every
expanded node in the explored set, the space complexity is always within a
factor of b of the time complexity. For breadth-first graph search in
particular, every node generated remains in memory, so the space
complexity is also O(b^d).
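A minimal breadth-first graph search over an explicit adjacency map (a sketch, with the goal test applied when nodes are generated, as is usual for BFS):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a shortest path (by number of steps) from start to goal."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for neighbor in graph.get(path[-1], []):
            if neighbor not in explored:
                if neighbor == goal:     # goal test on generation
                    return path + [neighbor]
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None                          # no path exists
```

Every generated node is kept either in `explored` or on the frontier, which is the O(b^d) memory behavior described above.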