
UNIT-2

CONTENTS
• Beyond Classical Search:
• Hill-climbing search
• Simulated annealing search
• Local Search in Continuous Spaces
• Searching with Non-Deterministic Actions
• Searching with Partial Observations
• Online Search Agents and Unknown Environment
• Constraint Satisfaction Problems:
• Defining Constraint Satisfaction Problems
• Constraint Propagation
• Backtracking Search for CSPs
• Local Search for CSPs
• The Structure of Problems
Hill Climbing Algorithm
• Hill climbing is a simple optimization algorithm used in Artificial Intelligence
(AI) to find the best possible solution for a given problem, from a set of
possible solutions.

• Hill Climbing is a heuristic search used for mathematical optimization
problems in the field of Artificial Intelligence.

• Given a large set of inputs and a good heuristic function, it tries to find a
sufficiently good solution to the problem.

• This solution may not be the global optimum.

• Here, "mathematical optimization problems" means that hill climbing solves
problems where we need to maximize or minimize a given real function by
choosing values from the given inputs.

• Example: the Travelling Salesman Problem, where we need to minimize the
distance traveled by the salesman.
• ‘Heuristic search’ means that this search algorithm may not
find the optimal solution to the problem.
• However, it will give a good solution in a reasonable time.

• A heuristic function is a function that ranks the possible
alternatives at any branching step of the search algorithm,
based on the available information.

• It helps the algorithm select the best route out of the possible routes.
• The hill climbing algorithm is a local search algorithm
that continuously moves in the direction of increasing
elevation/value to find the peak of the mountain, i.e., the
best solution to the problem.

• It terminates when it reaches a peak where no neighbor
has a higher value.

• It is also called greedy local search, as it looks only at its
immediate neighbor states and not beyond them.
• A node of the hill climbing algorithm has two components:
• state, and
• value.

• In this algorithm, we don't need to maintain a search tree or
graph, as it keeps only a single current state.
Features of Hill Climbing:

Generate and Test variant:

• Hill Climbing is a variant of the generate-and-test algorithm.
• The generate-and-test algorithm is as follows:
1. Generate possible solutions.
2. Test to see if this is the expected solution.
3. If the solution has been found, quit; else go to step 1.

• Hence we call Hill Climbing a variant of the generate-and-test
algorithm, as it takes feedback from the test procedure.
• This feedback is then utilized by the generator in deciding the
next move in the search space.
• No backtracking:
• It does not backtrack through the search space, as it does not
remember previous states.

• Greedy approach:
• At any point in the state space, the search moves only in the
direction that optimizes the cost function, in the hope of finding
the optimal solution at the end.
State-space Diagram for Hill Climbing:
• The state-space diagram is a graphical representation of the set
of states our search algorithm can reach vs. the value of our
objective function (the function we wish to maximize).

• X-axis: denotes the state space, i.e., the states or configurations our
algorithm may reach.
• Y-axis: denotes the value of the objective function corresponding to a
particular state.

• The best solution is the state where the objective function has its
maximum value (the global maximum).
Different regions in the State Space Diagram:
• Local maximum:
• It is a state that is better than its neighboring states; however, there exists
a state that is better still (the global maximum).
• This state is better because the value of the objective function here is
higher than at its neighbors.

• Global maximum:
• It is the best possible state in the state-space diagram.
• This is because, at this state, the objective function has its highest value.

• Plateau/flat local maximum:
• It is a flat region of the state space where neighboring states have the same
value.
• Ridge:
• It is a region that is higher than its
neighbors but itself has a slope.
• It is a special kind of local maximum.
• Current state:
• The region of the state space
diagram where we are currently
present during the search.
• Shoulder:
• It is a plateau that has an uphill edge.
Types of Hill Climbing Algorithm:

• Simple hill Climbing

• Steepest-Ascent hill-climbing

• Stochastic hill Climbing


1. Simple hill Climbing
• It examines the neighboring nodes one by one and selects the first
neighboring node that optimizes the current cost, setting it as the
current state.
• It checks only one successor state at a time; if that state is better
than the current state, it moves there, otherwise it stays where it is.
• This algorithm has the following features:
• Less time consuming
• Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
• Step 1: Evaluate the initial state; if it is the goal state, then return
success and stop.
• Step 2: Loop until a solution is found or there is no new
operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then make the new state the
current state.
• Else, if it is not better than the current state, then return to Step 2.
• Step 5: Exit.
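The steps above can be captured in a short program. Below is a minimal Python sketch of simple hill climbing; the `problem` object with `initial_state()`, `neighbors(state)`, and `value(state)` is an illustrative assumption, not a fixed API.

```python
# Minimal sketch of simple hill climbing. The problem interface
# (initial_state, neighbors, value) is hypothetical, for illustration.
def simple_hill_climbing(problem):
    current = problem.initial_state()
    while True:
        improved = False
        for neighbor in problem.neighbors(current):
            # Move to the FIRST neighbor that improves on the current state.
            if problem.value(neighbor) > problem.value(current):
                current = neighbor
                improved = True
                break
        if not improved:        # no better neighbor: a (local) peak
            return current
```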
Problems in different regions in Hill climbing
• Hill climbing cannot reach the optimal/best state (global maximum) if it
enters any of the following regions:
1. Local maximum:
• At a local maximum, all neighboring states have values worse than the
current state.
• Since hill climbing uses a greedy approach, it will not move to a worse state,
and it terminates.
• The process ends even though a better solution may exist.
To overcome the local maximum problem:
• Utilize the backtracking technique.
• Maintain a list of visited states.
• If the search reaches an undesirable state, it can backtrack to the
previous configuration and explore a new path.
2. Plateau:
• On a plateau, all neighbors have the same value.
• Hence, it is not possible to select the best direction.

• To overcome plateaus:
• Make a big jump.
• Randomly select a state far away from the current state.
• Chances are that we will land in a non-plateau region.
3. Ridge:
• Any point on a ridge can look like a peak because movement in all
possible directions is downward.
• Hence, the algorithm stops when it reaches this state.

• To overcome Ridge:
• In this kind of obstacle, use two or more rules before
testing.
• It implies moving in several directions at once.
2. Steepest-Ascent hill-climbing
• The steepest-ascent algorithm is a variation of the simple hill
climbing algorithm.

• This algorithm examines all the neighboring nodes of the
current state and selects the neighbor node that is closest
to the goal state.

• This algorithm consumes more time, as it examines
multiple neighbors.
Algorithm for Steepest-Ascent hill climbing:
• Step 1: Evaluate the initial state; if it is the goal state, then return
success and stop, else make the initial state the current state.
• Step 2: Loop until a solution is found or the current state does
not change.
• Let SUCC be a state such that any successor of the current state will
be better than it.
• For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it to SUCC.
• If it is better than SUCC, then set the new state as SUCC.
• If SUCC is better than the current state, then set the current state to SUCC.
• Step 3: Exit.
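For comparison with the simple variant, here is a hedged Python sketch of steepest ascent using the same hypothetical `problem` interface: it evaluates all neighbors and keeps the single best one (SUCC).

```python
# Sketch of steepest-ascent hill climbing; the problem interface is
# the same hypothetical one as in the simple hill climbing sketch.
def steepest_ascent(problem):
    current = problem.initial_state()
    while True:
        # SUCC = the best of all neighbors of the current state.
        succ = max(problem.neighbors(current),
                   key=problem.value, default=current)
        if problem.value(succ) <= problem.value(current):
            return current      # no neighbor is better: stop
        current = succ
```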
3. Stochastic hill Climbing
• Similar to steepest ascent, but it considers a random neighbor from
the set of neighbors.

• It does not examine all the neighboring nodes before deciding which
node to select.

• It simply selects a neighboring node at random and decides (based on
the amount of improvement in that neighbor) whether to move to that
neighbor or to examine another.
1. Evaluate the initial state.
a. If it is a goal state, then stop and return success.
b. Otherwise, make the initial state the current state.

2. Repeat these steps until a solution is found or the current state does
not change.
a. Select an operator that has not yet been applied to the current state.
b. Apply the successor function to the current state and generate all the
neighbor states.
c. Among the generated neighbor states that are better than the current
state, choose one randomly (or based on some probability function).
d. If the chosen state is the goal state, then return success; else make it the
current state and repeat from step 2.

3. Exit from the function.
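A minimal sketch of stochastic hill climbing, again assuming the same hypothetical `problem` interface; here a random improving neighbor is chosen instead of the best one.

```python
import random

def stochastic_hill_climbing(problem, max_steps=10_000):
    current = problem.initial_state()
    for _ in range(max_steps):
        # Keep only neighbors that improve on the current state...
        better = [n for n in problem.neighbors(current)
                  if problem.value(n) > problem.value(current)]
        if not better:          # no improving neighbor: local maximum
            return current
        # ...and move to one of them chosen at random.
        current = random.choice(better)
    return current
```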


Simulated Annealing Search
• A hill-climbing algorithm that never makes a move toward a lower
value is guaranteed to be incomplete, because it can get stuck on a
local maximum.

• Conversely, a pure random walk, which moves to a randomly chosen
successor, is complete but extremely inefficient.

• Simulated annealing is an algorithm that yields both efficiency and
completeness.
• In mechanical terms, annealing is the process of hardening a metal or glass
by heating it to a high temperature and then cooling it gradually, which
allows the material to reach a low-energy crystalline state.
• The same idea is used in simulated annealing, in which the algorithm
picks a random move instead of the best move.
• If the random move improves the state, it is always accepted.
• Otherwise, the algorithm accepts the downhill move with a probability less
than 1, typically e^(ΔE/T), where ΔE is the change in value from the
current state to the new state and T is the current temperature.
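The acceptance rule above can be written out directly. This is a hedged sketch assuming a caller-supplied cooling `schedule(t)` and the same hypothetical `problem` interface used earlier; it is one illustrative implementation, not the only one.

```python
import math
import random

def simulated_annealing(problem, schedule):
    """schedule(t) gives the temperature T at step t; T <= 0 ends the run."""
    current = problem.initial_state()
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        # Pick a random successor (assumes at least one neighbor exists).
        nxt = random.choice(list(problem.neighbors(current)))
        delta_e = problem.value(nxt) - problem.value(current)
        # Always accept improvements; accept a worse move with
        # probability e^(delta_e / T), which shrinks as T cools.
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
        t += 1

# Example cooling schedule (illustrative): schedule = lambda t: 1.0 - 0.001 * t
```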
Local Search in Continuous Spaces
Local Search Algorithms in Continuous Spaces

• Local Search
• Local search algorithms are methods used in artificial intelligence to
find solutions by iteratively exploring the neighboring points in the
search space.
• These algorithms start with an initial solution and then move to a
neighboring solution.
• If the new solution is better, it becomes the new current solution.
• This process is repeated until no better solutions are found.
• Continuous Spaces
• In AI, a continuous space is a search space where variables can take
any value within a given range, as opposed to discrete spaces where
variables can only take specific values.
• Example: Optimizing the settings of a thermostat where the
temperature can be set to any value within a range, say 16°C to 28°C.
Local Search Algorithms in Continuous Spaces
1. Hill Climbing:
1. Basic Idea: Move in the direction of increasing value (or decreasing
cost).
2. Challenges: Getting stuck at local maxima, plateaus, or ridges.
2. Simulated Annealing:
1. Incorporates randomness to escape local maxima.
2. Analogy: Annealing in metallurgy where controlled heating and cooling
alter the structure of a material.
3. Gradient Descent:
1. Used for minimizing a function by moving in the direction of the
steepest descent as defined by the negative of the gradient.
2. Common in machine learning for training models.
Local search in continuous spaces is a fundamental concept in AI,
particularly useful for optimization problems.
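To make the gradient descent idea concrete, here is a tiny Python sketch for a one-dimensional function; the function, learning rate, and step count are illustrative choices, not from the text.

```python
def gradient_descent(df, x0, learning_rate=0.1, steps=100):
    """Minimize a 1-D function given its derivative df."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * df(x)   # step against the gradient
    return x

# Example: minimize f(x) = (x - 3)^2, whose derivative is 2(x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))             # converges toward x = 3
```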
• Nondeterministic Actions:
• In some AI problems, actions do not have a guaranteed outcome.
• This uncertainty in results is called nondeterminism.
• Partial Observations:
• This refers to scenarios where the agent does not have complete information
about the state of the environment.
Searching with Non-Deterministic Actions
Algorithms for Nondeterministic Search

• Contingency Planning:
• Creates a plan that specifies actions for every possible contingency.
• Typically represented as a tree where branches represent different possible
outcomes.

• And-Or Graph Search:
• Used for problems where actions can have multiple possible outcomes.
• Or-nodes represent the agent's own choices of action; And-nodes represent
the different outcomes the environment may produce, all of which the plan
must handle.
• Example (vacuum-world figure): there are two rooms, A and B; the
figure numbers the states 1-8, with 1 and 2 as initial states and 7 and
8 as goal states.
Searching with Partial Observations
Contingency Planning
Belief States
• An agent may know its current state and goal state, but not know how to
reach (the path to) the goal state.
• In a maze game, the start and the exit are known.
• For an automatic taxi, the source and destination are known.
• In hill climbing, only the local maximum is known.
Understanding Partially Observable Environments:
• In a partially observable environment, an AI agent doesn't have
complete information about the state of the environment.
• This is in contrast to a fully observable environment, where the agent
knows all aspects of the environment.
• Examples might include
• a robot navigating a house with closed doors,
• a player in a card game who can't see the opponents' cards,
• or stock market prediction where not all factors affecting stock
prices are known.
Challenges in Partially Observable Environments:

• Uncertainty: The agent must make decisions with incomplete
information, leading to uncertainty.

• Decision Making: The agent must predict or infer the missing
information based on its current knowledge and past experiences.

• Adaptability: The agent must be able to adapt its strategy as
new information is discovered.
Strategies for Dealing with Partial Observability
(Representation of Knowledge)
• Probabilistic Reasoning: Using probabilities to make educated guesses
about the unknown parts of the environment.
• Contingency Planning: How agents plan for various possibilities.
• Belief States: Represent the set of possible actual states the environment
might be in.
• Sensors and Perception: Utilizing sensors or other means to gather more
information about the environment.
• Agents use sensory information to update their belief states.
• Examples: How a robot might plan paths based on different possible room
layouts it infers from sounds or other sensory data.
Algorithms and Techniques:
• Markov Decision Processes (MDPs): A framework for modeling
decision-making where outcomes are partly random and partly
under the control of a decision maker.
• Partially Observable Markov Decision Processes (POMDPs):
An extension of MDPs for situations where the agent doesn’t have
complete information about the state of the environment.
• Ex: Navigating a maze with limited visibility.
• Sometimes exact solutions are impractical, and heuristics or
approximation methods are used.
• Bayesian Networks: Used for probabilistic inference, helping the
agent to make decisions under uncertainty.
Applications:

1. In robotics, for navigation and interaction in environments with
incomplete information.

2. In strategic games like poker, where players must make
decisions based on limited information about opponents'
strategies or the game state.

3. In real-world scenarios like disaster response, where the
situation is constantly changing and full information is rarely
available.

4. In Natural Language Processing, where understanding
human language is challenging due to missing context.
Belief States
• It's a probability distribution over all states.
• Example: Imagine a robot in a room with two doors, one leading to
Room A and the other to Room B. The robot doesn't know which
door leads to which room. The belief state here represents the
robot's uncertainty about its location - it assigns a probability to
being in Room A and Room B.
1. Initial Belief: Initially, the robot might have equal belief in both possibilities
(50% chance of being in Room A, 50% in Room B).
2. Update with Sensory Information: If the robot hears a noise coming from
Room A, it updates its belief state to increase the probability of being near
Room A.
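This update is just Bayes' rule applied to the belief state. A minimal Python sketch for the two-room example follows; the observation likelihoods are illustrative assumptions, not values from the text.

```python
# Initial uniform belief over the two rooms.
belief = {"Room A": 0.5, "Room B": 0.5}

# Assumed sensor model: P(hear noise | room). These numbers are made up.
likelihood = {"Room A": 0.8, "Room B": 0.3}

# Bayes' rule: new_belief(s) is proportional to P(obs | s) * belief(s).
unnormalized = {s: likelihood[s] * belief[s] for s in belief}
total = sum(unnormalized.values())
belief = {s: p / total for s, p in unnormalized.items()}

print(belief)   # Room A rises to about 0.73, Room B falls to about 0.27
```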
POMDPs (Partially Observable Markov Decision Processes)
Example: Consider the same robot from the belief-state example,
which now has to decide whether to go through one of the doors.
1. States: The possible rooms (Room A, Room B).
2. Actions: Choices like 'Enter Room A', 'Enter Room B'.
3. Transition Model: Probability of ending up in a certain state after taking
an action.
4. Observation Model: Probability of receiving a certain observation in
each state.
5. Reward Function: Rewards assigned to states (e.g., finding a charging
station in a room).
In a POMDP, the robot uses its belief state to make decisions.

For instance, if its belief state strongly suggests it is near
Room A, it might choose to enter Room A, expecting a higher
reward there.
Online Search Agents and Unknown Environment
Online Search Agents :
• Online search agents operate in an environment where they have to
make decisions with incomplete information.
• Unlike offline agents, they don't have the luxury of knowing the entire
environment beforehand.
• They learn and adapt as they explore the environment.
• Example:
• Consider a robot in a maze. The robot doesn't have a map of the maze but
must find its way out.
• It makes decisions at each intersection based on its immediate surroundings.
Online Search Agents and Unknown Environment
Unknown Environments:

• An unknown environment is one where the agent doesn't have prior
knowledge about the state and structure of the environment.

• Such environments are often dynamic, unpredictable, and complex.

• Example: An AI agent playing a new video game for the first time,
where it learns the rules and objectives as it plays.
Strategies for Online Search

• Reactive Approach: The agent reacts to its current situation without
considering past actions.
• Example: A vacuum cleaning robot that changes direction when it hits an
obstacle.

• Planning Approach: The agent tries to plan based on current and past
information.
• Example: An AI in a strategy game that decides its next move based on the
moves it has seen its opponent make.
Exploration vs. Exploitation

• Exploration: The agent tries new actions to discover more about the
environment.
• Example: A scientist AI exploring different chemical combinations to discover a
new reaction.

• Exploitation: The agent uses known information to make decisions.
• Example: A recommendation system suggesting movies based on a user's past
viewing habits.
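One standard way to balance the two is an epsilon-greedy rule: explore with a small probability, exploit otherwise. A hedged Python sketch with made-up action values:

```python
import random

def epsilon_greedy(action_values, epsilon=0.1):
    if random.random() < epsilon:                      # explore
        return random.choice(list(action_values))
    return max(action_values, key=action_values.get)   # exploit

# Illustrative estimated values for three actions (placeholders).
estimates = {"action_a": 0.4, "action_b": 0.7, "action_c": 0.1}
print(epsilon_greedy(estimates))   # usually "action_b", occasionally random
```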
Learning in Unknown Environments

• Machine Learning Integration: Many online agents use machine
learning techniques to improve their performance over time.
• Example: A chatbot that improves its responses as it interacts with more users.

• Reinforcement Learning: A specific type of learning where agents learn
by receiving rewards or penalties.
• Example: A robotic arm learning to pick up objects, getting better each time it
successfully picks one up.
Challenges and Limitations

• Incomplete Information: Making optimal decisions is difficult when
the agent doesn't know the entire state of the environment.

• Dynamic Changes: The environment might change in unpredictable
ways.

• Limited Computation Time: Real-time decisions often mean limited
time to calculate the best move.
Real-World Applications

• Autonomous Vehicles: Navigating roads with unpredictable
conditions and traffic.

• Healthcare: AI diagnostics exploring unknown medical data to
identify diseases.

• E-commerce: AI systems that adapt to changing market trends and
consumer behaviours.
Future Directions

• Improved Adaptability: Making AI more capable of handling diverse
and dynamic environments.

• Ethical Considerations: Ensuring AI decisions in unknown
environments are ethical and safe.

• Enhanced Learning Capabilities: Integrating more advanced learning
techniques for better performance.
Constraint Satisfaction Problems
Defining Constraint Satisfaction Problems,
Constraint Propagation,
Backtracking Search for CSPs,
Local Search for CSPs,
The Structure of Problems.
• Constraint satisfaction problems, or CSPs, are problems that
must be solved within constraints.

• The goal of a CSP is to assign values to all the variables in such
a way that all the constraints are satisfied.

• Solving a CSP can range from relatively simple (like filling out a
crossword puzzle) to extremely complex (like scheduling flights
for an airline), depending on the number of variables, the size of
their domains, and the complexity of the constraints.
Key Elements:
1. Variables
2. Domains
3. Constraints

• Variables: Items that need to be assigned values.
• Example: X, Y.
• Domains: Set of possible values that each variable can take.
• Example: Domain of X could be {1, 2, 3}.
• Constraints: Rules that restrict the values the variables can
simultaneously take.
• Example: X ≠ Y.
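These three elements translate directly into data. Below is a minimal Python representation of the X, Y example; the structure (dicts of domains and constraint predicates) is one illustrative choice among many.

```python
variables = ["X", "Y"]
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}

# Each binary constraint is a predicate over an ordered pair of variables.
constraints = {("X", "Y"): lambda x, y: x != y}

def satisfies(assignment):
    """True if a complete assignment violates no constraint."""
    return all(check(assignment[a], assignment[b])
               for (a, b), check in constraints.items())

print(satisfies({"X": 1, "Y": 2}))   # True
print(satisfies({"X": 2, "Y": 2}))   # False (violates X != Y)
```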
Variables:
• The things that need to be determined are variables.
• Variables in a CSP are the objects that must have values assigned to
them in order to satisfy a particular set of constraints.
• Boolean, integer, and categorical variables are just a few examples of
the various types of variables.
• Variables, for instance, could stand in for the many puzzle cells that
need to be filled with numbers in a “sudoku” puzzle.
Domains:
• The range of potential values that a variable can have is represented
by domains.
• Depending on the problem, a domain may be finite or infinite.
• For instance, in Sudoku, the set of numbers from 1 to 9 can serve as
the domain of a variable representing a problem cell.
Constraints:
• The guidelines that control how variables relate to one another are
known as constraints.
• Constraints in a CSP define the ranges of possible values for variables.
• Unary constraints, binary constraints, and higher-order constraints
are only a few examples of the various sorts of constraints.
• For instance, in a sudoku problem, the restrictions might be that each
row, column, and 3×3 box can only have one instance of each number
from 1 to 9.
Common Examples of CSPs
• Sudoku:
• Variables: Each cell in the Sudoku grid.
• Domains: Numbers 1 to 9.
• Constraints: Each row, column, and 3x3 subgrid must have unique numbers from 1 to
9.
• Map Coloring:
• Variables: Regions or countries on a map.
• Domains: Colors (Red, Green, Blue, etc.).
• Constraints: Adjacent regions cannot have the same color.
• Timetable Scheduling:
• Variables: Time slots for classes.
• Domains: Different time periods in a week.
• Constraints: No teacher or student can be in two places at once; some classes require
specific rooms.
There are various methods to solve CSPs, including:

1. Backtracking
2. Forward Checking
3. Constraint Propagation
4. Heuristic Methods
1. Backtracking Search for CSPs
• Backtracking search is a recursive, depth-first approach to solving CSPs.
• If a variable assignment violates a constraint, the algorithm backtracks and tries a
different value.
• A trial-and-error method where variables are assigned values from their
domains, and the algorithm backtracks when a variable has no valid values
left to assign.
Steps:
• Choose a variable.
• Select a value from its domain.
• Check if the current assignments violate any constraints.
• If yes, backtrack and try a different value.
• If no, move to the next variable.
• Example:
• In Sudoku, if assigning 3 to a cell violates a row constraint, we backtrack and try a different
number.
• Imagine you are trying to complete a puzzle.

• You place a piece down, and if it fits, you move on to the next
piece.
• If you reach a point where no remaining pieces fit, you know
you've made a mistake somewhere.
• So you start taking pieces off, going back to the last correct piece,
and try a different piece instead.
• Backtracking in CSPs works similarly, where each "piece" is a
value assignment for a variable, and the "fit" is whether the
constraints are satisfied.
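Putting the steps together, here is a hedged sketch of a recursive backtracking solver over the representation sketched earlier (variables, domains, and a dict of binary constraint predicates); it is illustrative, not a full CSP library.

```python
def consistent(var, value, assignment, constraints):
    """Check var = value against every constraint whose other side is assigned."""
    for (a, b), check in constraints.items():
        if var == a and b in assignment and not check(value, assignment[b]):
            return False
        if var == b and a in assignment and not check(assignment[a], value):
            return False
    return True

def backtrack(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):       # every variable assigned
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
            del assignment[var]                 # undo the move: backtrack
    return None                                 # no value worked here

# Usage: solution = backtrack({}, variables, domains, constraints)
```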
2. Forward Checking
• Forward checking is a technique used in the backtracking
algorithm to reduce the number of possible variable assignments
and thereby prune the search space.

• It is particularly effective in CSPs because it helps to identify
dead ends early on, which reduces the number of unnecessary
searches.
• Imagine you're filling out a crossword puzzle, and you've just
placed a word horizontally.

• With forward checking, you'd immediately look at the vertical
words that intersect with this new word to check whether you've
now made it impossible to complete any of them.

• If so, you'd erase the horizontal word and try a different one
before proceeding.
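A hedged sketch of the pruning step, reusing the same representation as the earlier sketches: after tentatively assigning `var = value`, remove inconsistent values from the domains of unassigned neighbors, and report a dead end if any domain empties.

```python
def forward_check(var, value, domains, constraints, assignment):
    """Prune neighbor domains after tentatively assigning var = value.
    Returns the prunings (so they can be undone on backtrack),
    or None if some unassigned variable is left with an empty domain."""
    pruned = {}
    for (a, b), check in constraints.items():
        if a == var and b not in assignment:
            bad = {v for v in domains[b] if not check(value, v)}
            if bad:
                domains[b] -= bad
                pruned[b] = bad
            if not domains[b]:                  # dead end detected early
                for w, vals in pruned.items():  # undo our prunings
                    domains[w] |= vals
                return None
    return pruned
```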
3. Constraint Propagation
• Constraint propagation is a key technique in solving CSPs.
• It involves the use of constraints to reduce the number of legal
values for a variable, which in turn reduces the search space.
• It involves inferring new constraints and hence reducing the possible
values further, making the problem easier to solve.
• Example:
• If a variable X has a domain {1, 2, 3} and it must be different from variable Y
which is already assigned the value 2, we can remove 2 from X's domain.
• Similarly, constraint propagation uses the rules (constraints) of the
CSP to eliminate possibilities (values) that cannot possibly be part
of a solution.
• In a game like "Guess Who?", each question you ask eliminates several
possibilities until you narrow down to the right character.

• When you assign a value to a variable, this assignment can affect
the possible values of other variables.
• For instance, if you're scheduling classes and you schedule one class in a
particular room at a certain time, no other class can be in that room at that
time.

• Constraint propagation systematically goes through the constraints
and variables, reducing the domain of possibilities for each
variable.
• For example, in Sudoku, if a '5' is placed in a row, the domains of the
other cells in that row no longer include 5.
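The best-known propagation routine is arc consistency (an AC-3 style sweep). Below is a compact, hedged sketch over the same constraint-dict representation, assuming an arc is listed for each direction that should be revised; it repeatedly removes unsupported values and re-queues affected arcs.

```python
from collections import deque

def ac3(domains, constraints):
    queue = deque(constraints)                 # all arcs (a, b) to revise
    while queue:
        a, b = queue.popleft()
        check = constraints[(a, b)]
        # Keep only values of a that some value of b still supports.
        supported = {x for x in domains[a]
                     if any(check(x, y) for y in domains[b])}
        if supported != domains[a]:
            domains[a] = supported
            if not domains[a]:
                return False                   # a domain emptied: inconsistent
            # Re-examine every arc that points into a.
            queue.extend(arc for arc in constraints if arc[1] == a)
    return True
```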
4. Heuristic Methods
• Heuristic methods in the context of CSPs, are strategies that
improve the efficiency of the search for a solution.
• Instead of randomly choosing the next variable to assign or the
next value to try, heuristics make these choices in a way that is
likely to lead to a solution more quickly.
• Using heuristics doesn't guarantee the fastest solution or even
that the path taken will be the shortest, but they generally lead to
a solution more quickly than a blind search.
• Think of it like navigating a maze with some knowledge of what turns are
dead ends, which saves you from having to explore every path.
• An analogy from everyday life could be deciding what line to
stand in at a grocery store.
• You might use a heuristic of choosing the line with the fewest people
(Minimum Remaining Values, MRV) or the line where people have the
fewest items in their carts (Least Constraining Value, LCV).
• Heuristics are based on experience and intuition about the
problem domain.
• They are not foolproof, but they often significantly reduce the
amount of exploration needed to find a solution to a CSP.
• These methods are particularly useful in large problems where
exhaustive search is impractical.
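As a concrete illustration, the MRV heuristic amounts to a few lines over the representation used earlier; a hedged sketch:

```python
def select_mrv_variable(variables, domains, assignment):
    """Pick the unassigned variable with the fewest remaining legal values."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))
```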
Local Search for CSPs
• Local search algorithms start with an incomplete or incorrect solution and
iteratively make small changes to improve it.
• This approach is useful when the search space is too large for backtracking.
Steps:
• Start with a possible solution (which may not satisfy all constraints).
• Select a conflicted variable.
• Choose a value that minimizes conflict.
• Repeat until all constraints are satisfied or a maximum number of iterations is
reached.
• Example:
• In a scheduling problem, swap the times of two events to resolve a conflict and continue
until all time slots are conflict-free.
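These steps are essentially the min-conflicts algorithm. A hedged Python sketch follows; the `conflicts(var, value, assignment)` counter is assumed to be supplied by the caller and is not defined in the text.

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=10_000):
    # Start from a complete (probably inconsistent) random assignment.
    assignment = {v: random.choice(list(domains[v])) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:                     # all constraints satisfied
            return assignment
        var = random.choice(conflicted)
        # Reassign var to the value that causes the fewest conflicts.
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(var, val, assignment))
    return None                                # give up after max_steps
```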
The Structure of Problems
• The structure of CSPs can greatly affect the difficulty and methods used
to solve them.
• The structure refers to how variables and constraints are
interconnected.
• Dense Structure: Many constraints connect many variables, leading to a
complex problem.
• Sparse Structure: Few constraints with few connections between
variables, making the problem easier to solve.
• Example:
• A highly interconnected timetable (dense) is more challenging to solve than one
with few overlapping classes (sparse).
Real-World Applications

• Resource Allocation: Assigning resources like classrooms, instructors,
and times for school schedules.
• Routing Problems: Finding the most efficient route for delivery trucks,
considering constraints like road capacities and delivery times.
• AI in Games: Solving puzzles or creating scenarios that adhere to
specific rules and constraints.
Understanding the concepts with an example
Example:

• Variables: X, Y (representing, say, colors of two adjacent regions on a map)

• Domains: X = {Red, Green}, Y = {Red, Green, Blue}

• Constraint: X ≠ Y (the regions X and Y must have different colors)


Constraint Propagation
• This is the process of deducing new constraints from existing ones to reduce the
search space.

Example:

• Given the constraints X ≠ Y and Y ≠ Z, if X is assigned the value Red, we can deduce
that Y ≠ Red.

• If Y is subsequently assigned Blue, we can deduce that Z ≠ Blue.


Backtracking Search for CSPs

• A common method for solving CSPs.

• It involves choosing a variable, assigning a value from its domain, and recursively
repeating this process.

• If a constraint is violated, the process backtracks to the previous step to try a
different value.

Example:

• Assign Red to X.

• If Y is assigned Red, it violates the constraint X ≠ Y, so the algorithm backtracks and
tries the next color for Y.
Local Search for CSPs

• This approach starts with a complete assignment and then iteratively changes the
values of the variables to reduce the number of constraint violations.

Example:

• Begin with all regions colored Red.

• Since this violates many constraints, change the color of one region at a time to
reduce the number of violations.
The Structure of Problems
• Understanding the structure of a CSP can help in choosing the most
appropriate solving strategy.
• The structure can be based on the variables and constraints' complexity,
connectivity, and density.
Example:
• A problem where every variable is connected to every other (like in a
fully connected graph) can be more challenging to solve compared to
one where the connections are sparse.
