Unit-2 AI
CONTENTS
• Beyond Classical Search:
• Hill-climbing search
• Simulated annealing search
• Local Search in Continuous Spaces
• Searching with Non-Deterministic Actions
• Searching with Partial Observations
• Online Search Agents and Unknown Environments
• Constraint Satisfaction Problems:
• Defining Constraint Satisfaction Problems
• Constraint Propagation
• Backtracking Search for CSPs
• Local Search for CSPs
• The Structure of Problems
Hill Climbing Algorithm
• Hill climbing is a simple optimization algorithm used in Artificial Intelligence
(AI) to find the best possible solution for a given problem, from a set of
possible solutions.
• Given a large set of inputs and a good heuristic function, it tries to find a
sufficiently good solution to the problem.
• Greedy approach: at each step, hill climbing moves to the neighboring state that looks best, without looking further ahead.
• In the state-space diagram:
• X-axis: denotes the state space, i.e., the states or configurations our algorithm may reach.
• Y-axis: denotes the value of the objective function corresponding to a particular state.
• Global maximum:
• It is the best possible state in the state space diagram.
• This is because, at this state, the objective function has its highest value.
2. Plateau:
• To overcome a plateau:
• Make a big jump.
• Randomly select a state far away from the current state.
• Chances are that we will land in a non-plateau region.
3. Ridge:
• To overcome a ridge:
• Apply two or more rules (operators) before testing a state.
• This implies moving in several directions at once.
2. Steepest-Ascent hill-climbing
• The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.
• Unlike simple hill climbing, it examines all the neighboring nodes of the current state before deciding which node to select, and moves to the best one.
1. Evaluate the initial state. If it is the goal state, return success; otherwise make it the current state.
2. Repeat these steps until a solution is found or the current state does not change.
a. Apply the successor function to the current state and generate all the neighbor states.
b. Evaluate the neighbor states and select the best one (the state with the highest objective-function value).
c. If the best neighbor is not better than the current state, stop: the current state is a local maximum (or the solution).
d. Otherwise, if the best neighbor is the goal state, return success; else make it the current state and repeat step 2.
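The loop above can be sketched in a few lines of Python. This is a minimal illustration, assuming a toy objective function and a hypothetical neighbors() function; a real problem would supply its own.

def objective(state):
    # Toy objective: highest value at state == 7 (an assumption for illustration).
    return -(state - 7) ** 2

def neighbors(state):
    # Successor states of the current state (problem-specific in practice).
    return [state - 1, state + 1]

def steepest_ascent_hill_climbing(start):
    current = start
    while True:
        # Generate all neighbor states and pick the best one.
        best = max(neighbors(current), key=objective)
        if objective(best) <= objective(current):
            return current      # no neighbor is better: stop at this peak
        current = best          # move to the best neighbor and repeat

print(steepest_ascent_hill_climbing(0))   # reaches 7 for this toy objective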
• Local Search
• Local search algorithms are methods used in artificial intelligence to
find solutions by iteratively exploring the neighboring points in the
search space.
• These algorithms start with an initial solution and then move to a
neighboring solution.
• If the new solution is better, it becomes the new current solution.
• This process is repeated until no better solutions are found.
• Continuous Spaces
• In AI, a continuous space is a search space where variables can take
any value within a given range, as opposed to discrete spaces where
variables can only take specific values.
• Example: Optimizing the settings of a thermostat where the
temperature can be set to any value within a range, say 16°C to 28°C.
Local Search Algorithms in Continuous Spaces
1. Hill Climbing:
1. Basic Idea: Move in the direction of increasing value (or decreasing
cost).
2. Challenges: Getting stuck at local maxima, plateaus, or ridges.
2. Simulated Annealing:
1. Incorporates randomness to escape local maxima.
2. Analogy: Annealing in metallurgy where controlled heating and cooling
alter the structure of a material.
3. Gradient Descent:
1. Used for minimizing a function by moving in the direction of the
steepest descent as defined by the negative of the gradient.
2. Common in machine learning for training models.
Local search in continuous spaces is a fundamental concept in AI,
particularly useful for optimization problems.
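As a concrete example of gradient descent (item 3 above), here is a minimal sketch; the cost function, learning rate, and starting point are illustrative assumptions, not values from any particular model.

def cost(x):
    # Toy cost function with its minimum at x = 3 (an assumption for illustration).
    return (x - 3.0) ** 2

def gradient(x):
    # Derivative of the cost function above.
    return 2.0 * (x - 3.0)

def gradient_descent(x0, learning_rate=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - learning_rate * gradient(x)   # step in the direction of steepest descent
    return x

print(gradient_descent(x0=10.0))   # converges toward x = 3.0, the minimum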
• Nondeterministic Actions:
• In some AI problems, actions do not have a guaranteed outcome.
• This uncertainty in results is called nondeterminism.
• Partial Observations:
• This refers to scenarios where the agent does not have complete information
about the state of the environment.
Searching with Non-Deterministic Actions
Algorithms for Nondeterministic Search
• Contingency Planning:
• Creates a plan that specifies actions for every possible contingency.
• Typically represented as a tree where branches represent different possible
outcomes.
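One standard way to build such a contingent plan is AND-OR search: OR nodes choose an action, AND nodes must handle every possible outcome of that action. Below is a minimal sketch; the problem interface (initial, is_goal, actions, results) and the tiny nondeterministic problem are assumptions made purely for illustration.

class TinyNondeterministicProblem:
    # Toy problem: start at state 0, goal is state 2; the "step" action
    # taken in state 0 may nondeterministically land in state 1 or state 2.
    initial = 0

    def is_goal(self, s):
        return s == 2

    def actions(self, s):
        return ["step"] if s < 2 else []

    def results(self, s, a):
        return {s + 1, s + 2} if s == 0 else {s + 1}

def or_search(state, problem, path):
    if problem.is_goal(state):
        return []                     # empty plan: goal already reached
    if state in path:
        return None                   # avoid cycles
    for action in problem.actions(state):
        plan = and_search(problem.results(state, action), problem, [state] + path)
        if plan is not None:
            return [action, plan]     # "do action, then branch on the outcome"
    return None

def and_search(states, problem, path):
    plan = {}
    for s in states:                  # every possible outcome must lead to the goal
        subplan = or_search(s, problem, path)
        if subplan is None:
            return None
        plan[s] = subplan             # "if the outcome is s, follow subplan"
    return plan

problem = TinyNondeterministicProblem()
print(or_search(problem.initial, problem, []))

The nested structure printed is exactly the contingency tree described above: an action at each OR level and one branch per possible outcome at each AND level.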
Online Search Agents and Unknown Environments
• An online search agent interleaves computation and action: it acts, observes the outcome, and only then decides on its next action, which is necessary when the environment is unknown.
• Example: An AI agent playing a new video game for the first time, where it learns the rules and objectives as it plays.
Strategies for Online Search
• Planning Approach: The agent tries to plan based on current and past
information.
• Example: An AI in a strategy game that decides its next move based on the
moves it has seen its opponent make.
Exploration vs. Exploitation
• Exploration: The agent tries new actions to discover more about the environment.
• Example: A scientist AI exploring different chemical combinations to discover a new reaction.
• Exploitation: The agent uses what it has already learned and chooses the action that currently looks best.
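A common way to balance the two is an epsilon-greedy rule: with a small probability the agent explores a random action, otherwise it exploits the best-known one. This is a minimal sketch; the action values and epsilon are illustrative assumptions.

import random

# Estimated value of each action so far (illustrative numbers).
estimated_value = {"left": 0.2, "right": 0.8, "wait": 0.1}

def choose_action(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(estimated_value))        # explore: try something new
    return max(estimated_value, key=estimated_value.get)   # exploit: best known action

print(choose_action())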
Defining Constraint Satisfaction Problems
• A constraint satisfaction problem (CSP) is defined by a set of variables, a domain of possible values for each variable, and a set of constraints that restrict which combinations of values are allowed.
• Solving a CSP can range from relatively simple (like filling out a crossword puzzle) to extremely complex (like scheduling flights for an airline), depending on the number of variables, the size of their domains, and the complexity of the constraints.
Key Elements:
1. Variables
2. Domains
3. Constraints
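To make the three elements concrete, here is a small map-coloring CSP written as plain Python data; the regions and adjacencies (a fragment of the classic Australia map-coloring example) are illustrative assumptions.

# 1. Variables, 2. Domains, 3. Constraints for a small map-coloring CSP.
variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["Red", "Green", "Blue"] for v in variables}
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]
# Each pair lists two neighboring regions that must receive different colors.

def consistent(assignment):
    # True if the (possibly partial) assignment violates no constraint.
    return all(assignment[a] != assignment[b]
               for a, b in constraints
               if a in assignment and b in assignment)

print(consistent({"WA": "Red", "NT": "Red"}))   # False: neighbors share a color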
Methods for Solving CSPs:
1. Backtracking
2. Forward Checking
3. Constraint Propagation
4. Heuristic Methods
1. Backtracking Search for CSPs
• Backtracking search is a recursive, depth-first approach to solving CSPs.
• If a variable assignment violates a constraint, the algorithm backtracks and tries a
different value.
• A trial-and-error method where variables are assigned values from their
domains, and the algorithm backtracks when a variable has no valid values
left to assign.
Steps:
• Choose a variable.
• Select a value from its domain.
• Check if the current assignments violate any constraints.
• If yes, backtrack and try a different value.
• If no, move to the next variable.
• Example:
• In Sudoku, if assigning 3 to a cell violates a row constraint, we backtrack and try a different
number.
• Imagine you are trying to complete a puzzle.
• You place a piece down, and if it fits, you move on to the next
piece.
• If you reach a point where no remaining pieces fit, you know
you've made a mistake somewhere.
• So you start taking pieces off, going back to the last correct piece,
and try a different piece instead.
• Backtracking in CSPs works similarly, where each "piece" is a
value assignment for a variable, and the "fit" is whether the
constraints are satisfied.
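Putting the steps and the puzzle analogy above into code, here is a minimal backtracking search over a small map-coloring CSP; the variables, domains, and constraints are illustrative assumptions.

# Minimal backtracking search for a map-coloring CSP (illustrative data).
variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["Red", "Green", "Blue"] for v in variables}
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]

def consistent(var, value, assignment):
    # Would giving `var` this value violate a constraint with an assigned neighbor?
    for a, b in constraints:
        other = b if a == var else a if b == var else None
        if other in assignment and assignment[other] == value:
            return False
    return True

def backtrack(assignment):
    if len(assignment) == len(variables):
        return assignment                                    # all variables assigned: solved
    var = next(v for v in variables if v not in assignment)  # choose a variable
    for value in domains[var]:                               # select a value from its domain
        if consistent(var, value, assignment):               # check the constraints
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                              # backtrack: undo, try next value
    return None                                              # no value works: fail upward

print(backtrack({}))   # e.g. {'WA': 'Red', 'NT': 'Green', 'SA': 'Blue', 'Q': 'Red'}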
2. Forward Checking
• Forward checking is a technique used within the backtracking algorithm to reduce the number of possible variable assignments and thereby prune the search space.
• After a variable is assigned a value, forward checking removes inconsistent values from the domains of the remaining unassigned variables; if any domain becomes empty, the algorithm backtracks immediately instead of discovering the dead end later.
• Analogy: when filling in a crossword, after writing a horizontal word you check whether any crossing vertical slot has become impossible to fill.
• If so, you'd erase the horizontal word and try a different one before proceeding.
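A minimal sketch of the forward-checking step itself, reusing the same kind of not-equal constraints between neighboring regions (illustrative data): after an assignment, the assigned value is pruned from the domains of unassigned neighbors, and an empty domain signals an immediate backtrack.

# Forward checking: prune neighbors' domains after assigning var = value.
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]   # illustrative "not equal" pairs

def neighbors(var):
    return [b for a, b in constraints if a == var] + [a for a, b in constraints if b == var]

def forward_check(var, value, domains, assignment):
    # Return pruned copies of the domains, or None if some domain becomes empty.
    new_domains = {v: list(vals) for v, vals in domains.items()}
    for other in neighbors(var):
        if other not in assignment and value in new_domains[other]:
            new_domains[other].remove(value)   # `other` can no longer take this value
            if not new_domains[other]:
                return None                    # dead end detected early: backtrack now
    return new_domains

domains = {v: ["Red", "Green"] for v in ["WA", "NT", "SA"]}
print(forward_check("WA", "Red", domains, assignment={"WA": "Red"}))
# NT and SA lose "Red" from their domains.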
3. Constraint Propagation
• Constraint propagation is a key technique in solving CSPs.
• It involves the use of constraints to reduce the number of legal
values for a variable, which in turn reduces the search space.
• It involves inferring new constraints and hence reducing the possible
values further, making the problem easier to solve.
• Example:
• If a variable X has a domain {1, 2, 3} and it must be different from variable Y
which is already assigned the value 2, we can remove 2 from X's domain.
• In a game like "Guess Who?", each question you ask eliminates several possibilities until you narrow down to the right character.
• Similarly, constraint propagation uses the rules (constraints) of the CSP to eliminate possibilities (values) that cannot possibly be part of a solution.
Example:
• Given the constraints X ≠ Y and Y ≠ Z, if X is assigned the value Red, we can deduce
that Y ≠ Red.
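The core step of constraint propagation can be written as a small "revise" operation: remove from one variable's domain every value that has no consistent partner in the other variable's domain (here the constraint is simply "not equal"). Repeating this over all constrained pairs until nothing changes is the idea behind arc consistency. The domains below are illustrative assumptions.

# One "revise" step of constraint propagation for an X != Y constraint.
domains = {"X": [1, 2, 3], "Y": [2]}

def revise(domains, x, y):
    # Remove values of x that have no value of y satisfying x != y.
    revised = False
    for value in list(domains[x]):
        if not any(value != other for other in domains[y]):
            domains[x].remove(value)    # no consistent partner: prune the value
            revised = True
    return revised

revise(domains, "X", "Y")
print(domains)   # Y is fixed to 2, so 2 is removed from X's domain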
• Backtracking search on this example involves choosing a variable, assigning a value from its domain, and recursively repeating this process for the remaining variables.
Example:
• Assign Red to X, then try values for Y and Z that satisfy X ≠ Y and Y ≠ Z.
Local Search for CSPs
• This approach starts with a complete assignment and then iteratively changes the values of the variables to reduce the number of constraint violations.
Example:
• Start by giving every region a color at random. Since this typically violates many constraints, change the color of one region at a time to reduce the number of violations.
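This is essentially the min-conflicts heuristic. A minimal sketch, assuming the same illustrative map-coloring data as before: start from a complete random assignment, then repeatedly pick a conflicted variable and give it the value that violates the fewest constraints.

import random

# Min-conflicts local search for a map-coloring CSP (illustrative data).
variables = ["WA", "NT", "SA", "Q"]
colors = ["Red", "Green", "Blue"]
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]

def conflicts(var, value, assignment):
    # Number of constraints violated if `var` takes `value`.
    return sum(1 for a, b in constraints
               if var in (a, b) and assignment[b if a == var else a] == value)

def min_conflicts(max_steps=1000):
    assignment = {v: random.choice(colors) for v in variables}   # complete random start
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                                    # no violations left: solved
        var = random.choice(conflicted)                          # change one region at a time
        assignment[var] = min(colors, key=lambda c: conflicts(var, c, assignment))
    return None

print(min_conflicts())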
The Structure of Problems
• Understanding the structure of a CSP can help in choosing the most
appropriate solving strategy.
• The structure can be based on the variables and constraints' complexity,
connectivity, and density.
Example:
• A problem where every variable is connected to every other (like in a
fully connected graph) can be more challenging to solve compared to
one where the connections are sparse.