
Topic: BCA-508 Unit-2 Artificial Intelligence (Week 5)

Prepared By:
Dr Tejinder Kaur
Associate Professor

Programme: BCA-508
Course: Artificial Intelligence

Contents
• Hill Climbing
• Problem Reduction Search
• AO*
• Population Based Search:
  ▫ Ant Colony Optimization
  ▫ Genetic Algorithm

Hill Climbing
• Hill climbing is a simple optimization algorithm used in Artificial
Intelligence (AI) to find the best possible solution for a given problem. It
belongs to the family of local search algorithms and is often used in
optimization problems where the goal is to find the best solution from a set
of possible solutions.
• In Hill Climbing, the algorithm starts with an initial solution and then
iteratively makes small changes to it in order to improve the solution.
These changes are based on a heuristic function that evaluates the quality
of the solution. The algorithm continues to make these small changes until
it reaches a local maximum, meaning that no further improvement can be
made with the current set of moves.



Continue
• Advantages of Hill Climbing algorithm:
• Hill Climbing is a simple and intuitive algorithm that is easy to understand
and implement.
• It can be used in a wide variety of optimization problems, including those
with a large search space and complex constraints.
• Hill Climbing is often very efficient in finding local optima, making it a
good choice for problems where a good solution is needed quickly.
• The algorithm can be easily modified and extended to include additional
heuristics or constraints.

Disadvantages of Hill Climbing algorithm:
• Hill Climbing can get stuck in local optima, meaning that it may not find
the global optimum of the problem.
• The algorithm is sensitive to the choice of initial solution, and a poor
initial solution may result in a poor final solution.
• Hill Climbing does not explore the search space very thoroughly, which
can limit its ability to find better solutions.
• It may be less effective than other optimization algorithms, such as genetic
algorithms or simulated annealing, for certain types of problems.
Types of Hill Climbing

• 1. Simple Hill Climbing: Evaluate the initial state. If it is
a goal state, then stop and return success. Otherwise, make
the initial state the current state. Loop until a solution
state is found or there are no new operators left that can
be applied to the current state.
• 2. Steepest-Ascent Hill Climbing: Evaluate the initial state.
If it is a goal state, then stop and return success.
Otherwise, make the initial state the current state.
Repeat these steps until a solution is found or the current
state does not change.
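Both variants can be sketched in a few lines. The toy objective function and neighbour function below are illustrative assumptions, not part of the slides; the sketch shows steepest-ascent climbing and how the start state decides which peak is reached:

```python
def steepest_ascent(start, neighbors, score):
    """Steepest-ascent hill climbing: repeatedly jump to the best
    neighbour until no neighbour improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current            # local maximum: no better neighbour
        current = best

# Toy 1-D objective: a local peak at x = 2 and the global peak at x = 8.
f = lambda x: -(x - 2) ** 2 if x < 5 else 16 - (x - 8) ** 2
step = lambda x: [x - 1, x + 1]
print(steepest_ascent(0, step, f))    # climbs to the nearby peak x = 2
print(steepest_ascent(6, step, f))    # a different start reaches x = 8
```

Starting at 0 the climb stops at the local maximum x = 2 even though x = 8 is better, which is exactly the limitation listed under disadvantages.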

Continue
Different regions in the State Space Diagram:
• Local maximum: A state that is better than its neighbouring states, although
there exists a state that is better still (the global maximum). This state is better
than its neighbours because the value of the objective function there is higher.
• Global maximum: The best possible state in the state space diagram, where
the objective function attains its highest value.
• Plateau/flat local maximum: A flat region of the state space where neighbouring
states have the same value.
• Ridge: A region that is higher than its neighbours but itself has a slope. It is a
special kind of local maximum.
• Current state: The region of the state space diagram where we are currently
present during the search.
• Shoulder: A plateau that has an uphill edge.

Problem Reduction Search


Problem reduction is an algorithm design technique that takes a complex
problem and reduces it to a simpler one. The simpler problem is then
solved and the solution of the simpler problem is then transformed to the
solution of the original problem.
• Example: finding the LCM (least common multiple) of two numbers X and Y.
• Approach 1:
• To solve the problem, one can iterate through the multiples of the bigger
element (say X) until one of them is also a multiple of the other element. This can
be written as follows:
• Select the bigger element (say X here).
• Iterate through the multiples of X:
▫ If a multiple is also a multiple of Y, return it as the answer.
▫ Otherwise, continue the traversal.

Continue
• Algorithm:

Algorithm LCM(X, Y):
    if Y > X:
        swap X and Y
    end if
    for i = 1 to Y:
        if X * i is divisible by Y:
            return X * i
        end if
    end for
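The pseudocode above translates directly into runnable Python (the function name and sample values are illustrative):

```python
def lcm(x, y):
    """LCM of two positive integers by scanning multiples of the larger
    one until a multiple of the smaller is found (as in the slide)."""
    if y > x:
        x, y = y, x              # make x the larger operand
    for i in range(1, y + 1):
        if (x * i) % y == 0:     # x*i is divisible by y -> common multiple
            return x * i

print(lcm(4, 6))    # 12
print(lcm(5, 7))    # 35
```

The loop is bounded by Y because X * Y is always a common multiple, so the scan is guaranteed to terminate.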

AO*

AO* Algorithm is based on problem decomposition (breaking the
problem down into small pieces). It is an efficient method to explore
a solution path. AO* is often used for the common path
finding problem in applications such as video games, but was
originally designed as a general graph traversal algorithm.
The AO* algorithm performs a best-first search. It divides a
given difficult problem into a smaller group of problems
that are then resolved using the AND-OR graph concept.
AND-OR graphs are specialized graphs used for problems
that can be divided into smaller subproblems. The AND side
of the graph represents a set of tasks that must all be
completed to achieve the main goal, while the OR side
represents alternative methods for accomplishing the same
main goal.

Continue
(Figure: a simple AND-OR graph)

Continue
• The above figure is an example of a simple AND-OR graph: the main goal of
acquiring a car may be broken down into smaller problems or tasks that can be
accomplished to achieve it. One alternative is to steal a car, which would
accomplish the main goal; the other is to use your own money to purchase a
car, which would also accomplish it. The AND arc marks the AND part of the
graph, expressing the requirement that all subproblems joined by the AND must
be resolved before the parent node or issue can be finished.
The AO* algorithm is a knowledge-based search technique in which the
start state and the target state are already known, and the best path is
identified by heuristics. This informed search technique considerably
reduces the algorithm's time complexity. The AO* algorithm is far more
effective in searching AND-OR trees than the A* algorithm.

Continue
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = Actual cost + Estimated cost
where,
f(n) = the estimated total cost of a solution path through node n,
g(n) = the actual cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.
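A minimal sketch of how AND and OR nodes are costed. The graph encoding, the unit edge cost, and the leaf values are assumptions for illustration; this is the cost-revision rule AO* relies on, not a full AO* implementation:

```python
EDGE = 1   # assumed unit cost for each edge from a node to a child

def cost(node):
    """Cost of solving an AND-OR node: an OR node takes its cheapest
    child; an AND node must pay for every child. Leaves carry h(n)."""
    if isinstance(node, (int, float)):
        return node                              # leaf: heuristic estimate h(n)
    kind, children = node
    child_costs = [EDGE + cost(c) for c in children]
    return min(child_costs) if kind == "OR" else sum(child_costs)

# Hypothetical goal: solved by one subproblem with h=8, OR by an AND pair (h=3, h=2).
goal = ("OR", [8, ("AND", [3, 2])])
print(cost(goal))   # min(1 + 8, 1 + ((1 + 3) + (1 + 2))) = 8
```

The OR node picks the cheaper alternative (the AND pair at cost 8), mirroring how AO* propagates revised costs up the graph.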

Population Based Search


• A population-based search algorithm maintains an entire set of candidate
solutions, each solution corresponding to a unique point in the search space of
the problem. Such algorithms also underpin the concepts of dominance and
Pareto-optimality in multi-objective optimization.

Ant Colony Optimization


• The algorithmic world is rich, with multifarious strategies and tools
being developed round the clock to meet the need for high-performance
computing. When algorithms are inspired by natural laws, interesting
results are observed. Evolutionary algorithms belong to such a class of
algorithms. These algorithms are designed to mimic certain behaviours
and evolutionary traits found in nature. Such algorithmic design is not
constrained to humans but can be inspired by the natural behaviour of
certain animals as well. The basic aim of fabricating such methodologies
is to provide realistic, relevant and yet low-cost solutions to problems
that are hitherto unsolvable by conventional means.

Continue
Different optimization techniques have thus evolved from such
evolutionary algorithms, opening up the domain of metaheuristics.
The term metaheuristic derives from two Greek words: meta, meaning
"one level above", and heuristic, meaning "to find". Algorithms such
as Particle Swarm Optimization (PSO) and Ant Colony Optimization
(ACO) are examples of swarm intelligence and metaheuristics. The
goal of swarm intelligence is to design intelligent multi-agent
systems by taking inspiration from the collective behaviour of
social insects such as ants, termites, bees and wasps, and of other
animal societies such as flocks of birds or schools of fish.

Continue
• The stages can be analyzed as follows:
• Stage 1: All ants are in their nest. There is no pheromone content in the environment. (For
algorithmic design, a residual pheromone amount can be assumed without interfering with
the probabilities.)
• Stage 2: Ants begin their search with equal probability (0.5 each) along each path. Clearly,
the curved path is longer, and hence the time taken by the ants to reach the food source is
greater than along the other path.
• Stage 3: The ants taking the shorter path reach the food source earlier. Evidently, they now
face a similar selection dilemma, but this time, since a pheromone trail is already available
along the shorter path, its probability of selection is higher.
• Stage 4: More ants return via the shorter path, and subsequently the pheromone
concentration there also increases. Moreover, due to evaporation, the pheromone concentration
on the longer path reduces, decreasing the probability of selecting that path in further stages.
Therefore, the whole colony gradually uses the shorter path with higher probability, and path
optimization is attained.
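The four stages can be compressed into a toy two-path simulation. The path lengths, the inverse-length deposit rule, and the evaporation rate below are illustrative assumptions, but the feedback loop is the one the stages describe:

```python
import random

def two_path_aco(n_ants=100, n_rounds=20, evaporation=0.5, seed=0):
    """Toy ACO on two nest-to-food paths: each ant picks a path in
    proportion to its pheromone, deposits pheromone inversely to the
    path length, and evaporation decays both trails every round."""
    rng = random.Random(seed)
    length = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}      # Stage 1-2: equal trails
    for _ in range(n_rounds):
        deposit = {"short": 0.0, "long": 0.0}
        total = pheromone["short"] + pheromone["long"]
        for _ in range(n_ants):
            p_short = pheromone["short"] / total
            path = "short" if rng.random() < p_short else "long"
            deposit[path] += 1.0 / length[path]   # shorter path, bigger deposit
        for p in pheromone:                       # Stage 4: evaporation + deposit
            pheromone[p] = (1 - evaporation) * pheromone[p] + deposit[p]
    total = pheromone["short"] + pheromone["long"]
    return pheromone["short"] / total             # P(next ant takes short path)

p_short = two_path_aco()
```

After 20 rounds nearly all the probability mass sits on the shorter path, matching the convergence described in Stage 4.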

Continue
• Ants live in community nests, and the underlying principle of ACO is to
observe the movement of ants from their nest as they search for food along
the shortest possible path. Initially, ants start to move randomly in
search of food around their nest. This randomized search opens up multiple
routes from the nest to the food source. Then, based on the quality and
quantity of the food, ants carry a portion of it back, depositing the necessary
pheromone concentration on their return path. Depending on these pheromone
trails, the probability that following ants select a specific path becomes a
guiding factor to the food source. Evidently, this probability is based on the
concentration as well as the rate of evaporation of the pheromone. Since the
evaporation rate is also a deciding factor, the length of each path can easily
be accounted for.

Genetic Algorithms

• Genetic Algorithms (GAs) are adaptive heuristic search algorithms that belong
to the larger class of evolutionary algorithms. Genetic algorithms are based on
the ideas of natural selection and genetics. They are an intelligent exploitation
of random search, provided with historical data to direct the search into the
region of better performance in the solution space. They are commonly used to
generate high-quality solutions for optimization and search problems.
• Genetic algorithms simulate the process of natural selection, in which species
that can adapt to changes in their environment survive, reproduce and pass on
to the next generation. In simple words, they simulate "survival of the
fittest" among individuals of consecutive generations to solve a problem. Each
generation consists of a population of individuals, and each individual
represents a point in the search space and a possible solution. Each individual
is represented as a string of characters/integers/floats/bits. This string is
analogous to a chromosome.

Continue
• Foundation of Genetic Algorithms
• Genetic algorithms are based on an analogy with the genetic
structure and behaviour of chromosomes of the population.
Following is the foundation of GAs based on this analogy –
• Individuals in the population compete for resources and mates.
• Those individuals that are most successful (fittest) then mate to create
more offspring than others.
• Genes from the "fittest" parents propagate through the generations; that is,
sometimes parents create offspring that are better than either parent.
• Thus each successive generation becomes better suited to its environment.

Search space

• The population of individuals is maintained within the
search space. Each individual represents a solution in the
search space for the given problem. Each individual is
coded as a finite-length vector (analogous to a
chromosome) of components. These variable
components are analogous to genes. Thus a
chromosome (individual) is composed of several
genes (variable components).
Fitness Score

• A fitness score is given to each individual, showing that individual's ability
to "compete". Individuals having an optimal (or near-optimal) fitness score are
sought.
• A GA maintains a population of n individuals (chromosomes/solutions) along with
their fitness scores. Individuals having better fitness scores are given more
chance to reproduce than others: they are selected to mate and produce better
offspring by combining the chromosomes of the parents. Since the population size
is static, room has to be created for new arrivals. So, some individuals die and
get replaced by new arrivals, eventually creating a new generation once all the
mating opportunities of the old population are exhausted. The hope is that over
successive generations better solutions will arrive while the least fit die out.
• Each new generation has, on average, more "good genes" than the individuals
(solutions) of previous generations; thus each new generation has better
"partial solutions" than previous ones. Once the offspring produced show no
significant difference from the offspring produced by previous populations, the
population has converged, and the algorithm is said to have converged to a set
of solutions for the problem.
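The "better fitness scores are given more chance to reproduce" rule is usually realized as fitness-proportionate (roulette-wheel) selection. The bit-string chromosomes and the count-the-ones fitness below are illustrative assumptions:

```python
import random

def roulette_select(population, fitness, rng=random):
    """Fitness-proportionate selection: each individual's chance of
    reproducing equals its share of the population's total fitness."""
    weights = [fitness(ind) for ind in population]
    return rng.choices(population, weights=weights, k=1)[0]

pop = ["0110", "1111", "0001"]           # toy chromosomes (bit strings)
ones = lambda s: s.count("1")            # toy fitness: number of 1-bits
picked = [roulette_select(pop, ones) for _ in range(1000)]
# "1111" (fitness 4) is drawn far more often than "0001" (fitness 1).
```

Individuals with fitness 4 are drawn about four times as often as those with fitness 1, so fitter chromosomes dominate the mating pool without excluding the weak entirely.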

Operators of Genetic Algorithms

Once the initial generation is created, the algorithm evolves
the generation using the following operators –
1) Selection Operator: The idea is to give preference to
the individuals with good fitness scores and allow them to
pass their genes to successive generations.
2) Crossover Operator: This represents mating between
individuals. Two individuals are selected using selection
operator and crossover sites are chosen randomly. Then
the genes at these crossover sites are exchanged thus
creating a completely new individual (offspring). For
example –
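A minimal single-point crossover on bit strings can sketch the operator just described (the parent strings and the fixed crossover site are illustrative):

```python
import random

def single_point_crossover(parent_a, parent_b, point=None):
    """Cut both parents at one crossover site and swap the tails,
    producing two offspring that mix genes from each parent."""
    assert len(parent_a) == len(parent_b)
    if point is None:
        point = random.randrange(1, len(parent_a))   # random crossover site
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

print(single_point_crossover("11111", "00000", point=2))   # ('11000', '00111')
```

Note that crossover only recombines existing genes: the two children together carry exactly the same multiset of bits as the two parents.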

Continue
(Figure: crossover example)

Continue
3) Mutation Operator: The key idea is to insert random genes in offspring
to maintain the diversity in the population to avoid premature convergence.
For example –
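Mutation can be sketched as an independent bit flip per gene (the flip rate and sample chromosome are illustrative assumptions):

```python
import random

def bit_flip_mutation(chromosome, rate=0.01, rng=random):
    """Flip each bit independently with probability `rate`, injecting
    random genes to preserve diversity and avoid premature convergence."""
    return "".join(
        ("1" if bit == "0" else "0") if rng.random() < rate else bit
        for bit in chromosome
    )

print(bit_flip_mutation("10101010", rate=1.0))   # 01010101 (every bit flipped)
```

In practice the rate is kept small (e.g. 0.01) so mutation perturbs offspring without destroying the structure selection and crossover have built up.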

Continue
(Figure: mutation example)

Continue
• Why use Genetic Algorithms?
• They are robust.
• They provide optimisation over a large state space.
• Unlike traditional AI methods, they do not break down on a slight change
in the input or in the presence of noise.
• Applications of Genetic Algorithms
• Genetic algorithms have many applications; some of them are –
• Recurrent neural networks
• Mutation testing
• Code breaking
• Filtering and signal processing
• Learning fuzzy rule bases, etc.

Continue
• Local Search Optimization: Because LSO algorithms assess and enhance a
single solution at a time, they are often more computationally efficient.
Although they may identify local optima more quickly, they might take
more tries to get out of them.
• LSO algorithms work well in situations where there is a solid starting
solution and a reasonably smooth solution space. They are often used in
combinatorial optimization problems.
• Because LSO methods concentrate on a single solution, they may limit
diversity. To increase variation, strategies such as restarts and multiple
runs from various starting solutions are often used.
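The restart strategy mentioned above can be sketched as a wrapper around a simple climb. The objective function, the integer neighbourhood, and the restart range are illustrative assumptions:

```python
import random

# Toy objective with a poor local peak at x = 2 and the global peak at x = 8.
f = lambda x: -(x - 2) ** 2 if x < 5 else 16 - (x - 8) ** 2

def climb(x):
    """Greedy integer hill climb on f; returns (local peak, its value)."""
    while True:
        best = max((x - 1, x, x + 1), key=f)
        if best == x:
            return x, f(x)
        x = best

def random_restart(n_restarts=30, seed=0):
    """Restart the climb from several random states and keep the best peak."""
    rng = random.Random(seed)
    return max((climb(rng.randint(-10, 20)) for _ in range(n_restarts)),
               key=lambda result: result[1])

print(random_restart())   # with enough restarts this reaches the global peak
```

A single climb from a bad start gets stuck at x = 2; running many climbs from random starts makes it very likely that at least one lands in the basin of the global peak at x = 8.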
Continue

• The local escape operator (LEO) is a local search operator proposed in the
gradient-based optimizer (GBO), which aims to explore the new areas needed in
the problem and further enhance the exploitation ability of the algorithm.
• What is an example of a local search? An example of a local search is the
"Hill Climbing" algorithm. It starts with an initial solution and iteratively
makes small changes to improve the current solution, with the goal of finding
a locally optimal solution within a limited portion of the solution space.

Queries?
