Aiii Mid2

Uploaded by tripadarsh2112

Hill Climbing- • Hill climbing is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e. the best solution to the problem. • It terminates when it reaches a peak where no neighbor has a higher value. • Hill climbing is a technique used for solving optimization problems. • One widely discussed example of the hill climbing algorithm is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman. • It is also called greedy local search, as it only looks at its good immediate neighbor states and not beyond them. • A node of the hill climbing algorithm has two components: state and value. • Hill climbing is mostly used when a good heuristic is available. • In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.

• Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps decide which direction to move in the search space. • Greedy approach: The hill-climbing search moves in the direction which optimizes the cost. • No backtracking: It does not backtrack through the search space, as it does not remember previous states.

Simple hill climbing- is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time, selects the first one which improves the current cost, and sets it as the current state. • It checks only one successor state at a time, and if that state is better than the current state, it moves there; otherwise it stays in the same state. This algorithm has the following features: – Less time consuming – Less optimal solution, and the solution is not guaranteed.
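The loop described above can be sketched in Python. This is a minimal illustration; the function names, the ±1 step neighborhood, and the example objective are assumptions for demonstration, not from the notes:

```python
import random

def simple_hill_climbing(initial_state, neighbors, value, max_steps=1000):
    """Move to the first neighbor that improves on the current state;
    stop when no neighbor is better (a local peak)."""
    current = initial_state
    for _ in range(max_steps):
        improved = False
        for candidate in neighbors(current):
            if value(candidate) > value(current):
                current = candidate          # accept the first better neighbor
                improved = True
                break
        if not improved:                     # no better neighbor: local peak
            return current
    return current

# Toy objective: maximize f(x) = -(x - 3)^2 over integers, stepping by +/-1.
result = simple_hill_climbing(
    initial_state=0,
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 3) ** 2,
)
# result == 3, the peak of this landscape
```

Note how only a single current state is kept, matching the point above that no search tree is maintained.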

Different regions in the state space:- • Local Maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it. • Global Maximum: The global maximum is the best possible state in the state-space landscape. It has the highest value of the objective function. • Current state: The state in the landscape diagram where the agent is currently present. • Flat local maximum: A flat region of the landscape where all the neighbor states of the current state have the same value. • Shoulder: A plateau region which has an uphill edge.

Local Maximum:– A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state present which is higher than the local maximum. – Solution: The backtracking technique can be a solution to the local maximum problem in the state-space landscape. Create a list of the promising paths so that the algorithm can backtrack through the search space and explore other paths as well.

Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value; because of this, the algorithm cannot find any best direction to move. A hill-climbing search might get lost in the plateau area. Solution: The solution for the plateau is to take either big steps or very small steps while searching. Randomly select a state which is far away from the current state, so it is possible that the algorithm will land in a non-plateau region.

Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas but itself has a slope, and it cannot be climbed in a single move. Solution: With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.

Crossover- The crossover operator is analogous to reproduction and biological crossover. In this, more than one parent is selected and one or more off-springs are produced using the genetic material of the parents. Crossover is usually applied in a GA with a high probability. Crossover is a genetic operator used to vary the programming of a chromosome or chromosomes from one generation to the next. Crossover is sexual reproduction: two strings are picked from the mating pool at random and crossed over in order to produce superior offspring. The method chosen depends on the encoding method.

In genetic algorithms and evolutionary computation, crossover, also called recombination, is a genetic operator used to combine
the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an
existing population, and is analogous to the crossover that happens during sexual reproduction in biology. Solutions can also be
generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions may be
mutated before being added to the population.

One Point Crossover- In one-point crossover, a random crossover point is selected and the tails of the two parents are swapped to get new off-springs. Single Point Crossover: A crossover point on the parent organism string is selected. All data beyond that point in the organism string is swapped between the two parent organisms. The resulting strings are characterized by positional bias.
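The tail swap can be sketched for list-encoded chromosomes as follows. The function name and the all-ones/all-zeros example are illustrative choices, not from the notes:

```python
import random

def one_point_crossover(parent1, parent2, point=None):
    """Swap the tails of two equal-length parents at a crossover point."""
    if point is None:
        # avoid the trivial endpoints so both children differ from the parents
        point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

c1, c2 = one_point_crossover([1, 1, 1, 1], [0, 0, 0, 0], point=2)
# c1 == [1, 1, 0, 0] and c2 == [0, 0, 1, 1]
```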

Multi Point Crossover-Multi point crossover is a generalization of the one-point crossover wherein alternating segments are
swapped to get new off-springs. Two-Point Crossover : This is a specific case of a N-point Crossover technique. Two random
points are chosen on the individual chromosomes (strings) and the genetic material is exchanged at these points.
Uniform Crossover- In a uniform crossover, we don't divide the chromosome into segments; rather, we treat each gene separately. In this, we essentially flip a coin for each gene to decide whether or not it will be included in the off-spring. We can also bias the coin toward one parent, to have more genetic material in the child from that parent. Uniform Crossover: Each gene (bit) is selected randomly from one of the corresponding genes of the parent chromosomes. Use the tossing of a coin as an example technique.
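The per-gene coin flip, including the biased coin mentioned above, can be sketched as follows (a minimal illustration; names are assumptions):

```python
import random

def uniform_crossover(parent1, parent2, bias=0.5):
    """Pick each gene independently from one of the two parents.
    `bias` > 0.5 favors genes from parent1 (the biased coin)."""
    child = []
    for g1, g2 in zip(parent1, parent2):
        child.append(g1 if random.random() < bias else g2)
    return child

random.seed(0)  # for reproducibility of the demo
child = uniform_crossover([1, 1, 1, 1], [0, 0, 0, 0])
# each gene of `child` comes from one of the two parents
```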

Mutation- Mutation may be defined as a small random tweak in the chromosome, to get a new solution. It is used to maintain and introduce diversity in the genetic population and is usually applied with a low probability pm. If the probability is very high, the GA gets reduced to a random search. Mutation is the part of the GA which is related to the "exploration" of the search space. It has been observed that mutation is essential to the convergence of the GA, while crossover is not.

Bit Flip Mutation- In bit flip mutation, we select one or more random bits and flip them. This is used for binary encoded GAs. That is, we select one or more genes (array indices) and flip their values, changing 1s to 0s and vice versa.
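A minimal sketch, flipping each bit independently with the mutation probability pm from the section above (the per-bit formulation is one common variant, assumed here):

```python
import random

def bit_flip_mutation(chromosome, pm=0.1):
    """Flip each bit independently with probability pm (kept low in practice)."""
    return [1 - gene if random.random() < pm else gene for gene in chromosome]

bit_flip_mutation([0, 1, 0, 1], pm=1.0)  # → [1, 0, 1, 0] (every bit flipped)
```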

Random Resetting- Random resetting is an extension of bit flip for the integer representation. In this, a random value from the set of permissible values is assigned to a randomly chosen gene. In random resetting mutation, we select one or more genes (array indices) and replace their values with another random value from their given ranges.

Swap Mutation- In swap mutation, we select two positions on the chromosome at random, and interchange the values. This is
common in permutation based encodings. In Swap Mutation we select two genes from our chromosome and interchange their
values.

Scramble Mutation- Scramble mutation is also popular with permutation representations. In this, from the entire chromosome, a subset of genes is chosen and their values are scrambled or shuffled randomly. In scramble mutation we select a subset of our genes and scramble their values. The selected genes need not be contiguous.

Inversion Mutation- In inversion mutation, we select a subset of genes like in scramble mutation, but instead of shuffling the subset, we merely invert the entire string in the subset.
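The three permutation-oriented operators (swap, scramble, inversion) can be sketched together. Passing the subset bounds explicitly is a simplification for illustration; in practice the positions and subsets would be chosen at random:

```python
import random

def swap_mutation(chrom):
    """Exchange the values at two randomly chosen positions."""
    c = list(chrom)
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def scramble_mutation(chrom, start, end):
    """Shuffle the genes in the chosen subset [start, end)."""
    c = list(chrom)
    segment = c[start:end]
    random.shuffle(segment)
    c[start:end] = segment
    return c

def inversion_mutation(chrom, start, end):
    """Reverse the order of genes in the chosen subset [start, end)."""
    c = list(chrom)
    c[start:end] = c[start:end][::-1]
    return c

inversion_mutation([1, 2, 3, 4, 5], 1, 4)  # → [1, 4, 3, 2, 5]
```

All three preserve the multiset of genes, which is exactly why they suit permutation encodings where every value must appear exactly once.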

Genetic Algorithms (GAs) are adaptive heuristic search algorithms that belong to the larger class of evolutionary algorithms. In each generation, chromosomes (our solution candidates) undergo mutation and crossover and then selection to produce a better population whose candidates are nearer to our desired solution. The mutation operator is a unary operator: it needs only one parent to work on. It does so by selecting a few genes from the selected chromosome and applying the desired algorithm. The five mutation algorithms for string manipulation are: 1) Bit Flip Mutation 2) Random Resetting Mutation 3) Swap Mutation 4) Scramble Mutation 5) Inversion Mutation. Bit flip mutation is mainly used for bit-string manipulation, while the others can be used for any kind of string. Here our chromosome is represented as an array and each index represents one gene. Strings can be represented as an array of characters, which in turn is an array of ASCII or numeric values.

Selection- Parent selection is the process of selecting parents which mate and recombine to create off-springs for the next generation. Parent selection is very crucial to the convergence rate of the GA, as good parents drive individuals toward better and fitter solutions. However, care should be taken to prevent one extremely fit solution from taking over the entire population in a few generations, as this leads to the solutions being close to one another in the solution space, thereby leading to a loss of diversity. Maintaining good diversity in the population is extremely crucial for the success of a GA. This taking over of the entire population by one extremely fit solution is known as premature convergence and is an undesirable condition in a GA.
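One common concrete parent-selection scheme is fitness-proportionate ("roulette wheel") selection; the notes do not name a specific scheme, so this is an illustrative choice:

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick an individual with probability proportional to its fitness,
    so fitter parents are chosen more often but not exclusively."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)       # spin the wheel
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]                 # guard against float round-off
```

Because weaker individuals still have a nonzero chance of being picked, this scheme helps preserve the population diversity discussed above.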

Fitness Function (also known as the Evaluation Function) evaluates how close a given solution is to the optimum solution of the desired problem. It determines how fit a solution is. In genetic algorithms, each solution is generally represented as a string of binary numbers, known as a chromosome. We have to test these solutions and come up with the best set of solutions to solve a given problem. Each solution, therefore, needs to be awarded a score, to indicate how close it came to meeting the overall specification of the desired solution. This score is generated by applying the fitness function to the test, or results obtained from the tested solution. A fitness function is a function in a genetic algorithm (GA) that evaluates the quality of solutions represented by chromosomes. The fitness function assigns a value to each chromosome, with higher values indicating better solutions. The fitness function is a crucial part of a genetic algorithm. Simply defined, the fitness function is a function which takes a candidate solution to the problem as input and produces as output how "fit" or how "good" the solution is with respect to the problem in consideration.

The following requirements should be satisfied by any fitness function. 1. The fitness function should be clearly defined. The reader should be able to clearly understand how the fitness score is calculated. 2. The fitness function should be implemented efficiently. If the fitness function becomes the bottleneck of the algorithm, then the overall efficiency of the genetic algorithm will be reduced. 3. The fitness function should quantitatively measure how fit a given solution is in solving the problem. 4. The fitness function should generate intuitive results. The best/worst candidates should have the best/worst score values.
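A tiny example that meets all four requirements is the classic OneMax fitness for binary chromosomes (an illustrative choice, not from the notes):

```python
def fitness(chromosome):
    """OneMax fitness: the count of 1-bits in a binary chromosome.
    Clearly defined, cheap to compute, quantitative, and intuitive:
    the all-ones string scores highest, the all-zeros string lowest."""
    return sum(chromosome)

fitness([1, 0, 1, 1])  # → 3
```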
Beam Search: 1. A heuristic search algorithm that examines a graph by extending the most promising nodes in a limited set is known as beam search. 2. Beam search is a heuristic search technique that always expands the W best nodes at each level. It progresses level by level and moves downwards only from the best W nodes at each level. Beam search uses breadth-first search to build its search tree. 3. It generates all the successors of the current level's states at each level of the tree. However, at each level it only evaluates W states; other nodes are not taken into account. 4. The heuristic cost associated with a node is used to choose the best nodes. The width of the beam search is denoted by W. If B is the branching factor, at every depth there will always be W × B nodes under consideration, but only W will be chosen. More states are trimmed when the beam width is reduced. 5. When W = 1, the search becomes a hill-climbing search in which the best node is always chosen from the successor nodes. If the beam width is unlimited, no states are pruned and beam search is identical to breadth-first search. 6. The beam width bounds the amount of memory needed to complete the search, but it comes at the cost of completeness and optimality (possibly it will not find the best solution). The reason for this danger is that the desired state could have been pruned.
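The level-by-level pruning can be sketched as follows; the toy counting problem at the end is an assumption for demonstration:

```python
import heapq

def beam_search(start, successors, heuristic, is_goal, width):
    """Breadth-first expansion, but keep only the `width` best nodes
    (lowest heuristic cost) at each level; the rest are pruned."""
    frontier = [start]
    visited = {start}
    while frontier:
        next_level = []
        for node in frontier:
            if is_goal(node):
                return node
            for succ in successors(node):
                if succ not in visited:
                    visited.add(succ)
                    next_level.append(succ)
        # retain only the W most promising states for the next level
        frontier = heapq.nsmallest(width, next_level, key=heuristic)
    return None   # goal pruned away or unreachable: beam search is incomplete

# Toy problem: reach 10 from 0, moving +1 or +2, with h(x) = |10 - x|.
found = beam_search(0, lambda x: [x + 1, x + 2],
                    lambda x: abs(10 - x), lambda x: x == 10, width=2)
# found == 10
```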

ONLINE SEARCH AGENTS AND UNKNOWN ENVIRONMENTS: Online search agents operate in unknown environments and must learn through interaction rather than pure computation. They interleave planning and action by first taking an action, observing the outcome, and using this information to plan their next action. 1. An online search agent operates by interleaving computation and action: first it takes an action, then it observes the environment and computes the next action. 2. Online search is a good idea in dynamic or semi-dynamic domains, i.e. domains where there is a penalty for sitting around and computing too long. 3. Online search is an even better idea for stochastic domains. (The term "online" is commonly used in computer science to refer to algorithms that must process input data as they are received, rather than waiting for the entire input data set to become available.) 4. In general, an offline search would have to come up with an exponentially large contingency plan that considers all possible happenings, while an online search need only consider what actually does happen. For example, a chess-playing agent is well-advised to make its first move long before it has figured out the complete course of the game. 5. Online search is a necessary idea for an exploration problem, where the states and actions are unknown to the agent. An agent in this state of ignorance must use its actions as experiments to determine what to do next, and hence must interleave computation and action.

Mini-Max Algorithm in AI 1- The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally. 2- The mini-max algorithm uses recursion to search through the game tree. 3- The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. This algorithm computes the minimax decision for the current state. 4- In this algorithm two players play the game; one is called MAX and the other is called MIN. 5- Both players fight it out, as the opponent player gets the minimum benefit while they get the maximum benefit. 6- Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value. 7- The minimax algorithm performs a depth-first search for the exploration of the complete game tree. 8- The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.
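The recursion can be sketched generically; the two-leaf toy tree below is an assumed example, not a real game:

```python
def minimax(state, depth, maximizing, successors, evaluate, is_terminal):
    """Depth-first minimax: MAX picks the largest child value,
    MIN the smallest, all the way down to terminal nodes."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    values = [minimax(s, depth - 1, not maximizing,
                      successors, evaluate, is_terminal)
              for s in successors(state)]
    return max(values) if maximizing else min(values)

# Toy game tree: MAX at root 'A' chooses between leaves 'B' (3) and 'C' (5).
tree = {'A': ['B', 'C'], 'B': [], 'C': []}
leaf_value = {'B': 3, 'C': 5}
best = minimax('A', 2, True,
               successors=lambda s: tree[s],
               evaluate=lambda s: leaf_value.get(s, 0),
               is_terminal=lambda s: not tree[s])
# best == 5: MAX takes the larger of the two leaf values
```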

Optimal decisions in games- A game can be defined as a type of search problem in AI which can be formalized with the following elements: 1. Initial state: It specifies how the game is set up at the start. 2. Player(s): It specifies which player is to move in a state. 3. Action(s): It returns the set of legal moves in a state. 4. Result(s, a): The transition model, which specifies the result of a move in the state space. 5. Terminal-Test(s): The terminal test is true if the game is over, else it is false. The states where the game ends are called terminal states. 6. Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called the payoff function. For chess, the outcomes are a win, loss, or draw, with payoff values +1, 0, and 1/2. For tic-tac-toe, the utility values are +1, -1, and 0.

Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm. 1- As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. Hence there is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm. 2- Alpha-beta pruning can be applied at any depth of the tree, and sometimes it prunes not only the tree leaves but also entire sub-trees. 3- The two parameters can be defined as: A. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞. B. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞. 4- Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes which do not really affect the final decision and only make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm fast.
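A sketch of the pruned recursion, using the same generic interface as a plain minimax; the three-level toy tree is an assumed example (in it, leaf 'G' is never evaluated thanks to the beta cut-off):

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              successors, evaluate, is_terminal):
    """Minimax with alpha-beta pruning; returns the same value as
    plain minimax but skips branches that cannot affect the decision."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for s in successors(state):
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False,
                                         successors, evaluate, is_terminal))
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cut-off: MIN will never allow this
                break
        return value
    else:
        value = float('inf')
        for s in successors(state):
            value = min(value, alphabeta(s, depth - 1, alpha, beta, True,
                                         successors, evaluate, is_terminal))
            beta = min(beta, value)
            if alpha >= beta:      # alpha cut-off: MAX will never allow this
                break
        return value

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
leaf_value = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
val = alphabeta('A', 3, float('-inf'), float('inf'), True,
                lambda s: tree.get(s, []), lambda s: leaf_value[s],
                lambda s: s not in tree)
# val == 3: B yields min(3, 5) = 3; at C, after F yields 2 <= alpha = 3,
# the branch is cut and G is pruned; root takes max(3, 2) = 3.
```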

Cutting off Search: – In general, it is infeasible to search the entire game tree. – In practice, a Cutoff-Test decides when to stop searching further. – Prefer to stop at quiescent positions. – Prefer to keep searching in positions that are still violently in flux.
Forward Pruning: A technique for reducing the number of nodes to be examined at each level in a search process. An evaluation function can be used to prune unpromising nodes, or a mechanical method might be used, such as beam search: expand only n nodes at each level.

Artificial Intelligence is the study of building agents that act rationally. Most of the time, these agents perform some kind of search algorithm in the background in order to achieve their tasks. A search problem consists of: 1- A State Space: the set of all possible states where you can be. 2- A Start State: the state from where the search begins. 3- A Goal State, with a goal test: a function that looks at the current state and returns whether or not it is the goal state. 4- The solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state. 5- This plan is achieved through search algorithms.

Blind or Uninformed Search: A general-purpose algorithm-based search that works on the principle of brute force. As discussed earlier, and as the name suggests, such a search doesn't have any additional information about the goal or the path. The salient features of blind search are as follows: 1. It doesn't have any information about the goal, so it examines each node and consumes time searching for the goal. 2. In a blind search, there is no information about the search space. The agent creates a path on its way while searching for the goal. 3. As there is no information about the goal, there is no information about the path between the initial point and the target goal. So the agents create their own path by going to each node until they have reached the final node, which is the end goal. Once they have reached the goal, the search stops. 4. The process of blind search is time-consuming, as it has to examine each and every node, and there is no concept of path planning in blind search either.

Heuristic or Informed Searches- Informed searches are those in which additional information about the search space is available. The heuristic assigns real-number values to the nodes and branches, and this guides the search toward the particular area of the space that provides the solution to the model. The main features of heuristic searches are as follows: 1. It gives a real and possible solution. 2. The solution can be in the form of a point or a state space. 3. They are also capable of providing the path from the initial position to the final position, goal, or target. 4- It is quick and inexpensive as compared to blind searches. 5- It provides feedback to the model as well. 6- The heuristic defined for each node, branch, and goal depends on the user.

A* Tree Search, or simply A* Search, combines the strengths of uniform-cost search and greedy search. In this search, the heuristic is the summation of the cost in UCS, denoted by g(x), and the cost in the greedy search, denoted by h(x). The summed cost is denoted by f(x). Heuristic: The following points should be noted with respect to heuristics in A* search. • Here, h(x) is called the forward cost and is an estimate of the distance of the current node from the goal node. • And g(x) is called the backward cost and is the cumulative cost of a node from the root node. • A* search is optimal only when, for all nodes, the forward cost h(x) underestimates the actual cost h*(x) to reach the goal. This property of the A* heuristic is called admissibility.
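The f(x) = g(x) + h(x) ordering can be sketched with a priority queue; the four-node graph and its admissible heuristic are assumed for demonstration:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: always expand the frontier node with the lowest
    f(x) = g(x) + h(x). `neighbors(n)` yields (successor, step_cost) pairs."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known backward cost
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(succ, float('inf')):
                best_g[succ] = ng
                heapq.heappush(frontier, (ng + h(succ), ng, succ, path + [succ]))
    return None, float('inf')

# Toy graph where the direct edge A->C is costlier than going via B.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h_table = {'A': 3, 'B': 2, 'C': 1, 'D': 0}      # admissible: never overestimates
path, cost = a_star('A', 'D', lambda n: graph[n], lambda n: h_table[n])
# path == ['A', 'B', 'C', 'D'], cost == 3
```

Because h never overestimates the true remaining cost here, the first time the goal is popped the path is guaranteed optimal.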

What is heuristic and blind search in AI?- Direct heuristic search techniques may also be called blind control strategy, blind search, and uninformed search. They utilize an arbitrary sequencing of operations and look for a solution throughout the entire state space. These include Depth First Search (DFS) and Breadth First Search (BFS).

What is the difference between a blind search and a guided search? No heuristics – uninformed search algorithms do not use additional information, such as heuristics or cost estimates, to guide the search process. Blind search – uninformed search algorithms do not consider the cost of reaching the goal or the likelihood of finding a solution, leading to a blind search process.

What is the difference between blind search and heuristic search, with examples? Blind search explores without additional information, like depth-first or breadth-first search. Heuristic search, such as the A* algorithm, uses domain-specific knowledge to guide the search, like estimating distances in pathfinding.

