
P.S.N.A.

COLLEGE OF ENGINEERING & TECHNOLOGY


(An Autonomous Institution affiliated to Anna University, Chennai)
Kothandaraman Nagar, Muthanampatti (PO), Dindigul – 624 622.
Phone: 0451-2554032,2554349 Web Link: www.psnacet.org
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Subject Code / Name : EC2612 / Artificial Intelligence & Machine Learning


Year / Semester : III / VI ‘C’
SYLLABUS
UNIT I PROBLEM SOLVING
Introduction to AI - AI Applications - Problem solving agents – search algorithms – uninformed
search strategies – Heuristic search strategies – Local search and optimization problems –
adversarial search – constraint satisfaction problems (CSP)

INTRODUCTION TO AI
Historically, researchers have pursued several different versions of AI. Some have defined
intelligence in terms of fidelity to human performance, while others prefer an abstract, formal definition
of intelligence called rationality—loosely speaking, doing the “right thing.” The subject matter itself
also varies: some consider intelligence to be a property of internal thought processes and reasoning, while
others focus on intelligent behavior, an external characterization.
From these two dimensions—human vs. rational and thought vs. behavior—there are four
possible combinations, and there have been adherents and research programs for all four. The methods
used are necessarily different: the pursuit of human-like intelligence must be in part an empirical science
related to psychology, involving observations and hypotheses about actual human behavior and thought
processes; a rationalist approach, on the other hand, involves a combination of mathematics and
engineering, and connects to statistics, control theory, and economics. The various groups have both
disparaged and helped each other.
DEFINITION
Artificial Intelligence (AI) is a branch of science that deals with helping machines find solutions
to complex problems in a more human-like fashion.
• Artificial Intelligence is the branch of computer science concerned with making computers behave
like humans.
• Coined and defined as “The science and engineering of making intelligent machines, especially
intelligent computer programs” by John McCarthy in 1956.
Some definitions of AI are concerned with thought processes and reasoning, whereas others
address behavior.

1. Acting humanly: The Turing Test approach


The Turing test, proposed by Alan Turing (1950), was designed as a thought experiment that
would sidestep the philosophical vagueness of the question “Can a machine think?” A computer passes
the test if a human interrogator, after posing some written questions, cannot tell whether the written
responses come from a person or from a computer.
The computer would need to possess the following capabilities:
 Natural Language Processing to enable it to communicate successfully in English;
 Knowledge Representation to store what it knows or hears;
 Automated Reasoning to use the stored information to answer questions and to draw new
conclusions;
 Machine Learning to adapt to new circumstances and to detect and extrapolate patterns.
Turing viewed the physical simulation of a person as unnecessary to demonstrate intelligence.
However, other researchers have proposed a total Turing test, which requires interaction with objects
and people in the real world. To pass the total Turing test, a robot will need
 computer vision and speech recognition to perceive the world;
 robotics to manipulate objects and move about.

2. Thinking humanly: The cognitive modeling approach:


To say that a program thinks like a human, we must know how humans think. We can learn about
human thought in three ways:
introspection—trying to catch our own thoughts as they go by;
psychological experiments—observing a person in action;
brain imaging—observing the brain in action.
Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory
as a computer program. If the program’s input–output behavior matches corresponding human behavior,
that is evidence that some of the program’s mechanisms could also be operating in humans.
COGNITIVE SCIENCE - The interdisciplinary field that brings together computer models from
AI and experimental techniques from psychology to construct precise and testable theories of the human
mind.
Cognitive science is a fascinating field in itself, worthy of several textbooks and at least one
encyclopedia (Wilson and Keil 1999). We will occasionally comment on similarities or differences
between AI techniques and human cognition. Real cognitive science, however, is necessarily based on
experimental investigation of actual humans or animals.

3. Thinking rationally: The “laws of thought” approach:


The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking”—that is,
irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always
yielded correct conclusions when given correct premises.
E.g.:
The canonical example starts with “Socrates is a man” and “all men are mortal” and concludes that
Socrates is mortal:
Socrates is a man; all men are mortal; => Socrates is mortal.
These laws of thought were supposed to govern the operation of the mind; their study
initiated the field called logic.

4 Acting rationally: The rational agent approach:


An Agent is just something that acts (agent comes from the Latin agere, to do).
All computer programs do something, but computer agents are expected to do more: operate
autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and
create and pursue goals.
A Rational Agent is one that acts so as to achieve the best outcome or, when there is uncertainty,
the best expected outcome.
In a nutshell, AI has focused on the study and construction of agents that do the right thing. What
counts as the right thing is defined by the objective that we provide to the agent. This general paradigm is
so pervasive that we might call it the standard model.
Limited rationality means acting appropriately when there is not enough time to do all the
computations one might like. However, perfect rationality often remains a good starting point for
theoretical analysis.

FUTURE OF ARTIFICIAL INTELLIGENCE


Transportation: Although it could take a decade or more to perfect them, autonomous cars will
one day ferry us from place to place.
Manufacturing: AI powered robots work alongside humans to perform a limited range of tasks
like assembly and stacking, and predictive analysis sensors keep equipment running smoothly.
Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more quickly and
accurately diagnosed, drug discovery is sped up and streamlined, virtual nursing assistants monitor
patients and big data analysis helps to create a more personalized patient experience.
Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist human
instructors and facial analysis gauges the emotions of students to help determine who’s struggling or
bored and better tailor the experience to their individual needs.
Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg uses
Cyborg technology to help make quick sense of complex financial reports. The Associated Press employs
the natural language abilities of Automated Insights to produce 3,700 earnings report stories per year —
nearly four times more than in the recent past.
Customer Service: Last but hardly least, Google is working on an AI assistant that can place
human-like calls to make appointments at, say, your neighborhood hair salon. In addition to words, the
system understands context and nuance.
Maintenance: Ongoing maintenance of machinery is a huge expense for manufacturers, and the shift from
reactive to predictive maintenance has become a must for all manufacturers. By using advanced AI
algorithms and artificial neural networks to formulate predictions regarding asset malfunction and
briefing technicians ahead of time, AI has managed to save businesses valuable time and resources.
Driverless vehicles: Automated vehicles now exist in many countries. The U.S.
Department of Transportation has released definitions and rules pertaining to the
various levels of automation which can be implemented.
Robotic Process Automation: Robotic process automation refers to the use of machine learning
to automate rule-based tasks. It helps individuals focus on the crucial aspects of their
work and leave the routine work to machines.
In addition, predictive maintenance has helped extend the life of machines and has resulted in an
overall reduction in labor costs.

APPLICATIONS OF ARTIFICIAL INTELLIGENCE


Artificial Intelligence has various applications in today's society. It is becoming essential for
our time because it can solve complex problems in an efficient way across multiple industries, such as
healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and fast.

Following are some sectors which have the application of Artificial Intelligence:
1. AI in Astronomy
Artificial Intelligence can be very useful for solving complex problems in astronomy. AI technology can
be helpful for understanding the universe: how it works, its origin, and so on.
2. AI in Healthcare
In the last five to ten years, AI has become more advantageous for the healthcare industry and is going
to have a significant impact on it.
Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help
doctors with diagnoses and can give warning when patients are worsening, so that medical help can reach
the patient before hospitalization.
3. AI in Gaming
AI can be used for gaming purposes. AI machines can play strategic games like chess, where the
machine needs to consider a large number of possible positions.
4. AI in Finance
AI and the finance industry are a natural match for each other. The finance industry is implementing
automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning in financial
processes.
5. AI in Data Security
The security of data is crucial for every company, and cyber-attacks are growing very rapidly in the
digital world. AI can be used to make your data safer and more secure. Tools such as the AEG bot and
the AI2 Platform are used to detect software bugs and cyber-attacks more effectively.
6. AI in Social Media
Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need
to be stored and managed in a very efficient way. AI can organize and manage massive amounts of data,
and it can analyze lots of data to identify the latest trends, hashtags, and requirements of different users.
7. AI in Travel & Transport
AI is in high demand in the travel industry. It is capable of performing various travel-related
tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to
customers. Travel industries are using AI-powered chatbots which can interact with customers in a
human-like way for better and faster responses.
8. AI in Automotive Industry
Some automotive companies are using AI to provide virtual assistants to their users for better
performance. For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
Various companies are currently working on developing self-driving cars which can make your journey
safer and more secure.
9. AI in Robotics
Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to
perform some repetitive task, but with the help of AI, we can create intelligent robots which
can perform tasks from their own experience without being pre-programmed.
Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots
named Erica and Sophia have been developed, which can talk and behave like humans.
10. AI in Entertainment
We already use some AI-based applications in our daily life through entertainment services
such as Netflix or Amazon. With the help of ML/AI algorithms, these services show
recommendations for programs or shows.
11. AI in Agriculture
Agriculture is an area which requires various resources, labor, money, and time for the best result.
Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI
in the form of agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture
can be very helpful for farmers.
12. AI in E-commerce
AI is providing a competitive edge to the e-commerce industry, and it is more and more in demand in
the e-commerce business. AI is helping shoppers discover associated products in their recommended
size, color, or even brand.
13. AI in education
AI can automate grading so that the tutor has more time to teach. An AI chatbot can communicate with
students as a teaching assistant.
In the future, AI could work as a personal virtual tutor for students, easily accessible at any
time and any place.

PROBLEM SOLVING AGENTS


Problem Solving:
“Problem-solving refers to a state where we wish to reach to a definite goal from a present state
or condition.”
“Problem-solving is a part of artificial intelligence which encompasses a number of techniques
such as algorithms, heuristics to solve a problem.”
Problem-Solving Agent:
A problem-solving agent proceeds by precisely defining problems and their solutions.

A problem-solving agent is goal-driven and focuses on satisfying the goal.


Steps performed by Problem-solving agent
 Goal Formulation: It organizes the steps/sequence required to formulate one goal out of
multiple goals as well as actions to achieve that goal. Goal formulation is based on the current
situation and the agent’s performance measure.
 Problem Formulation: Decides what actions should be taken to achieve the formulated goal.

WELL-DEFINED PROBLEMS & SOLUTIONS:


 Five components involved in problem formulation:
 Initial State: Starting state or initial step of the agent towards its goal.
 Actions: Description of the possible actions available to the agent.
 Transition Model: Describes what each action does.
 Goal Test: Determines if the given state is a goal state.
 Path cost: Assigns a numeric cost to each path. The problem-solving
agent selects a cost function, which reflects its performance measure. Remember, an
optimal solution has the lowest path cost among all the solutions. (A minimal code sketch of
these five components follows.)
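
These five components can be written down directly as a small program interface. The following is a
minimal sketch in Python; the class and method names (Problem, actions, result, and so on) are
illustrative assumptions, not part of the syllabus:

from abc import ABC, abstractmethod

class Problem(ABC):
    """The five components of a well-defined problem."""

    def __init__(self, initial, goal):
        self.initial = initial            # Initial State
        self.goal = goal                  # used by the Goal Test

    @abstractmethod
    def actions(self, state):
        """Actions: the set of actions applicable in `state`."""

    @abstractmethod
    def result(self, state, action):
        """Transition Model: the state reached by doing `action` in `state`."""

    def is_goal(self, state):
        """Goal Test: by default, compare against a single goal state."""
        return state == self.goal

    def action_cost(self, s, a, s2):
        """Path cost is the sum of per-action costs; default 1 per step."""
        return 1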

Example Problem:
Imagine an agent enjoying a touring vacation in Romania. The agent wants to take in the sights,
improve its Romanian, enjoy the nightlife, avoid hangovers, and so on. The decision problem is a
complex one. Now, suppose the agent is currently in the city of Arad and has a nonrefundable ticket to
fly out of Bucharest the following day. The agent observes street signs and sees that there are three roads
leading out of Arad: one toward Sibiu, one to Timisoara, and one to Zerind. None of these are the goal,
so unless the agent is familiar with the geography of Romania, it will not know which road to follow.
If the agent has no additional information—that is, if the environment is unknown—then the
agent can do no better than to execute one of the actions at random. In this chapter, we will assume our
agents always have access to information about the world, such as the map in Figure 3.1 . With that
information, the agent can follow this four-phase problem-solving process:

GOAL FORMULATION: The agent adopts the goal of reaching Bucharest. Goals organize behavior by
limiting the objectives and hence the actions to be considered.
PROBLEM FORMULATION: The agent devises a description of the states and actions necessary to
reach the goal—an abstract model of the relevant part of the world. For our agent, one good model is to
consider the actions of traveling from one city to an adjacent city, and therefore the only fact about the
state of the world that will change due to an action is the current city.
SEARCH: Before taking any action in the real world, the agent simulates sequences of actions in its
model, searching until it finds a sequence of actions that reaches the goal. Such a sequence is called a
solution. The agent might have to simulate multiple sequences that do not reach the goal, but eventually it
will find a solution (such as going from Arad to Sibiu to Fagaras to Bucharest), or it will find that no
solution is possible.
EXECUTION: The agent can now execute the actions in the solution, one at a time.
It is an important property that in a fully observable, deterministic, known environment, the
solution to any problem is a fixed sequence of actions: drive to Sibiu, then Fagaras, then Bucharest. If the
model is correct, then once the agent has found a solution, it can ignore its percepts while it is executing
the actions—closing its eyes, so to speak—because the solution is guaranteed to lead to the goal. Control
theorists call this an open-loop system: ignoring the percepts breaks the loop between agent and
environment. If there is a chance that the model is incorrect, or the environment is nondeterministic, then
the agent would be safer using a closed-loop approach that monitors the percepts.
In partially observable or nondeterministic environments, a solution would be a branching
strategy that recommends different future actions depending on what percepts arrive. For example, the
agent might plan to drive from Arad to Sibiu but might need a contingency plan in case it arrives in
Zerind by accident or finds a sign saying “Drum Închis” (Road Closed).

Search problems and solutions


A search problem can be defined formally as follows:
 Problem
 A set of possible states that the environment can be in. We call this the state space.
 The initial state that the agent starts in. For example: Arad.
 A set of one or more goal states. Sometimes there is one goal state (e.g., Bucharest), sometimes
there is a small set of alternative goal states, and sometimes the goal is defined by a property that
applies to many states (potentially an infinite number). For example, in a vacuum-cleaner world,
the goal might be to have no dirt in any location, regardless of any other facts about the state. We
can account for all three of these possibilities by specifying an IS-GOAL method for a problem.
In this chapter we will sometimes say “the goal” for simplicity, but what we say also applies to
“any one of the possible goal states.”
 The actions available to the agent. Given a state s, ACTIONS(s) returns a finite set of actions that
can be executed in s. We say that each of these actions is applicable in s.
An example:
ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}.
 A transition model, which describes what each action does. RESULT(s,a) returns the state that
results from doing action a in state s.
For example,
RESULT(Arad, ToZerind) = Zerind.
 An action cost function, denoted by ACTION-COST(s,a,s′) when we are programming or
c(s,a,s′) when we are doing math, that gives the numeric cost of applying action a in state s to reach
state s′. A problem-solving agent should use a cost function that reflects its own performance
measure; for example, for route-finding agents, the cost of an action might be the length in miles
(as seen in Figure 3.1), or it might be the time it takes to complete the action.
 A sequence of actions forms a path, and a solution is a path from the initial state to a goal state.
We assume that action costs are additive; that is, the total cost of a path is the sum of the
individual action costs. An optimal solution has the lowest path cost among all solutions. In this
chapter, we assume that all action costs will be positive, to avoid certain complications.
 The state space can be represented as a graph in which the vertices are states and the directed
edges between them are actions. The map of Romania shown in Figure 3.1 is such a graph, where
each road indicates two actions, one in each direction. (A concrete code fragment of this
formulation follows.)
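
As a concrete illustration, here is a fragment of the Romania map encoded in this style, building on
the Problem sketch given earlier. The road lengths follow the standard map and are consistent with
the 80, 99, 97, and 211 figures used in the uniform-cost example later in these notes:

# A fragment of the Romania map as an undirected weighted graph.
ROADS = {
    ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118, ('Arad', 'Zerind'): 75,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'RimnicuVilcea'): 80,
    ('RimnicuVilcea', 'Pitesti'): 97,
    ('Fagaras', 'Bucharest'): 211, ('Pitesti', 'Bucharest'): 101,
}

class RomaniaProblem(Problem):
    """Route finding: an action is named by the city it drives to."""

    def actions(self, state):
        return [b for (a, b) in ROADS if a == state] + \
               [a for (a, b) in ROADS if b == state]

    def result(self, state, action):
        return action                     # driving 'ToCity' lands in City

    def action_cost(self, s, a, s2):
        return ROADS.get((s, s2)) or ROADS[(s2, s)]

problem = RomaniaProblem('Arad', 'Bucharest')
assert sorted(problem.actions('Arad')) == ['Sibiu', 'Timisoara', 'Zerind']
assert problem.result('Arad', 'Zerind') == 'Zerind'   # RESULT(Arad, ToZerind)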

Formulating problems
Our formulation of the problem of getting to Bucharest is a model—an abstract mathematical
description—and not the real thing. Compare the simple atomic state description Arad to an actual cross-
country trip, where the state of the world includes so many things: the traveling companions, the current
radio program, the scenery out of the window, the proximity of law enforcement officers, the distance to
the next rest stop, the condition of the road, the weather, the traffic, and so on. All these considerations
are left out of our model because they are irrelevant to the problem of finding a route to Bucharest.
The process of removing detail from a representation is called abstraction. A good problem
formulation has the right level of detail. If the actions were at the level of “move the right foot forward a
centimeter” or “turn the steering wheel one degree left,” the agent would probably never find its way out
of the parking lot, let alone to Bucharest.
Can we be more precise about the appropriate level of abstraction? Think of the abstract states
and actions we have chosen as corresponding to large sets of detailed world states and detailed action
sequences. Now consider a solution to the abstract problem: for example, the path from Arad to Sibiu to
Rimnicu Vilcea to Pitesti to Bucharest. This abstract solution corresponds to a large number of more
detailed paths. For example, we could drive with the radio on between Sibiu and Rimnicu Vilcea, and
then switch it off for the rest of the trip.
The abstraction is valid if we can elaborate any abstract solution into a solution in the more
detailed world; a sufficient condition is that for every detailed state that is “in Arad,” there is a detailed
path to some state that is “in Sibiu,” and so on. The abstraction is useful if carrying out each of the
actions in the solution is easier than the original problem; in our case, the action “drive from Arad to
Sibiu” can be carried out without further search or planning by a driver with average skill. The choice of
a good abstraction thus involves removing as much detail as possible while retaining validity and
ensuring that the abstract actions are easy to carry out. Were it not for the ability to construct useful
abstractions, intelligent agents would be completely swamped by the real world.

EXAMPLE PROBLEMS
The problem-solving approach has been applied to a vast array of task environments. We list some of
the best known here, distinguishing between standardized and real-world problems.
 A standardized problem is intended to illustrate or exercise various problem-solving methods. It
can be given a concise, exact description and hence is suitable as a benchmark for researchers to
compare the performance of algorithms.
 A real-world problem, such as robot navigation, is one whose solutions people actually use, and
whose formulation is idiosyncratic, not standardized, because, for example, each robot has
different sensors that produce different data.

Some Toy Problems / Standardized problems


I. 8-Puzzle Problem: Here, we have a 3×3 matrix with movable tiles numbered from 1 to 8 and a blank
space. The tile adjacent to the blank space can slide into that space. The objective is to reach the
specified goal configuration shown in the figure below.
- In the figure, our task is to convert the current state into the goal state by sliding digits into
the blank space.

 States: The location of each of the numbered tiles and the blank tile.
 Initial State: We can start from any state as the initial state.
 Actions: The actions of the blank space are defined: Left, Right, Up, or Down.
 Transition Model: It returns the resulting state for a given state and action.
 Goal test: It identifies whether we have reached the goal state.
 Path cost: The number of steps in the path, where the cost of each step is 1. (A code sketch of
this formulation follows the list.)
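
The 8-puzzle formulation above is compact enough to sketch in a few lines of Python. Here a state is
a 9-tuple read row by row, with 0 standing for the blank; this encoding is an illustrative choice,
not the only possible one:

# Moves of the blank within the flattened 3x3 board.
MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def actions(state):
    """Actions of the blank space that stay on the board."""
    i = state.index(0)                    # position of the blank
    acts = []
    if i >= 3: acts.append('Up')
    if i <= 5: acts.append('Down')
    if i % 3 != 0: acts.append('Left')
    if i % 3 != 2: acts.append('Right')
    return acts

def result(state, action):
    """Transition model: slide the adjacent tile into the blank."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
assert result(start, 'Up') == (7, 0, 4, 5, 2, 6, 8, 3, 1)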

II. 8-queens problem:


The aim of this problem is to place eight queens on a chessboard so that no queen may
attack another. A queen can attack other queens either diagonally or in the same row or column.
From the following figure, we can understand the problem as well as its correct solution.

For this problem, there are two main kinds of formulation:


1. Incremental formulation: It starts from an empty state, and the operator adds a queen at each
step.
Following steps are involved in this formulation:
 States: Arrangement of any 0 to 8 queens on the chessboard.
 Initial State: An empty chessboard
 Actions: Add a queen to any empty box.
 Transition model: Returns the chessboard with the queen added in a box.
 Goal test: Checks whether 8-queens are placed on the chessboard without any attack.
 Path cost: There is no need for path cost because only final states are counted.
In this formulation, there are approximately 1.8 × 10^14 possible sequences to investigate; a sketch of
the attack test used for safe placement appears below.
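
A common refinement of the incremental formulation places queens one column at a time, only in
squares not attacked by the queens already placed. A minimal sketch, where the state is a tuple of
queen rows, one per filled column (names are illustrative):

def safe(state, row):
    """Can a queen go in `row` of the next column without being attacked?"""
    col = len(state)
    for c, r in enumerate(state):
        if r == row or abs(r - row) == abs(c - col):   # same row or diagonal
            return False
    return True

def actions(state):
    return [row for row in range(8) if safe(state, row)]

def result(state, row):
    return state + (row,)

def is_goal(state):
    return len(state) == 8

assert actions(()) == list(range(8))           # empty board: any row works
assert actions((0,)) == [2, 3, 4, 5, 6, 7]     # row 0 and its diagonal (row 1) are attacked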

2. Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around to
escape attacks.
Following steps are involved in this formulation
 States: Arrangement of all 8 queens, one per column, with no queen attacking another.
 Actions: Move a queen to a square where it is safe from attack.

Other standardized problems:


A grid world problem – vacuum world, Sokoban puzzle, sliding-tile puzzle, 8-puzzle, 15-puzzle

Some Real-World Problems


 Route-finding problem: Route-finding algorithms are used in a variety of applications. Some, such
as Web sites and in-car systems that provide driving directions, are relatively straightforward
extensions of the Romania example. (The main complications are varying costs due to traffic-
dependent delays, and rerouting due to road closures.) Others, such as routing video streams in
computer networks, military operations planning, and airline travel-planning systems, involve much
more complex specifications.
Consider the airline travel problems that must be solved by a travel-planning Web site:
 STATES: Each state obviously includes a location (e.g., an airport) and the current time.
 Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must record
extra information about these “historical” aspects.
 INITIAL STATE: The user’s home airport.
 ACTIONS: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
 TRANSITION MODEL: The state resulting from taking a flight will have the flight’s
destination as the new location and the flight’s arrival time as the new time.
 GOAL STATE: A destination city. Sometimes the goal can be more complex, such as “arrive
at the destination on a nonstop flight.”
 ACTION COST: A combination of monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer reward
points, and so on.
 Traveling salesperson problem (TSP): It is a touring problem where the salesperson may visit each
city only once. The objective is to find the shortest tour that visits every city.
 VLSI Layout problem: In this problem, millions of components and connections are positioned on a
chip in order to minimize the area, circuit delays, and stray capacitances, and to maximize the
manufacturing yield.
The layout problem is split into two parts:
 Cell layout: Here, the primitive components of the circuit are grouped into cells, each
performing its specific function. Each cell has a fixed shape and size. The task is to place the
cells on the chip without overlapping each other.
 Channel routing: It finds a specific route for each wire through the gaps between the cells.
 Protein Design: The objective is to find a sequence of amino acids which will fold into a 3D
protein with properties that can help cure some disease.

SEARCHING FOR SOLUTIONS


In order to solve a problem, an agent needs to search for a solution. For solving different kinds of
problems, an agent makes use of different strategies to reach the goal, searching for the best possible
algorithm. This process of searching is known as a search strategy.

MEASURING PROBLEM-SOLVING PERFORMANCE:


Before discussing different search strategies, we should consider how the performance of an
algorithm is measured. There are four ways to measure the performance of an algorithm:
Completeness: It measures whether the algorithm is guaranteed to find a solution (if any solution exists).
Optimality: It measures whether the strategy finds an optimal solution.
Time Complexity: The time taken by the algorithm to find a solution.
Space Complexity: The amount of memory required to perform the search.

The complexity of an algorithm depends on the branching factor b (the maximum number of
successors of any node), the depth d of the shallowest goal node (the number of steps along the path
from the root), and the maximum length m of any path in the state space.

SEARCH STRATEGIES / ALGORITHMS


Search: The process of looking for a sequence of actions that reaches the goal.
A search algorithm takes a problem as input and returns a solution in the form of an action
sequence. Once a solution is found, the actions it recommends can be carried out.
There are two types of strategies that describe a solution for a given problem:

1. Uninformed Search (Blind Search)


>This type of search strategy does not have any additional information about the states except the
information provided in the problem definition.
>It can only generate the successors and distinguish a goal state from a non-goal state.
>This type of search does not maintain any internal state, which is why it is also known as blind search.

The following are the types of uninformed searches:


 Breadth-first search
 Uniform cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
 Bidirectional search

2. Informed Search (Heuristic Search)


>This type of search strategy contains some additional information about the states beyond the
problem definition.
>This search uses problem-specific knowledge to find more efficient solutions.
>This search maintains some sort of internal state via heuristic functions (which provide hints),
so it is also called heuristic search.

The following are the types of informed searches:


(i) Best first search (Greedy search)
(ii) A* search

UNINFORMED SEARCH STRATEGIES


Uninformed search is a class of general-purpose search algorithms which operate in a brute-force
way. Uninformed search algorithms have no additional information about states or the search space other
than how to traverse the tree, so this is also called blind search.
1. BREADTH-FIRST SEARCH
 Breadth-first search is the most common search strategy for traversing a tree or graph. This
algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
 It is a simple search strategy where the root node is expanded first, then all the successors of the
root node, then the next level of nodes, and the search continues until the goal node is found.
 Breadth-first search is implemented using a FIFO queue data structure.
 BFS expands the shallowest (i.e., least deep) node first, in FIFO (first in, first out) order. Thus,
new nodes (i.e., children of a parent node) wait at the back of the queue, and old unexpanded
nodes, which are shallower than the new nodes, get expanded first.
 In BFS, the goal test (a test to check whether the current state is a goal state or not) is applied to
each node at the time of its generation rather than when it is selected for expansion.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 BFS expands the nodes level by level, i.e., breadthwise; therefore it is also known as a level-
search technique.

The root node is expanded first, then all the successors of the root node are expanded next, then
their successors, and so on.

BFS Algorithm:
 Set a variable NODE to the initial state, i.e., the root node.
 Set a variable GOAL which contains the value of the goal state.
 Loop each node by traversing level by level until the goal state is not found.
 While performing the looping, start removing the elements from the queue in FIFO order.
 If the goal state is found, return goal state otherwise continue the search.
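
The steps above translate directly into code. A minimal sketch, assuming the Problem interface from
earlier in these notes; for simplicity the frontier holds whole paths rather than node objects:

from collections import deque

def breadth_first_search(problem):
    """BFS: FIFO frontier; goal test applied when a node is generated."""
    if problem.is_goal(problem.initial):
        return [problem.initial]
    frontier = deque([[problem.initial]])  # queue of paths, popped FIFO
    reached = {problem.initial}            # states generated so far
    while frontier:
        path = frontier.popleft()
        for action in problem.actions(path[-1]):
            child = problem.result(path[-1], action)
            if child not in reached:
                if problem.is_goal(child):   # test at generation time
                    return path + [child]
                reached.add(child)
                frontier.append(path + [child])
    return None                            # no solution exists

# On the Romania fragment this finds the shallowest route (not the
# cheapest): ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'].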

Example:
In the tree structure below, we show the traversal of a tree using the BFS algorithm from
the root node S to the goal node K. The BFS algorithm traverses in layers, so it follows the path
shown by the dotted arrow, and the traversed path will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
An exponential complexity bound such as O(b^d) is scary. Figure 3.13 shows why. It lists, for
various values of the solution depth d, the time and memory required for a breadth-first search with
branching factor b = 10. The table assumes that 1 million nodes can be generated per second and that a
node requires 1000 bytes of storage. Many search problems fit roughly within these assumptions (give or
take a factor of 100) when run on a modern personal computer.

The performance measure of BFS is as follows:


Completeness: It is a complete strategy, as it is guaranteed to find the goal state (if one exists).
Optimality: It gives an optimal solution if the cost of each step is the same.
Space Complexity: The space complexity of BFS is O(b^d), i.e., it requires a huge amount of
memory. Here, b is the branching factor and d denotes the depth/level of the tree.
Time Complexity: The time complexity is likewise O(b^d), so BFS consumes much time to reach the
goal node for large instances.

Advantages:
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal
solution, the one which requires the least number of steps.
Disadvantages of BFS
 The biggest disadvantage of BFS is that it requires a lot of memory space; it is therefore a
memory-bounded strategy.
 BFS is a time-consuming search strategy because it expands the nodes breadthwise.

2. UNIFORM-COST SEARCH
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This
algorithm comes into play when a different cost is available for each edge.
The primary goal of the uniform-cost search is to find a path to the goal node which has the lowest
cumulative cost.
A uniform-cost search algorithm is implemented using a priority queue. It gives maximum priority to
the lowest cumulative cost. Uniform cost search is equivalent to BFS algorithm if the path cost of all
edges is the same.
This search explores nodes based on their path cost from the root node.
It expands a node n having the lowest path cost g(n), where g(n) is the total cost from a root node to
node n.
Uniform-cost search is significantly different from the breadth-first search because of the following
two reasons:
 First, the goal test is applied to a node only when it is selected for expansion, not when it is first
generated, because the first goal node which is generated may be on a suboptimal path.
 Second, a test is added in case a better path is found to a node currently on the frontier.
Thus, uniform-cost search expands nodes in order of their optimal path cost, because before
exploring any node it searches for the optimal path. Also, since step costs are positive, paths never get
shorter when a new node is added in the search.

The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99, respectively. The
least-cost node, Rimnicu Vilcea, is expanded next, adding Pitesti with cost 80 + 97 = 177. The least-cost
node is now Fagaras, so it is expanded, adding Bucharest with cost 99 + 211 = 310.
Uniform-cost search Algorithm
 Set a variable NODE to the initial state, i.e., the root node and expand it.
 After expanding the root node, select one node having the lowest path cost and expand it further.
Remember, the selection of the node should give an optimal path cost.
 If the goal node is reached with the optimal value, return the goal state; else carry on the search
(a code sketch follows below).
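
In code, uniform-cost search is BFS with the FIFO queue replaced by a priority queue ordered on g(n).
A minimal sketch, again assuming the Problem interface from earlier:

import heapq

def uniform_cost_search(problem):
    """Expand the frontier node with the lowest path cost g(n);
    goal test applied only when a node is selected for expansion."""
    frontier = [(0, problem.initial, [problem.initial])]  # (g, state, path)
    best_g = {problem.initial: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                       # stale entry; a cheaper path won
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.action_cost(state, action, child)
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2         # better path to child found
                heapq.heappush(frontier, (g2, child, path + [child]))
    return None

# On the Romania fragment this returns the cheaper southern route:
# (418, ['Arad', 'Sibiu', 'RimnicuVilcea', 'Pitesti', 'Bucharest']).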
Example:

In the figure above, the goal state is F and the start/initial state is A. There are three
paths available to reach the goal node. We need to select the optimal path, the one with the lowest total
cost g(n). Here, A->B->E->F gives the optimal path cost, i.e., 0 + 1 + 3 + 4 = 8.

The performance measure of Uniform-cost search


Completeness: It guarantees to reach the goal state.
Optimality: It gives optimal path cost solution for the search.
Space and time complexity: The worst-case space and time complexity of the uniform-cost search
is O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the smallest action cost.
Note: When the path cost is the same for all edges, it behaves similarly to BFS.
Advantages:
 Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages of Uniform-cost search:
 It does not care about the number of steps a path has taken, only about the total path cost.
 It may get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions.
 It works hard, as it examines each node in search of the lowest-cost path.

3. DEPTH-FIRST SEARCH
This search strategy explores the deepest node first, then backtracks to explore other nodes.
It uses LIFO (last in, first out) order, based on a stack, in order to expand the unexpanded
nodes in the search tree. The search proceeds to the deepest level of the tree, where a node has no
successors. In the worst case, this search expands nodes down to the maximum depth of the tree.
DFS Algorithm:
 Set a variable NODE to the initial state, i.e., the root node.
 Set a variable GOAL which contains the value of the goal state.
 Loop over the nodes, traversing deeply in one direction/path in search of the goal node.
 While looping, remove the elements from the stack in LIFO order.
 If the goal state is found, return the goal state; otherwise backtrack to expand nodes in another
direction (a recursive sketch follows).
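
A recursive sketch of these steps, assuming the Problem interface from earlier; the `visited` set
guards against cycles when searching a graph (a pure tree search would omit it):

def depth_first_search(problem, state=None, visited=None):
    """Plunge down one path; backtrack when it dead-ends."""
    if state is None:
        state, visited = problem.initial, set()
    if problem.is_goal(state):
        return [state]
    visited.add(state)
    for action in problem.actions(state):
        child = problem.result(state, action)
        if child not in visited:
            rest = depth_first_search(problem, child, visited)
            if rest is not None:
                return [state] + rest      # success: prepend and unwind
    return None                            # dead end: backtrack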
DFS SEARCH TREE
 The process of the DFS algorithm is similar to the BFS algorithm, with a stack in place of the
queue.
In the figure above, DFS works starting from the initial node A (the root node) and traversing in one
direction deeply till node I, then backtracking to B, and so on. Therefore, the sequence will be
A->B->D->I->E->C->F->G.

Performance measure of DFS:


Completeness: DFS is not guaranteed to reach the goal state.
Optimality: It does not give an optimal solution, as it expands nodes in one direction deeply.
Space complexity: It needs to store only a single path from the root node to the leaf node. Therefore,
DFS has O(bm) space complexity, where b is the branching factor (i.e., the number of children of a
node) and m is the maximum length of any path.
Time complexity: DFS has O(b^m) time complexity.
Advantage:
 DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the
root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it happens to traverse the right
path).
Disadvantages of DFS
 It may get trapped in an infinite loop.
 It is also possible that it may not reach the goal state.
 DFS does not give an optimal solution.
Note: DFS uses the concept of backtracking to explore each node in a search tree.

4. DEPTH-LIMITED SEARCH
A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-
limited search can overcome the drawback of infinite paths in depth-first search. In this algorithm, a
node at the depth limit is treated as if it has no successor nodes.
This search strategy is similar to DFS with one difference: in depth-limited
search, we limit the search by imposing a depth limit l on the depth of the search tree. It does not need
to explore to infinity. As a result, depth-first search is a special case of depth-limited search with the
limit l = ∞.
Depth-limited search can be terminated with two Conditions of failure:
o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.

In the tree above, the depth limit is 1. So, only levels 0 and 1 get expanded, in the A->B->C DFS
sequence, starting from the root node A up to node B. The result is unsatisfactory because we could
not reach the goal node I.
Depth-limited search Algorithm:
 Set a variable NODE to the initial state, i.e., the root node.
 Set a variable GOAL which contains the value of the goal state.
 Set a variable LIMIT which carries a depth-limit value.
 Loop each node by traversing in DFS manner till the depth-limit value.
 While performing the looping, start removing the elements from the stack in LIFO order.
 If the goal state is found, return the goal state; else terminate the search (a code sketch follows
below).
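
A sketch distinguishing the two failure values described above; returning the string 'cutoff' versus
None is an illustrative encoding choice:

def depth_limited_search(problem, limit, state=None, depth=0):
    """DFS that treats nodes at depth `limit` as having no successors.
    Revisits are possible, but the depth limit keeps recursion finite."""
    if state is None:
        state = problem.initial
    if problem.is_goal(state):
        return [state]
    if depth == limit:
        return 'cutoff'                    # cutoff failure value
    cutoff_seen = False
    for action in problem.actions(state):
        child = problem.result(state, action)
        rest = depth_limited_search(problem, limit, child, depth + 1)
        if rest == 'cutoff':
            cutoff_seen = True
        elif rest is not None:
            return [state] + rest
    return 'cutoff' if cutoff_seen else None   # None: standard failure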
Performance measure of Depth-limited search:
Completeness: Depth-limited search is not guaranteed to reach the goal node.
Optimality: It does not give an optimal solution, as it only expands nodes down to the depth limit.
Space Complexity: The space complexity of the depth-limited search is O(bl).
Time Complexity: The time complexity of the depth-limited search is O(b^l).
Advantages:
Depth-limited search is memory-efficient.
Disadvantages of Depth-limited search:
 This search strategy is not complete.
 It does not provide an optimal solution.
Note: Depth-limited search terminates with two kinds of failures: the standard failure value indicates “no
solution,” and the cutoff value indicates “no solution within the depth limit.”

5. ITERATIVE DEEPENING DEPTH-FIRST SEARCH / ITERATIVE DEEPENING SEARCH:


 Iterative deepening combines BFS and DFS: BFS guarantees to reach the goal node, while DFS
occupies less memory space.
 Iterative deepening search combines these two advantages of BFS and DFS to reach the goal
node.
 It gradually increases the depth limit from 0, 1, 2, and so on until the goal node is reached.

In the figure above, the goal node is H and the initial depth limit is 1. So, the search expands levels 0
and 1 and terminates with the A->B->C sequence.
Increasing the depth limit to 3, it again expands the nodes from level 0 to level 3, and the
search terminates with the A->B->D->F->E->H sequence, where H is the desired goal node.
Iterative Deepening Search Algorithm:
 Explore the nodes in DFS order.
 Set a LIMIT variable with a limit value.
 Loop each node up to the limit value and further increase the limit value accordingly.
 Terminate the search when the goal state is found (a code sketch follows below).
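
Iterative deepening is then a loop around the depth-limited sketch given earlier, raising the limit
until the cutoff stops occurring; max_depth is an illustrative safety bound:

def iterative_deepening_search(problem, max_depth=50):
    """Run DLS with limit 0, 1, 2, ... until a result is conclusive."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, limit)
        if result != 'cutoff':
            return result                  # a solution, or None (no solution)
    return None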
Performance measure of Iterative deepening search:
Completeness: Iterative deepening search is complete when the branching factor is finite.
Optimality: It gives an optimal solution when all step costs are identical.
Space Complexity: Like DFS, its space complexity is O(bd).
Time Complexity: Its time complexity is O(b^d).
Advantage:
It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory
efficiency.
Disadvantages of Iterative deepening search:
The drawback of iterative deepening search is that it seems wasteful because it generates states
multiple times.

6. BIDIRECTIONAL SEARCH
The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called
the forward search, and the other from the goal node, called the backward search, hoping that the two
searches meet in the middle.
As soon as the two searches intersect one another, the bidirectional search terminates with the
goal node.

Example:
In the search tree below, the bidirectional search algorithm is applied. This algorithm divides one
graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal
node 16 in the backward direction.

The algorithm terminates at node 9, where the two searches meet.


Performance measure of Bidirectional search:
 Completeness: Bidirectional search is complete if we use BFS in both searches.
 Optimal: It gives an optimal solution.
 Time and space complexity: Bidirectional search has O(b^(d/2)) time and space complexity.

Advantages:
 Bidirectional search is fast.
 Bidirectional search requires less memory
Disadvantages:
 Implementation of the bidirectional search tree is difficult.
 In bidirectional search, one should know the goal state in advance.
 Storing at least one of the two frontiers to check for intersection still requires a lot of memory
space.

INFORMED (HEURISTIC) SEARCH STRATEGIES


Informed search methods have access to a heuristic function h(n) that estimates the cost of a
solution from n. They may have access to additional information such as pattern databases with solution
costs.
h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
For example, in route-finding problems, we can estimate the distance from the current state to a goal by
computing the straight-line distance on the map between the two points.
Best-first Search (Greedy search) / Greedy best-first search
A best-first search is a general approach to informed search. Here, a node is selected for
expansion based on an evaluation function f(n), which expresses a cost estimate; the node with the
lowest estimated cost is expanded first.
A component of f(n) is h(n), which carries the additional information required for the search
algorithm, i.e.,
h(n) = estimated cost of the cheapest path from the current node n to the goal node.
Note: If the current node n is a goal node, the value of h(n) will be 0.
Best-first search is known as greedy search because it always tries to expand the node which is
nearest to the goal node, selecting the path which promises a quick solution. Thus, it evaluates nodes
with the help of the heuristic function alone, i.e., f(n) = h(n).
Example:
Finding a route from Arad to Bucharest in Romania using the straight-line distance (SLD).
• The heuristic function hSLD is used here.
Example: hSLD(In(Arad)) = 366
Problem: Find the shortest route from Arad to Bucharest.

Solution:
1. The first node to be expanded is Sibiu, since it is closer to Bucharest (by SLD) than Timisoara and
Zerind.
2. Then we expand Fagaras, which is closest to the goal node.
3. Expanding Fagaras generates Bucharest, the goal.
4. Since SLD is used, the search cost is minimal.
Hint: Expand the node that is nearest to the goal node.
 This is not an optimal solution, since the goal can be reached with less cost through Rimnicu Vilcea
and Pitesti.
• Greedy best-first tree search (GBFTS) is incomplete even in a finite state space.
Example: Find the shortest route from Iasi to Fagaras.
 The heuristic suggests that Neamt should be expanded first, but it is a dead end.
 The solution is to go through Vaslui – Urziceni – Bucharest – Fagaras.
 GBFTS won't find a solution for this problem; it sends Iasi into an infinite loop (Iasi to Neamt and
back).

Greedy best-first graph search is complete in finite state spaces, but not in infinite ones. The
worst-case time and space complexity is O(|V|). With a good heuristic function, however, the complexity
can be reduced substantially, on certain problems reaching O(bm).

A* Search:
A* search is the most widely used informed search algorithm where a node n is evaluated by
combining values of the functions g(n)and h(n). The function g(n) is the path cost from the start/initial
node to a node n and h(n) is the estimated cost of the shortest path from node n to the goal node.
Therefore, we have
f(n)=g(n)+h(n)
where f(n) = estimated cost of the best path that continues from n to a goal.
So, in order to find the cheapest solution, try to find the lowest values of f(n).
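
A minimal sketch of A*, assuming the Problem interface from earlier in these notes; `h` is passed in
as a function from state to estimated cost-to-goal:

import heapq

def astar_search(problem, h):
    """Best-first search ordered by f(n) = g(n) + h(n)."""
    start = problem.initial
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return g, path
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.action_cost(state, action, child)
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), g2, child, path + [child]))
    return None

# With the straight-line-distance heuristic (hSLD(Arad) = 366, etc.)
# this finds the 418-mile Arad -> Bucharest route.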
Conditions for optimality:
• h(n) should be an admissible heuristic that never overestimates the cost to reach the goal.
• Admissible heuristics are optimistic: they think that the cost of solving the problem is less than
it actually is.
• Example problem: the route to reach Bucharest from Arad using hSLD.
A* search is complete. Whether A* is cost-optimal depends on certain properties of the heuristic.
A key property is admissibility: an admissible heuristic is one that never overestimates the cost to reach
a goal. (An admissible heuristic is therefore optimistic.) With an admissible heuristic, A* is cost-optimal,
which we can show with a proof by contradiction. Suppose the optimal path has cost C*, but the
algorithm returns a path with cost C > C*. Then there must be some node n which is on the optimal path
and is unexpanded (because if all the nodes on the optimal path had been expanded, then we would have
returned that optimal solution). So then, using the notation g*(n) to mean the cost of the optimal path
from the start to n and h*(n) to mean the cost of the optimal path from n to the nearest goal, we have:

f(n) > C* (otherwise n would have been expanded)

f(n) = g(n) + h(n) (by definition)
f(n) = g*(n) + h(n) (because n is on an optimal path)
f(n) ≤ g*(n) + h*(n) (because of admissibility, h(n) ≤ h*(n))
f(n) ≤ C* (by definition, C* = g*(n) + h*(n))

The first and last lines form a contradiction, so the supposition that the algorithm could return a
suboptimal path must be wrong—it must be that A* returns only cost-optimal paths.
A slightly stronger property is called consistency. A heuristic h(n) is consistent if, for every node n
and every successor n′ of n generated by an action a, we have:
h(n) ≤ c(n, a, n′) + h(n′).
This is a form of the triangle inequality, which stipulates that a side of a triangle cannot be longer
than the sum of the other two sides (see Figure). An example of a consistent heuristic is the straight-line
distance hSLD that we used in getting to Bucharest.

Every consistent heuristic is admissible (but not vice versa), so with a consistent heuristic, A* is
cost-optimal. In addition, with a consistent heuristic, the first time we reach a state it will be on an
optimal path, so we never have to re-add a state to the frontier, and never have to change an entry in
reached. But with an inconsistent heuristic, we may end up with multiple paths reaching the same state,
and if each new path has a lower path cost than the previous one, then we will end up with multiple nodes
for that state in the frontier, costing us both time and space. Because of that, some implementations of A*
take care to only enter a state into the frontier once, and if a better path to the state is found, all the
successors of the state are updated (which requires that nodes have child pointers as well as parent
pointers).
We say that A* with a consistent heuristic is optimally efficient in the sense that any algorithm
that extends search paths from the initial state, and uses the same heuristic information, must expand all
nodes that are surely expanded by A* (because any one of them could have been part of an optimal
solution).
A* is efficient because it prunes away search tree nodes that are not necessary for finding an
optimal solution.

Satisficing search: Inadmissible heuristics and weighted A*


A* search has many good qualities, but it expands a lot of nodes. We can explore fewer nodes
(taking less time and space) if we are willing to accept solutions that are suboptimal, but are “good
enough”—what we call satisficing solutions. If we allow A* search to use an inadmissible
heuristic—one that may overestimate—then we risk missing the optimal solution, but the heuristic can
potentially be more accurate, thereby reducing the number of nodes expanded. For example, road
engineers know the concept of a detour index, which is a multiplier applied to the straight-line distance
to account for the typical curvature of roads. A detour index of 1.3 means that if two cities are 10 miles
apart in straight-line distance, a good estimate of the best path between them is 13 miles. For most
localities, the detour index ranges between 1.2 and 1.6.
We can apply this idea to any problem, not just ones involving roads, with an approach called
weighted A* search where we weight the heuristic value more heavily, giving us the evaluation function
f(n) = g(n) +W × h(n), for some W > 1.
We have considered searches that evaluate states by combining g and h in various ways; weighted A*
can be seen as a generalization of the others:
A* search: g(n) + h(n) (W = 1)
Uniform-cost search: g(n) (W = 0)
Greedy best-first search: h(n) (W = ∞)
Weighted A* search: g(n) + W × h(n) (1 < W < ∞)
You could call weighted A* “somewhat-greedy search”: like greedy best-first search, it focuses
the search towards a goal; on the other hand, it won’t ignore the path cost completely, and will suspend a
path that is making little progress at great cost.
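
Since weighted A* only changes the evaluation function, it can reuse the A* sketch given earlier by
inflating the heuristic. (With an admissible h, the returned solution is known to cost at most W times
the optimum.)

def weighted_astar_search(problem, h, W=1.5):
    """A* with f(n) = g(n) + W*h(n): greedier, expands fewer nodes."""
    return astar_search(problem, lambda s: W * h(s))

# W = 1 gives plain A*; W = 0 gives uniform-cost behavior; a very
# large W approaches greedy best-first search.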

Memory-bounded search
The main issue with A* is its use of memory. In this section we’ll cover some implementation
tricks that save space, and then some entirely new algorithms that take better advantage of the available
space.
Memory is split between the frontier and the reached states. In our implementation of best first
search, a state that is on the frontier is stored in two places: as a node in the frontier (so we can decide
what to expand next) and as an entry in the table of reached states (so we know if we have visited the state
before). For many problems (such as exploring a grid), this duplication is not a concern, because the size
of frontier is much smaller than reached, so duplicating the states in the frontier requires a comparatively
trivial amount of memory. But some implementations keep a state in only one of the two places, saving a
bit of space at the cost of complicating (and perhaps slowing down) the algorithm. Another possibility is
to remove states from reached when we can prove that they are no longer needed.
For other problems, we can keep reference counts of the number of times a state has been
reached, and remove it from the reached table when there are no more ways to reach the state. For
example, on a grid world where each state can be reached only from its four neighbors, once we have
reached a state four times, we can remove it from the table.
Now let’s consider new algorithms that are designed to conserve memory usage.
Beam search
Beam search limits the size of the frontier. The easiest approach is to keep only the k nodes with the
best f-scores, discarding any other expanded nodes. This of course makes the search incomplete and
suboptimal, but we can choose to make good use of available memory, and the algorithm executes fast
because it expands fewer nodes. For many problems it can find good near-optimal solutions. You can
think of uniform-cost or A* search as spreading out everywhere in concentric contours, and think of
beam search as exploring only a focused portion of those contours, the portion that contains the best
candidates.
An alternative version of beam search doesn't keep a strict limit on the size of the frontier but
instead keeps every node whose f-score is within δ of the best f-score. That way, when there are a few
strong-scoring nodes only a few will be kept, but if there are no strong nodes then more will be kept until
a strong one emerges.
Iterative-deepening A* search (IDA*)
• Just as IDDFS uses depth as the cutoff value, IDA* uses the f-cost (g + h) as the cutoff rather than depth.
• It reduces the memory requirements incurred by A*, thereby putting a bound on memory; hence it is
called a memory-bounded algorithm.
• IDA* suffers when the problem has real-valued costs, since each iteration may raise the cutoff only slightly
and add few new nodes.
Two memory-bounded algorithms:
1) Recursive Best First Search
2) MA* (Memory Bounded A*)
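Before turning to those, here is a compact Python sketch of IDA* itself (illustrative; successors, h, and is_goal are assumed problem-supplied callbacks, not names from the notes):

def ida_star(start, is_goal, successors, h):
    # Depth-first search bounded by an f-cost cutoff; the cutoff for the next
    # iteration is the smallest f-value that exceeded the current one.
    def dfs(path, g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f, None                  # overshoot: report f for the next bound
        if is_goal(state):
            return f, list(path)
        smallest = float('inf')
        for nxt, cost in successors(state):
            if nxt not in path:             # avoid cycles on the current path
                path.append(nxt)
                t, found = dfs(path, g + cost, bound)
                path.pop()
                if found is not None:
                    return t, found
                smallest = min(smallest, t)
        return smallest, None

    bound = h(start)
    while True:
        t, found = dfs([start], 0, bound)
        if found is not None:
            return found
        if t == float('inf'):
            return None                     # no solution exists
        bound = t

With real-valued costs each iteration may raise the bound only slightly, which is the weakness noted above.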

Recursive best-first search (RBFS)


• Works like best-first search.
• Its structure is similar to recursive DFS, but instead of continuing indefinitely down the current path
it keeps track of the f-value of the best alternative path available from any ancestor of the current
node.
• The recursion unwinds back to the alternative path if the current node exceeds this limit.
• An important property of RBFS is that it remembers the f-value of the best leaf in the forgotten
subtree (previously left unexpanded), i.e., it can decide whether re-expanding the
subtree is worthwhile.
• It is more reliable and cost-effective than IDA*, but its critical problem is excessive node generation.
• RBFS suffers from the problem of expanding repeated states, as the algorithm fails to detect them.
Memory Bounded A* (MA*) / SMA* ( Simplified Memory Bounded A*)
• RBFS uses too little memory: even when more memory is available, it cannot exploit it. To overcome this problem, MA* was devised.
• SMA* is a shortest path algorithm that is based on the A* algorithm. The difference between
SMA* and A* is that SMA* uses a bounded memory, while the A* algorithm might need
exponential memory.
• f(n) = g(n) + h(n).
• The lower the f value is, the higher priority the node will have.
• It expands the best leaf until memory is full.
• At this point it cannot add a new node to the search tree without dropping an old one.
• It always drops the node with the highest f-value.
• If the goal is not reached, it backtracks and tries the alternative path.
• While selecting a node for expansion, it may happen that two nodes have the same f-value.
• SMA* generates a new best node and a new worst node for expansion and deletion respectively.
Bidirectional heuristic search
With unidirectional best-first search, we saw that using f(n) = g(n) + h(n) as the evaluation
function gives us an A* search that is guaranteed to find optimal-cost solutions (assuming an admissible heuristic)
while being optimally efficient in the number of nodes expanded.
With bidirectional best-first search we could also try using f(n) = g(n) + h(n), but unfortunately
there is no guarantee that this would lead to an optimal-cost solution, nor that it would be optimally
efficient, even with an admissible heuristic. With bidirectional search, it turns out that it is not individual
nodes but rather pairs of nodes (one from each frontier) that can be proved to be surely expanded, so any
proof of efficiency will have to consider pairs of nodes.

LOCAL SEARCH AND OPTIMIZATION PROBLEMS


Local search algorithms operate by searching from a start state to neighboring states, without keeping
track of the paths, nor the set of states that have been reached. That means they are not systematic—they
might never explore a portion of the search space where a solution actually resides.
• In “local search algorithms” the path cost does not matter; the focus is only on the solution state
needed to reach the goal.
• Local search algorithms operate using a single current node (rather than multiple paths) and
generally move only to neighbors of that node.
Key advantages:
(1) they use very little memory - usually a constant amount;
(2) they can often find reasonable solutions in large or infinite (continuous) state spaces.
Local search algorithms can also solve pure optimization problems, in which the aim is to find the
best state according to an objective function. (An objective function is a function whose value is either
minimized or maximized, depending on the context of the optimization problem.)
To understand local search, consider the states of a problem laid out in a state-space landscape,
as shown in Figure. Each point (state) in the landscape has an “elevation,” defined by the value of the
objective function. If elevation corresponds to an objective function, then the aim is to find the highest
peak—a global maximum—and we call the process hill climbing. If elevation corresponds to cost, then
the aim is to find the lowest valley—a global minimum—and we call it gradient descent.
Different types of local searches
1. Hill climbing search
2. Simulated annealing
3. Local beam search
4. Genetic algorithm
Hill climbing search Algorithm (does not maintain a search tree)
• Hill climbing search is a local search problem. The purpose of the hill climbing search is to climb
a hill and reach the topmost peak/ point of that hill. It is based on the heuristic search
technique where the person who is climbing up on the hill estimates the direction which will lead
him to the highest peak.
• Simply a loop that continually moves in the direction of increasing value – Uphill.
• It terminates when it reaches a “peak” where no neighbor has a higher value.
• Does not look ahead beyond the immediate neighbors of the current state.
• 8-queens problem - Local search algorithms typically use a complete-state formulation. The
successors of a state are all possible states generated by moving a single queen to another square
in the same column (so each state has 8 × 7 = 56 successors).
• The heuristic cost function h is the number of pairs of queens that are attacking each other, either
directly or indirectly.
• The global minimum of this function is zero, which occurs only at perfect solutions.
• Hill-climbing algorithms typically choose randomly among the set of best successors if there is
more than one
To understand the concept of hill climbing algorithm, consider the below landscape representing
the goal state/peak and the current state of the climber. The topographical regions shown in the figure
can be defined as:
• Global Maximum: It is the highest point on the hill, which is the goal state.
• Local Maximum: It is the peak higher than all other peaks but lower than the global maximum.
• Flat local maximum: It is the flat area over the hill where it has no uphill or downhill. It is a
saturated point of the hill.
• Shoulder: It is also a flat area where the summit is possible.
• Current state: It is the current position of the person.

Problems in Hill Climbing Algorithm:


1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its
neighboring states, but there is another state present which is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local-maximum problem. Create a list of
promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current
state have the same value; because of this, the algorithm cannot find a best direction in which to move. A
hill-climbing search might get lost in the plateau area.
Solution: Take big steps or very small steps while searching. Randomly select a state far away from the
current state so that the algorithm may find a non-plateau region.

3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher than its
surrounding areas, but itself has a slope, and cannot be reached in a single move.
• Solution: With the use of bidirectional search, or by moving in different directions, we can
improve this problem.

It is an iterative algorithm that starts with an arbitrary solution to a problem and attempts to find a
better solution by changing a single element of the solution incrementally. If the change produces a better
solution, an incremental change is taken as a new solution. This process is repeated until there are no
further improvements.
function Hill-Climbing(problem) returns a state that is a local maximum
    inputs: problem, a problem
    local variables: current, a node
                     neighbor, a node
    current <- Make-Node(Initial-State[problem])
    loop do
        neighbor <- a highest-valued successor of current
        if Value[neighbor] ≤ Value[current] then return State[current]
        current <- neighbor
    end
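A runnable Python version of the same loop (a sketch; neighbors(state) and value(state) are assumed problem-supplied callbacks):

def hill_climbing(initial, neighbors, value):
    # Steepest-ascent hill climbing: move to the best neighbor until no
    # neighbor is better than the current state (a local maximum).
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current
        current = best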
Types of hill climbing:
1. Stochastic hill climbing chooses at random from among the uphill moves; the probability of
selection can vary with the steepness of the uphill move. It usually converges more slowly, but in some landscapes it finds better solutions.
2. First-choice hill climbing implements stochastic hill climbing by generating successors
randomly until one is generated that is better than the current state. This is a good strategy when a
state has many successors.
3. Random-restart hill climbing - “If at first you don’t succeed, try, try again.” It conducts a series
of hill-climbing searches from randomly generated initial states, until a goal is found.
4. Steepest ascent hill climbing: differs from basic hill climbing algorithm by choosing best
successor rather than the first successor that is better.

Simulated annealing:
• A hill-climbing algorithm that never makes “downhill” moves toward states with lower value is
guaranteed to be incomplete.
• In contrast, a purely random walk—that is, moving to a successor chosen uniformly at random
from the set of successors - is complete but extremely inefficient.
• Therefore, it seems reasonable to try to combine hill climbing with a random walk in some way
that yields both efficiency and completeness.
• Simulated annealing is such an algorithm.

The simulated annealing algorithm is a version of stochastic hill climbing in which some downhill
moves are allowed. A schedule input determines the value of the “temperature” T as a function
of time.
• In metallurgy, ANNEALING is the process used to temper or harden metals and
glass by heating them to a high temperature and then gradually cooling them, thus allowing the
material to reach a low-energy crystalline state.
• The innermost loop of the simulated-annealing algorithm is quite similar to hill climbing. Instead
of picking the best move, however, it picks a random move. If the move improves the situation, it
is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1.
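A minimal Python sketch of this inner loop (illustrative; neighbors, value, and the cooling schedule are assumptions, not part of the notes):

import math, random

def simulated_annealing(initial, neighbors, value, schedule):
    # Pick a random move; always accept uphill moves, and accept downhill
    # moves with probability e^(delta_e / T), which shrinks as T cools.
    current = initial
    for t in range(1, 10**6):
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))
        delta_e = value(nxt) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
    return current

# One possible schedule (an assumption): exponential cooling.
schedule = lambda t: 100 * (0.95 ** t)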
Local beam search:
• Keeping just one node in memory might seem to be an extreme reaction to the problem of
memory limitations.
• Keeps track of k states rather than just one. It begins with k randomly generated states.
• At each step, all the successors of all k states are generated. If any one is a goal, the algorithm
halts. Otherwise, it selects the k best successors from the complete list and repeats.
• In a local beam search, useful information is passed among the parallel search threads.
• The algorithm quickly abandons unfruitful searches and moves its resources to where the most
progress is being made.
• Local beam search can suffer from a lack of diversity among the k states, making the search
little more than an expensive version of hill climbing.
• Stochastic beam search chooses k successors at random, with the probability of choosing a given
successor being an increasing function of its value.
• In this algorithm, it holds k number of states at any given time. At the start, these states are
generated randomly. The successors of these k states are computed with the help of objective
function. If any of these successors is the maximum value of the objective function, then the
algorithm stops.
• Otherwise the (initial k states and k number of successors of the states = 2k) states are placed in a
pool. The pool is then sorted numerically. The highest k states are selected as new initial states.
This process continues until a maximum value is reached.

function Beam-Search(problem, k) returns a solution state
    start with k randomly generated states
    loop do
        generate all successors of all k states
        if any of the states is a solution then return that state
        else select the k best successors
    end
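The same procedure as a Python sketch (illustrative; random_state, successors, value, and is_goal are assumed callbacks):

def local_beam_search(k, random_state, successors, value, is_goal):
    # Keep only the k best states drawn from the pooled successors.
    states = [random_state() for _ in range(k)]
    while True:
        pool = [s for st in states for s in successors(st)]
        for s in pool:
            if is_goal(s):
                return s
        if not pool:
            return max(states, key=value)   # dead end: return the best so far
        states = sorted(pool, key=value, reverse=True)[:k]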

Evolutionary algorithms
Evolutionary algorithms can be seen as variants of stochastic beam search that are explicitly
motivated by the metaphor of natural selection in biology: there is a population of individuals (states), in
which the fittest (highest value) individuals produce offspring (successor states) that populate the next
generation, a process called recombination. There are endless forms of evolutionary algorithms, varying
in the following ways:
 The size of the population.
 The representation of each individual. In genetic algorithms, each individual is a string over a
finite alphabet (often a Boolean string), just as DNA is a string over the alphabet ACGT. In
evolution strategies, an individual is a sequence of real numbers, and in genetic programming
an individual is a computer program.
 The mixing number, ρ, which is the number of parents that come together to form offspring. The
most common case is ρ = 2: two parents combine their “genes” (parts of their representation) to
form offspring. When ρ = 1 we have stochastic beam search (which can be seen as asexual
reproduction). It is possible to have ρ > 2, which occurs only rarely in nature but is easy enough to
simulate on computers.
 The selection process for selecting the individuals who will become the parents of the next
generation: one possibility is to select from all individuals with probability proportional to their
fitness score. Another possibility is to randomly select n individuals (n > ρ), and then select the ρ most fit
ones as parents.
 The recombination procedure. One common approach (assuming ρ =2 ), is to randomly select a
crossover point to split each of the parent strings, and recombine the parts to form two children,
one with the first part of parent 1 and the second part of parent 2; the other with the second part of
parent 1 and the first part of parent 2.
 The mutation rate, which determines how often offspring have random mutations to their
representation. Once an offspring has been generated, every bit in its composition is flipped with
probability equal to the mutation rate.
 The makeup of the next generation. This can be just the newly formed offspring, or it can include
a few top-scoring parents from the previous generation (a practice called elitism, which
guarantees that overall fitness will never decrease over time). The practice of culling, in which all
individuals below a given threshold are discarded, can lead to a speedup.

Figure 4.6(a) shows a population of four 8-digit strings, each representing a state of the 8-queens
puzzle: the i-th digit represents the row number of the queen in column i. In (b), each state is rated by the
fitness function. Higher fitness values are better, so for the 8- queens problem we use the number of
nonattacking pairs of queens, which has a value of 8 × 7/2 = 28 for a solution. The values of the four
states in (b) are 24, 23, 20, and 11. The fitness scores are then normalized to probabilities, and the
resulting values are shown next to the fitness values in (b).
In (c), two pairs of parents are selected, in accordance with the probabilities in (b). Notice that one
individual is selected twice and one not at all. For each selected pair, a crossover point (dotted line) is
chosen randomly. In (d), we cross over the parent strings at the crossover points, yielding new offspring.
For example, the first child of the first pair gets the first three digits (327) from the first parent and the
remaining digits (48552) from the second parent. The 8-queens states involved in this recombination step
are shown in Figure 4.7.

Finally, in (e), each location in each string is subject to random mutation with a small independent
probability. One digit was mutated in the first, third, and fourth offspring. In the 8-queens problem, this
corresponds to choosing a queen at random and moving it to a random square in its column. It is often the
case that the population is diverse early on in the process, so crossover frequently takes large steps in the
state space early in the search process (as in simulated annealing). After many generations of selection
towards higher fitness, the population becomes less diverse, and smaller steps are typical. Figure 4.8
describes an algorithm that implements all these steps.

Working of Genetic Algorithm:


Input: 1) a population (a set of individuals)
       2) a fitness function (that rates an individual)
Steps:
1. Select an individual ‘X’ (parent) at random, with probability proportional to its fitness.
2. Select a second individual ‘Y’ (parent) in the same way.
3. Create a child from the crossover of X and Y.
4. With small probability, apply the mutate operator to the child.
5. Add the child to the new population.
6. Repeat the above process until a child (an individual) is fit, as specified by the fitness function.
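The steps above can be sketched in Python as follows (illustrative; individuals are assumed to be strings or tuples, and mutate and the target fitness are assumptions - e.g., target = 28 non-attacking pairs for 8-queens):

import random

def genetic_algorithm(population, fitness, mutate, target, generations=1000):
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]     # fitness-proportional
        new_population = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)  # parents
            c = random.randrange(1, len(x))                # crossover point
            child = x[:c] + y[c:]                          # recombine the parts
            if random.random() < 0.05:                     # small mutation rate
                child = mutate(child)
            new_population.append(child)
        population = new_population
        best = max(population, key=fitness)
        if fitness(best) >= target:                        # fit enough: stop
            return best
    return max(population, key=fitness)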
ADVERSARIAL SEARCH
Game Playing
• Adversarial search problems
• Game theory views any multiagent environment as a game, provided that the impact of each agent
on the others is “significant” regardless of whether the agents are cooperative or competitive.
• Games, like the real world, therefore require the ability to make some decision even when
calculating the optimal decision is infeasible.
• Pruning allows us to ignore portions of the search tree that make no difference to the final choice

A game can be formally defined as a kind of search problem with the following elements (illustrated here with tic-tac-toe):
 Initial state – the empty squares
 Players – X and O
 Actions – the legal moves, e.g., X marks the square in the 3rd row, 2nd column
 Result – the outcome of each move, e.g., an X is placed in the 3rd row
 Terminal test – detects the final result of the game, e.g., O won
 Utility – a final numeric value for a game that ends in a terminal state, e.g., O gets 1, X gets 0
• The initial state, action function, and result function define the game tree for the game - a tree
where the nodes are game states and the edges are moves.
Mini-max algorithm
o Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making
and game theory. It provides an optimal move for the player assuming that opponent is also
playing optimally.
o Mini-Max algorithm uses recursion to search through the game-tree.
o Min-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe,
Go, and various two-player games. This algorithm computes the minimax decision for the current
state.
o In this algorithm two players play the game, one is called MAX and other is called MIN.
o The two players play against each other: each tries to get the maximum benefit while leaving the
opponent with the minimum benefit.
o Both Players of the game are opponent of each other, where MAX will select the maximized
value and MIN will select the minimized value.
o The minimax algorithm performs a depth-first search algorithm for the exploration of the
complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backs the
values up the tree as the recursion unwinds.
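The recursion can be written compactly in Python (a sketch; is_terminal, utility, moves, and result are assumed game-supplied functions, following the game elements listed earlier):

def minimax(state, is_terminal, utility, moves, result, maximizing=True):
    # Depth-first exploration of the game tree; values are backed up
    # as the recursion unwinds (MAX takes max, MIN takes min).
    if is_terminal(state):
        return utility(state)
    values = [minimax(result(state, m), is_terminal, utility, moves, result,
                      not maximizing)
              for m in moves(state)]
    return max(values) if maximizing else min(values)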
Working of Mini-max algorithm:
o The working of the minimax algorithm can be easily described using an example. Below we have
taken an example of game-tree which is representing the two-player game.
o In this example, there are two players one is called Maximizer and other is called Minimizer.
o Maximizer will try to get the Maximum possible score, and Minimizer will try to get the
minimum possible score.
o This algorithm applies DFS, so in this game-tree, we have to go all the way through the leaves to
reach the terminal nodes.
o At the terminal node, the terminal values are given so we will compare those value and backtrack
the tree until the initial state occurs. Following are the main steps involved in solving the two-
player game tree:

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get
the utility values for the terminal states. In the tree diagram below, let A be the initial state of the
tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer
takes the next turn, with worst-case initial value +∞.

Step 2: Now we find the utility values for the Maximizer. Its initial value is -∞, so we compare
each terminal value with the Maximizer's initial value and determine the higher node values. It
finds the maximum among them all.

o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step, it is the minimizer's turn, so it compares all node values with +∞ and
finds the 3rd-layer node values.
o For node B = min(4, 6) = 4
o For node C = min(-3, 7) = -3

Step 4: Now it is the Maximizer's turn, and it again chooses the maximum of all node values to find
the value for the root node. In this game tree there are only 4 layers, so we reach the root node
immediately, but in real games there will be more than 4 layers.

o For node A: max(4, -3) = 4


Limitation of the minimax Algorithm:

The main drawback of the minimax algorithm is that it gets really slow for complex games such
as Chess and Go. These games have a huge branching factor, and the player has many choices to
decide among. This limitation of the minimax algorithm can be improved by alpha-beta pruning, which is
discussed in the next topic.

Alpha – Beta Pruning


o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the minimax algorithm.
o As we saw with the minimax search algorithm, the number of game states it has to
examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can
effectively cut it in half. There is a technique by which we can compute the correct minimax decision
without checking each node of the game tree; this technique is called pruning. It involves
two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning.
It is also called the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree
leaves but entire sub-trees.
o The two-parameter can be defined as:
 Alpha: The best (highest-value) choice we have found so far at any point along the path
of Maximizer. The initial value of alpha is -∞.
 Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning applied to a standard minimax tree returns the same move as the standard
algorithm does, but it removes all the nodes that do not really affect the final decision yet
make the algorithm slow. Pruning these nodes makes the algorithm fast.
Condition for Alpha-beta pruning:
The main condition required for alpha-beta pruning is:
α >= β
Key points about alpha-beta pruning:
The Max player will only update the value of alpha.
The Min player will only update the value of beta.
While backtracking the tree, the node values will be passed to upper nodes instead of values of alpha and
beta.
We will only pass the alpha, beta values to the child nodes.
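These key points translate into the following Python sketch (illustrative; the game callbacks is_terminal, utility, moves, and result are assumptions, as in the minimax sketch above):

def alphabeta(state, alpha, beta, maximizing,
              is_terminal, utility, moves, result):
    if is_terminal(state):
        return utility(state)
    if maximizing:
        v = float('-inf')
        for m in moves(state):
            v = max(v, alphabeta(result(state, m), alpha, beta, False,
                                 is_terminal, utility, moves, result))
            alpha = max(alpha, v)       # only Max updates alpha
            if alpha >= beta:
                break                   # prune the remaining children
        return v
    else:
        v = float('inf')
        for m in moves(state):
            v = min(v, alphabeta(result(state, m), alpha, beta, True,
                                 is_terminal, utility, moves, result))
            beta = min(beta, v)         # only Min updates beta
            if alpha >= beta:
                break                   # prune the remaining children
        return v

The root call is alphabeta(root, float('-inf'), float('inf'), True, ...), matching the initial values of alpha and beta given above.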

Working of Alpha-Beta Pruning:


Let's take an example of two-player search tree to understand the working of Alpha-beta pruning
Step 1: At the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These
values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the
same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with
2 and then with 3, and max(2, 3) = 3 becomes the value of α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn.
Now β = +∞ is compared with the available subsequent node value, i.e., min(∞, 3) = 3; hence at node B
we now have α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -
∞ and β = 3 are passed along.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha
is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right
successor of E is pruned: the algorithm will not traverse it, and the value at node E becomes 5.
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of
alpha is changed to the maximum available value, 3, as max(-∞, 3) = 3, with β = +∞. These two values
are now passed on to the right successor of A, which is node C.

At node C, α=3 and β= +∞, and the same values will be passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, which is 0: max(3, 0) = 3. It is
then compared with the right child, which is 1: max(3, 1) = 3. α remains 3, but the node value of F
becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes,
as it is compared with 1: min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the
condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute the
entire sub-tree G.
Step 8: C now returns the value 1 to A, and the best value for A is max(3, 1) = 3. The
final game tree shows the nodes that were computed and the nodes that were never
computed. Hence the optimal value for the maximizer is 3 in this example.

CONSTRAINT SATISFACTION PROBLEMS (CSP)


Constraint Satisfaction Problems (CSP)
• By the name, it is understood that constraint satisfaction means solving a problem under certain
constraints or rules.
• Constraint satisfaction is a technique where a problem is solved when its values satisfy certain
constraints or rules of the problem.
Constraint satisfaction depends on three components, namely:
1. X: It is a set of variables.
2. D: It is a set of domains where the variables reside. There is a specific domain for each variable.
3. C: It is a set of constraints which are followed by the set of variables.
• The constraint value consists of a pair of {scope, rel}. The scope is a tuple of variables which
participate in the constraint and rel is a relation which includes a list of values which the variables
can take to satisfy the constraints of the problem.
• A state in state-space is defined by assigning values to some or all variables such as
{X1=v1, X2=v2, and so on…}.
• A CSP is defined by a set of variables X1, X2, …, Xn, and a set of constraints C1, C2, …, Cm.
• Each variable Xi has a non-empty domain Di of possible values.
• Each constraint Ci involves some subset of the variables and specifies the allowable combinations
of values for that subset.
• A state of the problem is defined by an assignment of values to some or all of the variables
{ Xi = vi, Xj = vj, … }.
• An assignment that does not violate any constraints is called a consistent or legal assignment.
• A complete assignment is an assignment in which every variable is assigned a value.
• A solution to the CSP is a complete assignment that satisfies all the constraints.
An assignment of values to a variable can be done in three ways:
• Consistent or Legal Assignment: An assignment which does not violate any constraint or rule is
called Consistent or legal assignment.
• Complete Assignment: An assignment where every variable is assigned with a value, and the
solution to the CSP remains consistent. Such assignment is known as Complete assignment.
• Partial Assignment: An assignment which assigns values to only some of the variables. Such
assignments are called Partial assignments.

Types of Domains in CSP


• Discrete Domain: the variable takes values from a countable set, which may be finite (e.g.,
{Red, Green, Blue}) or infinite (e.g., the set of integers).
• Continuous Domain: the variable takes values from a continuous range (e.g., real-valued start
times for telescope observations); such a domain is infinite.

A CSP can be formulated as a standard search problem as follows:


1. Initial state: the empty assignment { }, in which all variables are unassigned.
2. Successor function: a value can be assigned to any unassigned variable, provided it does not
conflict with previously assigned variables.
3. Goal test: the current assignment is complete.
4. Path cost: a constant cost for every step.

Constraint Satisfaction Problems

Variables of the CSP: WA, NT, Q, NSW, V, SA, T


Domains Di: { Red, Green, Blue }
Constraints: adjacent regions must have different colors (e.g., NT ≠ SA)
• To solve this CSP, DFS with backtracking can be used.
Solution: WA = R, NT = G, Q = R, SA = B, NSW = G, V = R
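A runnable sketch of DFS with backtracking for this map-coloring CSP (illustrative Python; the adjacency table encodes the Australia map used above):

def backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment                          # complete and consistent
    var = next(v for v in variables if v not in assignment)
    for color in domains[var]:
        # Consistent if no already-assigned neighbor has the same color.
        if all(assignment.get(n) != color for n in neighbors[var]):
            assignment[var] = color
            solution = backtrack(assignment, variables, domains, neighbors)
            if solution:
                return solution
            del assignment[var]                    # undo and try the next value
    return None

variables = ['WA', 'NT', 'SA', 'Q', 'NSW', 'V', 'T']
domains = {v: ['Red', 'Green', 'Blue'] for v in variables}
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
             'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': []}
print(backtrack({}, variables, domains, neighbors))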
We consider a small part of the car assembly, consisting of 15 tasks: install axles (front and back), affix
all four wheels (right and left, front and back), tighten nuts for each wheel, affix hubcaps, and inspect the
final assembly. We can represent the tasks with 15 variables.
Varieties of CSPs
1. Discrete variables
2. CSPs with continuous domains
Discrete variables
i) Finite domains
• Involves variables that are discrete and have finite domains.
• If the maximum domain size of any variable in a CSP is d, then the number of possible
complete assignments is O(d^n), where n is the number of variables.
• Finite domain CSPs include Boolean CSPs, whose variables can be either true or false.
ii) Infinite domains
• Example, the set of integers or the set of strings.
• It is no longer possible to describe constraints by enumerating all allowed combinations of
values.
• Instead, a constraint language of algebraic inequalities is used, such as StartJob1 + 5 ≤ StartJob3.
CSPs with continuous domains
• Example: In operation research field, the scheduling of experiments on the Hubble Telescope
requires very precise timing of observations;
• The start and finish of each observation and maneuver are continuous-valued variables
that must obey a variety of astronomical, precedence and power constraints.
• The best known category of continuous-domain CSPs is linear programming problems,
where the constraints must be linear inequalities forming a convex region.
• Linear programming problems can be solved in time polynomial in the number of variables.
Types of constraints :
 Unary constraints involve a single variable. Example: SA ≠ green
 Binary constraints involve pairs of variables. Example: SA ≠ WA
 Global Constraints: It is the constraint type which involves an arbitrary number of variables.
 Higher order constraints involve 3 or more variables. Example: cryptarithmetic puzzles.
 Absolute constraints are constraints which rule out a potential solution when they are
violated
 Preference constraints are the constraints indicating which solutions are preferred

Some special types of solution algorithms are used to solve the following types of constraints:
Linear Constraints: These types of constraints are commonly used in linear programming, where each
variable containing an integer value exists in linear form only.
Non-linear Constraints: These types of constraints are used in non-linear programming, where each
variable (an integer value) exists in a non-linear form.

Constraint Propagation
• Although forward checking detects many inconsistencies, it does not detect all of them.
• In local state-spaces, the choice is only one, i.e., to search for a solution. But in CSP, we have two
choices either:
• We can search for a solution or
• We can perform a special type of inference called constraint propagation.
• Constraint propagation is a special type of inference that helps in reducing the legal number of
values for the variables. The idea behind constraint propagation is local consistency.
• In local consistency, variables are treated as nodes, and each binary constraint is treated as an arc
in the given problem.

There are following local consistencies which are discussed below:


Node Consistency: A single variable is said to be node consistent if all the values in the variable’s
domain satisfy the unary constraints on the variables.
Arc Consistency: A variable is arc consistent if every value in its domain satisfies the binary constraints
of the variables.
Path Consistency: a set of two variables is path-consistent with respect to a third variable if every
consistent assignment to the pair can be extended to the third variable while satisfying all the binary
constraints. It is similar to arc consistency.
k-consistency: This type of consistency is used to define the notion of stronger forms of propagation.
Here, we examine the k-consistency of the variables.
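Arc consistency is usually enforced with the AC-3 algorithm; here is a hedged Python sketch (constraint(xi, a, xj, b) is an assumed predicate, not from the notes, returning True when Xi = a and Xj = b are compatible):

from collections import deque

def ac3(variables, domains, neighbors, constraint):
    # Treat each binary constraint as two directed arcs and revise until stable.
    queue = deque((xi, xj) for xi in variables for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        revised = False
        for a in list(domains[xi]):
            # Remove a if no value b in Xj's domain supports it.
            if not any(constraint(xi, a, xj, b) for b in domains[xj]):
                domains[xi].remove(a)
                revised = True
        if revised:
            if not domains[xi]:
                return False                 # a domain emptied: inconsistent CSP
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))   # re-check arcs pointing at Xi
    return True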

CSP Problems:
 Graph coloring
 Sudoku Playing
 n-queen problem
 Crossword
 Cryptarithmetic Problem

Cryptarithmetic Problem
• This problem has one most important constraint: we cannot assign the same digit to two different
characters. Each letter should be assigned a unique digit.
• Cryptarithmetic Problem is a type of constraint satisfaction problem where the game is about
digits and its unique replacement either with alphabets or other symbols.
• In a cryptarithmetic problem, the digits (0-9) are substituted for letters or
symbols. The task in a cryptarithmetic problem is to find the digit for each letter so that the
result is arithmetically correct.
• We can perform all the arithmetic operations on a given cryptarithmetic problem.
The rules or constraints on a cryptarithmetic problem are as follows:
• Each unique letter should be replaced with a unique digit.
• The result should satisfy the predefined arithmetic rules, i.e., 2+2 =4, nothing else.
• Digits should be from 0-9 only.
• There should be only one carry forward, while performing the addition operation on a problem.
• The problem can be solved from both sides, i.e., lefthand side (L.H.S), or righthand side
(R.H.S)
Let’s understand the cryptarithmetic problem as well its constraints better with the help of an
example:
Given a cryptarithmetic problem, i.e., S E N D + M O R E = M O N E Y
• In this example, add both terms S E N D and M O R E to bring M O N E Y as a result.

Follow the below steps to understand the given problem by breaking it into its subparts:
• Starting from the left hand side (L.H.S) , the terms are S and M. Assign a digit which could give a
satisfactory result. Let’s assign S->9 and M->1.

• Hence, we get a satisfactory result by adding up the terms, and we also get the assignment O -> 0.
• Now, move ahead to the next terms E and O to get N as its output.

• Adding E and O means 5 + 0 = 5, which would make N equal to E; this is not possible because,
according to the cryptarithmetic constraints, we cannot assign the same digit to two letters. So, we
need to think more and assign some other value.

• Note: As we solve further, we will get one carry; after applying it, the answer will be
satisfied.
• Further, adding the next two terms N and R, we get a value for E.
• But we have already assigned E -> 5. Thus, the above result does not satisfy the values,
because we are getting a different value for E. So we need to think more.
• Again, after solving the whole problem, we will get a carryover on this term, so our answer
will be satisfied.

• Let’s move ahead.


• Again, on adding the last two terms, i.e., the rightmost terms D and E, we get Y as its result.

• Keeping all the constraints in mind, the final resultant is as follows:

• Below is the representation of the assignment of the digits to the alphabets.


More Examples:
• BASE+BALL=GAMES
• YOUR+YOU=HEART
• TO+GO=OUT
• USA+USSR=PEACE
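Puzzles like these can also be solved by brute force over digit assignments; here is a purely illustrative Python sketch for the S E N D + M O R E = M O N E Y example above (real solvers prune with constraint propagation instead):

from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'                       # the 8 distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:         # leading letters cannot be 0
            continue
        send  = int(''.join(str(a[c]) for c in 'SEND'))
        more  = int(''.join(str(a[c]) for c in 'MORE'))
        money = int(''.join(str(a[c]) for c in 'MONEY'))
        if send + more == money:
            return a                           # 9567 + 1085 = 10652
    return None

print(solve_send_more_money())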
