
UNIT I

PROBLEM SOLVING

TECHNICAL TERMS

1. Artificial Intelligence
Literal meaning: Artificial means man-made; intelligence means thinking power.
Technical meaning: Artificial Intelligence is a branch of computer science that deals with developing intelligent machines which can behave like humans, think like humans, and have the ability to make decisions on their own.

2. Agent
Literal meaning: A person or thing that performs an action or brings about a certain result.
Technical meaning: An agent perceives its environment through sensors and returns actions through actuators.

3. Search
Literal meaning: To look into or over carefully or thoroughly in an effort to find or discover something.
Technical meaning: Searching is a step-by-step procedure to solve a search problem in a given search space.

4. Completeness
Literal meaning: The quality of being whole or perfect and having nothing missing.
Technical meaning: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.

5. Optimality
Literal meaning: Most desirable or satisfactory.
Technical meaning: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, it is said to be an optimal solution.

6. Uninformed Search
Literal meaning: Uninformed means not informed; search means to find something.
Technical meaning: Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, so they are also called blind search.

7. Informed Search
Literal meaning: Informed means to give someone facts or information; search means to find.
Technical meaning: In an informed search, problem information is available which can guide the search. Informed search is also called heuristic search.

8. Local Maxima
Literal meaning: Local means relating to a place; maxima means maximum.
Technical meaning: A local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum.

9. Ridges
Literal meaning: A long narrow raised land formation with sloping sides.
Technical meaning: A ridge is a special form of local maximum: an area higher than its surrounding areas that itself has a slope and cannot be reached in a single move.

10. Plateaus
Literal meaning: A flat area of land that is elevated above sea level.
Technical meaning: A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible.

UNIT I PROBLEM SOLVING

1.1 Introduction To Artificial Intelligence

1.1.1 History of Artificial Intelligence

1.1.2 Types of AI

1.1.3 AI Type -II: Based on functionalities

1.2 Applications of AI

1.3 Problem Solving Agents

1.3.1 Components to formulate the Associated Problem

1.4 Search Algorithms

1.4.1 Search Algorithm Terminologies

1.4.2 Properties of Search Algorithms

1.4.3 Types of Search Algorithms

1.4.4 Uninformed/Blind search

1.4.5 Informed Search

1.5 Uninformed/ Blind Search

1.5.1 Breadth First Search

1.5.2. Depth First Search

1.5.3 Depth limited Search

1.5.4 Uniform Cost Search

1.5.5 Iterative Deepening Depth First Search

1.5.6 Bidirectional Search

1.6 Heuristic Search Strategy

1.7 Local search & Optimization problems

1.8 Adversarial Search

1.9 Constraint Satisfaction Problems

1.1 Introduction to Artificial Intelligence
● "Artificial Intelligence is a branch of computer science that deals with developing intelligent machines which can behave like humans, think like humans, and have the ability to make decisions on their own."
● Artificial Intelligence is a combination of two words Artificial and Intelligence,
which refers to man-made intelligence. Therefore, when machines are equipped
with man-made intelligence to perform intelligent tasks similar to humans, it is
known as Artificial Intelligence. It is all about developing intelligent machines
that can simulate the human brain and work & behave like human beings.

1.1.1 HISTORY OF ARTIFICIAL INTELLIGENCE

• Artificial intelligence is often assumed to be a new technology, but in reality it is not. Research in the field of AI is much older; it is said that the concept of intelligent machines appears in Greek mythology. Below are some keystones in the development of AI:
❖ In the year 1943, Warren McCulloch and Walter Pitts proposed a model of artificial neurons.
❖ In the year 1950, Alan Turing published the paper "Computing Machinery and Intelligence", in which he introduced a test known as the Turing Test.
This test is used to determine intelligence in machines by checking if the
machine is capable of thinking or not.
❖ In the year 1956, the term Artificial Intelligence was coined for the first time by the American computer scientist John McCarthy at the Dartmouth Conference. John McCarthy is also known as the Father of AI.
❖ In the year 1972, the first full-scale intelligent humanoid robot, WABOT-1, was created in Japan.
❖ In the year 1980, AI came with the evolution of Expert Systems. These
systems are computer programs, which are designed to solve complex
problems.
❖ In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.
❖ In the year 2006, AI came into the business world. The world's top companies, such as Facebook, Twitter, and Netflix, also started using AI in their products.
1.1.2 TYPES OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence can be divided into various types. There are two main categorizations: one based on capabilities and one based on functionality. The following flow diagram explains the types of AI.

Fig. 1.1. AI type-1: Based on Capabilities

1. Weak AI or Narrow AI:


❖ Narrow AI is a type of AI which is able to perform a dedicated task with intelligence. Narrow AI is the most common and currently available form of Artificial Intelligence.

❖ Narrow AI cannot perform beyond its field or limitations, as it is only


trained for one specific task. Hence it is also termed as weak AI. Narrow
AI can fail in unpredictable ways if it goes beyond its limits.
❖ Apple Siri is a good example of Narrow AI, but it operates with a limited
pre-defined range of functions.
❖ IBM's Watson supercomputer also comes under Narrow AI, as it uses an
Expert system approach combined with Machine learning and natural
language processing.
❖ Some Examples of Narrow AI are playing chess, purchasing suggestions
on e-commerce site, self-driving cars, speech recognition, and image
recognition.

2. General AI:

❖ General AI is a type of intelligence which could perform any intellectual task with human-like efficiency.
❖ The idea behind general AI is to make a system which could be smarter and think like a human on its own.
❖ Currently, no system exists which comes under general AI and can perform any task as perfectly as a human.
❖ Researchers worldwide are now focused on developing machines with General AI.
❖ As systems with general AI are still under research, it will take much effort and time to develop such systems.
3. Super AI:

❖ Super AI is a level of intelligence of systems at which machines could surpass human intelligence and perform any task better than a human, with cognitive properties. It is an outcome of general AI.
❖ Some key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
❖ Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in the real world remains a world-changing task.

1.1.3 ARTIFICIAL INTELLIGENCE TYPE -2 : BASED ON FUNCTIONALITY

1. Reactive Machines
❖ Purely reactive machines are the most basic types of Artificial Intelligence.
❖ Such AI systems do not store memories or past experiences for future actions.
❖ These machines only focus on current scenarios and react to them with the best possible action.
❖ IBM's Deep Blue system is an example of reactive machines.
❖ Google's AlphaGo is also an example of reactive machines.

2. Limited Memory
❖ Limited memory machines can store past experiences or some data for a
short period of time.
❖ These machines can use stored data for a limited time period only.
❖ Self-driving cars are one of the best examples of Limited Memory
systems. These cars can store recent speed of nearby cars, the distance of
other cars, speed limit, and other information to navigate the road.

3. Theory of Mind
❖ Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
❖ This type of AI machine has not yet been developed, but researchers are making many efforts and improvements toward developing such machines.

4. Self-Awareness
❖ Self-awareness AI is the future of Artificial Intelligence. These machines
will be super intelligent, and will have their own consciousness,
sentiments, and self-awareness.
❖ These machines will be smarter than the human mind.
❖ Self-aware AI does not yet exist in reality; it is still a hypothetical concept.
AI Approaches
The definitions of AI given in some textbooks are categorized into four approaches, summarized in the table below:
❖ Systems that think like humans
❖ Systems that act like humans
❖ Systems that think rationally
❖ Systems that act rationally

Thinking Humanly:
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
"(The automation of) activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

Thinking Rationally:
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting Humanly:
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally:
"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
"AI ... is concerned with intelligent behavior in artifacts." (Nilsson, 1998)


Thinking humanly: The cognitive modeling approach
To say that a program thinks like a human, consider that human thinking can be studied in three ways:
introspection - trying to catch our own thoughts as they go by;
psychological experiments - observing a person in action;
brain imaging - observing the brain in action.

Once there is a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.
Cognitive Study of Human Mind:
It is a highly interdisciplinary field which combines ideas and methods from
psychology, computer science, philosophy, linguistics and neuroscience.
The goal of cognitive science is to characterize the nature of human knowledge and how that knowledge is used, processed, and acquired.

Paavai Institutions, Department of CSE

Acting humanly: The Turing Test approach


The Turing Test, proposed by Alan Turing(1950), was designed to provide a
satisfactory operational definition of intelligence. A computer passes the test if a
human interrogator, after posing some written questions, cannot tell whether the
written responses come from a person or from a computer.
The computer would need to possess the following capabilities:
❖ natural language processing to enable it to communicate successfully in
English
❖ knowledge representation to store what it knows or hears;
❖ automated reasoning to use the stored information to answer questions and
to draw new conclusions;
❖ machine learning to adapt to new circumstances and to detect and to
discover new patterns

❖ computer vision to perceive objects, and


❖ robotics to manipulate objects and to move about
Thinking rationally: The “laws of thought” approach
❖ The concept of "right thinking" was proposed by Aristotle. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises.
❖ The canonical example starts with Socrates is a man and all men are
mortal and concludes that Socrates is mortal.
❖ These laws of thought were supposed to govern the operation of the mind;
their study initiated the field called logic.
❖ Logicians developed a precise notation for statements about objects in the world and the relations among them. Logics are needed to create intelligent systems.
Acting rationally: The rational agent approach
An agent is just something that acts. Agents operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome. One way to act rationally is to deduce that a given action is best and then to act on that conclusion. On the other hand, there are ways of acting rationally that cannot
be said to involve inference.

The rational-agent approach to AI has two advantages over the other approaches.
First, it is more general than the "laws of thought" approach because correct inference is just one of several possible mechanisms for achieving rationality.
Second, it is suitable for scientific development.

1.2 APPLICATIONS OF AI

1. Game Playing:
AI is widely used in Gaming. Different strategic games such as Chess, where the
machine needs to think logically, and video games to provide real-time experiences
use Artificial Intelligence.

2. Robotics:
Artificial Intelligence is commonly used in the field of Robotics to develop
intelligent robots. AI implemented robots use real-time updates to sense any obstacle
in their path and can change the path instantly. AI robots can be used for carrying
goods in hospitals and industries and can also be used for other different purposes.

3. Healthcare:
In the healthcare sector, AI has diverse uses. In this field, AI can be used to detect
diseases and cancer cells. It also helps in finding new drugs with the use of historical
data and medical intelligence.

4. Computer Vision:
Computer vision enables the computer system to understand and derive
meaningful information from digital images, video, and other visual input with the
help of AI.

5. Agriculture:
AI is now widely used in Agriculture; for example, with the help of AI, we can
easily identify defects and nutrient absences in the soil. To identify these defects, AI
robots can be utilized. AI bots can also be used in crop harvesting at a higher speed
than human workers.

6. E-commerce:
AI is one of the widely used and demanding technologies in the E-commerce

industry. With AI, e-commerce businesses gain more profit and grow their business by recommending products as per the users' requirements.

7. Social Media
Different social media websites such as Facebook, Instagram, Twitter, etc., use AI
to make the user experiences much better by providing different features. For
example, Twitter uses AI to recommend tweets as per the user interest and search
history.

1.3 PROBLEM SOLVING AGENTS


When the correct action to take is not immediately obvious, an agent may need to plan ahead: to consider a sequence of actions that form a path to a goal state. Such an agent is called a problem-solving agent, and the computational process it undertakes is called search.
The agent can follow this four-phase problem-solving process:
❖ Goal Formulation: Goals organize behavior by limiting the objectives
and hence the actions to be considered.
❖ Problem Formulation: The agent devises a description of the states and
actions necessary to reach the goal - an abstract model of the relevant part
of the world.
❖ Search: Before taking any action in the real world, the agent simulates
sequences of actions in its model, searching until it finds a sequence of
actions that reaches the goal. Such a sequence is called a solution. The
agent might have to simulate multiple sequences that do not reach the
goal, but eventually it will find a solution (such as going from Arad to
Sibiu to Fagaras to Bucharest), or it will find that no solution is possible.
❖ Execution: The agent can now execute the actions in the solution, one at a
time. It is an important property that in a fully observable, deterministic,
known environment, the solution to any problem is a fixed sequence of
actions. If the model is correct, then once the agent has found a solution, it
can ignore its percepts while it is executing the actions—because the
solution is guaranteed to lead to the goal. Control theorists call this an

open-loop system: ignoring the percepts breaks the loop between agent
and environment. If there is a chance that the model is incorrect, or the
environment is nondeterministic, then the agent would be safer using a
closed-loop approach that monitors the percepts.
1.3.1 COMPONENTS TO FORMULATE THE ASSOCIATED PROBLEM

❖ Initial State: The state from which the agent starts working toward the specified goal; it initializes the problem.
❖ Actions: A description of the possible actions available to the agent from a given state.
❖ Transition model: A description of what each action does; it integrates an action with a state and returns the resulting state, which is forwarded to the next stage.
❖ Goal test: Determines whether the specified goal has been achieved via the transition model; once the goal is achieved, the search stops and moves on to determining the cost of achieving the goal.
❖ Path costing: Assigns a numeric cost to achieving the goal; it accounts for all hardware, software, and human working costs.
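These components can be made concrete with a small sketch. The following Python class is an illustrative assumption (the two-square vacuum world and all names here are not from this text): each component above maps to one method.

```python
# Illustrative sketch of a problem formulation (assumed example: a
# two-square vacuum world where both squares start dirty).
class VacuumProblem:
    def initial_state(self):
        # State = (robot location, frozenset of dirty squares).
        return ('Left', frozenset({'Left', 'Right'}))

    def actions(self, state):
        # All actions are applicable in every state of this tiny world.
        return ['Suck', 'MoveLeft', 'MoveRight']

    def transition(self, state, action):
        # Transition model: the state that results from performing `action`.
        loc, dirty = state
        if action == 'Suck':
            return (loc, dirty - {loc})
        return ('Left' if action == 'MoveLeft' else 'Right', dirty)

    def goal_test(self, state):
        # Goal test: no dirty squares remain.
        return not state[1]

    def step_cost(self, state, action):
        # Path costing: every action costs one unit.
        return 1
```

In this sketch, the action sequence Suck, MoveRight, Suck is a solution with path cost 3.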

1.4 SEARCH ALGORITHMS

Search algorithms are one of the most important areas of Artificial


Intelligence. This topic will explain all about the search algorithms in AI.
Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation. In this topic, we will learn various problem-solving search algorithms.

1.4.1 SEARCH ALGORITHM TERMINOLOGIES

Search: Searching is a step by step procedure to solve a search-problem in a


given search space. A search problem can have three main factors:

❖ Search Space: Search space represents a set of possible solutions, which


a system may have.
❖ Start State: The state from which the agent begins the search.
❖ Goal Test: A function which observes the current state and returns whether the goal state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the search tree corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition model: A description of what each action does.
Path Cost: A function which assigns a numeric cost to each path.
Solution: An action sequence which leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.

1.4.2 PROPERTIES OF SEARCH ALGORITHMS

Following are the four essential properties of search algorithms to compare the
efficiency of these algorithms:
Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, then it is said to be an optimal solution.
Time Complexity: Time complexity is a measure of time for an algorithm to
complete its task.
Space Complexity: The maximum storage space required at any point during the search, relative to the complexity of the problem.

1.4.3 TYPES OF SEARCH ALGORITHMS

Based on the search problems we can classify the search algorithms into

uninformed (Blind search) search and informed search (Heuristic search) algorithms.

1.4.4 UNINFORMED / BLIND SEARCH


The uninformed search does not contain any domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way, as it only includes
information about how to traverse the tree and how to identify leaf and goal nodes.
Uninformed search applies a way in which search tree is searched without any
information about the search space like initial state operators and test for the goal, so
it is also called blind search. It examines each node of the tree until it achieves the
goal node.
It can be divided into five main types:
❖ Breadth-first search
❖ Uniform cost search
❖ Depth-first search
❖ Iterative deepening depth-first search
❖ Bidirectional Search

1.4.5 INFORMED SEARCH

Informed search algorithms use domain knowledge. In an informed search,


problem information is available which can guide the search. Informed search
strategies can find a solution more efficiently than an uninformed search strategy.
Informed search is also called a Heuristic search.
A heuristic is a way which might not always be guaranteed for best solutions
but guaranteed to find a good solution in reasonable time.
Informed search can solve much more complex problems which could not be solved in any other way.

Informed search algorithms are useful for problems such as the traveling salesman problem. Two common informed search algorithms are:

1. Greedy Search
2. A* Search

1.5 UNINFORMED / BLIND SEARCH

Uninformed search is a class of general-purpose search algorithms which


operates in brute force-way. Uninformed search algorithms do not have additional
information about state or search space other than how to traverse the tree, so it is also
called blind search.

Following are the various types of uninformed search algorithms:


1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1.5.1 BREADTH-FIRST SEARCH

❖ Breadth-first search is the most common search strategy for traversing a


tree or graph. This algorithm searches breadthwise in a tree or graph, so it
is called breadth-first search.
❖ BFS algorithm starts searching from the root node of the tree and expands
all successor node at the current level before moving to nodes of next
level.
❖ The breadth-first search algorithm is an example of a general-graph
search algorithm.
❖ Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
❖ BFS will provide a solution if any solution exists.
❖ If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:

❖ It requires lots of memory since each level of the tree must be saved
into memory to expand the next level.
❖ BFS needs lots of time if the solution is far away from the root node.

Example:
In the below tree structure, we have shown the traversing of the tree using BFS
algorithm from the root node S to goal node K. BFS search algorithm traverse in
layers, so it will follow the path which is shown by the dotted arrow, and the
traversed path will be:
S → A → B → C → D → G → H → E → F → I → K

Fig. 1.4. Breadth First Search

Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed until the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor (number of successors at every state):

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at
some finite depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth

of the node.
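The layer-by-layer, FIFO-queue behavior described above can be sketched in a few lines of Python. This is an illustrative implementation over a small assumed graph (not the graph of Fig. 1.4):

```python
from collections import deque

# Small example tree, assumed for illustration: node -> list of children.
GRAPH = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['E', 'F'],
    'C': [], 'D': [], 'E': ['K'], 'F': [], 'K': [],
}

def bfs(graph, start, goal):
    """Breadth-first search: expand shallowest nodes first via a FIFO queue."""
    frontier = deque([[start]])        # queue of paths, not just nodes
    explored = set()
    while frontier:
        path = frontier.popleft()      # FIFO: oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph[node]:
            frontier.append(path + [child])
    return None                        # goal not reachable
```

On this graph, `bfs(GRAPH, 'S', 'K')` returns the shallowest path `['S', 'B', 'E', 'K']`.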

1.5.2 DEPTH-FIRST SEARCH

❖ Depth-first search is a recursive algorithm for traversing a tree or graph


data structure.
❖ It is called the depth-first search because it starts from the root node and
follows each path to its greatest depth node before moving to the next path.
❖ DFS uses a stack data structure for its implementation.
❖ The process of the DFS algorithm is similar to the BFS algorithm.

Advantage:
❖ DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
❖ It takes less time than the BFS algorithm to reach the goal node (if it traverses the right path).

Disadvantage:
❖ There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
❖ DFS goes deep down into the search and may sometimes enter an infinite loop.

Example:
In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:
Root node -> Left node -> right node
It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack, as E has no other successor and the goal node has not yet been found. After backtracking, it will traverse node C and then G, where it will terminate, as it has found the goal node.


Fig. 1.5. Depth First Search

Completeness: DFS search algorithm is complete within finite state space as it


will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:

T(n) = 1 + n + n^2 + n^3 + ... + n^m = O(n^m)

where m is the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
Space Complexity: DFS needs to store only the single path from the root node, so its space complexity is equivalent to the size of the fringe set, which is O(bm).
Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
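The stack-based procedure above can be sketched as follows; an illustrative implementation over a small assumed graph (not the tree of Fig. 1.5):

```python
# Small example tree, assumed for illustration: node -> list of children.
GRAPH = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['E', 'F'],
    'C': [], 'D': [], 'E': ['K'], 'F': [], 'K': [],
}

def dfs(graph, start, goal):
    """Depth-first search: expand deepest nodes first via a LIFO stack."""
    frontier = [[start]]               # stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()          # LIFO: newest (deepest) path first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(graph[node]):
            frontier.append(path + [child])
    return None
```

The only difference from the BFS sketch is the frontier discipline: a stack (`pop` from the end) instead of a FIFO queue.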

1.5.3 DEPTH-LIMITED SEARCH ALGORITHM


A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search can overcome the drawback of infinite paths in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
Depth-limited search can be terminated with two Conditions of failure:
❖ Standard failure value: It indicates that problem does not have any solution.
❖ Cutoff failure value: It defines no solution for the problem within a given
depth limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:
❖ Depth-limited search also has a disadvantage of incompleteness.
❖ It may not be optimal if the problem has more than one solution.
Example:

Fig. 1.6. Depth Limited Search

Completeness: The DLS algorithm is complete if the shallowest solution lies within the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
Space Complexity: The space complexity of the DLS algorithm is O(b × ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even when ℓ > d.
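The two failure values above can be sketched in a recursive implementation. This is illustrative, with an assumed example graph; `None` plays the role of the standard failure value and the string `'cutoff'` the cutoff failure value:

```python
# Small example tree, assumed for illustration: node -> list of children.
GRAPH = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['E', 'F'],
    'C': [], 'D': [], 'E': ['K'], 'F': [], 'K': [],
}

def dls(graph, node, goal, limit):
    """Depth-limited search: returns a path to the goal, 'cutoff' if the
    depth limit was hit, or None (standard failure: no solution at all)."""
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                # node at the limit: treated as leaf
    cutoff_occurred = False
    for child in graph[node]:
        result = dls(graph, child, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None
```

A `'cutoff'` result means a deeper limit might still succeed, while `None` means the goal is unreachable at any depth.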

1.5.4 UNIFORM COST SEARCH ALGORITHM

❖ Uniform-cost search is a searching algorithm used for traversing a weighted


tree or graph. This algorithm comes into play when a different cost is
available for each edge.

❖ The primary goal of the uniform-cost search is to find a path to the goal node
which has the lowest cumulative cost.

❖ Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal (lowest-cost) path is in demand.

❖ A uniform-cost search algorithm is implemented by the priority queue. It


gives maximum priority to the lowest cumulative cost.

❖ Uniform cost search is equivalent to BFS algorithm if the path cost of all
edges is the same.
Advantages:
❖ Uniform cost search is optimal because at every state the path with the least
cost is chosen.
Disadvantages:
❖ It does not care about the number of steps involved in searching; it is only concerned with path cost. As a result, this algorithm may get stuck in an infinite loop.
Example:

Fig. 1.7. Uniform Cost Search

Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.

Time Complexity:
Let C* be the cost of the optimal solution and ε the minimum cost of each step toward the goal node. Then the number of steps is C*/ε + 1; we add +1 because we start from state 0 and end at C*/ε.
Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Space Complexity:

By the same logic, the worst-case space complexity of uniform-cost search is also O(b^(1 + ⌊C*/ε⌋)).

Optimal:
Uniform-cost search is always optimal as it only selects a path with the lowest
path cost.
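The priority-queue mechanism above can be sketched with Python's `heapq`. An illustrative implementation over a small assumed weighted graph:

```python
import heapq

# Assumed example weighted graph: node -> list of (neighbor, edge cost).
GRAPH = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 6)],
    'B': [('G', 1)],
    'G': [],
}

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the lowest cumulative-cost path."""
    frontier = [(0, start, [start])]   # priority queue of (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # first goal popped has optimal cost
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph[node]:
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None
```

With this graph, the three-edge route S → A → B → G (cost 4) beats the shorter two-edge route S → B → G (cost 5), illustrating that UCS minimizes cost, not the number of steps.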

1.5.5 ITERATIVE DEEPENING DEPTH-FIRST SEARCH

❖ The iterative deepening algorithm is a combination of DFS and BFS


algorithms. This search algorithm finds out the best depth limit and does it
by gradually increasing the limit until a goal is found.

❖ This algorithm performs depth-first search up to a certain "depth limit",


and it keeps increasing the depth limit after each iteration until the goal
node is found.

❖ This Search algorithm combines the benefits of Breadth-first search's fast


search and depth-first search's memory efficiency.

❖ The iterative deepening algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:
❖ It combines the benefits of BFS and DFS search algorithm in terms of
fast search and memory efficiency.
Disadvantages:
❖ The main drawback of IDDFS is that it repeats all the work of the
previous phase.

Example:
The following tree structure shows iterative deepening depth-first search. The IDDFS algorithm performs successive iterations until it finds the goal node. The iterations performed by the algorithm are:


Fig. 1.8. Iterative deepening depth first search

1st Iteration → A

2nd Iteration → A, B, C

3rd Iteration → A, B, D, E, C, F, G

4th Iteration → A, B, D, H, I, E, C, F, K, G

In the fourth iteration, the algorithm will find the goal node.

Completeness:
This algorithm is complete is if the branching factor is finite.

Time Complexity:
Suppose b is the branching factor and d is the depth of the shallowest goal; then the worst-case time complexity is O(b^d).

Space Complexity:
The space complexity of IDDFS is O(bd), i.e., proportional to b times d.

Optimal:
IDDFS algorithm is optimal if path cost is a non- decreasing function of the depth
of the node.
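The iteration scheme above can be sketched by wrapping a depth-limited DFS in a loop of increasing limits. This is illustrative, with an assumed example graph:

```python
# Small example tree, assumed for illustration: node -> list of children.
GRAPH = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['E', 'F'],
    'C': [], 'D': [], 'E': ['K'], 'F': [], 'K': [],
}

def depth_limited(graph, node, goal, limit):
    """DFS that never descends below `limit`; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph[node]:
        result = depth_limited(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run DLS with limits 0, 1, 2, ... The shallow
    levels are re-explored each round, in exchange for O(bd) memory."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result is not None:
            return result
    return None
```

Each pass repeats the work of the previous one (the main drawback noted above), but the repeated shallow levels are cheap compared with the deepest level.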


1.5.6 BIDIRECTIONAL SEARCH


The bidirectional search algorithm runs two simultaneous searches, one from the initial state (forward search) and the other from the goal node (backward search), to find the goal. Bidirectional search replaces one single search graph with two small subgraphs: one starts the search from the initial vertex and the other from the goal vertex. The search stops when the two graphs intersect each other. Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
❖ Bidirectional search is fast.
❖ Bidirectional search requires less memory
Disadvantages:
❖ Implementation of the bidirectional search tree is difficult.
❖ In bidirectional search, one should know the goal state in advance.
Example:
In the search tree below, the bidirectional search algorithm is applied. The algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the
forward direction and from goal node 16 in the backward direction.
The algorithm terminates at node 9, where the two searches meet.
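A minimal sketch of bidirectional search using BFS on both sides is shown below. The chain graph in the usage example is hypothetical (it is not the graph of Fig. 1.9), chosen so the two frontiers visibly meet in the middle.

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Expand one BFS level from each side in turn; return the node
    where the two frontiers meet, or None if they never do."""
    if start == goal:
        return start
    seen_f, seen_b = {start}, {goal}
    front, back = deque([start]), deque([goal])
    while front and back:
        for frontier, seen, other_seen in ((front, seen_f, seen_b),
                                           (back, seen_b, seen_f)):
            for _ in range(len(frontier)):       # one full BFS level
                node = frontier.popleft()
                for nb in graph.get(node, []):
                    if nb in other_seen:         # the searches intersect here
                        return nb
                    if nb not in seen:
                        seen.add(nb)
                        frontier.append(nb)
    return None

# A hypothetical undirected chain 1-2-3-4-5: the searches meet in the middle.
chain = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
meeting = bidirectional_bfs(chain, 1, 5)   # meets at node 3
```

Because each side only needs to search about half the depth, each frontier grows to roughly b^(d/2) nodes rather than b^d, which is where the speed and memory advantages come from.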


Fig. 1.9. Bidirectional Search

Completeness: Bidirectional search is complete if we use BFS in both searches.


Time Complexity: The time complexity of bidirectional search using BFS is
O(b^(d/2)). Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is optimal when both searches are breadth-first and
step costs are uniform.

1.6 HEURISTIC SEARCH STRATEGIES


A heuristic is a technique used to solve a problem faster than classic methods, or
to find an approximate solution when classic methods fail to find an exact one.
Heuristics are problem-solving techniques that result in practical and quick
solutions.
Heuristics are strategies derived from past experience with similar problems.
They use practical methods and shortcuts to produce solutions that may or may not
be optimal, but that are sufficient within a given limited timeframe.
Why do we need heuristics?

Heuristics are used in situations that require a short-term solution. When facing


complex situations with limited resources and time, heuristics can help companies
make quick decisions through shortcuts and approximate calculations. Most
heuristic methods involve mental shortcuts based on past experience.
A heuristic method might not always provide the finest solution, but it is assured
to help us find a good solution in a reasonable time.
Based on context, there can be different heuristic methods that correlate with the
problem's scope. The most common heuristic methods are trial and error,
guesswork, the process of elimination, and historical data analysis. These methods
involve readily available information that is not particular to the problem but is most
applicable. They include the representative, affect, and availability heuristics.

Heuristic search techniques in AI (Artificial Intelligence)

Fig. 1.10.

Heuristic techniques can be grouped into two categories:

Direct Heuristic Search techniques in AI


It includes Blind Search, Uninformed Search, and Blind control strategy. These
search techniques are not always possible as they require much memory and time.
These techniques search the complete space for a solution and use the arbitrary
ordering of operations.
The examples of Direct Heuristic search techniques include Breadth-First Search
(BFS) and Depth First Search (DFS).

Weak Heuristic Search techniques in AI


It includes Informed Search, Heuristic Search, and Heuristic control strategy.
These techniques are helpful when they are applied properly to the right types of
tasks. They usually require domain-specific information.
The examples of Weak Heuristic search techniques include Best First Search
(BFS) and A*.

Before describing certain heuristic techniques, let's see some of the techniques listed
below:
❖ Bidirectional Search

❖ A* search

❖ Simulated Annealing
❖ Hill Climbing
❖ Best First search
❖ Beam search
First, let's talk about the Hill climbing in Artificial intelligence.

Hill Climbing Algorithm


It is a technique for optimizing mathematical problems. Hill climbing is
widely used when a good heuristic is available.
It is a local search algorithm that continuously moves in the direction of
increasing elevation/value to find the mountain's peak or the best solution to the
problem. It terminates when it reaches a peak value where no neighbor has a higher
value. Traveling-salesman Problem is one of the widely discussed examples of the
Hill climbing algorithm, in which we need to minimize the distance traveled by the
salesman.
It is also called greedy local search as it only looks to its good immediate
neighbor state and not beyond that. The steps of a simple hill-climbing algorithm are
listed below:
Step 1: Evaluate the initial state. If it is the goal state, then return success and
Stop.
Step 2: Loop Until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:

If it is a goal state, then return success and quit.


Else if it is better than the current state, then assign the new state as the
current state.
Else if it is not better than the current state, then return to Step 2.
Step 5: Exit.
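The steps above can be sketched as a short Python routine. This sketch shows the steepest-ascent variant (it picks the best successor rather than the first better one); the `value` and `neighbors` functions in the usage line are hypothetical, maximizing f(x) = -(x - 3)^2 over the integers.

```python
def hill_climb(start, value, neighbors):
    """Steepest-ascent hill climbing following the steps above: keep
    moving to the best successor until no neighbor improves on the
    current state, then return the current state (a peak)."""
    current = start
    while True:
        candidates = neighbors(current)
        if not candidates:                   # no operator left to apply
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):    # Step 4: no better neighbor
            return current
        current = best                       # move uphill

# Hypothetical 1-D landscape: maximize f(x) = -(x - 3)^2 over the integers.
peak = hill_climb(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1])
```

On this single-peaked landscape the climb reaches x = 3 from any start; on landscapes with local maxima it would stop at the first peak it reaches, which is exactly the weakness discussed later.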

Best first search (BFS)


This algorithm always chooses the path which appears best at the moment. It is
a combination of the depth-first and breadth-first search algorithms, letting us take
advantage of both. It uses a heuristic function to guide the search. With the help of
best-first search, at each step we can choose the most promising node.

Best first search algorithm:

Step 1: Place the starting node into the OPEN list.


Step 2: If the OPEN list is empty, Stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of
h(n), and place it in the CLOSED list.
Step 4: Expand the node n, and generate the successors of node n.
Step 5: Check each successor of node n, and find whether any node is a goal
node or not. If any successor node is the goal node, then return
success and stop the search, else continue to next step.
Step 6: For each successor node, the algorithm checks for evaluation function f
(n) and then check if the node has been in either OPEN or CLOSED
list. If the node has not been in both lists, then add it to the OPEN list.
Step 7: Return to Step 2.
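The steps above can be sketched with a priority queue standing in for the OPEN list. This is a minimal sketch: the example graph and heuristic values at the bottom are hypothetical, and the function only reports whether the goal is reachable, as in the step list.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search following the steps above: OPEN is a
    priority queue ordered by the heuristic h(n); CLOSED holds
    expanded nodes."""
    open_list = [(h[start], start)]          # Step 1
    closed = set()
    while open_list:                         # Step 2: fail when OPEN is empty
        _, n = heapq.heappop(open_list)      # Step 3: lowest h(n)
        if n == goal:
            return True
        closed.add(n)
        for succ in graph.get(n, []):        # Step 4: expand n
            if succ == goal:                 # Step 5: goal test on successors
                return True
            in_open = any(s == succ for _, s in open_list)
            if succ not in closed and not in_open:   # Step 6
                heapq.heappush(open_list, (h[succ], succ))
    return False

# Hypothetical graph and heuristic values.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['A']}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
found = best_first_search(graph, h, 'S', 'G')   # True
```

Because the queue is ordered purely by h(n), the search greedily follows the most promising-looking node, which is fast but not guaranteed optimal.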

A* Search Algorithm
A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) and the cost to reach node n from the start state, g(n). It
combines the features of UCS and greedy best-first search, by which it solves the
problem efficiently.
It finds the shortest path through the search space using the heuristic
function. This search algorithm expands fewer search-tree nodes and gives optimal
results faster.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not. If the list is empty, then return
failure and stops.
Step 3: Select the node from the OPEN list which has the smallest value of the
evaluation function (g + h). If node n is the goal node, then return
success and stop; otherwise go to the next step.
Step 4: Expand node n and generate all of its successors, and put n into the
closed list. For each successor n ′, check whether n ′ is already in the
OPEN or CLOSED list. If not, then compute the evaluation function
for n ′ and place it into the Open list.

Step 5: Else, if node n ′ is already in OPEN or CLOSED, it should be


attached to the back-pointer which reflects the lowest g(n ′) value.
Step 6: Return to Step 2.
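The algorithm above can be sketched as follows. This is a simplified sketch rather than a full implementation (re-expansion is handled by keeping the cheapest g per node instead of explicit back-pointer repair); the weighted graph and heuristic values in the usage lines are hypothetical, with h admissible.

```python
import heapq

def astar(graph, h, start, goal):
    """A* sketch: order OPEN by f(n) = g(n) + h(n). `graph` maps a node
    to {neighbor: step_cost}; returns (cost, path) or None."""
    open_list = [(h[start], 0, start, [start])]
    best_g = {start: 0}                       # cheapest g found per node
    while open_list:
        f, g, n, path = heapq.heappop(open_list)
        if n == goal:
            return g, path
        for succ, step in graph.get(n, {}).items():
            g2 = g + step
            if g2 < best_g.get(succ, float('inf')):   # better route: (re)open
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None

# Hypothetical weighted graph with an admissible heuristic.
graph = {'S': {'A': 1, 'B': 4}, 'A': {'B': 2, 'G': 5}, 'B': {'G': 1}}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
cost, path = astar(graph, h, 'S', 'G')   # cost 4 via S-A-B-G
```

Note how the direct edges S-B (cost 4) and A-G (cost 5) are skipped in favor of the cheaper route S-A-B-G, because f = g + h ranks it first.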

Examples of heuristics in everyday life

Some of the real-life examples of heuristics that people use as a way to solve a
problem:
❖ Common sense: It is a heuristic that is used to solve a problem based on
the observation of an individual.
❖ Rule of thumb: In heuristics, we also use a term rule of thumb. This
heuristic allows an individual to make an approximation without doing an
exhaustive search.
❖ Working backward: It lets an individual solve a problem by assuming
that the problem is already being solved by them and working backward in
their minds to see how much a solution has been reached.
❖ Availability heuristic: It allows a person to judge a situation based on the
examples of similar situations that come to mind.
❖ Familiarity heuristic: It allows a person to approach a problem on the
fact that an individual is familiar with the same situation, so one should
act similarly as he/she acted in the same situation before.
❖ Educated guess: It allows a person to reach a conclusion without doing
an exhaustive search. Using it, a person considers what they have
observed in the past and applies that history to a situation for which no
definite answer has yet been decided.

Types of heuristics
There are various types of heuristics, including the availability heuristic, affect
heuristic and representative heuristic. Each heuristic type plays a role in decision-
making. Let's discuss about the Availability heuristic, affect heuristic, and
Representative heuristic.


Availability heuristic
Availability heuristic is said to be the judgment that people make regarding the
likelihood of an event based on information that quickly comes into mind. On
making decisions, people typically rely on the past knowledge or experience of an
event. It allows a person to judge a situation based on the examples of similar
situations that come to mind.
Representative heuristic
It occurs when we evaluate an event's probability on the basis of its similarity
with another event.
Example: We can understand the representative heuristic by the example of
product packaging: consumers tend to associate a product's quality with its
external packaging. If a company packages its products in a way that reminds you
of a high-quality, well-known product, then consumers will regard that product as
having the same quality as the branded product.
So, instead of evaluating the product based on its quality, customers infer the
product's quality from the similarity in packaging.
Affect heuristic
It is based on the negative and positive feelings that are linked with a certain
stimulus. It includes quick feelings that are based on past beliefs. Its theory is that
one's emotional response to a stimulus can affect the decisions taken by an
individual. When people take little time to evaluate a situation carefully, they might
base their decisions on their emotional response.
Example: The affect heuristic can be understood by the example of
advertisements. Advertisements can influence the emotions of consumers, so it
affects the purchasing decision of a consumer. The most common examples of
advertisements are the ads of fast food. When fast-food companies run the
advertisement, they hope to obtain a positive emotional response that pushes you to
positively view their products.
If someone carefully analyzes the benefits and risks of consuming fast food, they
might decide that fast food is unhealthy. But people rarely take time to evaluate
everything they see and generally make decisions based on their automatic emotional
response. So, Fast food companies present advertisements that rely on such type of
Affect heuristic for generating a positive emotional response which results in sales.


1.7 LOCAL SEARCH AND OPTIMIZATION PROBLEMS


Local search algorithms operate by searching from a start state to neighboring
states, keeping track of neither the paths nor the set of states that have been
reached. That means they are not systematic - they might never explore a portion of
the search space where a solution actually resides.
However, they have two key advantages: (1) they use very little memory; and (2)
they can often find reasonable solutions in large or infinite state spaces for which
systematic algorithms are unsuitable.
Local search algorithms can also solve optimization problems, in which the aim is
to find the best state according to an objective function.
To understand local search, consider the states of a problem laid out in a state-
space landscape, as shown in Figure.

Each point (state) in the landscape has an "elevation", defined by the value of the
objective function. If elevation corresponds to an objective function, then the aim is
to find the highest peak - a global maximum - and this is known as hill climbing. If
elevation corresponds to cost, then the aim is to find the lowest valley - a global
minimum - and this is known as gradient descent.

Hill-climbing Search
The hill-climbing search algorithm is the most basic local search technique. At
each step the current node is replaced by the best neighbor. The algorithm keeps
track of one current state and on each iteration moves to the neighboring state with
the highest value - that is, it heads in the direction that provides the steepest ascent.
It terminates when it reaches a "peak"
where no neighbor has a higher value. Hill climbing does not look ahead beyond the
immediate neighbors of the current state.

Algorithm: Hill Climbing Search


function HILL-CLIMBING(problem) returns a state that is a local maximum

current ← problem.INITIAL

while true do

neighbor ← a highest-valued successor state of current

if VALUE(neighbor) ≤ VALUE(current) then return current

current ← neighbor

Hill climbing is sometimes called greedy local search because it grabs a good
neighbor state without thinking ahead about where to go next.

Unfortunately, hill climbing can get stuck for any of the following reasons:
Local Maxima: A local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum.
Ridges: A ridge is a special form of the local maximum. It has an area which is
higher than its surrounding areas, but itself has a slope, and cannot be reached in a
single move. Ridges result in a sequence of local maxima that is very difficult for
greedy algorithms to navigate.
Plateaus: A plateau is a flat area of the state-space landscape. It can be a flat local
maximum, from which no uphill exit exists, or a shoulder, from which progress is
possible.
Many variants of hill climbing have been invented. Stochastic hill climbing
chooses at random from among the uphill moves; the probability of selection can
vary with the steepness of the uphill move. This usually converges more slowly than
steepest ascent, but in some state landscapes, it finds better solutions.
First-choice hill climbing implements stochastic hill climbing by generating
successors randomly until one is generated that is better than the current state. This
is a good strategy when a state has many (e.g., thousands of) successors.
Another variant is random-restart hill climbing, which adopts the adage "If at
first you don't succeed, try, try again." It conducts a series of hill-climbing searches
from randomly generated initial states until a goal is found. It is complete with
probability 1, because it will eventually generate a goal state as the initial state. If
each hill-climbing search has a probability of success p, then the expected number
of restarts required is 1/p. The expected number of steps is the cost of one
successful iteration plus (1 - p)/p times the cost of failure. For 8-queens, random-
restart hill climbing is very effective indeed; even for three million queens, the
approach can find solutions in seconds.

The success of hill climbing depends very much on the shape of the state-space
landscape: if there are few local maxima and plateaus, random-restart hill climbing
will find a good solution very quickly.
Example:
To illustrate hill climbing, consider the 8-queens problem. The complete-state
formulation is used here, which means that every state has all the components of a
solution, but they might not all be in the right place. In this case every state has 8
queens on the board, one per column. The initial state is chosen at random, and the
successors of a state are all possible states generated by moving a single queen to
another square in the same column (so each state has 8 × 7 = 56 successors). The
heuristic cost function is the number of pairs of queens that are attacking each other;
this will be zero only for solutions. (It counts as an attack if two pieces are in the
same line, even if there is an intervening piece between them.)


Figure (a) : The 8-queens problem: place 8 queens on a chess board so that no queen
attacks another. (A queen attacks any piece in the same row, column, or diagonal.)
The figure (b) shows the h values of all its successors.

Blocks world problem


Consider the blocks world problem with the four blocks A, B, C, D with the start
and goal states given below.

Assume the following two operations:


(i) Pick a block and put it on the table.
(ii) Pick a block and place it on another block
Solve the above problem using Hill Climbing algorithm and a suitable heuristic
function. Show the intermediate decisions and states

Solution:
Define the heuristic function:
h(x) = +1 for every block in the structure that is correctly positioned, and
h(x) = -1 for every incorrectly placed block in the structure.

1.8 ADVERSARIAL SEARCH


Adversarial search is a search, where we examine the problem which arises when
we try to plan ahead of the world and other agents are planning against us.
❖ In previous topics, we have studied the search strategies which are only
associated with a single agent that aims to find the solution which often
expressed in the form of a sequence of actions.
❖ But, there might be some situations where more than one agent is
searching for the solution in the same search space, and this situation
usually occurs in game playing.

❖ The environment with more than one agent is termed as multi-agent
environment, in which each agent is an opponent of other agent and
playing against each other. Each agent needs to consider the action of
other agent and effect of that action on their performance.
❖ So, Searches in which two or more players with conflicting goals are
trying to explore the same search space for the solution, are called
adversarial searches, often known as Games.
❖ Games are modeled as a Search problem and heuristic evaluation function,
and these are the two main factors which help to model and solve games
in AI.
Types of Games in AI:

                          Deterministic                     Chance Moves

Perfect information       Chess, Checkers, Go,              Backgammon, Monopoly
                          Othello

Imperfect information     Battleships, Blind tic-tac-toe    Bridge, Poker, Scrabble,
                                                            Nuclear war

❖ Perfect information: A game with the perfect information is that in which


agents can look into the complete board. Agents have all the information
about the game, and they can see each other moves also. Examples are
Chess, Checkers, Go, etc.
❖ Imperfect information: If in a game agents do not have all information
about the game and not aware with what's going on, such type of games
are called the game with imperfect information, such as tic-tac-toe,
Battleship, blind, Bridge, etc.
❖ Deterministic games: Deterministic games are those games which follow
a strict pattern and set of rules for the games, and there is no randomness
associated with them. Examples are chess, Checkers, Go, tic-tac-toe, etc.
❖ Non-deterministic games: Non-deterministic games are those which have
various unpredictable events and a factor of chance or luck. This
factor of chance or luck is introduced by either dice or cards. These are
random, and each action's response is not fixed. Such games are also called
stochastic games.
Example: Backgammon, Monopoly, Poker, etc.

Zero-Sum Game

❖ Zero-sum games are adversarial search which involves pure competition.


❖ In Zero-sum game each agent's gain or loss of utility is exactly
balanced by the losses or gains of utility of another agent.
❖ One player of the game tries to maximize one single value, while the
other player tries to minimize it.
❖ Each move by one player in the game is called as ply.
❖ Chess and tic-tac-toe are examples of a Zero-sum game.

Zero-sum game: Embedded thinking


The Zero-sum game involved embedded thinking in which one agent or player is
trying to figure out:
❖ What to do.
❖ How to decide the move
❖ Needs to think about his opponent as well
❖ The opponent also thinks what to do
Each of the players is trying to find out the response of his opponent to their
actions. This requires embedded thinking or backward reasoning to solve the game
problems in AI.

Formalization of the problem:


A game can be defined as a type of search in AI which can be formalized of
the following elements:
❖ Initial state: It specifies how the game is set up at the start.
❖ Player(s): It specifies which player has the move in a given state.
❖ Action(s): It returns the set of legal moves in state space.
❖ Result(s, a ): It is the transition model, which specifies the result of moves
in the state space.
❖ Terminal-Test(s): Terminal test is true if the game is over, else it is false
at any case. The state where the game ends is called terminal states.
❖ Utility(s, p): A utility function gives the final numeric value for a game
that ends in terminal states s for player p. It is also called payoff function.
For Chess, the outcomes are a win, loss, or draw and its payoff values are
+1, 0, ½. And for tic-tac-toe, utility values are +1, – 1, and 0.


Game tree:
A game tree is a tree where nodes of the tree are the game states and Edges of the
tree are the moves by players. Game tree involves initial state, actions function, and
result Function.
Example: Tic-Tac-Toe game tree:
The following figure is showing part of the game-tree for tic-tac-toe game.
Following are some key points of the game:
❖ There are two players MAX and MIN.
❖ Players have an alternate turn and start with MAX.
❖ MAX maximizes the result of the game tree
❖ MIN minimizes the result.
Example Explanation:
❖ From the initial state, MAX has 9 possible moves as he starts first. MAX
places x and MIN places o, and both players play alternately until we
reach a leaf node where one player has three in a row or all squares are
filled.
❖ For each node, both players will compute the minimax value, which is
the best achievable utility against an optimal adversary.
❖ Suppose both the players are well aware of the tic-tac-toe and playing the
best play. Each player is doing his best to prevent another one from
winning. MIN is acting against Max in the game.

❖ So in the game tree, we have a layer of MAX and a layer of MIN, and each
layer is called a ply. MAX places x, then MIN puts o to prevent MAX from
winning, and this game continues until a terminal node is reached.


❖ In this either MIN wins, MAX wins, or it's a draw. This game-tree is the
whole search space of possibilities that MIN and MAX are playing tic-tac-
toe and taking turns alternately.
Hence adversarial Search for the minimax procedure works as follows:
❖ It aims to find the optimal strategy for MAX to win the game.
❖ It follows the approach of Depth-first search.
❖ In the game tree, optimal leaf node could appear at any depth of the tree.
❖ The minimax values are propagated up the tree from the discovered
terminal nodes.
In a given game tree, the optimal strategy can be determined from the minimax
value of each node, written as MINIMAX(n). MAX prefers to move to a state of
maximum value and MIN prefers to move to a state of minimum value. Then:
For a state s:

MINIMAX(s) =
    UTILITY(s)                                   if TERMINAL-TEST(s)
    max_{a ∈ Actions(s)} MINIMAX(RESULT(s, a))   if PLAYER(s) = MAX
    min_{a ∈ Actions(s)} MINIMAX(RESULT(s, a))   if PLAYER(s) = MIN
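The definition above can be transcribed almost directly into code. In the sketch below, the tiny two-ply game tree is hypothetical (leaf utilities chosen for illustration), and the game's ACTIONS, RESULT, TERMINAL-TEST, and UTILITY functions are passed in as parameters.

```python
def minimax(state, player, actions, result, terminal, utility):
    """Direct transcription of the MINIMAX(s) definition above.
    `player` is 'MAX' or 'MIN'; the other arguments are the game's
    ACTIONS, RESULT, TERMINAL-TEST and UTILITY functions."""
    if terminal(state):
        return utility(state)
    nxt = 'MIN' if player == 'MAX' else 'MAX'
    values = [minimax(result(state, a), nxt, actions, result, terminal, utility)
              for a in actions(state)]
    return max(values) if player == 'MAX' else min(values)

# A tiny hypothetical two-ply game: MAX moves at A, MIN replies at B or C.
TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
LEAF = {'D': 3, 'E': 5, 'F': 2, 'G': 9}
root_value = minimax('A', 'MAX',
                     lambda s: TREE.get(s, []),
                     lambda s, a: a,           # an action simply names the child
                     lambda s: s in LEAF,
                     lambda s: LEAF[s])        # MAX can guarantee 3
```

MIN would answer B with D (value 3) and C with F (value 2), so MAX's best opening is B: the root's minimax value is max(3, 2) = 3.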

1.9 CONSTRAINT SATISFACTION PROBLEMS


A problem is solved when each variable has a value that satisfies all the
constraints on the variable. Such a problem is called a constraint satisfaction
problem, or CSP
Defining Constraint Satisfaction Problems

❖ A constraint satisfaction problem consists of three components, X, D and C :


X is a set of variables, {X1,…,Xn } .
D is a set of domains, one for each variable {D1,…,Dn}.
C is a set of constraints that specify allowable combinations of values.
A domain, Di, consists of a set of allowable values, {v1,…,vk}, for variable Xi.
❖ For example, a Boolean variable would have the domain {true, false}.
Different variables can have different domains of different sizes. Each
constraint Cj consists of a pair ⟨scope, rel⟩, where scope is a tuple of variables
that participate in the constraint and rel is a relation that defines the values that
those variables can take on.

❖ For example, if X1 and X2 both have the domain {1, 2, 3}, then the constraint

saying that X1 must be greater than X2 can be written

as ⟨(X1, X2), {(3,1), (3,2), (2,1)}⟩ or, more succinctly, as ⟨(X1, X2), X1 > X2⟩.
❖ CSPs deal with assignments of values to variables, {Xi = vi, Xj = vj ,…} .

❖ An assignment that does not violate any constraints is called a consistent or


legal assignment.
❖ A complete assignment is one in which every variable is assigned a value, and
a solution to a CSP is a consistent, complete assignment.
❖ A partial assignment is one that leaves some variables unassigned, and a
partial solution is a partial assignment that is consistent.

Example problem: Map coloring


Consider a map of Australia showing each of its states and territories. The task is
to color each region with either red, green, or blue in such a way that no two
neighboring regions have the same color. To formulate this as a CSP, the variables
are the regions:
X = {WA, NT, Q, NSW, V, SA, T}

The domain of every variable is the set Di = {red, green, blue}. The constraints
require neighboring regions to have distinct colors:
C = {SA ≠ WA,SA ≠ NT,SA ≠ Q,SA ≠ NSW,SA ≠ V,WA ≠ NT,NT ≠ Q,Q ≠
NSW, NSW ≠ V }.
There are many possible solutions to this problem, such as
{WA = red,NT = green,Q = red,NSW = green,V = red,SA = blue,T = red }.

It can be helpful to visualize a CSP as a constraint graph. The nodes of the graph
correspond to variables of the problem, and an edge connects any two variables that
participate in a constraint.
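The map-coloring CSP above can be solved with a minimal backtracking sketch. The adjacency list below encodes the constraint graph of the Australia problem; the solver assigns one region at a time and undoes an assignment when it hits a dead end.

```python
# A minimal backtracking solver for the Australia map-coloring CSP above.
NEIGHBORS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
COLORS = ['red', 'green', 'blue']

def backtrack(assignment, variables):
    """Assign one variable at a time; undo and retry on a dead end."""
    if len(assignment) == len(variables):
        return assignment                  # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for color in COLORS:
        # only try values consistent with already-colored neighbors
        if all(assignment.get(nb) != color for nb in NEIGHBORS[var]):
            assignment[var] = color
            if backtrack(assignment, variables):
                return assignment
            del assignment[var]            # undo and try the next color
    return None

solution = backtrack({}, list(NEIGHBORS))  # e.g. {'WA': 'red', 'NT': 'green', ...}
```

Any returned assignment is a complete, consistent solution in the sense defined above: every variable has a value and no constraint is violated.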
Example problem: Job-shop scheduling
Factories have the problem of scheduling a day's worth of jobs, subject to various
constraints. In practice, many of these problems are solved with CSP techniques.
Consider the problem of scheduling the assembly of a car. The whole job is
composed of tasks. Constraints can assert that one task must occur before another -
for example, a wheel must be installed before the hubcap is put on. Constraints can
also specify that a task takes a certain amount of time to complete.
Consider a small part of the car assembly, consisting of 15 tasks: install axles
(front and back), affix all four wheels (right and left, front and back), tighten nuts for
each wheel, affix hubcaps, and inspect the final assembly. The tasks can be
represented with 15 variables:
X = {AxleF, AxleB, WheelRF, WheelLF, WheelRB, WheelLB, NutsRF, NutsLF,
NutsRB, NutsLB, CapRF, CapLF, CapRB, CapLB, Inspect}.
Next, precedence constraints between individual tasks are represented. Whenever
a task T1 must occur before task T2, and task T1 takes duration d1 to complete, an
arithmetic constraint of the form
T1 + d1 ≤ T2
can be added.

In this example, the axles have to be in place before the wheels are put on, and it
takes 10 minutes to install an axle, hence the constraints can be written as
AxleF + 10 ≤ WheelRF ; AxleF + 10 ≤ WheelLF ; AxleB + 10 ≤ WheelRB; AxleB + 10 ≤
WheelLB.
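The axle/wheel precedence constraints above can be checked against a candidate schedule with a few lines of code. The start times below are hypothetical; each variable's value is the minute at which the task starts.

```python
# Checking the axle/wheel precedence constraints above against a
# candidate schedule (start times in minutes; the schedule is hypothetical).
constraints = [                    # (T1, d, T2) encodes T1 + d <= T2
    ('AxleF', 10, 'WheelRF'), ('AxleF', 10, 'WheelLF'),
    ('AxleB', 10, 'WheelRB'), ('AxleB', 10, 'WheelLB'),
]

def satisfies(schedule, constraints):
    """True iff every precedence constraint T1 + d <= T2 holds."""
    return all(schedule[t1] + d <= schedule[t2] for t1, d, t2 in constraints)

schedule = {'AxleF': 0, 'AxleB': 0, 'WheelRF': 10, 'WheelLF': 10,
            'WheelRB': 10, 'WheelLB': 10}
ok = satisfies(schedule, constraints)     # True: each wheel starts after its axle
```

Starting any wheel before minute 10 would violate the corresponding constraint, since each axle takes 10 minutes to install.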


Cryptarithmetic Problems : Examples

1. SEND + MORE = MONEY


5 4 3 2 1
S E N D
+ M O R E
c3 c2 c1
M O N E Y

Cryptarithmetic Problems: It is an arithmetic problem represented using letters.

It involves the decoding of digits represented by a character.

Constraints:
❖ Assign a decimal digit to each of the letters in such a way that the
answer is correct
❖ Assign decimal digit to letters
❖ Cannot assign different digits to same letters
❖ No two letters have the same digit
❖ Unique digit assigned to each letter
Rules:
❖ From column 5, M = 1, since it is the only carry possible from the sum of
two single-digit numbers (plus a carry) in column 4.
5 4 3 2 1
S E N D
+ 1 O R E
c3 c2 c1

1 O N E Y


❖ To produce a carry from column 4 to column 5, 'S + M' must be at least 9,


so 'S = 8 or 9' and 'S + M = 9 or 10', hence 'O = 0 or 1'. Since 'M = 1'
and digits must be distinct, 'O = 0'.
5 4 3 2 1
9 E N D
+ 1 O R E
c3 c2 c1
1 0 N E Y

❖ If there were a carry from column 3 to 4 then 'E = 9' and so 'N = 0'. But


'O = 0' already, so there is no carry, 'S = 9' and 'c3 = 0'.
❖ If there were no carry from column 2 to 3 then 'E = N', which is
impossible; therefore there is a carry, 'N = E + 1' and 'c2 = 1'.

❖ If there were no carry from column 1 to 2 then 'N + R = E mod 10' and


'N = E + 1', so 'E + 1 + R = E mod 10' and 'R = 9'; but 'S = 9' already, so
there must be a carry from column 1 to 2. Therefore 'c1 = 1' and 'R = 8'.
❖ To produce the carry 'c1 = 1' from column 1 to 2, there must be 'D + E =
10 + Y'. As Y cannot be 0 or 1, D + E is at least 12. D is at most 7 and E
is at least 5 (D cannot be 8 or 9 as those are already assigned). N is at
most 7 and 'N = E + 1', so 'E = 5 or 6'.
❖ If E were 6 and D + E at least 12, then D would be 7; but 'N = E + 1'
would make N = 7 as well, which is impossible. Therefore 'E = 5' and
'N = 6'.
❖ D + E is at least 12, hence 'D = 7' and 'Y = 2'.
Solution:

      9 5 6 7
  + 1 0 8 5
  1 0 6 5 2

Letter Digit Value


S 9
E 5
N 6
D 7
M 1
O 0
R 8
Y 2
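The derived assignment can be verified mechanically: interpret each word as a decimal number under the letter-to-digit table above and check the addition.

```python
# Verifying the derived assignment from the table above.
digits = {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}

def word_value(word, digits):
    """Read a word as a base-10 number under the given assignment."""
    value = 0
    for letter in word:
        value = value * 10 + digits[letter]
    return value

send = word_value('SEND', digits)     # 9567
more = word_value('MORE', digits)     # 1085
money = word_value('MONEY', digits)   # 10652 = 9567 + 1085
```

A full solver would search over digit assignments subject to the all-different and leading-digit constraints; this check confirms that the hand derivation above satisfies SEND + MORE = MONEY.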

Constraint Propagation : Inference in CSP


In CSPs there is a choice: an algorithm can search (choose a new variable
assignment from several possibilities) or do a specific type of inference called
constraint propagation: using the constraints to reduce the number of legal values for
a variable, which in turn can reduce the legal values for another variable, and so on.

The key idea is local consistency. If each variable is treated as a node in a graph
and each binary constraint as an arc, then the process of enforcing local consistency
in each part of the graph causes inconsistent values to be eliminated throughout the
graph.
There are different types of local consistency, which are as follows.

Node consistency
A single variable (corresponding to a node in the CSP network) is node-consistent
if all the values in the variable‘s domain satisfy the variable‘s unary constraints. For
example, in the variant of the Australia map-coloring problem where South
Australians dislike green, the variable SA starts with domain {red , green, blue}, and
this can be made node consistent by eliminating green, leaving SA with the reduced
domain {red , blue}. Thus a network is node-consistent if every variable in the
network is node-consistent. It is always possible to eliminate all the unary constraints
in a CSP by running node consistency.

Arc consistency

A variable in a CSP is arc-consistent if every value in its domain satisfies the


variable's binary constraints. More formally, Xi is arc-consistent with respect to
another variable Xj if for every value in the current domain Di there is some value in
the domain Dj that satisfies the binary constraint on the arc (Xi, Xj). A network is
arc-consistent if every variable is arc-consistent with every other variable. For
example, consider the constraint Y = X² where the domain of both X and Y is the set
of digits. This constraint can be written explicitly as
⟨(X, Y), {(0, 0), (1, 1), (2, 4), (3, 9)}⟩.

To make X arc-consistent with respect to Y, reduce X's domain to {0, 1, 2, 3}.
Likewise, making Y arc-consistent with respect to X reduces Y's domain to
{0, 1, 4, 9}, and the whole CSP is then arc-consistent.
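This reduction can be reproduced directly (an illustrative one-pass filter, not a general algorithm):

```python
# Worked example of the Y = X^2 constraint above: make X arc-consistent
# with respect to Y, and Y with respect to X, by one pass of domain filtering.
digits = set(range(10))
dom_x, dom_y = set(digits), set(digits)

# Keep x only if some y in Y's domain satisfies y == x*x, and vice versa.
dom_x = {x for x in dom_x if any(y == x * x for y in dom_y)}
dom_y = {y for y in dom_y if any(y == x * x for x in dom_x)}

print(sorted(dom_x))  # [0, 1, 2, 3]
print(sorted(dom_y))  # [0, 1, 4, 9]
```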

The most popular algorithm for arc consistency is called AC-3 .


function AC-3(csp) returns false if an inconsistency is found and true otherwise
    queue ← a queue of arcs, initially all the arcs in csp
    while queue is not empty do
        (Xi, Xj) ← POP(queue)
        if REVISE(csp, Xi, Xj) then
            if size of Di = 0 then return false
            for each Xk in Xi.NEIGHBORS – {Xj} do
                add (Xk, Xi) to queue
    return true

function REVISE(csp, Xi, Xj) returns true iff we revise the domain of Xi
    revised ← false
    for each x in Di do
        if no value y in Dj allows (x, y) to satisfy the constraint between Xi and Xj then
            delete x from Di
            revised ← true
    return revised


To make every variable arc-consistent, the AC-3 algorithm maintains a queue of


arcs to consider. Initially, the queue contains all the arcs in the CSP. AC-3 then pops
off an arbitrary arc (Xi , Xj ) from the queue and makes Xi arc-consistent with
respect to Xj . If this leaves Di unchanged, the algorithm just moves on to the next
arc. But if this revises Di (makes the domain smaller), then all arcs (Xk, Xi) are
added to the queue, where Xk is a neighbor of Xi. This has to be done because the
change in Di might enable further reductions in Dk, even if Xk has been considered
previously. If Di is revised down to nothing, then the whole CSP has no consistent
solution, and AC-3 can immediately return failure. Otherwise, it keeps checking,
trying to remove values from the domains of variables until no more arcs are in the
queue. At that point, a CSP that is equivalent to the original CSP is left - they both
have the same solutions - but the arc-consistent CSP will in most cases be faster to
search because its variables have smaller domains.
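A possible Python rendering of AC-3 and REVISE is sketched below. The CSP representation (domains as sets, constraints as predicates keyed by directed arcs, with both directions of every constraint listed) is an assumption made for illustration:

```python
from collections import deque

def revise(domains, constraints, xi, xj):
    """Remove values from Di that have no support in Dj; return True if revised."""
    revised = False
    for x in set(domains[xi]):
        if not any(constraints[(xi, xj)](x, y) for y in domains[xj]):
            domains[xi].discard(x)
            revised = True
    return revised

def ac3(domains, constraints):
    """Return False if some domain becomes empty, True otherwise."""
    queue = deque(constraints)           # initially all the arcs in the CSP
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False
            # Re-examine every arc into Xi except the one from Xj.
            for (xk, xl) in constraints:
                if xl == xi and xk != xj:
                    queue.append((xk, xi))
    return True

# The Y = X^2 example from the previous section:
domains = {'X': set(range(10)), 'Y': set(range(10))}
constraints = {('X', 'Y'): lambda x, y: y == x * x,
               ('Y', 'X'): lambda y, x: y == x * x}
print(ac3(domains, constraints))        # True
print(sorted(domains['X']), sorted(domains['Y']))
```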

Path Consistency
Path consistency tightens the binary constraints by using implicit constraints that
are inferred by looking at triples of variables.
A two-variable set {Xi , Xj } is path-consistent with respect to a third variable Xm
if, for every assignment {Xi = a, Xj = b} consistent with the constraints on {Xi , Xj },
there is an assignment to Xm that satisfies the constraints on {Xi , Xm} and {Xm, Xj }.
This is called path consistency because one can think of it as looking at a path from
Xi to Xj with Xm in the middle.
Consider how path consistency fares in coloring the Australia map with two
colors. We can make the set {WA, SA} path-consistent with respect to NT by
enumerating the consistent assignments to the set. There are only two:
{WA = red, SA = blue} and {WA = blue, SA = red}. In both of these assignments
NT can be neither red nor blue (because it would conflict with either WA or SA).
Because there is no valid choice for NT, both assignments are eliminated, leaving
no valid assignment for {WA, SA}. Therefore, there can be no solution to this
problem.
K-consistency
Stronger forms of propagation can be defined with the notion of k-consistency. A
CSP is k-consistent if, for any set of k – 1 variables and for any consistent assignment
to those variables, a consistent value can always be assigned to any kth variable.
A CSP is strongly k-consistent if it is k-consistent and is also (k – 1)-consistent,
(k – 2)-consistent, all the way down to 1-consistent.


Global constraints
A global constraint is one involving an arbitrary number of variables (but not
necessarily all variables). Global constraints occur frequently in real problems and
can be handled by special-purpose algorithms. For example, the Alldiff constraint
says that all the variables involved must have distinct values (as in the
cryptarithmetic problem and Sudoku puzzles).
One simple form of inconsistency detection for Alldiff constraints works as
follows: if m variables are involved in the constraint, and if they have n possible
distinct values altogether, and m > n, then the constraint cannot be satisfied.
This leads to the following simple algorithm:
❖ First, remove any variable in the constraint that has a singleton domain,
and delete that variable's value from the domains of the remaining
variables.
❖ Repeat as long as there are singleton variables.
❖ If at any point an empty domain is produced or there are more variables
than domain values left, then an inconsistency has been detected.
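The steps above can be sketched as follows; the function name and the encoding of the CSP (a dict from variables to value sets) are illustrative assumptions:

```python
# A sketch of the simple Alldiff propagation described above: repeatedly
# remove singleton variables, deleting their values from the other domains,
# and report inconsistency if a domain empties or variables outnumber values.

def alldiff_consistent(domains):
    domains = {v: set(d) for v, d in domains.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for var, dom in list(domains.items()):
            if len(dom) == 1:
                value = next(iter(dom))
                del domains[var]                 # this variable is settled
                for other in domains.values():
                    if value in other:
                        other.discard(value)
                        changed = True
        remaining = set().union(*domains.values()) if domains else set()
        if any(len(d) == 0 for d in domains.values()):
            return False                         # empty domain detected
        if len(domains) > len(remaining):
            return False                         # more variables than values
    return True

# After removing the singleton NT = g, three variables share only one value:
print(alldiff_consistent({'NT': {'g'}, 'Q': {'g', 'b'},
                          'SA': {'g', 'b'}, 'NSW': {'g', 'b'}}))  # False
```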

Sudoku

❖ The popular Sudoku puzzle has introduced millions of people to


constraint satisfaction problems, although they may not realize it.
❖ A Sudoku board consists of 81 squares, some of which are initially filled
with digits from 1 to 9. The puzzle is to fill in all the remaining squares
such that no digit appears twice in any row, column, or 3 × 3 box

❖ A row, column, or box is called a unit.


Fig. 1.11.

A Sudoku puzzle can be considered a CSP with 81 variables, one for each square.
The variable names A1 through A9 are used for the top row (left to right), down to I1
through I9 for the bottom row. The empty squares have the domain {1, 2, 3, 4, 5, 6,
7, 8, 9} and the pre-filled squares have a domain consisting of a single value. In
addition, there are 27 different Alldiff constraints, one for each unit (row, column,
and box of 9 squares):

Alldiff(A1, A2, A3, A4, A5, A6, A7, A8, A9)
Alldiff(B1, B2, B3, B4, B5, B6, B7, B8, B9)
...
Alldiff(A1, B1, C1, D1, E1, F1, G1, H1, I1)
Alldiff(A2, B2, C2, D2, E2, F2, G2, H2, I2)
...
Alldiff(A1, A2, A3, B1, B2, B3, C1, C2, C3)
Alldiff(A4, A5, A6, B4, B5, B6, C4, C5, C6)
...
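The 27 units can be generated mechanically; the sketch below assumes the A1..I9 naming convention described above:

```python
# Generate the 27 Alldiff units of the Sudoku CSP: 9 rows, 9 columns, 9 boxes.
ROWS, COLS = 'ABCDEFGHI', '123456789'

units = []
units += [[r + c for c in COLS] for r in ROWS]        # 9 rows
units += [[r + c for r in ROWS] for c in COLS]        # 9 columns
units += [[r + c for r in rs for c in cs]             # 9 boxes of 3 x 3
          for rs in ('ABC', 'DEF', 'GHI')
          for cs in ('123', '456', '789')]

print(len(units))   # 27
print(units[0])     # the first row unit: A1, A2, ..., A9
```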

Backtracking Search for CSP


The term backtracking search is used for a depth-first search that chooses values
for one variable at a time and backtracks when a variable has no legal values left to
assign. The algorithm is shown in Figure.

function BACKTRACKING-SEARCH(csp) returns a solution, or failure
    return BACKTRACK({ }, csp)

function BACKTRACK(assignment, csp) returns a solution, or failure
    if assignment is complete then return assignment
    var ← SELECT-UNASSIGNED-VARIABLE(csp)
    for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
        if value is consistent with assignment then
            add {var = value} to assignment
            inferences ← INFERENCE(csp, var, value)
            if inferences ≠ failure then
                add inferences to assignment
                result ← BACKTRACK(assignment, csp)
                if result ≠ failure then return result
        remove {var = value} and inferences from assignment
    return failure
It repeatedly chooses an unassigned variable, and then tries all values in the
domain of that variable in turn, trying to find a solution. If an inconsistency is
detected, then BACKTRACK returns failure, causing the previous call to try another
value.
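A minimal Python sketch of this procedure is shown below for a map-coloring CSP. It uses trivial variable and value ordering and performs no inference, so it corresponds to the plain backtracking skeleton rather than the full algorithm with INFERENCE; the function names and CSP encoding are our own illustration:

```python
def backtracking_search(variables, domains, neighbors):
    def consistent(var, value, assignment):
        # A value is consistent if no assigned neighbour already has it.
        return all(assignment.get(n) != value for n in neighbors[var])

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return assignment                    # every variable assigned
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]              # undo and try the next value
        return None                              # no legal value: backtrack

    return backtrack({})

# A three-region fragment of the Australia map-coloring problem:
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
print(backtracking_search(variables, domains, neighbors))
```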
Part of the search tree for the Australia problem is shown in Figure ,

Fig. 1.12.

where variables are assigned in the order WA, NT, Q, . . .. Because the
representation of CSPs is standardized, there is no need to supply
BACKTRACKING-SEARCH with a domain-specific initial state, action function,
transition model, or goal test.

TWO MARKS QUESTIONS AND ANSWERS (PART A)


1. When was the first A.I. idea proposed?
In the year 1943, Warren McCulloch and Walter Pitts proposed a model of
artificial neurons.
2. What is A.I?
Artificial Intelligence is a branch of computer science that deals with
developing intelligent machines which can behave like human, think like
human, and has ability to take decisions by their own.
Artificial Intelligence is a combination of two words Artificial and
Intelligence, which refers to man-made intelligence. Therefore, when machines
are equipped with man-made intelligence to perform intelligent tasks similar to
humans, it is known as Artificial Intelligence.

3. List Some Applications of A.I?


a. Game Playing:
AI is widely used in Gaming. Different strategic games such as Chess,
where the machine needs to think logically, and video games to provide
real-time experiences use Artificial Intelligence.
b. Robotics:
Artificial Intelligence is commonly used in the field of Robotics to
develop intelligent robots. AI implemented robots use real-time updates to
sense any obstacle in their path and can change the path instantly. AI
robots can be used for carrying goods in hospitals and industries and can
also be used for other different purposes.
c. Healthcare:
In the healthcare sector, AI has diverse uses. In this field, AI can be used


to detect diseases and cancer cells. It also helps in finding new drugs with
the use of historical data and medical intelligence.

4. What are the Four Phases of Problem solving Agents?


a. Goal Formulation
b. Problem Formulation
c. Search
d. Execution

5. List the Types of Artificial Intelligence?

6. Difference between Super AI & Weak AI?

1. Weak AI: Narrow AI is a type of AI which is able to perform a dedicated
task with intelligence. The most common and currently available AI is
Narrow AI in the world of Artificial Intelligence.
Super AI: Super AI is a level of intelligence of systems at which machines
could surpass human intelligence and can perform any task better than
humans, with cognitive properties. It is an outcome of General AI.

2. Weak AI: Narrow AI cannot perform beyond its field or limitations, as it
is trained only for one specific task; hence it is also termed weak AI.
Narrow AI can fail in unpredictable ways if it goes beyond its limits.
Super AI: Key characteristics of Super AI include the ability to think,
reason, solve puzzles, make judgments, plan, learn, and communicate on
its own.

7. What are the Search algorithm Terminologies?


Search: Searching is a step by step procedure to solve a search-problem in a
given search space. A search problem can have three main factors:


a. Search Space: Search space represents a set of possible solutions,


which a system may have
b. Start State: It is a state from where agent begins the search.
c. Goal Test: It is a function which observes the current state and returns
whether the goal state has been achieved or not.
d. Search tree: A tree representation of search problem is called Search tree. The
root of the search tree is the root node which is corresponding to the initial state.

e. Actions: It gives the description of all the available actions to the agent.

f. Transition model: A description of what each action does can be represented as a
transition model.

g. Path Cost: It is a function which assigns a numeric cost to each path.

h. Solution: It is an action sequence which leads from the start node to the goal
node.

i. Optimal Solution: A solution that has the lowest path cost among all solutions.

8. What are the types of Search Algorithm?

9. What is Blind Search and it types?


The uninformed search does not contain any domain knowledge, such as
closeness or the location of the goal. It operates in a brute-force way, as it only
includes information about how to traverse the tree and how to identify leaf and
goal nodes. Uninformed search applies a way in which the search tree is searched
without any information about the search space, such as the initial state,
operators, and test for the goal, so it is also called blind search. It examines
each node of the tree until it achieves the goal node.


It can be divided into five main types:
a. Breadth-first search
b. Uniform cost search
c. Depth-first search
d. Iterative deepening depth-first search
e. Bidirectional Search

10. Write about Informed search?


Informed search algorithms use domain knowledge. In an informed search,
problem information is available which can guide the search. Informed search
strategies can find a solution more efficiently than an uninformed search
strategy. Informed search is also called a Heuristic search.
A heuristic is a technique that is not always guaranteed to find the best
solution, but is guaranteed to find a good solution in reasonable time.
Informed search can solve complex problems that could not be solved in any
other way.
The traveling salesman problem is a classic problem tackled with informed
search. Two common informed search algorithms are:
1. Greedy Search
2. A* Search

11. Discuss about BFS & DFS?


Breadth-first Search:
a. Breadth-first search is the most common search strategy for traversing a
tree or graph. This algorithm searches breadthwise in a tree or graph, so it
is called breadth-first search.
b. The BFS algorithm starts searching from the root node of the tree and
expands all successor nodes at the current level before moving to nodes
of the next level.

c. The breadth-first search algorithm is an example of a general-graph


search algorithm.
d. Breadth-first search is implemented using a FIFO queue data structure.
Depth-first Search
e. Depth-first search is a recursive algorithm for traversing a tree or graph
data structure.
f. It is called the depth-first search because it starts from the root node and


follows each path to its greatest depth node before moving to the next
path.
g. DFS uses a stack data structure for its implementation.
h. The process of the DFS algorithm is similar to the BFS algorithm.
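The contrast between the two strategies can be sketched in a few lines; the example graph and function names are our own illustration:

```python
from collections import deque

# BFS uses a FIFO queue, DFS a LIFO stack; otherwise the loops are alike.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
         'D': [], 'E': [], 'F': []}

def bfs(start):
    order, frontier, seen = [], deque([start]), {start}
    while frontier:
        node = frontier.popleft()            # FIFO: oldest node first
        order.append(node)
        for child in graph[node]:
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return order

def dfs(start):
    order, frontier, seen = [], [start], set()
    while frontier:
        node = frontier.pop()                # LIFO: newest node first
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        frontier.extend(reversed(graph[node]))
    return order

print(bfs('A'))  # ['A', 'B', 'C', 'D', 'E', 'F'] - level by level
print(dfs('A'))  # ['A', 'B', 'D', 'E', 'C', 'F'] - deepest path first
```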

12. Explain about heuristic search strategies?


A heuristic is a technique that is used to solve a problem faster than the
classic methods. These techniques are used to find the approximate solution of a
problem when classical methods do not. Heuristics are said to be the problem-
solving techniques that result in practical and quick solutions.
Heuristics are strategies that are derived from past experience with similar
problems. Heuristics use practical methods and shortcuts used to produce the
solutions that may or may not be optimal, but those solutions are sufficient in a
given limited timeframe.

13. Why do we need heuristics?


Heuristics are used in situations in which there is the requirement of a short-
term solution. On facing complex situations with limited resources and time,
Heuristics can help the companies to make quick decisions by shortcuts and
approximated calculations. Most of the heuristic methods involve mental
shortcuts to make decisions on past experiences.

14. What is Local maxima and Ridges in Hill Climbing?


Local Maxima: A local maximum is a peak that is higher than each of its
neighboring states but lower than the global maximum.
Ridges: A ridge is a special form of the local maximum. It has an area which
is higher than its surrounding areas, but itself has a slope, and cannot be reached
in a single move. Ridges result in a sequence of local maxima that is very
difficult for greedy algorithms to navigate.

15. Give an real life example of Heuristics search?


16. Types of games in A.I?


a. Perfect information: A game with the perfect information is that in
which agents can look into the complete board. Agents have all the
information about the game, and they can see each other moves also.
Examples are Chess, Checkers, Go, etc.
b. Imperfect information: If agents in a game do not have all information
about the game and are not aware of what is going on, such games are
called games with imperfect information, such as Battleship, Bridge,
and most card games.
c. Deterministic games: Deterministic games are those games which follow
a strict pattern and set of rules for the games, and there is no randomness
associated with them. Examples are chess, Checkers, Go, tic-tac-toe, etc.

17. Define CSP.


A problem is solved when each variable has a value that satisfies all the
constraints on the variable. Such a problem is called a constraint satisfaction
problem, or CSP

18. What is K-Consistency?


Stronger forms of propagation can be defined with the notion of
k-consistency. A CSP is k-consistent if, for any set of k – 1 variables and for any
consistent assignment to those variables, a consistent value can always be
assigned to any kth variable.
A CSP is strongly k-consistent if it is k-consistent and is also (k – 1)-consistent,
(k – 2)-consistent, all the way down to 1-consistent.

19. Draw a Game Tree?


PART B& C

1. Give your detailed views on A.I.

2. Solve the Cryptarithmetic problem CSP – SEND + MORE = MONEY

3. Explain Hill Climbing Search algorithm

4. Write briefly about Adversarial Search?

5. Write about Heuristic search techniques in AI?

6. Write about CSP with suitable example?

7. Explain the types of Uninformed search algorithms?

8. Explain the types of Informed search algorithm?

9. Explain the steps followed by problem-solving agents.

10. What are the applications of A.I.?

******************
