Unit - 1
Artificial Intelligence (AI) is a branch of science which deals with helping machines find
solutions to complex problems in a more human-like fashion. This generally involves borrowing
characteristics from human intelligence, and applying them as algorithms in a computer friendly
way. A more or less flexible or efficient approach can be taken depending on the requirements
established, which influences how artificial the intelligent behaviour appears. AI is generally
associated with Computer Science, but it has many important links with other fields such as
Maths, Psychology, Cognition, Biology and Philosophy, among many others. Our ability to
combine knowledge from all these fields will ultimately benefit our progress in the quest of
creating an intelligent artificial being.
1.1.2 HISTORY OF AI
The origin of artificial intelligence lies in the earliest days of machine computation. During the 1940s
and 1950s, AI began to grow with the emergence of the modern computer. Among the first researchers to
attempt to build intelligent programs were Newell and Simon. Their first well-known program, Logic
Theorist, proved statements using the accepted rules of logic and a problem-solving strategy of their own
design. By the late fifties, programs existed that could do a passable job of translating technical documents,
and it was seen as only a matter of extra databases and more computing power to apply the techniques to
less formal, more ambiguous texts. Most problem-solving work revolved around the work of Newell, Shaw
and Simon on the General Problem Solver (GPS). Unfortunately the GPS did not fulfil its promise, and not
simply because of a lack of computing capacity. In the 1970s the most important concept of AI was
developed, known as the Expert System, which represents the knowledge of an expert as a set of rules. The
application area of expert systems is very large. The 1980s saw the development of neural networks as a
method of learning from examples.
Prof. Peter Jackson (University of Edinburgh) classified the history of AI into three periods as:
1. Classical
2. Romantic
3. Modern
1. Classical Period:
It started from 1950. In 1956, the concept of Artificial Intelligence came into existence. During this
period, the main research work carried out included game playing, theorem proving and the concept of the
state space approach for solving a problem.
2. Romantic Period:
It started from the mid-1960s and continued until the mid-1970s. During this period people were
interested in making machines understand, by which is usually meant the understanding of natural
language. During this period the knowledge representation technique "semantic net" was developed.
3. Modern Period:
It started from the 1970s and continues to the present day. This period is devoted to solving more complex
problems and includes research on both the theoretical and practical aspects of Artificial Intelligence. This
period includes the birth of concepts like Expert Systems, Artificial Neurons and Pattern Recognition.
Research on the various advanced concepts of Pattern Recognition and Neural Networks is still going on.
1.1.2.1 COMPONENTS OF AI
There are three types of components in AI
1) Hardware Components of AI
a) Pattern Matching
b) Logic Representation
c) Symbolic Processing
d) Numeric Processing
e) Problem Solving
f) Heuristic Search
g) Natural Language processing
h) Knowledge Representation
i) Expert System
j) Neural Network
k) Learning
l) Planning
m) Semantic Network
2) Software Components
a) Machine Language
b) Assembly language
c) High level Language
d) LISP Language
e) Fourth generation Language
f) Object Oriented Language
g) Distributed Language
h) Natural Language
i) Particular Problem Solving Language
3) Architectural Components
a) Uniprocessor
b) Multiprocessor
c) Special Purpose Processor
d) Array Processor
e) Vector Processor
f) Parallel Processor
g) Distributed Processor
1. AI is the study of how to make computers do things which, at the moment, people do
better. This is ephemeral, as it refers to the current state of computer science, and it
excludes a major area: problems that cannot be solved well either by computers or by
people at the moment.
2. AI is a field of study that encompasses computational techniques for performing tasks
that apparently require intelligence when performed by humans.
3. AI is the branch of computer science that is concerned with the automation of intelligent
behaviour. A I is based upon the principles of computer science namely data structures
used in knowledge representation, the algorithms needed to apply that knowledge and the
languages and programming techniques used in their implementation.
4. AI is the field of study that seeks to explain and emulate intelligent behaviour in terms of
computational processes.
5. AI is about generating representations and procedures that automatically or autonomously
solve problems heretofore solved by humans.
6. A I is the part of computer science concerned with designing intelligent computer
systems, that is, computer systems that exhibit the characteristics we associate with
intelligence in human behaviour such as understanding language, learning, reasoning and
solving problems.
7. A I is the study of mental faculties through the use of computational models.
8. A I is the study of the computations that make it possible to perceive, reason, and act.
9. AI is the exciting new effort to make computers think: machines with minds, in the full
and literal sense.
10. AI is concerned with developing computer systems that can store knowledge and
effectively use the knowledge to help solve problems and accomplish tasks. This brief
statement sounds a lot like one of the commonly accepted goals in the education of
humans. We want students to learn (gain knowledge) and to learn to use this knowledge
to help solve problems and accomplish tasks.
In contrast, weak AI is not so enthusiastic about the outcomes of AI; it simply says that some
thinking-like features can be added to computers to make them more useful tools. It says that computers
cannot be made intelligent equal to human beings unless constructed significantly differently. Its
proponents claim that computers may be similar to human experts but not equal in any case. Generally,
weak AI refers to the use of software to study or accomplish specific problem-solving tasks that do not
encompass the full range of human cognitive abilities. An example of weak AI would be a chess program.
Weak AI programs cannot be called "intelligent" because they cannot really think.
1.1.4 TASK DOMAIN OF AI
- Perception
• Machine Vision: It is easy to interface a TV camera to a computer and get an
image into memory; the problem is understanding what the image represents.
Vision takes lots of computation; in humans, roughly 10% of all calories
consumed are burned in vision computation.
• Speech Understanding: Speech understanding is available now. Some systems
must be trained for the individual user and require pauses between words.
Understanding continuous speech with a larger vocabulary is harder.
• Touch(tactile or haptic) Sensation: Important for robot assembly tasks.
- Robotics Although industrial robots have been expensive, robot hardware can be cheap: Radio
Shack has sold a working robot arm and hand for $15. The limiting factor in application of
robotics is not the cost of the robot hardware itself. What is needed is perception and intelligence
to tell the robot what to do; ``blind'' robots are limited to very well-structured tasks (like spray
painting car bodies).
- Planning Planning attempts to order actions to achieve goals. Planning applications include
logistics, manufacturing scheduling, planning manufacturing steps to construct a desired product.
There are huge amounts of money to be saved through better planning.
- Expert Systems Expert Systems attempt to capture the knowledge of a human expert and make
it available through a computer program. There have been many successful and economically
valuable applications of expert systems. Expert systems provide the following benefits
• Reducing skill level needed to operate complex devices.
• Diagnostic advice for device repair.
• Interpretation of complex data.
• ``Cloning'' of scarce expertise.
• Capturing knowledge of expert who is about to retire.
• Combining knowledge of multiple experts.
- Theorem Proving Proving mathematical theorems might seem to be mainly of academic interest.
However, many practical problems can be cast in terms of theorems. A general theorem prover can
therefore be widely applicable.
Examples:
• Automatic construction of compiler code generators from a description of a CPU's instruction set.
• J Moore and colleagues proved correctness of the floating-point division algorithm on AMD CPU
chip.
- Game Playing Games are good vehicles for research because they are well formalized, small, and
self-contained. They are therefore easily programmed. Games can be good models of competitive
situations, so principles discovered in game-playing programs may be applicable to practical
problems.
To understand what exactly artificial intelligence is, we illustrate some common problems. Problems
dealt with in artificial intelligence generally use a common term called 'state'. A state represents a
status of the solution at a given step of the problem solving procedure. The solution of a problem,
thus, is a collection of the problem states. The problem solving procedure applies an operator to a
state to get the next state. Then it applies another operator to the resulting state to derive a new state.
The process of applying an operator to a state and its subsequent transition to the next state, thus, is
continued until the goal (desired) state is derived. Such a method of solving a problem is generally
referred to as the state space approach. For example, in order to solve the problem of playing a game, which
is restricted here to two-person table or board games, we require the rules of the game and the targets for
winning, as well as a means of representing positions in the game. The opening position can be
defined as the initial state and a winning position as a goal state; there can be more than one. Legal
moves allow for transfer from the initial state to other states leading to a goal state. However, the rules
are far too copious in most games, especially chess, where the possibilities far exceed the number of
particles in the universe. Thus the rules cannot in general be supplied accurately and computer programs
cannot easily handle them. The storage also presents another problem, but searching can be achieved by
hashing. The number of rules that are used must be minimised and the set can be produced by
expressing each rule in as general a form as possible. The representation of games in this way leads
to a state space representation and it is natural for well organised games with some structure. This
representation allows for the formal definition of a problem which necessitates the movement from a
set of initial positions to one of a set of target positions. It means that the solution involves using
known techniques and a systematic search. This is quite a common method in AI.
The control strategy is again not fully discussed but the AI program needs a structure to facilitate the
search which is a characteristic of this type of program.
Example:
The water jug problem: There are two jugs called Four and Three; Four holds a maximum of four
gallons and Three a maximum of three gallons. How can we get exactly 2 gallons into jug Four? The state
space is the set of ordered pairs giving the number of gallons in the pair of jugs at any time, i.e. (four,
three), where four = 0, 1, 2, 3, 4 and three = 0, 1, 2, 3. The start state is (0, 0) and the goal state is
(2, n), where n is a don't-care value, since jug Three may hold anything from 0 to 3 gallons. Two of the
production rules for solving this problem are shown below:
Rule 11: (four, three) if four < 4 → (4, three − diff): pour diff = 4 − four gallons into Four from Three.
Rule 12: (four, three) if three < 3 → (four − diff, 3): pour diff = 3 − three gallons into Three from Four.
One solution path, showing the state after each rule application (the rule numbers refer to the full rule
set for this problem, of which only rules 11 and 12 are reproduced above), is:
Four  Three  Rule applied
0     0      (start state)
0     3      2
3     0      7
3     3      2
4     2      11
0     2      3
2     0      10
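As an illustration of the state space approach, the production rules above can be encoded and searched mechanically. The following Python sketch is only illustrative: the rules are written as a successor function rather than as the numbered rule list, and the function names are our own. It performs a breadth-first search over the (four, three) states and prints one shortest sequence of states that leaves 2 gallons in jug Four.

# Illustrative sketch: the water jug production rules as a successor function,
# searched breadth-first over states (four, three).
from collections import deque

CAP_FOUR, CAP_THREE = 4, 3

def successors(state):
    four, three = state
    results = []
    if four < CAP_FOUR:                      # fill the 4-gallon jug
        results.append((CAP_FOUR, three))
    if three < CAP_THREE:                    # fill the 3-gallon jug
        results.append((four, CAP_THREE))
    if four > 0:                             # empty the 4-gallon jug
        results.append((0, three))
    if three > 0:                            # empty the 3-gallon jug
        results.append((four, 0))
    if four < CAP_FOUR and three > 0:        # rule 11: pour from Three into Four
        diff = min(CAP_FOUR - four, three)
        results.append((four + diff, three - diff))
    if three < CAP_THREE and four > 0:       # rule 12: pour from Four into Three
        diff = min(CAP_THREE - three, four)
        results.append((four - diff, three + diff))
    return results

def solve(start=(0, 0), goal_amount=2):
    """Breadth-first search for a state with goal_amount gallons in jug Four."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal_amount:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(solve())   # prints one shortest sequence of states ending with 2 gallons in jug Four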
Control strategies.
A good control strategy should satisfy the following requirements. The first requirement is that it causes
motion: in a game-playing program the pieces move on the board, and in the water jug problem water
is used to fill jugs. The second requirement is that it is systematic; it would not be sensible to fill a jug
and empty it repeatedly, nor in a game would it be advisable to move a piece round and round the board
in a cyclic way. We shall initially consider two systematic approaches to searching.
Monotonic learning is when an agent may not learn any knowledge that contradicts what it already
knows. For example, it may not replace a statement with its negation. Thus, the knowledge base may
only grow with new facts in a monotonic fashion. The advantages of monotonic learning are:
Non-monotonic learning is when an agent may learn knowledge that contradicts what it already
knows. So it may replace old knowledge with new if it believes there is sufficient reason to do so.
The advantages of non-monotonic learning are:
1. increased applicability to real domains,
A related property is the consistency of the knowledge. If an architecture must maintain a consistent
knowledge base then any learning strategy it uses must be monotonic.
A problem may have different aspects of representation and explanation. In order to choose the most
appropriate method for a particular problem, it is necessary to analyze the problem along several key
dimensions. Some of the main key features of a problem are given below.
(a) A set of production rules, which are of the form A→B. Each rule consists of a left-hand side that
represents the current problem state and a right-hand side that represents an output state. A rule is
applicable if its left-hand side matches the current problem state.
(b) A database, which contains all the appropriate information for the particular task. Some parts of
the database may be permanent, while other parts may pertain only to the solution of the current
problem.
(c) A control strategy that specifies the order in which the rules will be compared to the database and a
way of resolving the conflicts that arise when several rules match simultaneously.
(d) A rule applier, which checks the applicability of a rule by matching the current state with the left-hand
side of the rule and finds the appropriate rule from the database of rules.
The important roles played by production systems include a powerful knowledge representation scheme.
A production system not only represents knowledge but also action. It acts as a bridge between AI and
expert systems. Production system provides a language in which the representation of expert knowledge
is very natural. We can represent knowledge in a production system as a set of rules of the form
If (condition) Then (action)
along with a control system and a database. The control system serves as a rule interpreter and sequencer.
The database acts as a context buffer, which records the conditions evaluated by the rules and information
on which the rules act. The production rules are also known as condition – action, antecedent –
consequent, pattern – action, situation – response, feedback – result pairs.
For example, a production rule for the water jug problem might read: IF the 4-gallon jug is not full THEN fill the 4-gallon jug (a condition – action pair).
The production system can be classified as monotonic, non-monotonic, partially commutative and
commutative.
1. Simplicity: The structure of each sentence in a production system is unique and uniform as they use
“IF-THEN” structure. This structure provides simplicity in knowledge representation. This feature of
production system improves the readability of production rules.
2. Modularity: This means that production rules code the knowledge available in discrete pieces.
Information can be treated as a collection of independent facts which may be added to or deleted from
the system with essentially no deleterious side effects.
3. Modifiability: This means the facility of modifying rules. It allows the development of production
rules in a skeletal form first, which are then refined to suit a specific application.
4. Knowledge intensive: The knowledge base of production system stores pure knowledge. This part
does not contain any type of control or programming information. Each production rule is normally
written as an English sentence; the problem of semantics is solved by the very structure of the
representation.
Step 1:
Analyze the problem to get the starting state and goal state.
Step 2:
Step 3:
Find out the production rules from the initial database for moving from the starting state towards the goal state.
Step 4:
Select some rules from the set of rules that can be applied to the data.
Step 5:
Apply those rules to the initial state and proceed to get the next state.
Step 6:
Determine the new states generated after applying the rules and accordingly make one of them the current state.
Step 7:
Finally, obtain some information about the goal state from the most recent current state and reach the
goal state.
Step 8:
Exit.
After applying the above rules, a user may get the solution of the problem from a given state to another
state. Let us take a few examples.
Water Jug Problem
Definition:
Some jugs are given, none of which is calibrated. At least one of the jugs is filled with water. The process
through which we divide the whole water among the jugs according to the question is called the water jug
problem.
Procedure:
Suppose that you are given 3 jugs A,B,C with capacities 8,5 and 3 liters respectively but are not calibrated
(i.e. no measuring mark will be there). Jug A is filled with 8 liters of water. By a series of pouring back
and forth among the 3 jugs, divide the 8 liters into 2 equal parts i.e. 4 liters in jug A and 4 liters in jug B.
How?
In this problem, the start state is that jug A contains 8 liters of water whereas jug B and jug C are
empty. The production rules involve pouring some amount of water from one jug into another. The
search will find the sequence of production rules which transforms the initial state into the final state. The
state space for this problem can be described by the set of ordered triples (A, B, C), where variable A
represents the 8-liter jug, variable B the 5-liter jug and variable C the 3-liter jug respectively.
Figure
Step 1:
In this step, the initial state is (8, 0, 0), as jug B and jug C are empty. The water of jug A can be poured
as follows:
(5, 0, 3) means 3 liters poured into jug C;
(3, 5, 0) means 5 liters poured into jug B;
(0, 5, 3) means 5 liters poured into jug B and 3 liters into jug C, so that jug A becomes empty.
Step2:
In this step, start with the first current state of step 1, i.e. (5, 0, 3). This state can only be developed
further by pouring the 3 liters of water of jug C into jug B, so the next state will be (5, 3, 0). Next, come
to the second current state of step 1, i.e. (3, 5, 0). This state can be developed only by pouring the 5 liters
of water of jug B into jug C; the remaining water in jug B will then be 2 liters, so the state will be (3, 2, 3).
Finally, come to the third current state of step 1, i.e. (0, 5, 3). From this state no new state can be generated,
because any pouring gives (5, 0, 3), (3, 5, 0) or (8, 0, 0), which are repeated states. Hence this state is not
considered again for moving towards the goal.
(5, 0, 3) → (5, 3, 0)
(3, 5, 0) → (3, 2, 3)
(0, 5, 3) → X
Step 3:
In this step, start with the first current state of step 2, i.e. (5, 3, 0), and proceed as in the steps above.
(5, 3, 0) → (2, 3, 3)
(3, 2, 3) → (6, 2, 0)
Step 4:
In this step, start with the first current state of step 3, i.e. (2, 3, 3), and proceed.
(2, 3, 3) → (2, 5, 1)
(6, 2, 0) → (6, 0, 2)
Step 5:
(2, 5, 1) → (7, 0, 1)
(6, 0, 2) → (1, 5, 2)
Step 6:
(7, 0, 1) → (7, 1, 0)
(1, 5, 2) → (1, 4, 3)
Step 7:
(7, 1, 0) → (4, 1, 3)
(1, 4, 3) → (4, 4, 0) (Goal)
So finally the state will be (4, 4, 0), which means jug A and jug B contain 4 liters of water each, which is
our goal state. One thing you have to be very careful about when pouring water from one jug to another is
that the capacity of the receiving jug must be able to contain that much water.
Figure
Comments:
Missionaries and Cannibals Problem
Definition:
In the Missionaries and Cannibals Problem, initially some missionaries and some cannibals are at one
side of a river. They want to cross the river, but there is only one boat available. The capacity of the boat
is 2, and at no time may the cannibals outnumber the missionaries on either bank. Finding the sequence of
states that takes everyone across safely is called the Missionaries and Cannibals Problem.
Procedure:
Let us take an example. Initially a boatman, grass, a tiger and a goat are present at the left bank of the
river and want to cross it. The only boat available can carry two objects at a time. The condition of safe
crossing is that at no time should the tiger be left with the goat, or the goat with the grass, on either side
of the river without the boatman. How will they cross the river?
The objective of the solution is to find the sequence of their transfers from one bank of the river to the
other using the boat, satisfying these constraints.
Let us use the following representations for the objects involved.
B: Boat
T: Tiger
G: Goat
Gr: Grass
Step 1:
According to the question, the start state will be (B, T, G, Gr), as the boatman and all the objects are at
one side of the bank of the river. Different states can be generated from this state.
The states (B, T, O, O) and (B, O, O, Gr) will not be counted, because the boatman cannot take the tiger
or the grass across first: that would leave the goat alone with the grass or with the tiger (according to the
question).
Step 2:
Now consider the current state of step 1, i.e. the state (B, O, G, O), which is on the right side of the river.
So on the left side the state may be (B, T, O, Gr) after the boatman returns alone.
(Right) (Left)
Step 3:
Now proceed according to the left and right sides of the river such that the condition of the problem must
be satisfied.
Step 4:
First, consider the first current state on the right side of step 3 i.e.
Now consider the second current state on the right side of step-3 i.e.
Step 5:
Step 7:
Hence the final state will be (B, T, G, Gr), with everything on the right side of the river.
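The search for this crossing sequence can also be carried out mechanically. The sketch below is only illustrative: the encoding of a state as the set of items on the left bank, and the function names, are our own choices; it finds a safe crossing sequence by breadth-first search.

# Illustrative sketch: breadth-first search over the boatman/tiger/goat/grass states.
# A state is the frozenset of items still on the left bank.
from collections import deque

ITEMS = frozenset({"B", "T", "G", "Gr"})          # Boatman, Tiger, Goat, Grass

def safe(bank):
    # A bank without the boatman must not hold tiger+goat or goat+grass together.
    if "B" in bank:
        return True
    return not ({"T", "G"} <= bank or {"G", "Gr"} <= bank)

def successors(left):
    right = ITEMS - left
    here = left if "B" in left else right         # the bank where the boat currently is
    for passenger in (here - {"B"}) | {None}:     # boatman crosses alone or with one item
        moving = {"B"} | ({passenger} if passenger else set())
        new_left = (left - moving) if "B" in left else (left | moving)
        if safe(new_left) and safe(ITEMS - new_left):
            yield frozenset(new_left)

def solve():
    start, goal = ITEMS, frozenset()              # move everything to the right bank
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return [sorted(state) for state in path]
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])

for left_bank in solve():
    print(left_bank)      # contents of the left bank after each crossing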
Comments:
Chess Problem
Definition:
It is a normal chess game. In a chess problem, the start state is the initial configuration of the chessboard.
The final state is any board configuration which is a winning position for a player. There may be multiple
final positions, and each board configuration can be thought of as representing a state of the game.
Whenever any player moves any piece, it leads to a different state of the game.
Procedure:
Figure
The above figure shows a 3x3 chessboard with each square labeled with integers 1 to 9. We simply
enumerate the alternative moves rather than developing a general move operator because of the reduced
size of the problem. Using a predicate called move in predicate calculus, whose parameters are the
starting and ending squares, we have described the legal moves on the board. For example, move (1, 8)
takes the knight from the upper left-hand corner to the middle of the bottom row. While playing Chess, a
knight can move two squares either horizontally or vertically followed by one square in an orthogonal
direction as long as it does not move off the board.
The above predicates of the Chess Problem form the knowledge base for this problem. A unification
algorithm is used to access the knowledge base.
Suppose we need to find the positions to which the knight can move from a particular location, square 2.
The goal move(2, X) unifies with two different facts in the knowledge base, with the substitutions
{7/X} and {9/X}. Given the goal move(2, 3), the response is failure, because no fact move(2, 3) exists in
the knowledge base.
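A rough Python equivalent of this knowledge base and query is sketched below. The representation of the facts as a set of pairs is an assumption made for illustration; it only mimics the matching behaviour of unification for this simple case.

# Sketch (assumed representation): the legal knight moves on the 3x3 board stored
# as move(from, to) facts, queried like the goals move(2, X) and move(2, 3).
MOVES = {                     # squares numbered 1..9, row by row
    (1, 6), (1, 8), (2, 7), (2, 9), (3, 4), (3, 8),
    (4, 3), (4, 9), (6, 1), (6, 7), (7, 2), (7, 6),
    (8, 1), (8, 3), (9, 2), (9, 4),
}

def query(start, end=None):
    """Return the substitutions for X in move(start, X), or check move(start, end)."""
    if end is None:
        return sorted(to for frm, to in MOVES if frm == start)
    return (start, end) in MOVES

print(query(2))      # [7, 9]  -- the two squares a knight on square 2 can reach
print(query(2, 3))   # False   -- move(2, 3) does not exist in the knowledge base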
Comments:
In this game a lot of production rules are applied for each move of a piece on the chessboard.
A lot of searching is required in this game.
Implementation of the algorithm in the knowledge base is very important.
8- Queen Problem
Definition:
“We have 8 queens and an 8x8 chessboard having alternate black and white squares. The queens are
placed on the chessboard. Any queen can attack any other queen placed in the same row, column or
diagonal. We have to find a placement of the queens on the chessboard such that no queen attacks any
other queen.”
Procedure:
In the figure, a possible board configuration for the 8-queen problem is shown. The board has alternating
black and white squares on it, and the different positions on the board hold the queens. The production
rule for this game is that you cannot put two queens in the same row, same column or same diagonal.
After shifting a single queen from its position on the board, the user has to shift the other queens
according to the production rule. Starting from the first row on the board, the queens of the corresponding
rows and columns are moved from their original positions to other positions. Finally, the player has to
ensure that no two queens share a row, column or diagonal on the board.
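One common way to search for such a placement is backtracking: place one queen per row and undo a choice whenever it leads to an attack. The following sketch is illustrative only and is not the exact procedure described above.

# Minimal backtracking sketch: place one queen per row so that no two queens
# share a column or a diagonal.
def solve_queens(n=8):
    placement = []                     # placement[row] = column of the queen in that row

    def attacks(row, col):
        return any(c == col or abs(c - col) == abs(r - row)
                   for r, c in enumerate(placement))

    def place(row):
        if row == n:
            return True                # all queens placed
        for col in range(n):
            if not attacks(row, col):
                placement.append(col)
                if place(row + 1):
                    return True
                placement.pop()        # backtrack and try the next column
        return False

    return placement if place(0) else None

print(solve_queens())                  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]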
Comments:
8- Puzzle Problem
Definition:
“It consists of a 3x3 board having 9 block spaces, out of which 8 blocks hold tiles bearing numbers from
1 to 8. One space is left blank. A tile adjacent to the blank space can move into it. We have to arrange the
tiles in a sequence for getting the goal state.”
Procedure:
The 8-puzzle problem belongs to the category of “sliding block puzzle” problems. The 8-puzzle is
a square tray in which eight square tiles are placed; the remaining ninth square is uncovered. Each tile in
the tray has a number on it. A tile that is adjacent to the blank space can be slid into that space. The game
consists of a starting position and a specified goal position, and the goal is to transform the starting
position into the goal position by sliding the tiles around. The control mechanism for an 8-puzzle solver
must keep track of the order in which operations are performed, so that the operations can be undone one
at a time if necessary. The objective of the puzzle is to find a sequence of tile movements that leads from a
starting configuration to a goal configuration, such as the two situations given below.
The states of the 8-puzzle are the different permutations of the tiles within the frame. The operations are
the permissible moves: up, down, left, right. Here, at each step of the problem, a function f(x) is defined as
f(x) = g(x) + h(x)
where
g(x): the number of steps already taken, i.e. the cost of reaching the current state from the initial state.
h(x): the heuristic estimator that compares the current state with the goal state and counts how many tiles
are out of place with respect to the goal state.
After calculating the f(x) value of each candidate state, take the state with the smallest f(x) value as the
next current state, and repeat until the goal state is reached.
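The evaluation f(x) = g(x) + h(x) can be sketched directly in code. In the small example below, h(x) counts the misplaced tiles; the particular start and goal layouts are assumptions chosen for illustration, not necessarily the trays of the figure.

# Sketch of the evaluation used above: f(x) = g(x) + h(x), where h counts the
# tiles that are out of place compared with the goal board (0 marks the blank).
def misplaced_tiles(state, goal):
    """h(x): number of tiles (ignoring the blank) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def f(state, goal, g_cost):
    """f(x) = g(x) + h(x) for a board stored as a flat tuple of 9 entries."""
    return g_cost + misplaced_tiles(state, goal)

goal    = (1, 2, 3, 8, 0, 4, 7, 6, 5)      # an assumed goal layout
current = (2, 8, 3, 1, 6, 4, 7, 0, 5)      # an assumed start layout
print(f(current, goal, g_cost=0))          # h = 4 misplaced tiles, so f = 4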
Step1:
f(x) is the estimated number of steps required to reach the goal state from the initial state. In the tray,
either tile 6 or tile 8 can move to fill the empty position, so there will be two possible current states,
namely B and C. The f(x) value of B is 6 and that of C is 4. As 4 is the minimum, take C as the current
state for the next step.
Step 2:
In this step, from the tray C three states can be drawn. The empty position will contain either 5 or 3 or 6.
So for three different values three different states can be obtained. Then calculate each of their f (x) and
take the minimum one.
Here the state F has the minimum value i.e. 4 and hence take that as the next current state.
Step 3:
The tray F can have 4 different states, as the empty position can be filled with 4 values, i.e. 2, 4, 5, 8.
Step 4:
In step 3 the tray I has the smallest f(x) value. The tray I can be developed into 3 different states,
because the empty position can be filled by the tiles 7, 8 or 6.
Hence, we reach the goal state after a few changes of tiles in the different positions of the trays.
Comments:
This problem requires a lot of space for saving the different trays.
Time complexity is more than that of other problems.
The user has to be very careful about the shifting of tiles in the trays.
Very complex puzzle games can be solved by this technique.
Monkey Banana Problem
Definition:
“A monkey is in a room. A bunch of bananas is hanging from the ceiling. The monkey cannot reach the
bananas directly. There is a box in the corner of the room. How can the monkey get the bananas?”
Procedure:
The solution of the problem is of course that the monkey must push the box under the bananas, then stand
on the box and grab the bananas. But the solution procedure requires a lot of planning algorithms. The
purpose of the problem is to raise the question: Are monkeys intelligent? Both humans and monkeys have
the ability to use mental maps to remember things like where to go to find shelter or how to avoid danger.
They can also remember where to go to gather food and water, as well as how to communicate with each
other. Monkeys have the ability not only to remember how to hunt and gather but they also have the
ability to learn new things, as is the case with the monkey and the bananas. Even though that monkey may
never have entered that room before or had only a box for a tool to gather the food available, that monkey
can learn that it needs to move the box across the floor, position it below the bananas and climb the box to
reach for them. Some people believe that this is part instinct, part learned behaviour. It is most probably
both.
Initially, the monkey is at location ‘A’, the bananas are at location ‘B’ and the box is at location ‘C’. The
monkey and the box have height “Low”; but if the monkey climbs onto the box, it will have height
“High”, the same as the bananas.
The available actions include “Grasp” an object: grasping results in holding the object if the monkey and
the object are at the same place at the same height.
Go(A, C)
Push(Box, C, B, Low)
ClimbUp(Box, B)
Grasp(Banana, B, High)
ClimbDown(Box)
Push(Box, B, C, Low)
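The first four operators of this plan can be checked against a very small state model, sketched below. The encoding of the state as (monkey location, box location, monkey on box, has banana) and the operator names are assumptions made for illustration.

# Rough sketch (assumed state encoding): applying the core of the plan above.
def go(state, a, b):
    m, box, on, has = state
    return (b, box, on, has) if m == a and not on else None

def push(state, a, b):
    m, box, on, has = state
    return (b, b, on, has) if m == a and box == a and not on else None

def climb_up(state, at):
    m, box, on, has = state
    return (m, box, True, has) if m == at and box == at else None

def grasp(state, at):
    m, box, on, has = state
    return (m, box, on, True) if m == at and box == at and on else None

state = ("A", "C", False, False)                  # monkey at A, box at C, bananas over B
for step in [lambda s: go(s, "A", "C"),
             lambda s: push(s, "C", "B"),
             lambda s: climb_up(s, "B"),
             lambda s: grasp(s, "B")]:
    state = step(state)
print(state)                                      # ('B', 'B', True, True): bananas grasped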
Comments:
One major application of the monkey banana problem is the toy problem of computer science.
One of the specialized purposes of the problem is to raise the question: Are monkeys intelligent?
This problem is very useful in logic programming and planning.
Tower of Hanoi Problem
Definition:
“We are given a tower of eight discs (initially four in the applet below), initially stacked in increasing size
on one of three pegs. The objective is to transfer the entire tower to one of the other pegs (the right-most
one in the applet below), moving only one disc at a time and never placing a larger disc onto a smaller
one.”
Procedure:
The Tower of Hanoi puzzle was invented by the French mathematician Édouard Lucas in 1883. The
puzzle is well known to students of computer science, since it appears in virtually every introductory text
on data structures and algorithms.
The objective of the puzzle is to move the entire stack to another rod, obeying the following rules: only
one disc may be moved at a time; each move takes the upper disc from one of the rods and places it on
top of another rod; and no disc may be placed on top of a smaller disc.
The puzzle is traditionally accompanied by a legend about a temple whose priests are moving a tower of
64 golden discs. There are many variations on this legend. For instance, in some tellings, the temple is a
monastery and the priests are monks. The temple or monastery may be said to be in different parts of the
world, including Hanoi, Vietnam, and may be associated with any religion. The Flag Tower of Hanoi may
have served as the inspiration for the name.
The puzzle can be played with any number of disks, although many toy versions have around seven to
nine of them. The game seems impossible to many novices yet is solvable with a simple algorithm. The
following solution is a very simple method to solve the tower of Hanoi problem.
Alternate moves between the smallest piece and a non-smallest piece. When moving
the smallest piece, always move it in the same direction (to the right if the starting number of pieces is
even, to the left if the starting number of pieces is odd).
If there is no tower in the chosen direction, move the piece to the opposite end, but
then continue to move in the correct direction. For example, if you started with three pieces,
you would move the smallest piece to the opposite end, then continue in the left direction after
that.
When it is the turn to move a non-smallest piece, there is only one legal move.
Doing this should complete the puzzle in the least number of moves. Finally, the user will
reach the goal. Various types of solutions are possible for the Tower of Hanoi problem,
such as recursive procedures, non-recursive procedures and the binary solution procedure.
A key to solving this problem is to recognize that it can be solved by breaking the problem down into a
collection of smaller problems and further breaking those problems down into even smaller problems
until a solution is reached. The following procedure demonstrates this approach.
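A recursive sketch of this idea, in Python, might look as follows (the peg names A, B and C are arbitrary):

# Recursive sketch of the divide-and-conquer idea described above: to move n
# discs from source to target, first move n-1 discs out of the way.
def hanoi(n, source="A", target="C", spare="B"):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)      # move the top n-1 discs onto the spare peg
    print(f"Move disc {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)      # move them back on top of the largest disc

hanoi(3)    # 2**3 - 1 = 7 moves in total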
Comments:
The tower of Hanoi is frequently used in psychological research on problem solving.
This problem is frequently used in neuro-psychological diagnosis and treatment of executive
functions.
The Tower of Hanoi is also used as a backup rotation scheme when performing computer
data backups where multiple tapes/media are involved.
This problem is very popular for teaching recursive algorithms to beginning
programming students.
A pictorial version of this puzzle is programmed into the Emacs editor, accessed by typing
M-x hanoi.
The Tower of Hanoi is also used as a test by neuropsychologists trying to evaluate frontal
lobe deficits.
Cryptarithmetic Problem
Definition:
“It is an arithmetic problem represented in letters. It involves decoding the digit represented
by each character. It is in the form of an arithmetic equation where digits are distinctly represented by
characters. The problem requires finding the digit represented by each character: assign a
decimal digit to each of the letters in such a way that the answer to the problem is correct. If the same
letter occurs more than once, it must be assigned the same digit each time. No two different letters may be
assigned the same digit.”
Procedure:
The cryptarithmetic problem is an interesting constraint satisfaction problem for which different
algorithms have been developed. A cryptarithm is a mathematical puzzle in which digits are replaced by
letters of the alphabet or other symbols. Cryptarithmetic is the science and art of creating and solving
cryptarithms.
1) Each letter or symbol represents one and only one unique digit throughout the problem.
2) When the digits replace letters or symbols, the resultant arithmetical operation must be correct.
The above two constraints lead to some other restrictions in the problem.
For example:
Consider that the base of the numbers is 10. Then there can be at most 10 unique symbols or letters in
the problem; otherwise, it would not be possible to assign a unique digit to each unique letter or symbol
in the problem. To be semantically meaningful, a number must not begin with 0, so the letters at the
beginning of each number should not correspond to 0. One can solve the problem by a simple blind
search, but a rule-based searching technique can provide the solution in much less time.
Step 1:
In the above problem (SEND + MORE = MONEY), M must be 1. You can visualize that this is an addition
problem: the sum of two four-digit numbers is always less than 20,000, so the carry into the leftmost
column, which is M, can only be 1. Also, M cannot be zero according to the rules, since it is the leading
letter of a number.
Step 2:
Now consider the leftmost addition column, S + M = O, i.e. S + 1 plus any carry from the E + O = N
column, which must be at least 10 (since it produces the carry M). S must be 9, or 8 if there is a carry of 1
from the E + O = N column. O must be 0 (if S = 8 and there is a carry, or S = 9 and there is no carry) or 1
(if S = 9 and there is a carry). But 1 is already taken by M, so O must be 0.
Step 3:
There cannot be a carry out of the E + O = N column, because any digit plus 0 is less than 10, unless there
is a carry from the N + R = E column and E = 9; but this cannot be the case, because then N would be 0
and 0 is already taken. So E < 9 and there is no carry from this column. Therefore S = 9, because
9 + 1 = 10.
Step 4:
In the column E + O = N, E cannot be equal to N, so there must be a carry into it from the column
N + R = E; hence N = E + 1. We now look at the column N + R = E. Since we know it produces a carry,
N + R = 10 + E (if there is no carry from the column D + E = Y) or N + R + 1 = 10 + E (if there is a carry
from the column D + E = Y).
No carry: N + R = 10 + (N − 1) = N + 9, so R = 9. But 9 is already taken by S.
Carry: N + R + 1 = N + 9, so R = 8.
Step 5:
Now just think about which digits are left: 7, 6, 5, 4, 3 and 2. We know there must be a carry from the
column D + E = Y, so D + E ≥ 10. Since N = E + 1, E cannot be 7, because then N would be 8, which is
already taken. D is at most 7, so E cannot be 2, because then D + E < 10; and E cannot be 3, because then
D + E = 10 at most and Y would be 0, but 0 is already taken. Also E cannot be 4, because if D < 6 then
D + E < 10, and if D = 6 or D = 7 then Y = 0 or Y = 1, which are both taken. So E is 5 or 6. If E = 6,
then D = 7 and Y = 3; but then look at the column N + R = E: there is a carry from the column
D + E = Y, so N + 8 + 1 = 10 + 6 = 16, giving N = 7, which is already taken by D. Therefore E = 5.
Step 6:
Now that we have this important digit, it gets much simpler from here: N + 8 + 1 = 10 + 5 = 15, so N = 6.
Step 7:
The digits left are 7, 4, 3 and 2. We know there is a carry from the column D + E = Y, so D + 5 ≥ 10, and
the only pair that works is D = 7 and Y = 2.
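The deduction above can be cross-checked with a simple blind search of the kind mentioned earlier. Assuming the puzzle is SEND + MORE = MONEY (which the column letters above correspond to), the following sketch tries every assignment of distinct digits to the letters:

# Brute-force check of the deduction above: a blind search over all assignments
# of distinct digits to the letters of SEND + MORE = MONEY.
from itertools import permutations

def solve():
    letters = "SENDMORY"                       # the 8 distinct letters in the puzzle
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:         # leading letters cannot be 0
            continue
        send  = 1000*a["S"] + 100*a["E"] + 10*a["N"] + a["D"]
        more  = 1000*a["M"] + 100*a["O"] + 10*a["R"] + a["E"]
        money = 10000*a["M"] + 1000*a["O"] + 100*a["N"] + 10*a["E"] + a["Y"]
        if send + more == money:
            return a
    return None

print(solve())   # S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2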
Comments:
1.4 SEARCHING
Problem solving in artificial intelligence may be characterized as a systematic search through a range of
possible actions in order to reach some predefined goal or solution. In AI, problem solving by search
algorithms is quite a common technique. In the coming age of AI it will have a big impact on technologies
such as robotics and path finding; it is also widely used in travel planning. This chapter contains the
different search algorithms of AI used in various applications. Let us look at the concepts needed for
visualizing the algorithms.
A search algorithm takes a problem as input and returns the solution in the form of an action sequence.
Once the solution is found, the actions it recommends can be carried out; this phase is called the
execution phase. After formulating a goal and a problem to solve, the agent calls a search procedure to
solve it. A problem can be defined by 5 components.
a) The initial state: The state from which agent will start.
b) The goal state: The state to be finally reached.
c) The current state: The state at which the agent is present after starting from the initial state.
d) Successor function: It is the description of possible actions and their outcomes.
e) Path cost: It is a function that assigns a numeric cost to each path.
The searching algorithms can be of various types. When any type of searching is performed, there may or
may not be some information about the search space. Also, it is possible that the searching procedure may
depend upon constraints or rules. However, searching can generally be classified into two types, i.e.
uninformed searching and informed searching. Some other classifications of these searches are given
below in the figure.
Figure
Concept:
Step 1: Traverse the root node.
Step 2: Traverse all the neighbours of the root node.
Step 3: Traverse all the neighbours of the neighbours of the root node.
Step 4: This process will continue until we reach the goal node.
Algorithm:
Step 1: Place the starting node or root node into the queue.
Step 2: If the queue is empty, then stop and return failure.
Step 3: If the FRONT node of the queue is a goal node, then stop and return success.
Step 4: Remove the FRONT node from the queue. Process it and find all its neighbours that are in ready
state then place them inside the queue in any order.
Step 5: Go to Step 3.
Step 6: Exit.
Implementation:
Let us implement the above algorithm of BFS by taking the following suitable example.
Figure
Consider the graph below, in which A is taken as the starting node and F as the goal node (*).
Step 1:
A
Step 2:
Now the queue is not empty and the FRONT node, i.e. A, is not our goal node. So move to step 3.
Step 3:
So remove the FRONT node from the queue, i.e. A, and find the neighbours of A, i.e. B and C.
B C A
Step 4:
Now B is the FRONT node of the queue. So process B and find the neighbours of B, i.e. D.
C D B
Step 5:
D E C
Step 6:
Next find out the neighbours of D, as D is the FRONT node of the queue.
E F D
Step 7:
Now E is the FRONT node of the queue. The neighbour of E is F, which is our goal node.
F E
Step 8:
Finally F, which is our goal node, is the FRONT of the queue. So exit.
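A compact Python sketch of this breadth-first procedure is given below. The adjacency list is assumed from the trace above (A–B, A–C, B–D, C–E, D–F, E–F) and is not taken from the figure itself.

# Sketch of BFS over the graph assumed from the trace above.
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": ["F"], "F": []}

def bfs(start, goal):
    queue, visited = deque([start]), {start}
    while queue:                                  # fail if the queue empties
        node = queue.popleft()                    # remove the FRONT node
        if node == goal:                          # goal test on the FRONT node
            return True
        for neighbour in graph[node]:
            if neighbour not in visited:          # place ready-state neighbours in the queue
                visited.add(neighbour)
                queue.append(neighbour)
    return False

print(bfs("A", "F"))    # True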
Advantages:
Disadvantages:
Concept:
Step 1: Traverse the root node.
Step 2: Traverse any neighbour of the root node.
Step 3: Traverse any neighbour of the neighbour of the root node.
Step 4: This process will continue until we reach the goal node.
Algorithm:
Step 1: PUSH the starting node or root node onto the stack.
Step 2: If the stack is empty, then stop and return failure.
Step 3: If the top node of the stack is the goal node, then stop and return success.
Step 4: Else POP the top node from the stack and process it. Find all its neighbours that are in ready state
and PUSH them into the stack in any order.
Step 5: Go to step 3.
Step 6: Exit.
Implementation:
Step 1: PUSH the starting node A onto the stack.
Step 2: Now the stack is not empty and A is not our goal node. Hence move to the next step.
Step 3: POP the top node from the stack i.e. A and find the neighbours of A i.e. B and C.
B C A
Step 4: Now C is the top node of the stack. Find its neighbours, i.e. F and G.
B F G C
Step 5: Now G is the top node of the stack. Find its neighbour, i.e. M.
B F M G
Step 6: Now M is the top node; find its neighbours. But M has no neighbours in the graph, so POP it from
the stack.
B F M
Step 7: Now F is the top node and its neighbours are K and L, so PUSH them onto the stack.
B K L F
Step 8: Now L is the top node of the stack, which is our goal node.
B K L
Also you can traverse the graph starting from the root A and then insert in the order C and B into the
stack. Check your answer.
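A corresponding sketch of the depth-first procedure is given below. The adjacency list is assumed from the trace above, with L taken as the goal node.

# Sketch of DFS over the graph assumed from the trace above (goal node L).
graph = {"A": ["B", "C"], "B": [], "C": ["F", "G"], "F": ["K", "L"],
         "G": ["M"], "K": [], "L": [], "M": []}

def dfs(start, goal):
    stack, visited = [start], {start}
    while stack:                                  # fail if the stack empties
        node = stack.pop()                        # POP the top node
        if node == goal:
            return True
        for neighbour in graph[node]:             # PUSH ready-state neighbours
            if neighbour not in visited:
                visited.add(neighbour)
                stack.append(neighbour)
    return False

print(dfs("A", "L"))    # True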
Advantages:
Generally, a heuristic incorporates domain knowledge to improve efficiency over blind search. In AI,
heuristic has a general meaning and also a more specialized technical meaning. Generally, the term
heuristic is used for any advice that is often effective but is not guaranteed to work in every case. For
example, in the travelling salesman problem (TSP) we use a heuristic that chooses the nearest neighbour.
A heuristic is a method that provides a better guess about the correct choice to make at any junction than
would be achieved by random guessing. This technique is useful in solving tough problems which could
not be solved in any other way, and for which exact solutions would take practically infinite time to
compute.
Concept:
Step 1: Traverse the root node.
Step 2: Traverse any neighbour of the root node that is maintaining a least distance from the root node,
and insert it in ascending order into the queue.
Step 3: Traverse any neighbour of the neighbour of the root node that is maintaining a least distance from
the root node, and insert it in ascending order into the queue.
Step 4: This process will continue until we reach the goal node.
Algorithm:
Step 1: Place the starting node or root node into the queue.
Step 2: If the queue is empty, then stop and return failure.
Step 3: If the first element of the queue is our goal node, then stop and return success.
Step 4: Else, remove the first element from the queue. Expand it and compute the estimated goal distance
for each child. Place the children in the queue in ascending order to the goal distance.
Step 5: Go to step-3
Step 6: Exit.
Implementation:
Figure
Step 1:
Consider the node A as our root node. So the first element of the queue is A, which is not our goal node;
so remove it from the queue and find its neighbours, which are to be inserted into the queue in ascending
order.
Step 2:
The neighbours of A are B and C. They will be inserted into the queue in ascending order.
BC A
Step 3:
Now B is on the FRONT end of the queue. So calculate the neighbours of B that are maintaining a least
distance from the root.
F E D C B
Step 4:
Now the node F is on the FRONT end of the queue. But as it has no further children, so remove it from
the queue and proceed further.
E D C F
Step 5:
Now E is the FRONT end. So the children of E are J and K. Insert them into the queue in ascending order.
K J D C E
Step 6:
Now K is on the FRONT end and as it has no further children, so remove it and proceed further
J D C K
Step7:
D C J
Step 8:
Now D is on the FRONT end; calculate the children of D and put them into the queue.
I C D
Step9:
Now I is the FRONT node and it has no children. So proceed further after removing this node from the
queue.
C I
Step 10:
Now C is the FRONT node .So calculate the neighbours of C that are to be inserted in ascending order
into the queue.
G H C
Step 11:
Now remove G from the queue and calculate its neighbours, which are to be inserted in ascending order
into the queue.
M L H G
Step12:
Now M is the FRONT node of the queue which is our goal node. So stop here and exit.
L H M
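The best first strategy can be sketched with a priority queue ordered by the estimated distance to the goal. The graph and the heuristic values below are illustrative assumptions, not the ones used in the figure.

# Greedy best-first sketch: always expand the node whose heuristic estimate of
# the distance to the goal is smallest (graph and h-values are illustrative).
import heapq

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["G"], "D": [], "E": ["G"], "G": []}
h     = {"A": 6, "B": 4, "C": 5, "D": 3, "E": 2, "G": 0}   # estimated distance to goal G

def best_first(start, goal):
    frontier = [(h[start], start, [start])]          # priority queue ordered by h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph[node]:
            heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

print(best_first("A", "G"))    # ['A', 'B', 'E', 'G']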
Advantage:
Disadvantages:
Concept:
Step 2: Traverse any neighbour of the root node that is maintaining least distance from the root node.
Step 3: Traverse any neighbour of the neighbour of the root node that is maintaining least distance from
the root node.
Step 4: This process will continue until we are getting the goal node.
Algorithm:
Step 1: PUSH the root node into the stack.
Step 2: If the stack is empty, then stop and return failure.
Step 3: If the top node of the stack is a goal node, then stop and return success.
Step 4: Else POP the node from the stack. Process it and find all its successors. Find out the paths
containing its successors as well as predecessors, and then PUSH the successors which belong to the
minimum or shortest path.
Step 5: Go to step 3.
Step 6: Exit.
Implementation:
Let us take the following example for implementing the Branch and Bound algorithm.
Figure
Step 1:
Consider the node A as our root node. Find its successors i.e. B, C, F. Calculate the distance from the root
and PUSH them according to least distance.
A
B: 0+5 = 5 (The cost of A is 0 as it is the starting node)
F: 0+9 = 9
C: 0+7 = 7
Step 2:
C F B A
D: 0+5+4 = 9
E: 0+5+6 = 11
Step 3:
As the top of the stack is D, calculate the neighbours of D.
C F D B
C: 0+5+4+8 = 17
F: 0+5+4+3 = 12
The least distance is F from D and it is our goal node. So stop and return success.
Step 4:
C F D
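One common way to realise branch and bound for shortest paths is to keep the partial paths ordered by their cost so far and always extend the cheapest one, as sketched below. The graph and edge costs here are illustrative assumptions, not the ones in the figure.

# Illustrative branch and bound sketch: extend the cheapest partial path first.
import heapq

edges = {"A": [("B", 4), ("C", 3)], "B": [("D", 5)],
         "C": [("D", 7), ("G", 9)], "D": [("G", 2)], "G": []}

def branch_and_bound(start, goal):
    frontier = [(0, start, [start])]                  # (cost so far, node, path)
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost                         # the cheapest path is found first
        for child, step in edges[node]:
            new_cost = cost + step
            if new_cost < best.get(child, float("inf")):
                best[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

print(branch_and_bound("A", "G"))    # (['A', 'B', 'D', 'G'], 11)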
Advantages:
As it extends the minimum-cost path instead of merely the minimum-cost successor, there should
not be any repetition.
The time complexity is less compared to other algorithms.
Disadvantages:
The load balancing aspects of the Branch and Bound algorithm make its parallelization difficult.
The Branch and Bound algorithm is limited to small-size networks. In the problem of large
networks, where the solution search space grows exponentially with the scale of the network, the
approach becomes relatively prohibitive.
1.7.2 A* SEARCH
A* is a cornerstone of many AI systems and has been used since it was developed in 1968 by Peter
Hart, Nils Nilsson and Bertram Raphael. It is a combination of Dijkstra's algorithm and best first search,
and it can be used to solve many kinds of problems. A* search finds the shortest path through a search
space to the goal state using a heuristic function. A technique that finds minimal-cost solutions and is
directed towards a goal state is called A* search; in A*, the * is written to denote optimality. The A*
algorithm finds the lowest-cost path between the start and goal states, where changing from one state to
another requires some cost. A* requires a heuristic function to evaluate the cost of the path that passes
through a particular state. This algorithm is complete if the branching factor is finite and every action has
a fixed cost. The evaluation function can be defined by the following formula.
f (n) = g (n) + h (n)
Where
g (n): the actual cost of the path from the start state to the current state n.
h (n): the estimated (heuristic) cost of the cheapest path from the current state n to the goal state.
f (n): the estimated cost of the cheapest path from the start state to the goal state that passes through n.
For the implementation of A* algorithm we will use two arrays namely OPEN and CLOSE.
OPEN:
An array which contains the nodes that have been generated but have not yet been examined.
CLOSE:
An array which contains the nodes that have already been examined.
Algorithm:
Step 1: Place the starting node into OPEN and find its f (n) value.
Step 2: Remove from OPEN the node having the smallest f (n) value. If it is a goal node, then stop and
return success.
Step 3: Else find all the successors of the removed node.
Step 4: Find the f (n) value of all the successors, place them into OPEN, and place the removed node into
CLOSE.
Step 5: Go to Step-2.
Step 6: Exit.
Implementation:
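A hedged sketch of the OPEN/CLOSE scheme described above is given below; the graph, edge costs and heuristic values are assumptions chosen for illustration.

# Sketch of A*: OPEN is a priority queue ordered by f(n) = g(n) + h(n),
# CLOSE is the set of already examined nodes.
import heapq

edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("C", 5), ("G", 12)],
         "B": [("C", 2)], "C": [("G", 3)], "G": []}
h = {"S": 7, "A": 6, "B": 4, "C": 2, "G": 0}     # assumed admissible estimates to goal G

def a_star(start, goal):
    open_list = [(h[start], 0, start, [start])]  # (f, g, node, path)
    close = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # smallest f(n) value
        if node == goal:
            return path, g
        if node in close:
            continue
        close.add(node)
        for child, cost in edges[node]:
            if child not in close:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[child], g2, child, path + [child]))
    return None

print(a_star("S", "G"))    # (['S', 'A', 'B', 'C', 'G'], 8)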
Advantages:
It is complete and optimal.
It is the best one from other techniques.
It is used to solve very complex problems.
It is optimally efficient, i.e. there is no other optimal algorithm guaranteed to expand fewer
nodes than A*.
Disadvantages:
This algorithm is complete only if the branching factor is finite and every action has a fixed cost.
The speed of execution of A* search is highly dependent on the accuracy of the heuristic function
that is used to compute h (n).
It has complexity problems.
Like the A* algorithm, here we will use two arrays and one heuristic function.
OPEN:
It contains the nodes that have been traversed but have not yet been marked solvable or unsolvable.
CLOSE:
It contains the nodes that have already been processed.
Algorithm:
Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from OPEN and place it in
CLOSE
Step 4: If n is a terminal goal node, then label n as solved and label all the ancestors of n as solved.
If the starting node is marked as solved, then return success and exit.
Step 5: If n is not a solvable node, then mark n as unsolvable. If the starting node is marked as unsolvable,
then return failure and exit.
Step 6: Expand n. Find all its successors, find their h (n) values, and push them into OPEN.
Step 7: Return to Step 3.
Step 8: Exit.
Implementation:
Figure
Step 1:
In the above graph, the solvable nodes are A, B, C, D, E, F and the unsolvable nodes are G, H. Take A as
the starting node. So place A into OPEN.
i.e. OPEN = {A}, CLOSE = (NULL)
Step 2:
The children of A are B and C which are solvable. So place them into OPEN and place A into the
CLOSE.
i.e. OPEN = {B, C}, CLOSE = {A}
Step 3:
Now process the nodes B and C. The children of B and C are to be placed into OPEN. Also remove B and
C from OPEN and place them into CLOSE.
So OPEN = {G, D, E, H}, CLOSE = {A, B, C}
Step 4:
As the nodes G and H are unsolvable, so place them into CLOSE directly and process the nodes D and E.
So OPEN = {F}, CLOSE = {A, B, C, G, D, E, H}, where F (marked *) is the goal node reached through D
and E.
Step 5:
Now we have reached our goal state. So place F into CLOSE.
So CLOSE = {A, B, C, G, D, E, H, F}
Step 6: Success and exit.
AO* Graph:
Figure (the resulting solution graph over the nodes A, B, C, D and E)
Advantages:
It is an optimal algorithm.
It traverses according to the ordering of nodes.
It can be used for both OR and AND graph.
Disadvantages:
The name hill climbing is derived from simulating the situation of a person climbing a hill. The person
tries to move forward in the direction of the top of the hill. His movement stops when he reaches the peak
of the hill and no neighbouring point has a higher value of the heuristic function. Hill climbing uses
knowledge about the local terrain, providing a very useful and effective heuristic for eliminating much of
the unproductive search space. It is guided by a local evaluation function. Hill climbing is a variant of
generate and test in which feedback from the test procedure is used to decide in which direction the
search should proceed. At each point in the search path, the successor node that appears most promising
is selected for exploration.
Algorithm:
Step 1: Evaluate the starting state. If it is a goal state then stop and return success.
Step 2: Else, continue with the starting state as considering it as a current state.
Step 3: Continue step-4 until a solution is found i.e. until there are no new states left to be applied in the
current state.
Step 4:
a) Select a state that has not been yet applied to the current state and apply it to produce a new state.
b) Procedure to evaluate a new state.
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it current state and proceed further.
iii. If it is not better than the current state, then continue in the loop until a solution is found.
Step 5: Exit.
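A minimal sketch of this loop, maximising an illustrative one-dimensional objective function, might look as follows (the objective and neighbour functions are assumptions, not part of the text):

# Simple hill-climbing sketch: keep moving to a better neighbour until no
# neighbour improves the objective (a local maximum is reached).
def hill_climb(start, objective, neighbours):
    current = start
    while True:
        best = max(neighbours(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current                      # no better neighbour: local maximum
        current = best                          # move to the better state and continue

objective  = lambda x: -(x - 7) ** 2            # illustrative objective with its peak at x = 7
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbours))     # 7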
Advantages:
Hill climbing technique is useful in job shop scheduling, automatic programming,
circuit designing, vehicle routing and portfolio management.
It is also helpful to solve pure optimization problems where the objective is to find the best state
according to the objective function.
It requires much less conditions than other search techniques.
Disadvantages:
The question that remains on hill climbing search is whether this hill is the highest hill possible.
Unfortunately without further extensive exploration, this question cannot be answered. This technique
works but as it uses local information that’s why it can be fooled. The algorithm doesn’t maintain a search
tree, so the current node data structure need only record the state and its objective function value. It
assumes that local improvement will lead to global improvement.
There are some situations in which hill climbing often gets stuck, which are stated below.
Local Maxima:
A local maximum is a state that is better than each of its neighbouring states, but not better than some
other states further away. Generally this state is lower than the global maximum. At this point, one cannot
easily decide in which direction to move. This difficulty can be handled by backtracking: backtrack to one
of the earlier node positions and try to go in a different direction. To implement this strategy, maintain a
list of the paths almost taken; if the path that was taken leads to a dead end, then go back to one of them.
Ridges:
It is a special kind of local maximum. It is simply an area of the search space that is higher than the
surrounding areas. A ridge results in a sequence of local maxima that is very difficult to navigate, because
the ridge itself has a slope whose direction cannot be followed by any single move. In this type of
situation, apply two or more rules before doing the test; this corresponds to moving in several directions
at once.
Figure Ridges
Plateau:
It is a flat area of the search space in which the neighbouring states have the same value, so it is very
difficult to determine the best direction. To get out of this situation, make a big jump in some direction,
which will help to move to a new region; this is the best way to handle a problem like a plateau.
Figure Plateau