AI Unit 1-Notes - Updated
Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term
was coined by John McCarthy in 1956.
Intelligence is the ability to acquire, understand and apply knowledge to achieve goals in the
world.
AI is the study of the mental faculties through the use of computational models
An AI program will demonstrate a high level of intelligence, to a degree that equals or exceeds the
intelligence required of a human in performing some task.
AI is unique, sharing borders with Mathematics, Computer Science, Philosophy, Psychology,
Biology, Cognitive Science and many others.
Although there is no clear definition of AI, or even of intelligence, it can be described as an attempt
to build machines that, like humans, can think and act, and that are able to learn and use knowledge
to solve problems on their own.
History of AI:
In 1957, the General Problem Solver (GPS) was demonstrated by Newell, Shaw & Simon.
In 1958, John McCarthy (MIT) invented the Lisp language.
In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for checkers, to achieve
sufficient skill to challenge a world champion.
In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the idea of interactive
graphics into computing.
In 1968, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating the limits of simple
neural nets.
In 1972, Prolog developed by Alain Colmerauer.
In the mid-80s, neural networks became widely used with the backpropagation algorithm (first
described by Werbos in 1974).
In the 1990s, major advances in all areas of AI, with significant demonstrations in machine learning,
intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning,
data mining, natural language understanding and translation, vision, virtual reality, games, and
other topics.
In 1997, Deep Blue beat the World Chess Champion Kasparov.
In 2002, iRobot released Roomba, a vacuum-cleaning robot. By 2006, two million had been sold.
Philosophy
e.g., foundational issues (can a machine think?), issues of knowledge and belief, mutual knowledge
Psychology and Cognitive Science
e.g., problem solving skills
Neuro-Science
e.g., brain architecture
1) Game Playing
Credit card companies, mortgage companies, banks, and the U.S. government employ AI
systems to detect fraud and expedite financial transactions. For example, AMEX credit check.
d. Classification Systems
Put information into one of a fixed set of categories using several sources of information.
E.g., financial decision making systems. NASA developed a system for classifying very faint
areas in astronomical images into either stars or galaxies with very high accuracy by learning
from human experts' classifications.
5) Mathematical Theorem Proving
Use inference methods to prove new theorems.
9) Machine Learning
Application of AI:
AI algorithms have attracted close attention from researchers and have been applied successfully
to solve problems in engineering. Nevertheless, for large and complex problems, AI algorithms can
consume considerable computation time due to the stochastic nature of their search approaches.
Building AI Systems:
1) Perception
Intelligent biological systems are physically embodied in the world and experience the world
through their sensors (senses). For an autonomous vehicle, input might be images from a camera
and range information from a rangefinder. For a medical diagnosis system, perception is the set of
symptoms and test results that have been obtained and input to the system manually.
2) Reasoning
Inference, decision-making, classification from what is sensed and what the internal "model" is of
the world. Might be a neural network, logical deduction system, Hidden Markov Model induction,
heuristic searching a problem space, Bayes Network inference, genetic algorithms, etc.
Intelligent Systems:
In order to design intelligent systems, it is important to categorize them into four categories
(Luger and Stubberfield 1993), (Russell and Norvig, 2003)
Engineering Goal:To solve real world problems using AI techniques such as Knowledge
representation, learning, rule systems, search, and so on.
Traditionally, computer scientists and engineers have been more interested in the
engineering goal, while psychologists, philosophers and cognitive scientists have been more
interested in the scientific goal.
a. Requires a model for human cognition. Precise enough models allow simulation by computers.
b. Focus is not just on behavior and I/O, but also on the reasoning process itself.
c. The goal is not just to produce human-like behavior, but to produce a sequence of steps of the
reasoning process, similar to the steps followed by a human in solving the same task.
a. The study of mental faculties through the use of computational models; that is, the study of the
computations that make it possible to perceive, reason and act.
b. Focus is on inference mechanisms that are probably correct and guarantee an optimal solution.
c. Goal is to formalize the reasoning process as a system of logical rules and procedures of inference.
a. The art of creating machines that perform functions requiring intelligence when performed by
people; that is, the study of how to make computers do things at which, at the moment, people are
better.
b. Focus is on action, not on intelligent behavior centered around the representation of the world.
c. An operational test of this view is the Turing Test: the interrogator can communicate with the other
two participants by teletype (so that the machine need not imitate the appearance or voice of a person).
The interrogator tries to determine which is the person and which is the machine.
The machine tries to fool the interrogator to believe that it is the human, and the person also tries
to convince the interrogator that it is the human.
a. Tries to explain and emulate intelligent behavior in terms of computational processes; that is, it is
concerned with the automation of intelligence.
Strong AI
makes the bold claim that computers can be made to think on a level (at least) equal to humans.
Weak AI simply states that some "thinking-like" features can be added to computers to make
them more useful tools... and this has already started to happen (witness expert systems, drive-
by-wire cars and speech recognition software).
2. Expert tasks
Common-Place Tasks:
1. Recognizing people, objects.
Intelligent Agents
1. Intelligent Agents:
2.1 Agents and environments:
2.1.1 Agent:
An Agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and other body
parts for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various motors for
actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and sending
network packets.
2.1.2 Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.
To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in
Fig 2.1.5. This particular world has just two locations: squares A and B. The vacuum agent perceives
which square it is in and whether there is dirt in the square. It can choose to move left, move right,
suck up the dirt, or do nothing. One very simple agent function is the following: if the current
square is dirty, then suck, otherwise move to the other square. A partial tabulation of this agent
function is shown in Fig 2.1.6.
function REFLEX-VACUUM-AGENT([location, status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
Fig 2.1.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept
(location, status) and returns an action each time
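The agent function above can be sketched as a small Python function; this is a minimal illustration, with the percept passed as a (location, status) pair and the location/action names chosen to match the text:

```python
# A sketch of the REFLEX-VACUUM-AGENT program described above.
# The percept is a (location, status) pair, e.g. ("A", "Dirty").
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
```

The agent program is invoked once per percept and returns a single action, exactly as in the tabulated agent function.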
Strategies of Solving Tic-Tac-Toe Game Playing
Fig 1.
The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the form of a 3x3 matrix
(Fig. 1). The game is played by 2 players and either of them can start. Each of the two players is
assigned a unique symbol (generally 0 and X). Each player alternately gets a turn to make a move.
Making a move is compulsory and cannot be deferred. In each move a player places the symbol
assigned to him/her in a hitherto blank cell.
Let a track be defined as any row, column or diagonal on the board. Since the board is a
square matrix with 9 cells, all rows, columns and diagonals have exactly 3 cells. It can be easily
observed that there are 3 rows, 3 columns and 2 diagonals, and hence a total of 8 tracks on the
board (Fig. 1). The goal of the game is to fill all three cells of any track on the board with the
symbol assigned to oneself before the opponent does the same with the symbol assigned to
him/her. At any point of the game, if there exists a track whose three cells have all been filled
with the symbol assigned to one player, that player wins and the game ends.
Let the priority of a cell be defined as the number of tracks passing through it. The priorities of
the nine cells on the board according to this definition are tabulated in Table 1. Alternatively,
let the priority of a track be defined as the sum of the priorities of its three cells. The priorities of
the eight tracks on the board according to this definition are tabulated in Table 2. The prioritization
of the cells and the tracks lays the foundation of the heuristics to be used in this study. These
heuristics are somewhat similar to those proposed by Rich and Knight.
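The cell and track priorities just defined follow directly from the eight tracks; a short sketch (cells numbered 1–9, row by row, as assumed here) computes them:

```python
# Sketch: the 8 tracks of the 3x3 board, with cells numbered 1..9 row by row.
TRACKS = [
    (1, 2, 3), (4, 5, 6), (7, 8, 9),   # 3 rows
    (1, 4, 7), (2, 5, 8), (3, 6, 9),   # 3 columns
    (1, 5, 9), (3, 5, 7),              # 2 diagonals
]

def cell_priority(cell):
    # Priority of a cell = number of tracks passing through it.
    return sum(1 for track in TRACKS if cell in track)

def track_priority(track):
    # Priority of a track = sum of the priorities of its three cells.
    return sum(cell_priority(c) for c in track)
```

This reproduces the pattern tabulated in Tables 1 and 2: the centre has priority 4, corners 3, and edge cells 2, so the diagonals are the highest-priority tracks.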
Strategy 1:
Algorithm:
1. View the board vector as a ternary number and convert it to its corresponding decimal number.
2. Use the computed number as an index into the Move-Table and access the vector stored there.
3. Set the new board equal to that vector.
Procedure:
1) Elements of vector:
0: Empty
1: X
2: O
→ the vector is a ternary number
2) Store inside the program a move-table (lookup table):
a) Number of entries in the table: 19683 (= 3^9)
b) Each entry is a vector which describes the most suitable move from the
corresponding board configuration.
3. Difficult to extend
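The ternary indexing scheme above can be sketched in a few lines; the board is assumed here to be a list of nine values (0 = empty, 1 = X, 2 = O), read as one base-3 number:

```python
# Sketch of the move-table index: read the 9-cell board vector
# (0 = empty, 1 = X, 2 = O) as a ternary (base-3) number, giving
# an index in the range 0 .. 3**9 - 1 = 19682.
def board_to_index(board):
    index = 0
    for cell in board:
        index = index * 3 + cell
    return index
```

This is why the move table needs 19683 entries: one per possible ternary value of the board vector.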
Strategy 2:
Data Structure:
1) Use a vector, called board, as in Strategy 1, but with the cell values:
2: Empty
3: X
5: O
2) Turn of move: indexed by an integer
1, 2, 3, ..., 9
Function Library:
1. Make2:
a) Returns a location on the game board:
IF (board[5] = 2)
RETURN 5; // the center cell is empty
ELSE
RETURN 0;
2. Posswin(P):
Returns 0 if player P cannot win on his/her next move; otherwise
RETURNS the index of the empty cell on the line on which P can win.
2. Turn = 2: (O moves)
IF board[5] is empty THEN
Go(5)
ELSE
Go(1)
3. Turn = 3: (X moves)
IF board[9] is empty THEN
Go(9)
ELSE
Go(3)
5. Turn = 5: (X moves)
IF Posswin(X) <> 0 THEN
Go(Posswin(X)) //Win for X.
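Under the 2/3/5 encoding above, Posswin can be sketched with a product test: a track whose three cells multiply to 18 (3 x 3 x 2) has two X's and a blank, and a product of 50 (5 x 5 x 2) means the same for O. The board here is assumed to be indexed 1..9 with board[0] unused:

```python
# Sketch of Posswin(P) using the 2/3/5 encoding (2 = empty, 3 = X, 5 = O).
TRACKS = [
    (1, 2, 3), (4, 5, 6), (7, 8, 9),
    (1, 4, 7), (2, 5, 8), (3, 6, 9),
    (1, 5, 9), (3, 5, 7),
]

def posswin(board, player):
    # board is indexed 1..9; board[0] is unused.
    target = 18 if player == "X" else 50   # 3*3*2 or 5*5*2
    for track in TRACKS:
        if board[track[0]] * board[track[1]] * board[track[2]] == target:
            for cell in track:
                if board[cell] == 2:       # the empty cell completes the line
                    return cell
    return 0
```

The same routine serves both to win (Posswin of one's own symbol) and to block (Posswin of the opponent's symbol).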
Searching Solutions:
To build a system to solve a problem:
1. Define the problem precisely
2. Analyze the problem
3. Isolate and represent the task knowledge that is necessary to solve the problem
4. Choose the best problem-solving technique and apply it to the particular problem.
1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from which the
problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal states).
4. Specify a set of rules that describe the actions (operators) available.
Which search algorithm one should use will generally depend on the problem domain.
There are four important factors to consider:
1. Completeness – Is the algorithm guaranteed to find a solution if at least one solution exists?
2. Optimality – Is the solution found guaranteed to be the best (or lowest-cost) solution if there exists
more than one solution?
3. Time Complexity – The upper bound on the time required to find a solution, as a function of the
complexity of the problem.
4. Space Complexity – The upper bound on the storage space (memory) required at any point during
the search, as a function of the complexity of the problem.
Let us discuss these strategies using water jug problem. These may be applied to any search
problem.
Generate all the offspring of the root by applying each of the applicable rules to the initial state.
8 Puzzle Problem.
The 8 puzzle consists of eight numbered, movable tiles set in a 3x3 frame. One cell of the frame
is always empty thus making it possible to move an adjacent numbered tile into the empty cell.
Such a puzzle is illustrated in the following diagram.
The program is to change the initial configuration into the goal configuration. A solution to the
problem is an appropriate sequence of moves, such as "move tile 5 to the right, move tile 7 to
the left, move tile 6 down, etc."
Solution:
To solve a problem using a production system, we must specify the global database, the rules, and
the control strategy. For the 8-puzzle problem, these three components correspond to the problem
states, the moves, and the goal. In this problem each tile configuration is a state. The set of all
configurations is the space of problem states, or the problem space; there are only 362,880 (= 9!)
different configurations of the 8 tiles and the blank space. Once the problem states have been
conceptually identified, we must construct a computer representation, or description, of them. This
description is then used as the database of a production system. For the 8-puzzle, a straightforward
description is a 3x3 array (matrix) of numbers. The initial global database is this description of
the initial problem state. Virtually any kind of data structure can be used to describe states.
A move transforms one problem state into another state. The 8-puzzle is conveniently interpreted
as having the following four moves: move the blank left, move the blank up, move the blank right,
and move the blank down.
The rules each have preconditions that must be satisfied by a state description in order for them to
be applicable to that state description. Thus the precondition for the rule associated with “move
blank up” is derived from the requirement that the blank space must not already be in the top row.
The problem goal condition forms the basis for the termination condition of the production system.
The control strategy repeatedly applies rules to state descriptions until a description of a goal state
is produced. It also keeps track of rules that have been applied so that it can compose them into
sequence representing the problem solution. A solution to the 8-puzzle problem is given in the
following figure.
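The four moves and their preconditions can be sketched as a successor function; the state representation below (a 3x3 tuple of tuples, with 0 for the blank) is one possible choice, not the only one, as the text notes:

```python
# Sketch of the 8-puzzle production rules: each move of the blank has a
# precondition (the blank must not already be on the corresponding edge).
def moves(state):
    # Locate the blank (0) in the 3x3 grid.
    r, c = next((i, j) for i in range(3) for j in range(3) if state[i][j] == 0)
    for name, dr, dc in [("up", -1, 0), ("down", 1, 0),
                         ("left", 0, -1), ("right", 0, 1)]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:        # precondition satisfied
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            yield name, tuple(tuple(row) for row in grid)
```

For example, when the blank is in the top-left corner, only "down" and "right" satisfy their preconditions, matching the rule that "move blank up" requires the blank not to be in the top row.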
Example: Depth-First-Search traversal and Breadth-First-Search traversal
Search is the systematic examination of states to find path from the start/root state to the goal state.
Many traditional search algorithms are used in AI applications. For complex problems, the
traditional algorithms are unable to find the solution within some practical time and space limits.
Consequently, many special techniques are developed; using heuristic functions. The algorithms
that use heuristic functions are called heuristic algorithms. Heuristic algorithms are not really
intelligent; they appear to be intelligent because they achieve better performance.
Heuristic algorithms are more efficient because they take advantage of feedback from the data to
direct the search path.
Uninformed search
Also called blind, exhaustive or brute-force search, uses no information about the problem to
guide the search and therefore may not be very efficient.
Informed Search:
Also called heuristic or intelligent search; uses information about the problem to guide the search,
usually by estimating the distance to a goal state, and is therefore more efficient. However, such
heuristic guidance may not always be available.
Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE-LIST.
Step 1: Initially fringe contains only one node corresponding to the source state A.
Figure 1
FRINGE: A
Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated.
They are placed at the back of fringe.
Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put
at the back of fringe.
Figure 3
FRINGE: D E
Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the
back of fringe.
Figure 4
FRINGE: D E D G
Step 5: Node D is removed from fringe. Its children C and F are generated and added to the back
of fringe.
Figure 5
FRINGE: E D G C F
Step 6: Node E is removed from fringe. It has no children to generate.
Figure 6
FRINGE: D G C F
Step 7: Node D is removed from fringe. Its children B and F are generated and added to the back
of fringe.
Figure 7
FRINGE: G C F B F
Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns the
path A C G by following the parent pointers of the node corresponding to G. The algorithm
terminates.
Disadvantages:
Requires the generation and storage of a tree whose size is exponential in the depth of the
shallowest goal node.
The breadth first search algorithm cannot be effectively used unless the search space is quite
small.
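The BFS procedure above can be sketched as a short program; the fringe is a queue of paths, with new states added at the back exactly as in the trace (the graph interface via a successors function is an illustrative choice):

```python
from collections import deque

# Sketch of breadth-first search: the fringe is a FIFO queue of paths,
# so shallower states are always expanded before deeper ones.
def breadth_first_search(start, goal, successors):
    fringe = deque([[start]])
    visited = {start}
    while fringe:
        path = fringe.popleft()          # remove the first element
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:       # avoid re-generating old states
                visited.add(nxt)
                fringe.append(path + [nxt])
    return None
```

On the graph used in the trace above (A's children B, C; C's children D, G; ...), this returns the path A C G, as in Step 8.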
Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the front of NODE-LIST.
Figure 1
FRINGE: A
Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of
fringe.
Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.
Figure 3
FRINGE: D E C
Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.
Figure 4
FRINGE: C F E C
Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.
Figure 5
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.
Figure 6
FRINGE: G F E C
2. If N is the maximum depth of a node in the search space, in the worst case the algorithm will
take time O(b^N).
3. The space taken is linear in the depth of the search tree, O(bN).
Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the
search tree has infinite depth, the algorithm may not terminate. This can happen if the search space
is infinite. It can also happen if the search space contains cycles. The latter case can be handled by
checking for cycles in the algorithm. Thus Depth First Search is not complete.
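DFS differs from the BFS sketch only in the fringe discipline: children go on the front (a stack), and a cycle check along the current path handles the problem noted above. A minimal sketch, with the same illustrative successors-function interface:

```python
# Sketch of depth-first search: the fringe is a stack of paths, so the
# search dives to the deepest unexplored node first. Checking "nxt not
# in path" prevents looping forever on cycles in the search space.
def depth_first_search(start, goal, successors):
    fringe = [[start]]
    while fringe:
        path = fringe.pop()
        state = path[-1]
        if state == goal:
            return path
        for nxt in reversed(successors(state)):   # keep left-to-right order
            if nxt not in path:
                fringe.append(path + [nxt])
    return None
```

Note the cycle check only guards the current path; DFS remains incomplete on infinite search spaces, as the text states.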
Description:
It is a search strategy resulting when you combine BFS and DFS, thus combining the advantages
of each strategy, taking the completeness and optimality of BFS and the modest memory
requirements of DFS.
IDS works by looking for the best search depth d: it starts with depth limit 0 and performs a
depth-limited search; if the search fails, it increases the depth limit by 1 and tries again with
depth 1, and so on – first d = 0, then 1, then 2, and so on – until a depth d is reached where a
goal is found.
Algorithm:
procedure IDDFS(root)
    for depth from 0 to ∞
        found ← DLS(root, depth)
        if found ≠ null
            return found

procedure DLS(node, depth)
    if depth = 0 and node is a goal
        return node
    else if depth > 0
        foreach child of node
            found ← DLS(child, depth−1)
            if found ≠ null
                return found
    return null
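The IDDFS/DLS pseudocode above translates directly to Python; the max_depth parameter is an assumption standing in for the "0 to ∞" loop so the sketch always terminates:

```python
# Runnable sketch of the IDDFS / DLS pseudocode above.
def dls(node, depth, goal, successors):
    # Depth-limited search: goal test only at the depth limit, as in DLS.
    if depth == 0:
        return [node] if node == goal else None
    for child in successors(node):
        found = dls(child, depth - 1, goal, successors)
        if found is not None:
            return [node] + found
    return None

def iddfs(root, goal, successors, max_depth=20):
    # Increase the depth limit one level at a time.
    for depth in range(max_depth + 1):
        found = dls(root, depth, goal, successors)
        if found is not None:
            return found
    return None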
Performance Measure:
Completeness: IDS, like BFS, is complete when the branching factor b is finite.
Optimality: IDS, like BFS, is optimal when all steps have the same cost.
One may find it wasteful to generate nodes multiple times, but it is actually not that costly
compared to BFS, because most of the generated nodes are in the deepest level reached. Consider
searching a binary tree with a depth limit of 4: the nodes generated in the last level = 2^4 = 16,
while the nodes generated in all the levels before the last = 2^0 + 2^1 + 2^2 + 2^3 = 15.
Imagine this scenario: we are performing IDS and the depth limit has reached depth d. From the
way IDS expands nodes, nodes at depth d are generated once, nodes at depth d−1 are generated
2 times, nodes at depth d−2 are generated 3 times, and so on, until depth 1, which is generated
d times. We can therefore view the total number of generated nodes in the worst case as:
N(IDS) = (d)b + (d−1)b^2 + (d−2)b^3 + … + (1)b^d, which is O(b^d).
Space Complexity:
Like DFS, IDS stores only the current path and siblings, so its space usage is O(bd), linear in the depth.
Weblinks:
i. https://www.youtube.com/watch?v=7QcoJjSVT38
ii. https://mhesham.wordpress.com/tag/iterative-deepening-depth-first-search
Conclusion:
We can conclude that IDS is a hybrid search strategy between BFS and DFS inheriting their
advantages.
It is said that "IDS is the preferred uninformed search method when there is a large search space
and the depth of the solution is not known".
1. Pick an initial state in the search space.
2. Consider all the neighbours of the current state.
3. Choose the neighbour with the best quality and move to that state.
4. Repeat steps 2 through 4 until all the neighbouring states are of lower quality.
Algorithm:
Function HILL-CLIMBING(Problem) returns a solution state
    Inputs: Problem, a problem
    Local variables: Current, a node
                     Next, a node
    Current = MAKE-NODE(INITIAL-STATE[Problem])
    Loop do
        Next = a highest-valued successor of Current
        If VALUE[Next] < VALUE[Current] then return Current
        Current = Next
    End
Also, if two neighbors have the same evaluation and they are both the best quality, then the
algorithm will choose between them at random.
You can see that we may eventually reach a state that has no better neighbours even though there
are better solutions elsewhere in the search space. The problem we have just described is called a
local maximum.
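The hill-climbing loop and the local-maximum problem can both be seen in a short sketch; the 1-D landscape, value and neighbours functions below are illustrative, and the random tie-breaking mentioned above is omitted for brevity:

```python
# Sketch of HILL-CLIMBING: repeatedly move to the best neighbour,
# stopping when no neighbour improves on the current state.
# (Using <= instead of < also stops the loop on flat plateaus.)
def hill_climbing(start, value, neighbours):
    current = start
    while True:
        best = max(neighbours(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current          # no better neighbour: a (local) maximum
        current = best
```

On the landscape [0, 1, 2, 3, 2, 1, 5, 0], climbing from index 0 stops at index 3 (value 3) even though index 6 holds the global maximum 5 — exactly the local-maximum trap described above.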
Figure 10.7 shows the simulated annealing algorithm. It is quite similar to hill climbing; instead of
picking the best move, however, it picks a random move. If the move improves the situation, it
is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1.
The probability decreases exponentially with the "badness" of the move – the amount ΔE by which
the evaluation is worsened. The probability also decreases as the "temperature" T goes down: bad
moves are more likely to be allowed at the start, when the temperature is high, and they become more
unlikely as T decreases. One can prove that if the schedule lowers T slowly enough, the algorithm
will find a global optimum with probability approaching 1.
Simulated annealing was first used extensively to solve VLSI layout problems. It has been applied
widely to factory scheduling and other large-scale optimization tasks.
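A minimal sketch of the acceptance rule just described: improving moves are always taken, worsening moves are taken with probability exp(ΔE/T). The geometric cooling schedule and the parameter values are assumptions for illustration; the text does not fix a schedule:

```python
import math
import random

# Sketch of simulated annealing. A random neighbour is always accepted
# if it improves the value; otherwise it is accepted with probability
# exp(delta / T), which shrinks as the move gets worse or as T cools.
def simulated_annealing(start, value, neighbours,
                        t0=1.0, cooling=0.95, steps=1000):
    current = start
    t = t0
    for _ in range(steps):
        if t <= 1e-9:                  # temperature effectively zero
            break
        nxt = random.choice(neighbours(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                   # geometric cooling (an assumption)
    return current
```

Early on (large T) even bad moves are often accepted, letting the search escape local maxima; as T drops the behaviour converges toward plain hill climbing.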
We have considered algorithms that work only in discrete environments, but real-world
environments are continuous.
Local search amounts to maximizing a continuous objective function in a multi-dimensional vector
space.
This is hard to do in general, and it often resists a closed-form solution. A common approach is to:
Discretize the space near each state
Apply a discrete local search strategy (e.g., stochastic hill climbing, simulated annealing)
OPEN is a priority queue of nodes that have been evaluated by the heuristic function but which
have not yet been expanded into successors. The most promising nodes are at the front.
Algorithm:
Example:
The A* search algorithm (pronounced "Ay-star") is a tree search algorithm that finds a path
from a given initial node to a given goal node (or one passing a given goal test). It employs a
"heuristic estimate" which ranks each node by an estimate of the best route that goes through that
node. It visits the nodes in order of this heuristic estimate.
Similar to greedy best-first search, but more accurate, because A* also takes into account the cost
already incurred: f = g + h, where g is a measure of the distance/cost to go from the initial node
to the current node, and h is an estimate of the distance/cost from the current node to the goal node.
Thus f is an estimate of how long it takes to go from the initial node to the solution.
Algorithm:
save n in CLOSED
Insert m in OPEN
Move m to OPEN.
Description:
A* begins at a selected node. Applied to this node is the "cost" of entering this node (usually zero
for the initial node). A* then estimates the distance from this node to the goal; the exact way of
doing this depends on the problem at hand. For each successive node, A* calculates the "cost" of
entering the node and saves it with the node. This cost is calculated from the cumulative sum of
costs stored with its ancestors, plus the cost of the operation which reached this new node.
The algorithm also maintains a 'closed' list of nodes whose adjoining nodes have been checked.
If a newly generated node is already in this list with an equal or lower cost, no further processing
is done on that node or with the path associated with it. If a node in the closed list matches the
new one, but has been stored with a higher cost, it is removed from the closed list, and processing
continues on the new node.
Next, an estimate of the new node's distance to the goal is added to the cost to form the heuristic
for that node. This is then added to the 'open' priority queue, unless an identical node is found
there.
Once the above three steps have been repeated for each new adjoining node, the original node
taken from the priority queue is added to the 'closed' list. The next node is then popped from the
priority queue and the process is repeated.
The heuristic costs from each city to Bucharest:
The algorithm A* is admissible. This means that provided a solution exists, the first
solution found by A* is an optimal solution. A* is admissible under the following conditions:
Heuristic function: for every node n , h(n) ≤ h*(n) .
A* is also complete.
A* is optimally efficient for a given heuristic.
A* is much more efficient than uninformed search.
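The description above can be condensed into a short sketch; OPEN is a priority queue ordered by f = g + h, and the successors-function interface with (neighbour, step cost) pairs is an illustrative choice:

```python
import heapq

# Sketch of A*: OPEN is a priority queue ordered by f = g + h.
# successors(n) yields (neighbour, step_cost) pairs; h is the heuristic.
def a_star(start, goal, successors, h):
    open_heap = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # cheapest g found per node
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nxt, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # improved route to nxt
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

With an admissible heuristic (h(n) ≤ h*(n) for every n), the first goal popped from OPEN is an optimal solution, matching the admissibility condition stated above.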
Iterative Deepening A* Algorithm:
Algorithm:
1) Set C, the cost threshold, to f(initial state).
2) Initialize the list L to contain only the initial node.
3) If L is empty, set C to the smallest f-value that exceeded C in this iteration and restart from 2);
else
extract a node n from the front of L
4) If n is a goal node,
SUCCEED and return the path from the initial state to n
5) Remove n from L. If the level is smaller than C, insert at the front of L all the children n' of n
with f(n') ≤ C
6) Goto 3)
IDA* is complete and optimal. Space usage is linear in the depth of the solution. Each iteration is a
depth-first search, and thus it does not require a priority queue.
Iterative deepening A* (IDA*) eliminates the memory constraints of A* search algorithm without
sacrificing solution optimality.
Each iteration of the algorithm is a depth-first search that keeps track of the cost, f(n) = g(n) +
h(n), of each node generated.
As soon as a node is generated whose cost exceeds a threshold for that iteration, its path is cut
off, and the search backtracks before continuing.
The cost threshold is initialized to the heuristic estimate of the initial state, and in each successive
iteration it is increased to the total cost of the lowest-cost node that was pruned during the previous
iteration.
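The threshold mechanism just described can be sketched as follows; each call to the inner search is a plain depth-first search that reports the smallest f-value it had to prune, which becomes the next threshold (the successors/h interface mirrors the A* description above):

```python
# Sketch of IDA*: cost-bounded depth-first iterations on f(n) = g(n) + h(n).
def ida_star(start, goal, successors, h):
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                   # pruned: report the overflowing cost
        if node == goal:
            return path
        minimum = float("inf")
        for nxt, cost in successors(node):
            if nxt not in path:        # avoid cycles along the current path
                result = search(path + [nxt], g + cost, bound)
                if isinstance(result, list):
                    return result      # goal path found
                minimum = min(minimum, result)
        return minimum

    bound = h(start)                   # threshold starts at h(initial state)
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None                # nothing left to explore
        bound = result                 # lowest pruned f becomes new threshold
```

Only the current path is stored, so memory use is linear in the solution depth, while the growing threshold preserves the optimality of A* given an admissible h.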