AI Unit 1
Introduction: History of AI – Problem spaces and search – Heuristic search techniques – Best-first search – Problem reduction – Constraint satisfaction – Means-ends analysis. Intelligent agents: Agents and environment
1. Define AI as "Systems that think like humans"?
The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, and learning.
2. Define AI as "Systems that think rationally"?
The study of mental faculties through the use of computational models. The study of
the computations that make it possible to perceive, reason, and act.
3. Define AI as "Systems that act like humans"?
The art of creating machines that perform functions that require intelligence when
performed by people.
The study of how to make computers do things at which, at the moment, people are
better.
4. Define AI as "Systems that act rationally"?
Computational intelligence is the study of design of intelligent agents.
5. What is the Turing Test?
The test he proposed is that the computer should be interrogated by a human via a teletype, and it passes the test if the interrogator cannot tell whether there is a computer or a human at the other end.
6. What are the things the computer needs to act as a human?
To act as a human, the computer would need: natural language processing to communicate; knowledge representation to store what it knows; automated reasoning to draw conclusions; and machine learning to adapt to new circumstances.
8. Explain the cognitive modelling approach for "Thinking humanly"?
If we say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
There are two ways to do this: through introspection—trying to catch our own thoughts as
they go by—or through psychological experiments.
12. What are all the fields from which AI can be inherited?
AI draws on the following fields:
Psychology
Linguistics
Philosophy
Mathematics
Computer science
Economics
Neuroscience
Control theory and cybernetics
14. What is the first successful rule-based expert system? Explain.
The first successful knowledge-intensive system is MYCIN.
MYCIN was used to diagnose infectious blood diseases.
17. Give the sensors and actuators for human, robotic and software agents?
Robotic agent:
Sensors: cameras and infrared range finders
Actuators: various motors
Software agent:
Sensors: keystrokes on the keyboard
Actuators: output displayed on the screen
Human agent:
Sensors: eyes, ears, and other organs
Actuators: hands, legs, mouth, and other body parts
18. How does an agent interact with its environment through sensors and effectors?
Figure: The agent receives percepts from the environment through its sensors, and delivers actions back to the environment through its actuators.
19. What is an ideal rational agent?
For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
20. What are the four things a rational agent depends on?
The rational agent depends on four things:
• The performance measure that defines the degree of success. (P)
• What the agent knows about the environment. (E)
• The actions that the agent can perform. (A)
• Everything that the agent has perceived so far; we will call this complete perceptual history the percept sequence. (S)
The agent function maps percept sequences to actions; an agent program is a concrete implementation of the agent function, and it runs on the agent architecture.
If the next state of the environment is completely determined by the current state
and the actions selected by the agents, then we say the environment is deterministic. In
principle, an agent need not worry about uncertainty in an accessible, deterministic
environment. If the environment is inaccessible, however, then it may appear to be
nondeterministic.
If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise it is static. Static environments are easy to
deal with because the agent need not keep looking at the world while it is deciding on an
action, nor need it worry about the passage of time. If the environment does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic.
If there are a limited number of distinct, clearly defined percepts and actions we
say that the environment is discrete. Chess is discrete—there are a fixed number of possible
moves on each turn. Taxi driving is continuous—the speed and location of the taxi and the
other vehicles sweep through a range of continuous values.
32. What is utility based agent?
Goals alone are not really enough to generate high-quality behavior. For example,
there are many action sequences that will get the taxi to its destination, thereby achieving the
goal, but some are quicker, safer, more reliable, or cheaper than others.
The customary terminology is to say that if one world state is preferred to another,
then it has higher utility for the agent.
Utility is therefore a function that maps a state onto a real number, which describes the associated degree of happiness.
function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation
    state ← UPDATE-STATE(state, percept)
    if seq is empty then do
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
    action ← FIRST(seq)
    seq ← REST(seq)
    return action
Depth first search
Depth limited search
Iterative deepening depth first search
Bidirectional search
52. What is A* search?
It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal:
f(n) = g(n) + h(n).
Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost
of the cheapest path from n to the goal, we have
f (n) = estimated cost of the cheapest solution through n
The theory and development of computer systems able to perform tasks normally requiring
human intelligence, such as visual perception, speech recognition, decision-making, and translation
between languages.
DFS requires less memory, since only the nodes on the current path are stored.
DFS may find the solution without examining much of the search space at all.
57. What are the components of Production System?(Nov-Dec’14)
Set of Rules
Control Strategy
Knowledge Database
Rule Applier
A heuristic function is a function that maps from a problem state description to a measure of desirability, usually represented as a number. A well-designed heuristic function can play an important part in efficiently guiding a search process towards a solution.
Unit I – 11 Marks
– "Can machines think?" "Can machines behave intelligently?"
– Operational test for intelligent behavior: the Imitation Game
– Predicted that by 2000, a machine might have a 30% chance of fooling a lay person for
5 minutes
– Anticipated all major arguments against AI in the following 50 years
– Suggested major components of AI: knowledge, reasoning, language understanding,
learning
Turing test
– Three rooms contain a person, a computer, and an interrogator.
– The interrogator can communicate with the other two by teleprinter.
– The interrogator tries to determine which is the person and which is the machine.
– The machine tries to fool the interrogator into believing that it is the person.
– If the machine succeeds, then we conclude that the machine can think.
– A program that succeeded would need to be capable of:
– natural language understanding & generation;
– knowledge representation;
– learning;
– automated reasoning.
– ELIZA: A program that simulated a psychotherapist interacting with a patient. It fooled some users into believing they were talking to a human, although it is not generally considered to have passed the Turing Test.
– For any given class of environments and tasks, we seek the agent (or class of agents)
with the best performance
– Caveat: computational limitations make perfect rationality unachievable
Design the best program for the given machine resources
History of AI
– The birth of AI (1943 – 1956)
– Pitts and McCulloch (1943): simplified mathematical model of neurons (resting/firing
states) can realize all propositional logic primitives (can compute all Turing
computable functions)
– Alan Turing: Turing machine and Turing test (1950)
– Claude Shannon: information theory; possibility of chess playing computers
– Tracing back to Boole, Aristotle, Euclid (logic, syllogisms)
– Early enthusiasm (1952 – 1969)
– 1956 Dartmouth conference
o John McCarthy (Lisp);
o Marvin Minsky (first neural network machine);
o Allen Newell and Herbert Simon (GPS);
State of the art
Route planning
Automated scheduling of actions in spacecraft
Game playing:
IBM's Deep Blue defeated G. Kasparov (the human world champion) (1997)
The program FRITZ, running on an ordinary PC, drew with V. Kramnik (the human world champion) (2002)
Autonomous control:
Automated car steering
The Mars mission
Diagnosis:
The literature describes a case where a leading expert was convinced by a computer diagnosis
Logistics planning:
The Defense Advanced Research Projects Agency stated that this single application more than paid back DARPA's 30-year investment in AI
Robotics:
Microsurgery
RoboCup: by the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champion team
Agents
– An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators
– Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other
body parts for actuators
– Robotic agent: cameras and infrared range finders for sensors; various motors for
actuators
Agents and environments
– The agent function maps from percept histories to actions: [f: P* → A]
– The agent program runs on the physical architecture to produce f
– Agent = architecture + program
Vacuum-cleaner world
Rational agents
– An agent should strive to "do the right thing", based on what it can perceive and the
actions it can perform. The right action is the one that will cause the agent to be most
successful
– Performance measure: An objective criterion for success of an agent's behavior
– E.g., performance measure of a vacuum-cleaner agent could be amount of dirt cleaned
up, amount of time taken, amount of electricity consumed, amount of noise generated,
etc.
– Rational Agent: For each possible percept sequence, a rational agent should select an
action that is expected to maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in knowledge the agent has.
– Rationality is distinct from omniscience (all-knowing with infinite knowledge)
– Agents can perform actions in order to modify future percepts so as to obtain useful
information (information gathering, exploration)
– An agent is autonomous if its behavior is determined by its own experience (with
ability to learn and adapt)
PEAS
– PEAS: Performance measure, Environment, Actuators, Sensors
– Must first specify the setting for intelligent agent design
– Consider, e.g., the task of designing an automated taxi driver:
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors,
keyboard
Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent.
Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
Static (vs. dynamic): The environment is unchanged while an agent is deliberating.
Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent): An agent operating by itself in an environment.
Drawbacks of a table-lookup agent:
– Huge table
– Takes a long time to build the table
– No autonomy
– Even with learning, it needs a long time to learn the table entries
Agent types
Four basic types in order of increasing generality:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Percept sequence                 Action
[A, Dirty]                       Suck
[B, Clean]                       Left
[A, Clean], [A, Clean]           Right
[A, Clean], [A, Dirty]           Suck
The vacuum agent program is very small. In general, some processing of the sensory input is needed to establish the condition part of a condition-action rule; for example, if car-in-front-is-braking then initiate braking. The following figure shows how the condition-action rules allow the agent to make the connection from percept to action.
function SIMPLE-REFLEX-AGENT(percept) returns an action
    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action

INTERPRET-INPUT generates an abstracted description of the current state from the percept. RULE-MATCH returns the first rule in the set of rules that matches the given state description.
This agent will work only if the correct decision can be made on the basis of the current percept alone, i.e., only if the environment is fully observable.
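To make this concrete, here is a minimal Python sketch of a simple reflex agent for the two-location vacuum world above (an illustration, not from the text; the rule set mirrors the percept table):

    # Simple reflex vacuum agent: the action depends only on the
    # current percept (location, status), never on history.
    def reflex_vacuum_agent(percept):
        location, status = percept            # e.g. ('A', 'Dirty')
        if status == 'Dirty':                 # condition-action rules
            return 'Suck'
        elif location == 'A':
            return 'Right'
        else:                                 # location == 'B'
            return 'Left'

    # Example: reflex_vacuum_agent(('A', 'Dirty')) returns 'Suck'.

The agent consults no internal state: exactly as stated above, this works only because the vacuum environment is fully observable.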
Updating this internal state information requires two kinds of knowledge to be encoded in
the agent program.
The following figure shows the structure of the reflex agent with internal state,
showing how the current percept is combined with the old internal state to generate the
updated description of the current state.
The agent program is shown below:
function REFLEX-AGENT-WITH-STATE(percept) returns an action
    state ← UPDATE-STATE(state, percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action
Goal-based agents
Here, along with the current-state description, the agent needs some sort of goal information that describes situations that are desirable – for example, being at the passenger's destination.
Utility-based agents
Goals alone are not enough to generate high-quality behavior in most environments. A more
general performance measure should allow a comparison of different world states according
to exactly how happy they would make the agent if they could be achieved.
A utility function maps a state onto a real number, which describes the associated degree of
happiness. The utility-based agent structure appears in the following figure.
Learning agents
It allows the agent to operate in initially unknown environments and to become more
competent than its initial knowledge alone might allow. A learning agent can be divided into
four conceptual components, as shown in figure:
The learning element uses feedback from the critic on how the agent is doing and determines
how the performance element should be modified to do better in the future. The critic tells
the learning element how well the agent is doing with respect to a fixed performance
standard. The critic is necessary because the percepts themselves provide no indication of
the agent’s success. The last component of the learning agent is the problem generator. It is
responsible for suggesting actions that will lead to new and informative experiences.
5. Discuss Problem spaces and search (Apr – May’14)(Nov’15)
Solving problems by searching
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms
PROBLEM-SOLVING AGENTS
A problem-solving agent is a goal-based agent that decides what to do by finding sequences of actions that lead to desirable states. As an example, consider an agent in the city of Arad,
Romania, enjoying a touring holiday. Goal formulation, based on the current situation and
the agent’s performance measure, is the first step in problem solving. We will consider a goal
to be a set of world states- exactly those states in which the goal is satisfied.
Problem formulation is the process of deciding what actions and states to consider,
given a goal. Let us assume that the agent will consider actions at the level of driving from
one major town to another. Our agent has now adopted the goal of driving to Bucharest, and
is considering where to go from Arad. There are three roads out of Arad. The agent will not
know which of its possible actions is best, because it does not know enough about the state
that results from taking each action. If the agent has a map, it provides the agent with
information about the states it might get itself into, and the actions it can take.
An agent with several immediate options of unknown value can decide what to do by
first examining different possible sequences of actions that lead to states of known value, and
then choosing the best sequence. The process of looking for such a sequence is called a
search. A search algorithm takes a problem as input and returns a solution in the form of an
action sequence. Once a solution is found, the actions it recommends can be carried out. This
is called the execution phase. The design for such an agent is shown in the following
function:
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
Formulate goal: be in Bucharest
Formulate problem: states: various cities
actions: drive between cities
Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
states?
actions?
goal test?
path cost?
Vacuum world state space graph
states?: integer dirt and robot locations
actions?: Left, Right, Suck
goal test?: no dirt at all locations
path cost?: 1 per action
Example: The 8-puzzle
states?: locations of the tiles
actions?: move blank left, right, up, down
goal test?: = goal state (given)
path cost?: 1 per move

Example: robotic assembly
states?: real-valued coordinates of robot joint angles and parts of the object to be assembled
actions?: continuous motions of robot joints
goal test?: complete assembly
path cost?: time to execute
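These formulations translate directly into code. The sketch below (the class name RouteProblem and the truncated road map are illustrative, not from the text; distances follow the Romania example) represents a problem by its initial state, successor function, and goal test:

    class RouteProblem:
        # A search problem: initial state, successor function, goal test.
        def __init__(self, initial, goal, roads):
            self.initial = initial
            self.goal = goal
            self.roads = roads                  # dict: city -> {neighbor: km}

        def successors(self, state):
            # Yield (action, next state, step cost) triples.
            for city, dist in self.roads.get(state, {}).items():
                yield ('go ' + city, city, dist)

        def goal_test(self, state):
            return state == self.goal

    # Fragment of the Romania road map (step costs in km).
    roads = {
        'Arad':    {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
        'Sibiu':   {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
        'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
        # ... remaining cities omitted
    }
    problem = RouteProblem('Arad', 'Bucharest', roads)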
The following figure shows some of the expansions in the search tree for finding a
route from Arad to Bucharest. The root of the search tree is a search node corresponding to
the initial state, Arad. The first step is to test whether this is a goal state. If this is not the goal
state, expand the current state by applying the successor function to the current state,
thereby generating a new set of states.
The choice of which state to expand is determined by the search strategy. The general tree-
search algorithm is given below:
Assume that a node is a data structure with five components:
STATE: the state in the state space to which the node corresponds
PARENT-NODE: the node in the search tree that generated this node
ACTION: the action that was applied to the parent to generate the node
PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state
to the node, as indicated by the parent pointers; and
DEPTH: the number of steps along the path from the initial state.
The collection of nodes is implemented as a queue. The operations on a queue are as follows:
MAKE-QUEUE(element, …) creates a queue with the given element(s)
EMPTY?(queue) returns true only if there are no more elements in the queue
FIRST(queue) returns the first element of the queue
REMOVE-FIRST (queue) returns FIRST (queue) and removes it from the queue.
INSERT (element, queue) inserts an element into the queue and returns the resulting
queue.
INSERT-ALL (elements, queue) inserts a set of elements into the queue and returns
the resulting queue.
With these definitions, the more formal version of the general tree search algorithm is
shown below:
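In Python, the same scheme can be sketched as follows (illustrative, reusing the RouteProblem interface from the earlier sketch; the fringe's ordering policy is what distinguishes one search strategy from another):

    from collections import deque

    def tree_search(problem):
        # Each fringe entry pairs a state with the action path that reached it.
        fringe = deque([(problem.initial, [])])
        while fringe:
            state, path = fringe.popleft()      # REMOVE-FIRST(fringe)
            if problem.goal_test(state):
                return path                     # solution: an action sequence
            for action, next_state, _cost in problem.successors(state):
                fringe.append((next_state, path + [action]))   # INSERT-ALL
        return None                             # failure

    # popleft() treats the fringe as a FIFO queue (breadth-first search);
    # using pop() instead would make it LIFO (depth-first search).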
– The EXPAND function creates new nodes, filling in the various fields and using the successor function of the problem to create the corresponding states.
Measuring problem-solving performance
– A search strategy is defined by picking the order of node expansion
– Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Algorithm:
Figure: Breadth-first search on a simple binary tree. At each stage, the node to be expanded next is indicated by a marker.
Suppose that the root generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level, and so on. Now suppose that the solution is at depth d.
Uniform-cost search
– Expand the least-cost unexpanded node
– Implementation: fringe = queue ordered by path cost
– Equivalent to breadth-first if step costs are all equal
– Complete? Yes, if step cost >= ε
– Time? # of nodes with g <= cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
Depth-first search
– Expand deepest unexpanded node
– Implementation:
– fringe = LIFO queue, i.e., put successors at front
Algorithm:
Figure: DFS on a binary tree. Nodes that have been expanded and have no descendants in
the fringe can be removed from memory; these are shown in black. Nodes at depth 3 are
assumed to have no successors and M is the only goal node.
Depth-limited search
The problem of unbounded trees can be alleviated by supplying DFS with a predetermined depth limit l:
= depth-first search with depth limit l, i.e., nodes at depth l have no successors
Depth-limited search will also be nonoptimal if we choose l < d. Its time complexity is O(b^l) and its space complexity is O(bl).
Recursive implementation:
Depth-limited search can terminate with two kinds of failure: the standard failure
value indicates no solution; the cutoff value indicates no solution within the depth limit.
Iterative deepening combines the benefits of DFS and BFS. Like DFS, its memory
requirements are very modest: O(bd). Like BFS, it is complete when the branching factor is
finite and optimal when the path cost is a non-decreasing function of the depth of the node.
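A minimal Python sketch of iterative deepening (illustrative; it reuses the successor interface from the earlier sketches and returns the path of states, 'cutoff', or None for failure):

    def depth_limited_search(problem, state, limit):
        if problem.goal_test(state):
            return [state]
        if limit == 0:
            return 'cutoff'
        cutoff_occurred = False
        for _action, child, _cost in problem.successors(state):
            result = depth_limited_search(problem, child, limit - 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return [state] + result        # prepend the current state
        return 'cutoff' if cutoff_occurred else None

    def iterative_deepening_search(problem, max_depth=50):
        # Run depth-limited search with limits 0, 1, 2, ...
        for limit in range(max_depth + 1):
            result = depth_limited_search(problem, problem.initial, limit)
            if result != 'cutoff':
                return result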
Number of nodes generated in a depth-limited search to depth d with branching factor b:
NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
NIDS = (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
For b = 10, d = 5:
NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Summary of algorithms
Bidirectional Search
The idea behind bi-directional search is to run two simultaneous searches – one forward
from the initial state and the other backward from the goal, stopping when the two searches
meet in the middle.
Bidirectional search is implemented by having one or both of the searches check each node before it is expanded to see if it is in the fringe of the other search tree; if so, a solution has been found. Checking a node for membership in the other search tree can be done in constant time with a hash table, so the time complexity of bidirectional search is O(b^(d/2)). At least one of the search trees must be kept in memory so that the membership check can be done, hence the space complexity is also O(b^(d/2)), which is the weakness of the algorithm. The algorithm is complete and optimal if both searches are breadth-first.
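The membership check is the heart of the method. The following Python sketch shows the idea (illustrative; it only reports whether the frontiers meet, since recovering the actual path would additionally require parent pointers on both sides, and it assumes actions are reversible so one neighbors function serves both directions):

    def bidirectional_search(start, goal, neighbors):
        if start == goal:
            return True
        seen_fwd, seen_bwd = {start}, {goal}
        frontier_fwd, frontier_bwd = {start}, {goal}
        while frontier_fwd and frontier_bwd:
            # Always expand the smaller frontier one level.
            if len(frontier_fwd) > len(frontier_bwd):
                frontier_fwd, frontier_bwd = frontier_bwd, frontier_fwd
                seen_fwd, seen_bwd = seen_bwd, seen_fwd
            next_frontier = set()
            for state in frontier_fwd:
                for n in neighbors(state):
                    if n in seen_bwd:          # constant-time hash lookup:
                        return True            # the two searches have met
                    if n not in seen_fwd:
                        seen_fwd.add(n)
                        next_frontier.add(n)
            frontier_fwd = next_frontier
        return False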
– Best-first search
– Greedy best-first search
– A* search
– Heuristics
– Local search algorithms
– Hill-climbing search
– Simulated annealing search
– Local beam search
– Genetic algorithms
Best-first search
Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function f(n). The node with the lowest evaluation is selected for expansion, because the evaluation measures distance to the goal. It can be implemented using a priority queue, a data structure that will maintain the fringe in ascending order of f-values.
Algorithm:
Greedy best-first search
h(n) = estimated cost of the cheapest path from node n to a goal node.
For example, in Romania, one might estimate the cost of the cheapest path from Arad to Bucharest via the straight-line distance from Arad to Bucharest, which is shown below:
Romania with step costs in km
The progress of a greedy best-first search using hSLD to find a path from Arad to Bucharest is
shown in the following figure:
The first node to be expanded from Arad will be Sibiu, because it is closer to
Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras,
because it is closest. Fagaras in turn generates Bucharest, which is the goal. Greedy best-first
search using hSLD finds a solution without ever expanding a node that is not on the solution
path; hence its search cost is minimal.
A* search: Minimizing the total estimated solution cost
– Idea: avoid expanding paths that are already expensive
– Evaluation function f(n) = g(n) + h(n)
– g(n) = cost so far to reach n
– h(n) = estimated cost from n to goal
– f(n) = estimated total cost of path through n to goal
Algorithm:
4. Expand n, generating all of its successors n′, and place n on CLOSED. For every successor n′, if n′ is not already on OPEN or CLOSED, attach a back-pointer to n, compute f*(n′), and place it on OPEN.
5. Each n′ that is already on OPEN or CLOSED should have its back-pointer adjusted to reflect the lowest-g*(n′) path. If an n′ was on CLOSED and its pointer was changed, remove it and place it on OPEN.
6. Return to step 2.
The following figure shows an A* tree search for Bucharest.
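In code, A* can be sketched as follows (a compact Python version using a priority queue and the RouteProblem interface from the earlier sketches; an illustrative alternative to the OPEN/CLOSED formulation above, not a replacement for it):

    import heapq

    def astar_search(problem, h):
        # Fringe entries are (f, g, state, path); heapq pops the lowest f.
        fringe = [(h(problem.initial), 0, problem.initial, [])]
        while fringe:
            f, g, state, path = heapq.heappop(fringe)
            if problem.goal_test(state):
                return path + [state], g       # solution path and its cost
            for _action, child, cost in problem.successors(state):
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(child), g2, child, path + [state]))
        return None

    # Straight-line distances to Bucharest (values from the hSLD table).
    sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Timisoara': 329,
           'Zerind': 374, 'Rimnicu Vilcea': 193, 'Bucharest': 0}
    result = astar_search(problem, lambda city: sld.get(city, 0))
    # -> (['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], 450) on the map fragment

Setting h to zero everywhere turns this into uniform-cost search; ordering the fringe by h alone would give greedy best-first search.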
Admissible heuristics
– A heuristic h(n) is admissible if for every node n,
h(n) <= h*(n), where h*(n) is the true cost to reach the goal state from n.
– An admissible heuristic never overestimates the cost to reach the goal, i.e., it is
optimistic
– Example: hSLD(n) (never overestimates the actual road distance)
– Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an
unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
• f(G2) = g(G2) since h(G2) = 0
• g(G2) > g(G) since G2 is suboptimal
• f(G) = g(G) since h(G) = 0
• f(G2) > f(G) from above
• h(n) <= h*(n) since h is admissible
• g(n) + h(n) <= g(n) + h*(n) = g(G) = f(G), i.e., f(n) <= f(G)
• Hence f(G2) > f(G) >= f(n), and A* will never select G2 for expansion
Consistent heuristics
– A heuristic is consistent if, for every node n and every successor n' of n generated by any action a,
h(n) <= c(n,a,n') + h(n')
– If h is consistent, we have
f(n') = g(n') + h(n')
= g(n) + c(n,a,n') + h(n')
>= g(n) + h(n)
= f(n)
i.e., f(n) is non-decreasing along any path.
Optimality of A*
– A* expands nodes in order of increasing f value
– Gradually adds "f-contours" of nodes
– Contour i has all nodes with f=fi, where fi < fi+1
Properties of A*
– Complete? Yes (unless there are infinitely many nodes with f <= f(G) )
– Time? Exponential
– Space? Keeps all nodes in memory
– Optimal? Yes
Heuristic Functions
E.g., for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = the sum of the distances of the tiles from their goal positions. This is sometimes
called the city block distance or Manhattan distance
(i.e., no. of squares from desired location of each tile)
• h1(S) = 8: all 8 tiles are out of position, so the start state has h1 = 8. h1 is an admissible heuristic, because any tile that is out of place must be moved at least once.
• h2(S) = 3+1+2+2+2+3+3+2 = 18. h2 is also admissible, because all any move can do is move one tile one step closer to the goal.
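Both heuristics are easy to compute. A Python sketch (assuming states are 9-tuples read row by row with 0 for the blank, an illustrative encoding):

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def h1(state):
        # Number of misplaced tiles (the blank is not counted).
        return sum(1 for i, tile in enumerate(state)
                   if tile != 0 and tile != GOAL[i])

    def h2(state):
        # Manhattan distance: sum of row and column offsets of each tile.
        total = 0
        for i, tile in enumerate(state):
            if tile == 0:
                continue
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
        return total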
Relaxed problems
– A problem with fewer restrictions on the actions is called a relaxed problem
– The cost of an optimal solution to a relaxed problem is an admissible heuristic for the
original problem
– If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives
the shortest solution
– If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the
shortest solution
Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal
To understand local search, we will find it very useful to consider the state space landscape
as shown in the following figure:
Hill-climbing search
– "Like climbing Everest in thick fog with amnesia"
The hill-climbing search algorithm is shown in the following function. It is simply a loop that
continually moves in the direction of increasing value- that is, uphill. It terminates when it
reaches a “peak” where no neighbor has a higher value.
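Since that function is not reproduced here, the following Python sketch conveys the same loop (illustrative; it assumes the problem exposes neighbors(state) and value(state)):

    def hill_climbing(problem):
        # Move to the best neighbor until no neighbor is better;
        # the result may be only a local maximum.
        current = problem.initial
        while True:
            neighbors = list(problem.neighbors(current))
            if not neighbors:
                return current
            best = max(neighbors, key=problem.value)
            if problem.value(best) <= problem.value(current):
                return current                 # reached a peak
            current = best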
Problem: depending on initial state, can get stuck in local maxima
• h = number of pairs of queens that are attacking each other, either directly or
indirectly
• h = 17 for the above state
A local minimum in the 8-queens state space; the state has h=1 but every successor has a
higher cost.
Hill climbing is sometimes called greedy local search because it grabs a good neighbor state without thinking ahead about where to go next. Hill climbing often gets stuck for the following reasons:
Local Maxima: a local maximum is a peak that is higher than each of its neighboring
states, but lower than the global maximum.
Ridges: Ridges result in a sequence of local maxima that is very difficult for greedy
algorithms to navigate.
Plateaux: a plateau is an area of the state space landscape where the evaluation
function is flat. It can be a flat local maximum, from which no uphill exit exists, or a
shoulder, from which it is possible to make progress.
Simulated annealing search
– Idea: escape local maxima by allowing some "bad" moves but gradually decrease their
frequency
A hill climbing algorithm that never makes “downhill” moves towards states with lower
value is guaranteed to be incomplete, because it can get stuck on a local maximum. In
contrast, a purely random walk – that is, moving to a successor chosen uniformly at random
from the set of successors – is complete, but extremely inefficient. Simulated annealing is the
combination of hill climbing with a random walk.
The innermost loop of the simulated-annealing algorithm shown below is quite similar to hill
climbing.
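A Python sketch of that loop (illustrative; it uses the same assumed neighbors/value interface as the hill-climbing sketch, with schedule(t) giving the temperature at time t):

    import math
    import random

    def simulated_annealing(problem, schedule):
        current = problem.initial
        t = 1
        while True:
            T = schedule(t)
            if T <= 0:
                return current                 # frozen: stop
            successor = random.choice(list(problem.neighbors(current)))
            delta_e = problem.value(successor) - problem.value(current)
            # Always accept improvements; accept a worse move with
            # probability e^(delta_e / T), which shrinks as T falls.
            if delta_e > 0 or random.random() < math.exp(delta_e / T):
                current = successor
            t += 1

    # Example schedule (assumed): geometric cooling cut off at t = 10000.
    # schedule = lambda t: 0 if t > 10000 else 100 * (0.95 ** t)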
– Widely used in VLSI layout, airline scheduling, etc.
When a problem can be divided into a set of subproblems, where each subproblem can be solved separately and a combination of these solutions solves the original problem, AND-OR graphs or AND-OR trees are used for representing the solution. The decomposition of the problem, or problem reduction, generates AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved for the arc to point to a solution. Several arcs may also emerge from a single node, indicating alternative ways of solving the problem; hence the graph is known as an AND-OR graph rather than simply an AND graph. The figure shows an AND-OR graph.
In figure (a) the top node A has been expanded, producing two arcs, one leading to B and one leading to C-D. The numbers at each node represent the value of f' at that node (the cost of getting to the goal state from the current state). For simplicity, it is assumed that every operation (i.e., applying a rule) has unit cost, i.e., each arc with a single successor has a cost of 1, and so does each component of an AND arc. With the information available so far, C appears to be the most promising node to expand, since its f' = 3 is the lowest; but going through B would be better, since to use C we must also use D, and the cost would be 9 (3+4+1+1). Through B it would be 6 (5+1).
Thus the choice of the next node to expand depends not only on its f' value but also on whether that node is part of the current best path from the initial node. Figure (b) makes this clearer. In the figure, the node G appears to be the most promising node, with the least f' value. But G is not on the current best path, since to use G we must use the arc G-H with a cost of 9, and this again demands that other arcs be used (with a cost of 27). The path from A through B, E-F is better, with a total cost of 18 (17+1). Thus we can see that to search an AND-OR graph, the following three things must be done.
1. Traverse the graph starting at the initial node and following the current best path, and accumulate the set of nodes that are on the path and have not yet been expanded.
2. Pick one of these unexpanded nodes and expand it. Add its successors to the graph and compute f' (the cost of the remaining distance) for each of them.
3. Change the f' estimate of the newly expanded node to reflect the new information produced by its successors. Propagate this change backward through the graph, deciding at each node which of its successor arcs is part of the current best path.
This backward propagation of revised cost estimates is not necessary in the A* algorithm. It is needed in AO* because expanded nodes are re-examined so that the current best path can be selected. The working of the AO* algorithm is illustrated in the figure as follows:
Referring to the figure: the initial node is expanded and D is marked initially as the promising node. D is expanded, producing an AND arc E-F. The f' value of D is updated to 10. Going backwards we can see that the AND arc B-C is better, so it is now marked as the current best path; B and C have to be expanded next. This process continues until a solution is found or all paths have led to dead ends, indicating that there is no solution. In the A* algorithm, the path from one node to another is always the one of lowest cost, and it is independent of the paths through other nodes.
The algorithm for performing a heuristic search of an AND-OR graph is given below. Unlike the A* algorithm, which used two lists, OPEN and CLOSED, the AO* algorithm uses a single structure G. G represents the part of the search graph generated so far. Each node in G points down to its immediate successors and up to its immediate predecessors, and also carries the value of h', the cost of a path from itself to a set of solution nodes. The cost of getting from the start node to the current node, g, is not stored as in the A* algorithm, because it is not possible to compute a single such value: there may be many paths to the same state. In the AO* algorithm, h' serves as the estimate of the goodness of a node. A threshold value called FUTILITY is also used: if the estimated cost of a solution exceeds FUTILITY, the search is abandoned as too expensive to be practical.
AO* ALGORITHM:
1. Let G consist only of the node representing the initial state; call this node INIT. Compute h'(INIT).
2. Until INIT is labeled SOLVED or h'(INIT) becomes greater than FUTILITY, repeat the following procedure:
(I) Trace the marked arcs from INIT and select an unexpanded node NODE.
(II) Generate the successors of NODE. If there are no successors, then assign FUTILITY as h'(NODE); this means that NODE is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE, do the following:
(a) Add SUCCESSOR to G and compute its h' value.
(b) If SUCCESSOR is a terminal node, mark it SOLVED and assign zero to its h' value.
(III) Propagate the newly discovered information up the graph by doing the following. Let S be a set of nodes that have been marked SOLVED. Initialize S to NODE. Until S is empty, repeat the following procedure:
(a) Select from S a node none of whose descendants in G occurs in S; call it CURRENT, and remove it from S.
(b) Compute the h' of each of the arcs emerging from CURRENT. Assign the minimum h' to CURRENT.
(c) Mark the minimum-cost path as the best out of CURRENT.
(d) Mark CURRENT SOLVED if all of the nodes connected to it through the newly marked arcs have been labeled SOLVED.
(e) If CURRENT has been marked SOLVED or its h' has just changed, its new status must be propagated backwards up the graph. Hence all the ancestors of CURRENT are added to S.
A Constraint Satisfaction Problem (or CSP) is defined by a set of variables, X1, X2, …, Xn, and a set of constraints C1, C2, …, Cm. Each variable Xi has a nonempty domain Di of possible values. Each constraint Ci involves some subset of the variables and specifies the allowable combinations of values for that subset.
Example for Constraint Satisfaction Problem:
The figure below shows the map of Australia with its states and territories. We are given the task of coloring each region either red, green, or blue in such a way that no two neighboring regions have the same color. To formulate this as a CSP, we define the variables to be the regions: WA, NT, Q, NSW, V, SA, and T. The domain of each variable is the set {red, green, blue}. The constraints require neighboring regions to have distinct colors; for example, the allowable combinations for WA and NT are the pairs
{(red,green),(red,blue),(green,red),(green,blue),(blue,red),(blue,green)}.
(The constraint can also be represented more succinctly as the inequality WA ≠ NT, provided the constraint satisfaction algorithm has some way to evaluate such expressions.)
There are many possible solutions, such as
{WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=red}.
Every solution must be a complete assignment and therefore appears at depth n if there are n variables.
Varieties of CSPs
Finite domains
The simplest kind of CSP involves variables that are discrete and have finite domains. Map-coloring problems are of this kind. The 8-queens problem can also be viewed as a finite-domain CSP, where the variables Q1, Q2, …, Q8 are the positions of each queen in columns 1, …, 8 and each variable has the domain {1,2,3,4,5,6,7,8}. If the maximum domain size of any variable in a CSP is d, then the number of possible complete assignments is O(d^n) – that is, exponential in the number of variables n. Finite-domain CSPs include Boolean CSPs, whose variables can be either true or false.
Infinite domains
Discrete variables can also have infinite domains – for example, the set of integers or the set of strings. With infinite domains, it is no longer possible to describe constraints by enumerating all allowed combinations of values. Instead, a constraint language of algebraic inequalities such as StartJob1 + 5 <= StartJob3 must be used.
CSPs with continuous domains are very common in the real world. For example, in the operations research field, the scheduling of experiments on the Hubble Space Telescope requires very precise timing of observations; the start and finish of each observation and maneuver are
continuous-valued variables that must obey a variety of astronomical, precedence and power
constraints. The best known category of continuous-domain CSPs is that of linear
programming problems, where the constraints must be linear inequalities forming a convex
region. Linear programming problems can be solved in time polynomial in the number of
variables.
Varieties of constraints:
Unary constraints involve a single variable. Example: SA ≠ green
Binary constraints involve pairs of variables. Example: SA ≠ WA
Figure: A simple backtracking algorithm for constraint satisfaction problems. The algorithm is modeled on recursive depth-first search.
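A Python sketch of such a backtracking solver, applied to the Australia map-coloring CSP above (illustrative; the consistency test is passed in as a function):

    def backtracking_search(variables, domains, consistent, assignment=None):
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):
            return assignment                  # complete, consistent assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtracking_search(variables, domains,
                                             consistent, assignment)
                if result is not None:
                    return result
                del assignment[var]            # undo and try the next value
        return None                            # no value works: backtrack

    # The Australia instance: neighboring regions must differ in color.
    neighbors_map = {
        'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
        'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
        'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
    }
    variables = list(neighbors_map)
    domains = {v: ['red', 'green', 'blue'] for v in variables}

    def different_colors(var, value, assignment):
        return all(assignment.get(n) != value for n in neighbors_map[var])

    print(backtracking_search(variables, domains, different_colors))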
Most search strategies reason either forward or backward; often, however, a mixture of the two directions is appropriate. Such a mixed strategy makes it possible to solve the major parts of a problem first and then go back and solve the smaller problems that arise when combining them together. Such a technique is called "Means-Ends Analysis".
The means-ends analysis process centers on finding the difference between the current state and the goal state. The problem space of means-ends analysis has an initial state and one or more goal states, a set of operators with a set of preconditions for their application, and a difference function that computes the difference between two states s(i) and s(j). A problem is solved using means-ends analysis by:
1. Comparing the current state s1 to a goal state s2 and computing their difference D12.
2. Selecting an operator OP that is recommended for reducing the difference D12, subject to its preconditions being satisfied.
3. Applying the operator OP if possible. If not, the current state is saved, a subgoal is created (satisfying OP's preconditions), and means-ends analysis is applied recursively to the subgoal.
4. When the subgoal is solved, the saved state is restored and work resumes on the original problem.
Means-ends analysis is useful for many human planning activities. Consider the example of planning for an office worker. Suppose we have a difference table of three rules:
1. If in our current state we are hungry, and in our goal state we are not hungry, then either the "visit hotel" or the "visit canteen" operator is recommended.
2. If in our current state we do not have money, and in our goal state we have money, then the "visit our bank" operator or the "visit secretary" operator is recommended.
3. If in our current state we do not know where something is, and in our goal state we do know, then either the "visit office enquiry", "visit secretary" or "visit coworker" operator is recommended.
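A toy Python sketch of this difference-driven control loop, using two of the office-worker rules (illustrative; states are sets of facts, and each operator lists its preconditions and the facts it adds):

    operators = {
        'visit canteen': {'pre': {'have money'}, 'add': {'not hungry'}},
        'visit bank':    {'pre': set(),          'add': {'have money'}},
    }

    def means_ends(state, goal):
        # Pick an operator that reduces a current difference; if its
        # preconditions are unmet, solve them first as a subgoal.
        while not goal <= state:
            diff = next(iter(goal - state))        # one remaining difference
            op = next(o for o in operators.values() if diff in o['add'])
            if not op['pre'] <= state:
                state = means_ends(state, op['pre'])   # recursive subgoal
            state = state | op['add']              # apply the operator
        return state

    print(means_ends(set(), {'not hungry'}))   # bank first, then canteen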
QUESTION BANK & UNIVERSITY QUESTIONS
11 MARKS
1. What is an AI technique? Discuss the problems coming under the purview of AI.
2. Explain the heuristic search techniques in detail.
3. (a) What are the four basic types of agent program in any intelligent system?
(b) Explain how you would convert them into learning agents.
4. Explain the following uninformed search strategies with examples.(Nov-Dec’14)
(Nov’15)( Q. No. 6)
a. Breadth First Search (Nov’13)
b. Uniform Cost Search
c. Depth First Search
d. Depth Limited Search
5. (a) Explain A* algorithm in detail.
(b) Discuss eight queens problem in detail.
6. (a) Discuss in detail about DFS and BFS with suitable example. (Nov’13)( Q.
No. 7)
(b) Write in detail about History of AI.
7. Define AI. Explain foundations of AI in detail
8. Explain the structure of agents in detail.
9. Discuss the structure of AI agents and their functions. (Nov’13)(Nov-Dec’14)(Nov’15)
( Q. No. 4)
10. Explain in detail about Best-First search technique.(Nov’15)( Q. No. 7)
11. State AO* algorithm and describe in detail. (Apr-May’14)( Q. No. 8)
12. How do you define a problem as a state space search? Explain in detail with an example.
(Apr-May’14)(Nov’15)( Q. No. 5)
13. Explain constraint satisfaction in detail.(Nov’13)( Q. No. 9)