
CS 6659 - ARTIFICIAL INTELLIGENCE
UNIT I: INTRODUCTION TO AI AND PRODUCTION SYSTEMS
Introduction to AI-Problem formulation, Problem
Definition -Production systems, Control strategies,
Search strategies. Problem characteristics,
Production system characteristics -Specialized
production system- Problem solving methods -
Problem graphs, Matching, Indexing and Heuristic
functions -Hill Climbing-Depth first and Breadth
first, Constraint satisfaction - Related algorithms,
Measure of performance and analysis of search
algorithms.

Prepared by Mr. R. Karthiban, M.E.,


Assistant Professor, Department of Computer Science and Engineering,
Sri Shakthi Institute of Engineering and Technology, Coimbatore, Tamilnadu, India.
Content
1. Intelligence
2. Artificial Intelligence
3. The history of AI
4. The foundation of AI
5. AI Technique
6. Formulating problems
7. Production systems
8. Search strategies
9. Heuristic search
1. INTELLIGENCE

• The capacity to learn and solve problems.


• In particular,
• The ability to solve novel problems (i.e. solve
new problems)
• The ability to act rationally (i.e. act based on
reason)
• The ability to act like humans
1.1 What is involved in
intelligence?
• Ability to interact with the real world
• Reasoning and Planning
• Learning and Adaptation

2. ARTIFICIAL INTELLIGENCE
• Artificial Intelligence is a way of making a computer, a
computer-controlled robot, or a software think intelligently,
in a manner similar to the way intelligent humans think.
• AI definitions fall into 4 categories:
1. Systems that think like humans.
2. Systems that act like humans.
3. Systems that think rationally.
4. Systems that act rationally.
2.1. Building systems that think like
humans
• “The exciting new effort to make computers think …
machines with minds, in the full and literal sense” --
Haugeland, 1985
• “The automation of activities that we associate with
human thinking, … such as decision-making, problem
solving, learning, …” -- Bellman, 1978

2.2 Building systems that act
like humans
• “The art of creating machines that perform functions that
require intelligence when performed by people” --
Kurzweil, 1990
• “The study of how to make computers do things at which,
at the moment, people are better” -- Rich and Knight,
1991

2.3 Building systems that think
rationally
• “The study of mental faculties through the use of
computational models” -- Charniak and McDermott, 1985
• “The study of the computations that make it possible to
perceive, reason, and act” -Winston, 1992

2.4 Building systems that act
rationally
• “A field of study that seeks to explain and emulate
intelligent behaviour in terms of computational processes”
-- Schalkoff, 1990
• “The branch of computer science that is concerned with
the automation of intelligent behaviour” -- Luger and
Stubblefield, 1993

Acting Humanly: The Turing Test
Approach
• Alan Turing (1912-1954)
• “Computing Machinery and Intelligence” (1950)
(Figure: a human interrogator communicates by text with a human and a computer, and must decide which is which.)
Acting Humanly: The Turing Test
Approach
• The computer needs to possess the following capabilities:
• Natural language processing to enable it to communicate
successfully in English.
• Knowledge representation to store what it knows or hears
• Automated reasoning to use the stored information to answer
questions and to draw new conclusions.
• Machine learning to adapt to new circumstances and to
detect and extrapolate patterns.
Thinking humanly: The cognitive
modelling approach
• Machines programmed to think like humans.
• Requires knowledge of the actual workings of the human mind.
• Express the theory as a computer program.
• Compare the program's output with human behaviour.
• Example: GPS (General Problem Solver)
• Requires testable theories of the workings of the human
mind: cognitive science.
Thinking rationally : The “laws of
thought approach”
• Right thinking introduced the concept of logic (syllogisms).
• Example:
• Arul is a student of III year CSE.
• All III year CSE students are good students.
• Therefore, Arul is a good student.
• Obstacles:
• Representing informal knowledge formally.
• Computational complexity and resources.
Acting rationally : The rational
agent approach
• Acting so as to achieve one’s goals, given one’s beliefs.
• Does not necessarily involve thinking.
• Advantages:
• Correct inference selected and applied.
• It concentrates on scientific development rather than other
methods.

3. The History of AI
• 1943: early beginnings
• McCulloch & Pitts: Boolean circuit model of brain
• 1950: Turing
• Turing's "Computing Machinery and Intelligence"
• 1956: birth of AI
• Dartmouth meeting: "Artificial Intelligence" name adopted
The History of AI (Cnt.,)
• 1950s: initial promise
• Early AI programs, including
• Samuel's checkers program
• Newell & Simon's Logic Theorist
• 1955-65: great enthusiasm
• Newell and Simon: GPS, general problem solver
• Gelernter: Geometry Theorem Prover
• McCarthy: invention of LISP
The History of AI (Cnt.,)
• 1966—73: Reality dawns
• Realization that many AI problems are intractable
• Limitations of existing neural network methods identified
• Neural network research almost disappears
• 1969—85: Adding domain knowledge
• Development of knowledge-based systems
• Success of rule-based expert systems,
• E.g., DENDRAL, MYCIN
• But were brittle and did not scale well in practice
The History of AI (Cnt.,)
• 1986-- Rise of machine learning
• Neural networks return to popularity
• Major advances in machine learning algorithms and applications
• 1990-- Role of uncertainty
• Bayesian networks as a knowledge representation framework
• 1995--AI as Science
• Integration of learning, reasoning, knowledge representation
• AI methods used in vision, language, data mining, etc
4. The foundation of AI
• Philosophy (423 BC -- present):
• Logic, methods of reasoning.
• Mind as a physical system.
• Foundations of learning, language, and rationality.
• Mathematics (c.800 -- present):
• Formal representation and proof.
• Algorithms, computation, decidability, tractability.
• Probability.
The foundation of AI(Cnt.,)
• Psychology (1879 -- present):
• Adaptation.
• Phenomena of perception and motor control.
• Experimental techniques.
• Computer Engineering and Linguistics (1957 -- present):
• Knowledge representation.
• Grammar.

Domains that are the targets of
work in AI
• 4.1 Mundane Tasks:
• Perception – Vision and Speech
• Natural Languages – Understanding, Generation, Translation
• Common sense reasoning
• Robot Control

Domains that are the targets of work
in AI (Cnt.,)
• 4.2 Formal Tasks:
• Games : chess, checkers, Backgammon etc.
• Mathematics: Geometry, logic, Integral calculus and Proving properties of
programs
• 4.3 Expert Tasks:
• Engineering ( Design, Fault finding, Manufacturing planning)
• Scientific Analysis
• Medical Diagnosis
• Financial Analysis
5. AI Technique
• Intelligence requires Knowledge
• Knowledge possesses some less desirable properties, such as being:
• Voluminous
• Hard to characterize accurately
• Constantly changing
• Different from data: it is organized in a way that corresponds to how it will be used
• An AI technique is a method that exploits knowledge, which should be represented in such a way that:
• The knowledge captures generalizations
• It can be understood by the people who must provide it
• It can be easily modified to correct errors
• It can be used in a variety of situations
Example
• Tic – Tac - Toe

Tic – Tac - Toe
• Three programs to play Tic – Tac – Toe are presented. The programs in this series increase in:
• Their complexity
• Use of generalization
• Clarity of their knowledge
• Extensibility of their approach
Program 1
• Data Structure:
• Board: A 9-element vector representing the board, with positions 1-9 for the
squares. An element contains the value 0 if the square is blank, 1 if it is filled
by X, or 2 if it is filled by O.
• Movetable: A large vector of 19,683 (3^9) elements, each of which
is a 9-element vector.
• Elements of the vector: 0: Empty, 1: X, 2: O
• Algorithm:
1. View the board vector as a ternary number. Convert it to a
decimal number.
2. Use the computed number as an index into the Move-Table
and access the vector stored there.
3. Set the new board to that vector.
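Step 1 can be sketched in a few lines; this is an illustration of the base-3 conversion (the function name is my own):

```python
def board_to_index(board):
    """Treat a 9-element board vector (0 empty, 1 X, 2 O) as a
    base-3 number and convert it to a decimal Move-Table index."""
    index = 0
    for value in board:           # most significant ternary digit first
        index = index * 3 + value
    return index
```

An empty board maps to index 0, and a board of all O's to 3^9 − 1 = 19682, matching the 19,683 Move-Table entries.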
• Comments or disadvantages:
1. A lot of space to store the Move-Table.
2. A lot of work to specify all the entries in the Move-Table.
3. Difficult to extend.

Program 2
• Data Structures:
• Board: A nine-element vector representing the board. Instead
of using 0, 1 and 2 in each element, we store 2 for blank, 3 for X
and 5 for O.
• Turn: An integer indicating which move of the game is about to be
played:
• 1 indicates the first
• 9 indicates the last
• Algorithm:
• Uses three functions:
• Make2 – Returns 5 if the centre square of the board is blank;
otherwise, it returns any blank non-corner square (2, 4, 6 or 8).
• Posswin(p) – Returns 0 if player p cannot win on his next move;
otherwise it returns the number of the square that constitutes a
winning move. It checks the product of each row, column and
diagonal: if the product is 18 (3x3x2), then X can win; if the
product is 50 (5x5x2), then O can win.
• Go(n) – Makes a move in square n: board[n] is set to X or O (odd turn – X, even turn – O).
• Strategy:
• Turn = 1 Go(1)
• Turn = 2 If Board[5] is blank, Go(5), else Go(1)
• Turn = 3 If Board[9] is blank, Go(9), else Go(3)
• Turn = 4 If Posswin(X) != 0, then Go(Posswin(X))
• .......

• Comments:
• Not efficient in time, as it has to check several conditions
before making each move.
• Easier to understand the program’s strategy.
• Hard to generalize.

Comments:
• Checking for a possible win is quicker.
• Humans find the row-scan approach easier, while computers find
the number-counting approach more efficient.
Program 3
• Algorithm:
• Look ahead at the positions that result from each possible move:
• If a position is a win, give it the highest rating.
• Otherwise, consider all the moves the opponent could
make next. Assume the opponent will make the move that
is worst for us, and assign the rating of that move to the
current node.
• The best move is the one leading to the position with the highest rating.
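The look-ahead idea of Program 3 is essentially minimax. A compact sketch, using the 0/1/2 board encoding of Program 1 and helper names of my own choosing:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 1 or 2 if that player has three in a row, else 0."""
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def rate(board, to_move, me):
    """Rate `board` for player `me`: +1 a forced win, 0 a draw, -1 a loss,
    assuming the opponent always makes the move that is worst for us."""
    won = winner(board)
    if won:
        return 1 if won == me else -1
    moves = [i for i, v in enumerate(board) if v == 0]
    if not moves:
        return 0                       # board full: draw
    scores = []
    for i in moves:
        board[i] = to_move             # try the move...
        scores.append(rate(board, 3 - to_move, me))
        board[i] = 0                   # ...and undo it
    return max(scores) if to_move == me else min(scores)
```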
• Comments:
• Require much more time to consider all possible moves.
• Could be extended to handle more complicated games.

6. FORMULATING PROBLEMS
• Problem formulation is the process of deciding what
actions and states to consider, given a goal.
(Diagram: Formulate goal → Formulate problem → Search → Execute)
• Four operations need to be performed in sequence during
problem formulation:
1. Define the problem precisely
2. Analyse the problem
3. Isolate and represent the background knowledge that is necessary
to solve the problem
4. Choose the best problem-solving technique for the particular
problem.
Problem as a state space search
• The process of applying an operator to a state, and the
subsequent transition to the next state, repeated until the
goal (desired state) is achieved.
• Types of problem
1. Single state problem
2. Multiple state problem
3. Contingency problem
4. Exploration problem

• A problem can be defined formally by five components (a well-defined problem and
solution):
1. Initial state
2. Operator
3. Successor function, which defines:
• State space
• Path
4. Goal test
5. Path cost
The step cost is given by c(x, a, y);
Total cost = Path cost + Search cost
Example
• The 8-Puzzle problem
• The 8-Queens problem
• Traveling salesman problem
• Water Jug Problem
• Crypt arithmetic problem
• The vacuum world problem
• Missionaries and cannibals problem
Problem Characteristics
• Is the problem decomposable into a set of sub-problems?
• Can solution steps be ignored or undone?
• Is the problem universe predictable?
• Is a good solution for the problem absolute or relative?
• Is the solution a state or a path?
• What is the role of knowledge?
• Does the task require interaction with a person?
• Problem classification
7. Production Systems

Production Systems
• A mechanism used to structure AI programs, for
describing and performing the search process.
Problem solving = searching for a goal state
• Components of a production system:
1. Production rules: Ci → Ai, where Ci is the condition part and Ai is the action
part.
2. Knowledge / database
3. Control strategy
4. Rule applier
State Space Search: Water Jug
Problem
“You are given two jugs, a 4-litre one and a 3-
litre one. Neither has any measuring
markers on it. There is a pump that can be
used to fill the jugs with water. How can
you get exactly 2 litres of water into the 4-litre
jug?”
State Space Search: Water Jug
Problem
• State: (x, y)
x = 0, 1, 2, 3, or 4 (litres in the 4-litre jug)
y = 0, 1, 2, or 3 (litres in the 3-litre jug)
• Start state: (0, 0).
• Goal state: (2, n) for any n.
State Space Search: Water Jug
Problem
1. (x, y) → (4, y) if x < 4 [fill the 4-litre jug]
2. (x, y) → (x, 3) if y < 3 [fill the 3-litre jug]
3. (x, y) → (x − d, y) if x > 0 [pour some water out of the 4-litre jug]
4. (x, y) → (x, y − d) if y > 0 [pour some water out of the 3-litre jug]
5. (x, y) → (0, y) if x > 0 [empty the 4-litre jug]
6. (x, y) → (x, 0) if y > 0 [empty the 3-litre jug]
7. (x, y) → (4, y − (4 − x)) if x + y ≥ 4, y > 0 [pour from 3 into 4 until 4 is full]
8. (x, y) → (x − (3 − y), 3) if x + y ≥ 3, x > 0 [pour from 4 into 3 until 3 is full]
9. (x, y) → (x + y, 0) if x + y ≤ 4, y > 0 [pour all of 3 into 4]
10. (x, y) → (0, x + y) if x + y ≤ 3, x > 0 [pour all of 4 into 3]
11. (0, 2) → (2, 0) [pour the 2 litres from 3 into 4]
12. (2, y) → (0, y) [empty the 4-litre jug]
State Space Search: Water Jug
Problem
1. current state = (0, 0)
2. Loop until reaching the goal state (2, 0):
• Apply a rule whose left side matches the current state
• Set the new current state to be the resulting state

One solution path:
(0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
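The production rules and the control loop above can be combined into a short breadth-first solver. This is a sketch; the function names and the order in which rules are tried are my own:

```python
from collections import deque

def successors(state):
    """Apply the production rules for the (4, 3) water jug problem."""
    x, y = state
    out = []
    if x < 4: out.append((4, y))                            # fill the 4-litre jug
    if y < 3: out.append((x, 3))                            # fill the 3-litre jug
    if x > 0: out.append((0, y))                            # empty the 4-litre jug
    if y > 0: out.append((x, 0))                            # empty the 3-litre jug
    if x + y >= 4 and y > 0: out.append((4, y - (4 - x)))   # pour 3 into 4 until full
    if x + y >= 3 and x > 0: out.append((x - (3 - y), 3))   # pour 4 into 3 until full
    if x + y <= 4 and y > 0: out.append((x + y, 0))         # pour all of 3 into 4
    if x + y <= 3 and x > 0: out.append((0, x + y))         # pour all of 4 into 3
    return out

def solve(start=(0, 0)):
    """Breadth-first search for a state with 2 litres in the 4-litre jug."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == 2:
            return path
        for s in successors(path[-1]):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
```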
• Production system characteristics:
• Monotonic – application of a rule never prevents the later application of
another rule.
• Non-monotonic – application of a rule may prevent the later
application of another rule.
• Partially commutative – if application of a particular sequence of rules
transforms state x into state y, then any allowed permutation of those rules
also transforms x into y.
• Commutative – monotonic + partially commutative.
• Features of production systems:
• Expressiveness and intuitiveness:
If this happens – do that
If this is so – then this should happen
• Simplicity
• Modularity
• Modifiability
• Knowledge intensive
• Disadvantages of production systems:
• Opacity
• Inefficiency
• Absence of learning
• Conflict resolution

Control strategies
• The flow of execution of production rules to obtain the desired goal state.
• A good control strategy causes motion
• Otherwise, it will never lead to a solution.
• It is systematic
• Otherwise, it may use more steps than necessary.
• It is efficient
• It finds a good, but not necessarily the best, answer.
Example – Production systems
• Water jug problem
• Checking duplicate nodes
• The tower of Hanoi
• The monkey and banana problem

8.Search Strategies
• Uninformed search (blind search)
• Having no information about the number of steps from the
current state to the goal.
• Informed search (heuristic search)
• More efficient than uninformed search.

Search Strategies: Blind Search
• Breadth‐first search
Expand all the nodes
of one level first.

• Depth‐first search
Expand one of the nodes
at the deepest level.
Search Strategies: Blind Search

Criterion    Breadth-First    Depth-First
Time         O(b^d)           O(b^m)
Space        O(b^d)           O(bm)
Optimal?     Yes              No
Complete?    Yes              No

b: branching factor, d: solution depth, m: maximum depth
Systematic control strategy for the
water jug problem
• Breadth-First Search (Blind Search)
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was
empty, quit.
b. For each way that each rule can match the state described in E, do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE-LIST.
• Advantages of Breadth-First search
• BFS cannot be trapped exploring a blind alley.
• If there is a solution, BFS is guaranteed to find it.
• If there are multiple solutions, then a minimal solution will be
found.

• Depth-First Search
1. If the initial state is the goal state, quit and return success.
2. Otherwise, do the following until success or failure is signalled:
a. Generate a successor, E, of the initial state. If there are no more successors,
signal failure.
b. Call depth-first search with E as the initial state.
c. If success is returned, signal success. Otherwise continue in this
loop.
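The recursive procedure above can be sketched generically; the depth `limit` (my addition) guards against infinitely deep paths:

```python
def depth_first_search(state, goal_test, successors, limit=50):
    """Return a path from `state` to a goal as a list, or None on failure."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None                      # cut off to avoid infinite descent
    for nxt in successors(state):
        result = depth_first_search(nxt, goal_test, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

# Toy usage: count upward from 0 until we reach 3.
path = depth_first_search(0, lambda n: n == 3,
                          lambda n: [n + 1] if n < 5 else [])
```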
• Advantages of Depth First search
• DFS requires less memory since only the nodes on the
current path are stored.
• By chance DFS may find a solution without examining
much of the search space at all.

• Measuring problem solving performance:
• Completeness
• Time complexity
• Space complexity
• Optimality

• Issues in the design of search programs:
• Direction of the search
• Selection of appropriate rules
• Representation of each node in the search process

Matching
• Problem solving can be done through search.
• Search involves choosing, among the rules that can be
applied at a particular point, the ones that are most likely
to lead to a solution.
• This requires extracting the applicable rules from a large
collection of rules.
• How do we extract, from the entire collection, the rules that can
be applied at a given point?
• This is done by matching between the current state and the
preconditions of the rules.
• A simple approach: search through all the rules, comparing each one's
preconditions to the current state and extracting the ones that match.
• But there are two problems with this simple solution:
1. Big problems use large numbers of rules. Scanning through all
of these rules at every step of the search would be hopelessly
inefficient.
2. It is not always obvious whether a rule's preconditions are
satisfied by a particular state.
Indexing
• To overcome the above problems, indexing is used.
• Instead of searching through all the rules, the current state is used
as an index into the rules, so the matching rules are selected
immediately.
9. Heuristic Search
• Heuristics are criteria for deciding which among several
alternatives will be the most effective in achieving some
goal.
• A heuristic is a technique that improves the efficiency of a
search process, possibly by sacrificing claims of systematicity
and completeness.
• It no longer guarantees to find the best answer, but almost
always finds a very good answer.
• Using good heuristics, we can hope to get good solutions to
hard problems.
9.1 Generate-and-Test
• Algorithm: GENERATE-AND-TEST
• Step 1: Generate a possible solution:
• Generate a particular point in the problem space, or
• Generate a path from the start state.
• Step 2: Test to see if this is actually a solution by comparing the
chosen point, or the endpoint of the chosen path, to the set of
acceptable goal states.
• Step 3: If a solution has been found, quit; otherwise return to Step 1.
• The test tells us only ‘yes’ or ‘no’.
• Generate-and-test must generate a complete solution
before it can be tested.
• Generate-and-test works best when combined with
other search techniques and backtracking.
9.2 Local Search Algorithm
• Operates using a single current state (rather than multiple
paths) and generally moves only to neighbours of that
state.
• Not systematic.
• Advantages:
• Uses little memory, usually a constant amount.
• Finds reasonable solutions in large or infinite spaces (where
systematic algorithms are unsuitable).
• Optimization – aims to find the best state according to an
objective function, using the state-space landscape:
• Global minimum.
• Global maximum.
• A complete local search algorithm always finds a goal if one exists.
• An optimal algorithm always finds a global maximum/
minimum.
• Some local search algorithms:
• Hill Climbing Search
• Steepest-Ascent Hill Climbing
• Simulated Annealing
• Local Beam Search
• Genetic Algorithms
(Figure: state-space landscape)
9.2.1 Hill Climbing
• Always tries to improve the current state.
• A search technique that moves in the direction of
increasing value to reach the peak.
• If more than one successor exists, one is selected from
the neighbourhood of the current state (local search).
• Drawback: defining the term “better” involves some vagueness.
• Algorithm: Simple Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return
it and quit. Otherwise continue with the initial state as the
current state.
2. Loop until a solution is found or until there are no new
operators left to be applied in the current state:
a. Select an operator that has not yet been applied to the current
state and apply it to produce a new state.
b. Evaluate the new state.
i. If it is a goal state, then return it and quit.
ii. If it is not a goal state, but it is better than the current state, then
make it the current state.
iii. If it is not better than the current state, then continue in the loop.
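A minimal sketch of simple hill climbing over integer states; the quadratic objective in the usage line is just an illustrative choice of mine:

```python
def hill_climb(state, value, successors):
    """Simple hill climbing: move to the FIRST successor that is better
    than the current state; stop when no successor improves on it."""
    current = state
    while True:
        improved = False
        for s in successors(current):
            if value(s) > value(current):
                current, improved = s, True
                break                    # take the first better successor
        if not improved:
            return current               # a peak (possibly only a local maximum)

# Usage: maximize -(x - 3)^2 over the integers, stepping by +/- 1.
peak = hill_climb(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1])
```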
9.2.2 Steepest – Ascent Hill
Climbing
• Considers all the moves from the current state and selects the best one
as the next state.
• This method is called steepest-ascent hill climbing, or gradient search.
• Hill climbing disadvantages:
• Local maximum – a state better than all its neighbours but not better than some other
states farther away.
• Plateau – a whole set of neighbouring states that have the same value.
• Ridge – an area of the search space that is higher than surrounding areas and
that itself has a slope.
Algorithm: Steepest – Ascent Hill
Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no
change to the current state:
A. Let SUCC be a state such that any possible successor of the current state will be better
than SUCC (i.e. initialize SUCC to a worst-possible value).
B. For each operator that applies to the current state, do:
i. Apply the operator and generate a new state.
ii. Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC.
If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
C. If SUCC is better than the current state, then set the current state to SUCC.
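Steepest-ascent differs from simple hill climbing only in examining all successors and moving to the best one (SUCC); a sketch with the same illustrative objective:

```python
def steepest_ascent(state, value, successors):
    """Steepest-ascent hill climbing: evaluate ALL successors and move to
    the best one; stop when it is no better than the current state."""
    current = state
    while True:
        succ = max(successors(current), key=value)   # best successor = SUCC
        if value(succ) <= value(current):
            return current
        current = succ

# Usage: maximize -(x - 3)^2 over the integers, stepping by +/- 1.
peak = steepest_ascent(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1])
```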
9.2.3 Simulated Annealing
• Hill climbing search cannot guarantee complete solutions.
– Why? It can get stuck at local maxima.
• Random walk – choosing a successor randomly from the set of
successors – is complete, but very inefficient.
• Simulated annealing combines hill climbing with random walk.
• The problem of local maxima is thereby overcome in
simulated annealing search.
• Notation:
• ΔE – the change in the objective (energy) value; ΔE is negative for a move to a worse state.
• T – the temperature, which controls how likely a worse move is to be accepted.
• The rate at which T is lowered is called the annealing schedule.
• A worse move is accepted with probability p' = e^(ΔE/T).
• How simulated annealing differs from hill climbing:
• An annealing schedule is maintained.
• Moves to worse states may also be accepted.
• In addition to the current state, the best state found so far is recorded.
Simulated Annealing Algorithm
1. Evaluate the initial state. Mark it as the current state. If the
initial state is also a goal state, return it and quit.
Otherwise continue with the initial state as the current state.
2. Initialize the best state to the current state.
3. Initialize T according to the annealing schedule.
4. Repeat the following until a solution is obtained or no operators are
left:
a. Apply a yet-unapplied operator to produce a new state.
b. For the new state, compute ΔE = (value of new state) − (value of current state).
i. If the new state is a goal state, then return it and quit.
ii. If it is better than the current state, make it the current state, and record it as
the best state if appropriate.
iii. If it is not better than the current state, then make it the current state with probability p' = e^(ΔE/T).
iv. Revise T according to the annealing schedule.
5. Return the best state as the answer.
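A sketch of the algorithm above using a geometric annealing schedule (T ← 0.95·T each step); the schedule constants and the ±1 neighbourhood in the usage lines are illustrative choices of mine:

```python
import math
import random

def simulated_annealing(state, value, successors, t=10.0, cooling=0.95, t_min=0.01):
    """Maximize `value`, accepting worse moves with probability e^(dE/T)."""
    current = best = state
    while t > t_min:
        nxt = random.choice(successors(current))
        delta = value(nxt) - value(current)      # dE < 0 means a worse move
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
            if value(current) > value(best):
                best = current                   # record the best state seen
        t *= cooling                             # annealing schedule
    return best

# Usage: maximize -(x - 3)^2 over the integers, stepping by +/- 1.
random.seed(0)
best = simulated_annealing(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1])
```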
9.3 Best – First Search
• Combines the advantages of depth-first search and breadth-first
search.
• Advantages of Depth-First Search (DFS)?
• Advantages of Breadth-First Search (BFS)?
9.3.1 OR Graph
• Two lists of nodes are used to implement a graph search
procedure.
1. OPEN: Nodes that have been generated and have had the
heuristic function applied to them, but have not yet been
examined.
2. CLOSED: Nodes that have already been examined. These nodes
are kept in memory if we want to search a graph rather than
a tree, because whenever a node is generated, we will have
to check whether it has been generated earlier.
OR Graph Example
(figure)
Algorithm: Best first search
1. Put the initial node on a list, say OPEN.
2. If OPEN is empty, terminate the search with failure.
3. Remove the first node from OPEN (say this node is a).
4. If a = goal, terminate the search with success; else
5. Generate all the successor nodes of a. Send node a to a list
called CLOSED. Find the value of the heuristic function for all
nodes. Sort all children generated so far on the basis of their
utility value. Select the node of minimum heuristic value for
further expansion.
6. Go back to step 2.
Best-first search can be implemented using a priority queue.
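The priority-queue implementation mentioned above might look like this, using Python's heapq for the OPEN list (the function names are my own):

```python
import heapq

def best_first_search(start, goal_test, successors, h):
    """Expand the OPEN node with the smallest heuristic value h first."""
    open_heap = [(h(start), start)]          # the OPEN list as a priority queue
    closed = set()                           # the CLOSED list
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if goal_test(node):
            return node
        if node in closed:
            continue
        closed.add(node)
        for s in successors(node):
            if s not in closed:
                heapq.heappush(open_heap, (h(s), s))
    return None

# Toy usage: reach 7 from 0 with steps of +1 or +2, guided by distance to 7.
found = best_first_search(0, lambda n: n == 7,
                          lambda n: [n + 1, n + 2], lambda n: abs(7 - n))
```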
9.3.2 The A* Algorithm
• The A* algorithm is a specialization of best-first search.
• The heuristic evaluation function for A* is defined as follows:
f(n) = g(n) + h(n)
• where f(n) = evaluation function
g(n) = cost (or distance) of the current node from the start node
h(n) = estimated cost from the current node to the goal node
• In A*, the most promising node is chosen for expansion.
• The promising node is decided based on the value of the
evaluation function: the node having the least value of f is
the most promising node (in some formulations, the node
having the maximum value is chosen instead).
• A* is an example of an optimal search algorithm.
• It uses the OPEN and CLOSED list data structures.
A* algorithm:
1. Place the starting node s on the OPEN list.
2. If OPEN is empty, stop and return failure.
3. Remove from OPEN the node n that has the smallest value of
f(n). If node n is a goal node, return success and stop; otherwise:
4. Expand n, generating all of its successors n', and place n on
CLOSED. For every successor n' that is not already on OPEN or
CLOSED, attach a back pointer to n, compute f(n') and place it on OPEN.
5. Each n' that is already on OPEN or CLOSED should have its back
pointer updated to reflect the lowest-f(n') path. If such an n' was on
CLOSED and its pointer was changed, remove it from CLOSED and place it on
OPEN.
6. Return to step 2.
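A compact A* sketch with f(n) = g(n) + h(n); here `successors` yields (neighbour, step-cost) pairs, and the small graph in the usage lines is a hypothetical example of mine:

```python
import heapq

def a_star(start, goal, successors, h):
    """Return (path, cost) minimizing g(n) + h(n); h must not overestimate."""
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}                      # cheapest known g(n) per node
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nxt, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2             # better path found: (re)open the node
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical graph: A-B costs 1, B-C costs 1, A-C costs 4.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
path, cost = a_star("A", "C", lambda n: graph[n], lambda n: 0)
```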
9.3.3. Problem reduction
• Problem reduction: decomposing the problem into sub-problems.
• AND-OR graph: along an AND arc, all decomposed sub-problems
must be solved.
• An AND arc is represented by a curved line connecting its branches.
• The choice of which node to expand depends on its f' value.
• Disadvantage of problem reduction: it fails to take into
account any interaction between sub-goals.
• Algorithm : Problem Reduction
1. Initialize the graph to the starting node.
2. Loop until the starting node is labelled SOLVED or until
its cost goes above FUTILITY:
i. Traverse the graph, starting at the initial node and following the
current best path, and accumulate the set of nodes that are on
that path and have not yet been expanded or labelled as solved.
ii. Pick one of these unexpanded nodes and expand it.
iii. If there are no successors, assign FUTILITY as the value of this
node. Otherwise, add its successors to the graph and for each of
them compute f'. If the f' value of any node is 0, mark that node as SOLVED.
9.3.3.1 AO* Search
• Rather than two lists, AO* uses a single structure: the graph.
• Each node in the graph points both down to its
immediate successors and up to its immediate predecessors.
• Each node in the graph has an associated h' value (the estimated
cost of a path from that node to a goal node).
Algorithm: AO* Search
1. Let G consist only of the node representing the initial state; call this node INIT. Compute h'(INIT).
2. Until INIT is labelled SOLVED or h'(INIT) becomes greater than FUTILITY, repeat the following procedure:
(I) Trace the marked arcs from INIT and select an unexpanded node, NODE.
(II) Generate the successors of NODE. If there are no successors, then assign FUTILITY as h'(NODE). This means that NODE
is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE, do
the following:
(a) Add SUCCESSOR to graph G.
(b) If SUCCESSOR is a terminal node, mark it SOLVED and assign zero to its h' value.
(c) If SUCCESSOR is not a terminal node, compute its h' value.
(III) Propagate the newly discovered information up the graph by doing the following. Let S be a set of nodes that have been
marked SOLVED or whose h' values have changed. Initialize S to NODE. Until S is empty, repeat the following procedure:
(a) Select a node from S, call it CURRENT, and remove it from S.
(b) Compute the cost of each of the arcs emerging from CURRENT. Assign the minimum of these costs as the new h' of CURRENT.
(c) Mark the minimum-cost path as the best path out of CURRENT.
(d) Mark CURRENT as SOLVED if all of the nodes connected to it through the newly marked arcs have been labelled
SOLVED.
(e) If CURRENT has been marked SOLVED or its h' has just changed, its new status must be propagated back up
the graph; hence add all the ancestors of CURRENT to S.
9.2.4 Constraint Satisfaction
Algorithm: Constraint Satisfaction
1. Propagate available constraints:
i. Open all objects that must be assigned values in a complete solution.
ii. Repeat until an inconsistency is found or all objects are assigned valid values:
Select an object and strengthen as much as possible the set of
constraints that apply to that object.
If this set of constraints is different from the previous set, then open all objects
that share any of these constraints. Remove the selected object.
2. If the union of the constraints discovered above defines a solution, return the
solution.
3. If the union of the constraints discovered above defines a contradiction,
return failure.
4. Make a guess in order to proceed. Repeat until a solution is found or
all possible solutions have been exhausted:
i. Select an object with no assigned value and try to strengthen its constraints.
ii. Recursively invoke constraint satisfaction with the current set of constraints
plus the selected strengthening constraint.
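Step 4 (guess and recurse) is essentially backtracking. A sketch with map colouring as a hypothetical example; the constraint propagation of step 1 is omitted here:

```python
def backtrack(assignment, variables, domains, consistent):
    """Assign values one variable at a time, backtracking on inconsistency."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        trial = {**assignment, var: value}     # the "guess"
        if consistent(trial):
            result = backtrack(trial, variables, domains, consistent)
            if result is not None:
                return result
    return None                                # contradiction: undo the guess

# Hypothetical example: colour a triangle of mutually adjacent regions.
EDGES = [("A", "B"), ("B", "C"), ("A", "C")]

def ok(assign):
    return all(assign[u] != assign[v]
               for u, v in EDGES if u in assign and v in assign)

solution = backtrack({}, ["A", "B", "C"],
                     {v: ["red", "green", "blue"] for v in "ABC"}, ok)
```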
9.2.5 Means – Ends Analysis
• Means-ends analysis allows both backward and forward searching.
• It solves the major parts of a problem first and then goes back and solves
the smaller problems that arise when putting the big pieces together.
• The means-ends analysis process detects the differences between the
current state and the goal state. Once a difference is isolated, an
operator that can reduce the difference must be found.
• The operator may or may not be applicable to the current state, so a
sub-problem is set up of getting to a state in which the operator can be
applied.
• A separate data structure called a difference table indexes
the rules by the differences they can be used to reduce.
• Algorithm: Means-Ends Analysis
1. Until the goal is reached or no more procedures are
available:
i. Describe the current state, the goal state and the differences
between the two.
ii. Use the difference to select a procedure that will hopefully
get nearer to the goal.
iii. Apply the procedure and update the current state.
2. If the goal is reached, then success; otherwise fail.
Example
• Solve the following problem using means-ends analysis:
• A farmer wants to cross a river along with a fox, a chicken
and some grain. He can take only one of them with him at a time.
If the fox and the chicken are left alone, the fox may eat the
chicken. If the chicken and the grain are left alone, the chicken
may eat the grain. Give the necessary solution.
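The answer can be checked with a brute-force state-space search. Note this is plain breadth-first search over safe states, not means-ends analysis proper; the state encoding is my own:

```python
from collections import deque

ITEMS = frozenset("FXCG")                    # Farmer, foX, Chicken, Grain
UNSAFE = [frozenset("XC"), frozenset("CG")]  # fox+chicken, chicken+grain alone

def safe(left):
    """A bank is unsafe only when the farmer is absent from it."""
    for bank in (left, ITEMS - left):
        if "F" not in bank and any(pair <= bank for pair in UNSAFE):
            return False
    return True

def moves(left):
    """The farmer crosses alone or with one item from his side."""
    side = left if "F" in left else ITEMS - left
    for cargo in [set()] + [{x} for x in side - {"F"}]:
        crossing = {"F"} | cargo
        nxt = left - crossing if "F" in left else left | crossing
        if safe(nxt):
            yield frozenset(nxt)

def solve():
    """Breadth-first search from everyone on the left to everyone across."""
    frontier, seen = deque([[ITEMS]]), {ITEMS}
    while frontier:
        path = frontier.popleft()
        if not path[-1]:                     # left bank empty: done
            return path
        for s in moves(path[-1]):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
```

The minimal solution takes seven crossings: take the chicken over first, then shuttle the fox and grain across, bringing the chicken back in between.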

Thank
You…!
