Principles of Artificial Intelligence/18AI55 Module – 4

Syllabus
Module – 1

Introduction to AI: history, Intelligent systems, foundation and sub area of AI, applications, current trend and
development of AI. Problem solving: state space search and control strategies.

Module – 2
Problem reduction and Game playing: Problem reduction, game playing, Bounded look-ahead strategy, alpha-
beta pruning, Two player perfect information games.

Module – 3
Logic concepts and logic programming: propositional calculus, propositional logic, natural deduction system,
semantic tableau system, resolution refutation, predicate logic, logic programming.

Module – 4

Advanced problem solving paradigm: Planning: types of planning system, block world problem, logic based
planning, Linear planning using a goal stack, Means-ends analysis, Non-linear planning strategies, learning plans.

Module – 5
Knowledge Representation, Expert system
Approaches to knowledge representation, knowledge representation using semantic network, extended
semantic networks for KR, Knowledge representation using Frames.
Expert system: introduction, phases, architecture, ES versus traditional systems.

Course Learning Objectives: This course will enable students to:


1. Gain a historical perspective of AI and its foundations.
2. Become familiar with basic principles of AI toward problem solving.
3. Get to know approaches of inference, perception, knowledge representation, and learning.
Course outcomes: The students should be able to:
1. Apply the knowledge of Artificial Intelligence to write simple algorithms for agents.
2. Apply AI knowledge to solve problems using search algorithms.
3. Develop knowledge base sentences using propositional logic and first-order logic.
4. Apply first-order logic in the knowledge engineering process.
Textbooks:
1. Saroj Kaushik, Artificial Intelligence, Cengage Learning, 2014.

Reference Books:
1. Elaine Rich, Kevin Knight, Artificial Intelligence, Tata McGraw Hill.
2. Nils J. Nilsson, Principles of Artificial Intelligence, Elsevier, 1980.
3. Stuart Russell, Peter Norvig, Artificial Intelligence: A Modern Approach, Pearson Education, 3rd Edition, 2009.
4. George F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Pearson Education, 5th Edition, 2011.

Web Resource Link:

Module – 4
Advanced problem solving paradigm: Planning: types of planning system, block world problem, logic based
planning, Linear planning using a goal stack, Means-ends analysis, Non-linear planning strategies, learning plans.

Introduction
Planning refers to the process of computing several steps of problem-solving before executing any of them.
Planning increases the autonomy and flexibility of systems by constructing sequences of actions that help them
in achieving their goals.

Planning involves the representation of actions and world models, reasoning about the effects of actions, and
techniques for efficiently searching the space of possible plans.

A planner may therefore be either a program that searches for a solution or one that proves the existence of a
solution.

In general problem-solving systems, elementary techniques are required to perform the following functions:

• Choosing the Best Rule (Based on Heuristics) to be Applied: For selecting appropriate rules, the most
widely used technique is to first isolate a set of differences between the desired goal state and the current
state. Then, those rules are identified that are relevant for reducing this difference. If more than one
rule is found, then heuristic information is applied to choose an appropriate rule.

• Applying the Chosen Rule to Obtain a New Problem State: In simple problem-solving systems, it is easy
to apply rules as each rule specifies the problem state that would result from its application. However,
in complex problems, we have to deal with rules that only specify a small part of the complete problem state.

• Detecting When a Solution is Found: By the time the required goal state is reached, the solution of each
sub problem is found. The solutions so obtained are then combined to give the final solution of the
problem.

• Detecting Dead Ends so that New Directions may be Explored: When a situation arises where we are
not able to proceed further from a particular state which is not the desired goal state, we can declare
the state to be a dead end and proceed in a new direction.

Planning can also be done by proving a theorem.


Planning techniques are applied in a variety of tasks including robotics, process planning, web-based
information gathering, autonomous agents, spacecraft mission control, and so on. Three phases of advanced
problem-solving are recognized: planning, execution, and learning. The functions carried out in each phase are
described as follows:
• Planning phase: In this phase, a set of actions called a plan is generated, which transforms a
given start state to the goal state.
• Execution phase: In this phase, each action of the plan generated in the preceding phase is
performed starting with the start state.
• Learning phase: In this phase, plan generation and plan execution are learned through
experience.

Types of Planning Systems


Planning problems involve representing states, actions, and goals. An ideal language would be one which is
expressive enough to describe a wide variety of problems, but restrictive enough to allow efficient algorithms
to operate over it.

The different components may be represented in the following manner:


• Representation of States: For the representation of states, planners decompose the world into logical
conditions and then represent a state as a conjunction of predicate atoms (positive literals).

• Representation of Goals: A goal is a partially specified state and is represented as a conjunction of
predicate atoms (ground literals).

• Representation of Actions: An action is specified in terms of preconditions (that must hold true before
the action can be executed) and effects (that ensue when the action has been executed).

Operator-Based Planning
• To solve a problem with the help of planning, we need to have a start state, a set of actions, a goal state,
and a database of logical sentences about the start state.
• Once these are specified, the planner will try to generate a plan, which when executed by the executor
in a state S (start state), satisfying the description of the start state, will result in the state G (goal state),
satisfying the description of the final state.
• Here, actions are represented as operators. This approach, also known as the STRIPS approach
(explained later), utilizes various operator schemas and plan representations. The major design issues
and concepts are given as follows:
• Operator schema: add-delete-precondition lists, procedural vs declarative representations, etc.
• Plan representations: linear plans, non-linear plans, hierarchical plans, partial-order plans,
conditional plans, etc.
• Planning algorithms: planning as search, world-space vs plan-space, partial-order planning,
total-order planning, progression, goal-regression, etc.
• Plan generation: plan reformulation, repair, total-ordering, etc.

Planning Algorithms
• Search techniques for planning involve searching through a search space. We now introduce the concept
of planning as a search strategy.
• In this method, there are basically two approaches: searching a world (or state) space or
searching a plan space. The concepts of world (or state) space and plan space may be defined as given
below.
• World-Space In world space, the search space constitutes a set of states of the world, action is defined
as transitions between states, and a plan is described as a path through the state space.

• Plan-Space In plan space, the search space is a set of plans (including partial plans). The start state is a
null plan and transitions are plan operators. The order of the search is not the same as plan execution
order. A shortcoming of the plan space is that it is hard to determine what is true in a plan.

Searching a World Space


• Each node in the state search graph denotes a state of the world, while arcs in the graph correspond to
the execution of a specific action. The planning problem is to find a path from a given start state to the
desired goal state in the search graph. For developing planning algorithms, one of the following two
approaches may be used:
• Progression: This approach refers to the process of finding the goal state by searching through the states
generated by actions that can be performed in the given state, starting from the start state. It is also
referred to as the forward chaining approach. Here, at a given state, an action (the choice may be
non-deterministic) whose preconditions are satisfied is chosen.
• Regression: In this approach, the search proceeds in the backward direction, that is, it starts with the
goal state and moves towards the start state. This is done by finding actions whose effects satisfy one
or more of the posted goals. Posting the preconditions of the chosen action as goals is called goal
regression. It is also known as the backward chaining approach. A sketch of the forward variant is given below.
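
To make the progression approach concrete, here is a minimal, illustrative sketch (not from the text) of forward search over world states in Python; operators are encoded as precondition/add/delete sets, and the Op type and predicate strings are assumptions made for illustration only.

from collections import deque
from typing import NamedTuple

class Op(NamedTuple):
    name: str
    pre: frozenset   # preconditions
    add: frozenset   # predicates made true by the action
    dele: frozenset  # predicates made false by the action

def progression_search(start, goal, operators):
    """Forward (progression) search: breadth-first through world states."""
    start = frozenset(start)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                 # goal description is satisfied
            return plan
        for op in operators:
            if op.pre <= state:           # operator applicable in this state
                nxt = frozenset((state - op.dele) | op.add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [op.name]))
    return None

The regression variant would instead search backward from the goal set, replacing a chosen sub goal with the preconditions of an operator whose effects achieve it.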

Searching a Plan Space


Each node in a plan-space graph represents a partial plan, while arcs denote plan refinement operations. One
can search for either a plan with a totally-ordered sequence of actions or a plan with a partially-ordered set
of actions. A partial-order plan has the following three components:
i. Set of actions: As an example of a set of actions, we can consider some of the activities
from our daily routine, such as go-for-morning-walk, wake-up, take-bath, go-to-work,
go-to-sleep, and so on.
ii. Set of ordering constraints: In the actions mentioned above, the action wake-up is
performed before go-for-morning-walk; therefore, we can represent this as
[wake-up → go-for-morning-walk]. Some of the partial ordering constraints in the set of
actions given above may be written as follows:
wake-up → go-for-morning-walk
wake-up → take-bath
wake-up → go-to-sleep
wake-up → go-to-work
go-for-morning-walk → go-to-work
go-for-morning-walk → go-to-sleep
take-bath → go-to-sleep
go-to-work → go-to-sleep

iii. Set of causal links: We observe that awake is a link from the action wake-up to the action
go-for-morning-walk, with the awakening state of the person between them, represented
as 'wake-up - awake - go-for-morning-walk'. This denotes a causal link.

Case-Based Planning
In case-based planning, for a given case (consisting of start and goal states) of a new problem, the library of
cases in the case base is searched to find a similar problem, with similar start and goal states. The retrieved
solution is then modified or tailored according to the new problem.
Thus, case-based planning helps in utilizing specific knowledge obtained from previous experience. This approach
is based on the human methodology of tackling a problem. Humans also obtain solutions to their problems by
learning from experience and solve a new problem by finding a similar case handled in the past.
So, in case-based planning, a new problem is matched against the cases stored in the case base (past
experience) and one or more similar cases are retrieved.
A solution suggested by the matched cases is then reused and tested for success. Unless the retrieved case is a
close match, the solution will have to be revised, which produces a new case that can then be retained as a part
of learning. An initial description of a problem defines a new case. This planning procedure is described as a
cyclical process consisting of the following steps:
• Retrieve: The most similar cases are retrieved from the case base using various methods.
• Reuse: The cases are reused in an attempt to solve the new problem.
• Revise: The proposed solution of the retrieved case is revised and modified, if necessary.
• Retain: The new solution is retained as a part of learning if it is very different from the
retrieved case.
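
As a rough illustration of this cycle (a sketch, not from the text), one retrieve/reuse/revise/retain pass could look as follows; Case, similarity, adapt, and works are hypothetical stand-ins for problem-specific components.

from typing import Callable, List, NamedTuple

class Case(NamedTuple):
    start: frozenset   # start-state description
    goal: frozenset    # goal-state description
    plan: tuple        # stored sequence of operator names

def cbr_plan(problem: Case, case_base: List[Case],
             similarity: Callable, adapt: Callable, works: Callable) -> tuple:
    """One pass of the case-based planning cycle (a sketch)."""
    best = max(case_base, key=lambda c: similarity(c, problem))  # Retrieve
    plan = best.plan                                             # Reuse
    if not works(plan, problem):                                 # Revise if needed
        plan = adapt(plan, problem)
        case_base.append(problem._replace(plan=plan))            # Retain the new case
    return plan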

State-Space Linear Planning


In the process of linear planning, only one goal is solved at a time. It requires a simple search strategy that uses
a stack of unachieved goals. The advantages and disadvantages of state-space linear planning are discussed as
follows:
Advantages of Linear Planning
• Since the goals are solved one at a time, the search space is considerably reduced.
• Linear planning is especially advantageous if goals are (mainly) independent.
• Linear planning is sound.
Disadvantages of Linear Planning
• It may produce sub-optimal solutions (based on the number of operators in the plan).
• Linear planning is incomplete.
• In linear planning, the planning efficiency depends on ordering of goals.

State-Space Non-Linear Planning


In non-linear planning, the sub goals may be solved in any order and they may be interdependent. In this
process, the basic idea is to use a goal set instead of a goal stack. All possible sub-goal orderings are included in
the search space and the goal interactions are handled by interleaving. The advantages and disadvantages of
state-space non-linear planning are discussed as follows:
Advantages of Non-Linear Planning
• Non-linear planning is sound and complete.
• It may be optimal with respect to plan length (depending on the search strategy employed).
Disadvantages of Non-Linear Planning
• Since all possible goal orderings may have to be considered, a larger search space is required in
non-linear planning.
• Non-linear planning requires a more complex algorithm and a lot of book-keeping.

Block World Problem: Description


A block world problem basically consists of handling blocks and generating a new pattern from a given pattern
(Rich and Knight, 2003). This problem closely resembles a game of block arrangement and construction usually
played by children. Our aim is to explain strategies that may lead to a plan (or steps), which may help a robot
to solve such problems. For this, let us consider the following assumptions:
• All blocks are of the same size (square in shape).
• Blocks can be stacked on each other.
• There is a flat surface (table) on which blocks can be placed.
• There is a robot arm that can manipulate the blocks. The arm can hold only one block at a time.
In the block world problem, a state is described by a set of predicates, which represent the facts that are true
in that state. For every action, we describe the changes that the action makes to the state description. In
addition, some statements regarding the things which remain unchanged by applying actions are also to be
specified. For example, if a robot picks up a block, the colour of the block will not change. This is the simplest
possible approach and is described below. Table 6.1 provides descriptions of the operators or actions used for
this problem: UNSTACK(X, Y) (abbreviated US), STACK(X, Y) (ST), PICKUP(X) (PU), and PUTDOWN(X) (PD).

Actions (Operations) Performed by Robot


To understand the concept of block world problem, let us use the following convention:
• Capital letters X, Y, Z, ... are used to denote variables
• Lowercase letters a, b, c, .... are used to represent specific blocks

In the block world problem, certain predicates are used to represent the problem states for performing the
operations given in Table 6.1. These predicates are described in Table 6.2; in what follows, they are abbreviated
as O (ON), T (ONTABLE), C (CLEAR), H (HOLDING), and AE (ARMEMPTY).

Logic-Based Planning
In this approach, we have to explicitly state all possible logical statements that are true in the block world
problem. Some of these logical statements are described as follows:
• If the robot arm is holding an object, then arm is not empty
(∃X) HOLDING(X)→~ARMEMPTY
• If the robot arm is empty, then arm is not holding anything
ARMEMPTY →~(∃X) HOLDING(X)
• If X is on a table, then X is not on the top of any block
(∀X) ONTABLE(X) →~(∃Y) ON(X, Y)
• If X is on the top of a block, then X is not on the table
(∀X) (∃Y) ON(X, Y) →~ONTABLE(X)
• If there is no block on top of block X, then top of block X is clear
(∀X) (~(∃Y) ON(X, Y)) → CLEAR(X)
• If the top of block X is clear, then there is no block on the top of X
(∀X) CLEAR(X) →~(∃Y) ON(Y, X)

Further, the axioms reflecting the effect of the operations mentioned in Table 6.2 have to be described on a
given state. Let us assume that a function named GEN generates a new state from a given state as a result of
the application of some operator/action. For example, if the action OP is applied on a state S and a new state
S1 is generated then S1 is written as S1= GEN(OP, S).
The effect of UNSTACK(X, Y) in state S is described by the following axiom:
[CLEAR(X, S) Ʌ ON(X, Y, S) Ʌ ARMEMPTY(S)] → [HOLDING(X, S1) Ʌ CLEAR(Y, S1)]
Here, S1 is a new state obtained after performing the UNSTACK operation in state S. If we execute UNSTACK(X, Y)
in S, then we can prove that HOLDING(X, S1) Ʌ CLEAR(Y, S1) holds true. Here, state is introduced as an
argument of an operator.
Since the interpretations of the axioms showing the effects of the following operations are self-explanatory, we
omit them from now on.
The effect of STACK(X, Y) in state S is described as follows.
[HOLDING (X,S) Ʌ CLEAR (Y, S)] → [ON(X, Y, S1) Ʌ CLEAR(X, S1) Ʌ ARMEMPTY(S1)]
The effect of PU(X) in state S is described by the following axiom.
[CLEAR(X, S) Ʌ ONTABLE(X, S) Ʌ ARMEMPTY(S)] → [HOLDING(X, S1)]
The effect of PD(X) in state S is described as follows.
[HOLDING(X,S)] → [ONTABLE(X, S1) Ʌ CLEAR(X, S1) Ʌ ARMEMPTY(S1)]

The advantage of this approach is that only a simple mechanism of resolution needs to be performed for all the
operations that are required on the state descriptions. On the other hand, the disadvantage of this approach is
that in case of complex problems, the number of axioms becomes very large as we have to enumerate all those
properties which are not affected, separately for each operation.
For handling complex problem domains, we need a mechanism that does not require a large number of
frame axioms to be specified explicitly. Such a mechanism was used in an early robot problem-solving system
known as STRIPS (Stanford Research Institute Problem Solver), which was developed by Fikes and Nilsson in 1971.
Each operator in this approach is described by a list of new predicates that become true and a list of old
predicates that become false after the said operator is applied. These are called ADD and DEL (delete) lists,
respectively. There is another list called PRE (preconditions), which is specified for each operator; these
preconditions must be true before an operator is applied.

STRIPS-Style Operators
We observe that by making the frame axioms implicit, we have greatly reduced the amount of information that
needs to be provided for each operator. Now, we only need to specify the required effect; the unaffected
attributes are not included. Therefore, in this representation, if a new attribute is added, the operator lists do
not get changed.

The three lists (PRE, DEL, and ADD) required for each operator are given in Table 6.5.
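
As a concrete sketch (an illustrative reconstruction, not the contents of Table 6.5), a STRIPS-style operator can be encoded as three sets of ground predicates, using the abbreviations O, T, C, H, and AE introduced earlier; the Operator class and apply function below are assumptions made for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    """A STRIPS-style operator with PRE, ADD, and DEL lists."""
    name: str
    pre: frozenset    # PRE: must hold before application
    add: frozenset    # ADD: predicates that become true
    dele: frozenset   # DEL: predicates that become false

def apply(op: Operator, state: set) -> set:
    """Apply op to a state (a set of predicate strings) if PRE is satisfied."""
    assert op.pre <= state, f"preconditions of {op.name} not satisfied"
    return (state - op.dele) | op.add

# Ground instance US(b, a): unstack block b from block a.
us_b_a = Operator("US(b,a)",
                  pre=frozenset({"O(b,a)", "C(b)", "AE"}),
                  add=frozenset({"H(b)", "C(a)"}),
                  dele=frozenset({"O(b,a)", "C(b)", "AE"}))

start = {"O(b,a)", "T(a)", "T(c)", "T(d)", "C(b)", "C(c)", "C(d)", "AE"}
print(sorted(apply(us_b_a, start)))
# ['C(a)', 'C(c)', 'C(d)', 'H(b)', 'T(a)', 'T(c)', 'T(d)']

The result matches State 1 of the goal stack example worked out in the next section.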

Linear Planning Using a Goal Stack


The goal stack method is one of the earliest methods used for solving compound goals, which may
not interact with each other. This approach was used by the STRIPS system. In this system, the problem solver
makes use of a single stack containing goals as well as operators that have been proposed to satisfy these goals.
In the goal stack method, individual sub goals are solved linearly and then, at the final stage, the conjoined sub
goals are solved. Plans generated by this method contain the complete sequence of operations for solving the first
goal, followed by the complete sequence of operations for the next one, and so on.
The problem solver uses a database that describes the current state and the set of operators with PRE, ADD, and
DEL lists.

Simple Planning using a Goal Stack


In this section, we will discuss the method of simple planning using a goal stack. Let us explain the method
using hypothetical goals. Consider the following goal that consists of sub goals G1, G2, ..., Gn:
GOAL = G1 Ʌ G2 Ʌ ... Ʌ Gn
The sub goals G1, ..., Gn are stacked (in any order) with the compound goal G1 Ʌ ... Ʌ Gn at the bottom of the stack.
The status of the stack is shown in Fig. 6.1.
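
A minimal sketch of the goal-stack loop (an illustrative reconstruction, not the book's Algorithm 6.1) is given below. Operators are precondition/add/delete tuples as before; because the sketch never reorders goals, it can loop on strongly interacting goals such as the Sussman anomaly discussed later.

from typing import NamedTuple

class Op(NamedTuple):
    name: str
    pre: frozenset
    add: frozenset
    dele: frozenset

def goal_stack_plan(start, goals, operators):
    """Linear planning with a goal stack: sub goals above the compound goal."""
    state = set(start)
    stack = [frozenset(goals)] + list(goals)    # compound goal at the bottom
    plan = []                                   # the PLAN_QUEUE
    while stack:
        top = stack.pop()
        if isinstance(top, Op):                 # preconditions were solved: execute
            state = (state - top.dele) | top.add
            plan.append(top.name)
        elif isinstance(top, frozenset):        # compound goal: re-push unsolved parts
            unsolved = [g for g in top if g not in state]
            if unsolved:
                stack.append(top)
                stack.extend(unsolved)
        elif top not in state:                  # single unsolved goal
            op = next(o for o in operators if top in o.add)  # choose an achiever
            stack.append(op)                    # operator sits below its
            stack.extend(op.pre)                # preconditions on the stack
    return plan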

Solving Block World problem using Goal Stack method


To illustrate the working of Algorithm 6.1, consider the following example, where the start and goal states of
block world problem are shown in Fig. 6.2. Here a, b, c, and d are specific blocks.

The logical representations of start and goal states may be written as


Start State: O(b, a) Ʌ T(a) Ʌ T(c) Ʌ T(d) Ʌ C(b) Ʌ C(c) Ʌ C(d) Ʌ AE
Goal State: O(a, c) Ʌ O(b, d) Ʌ T(c) Ʌ T(d) Ʌ C(a) Ʌ C(b) Ʌ AE
PLAN_QUEUE = ɸ
We notice that (T(c) Ʌ T(d) Ʌ C(b) Ʌ AE) is true in both start and goal states. Hence, for the sake of convenience,
we can represent it by CSG (conjoined sub goals present in both states). We will work to solve the sub goals O(a, c),
O(b, d), and C(a), and while solving these sub goals, we will make sure that CSG remains true. We first put
these sub goals in some order on the stack. Let the initial status of the stack be as shown in Fig. 6.3.

Now, we need to identify an operator that can solve C(a). We notice that only the operator US(X, a) can be
applied, where X gets bound to the actual block on top of 'a'. We pop C(a) and push US(X, a) onto the goal stack.
The status of the goal stack changes as shown in Fig. 6.4.

Since the operator US(X, a) can be applied only if its preconditions are true, we add its preconditions
on top of the stack. The changed status of the stack is shown in Fig. 6.5.

The start state of the problem may be written as State 0


State 0 (Start state) O(b, a) Ʌ T(a) Ʌ T(c) Ʌ T(d) Ʌ C(b) Ʌ C(c) Ʌ C(d) Ʌ AE
From State 0, we find that 'b' is on top of 'a', so the variable X is unified with block 'b'. Now, all preconditions
{O(b, a), C(b), AE} of US(b, a) are satisfied. Therefore, the next step is to pop these preconditions along with their
conjoined/compound sub goal if it is still true. In this case, we find the compound sub goal to be true. We now pop
the top operator US(b, a) and add it to PLAN_QUEUE, the sequence of operators. Initially, PLAN_QUEUE was
empty but now it contains US(b, a).
PLAN_QUEUE = US(b, a)
A new state, State 1, produced by using the ADD and DEL lists of US(b, a), is written as
State 1 T(a) Ʌ T(c) Ʌ T(d) Ʌ H(b) Ʌ C(a) Ʌ C(c) Ʌ C(d)
The transition from State 0 to State 1 is shown in Fig. 6.6.

The new goal stack is shown in Fig. 6.7.

Now, let us solve the top sub goal O(a, c). For solving this, we can only apply the operator ST(a, c). So, we pop
O(a, c) and push ST(a, c) along with its preconditions onto the stack. The changed goal stack is given in Fig. 6.8.

From State 1: {T(a) Ʌ T(c) Ʌ T(d) Ʌ H(b) Ʌ C(a) Ʌ C(c) Ʌ C(d)}, we notice that C(c) is true, so we pop it. Then, we
observe that the next sub goal H(a) is unachieved (not true), so we will solve this. For making H(a) true,
we can apply the operator PU(a) or US(a, X). In fact, either of the two operators can be applied, but let us choose
PU(a) initially. Now pop H(a) and push PU(a) with its preconditions onto the stack. The current stack status is
shown in Fig. 6.9.

Now, the top sub goal AE of the stack is to be solved. We notice that AE is not true as the arm is holding 'b' (as given
in State 1).
In order to make the arm empty, we need to perform either ST(b, X) or PD(b). Let us choose ST(b, X). If we look
a little ahead, we note that we want 'b' on 'd'. Therefore, we unify X with 'd'. So, we replace AE by ST(b, d)
along with its preconditions. The goal stack now changes to that shown in Fig. 6.10.

Now, C(d) and H(b) are both true in State 1. So, we pop these predicates along with the compound sub goal
{C(d) Ʌ H(b)}, which is also true. We notice that the operator ST(b, d) can be applied as its preconditions are satisfied.
Therefore, we pop ST(b, d) and add it to the queue sequence PLAN_QUEUE of operators. Thus, we can
write
PLAN_QUEUE = US (b, a), ST(b, d)
Now we produce a new state, State 2, using the ADD and DEL lists of ST(b, d):
State 2 T(a) Ʌ T(c) Ʌ T(d) Ʌ O(b, d) Ʌ C(a) Ʌ C(c) Ʌ C(b) Ʌ AE
The transition from State 1 to State 2 is shown in Fig. 6.11.

The goal stack obtained as a result of this transition is shown in Fig. 6.12.

Now, AE, C(a), and T(a) are true (from State 2); hence, the preconditions of PU(a) are satisfied. As a result, the
operation PU(a) can be performed. Therefore, we pop it, add it to the queue of the sequence of operators,
and generate a new state, State 3. We can now write

PLAN_QUEUE = US (b, a), ST(b, d), PU(a)


State 3 of the problem can be written as
State 3: T(c) Ʌ H(a) Ʌ T(d) Ʌ O(b, d) Ʌ C(c) Ʌ C(b)
The transition from State 2 to State 3 is shown in Fig. 6.13.

The goal stack thus obtained as a result of this change is shown in Fig. 6.14.

Further, since C(c) and H(a) are both true in State 3, we pop them; and since all preconditions of ST(a, c) are met, we
pop the operator ST(a, c) and add it to the solution queue. Thus, we can write
PLAN_QUEUE = US(b, a), ST(b, d), PU(a), ST(a, c)
State 4 of the problem can be written as
State 4 T(c) Ʌ O(a, c) Ʌ T(d) Ʌ O(b, d) Ʌ C(a) Ʌ C(b) Ʌ AE
The transition from State 3 to State 4 is shown in Fig. 6.15

The goal stack obtained as a result of this change is shown in Fig. 6.16.

From the database of State 4, we see that O(b, d) is already solved (i.e., true) and the conjoined sub goal is also
true, so we pop these sub goals. Now the goal stack becomes empty, so the problem solver returns the plan
from PLAN_QUEUE containing the sequence of operators to be applied: US(b, a), ST(b, d), PU(a), ST(a, c).
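
As a quick self-contained check, the four-step plan can be simulated against the start state; the (PRE, ADD, DEL) triples below are an illustrative encoding consistent with the state transitions traced above, not a copy of Table 6.1.

OPS = {  # operator name -> (PRE, ADD, DEL)
    "US(b,a)": ({"O(b,a)", "C(b)", "AE"}, {"H(b)", "C(a)"}, {"O(b,a)", "C(b)", "AE"}),
    "ST(b,d)": ({"H(b)", "C(d)"}, {"O(b,d)", "C(b)", "AE"}, {"H(b)", "C(d)"}),
    "PU(a)":   ({"C(a)", "T(a)", "AE"}, {"H(a)"}, {"T(a)", "C(a)", "AE"}),
    "ST(a,c)": ({"H(a)", "C(c)"}, {"O(a,c)", "C(a)", "AE"}, {"H(a)", "C(c)"}),
}

state = {"O(b,a)", "T(a)", "T(c)", "T(d)", "C(b)", "C(c)", "C(d)", "AE"}
for name in ["US(b,a)", "ST(b,d)", "PU(a)", "ST(a,c)"]:
    pre, add, dele = OPS[name]
    assert pre <= state, f"{name} is not applicable"   # preconditions must hold
    state = (state - dele) | add                       # apply DEL, then ADD
print(sorted(state))
# ['AE', 'C(a)', 'C(b)', 'O(a,c)', 'O(b,d)', 'T(c)', 'T(d)']

The final set is exactly the goal state O(a, c) Ʌ O(b, d) Ʌ T(c) Ʌ T(d) Ʌ C(a) Ʌ C(b) Ʌ AE.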

Sussman Anomaly Problem


To begin with, the start state and goal state of Sussman anomaly problem may be written as follows and shown
in Fig. 6.17.
Start State (State 0) O(c, a) Ʌ T(a) Ʌ T(b) Ʌ C(c) Ʌ C(b) Ʌ AE
Goal State O(a, b) Ʌ O(b, c) Ʌ T(c) Ʌ C(a) Ʌ AE

For solving sub goal O(a, b), we have to apply the following operators obtained using the goal stack method:
• US(c, a)
• PD(c)
• PU(a)
• ST(a, b)

The new state, State 1, generated after applying these operations is represented as
State 1: O(a, b) Ʌ T(b) Ʌ T(c) Ʌ C(c) Ʌ C(a) Ʌ AE
The transition from State 0 to State 1 is shown in Fig. 6.18.

Now, our aim is to satisfy the sub goal O(b, c). The sequence of operators US(a, b), PD(a), PU(b), and ST(b, c) is
applied and State 2 is generated. This state is represented as
State 2 O(b, c) Ʌ T(c) Ʌ T(a) Ʌ C(b) Ʌ C(a) Ʌ AE
The transition from State 1 to State 2 due to the application of the sequence of operators mentioned above is
given in Fig. 6.19.
Finally, we need to satisfy the conjoined goal O(a, b) Ʌ O(b, c) Ʌ T(c) Ʌ C(a) Ʌ AE. We notice that while satisfying
O(b, c), we have undone the already solved sub goal O(a, b).

In order to solve O(a, b) again, we apply the operations PU(a) and ST(a, b). The conjoined goal is checked again
and it is found to be satisfied now. We obtain the goal state, and therefore, can now collect the total plan. The
transition from State 2 to the goal state is shown in Fig. 6.20.

The complete plan thus generated is the sequence of all the sub plans generated above. Therefore, the following
operations will be present in the solution sequence:
(i) US(c, a) (ii) PD(c) (iii) PU(a) (iv) ST(a, b) (v) US(a, b) (vi) PD(a) (vii) PU(b) (viii) ST(b, c) (ix) PU(a) (x) ST(a, b)

Although this plan eventually achieves the desired goal, it is not considered efficient because of the
presence of a number of redundant steps, such as stacking and unstacking of the same blocks, one performed
immediately after the other. We can get an efficient plan from this plan simply by repairing it.
Repairing is done by looking at those steps where operations are done and undone immediately, such as ST(X, Y)
and US(X, Y), or PU(X) and PD(X). In the above plan, we notice that stacking and unstacking are done at steps
(iv) and (v). By removing these complementary operations, we obtain the new plan as follows:
(i) US(c, a) (ii) PD(c) (iii) PU(a) (iv) PD(a) (v) PU(b) (vi) ST(b, c) (vii) PU(a) (viii) ST(a, b)

We notice that in this new revised plan, PU(a) and PD(a) at steps (iii) and (iv) are again complementary
operations, so we remove them too. The final plan is as follows:
(i) US(c, a) (ii) PD(c) (iii) PU(b) (iv) ST(b, c) (v) PU(a) (vi) ST(a, b)

For the sake of completeness, the sequence of operations applied from the start state to the goal state is shown
in Fig. 6.21.
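
The repair step itself is mechanical; the following small sketch (illustrative, not from the text) repeatedly deletes adjacent do/undo pairs from a plan given as operator-name strings.

INVERSE = {"US": "ST", "ST": "US", "PU": "PD", "PD": "PU"}

def repair(plan):
    """Repeatedly remove adjacent complementary pairs such as ST(a,b), US(a,b)."""
    changed = True
    while changed:
        changed = False
        for i in range(len(plan) - 1):
            name_a, args_a = plan[i].split("(")
            name_b, args_b = plan[i + 1].split("(")
            if INVERSE.get(name_a) == name_b and args_a == args_b:
                plan = plan[:i] + plan[i + 2:]   # drop the do/undo pair
                changed = True
                break
    return plan

plan = ["US(c,a)", "PD(c)", "PU(a)", "ST(a,b)", "US(a,b)",
        "PD(a)", "PU(b)", "ST(b,c)", "PU(a)", "ST(a,b)"]
print(repair(plan))
# ['US(c,a)', 'PD(c)', 'PU(b)', 'ST(b,c)', 'PU(a)', 'ST(a,b)']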

Even so, a great deal of problem-solving effort has been wasted in generating and then repairing the plan. For
solving such problems there exist other approaches, known as non-linear planning methods, which construct
efficient plans directly.

Means-Ends Analysis
In the means-ends analysis (MEA) approach, the basic idea is to search only the relevant aspects of a given
problem. In this method, we try to find the difference between the current state and the goal state.
Once such a difference is isolated, an operator that can reduce this difference must be found. It is quite possible
that the chosen operator cannot be applied to the current state immediately.
For example, let S be the start state and G be the goal state of a problem as shown in Fig. 6.22. Suppose, some
operator transforms state A to state B, then the difference between {S and A} and {B and G} must be reduced
to obtain the goal state. This reduction of difference is depicted in Fig. 6.22.

In this approach, a kind of backward chaining is used in which operators are selected to satisfy the preconditions
of other operators; this method is called operator subgoaling.
Thus, to solve the problem, the difference between S and A is reduced, and similarly the difference between B
and G is reduced. The MEA also relies on a set of rules just like other problem-solving techniques.
These rules are usually not represented with a complete state description on each side. Instead, they are
represented in such a manner that their left side describes preconditions, while the right side describes those
aspects of the problem state that will be changed by the application of the rule. Newell et al. (1960) developed the
General Problem Solver using the concept of MEA.
This approach essentially uses linear planning with recursive procedure calls as the goal-stack mechanism.
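
A minimal recursive sketch of MEA (an illustrative reconstruction, not GPS itself) follows; operators are precondition/add/delete tuples, and the depth guard is an assumption added to keep the sketch terminating.

from typing import NamedTuple

class Op(NamedTuple):
    name: str
    pre: frozenset
    add: frozenset
    dele: frozenset

def mea(state, goal, operators, depth=0):
    """Means-ends analysis: reduce the state-goal difference recursively."""
    state = frozenset(state)
    if goal <= state:
        return []
    if depth > 8:                                # crude guard against cycling
        return None
    for op in operators:
        if op.add & (goal - state):              # op reduces the difference
            prefix = mea(state, op.pre, operators, depth + 1)  # operator subgoaling
            if prefix is None:
                continue                         # cannot enable op; try another
            s = set(state)
            for name in prefix:                  # simulate the enabling sub plan
                o = next(o for o in operators if o.name == name)
                s = (s - o.dele) | o.add
            s = (s - op.dele) | op.add           # then apply op itself
            rest = mea(s, goal, operators, depth + 1)
            if rest is not None:
                return prefix + [op.name] + rest
    return None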

Consider a robot that is required to move a large table with two objects on top of it from one location to another.
We are required to find the sequence of actions or operations that the robot needs to perform for moving the
table. There are two objects on top of the table that must also be moved.
Assume that certain operators are available for solving this problem. Description of these operators along with
preconditions and result are given in Table 6.6. Here, O is a variable used to denote an object and L is a variable
used to denote location.

We will be using the following predicates to describe the situation.

The main difference between start and goal states of this problem is the change of location of the table (along
with the objects lying on top of it) from start position to goal position. A data structure called difference table
is used to store the differences and the operators that can reduce these differences.
For example, the difference 'move object' can be achieved by two operators, PUSH and CARRY. Similarly, the
difference 'make arm empty' can be achieved by the PD and PLACE operators. A difference table for this example is
given in Table 6.7.

To reduce the difference between the start and goal states, the table has to be moved from one location to another.
For this, we need to apply either the CARRY or the PUSH operation. If CARRY is chosen first, then its preconditions
must be met. This results in the following differences:
AT(robot, table) Ʌ SMALL(table) Ʌ CLEAR(table) Ʌ AE
We notice that the table is not small and also its top is not clear, so the CARRY operation cannot be applied.
Therefore, we choose the PUSH operation, which has the following preconditions:
AT(robot, table) Ʌ LARGE(table) Ʌ CLEAR(table) Ʌ AE
Figure 6.23 shows the current status of the problem after applying the PUSH operator.

There are only two main differences: AT(robot, table) Ʌ CLEAR(table). The other two predicates, LARGE(table)
and AE, are already true at the goal position. The differences AT(robot, table) Ʌ CLEAR(table) can be reduced as
follows:
• AT(robot, table) is satisfied by the operation WALK(table_loc)
• CLEAR(table) is satisfied by clearing the two objects present on the table.
Now, clearing the objects can be done by two uses of the PU operator. After one pickup operation, an attempt to
pick up the second object results in another difference: the arm of the robot should be empty before the second
pickup operation. This difference is reduced by using the PD operator. Hence, we get the sequence of operators
shown in Fig. 6.24.

Once PUSH is performed, the problem state becomes closer to the goal state. Now, the objects must be placed
back on the top of the table. This is achieved by PLACE operator. The changed status of the problem is shown
in Fig. 6.25.

The final difference between B and C can be reduced by using WALK to get the robot back to the objects,
followed by PU and CARRY operations. Hence, the final plan is written as follows:

The order in which the differences are considered is critical. It is important that a significant difference is
reduced before trying to reduce a less significant one otherwise we may end up wasting a great deal of effort.

Non-Linear Planning Strategies


Goal Set Method
In this method, a plan is generated by doing some work on one sub goal, then on another sub goal, and finally
some more on the first sub goal. Such plans are called non-linear plans as they do not contain a linear sequence
of complete sub plans (Rich & Knight, 2003).
Let us consider again the Sussman anomaly problem to illustrate the working of goal set method. The start and
goal states are described as
Start State O(c, a) Ʌ T(a) Ʌ T(b) Ʌ C(c) Ʌ C(b) Ʌ AE
Goal State O(a, b) Ʌ O(b, c) Ʌ T(c) Ʌ C(a) Ʌ AE
Figure 6.26 shows the start state and goal state of the problem.

A good plan for solution of the above problem may ideally be written as follows:
• Start some work on sub goal O(a, b) by clearing 'a' using unstack operator that is, unstack 'c' from
'a' and then put 'c' on the table.
• Achieve O(b, c) by stacking 'b' on 'c'.
• Complete the goal O(a, b) by stacking 'a' on 'b'.

One way to find such a plan is to consider the collection of desired goals as a set. Here, a backward search
procedure is used from the goal state to the start state, with no operators being actually applied along the way.
The idea is to look first for that operator which will be applied last in the final solution. It assumes that all sub
goals except one have already been satisfied or solved.
We need to find all the operators that might satisfy this final sub goal. Each of these operators may have certain
preconditions that must be met so that a new combined goal set is generated, which includes the preconditions
as well as the original sub goals. Many of the paths can quickly be eliminated because of contradiction in the
goal set.
For example, if the goal set contains both 'arm empty' and 'arm holding', then evidently there is a
contradiction in the set. This path is then pruned and not explored further.
Let us proceed to solve the Sussman anomaly problem. Consider that the last sub goal to be proved is either
O(a, b) or O(b, c). Let us remove CLEAR and AE predicates from the goal state for the sake of simplicity. If we
assume that O(b, c) is the last sub goal and it is solved by an application of some operator 'OP', then all its
preconditions and sub goal O(a, b) must be assumed to be true prior to the application of the chosen operator
'OP'.
All the operators that satisfy this final sub goal are identified and new goal sets are created. When an operator
is applied, it may cause the sub goals in a set to be no longer true. So, non-selected goals are not directly copied
to the new goal set but a process called regression is applied.
Regression can be thought of as the backward application of operators. Each goal is regressed through an
operator, where we try to determine what must be true before the operator is applied. For example,
some of the predicates regressed under various operators are as follows:

The interpretation of Reg(O(X, Y), ST(Z, X)) = O(X, Y) is that the predicate O(X, Y) does not change when operator
ST(Z, X) is applied. If O(X, Y) is regressed under US(X, Y), then O(X, Y) is no longer true and hence is false:
Reg(O(X, Y), US(X, Y)) = false.
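
Regression through a STRIPS-style operator has a compact set formulation; in the sketch below (illustrative, with the Op tuple assumed as before), None plays the role of the 'false' result.

from typing import NamedTuple

class Op(NamedTuple):
    name: str
    pre: frozenset
    add: frozenset
    dele: frozenset

def regress(goals, op):
    """What must hold before op so that every goal holds after it."""
    if goals & op.dele:                  # op destroys a goal: Reg(...) = false
        return None
    return (goals - op.add) | op.pre     # surviving goals plus op's preconditions

st_b_c = Op("ST(b,c)", frozenset({"H(b)", "C(c)"}),
            frozenset({"O(b,c)", "C(b)", "AE"}), frozenset({"H(b)", "C(c)"}))
print(sorted(regress(frozenset({"O(a,b)", "O(b,c)"}), st_b_c)))
# ['C(c)', 'H(b)', 'O(a,b)']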
The backward search tree generated during the process of solving this problem is shown in Fig. 6.27. Note that
many paths are pruned whenever a contradiction occurs in a goal set such as {H(a), AE} or if it contains a false
value. For the sake of simplicity, pruned nodes are not shown. Contradiction might occur somewhere on these
paths. The nodes are numbered in the order in which they are generated and are connected by arrow lines. The
final plan generated for Sussman anomaly problem is written as
US(c, a) → PD(c) → PU(b) → ST(b, c)→ PU(a) → ST(a, b)

The algorithm that is used for this method is given below:

Forward and backward state-space searches are particular forms of totally ordered plan searches. They can
explore strictly linear sequences of actions that are directly connected to the start state or goal state and cannot
take advantage of problem decomposition.

Partial-Order Planning
Any planner that places two actions into a plan without specifying which comes first is called a partial-order
planner. In partial-order planning, actions dependent on each other are ordered relative to one another, but
not necessarily in relation to other independent actions. The solution is represented as a graph of actions
instead of a sequence of actions.
The following two macro operators may be defined here for the sake of simplicity (a macro operator is one that
can be defined by a sequence of simple operators): MOVE(X, Y), which puts block X on top of block Y, and
MOT(X), which moves block X onto the table.

Consider the block world example given in Fig. 6.28 to illustrate the concept of partial-order planning method.

In the problem shown in Fig. 6.28, we notice that in the goal state 'a' is on top of 'b'; to obtain this configuration,
we have to first move 'b' onto the table from the start state, that is, MOT(b), and then MOVE(a, b).
Hence, we can apply the partial ordering MOT(b) ← MOVE(a, b), that is, MOT(b) should come
before MOVE(a, b) in the final plan. Similarly, to move 'c' on top of 'd', we first have to move 'd' onto the table
and then move 'c' on top of 'd'. In this case, the partial ordering MOT(d) ← MOVE(c, d) is established.
A partial-order plan can also be represented in the form of a graph as shown in Fig. 6.29. The directed edge
from node MOT(d) to node MOVE(c, d) means that the partial order MOT(d) ← MOVE(c, d) holds true. It should be
noted that the dummy actions START and FINISH mark the beginning and end of the plan in the graph, respectively.

This representation of partial plans enables the planner to consider a number of different total plans without
bothering about ordering actions that are not dependent on each other. For example, six total plans can be
generated, as described in Table 6.8. Each of these is referred to as a linearization of the partial-order plan.
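
The count of six can be verified with a brute-force sketch (illustrative; the action names follow the example above) that enumerates every total order consistent with the two ordering constraints.

from itertools import permutations

actions = ["MOT(b)", "MOVE(a,b)", "MOT(d)", "MOVE(c,d)"]
before = [("MOT(b)", "MOVE(a,b)"), ("MOT(d)", "MOVE(c,d)")]  # x must precede y

def linearizations(actions, before):
    """Yield every total order consistent with the partial order (brute force)."""
    for perm in permutations(actions):
        if all(perm.index(x) < perm.index(y) for x, y in before):
            yield perm

for plan in linearizations(actions, before):
    print(plan)   # prints the six total plans of Table 6.8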

Constraint Posting Method


The basic idea behind the constraint posting method is also to generate a plan by incrementally hypothesizing
operators, partial orderings between operators, and bindings of variables within operators.
At any given time in the planning process, a solution is a partially ordered sequence of operators with a partially
instantiated set of operators. Any number of total orders may be obtained from the partially ordered sequence
of operators to generate the actual plan (Rich & Knight, 2003).
Let us consider the Sussman anomaly problem again as shown in the Fig. 6.30 and solve it by incrementally
generating a non-linear plan. The start and goal states may be represented as
Start State O(c, a) Ʌ T(a) Ʌ T(b) Ʌ C(c) Ʌ C(b) Ʌ AE
Goal State O(a, b) Ʌ O(b, c) Ʌ T(c) Ʌ C(a) Ʌ AE

For the sake of simplicity, we remove the sub goals T(c), C(a), and AE because once we satisfy O(a, b) Ʌ O(b, c),
the sub goals T(c), C(a), and AE will be automatically satisfied. Therefore, we will consider the goal state as
Goal State O(a, b) Ʌ O(b, c)
We will start with the null plan (i.e., no operators). Now, we try to look at the goal state and find the operators
that can achieve it. Clearly, we see that there are two operators ST(a, b) and ST(b, c) that when used will lead
to post conditions such as O(a, b) and O(b, c). Table 6.9 contains both operators for solving both the goals with
their preconditions and post conditions.
The symbol ~ is used for negation of predicate such as ~C(b) means that top of 'b' is not clear.

The unachieved goals (predicates) in the preconditions on both paths are marked with * as shown in Table 6.9.
We notice that H (Holding) is unachieved (not true) because the arm is empty initially on both paths.
Now, we introduce a new operator to achieve the H(a) and H(b) goals. This process is called operator addition or
step addition. These goals may be achieved by the Pickup (PU) or Unstack (US) operators.
We choose the PU operator instead of US, as ST and US are complementary operators. Therefore, we add the PU
operator to both paths to make the arm hold an object, along with the corresponding preconditions and
postconditions. We can now put a √ mark beside H(a) and H(b) as both of these sub goals are satisfied by the PU
operators. The revised status of Table 6.9 is shown in Table 6.10.

In Table 6.10, we see four operators PU(a), PU(b), ST(a, b), and ST(b, c) and four unachieved preconditions:
{C(a), AE} and {C(b), AE} on both the paths. It is clear that in the final plan, PU (Pickup) operator must precede
ST (Stack) operations. Whenever we employ an operator, we need to introduce certain ordering constraints
called promotion. Thus, we now introduce ordering in the operators as shown below.

We notice that the sub goal C(a) cannot be achieved as the top of 'a' is not clear in the start state (O(c, a) Ʌ T(a) Ʌ
T(b) Ʌ AE Ʌ C(c) Ʌ C(b)). Further, the sub goal C(b) is also unachieved (even though the top of 'b' is clear in the
start state) as there exists an operator ST(a, b) with the postcondition ~C(b). If we make sure that PU(b) precedes
ST(a, b), then C(b) is achieved. So, we post another constraint as given below:

At this point, note that preconditions C(a) and AE of PU(a) are still not achieved. Let us try to achieve AE (the
precondition of each Pickup operator) before C(a). We know that start state has AE. So, one PU can achieve its
precondition but the other PU operator could be prevented from being executed.
Assume that precondition AE of PU(b) is achieved in order to make sure that all preconditions of PU(b) are
satisfied (true). So, we need to put an ordering constraint on PU(b) and PU(a) as preconditions of PU(a) are still
not achieved as given below. Therefore, PU(b) is to be carried out before PU(a).

Since PU(b) will make the arm not empty (~AE), which is a precondition for PU(a), we need to have some operator
in between that can restore AE for PU(a) to be carried out. We notice that PU(b) precedes ST(b, c) from Partial
plan 1, and ST(b, c) will make the arm empty for PU(a).
So we can safely place ST(b, c) between PU(b) and PU(a), as there is no ordering so far between ST(b, c) and
PU(a). This process is known as declobbering, where an operator Op2 is placed between two operators Op1 and
Op3 in such a way that Op2 reasserts some preconditions of Op3 that were negated by Op1. Partial plan 4 is
generated by putting the following constraint:

Here, PU(b) is said to clobber the precondition of PU(a), while ST(b, c) is said to declobber it, that is, it removes
the deadlock. We will now try to achieve the preconditions {C(a), AE} of PU(a). Consider C(a), which can be achieved
by using US(c, a). Add the preconditions and postconditions of US(c, a) on the first path as shown in Table 6.11.
O(c, a) can easily be seen to be true in the start state {O(c, a) Ʌ T(a) Ʌ T(b) Ʌ AE Ʌ C(c) Ʌ C(b)}. Even though *C(c)
is also true in the start state, it may be denied by the operator ST(b, c), which has been used earlier. Similarly, *AE
may be denied by the operators PU(a) or PU(b). So, we have to put the following constraints in order to make sure
that C(c) and AE remain true.

Adding a new operator requires checking, as the new step may clobber some preconditions required by a later
operator. Here, we notice that PU(b) requires the arm to be empty (AE), which is denied by the new operator
US(c, a). One way to resolve this problem is to add a new declobbering operator that makes the arm empty (AE)
for the PU(b) operator.

An operator PD(c) can be used for this purpose, as shown in Partial plan 6. It is clear that the postconditions
of US(c, a) do not contradict the preconditions of PD(c), and the postconditions of PD(c) do not contradict the
preconditions of PU(b).

Since all the unachieved goals on both paths are now solved, we need to combine the sub plans to generate
the final plan. All possible partial plans are listed in Table 6.12.

We get the final plan by combining the declobbering plans from the last sub plan to the first sub plan as follows.
Here, we notice that there is only one plan, PU(a) ← ST(a, b), which is also included in the final plan.
US(c, a) ← PD(c) ← PU(b) ← ST(b, c) ← PU(a) ← ST(a, b)
The main steps that are involved in non-linear plan generation using constraint posting method may be
summarized as follows:

The algorithm of this process may be written as given below.

Compact Representation

Now we will present the steps involved in non-linear planning using constraint posting.
Start State O(c, a) Ʌ T(a) Ʌ T(b) Ʌ AE Ʌ C(c) Ʌ C(b)
Goal State O(a, b) Ʌ O(b, c)

Here, the predicates marked with * in the preconditions of operators are not true. C(a) in the preconditions of
PU(a) is unachieved as the top of 'a' is not clear in the start state. Also, C(b) in the preconditions of PU(b) is
unachieved even though the top of 'b' is clear in the start state; this is because there exists an operator ST(a, b)
with the postcondition ~C(b). So, in order to make sure that C(b) is achieved, we post the constraint that PU(b)
must precede ST(a, b).

Now, one PU operator could prevent the other PU operator from being executed. Initially AE is true, so one PU
can be performed. Assume that AE, as a precondition of PU(b), is achieved first so that all preconditions of PU(b)
are satisfied.

PU(b) leads to ~AE; however, ST(b, c) will make AE true, which is a precondition of PU(a). So, we introduce ST(b, c)
between PU(b) and PU(a).

Now, let us try to satisfy the precondition C(a) of PU(a). This can be achieved by using US(c, a).

O(c, a) can be easily seen to be true in start state. However, C(c) may be denied by operator ST(b, c), which has
already been used earlier, and AE may be denied by operators PU(a) and PU(b).

Promotion: US(c, a) ← ST(b, c); US(c, a) ← PU(a); US(c, a) ← PU(b)

Now, we add a new declobbering operator PD(c) to make the arm empty (AE) between US(c, a) and PU(b).

Final plan:
US(c, a) ← PD(c) ← PU(b) ← ST(b, c) ← PU(a) ← ST(a, b)

Learning Plans
In many problems, plans may share a large number of common sequences of actions. So, a planner is required
to possess the ability to recall and modify or reuse existing plans. For this purpose, we define macro
operators, which consist of sequences of operators used for performing a task. These operators, along with the
plans, are learnt and saved for future use.

For example, the operator Reverse_blocks(X, Y) (Fig. 6.31) can be seen to be a macro operator with the plan
US(X, Y), PD(X), PU(Y), ST(Y, X).
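
As a tiny illustrative sketch (the function name is an assumption, not from the text), a macro operator can simply expand into its stored sequence of primitive operators:

def reverse_blocks(x: str, y: str) -> list:
    """Expand the REVERSE macro operator into its primitive operator sequence."""
    return [f"US({x},{y})", f"PD({x})", f"PU({y})", f"ST({y},{x})"]

print(reverse_blocks("a", "b"))
# ['US(a,b)', 'PD(a)', 'PU(b)', 'ST(b,a)']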

The generalized plan schema is called MACROP and is stored in a data structure called triangle table. It helps
the planner to build new plans efficiently by using existing plans. The concept of a triangle table is discussed in
the following subsection.

Triangle Table
The triangle table provides a useful graphical mechanism which is used to show the plan evolution as well as
link the succession of operators, as shown in Table 6.14. The structure of the table is in the form of a staircase
which provides a compact summary of the plan (Schalkoff, 1990 and Padhy, 2005). To understand the triangle
table, we use the following acronyms:
A(OP) - add-list of OP
CC - copy content of the cell above
D(OP) - del-list of OP

Assume that we require the successive use of k operators OP1, OP2, ..., OPk for solving some problem. The first
cell Cell(0,0) of the triangle table contains the start state of the problem.

Each column of the table is headed by one of the operators in the plan in the order of their occurrence. The
cells below each operator contain predicates added by the operator.

The cell on the left of each operator contains predicates that are preconditions of the operator. In Table 6.13,
the preconditions of OP1 are in Cell(0,0) and its postconditions are copied into Cell(1,1). Cell(0,0) is copied into
Cell(1,0) after deleting the del-list predicates of operator OP1.

A triangle table is constructed that consists of (k+1) rows and columns. The columns are indexed by 'j' from left
to right with values 0 to k, while the rows are indexed by 'i' from top to bottom with values 0 to k. The
construction of the triangle table starts with Cell(0, 0) containing the start state predicates. The following steps
are required to construct the triangle table:
• Add the predicates from the add-list of operator OPk in Cell(k, k), for k > 0; that is, A(OPk) is added in
Cell(k, k).
• For each cell Cell(i, j), i > j, copy the contents of Cell(i-1, j) with the predicates from the del-list of
operator OPi removed.
For each operator OPi, the preconditions are the union of all the predicates in Cell(i-1, j), 0 ≤ j < i, and the
postconditions of the sequence of operators {OP1, OP2, ..., OPi} are the union of all the predicates in Cell(i, j), 0 ≤ j ≤ i.

Finally, when the table is complete, the union of the facts in the bottom row represents the goal state.
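
A minimal constructive sketch (illustrative, not STRIPS's own code) follows; operators are (name, add-list, del-list) triples, and the example grounds the REVERSE macro of Table 6.14 on blocks a and b.

def triangle_table(start, ops):
    """Build the (k+1) x (k+1) triangle table for a k-operator plan."""
    k = len(ops)
    cell = [[set() for _ in range(k + 1)] for _ in range(k + 1)]
    cell[0][0] = set(start)                       # start state predicates
    for i, (name, add, dele) in enumerate(ops, start=1):
        for j in range(i):
            cell[i][j] = cell[i - 1][j] - dele    # copy cell above, drop del-list
        cell[i][i] = set(add)                     # add-list of operator i
    return cell

# REVERSE(a, b) as the sequence US(a,b), PD(a), PU(b), ST(b,a).
ops = [
    ("US(a,b)", {"H(a)", "C(b)"}, {"O(a,b)", "C(a)", "AE"}),
    ("PD(a)",   {"T(a)", "C(a)", "AE"}, {"H(a)"}),
    ("PU(b)",   {"H(b)"}, {"T(b)", "C(b)", "AE"}),
    ("ST(b,a)", {"O(b,a)", "C(b)", "AE"}, {"H(b)", "C(a)"}),
]
cell = triangle_table({"O(a,b)", "C(a)", "T(b)", "AE"}, ops)
print(sorted(set().union(*cell[4])))   # union of the bottom row = goal state
# ['AE', 'C(b)', 'O(b,a)', 'T(a)']

The bottom-row union O(b, a) Ʌ C(b) Ʌ T(a) Ʌ AE is exactly the final state of REVERSE with X = a and Y = b.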

The advantages of using a triangle table are that it can offer sub plans to other problems and can also offer
assistance in recovering from unusual situations while generating a plan for the problem. Macro plans
(MACROPs) can be stored in triangle tables and used as STRIPS rules for creating longer plans. In addition to
learning MACROPs, constructing specific plans can also produce control strategies that will reduce planning
effort in the future.
The triangle table for the macro operator REVERSE, with the sequence of operators US(X, Y), PD(X), PU(Y), ST(Y, X),
is given in Table 6.14. Here, X and Y are variables that can be bound to actual objects at the time of use. The
start and goal states of the problem are written as
start and goal states of the problem are written as
Start State O(X, Y) Ʌ C(X) Ʌ T(Y) Ʌ AE
Final State O(Y, X) Ʌ C(Y) Ʌ T(X) Ʌ AE

Figure 6.32 shows the representation of the operation of reversing the blocks.

A plan for the macro operator REVERSE is stored in the triangle table and can be used for generating plans for
problems where reversal is required. Here, we have struck out those predicates which are no longer valid after
the operator specified in the immediately preceding row is applied.

The macro operator REVERSE(X, Y) is defined as the sequence of simple operators US(X, Y), PD(X), PU(Y), ST(Y, X),
with the pre- and postconditions of the macro operator REVERSE stored in the triangle table as explained above.

Let us consider the problem defined by the following start and goal states:

Start State O(b, a) Ʌ T(a) Ʌ T(c) Ʌ T(d) Ʌ C(b) Ʌ C(c) Ʌ C(d) Ʌ AE


Goal State O(a, b) Ʌ O(c, d) Ʌ T(b) Ʌ T(d) Ʌ C(a) Ʌ C(c) Ʌ AE

The start and goal states are shown in Fig. 6.33.

Since the start state has 'b' on 'a' and the goal requires 'a' on 'b', the macro is instantiated as REVERSE(b, a) and
is replaced by its sequence of operators US(b, a), PD(b), PU(a), ST(a, b). Thus the final plans may be written as

Plan 1: US(b, a), PD(b), PU(a), ST(a, b), PU(c), ST(c, d)

Plan 2: PU(c), ST(c, d), US(b, a), PD(b), PU(a), ST(a, b)
