
Unit 4

Classical Planning: Definition of Classical Planning, Algorithms for Planning with State-Space Search,
Planning Graphs, Other Classical Planning Approaches, Analysis of Planning Approaches.
Planning and Acting in the Real World: Time, Schedules, and Resources, Hierarchical Planning, Planning
and Acting in Nondeterministic Domains, Multi-agent Planning.

Classical Planning:-
Definition of Classical Planning:
Classical planning is planning in which an agent takes advantage of the problem
structure to construct complex plans of action. The agent performs three tasks in
classical planning:

 Planning: The agent plans after knowing what the problem is.
 Acting: It decides what action it has to take.
 Learning: The actions taken by the agent make it learn new things.

A language known as the Planning Domain Definition Language (PDDL) is used to
represent all actions of a family in one action schema.
It describes the four basic things needed in a search problem:

 Initial state: It represents each state as a conjunction of ground,
functionless atoms.
 Actions: They are defined by a set of action schemas which implicitly define
the ACTIONS(s) and RESULT(s, a) functions.
 Result: It is obtained from the set of actions used by the agent.
 Goal: It is just like a precondition: a conjunction of literals (whose values are
either positive or negative).
There are various examples which make PDDL easier to understand:
 Air cargo transport
 The spare tire problem
 The blocks world, and many more.

Air cargo transport:
This problem can be illustrated with the help of the following actions:

 Load: This action is taken to load the cargo.
 Unload: This action is taken to unload the cargo when it reaches its destination.
 Fly: This action is taken to fly from one place to another.
Therefore, the air cargo transport problem is based on loading and unloading the cargo
and flying it from one place to another.
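For illustration, the Load action schema of this problem can be written down as a simple
Python structure; this is a minimal sketch, and the field names (precond, add, delete) are
our own convention, not official PDDL syntax:

    # A PDDL-style action schema for loading cargo, sketched as a Python dict.
    # Field names (precond, add, delete) are illustrative, not official PDDL.
    load_schema = {
        "name": "Load(c, p, a)",  # load cargo c into plane p at airport a
        "precond": ["At(c, a)", "At(p, a)", "Cargo(c)", "Plane(p)", "Airport(a)"],
        "add": ["In(c, p)"],      # effect: the cargo ends up inside the plane
        "delete": ["At(c, a)"],   # effect: the cargo is no longer at the airport
    }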

Algorithms for Planning with State-Space Search :

A state space is a way to mathematically represent a problem by defining all the
possible states in which the problem can be. It is used in search algorithms to represent
the initial state, goal state, and current state of the problem. Each state in the state space
is represented using a set of variables.

The efficiency of the search algorithm greatly depends on the size of the state space,
and it is important to choose an appropriate representation and search strategy to search
the state space efficiently.

Features of State Space Search:


State space search has several features that make it an effective problem-solving
technique in Artificial Intelligence. These features include:

 Exhaustiveness:
State space search explores all possible states of a problem to find a solution.
 Completeness:
If a solution exists, state space search will find it.
 Optimality:
With an appropriate search strategy (for example, uniform-cost search), state space
search can return an optimal solution.
 Uninformed and Informed Search:
State space search in artificial intelligence is classified as uninformed if it uses no
additional information about the problem beyond its definition. In contrast, informed
search uses additional information, such as heuristics, to guide the search process.

Algorithm Steps:
The steps involved in state space search are as follows (a code sketch follows the list):
 Initial state: to begin the search process, we set the current state to the initial
state.
 Check whether the current state is the goal state. If it is, we terminate the
algorithm and return the result.
 If the current state is not the goal state, generate the set of possible successor
states that can be reached from the current state.
 For each successor state: check whether it has already been visited. If it has, we
skip it; else:
 Add it to the queue of states to be visited.
 Set the next state in the queue as the current state and check whether it is the goal
state. If it is, we return the result. If not, we repeat the previous step until we find the
goal state or have explored all the states.
 If all possible states have been explored and the goal state has still not been found,
we return with no solution.
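These steps amount to a standard breadth-first search. A minimal Python sketch follows;
the function and parameter names are our own, and states are assumed to be hashable:

    from collections import deque

    def state_space_search(initial_state, is_goal, successors):
        # Breadth-first search mirroring the steps above.
        # is_goal(state) -> bool; successors(state) -> iterable of states.
        # Returns a goal state if one is reachable, otherwise None.
        frontier = deque([initial_state])   # queue of states to be visited
        visited = {initial_state}           # states already generated
        while frontier:
            state = frontier.popleft()      # next state becomes the current state
            if is_goal(state):
                return state                # goal reached: terminate with the result
            for s in successors(state):     # generate possible successor states
                if s not in visited:        # skip states that were already seen
                    visited.add(s)
                    frontier.append(s)
        return None                         # all states explored: no solution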

State Space Representation:

State space representation involves defining an INITIAL STATE and a GOAL
STATE and then determining a sequence of actions that leads from one to the other.

 State: A state can be an initial state, a goal state, or any other possible state that
can be generated by applying rules between them.
 Space: In an AI problem, space refers to the exhaustive collection of all conceivable
states.
 Search: This technique moves from the beginning state to the desired state by
applying valid rules while traversing the space of all possible states.
 Search Tree: To visualize the search problem, a search tree is used: a tree-like
structure that represents the problem. The initial state is represented by the root
node of the search tree, which is the starting point of the tree.
 Transition Model:
This describes what each action does, while Path Cost assigns a cost value to each
path: an action sequence that connects the beginning node to the end node.
The optimal solution has the lowest path cost among all alternatives.

Example of State Space Search:

The 8-puzzle problem is a commonly used example of a state space search. It is a
sliding puzzle game consisting of 8 numbered tiles arranged in a 3x3 grid and one blank
space. The game aims to rearrange the tiles from their initial state to a final goal state by
sliding them into the blank space.
To represent the state space in this problem, we use the nine tiles in the puzzle and
their respective positions in the grid.
Each state in the state space is represented by a 3x3 array with values ranging
from 1 to 8, and the blank space is represented as an empty tile. The initial state of the
puzzle represents the starting configuration of the tiles, while the goal state represents the
desired configuration.

Search algorithms utilize the state space to find a sequence of moves that will transform
the initial state into the goal state.
An uninformed algorithm such as breadth-first search guarantees a solution but can
become very slow for larger state spaces. Alternatively, other algorithms, such as A*
search, use heuristics to guide the search more efficiently.
Our objective is to move from the current state to the target state by sliding the
numbered tiles through the blank space.

To summarize, our approach involves exhaustively exploring all reachable states
from the current state and checking whether any of these states matches the target state.
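A compact Python sketch of the 8-puzzle successor function; the flat-tuple encoding and
the use of 0 for the blank are our own conventions:

    def puzzle_successors(state):
        # state: tuple of 9 ints in row-major order; 0 stands for the blank.
        b = state.index(0)                          # locate the blank
        row, col = divmod(b, 3)
        result = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r, c = row + dr, col + dc               # tile that slides into the blank
            if 0 <= r < 3 and 0 <= c < 3:
                t = r * 3 + c
                s = list(state)
                s[b], s[t] = s[t], s[b]             # swap blank and neighbouring tile
                result.append(tuple(s))
        return result

    # Example goal configuration, with the blank in the last cell:
    goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)

Plugging puzzle_successors into the search sketch above yields a complete, if slow,
8-puzzle solver.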

Planning Graphs:

What is a Planning Graph?


A Planning Graph is a data structure primarily used in automated planning and
artificial intelligence to find solutions to planning problems. It represents a planning
problem’s progression through a series of levels that describe states of the world and the
actions that can be taken. Here’s a breakdown of its main components and how it
functions:
1. Levels: A Planning graph has two alternating types of levels: action levels and
state levels. The first level is always a state level, representing the initial state of
the planning problem.
2. State Levels: These levels consist of nodes representing logical propositions or
facts about the world. Each successive state level contains all the propositions of
the previous level plus any that can be derived by the actions of the intervening
action levels.
3. Action Levels: These levels contain nodes representing actions. An action node
connects to a state level if the state contains all the preconditions necessary for
that action. Actions in turn can create new state conditions, influencing the
subsequent state level.
4. Edges: The graph has two types of edges: one connecting state nodes to action
nodes (indicating that the state meets the preconditions for the action), and another
connecting action nodes to state nodes (indicating the effects of the action).
5. Mutual Exclusion (Mutex) Relationships: At each level, certain pairs of actions or
states might be mutually exclusive, meaning they cannot coexist or occur together
due to conflicting conditions or effects. These mutex relationships are critical for
reducing the complexity of the planning problem by limiting the combinations of
actions and states that need to be considered.

Levels in Planning Graphs

 Level S0: The initial state level of the planning graph; it consists of nodes, each
representing a condition that is true initially.
 Level A0: Level A0 consists of nodes representing all the actions that are
applicable given the initial conditions described in S0.
 Si: It represents the conditions that could hold at time i; it may contain
both P and ¬P.
 Ai: It contains the actions that could have their preconditions satisfied at time i.

Working of Planning Graph


The planning graph starts with a single proposition level that contains all the initial conditions.
The planning graph runs in stages, each stage and its key workings are described below:

1. Extending the Planning Graph: At stage i (the current level), the graph plan takes
the planning graph from stage i-1 (the previous stage) and extends it by one time
step. This adds the next action level representing all possible actions given the
propositions (states) in the previous level, followed by the proposition level
representing the resulting states after actions have been performed.
2. Valid Plan Found: If the graph plan finds a valid plan, it halts the planning process.
3. Proceeding to the Next Stage: If no valid plan is found, the algorithm determines
that the goals are not all achievable in time i and moves to the next stage.
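These stages can be summarized as a loop. The following Python skeleton is only a
sketch of that loop; every helper name in it is an assumption, not an implementation:

    def graphplan(problem):
        # Skeleton of the stage-wise process above. All helpers used here
        # (initial_planning_graph, goals_non_mutex, extract_solution,
        # has_leveled_off, expand_graph) are assumed names, left unimplemented.
        graph = initial_planning_graph(problem)        # stage 0: proposition level S0
        while True:
            if goals_non_mutex(graph, problem.goals):  # goals present, pairwise non-mutex
                plan = extract_solution(graph, problem.goals)
                if plan is not None:
                    return plan                        # valid plan found: halt
            if has_leveled_off(graph):                 # graph stopped changing: no plan
                return None
            graph = expand_graph(graph, problem)       # add levels Ai and Si+1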

Mutual Exclusion in Planning Graph


Mutual exclusion in graph planning refers to the principle that certain actions or
propositions cannot coexist or occur simultaneously due to inherent constraints or
dependencies within the planning problem. Mutex relations can hold between actions and
literals under various conditions.

Mutex Conditions Between Actions:


 Inconsistent Effects: One action negates an effect of the other.
 Interference: An effect of one action deletes (negates) a precondition of the
other.
 Competing Needs: A precondition of action a and a precondition of action b are
mutually exclusive, so they cannot be true simultaneously (a code sketch of
these checks follows the lists below).

Mutex Conditions Between Literals
 Negation of Each Other: Two literals are mutually exclusive if one is the negation of
the other.
 Achieved by Mutually Exclusive Actions: No pair of non-mutex actions can make
both literals true at the same level.
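As referenced above, the action-mutex conditions can be checked mechanically. A
minimal, self-contained Python sketch; the Action representation here is our own
assumption:

    from collections import namedtuple

    # Assumed representation: an action has sets of precondition, add-effect,
    # and delete-effect propositions (strings).
    Action = namedtuple("Action", ["precond", "add", "delete"])

    def actions_mutex(a, b, mutex_props):
        # mutex_props: set of frozensets of proposition pairs that are
        # mutex at the preceding state level.
        # Inconsistent effects: one action deletes what the other adds.
        if a.delete & b.add or b.delete & a.add:
            return True
        # Interference: one action deletes a precondition of the other.
        if a.delete & b.precond or b.delete & a.precond:
            return True
        # Competing needs: some pair of preconditions is mutex one level down.
        return any(frozenset((p, q)) in mutex_props
                   for p in a.precond for q in b.precond)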

Other Classical Planning Approaches:

Currently the most popular and effective approaches to fully automated planning are:

• Translating to a Boolean satisfiability (SAT) problem
• Forward state-space search with carefully crafted heuristics
• Search using a planning graph

These three approaches are not the only ones tried in the 40-year history of
automated planning. In this section we first describe the translation to a satisfiability
problem and then describe three other influential approaches: planning as first-order
logical deduction, as constraint satisfaction, and as plan refinement.

Classical planning as Boolean satisfiability:

We saw how SATPLAN solves planning problems that are expressed in
propositional logic. Here we show how to translate a PDDL description into a form that
can be processed by SATPLAN. The translation is a series of straightforward steps:

 Propositionalize the actions: replace each action schema with a set of ground
actions formed by substituting constants for each of the variables. These ground
actions are not part of the translation, but will be used in subsequent steps.
 Define the initial state: assert F^0 for every fluent F in the problem's initial state,
and ¬F^0 for every fluent not mentioned in the initial state.
 Propositionalize the goal: for every variable in the goal, replace the literals that
contain the variable with a disjunction over constants (a code sketch follows this
list). For example, the goal of having block A on another block, On(A, x) ∧ Block(x),
in a world with objects A, B and C would be replaced by the goal
(On(A, A) ∧ Block(A)) ∨ (On(A, B) ∧ Block(B)) ∨ (On(A, C) ∧ Block(C)).
 Add successor-state axioms: for each fluent F, add an axiom of the form
F^(t+1) ⇔ ActionCausesF^t ∨ (F^t ∧ ¬ActionCausesNotF^t),
where ActionCausesF is the disjunction of all the ground actions that have F in their
add list, and ActionCausesNotF is the disjunction of all the ground actions that have
F in their delete list.
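The goal-propositionalization step can be sketched in Python. This is a toy textual
substitution for illustration only; the helper name and the string encoding ('&' and '|'
standing for ∧ and ∨) are our own:

    from itertools import product

    def propositionalize_goal(goal_literals, variables, constants):
        # Replace every goal variable with every constant, producing a
        # disjunction of ground conjunctions. Plain string substitution:
        # a toy encoding, not a real PDDL parser.
        disjuncts = []
        for binding in product(constants, repeat=len(variables)):
            ground = goal_literals
            for var, const in zip(variables, binding):
                ground = [lit.replace(var, const) for lit in ground]
            disjuncts.append(" & ".join(ground))
        return " | ".join(f"({d})" for d in disjuncts)

    # propositionalize_goal(["On(A, x)", "Block(x)"], ["x"], ["A", "B", "C"])
    # -> "(On(A, A) & Block(A)) | (On(A, B) & Block(B)) | (On(A, C) & Block(C))"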

Analysis Of Planning Approaches:

Planning combines the two major areas of AI we have covered so far: search and
logic. A planner can be seen either as a program that searches for a solution or as one that
(constructively) proves the existence of a solution.
The cross-fertilization of ideas from the two areas has led both to improvements in
performance amounting to several orders of magnitude in the last decade and to an
increased use of planners in industrial applications. Unfortunately, we do not yet have a
clear understanding of which techniques work best on which kinds of problems.

Quite possibly, new techniques will emerge that dominate existing methods.
Planning is foremost an exercise in controlling combinatorial explosion. If there are n
propositions in a domain, then there are 2^n states.

As we have seen, planning is PSPACE-hard. Against such pessimism, the
identification of independent subproblems can be a powerful weapon. In the best case
(full decomposability of the problem) we get an exponential speedup. Decomposability is
destroyed, however, by negative interactions between actions.

GRAPHPLAN records mutexes to point out where the difficult interactions are.
SATPLAN represents a similar range of mutex relations, but does so by using the general
CNF form rather than a specific data structure.
Forward search addresses the problem heuristically by trying to find patterns
(subsets of propositions) that cover the independent subproblems. Since this approach is
heuristic, it can work even when the subproblems are not completely independent.
Sometimes it is possible to solve a problem efficiently by recognizing that negative
interactions can be ruled out.
We say that a problem has serializable subgoals if there exists an order of
subgoals such that the planner can achieve them in that order without having to undo
any of the previously achieved subgoals.

Planning and Acting in the Real World:

Hierarchical structure allows human experts to communicate to the planner what they
know about how to solve the problem. Hierarchy also lends itself to efficient plan
construction because the planner can solve a problem at an abstract level before delving
into details. This section also presents agent architectures that can handle uncertain
environments and interleave deliberation with execution, and gives some examples of
real-world systems.

Time, Schedules, and Resources:

The classical planning representation talks about what to do, and in what order, but
the representation cannot talk about time: how long an action takes and when it occurs.

This is the subject matter of scheduling. The real world also imposes many resource
constraints; for example, an airline has a limited number of staff—and staff who are on one
flight cannot be on another at the same time. This section covers methods for representing
and solving planning problems that include temporal and resource constraints.

The approach we take in this section is “plan first, schedule later”: that is, we divide
the overall problem into a planning phase in which actions are selected, with some
ordering constraints, to meet the goals of the problem, and a later scheduling phase, in
which temporal information is added to the plan to ensure that it meets resource and
deadline constraints.
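For the scheduling phase, a minimal critical-path-style computation of earliest start
times can be sketched in Python; the encoding of durations and ordering constraints is
our own, and resource constraints are ignored here:

    def earliest_starts(durations, orderings):
        # durations: {action: time}; orderings: list of (before, after) pairs.
        # Assumes the ordering graph is acyclic; simple fixed-point iteration.
        es = {a: 0 for a in durations}
        changed = True
        while changed:
            changed = False
            for before, after in orderings:
                candidate = es[before] + durations[before]
                if candidate > es[after]:      # push the successor later
                    es[after] = candidate
                    changed = True
        return es

    # Example: two flights sharing a crew member must not overlap.
    # earliest_starts({"FlightA": 3, "FlightB": 3}, [("FlightA", "FlightB")])
    # -> {"FlightA": 0, "FlightB": 3}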

This approach is common in real-world manufacturing and logistical settings, where
the planning phase is often performed by human experts. Automated methods, such as
SATPLAN and partial-order planners, can also do this; search-based methods produce
totally ordered plans, but these can easily be converted to plans with minimal ordering
constraints.

Hierarchical Planning :
Hierarchical planning is a methodology that entails grouping tasks and actions
into several abstraction levels or hierarchies, with higher-level tasks being broken down
into a series of lower-level tasks. It offers a method for effectively using a hierarchy of
goals and sub-goals to reason and plan in complex contexts.

AI systems can effectively handle complicated tasks and environments because of
hierarchical planning, which enables them to make decisions at many levels of
abstraction. This approach differs from flat planning systems, which treat all tasks at the
same level of abstraction. The structured method of hierarchical planning lets AI systems
efficiently handle relationships, prioritize tasks, and distribute resources, which makes it
very useful in complicated contexts.

Components of Hierarchical Planning:


Artificial intelligence (AI) hierarchical planning usually entails the following essential
elements:
 High-Level Goals: High-level goals provide the initial direction for the planning
process and guide the decomposition of tasks into smaller sub-goals.
 Tasks: Tasks are actions that need to be performed to accomplish the high-level
goals.
 Sub-Goals: Sub-goals are intermediate objectives that contribute to the
accomplishment of higher-level goals. Sub-goals are derived from decomposing
high-level goals into smaller, more manageable tasks.
 Hierarchical Structure: Hierarchical planning organizes tasks and goals into a
hierarchical structure, where higher-level goals are decomposed into sub-goals, and
sub-goals are further decomposed until reaching primitive actions that can be directly
executed.
 Task Dependencies and Constraints: Hierarchical planning considers
dependencies and constraints between tasks and sub-goals. These dependencies
determine the order in which tasks need to be executed and any preconditions that
must be satisfied before a task can be performed.
 Plan Representation: Plans in hierarchical planning are represented as hierarchical
structures that capture the sequence of tasks and sub-goals required to achieve the
high-level goals. This representation facilitates efficient plan generation, execution,
and monitoring.
 Plan Evaluation and Optimization: Hierarchical planning involves evaluating and
optimizing plans to ensure they meet the desired criteria, such as efficiency,

feasibility, and resource utilization. This may involve iteratively refining the plan
structure or adjusting task priorities to improve performance.

Hierarchical Planning Techniques in AI:


The following hierarchical planning techniques are leveraged for organizing and
executing hierarchical plans:

1. HTN (Hierarchical Task Network) Planning


HTN planning decomposes high-level tasks into simpler sub-tasks using
hierarchical structures called task networks. HTN planning enables the
representation of complex tasks as hierarchical networks of actions and conditions,
allowing for flexible and modular planning (a small sketch follows).
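A toy sketch of HTN-style decomposition in Python; the task names and the single
method listed here are illustrative assumptions:

    # Methods map a compound task to one possible sequence of subtasks.
    methods = {"DeliverCargo": ["Load", "Fly", "Unload"]}
    primitive = {"Load", "Fly", "Unload"}   # directly executable actions

    def decompose(task):
        # Recursively refine a task into a flat sequence of primitive actions.
        if task in primitive:
            return [task]
        plan = []
        for subtask in methods[task]:
            plan.extend(decompose(subtask))
        return plan

    # decompose("DeliverCargo") -> ["Load", "Fly", "Unload"]

A real HTN planner would choose among several methods per task and check
preconditions; this sketch shows only the decomposition idea.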

2. Hierarchical Reinforcement Learning (HRL)


HRL is an extension of reinforcement learning that leverages hierarchical
structures to facilitate learning and decision-making in complex environments.

In HRL, tasks are organized into a hierarchy of sub-goals, and the agent learns
policies for achieving these sub-goals at different levels of abstraction. By learning
hierarchies of policies, HRL enables more efficient exploration and exploitation of
the environment, leading to faster learning and improved performance.

3. Hierarchical Task Networks (HTNs)


HTNs are used for representing and reasoning about hierarchical task
decomposition. HTNs consist of a set of tasks organized into a hierarchy, where
higher-level tasks are decomposed into sequences of lower-level tasks. HTNs
provide a structured framework for planning and execution, allowing for the efficient
generation of plans that satisfy complex goals and constraints.

4. Hierarchical State Space Search


Hierarchical state space search is a planning technique that involves
exploring the state space of a problem in a hierarchical manner. Instead of directly
exploring individual states, hierarchical state space search organizes states into
hierarchical structures, where higher-level states represent abstract
representations of the problem space. This hierarchical exploration allows for more
efficient search and pruning of the state space, leading to faster convergence and
improved scalability.

Planning And Acting In Non-Deterministic Domains:
So far we have considered only classical planning domains that are fully
observable, static and deterministic. Furthermore, we have assumed that the action
descriptions are correct and complete.

Agents have to deal with both incomplete and incorrect information.
Incompleteness arises because the world is partially observable, non-deterministic, or
both. Incorrectness arises because the world does not necessarily match the agent's
model of the world.
The possibility of having complete or correct knowledge depends on how much
indeterminacy there is in the world. With bounded indeterminacy, actions can have
unpredictable effects, but the possible effects can be listed in the action description
axioms.
With unbounded indeterminacy, the set of possible preconditions or effects is
either unknown or too large to be enumerated completely.
Unbounded indeterminacy is closely related to the qualification problem.

There are four planning methods for handling indeterminacy:

 Sensorless Planning
 Conditional Planning
 Execution Monitoring and Replanning
 Continuous Planning

The first two methods are suitable for bounded indeterminacy.

Sensorless Planning:-

 Also called conformant planning.
 This method constructs standard, sequential plans that are to be executed
without perception.
 The algorithm must ensure that the plan achieves the goal in all possible
circumstances, regardless of the true initial state and the actual action
outcomes.
 It relies on coercion: the idea that the world can be forced into a given state
even when the agent has only partial information about the current state.
 Coercion is not always possible.
Conditional Planning:-

 Also called contingency planning.
 This method constructs a conditional plan with different branches for the
different contingencies that could arise.
 The agent plans first and then executes the plan that was produced.
 The agent finds out which part of the plan to execute by including sensing
actions in the plan to test for the appropriate conditions (a small sketch of
such a plan follows).
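A minimal sketch of such a conditional plan in Python; the nested-tuple encoding and
the sense callback are our own assumptions:

    # A conditional plan encoded as nested structures (toy encoding): either
    # an action name (string), or a branch ("if", condition, then, else).
    plan = ("if", "DoorOpen",
            ["Enter"],                      # executed when DoorOpen is sensed true
            ["OpenDoor", "Enter"])          # executed otherwise

    def execute(step, sense):
        # sense(condition) -> bool stands in for a sensing action at run time.
        if isinstance(step, tuple) and step[0] == "if":
            _, cond, then_branch, else_branch = step
            for s in (then_branch if sense(cond) else else_branch):
                execute(s, sense)
        else:
            print("executing", step)        # stand-in for a primitive action

    # Example: execute(plan, lambda cond: False) prints OpenDoor, then Enter.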

The following planning methods are suitable for unbounded indeterminacy:

Execution Monitoring and Replanning:-

 Here, the agent can use any of the preceding planning techniques to construct
a plan.
 It also uses execution monitoring to judge whether the plan has a provision
for the actual current situation or needs to be revised.
 Replanning occurs when something goes wrong. In this way the agent can
handle unbounded indeterminacy.

Continuous Planning:-
 It is designed to persist over a lifetime.
 It can handle unexpected circumstances in the environment, even if these
occur while the agent is in the middle of constructing a plan.

Multi-agent Planning:

Multi-agent planning extends the traditional AI planning paradigm to scenarios
where multiple agents, each possessing distinct capabilities, knowledge, and objectives,
interact and collaborate towards shared or interrelated goals.

These agents can be embodied in various forms, including software agents, robots,
or human-AI hybrid systems.

Multi-agent systems (MAS) are made up of several interacting agents in an
environment. Every agent in a MAS is independent, so it can act on its own and make
decisions based on its observations and goals. The interactions among these agents can
be cooperative, competitive, or neutral, depending on the system's design and objectives.
The main objective of MAS is to deal with issues that are hard or even impossible for a
single agent to tackle because of their complexity, scale, or need for expertise.

Multi-agent Planning Components:

The components of multi-agent planning can be broadly categorized into four parts:
 Agents
 Environment
 Communication
 Collaboration

 Agents: Agents are self-governing in a multi-agent system. They use sensors to
perceive the environment and actuators to carry out actions. Agents can be
designed with internal processes, such as algorithms or learning mechanisms,
that determine how they act.

 Environment: The environment in multi-agent planning is the setting in which the
agents work. Its characteristics may change over time due to various factors.
Complexity comes from the environment's scale, connections, and unpredictability.
 Communication: One of the significant aspects of multi-agent planning is the ability
of agents to convey information and synchronize their actions through
communication. It relies on techniques such as message passing or shared
memory. Adequate communication is a prerequisite for group work,
synchronization, and conflict resolution among agents.
 Collaboration: Collaborative strategies aim to encourage interaction and joint
performance among agents. This consists of task sharing, information exchange,
conflict management, and team building. Working together extends collective
wisdom and overall system efficiency.

Multi-agent Planning System Architecture:

It includes the following parts:

 Goal Specification: Agents are grouped and coordinated around a single objective
or target on which they apply their efforts.
 Knowledge Sharing: Agents exchange important information that can be an
integral part of decision making.
 Action Coordination: Agents carry out their actions coherently, avoiding conflicts
and exploiting synergies.
 Adaptation: Plans must account for challenges and goals that change constantly,
and agents must be capable of adapting to them.

Types of Multi-agent Planning:

1. Centralized Planning: In this planning, one unit, the central controller, decides
what to do for all the agents from the whole system's state. This method makes
coordination easier, but at the same time the controller can turn into a bottleneck
and a single point of failure.

2. Decentralized Planning: This is the process where each agent makes its own
decisions depending on the information available locally and on limited
communication with other agents. This approach is more robust and scalable,
but it is hard to coordinate properly.
3. Distributed Planning: This is a hybrid method where agents share some
information and adjust their plans in order to achieve common objectives. It
mixes the advantages of the centralized and decentralized approaches, trying to
bring the best from both systems and to balance coordination and autonomy.

Multi-agent Planning Techniques:

 Distributed Problem-Solving Algorithms: The agents in these algorithms break
down complicated problems into easy-to-handle sub-tasks and then distribute
these sub-tasks among themselves.
 Game Theory: It furnishes a tool for studying the strategic relationships among
agents. It is the key to comprehending the competitive and cooperative behaviours
of agents, which helps them make the best decisions in multi-agent environments.
 Multi-agent Learning: The multi-agent learning process is based on the agents
improving their performance by means of their experience and interaction with
other agents. Methods such as reinforcement learning let agents adjust to
changing environments and changing goals.
 Communication Protocols: Structured communication protocols give agents a
clear procedure for exchanging information and synchronizing with one another.
Protocols are the norms that guarantee that messages are exchanged and
interpreted in the same way; hence they make collaboration possible.
