UNIT 4 AI Notes
Classical Planning: Definition of Classical Planning, Algorithms for Planning with State-Space Search,
Planning Graphs, other Classical Planning Approaches, Analysis of Planning approaches.
Planning and Acting in the Real World: Time, Schedules, and Resources, Hierarchical Planning, Planning
and Acting in Nondeterministic Domains, Multi agent Planning.
Classical Planning:-
Definition of Classical Planning :
Classical planning is planning in which an agent takes advantage of the problem
structure to construct complex plans of actions. The agent plans after it knows
what the problem is; a classical planning problem is defined by the following components:
Initial state: It is the representation of each state as a conjunction of ground,
functionless atoms.
Actions: They are defined by a set of action schemas which implicitly define
the ACTIONS() and RESULT() functions.
Result: It is obtained by applying the actions chosen by the agent.
Goal: It is specified like a precondition: a conjunction of literals (each of which
may be positive or negative).
There are various examples which will make PDDL understandable:
Air cargo transport
The spare tire problem
The blocks world and many more.
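The action-schema machinery above can be sketched in code. The following is a minimal, illustrative Python model, not PDDL syntax: the `Action` class, the string-literal state representation, and the `Move(A, Table, B)` ground action are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset   # literals that must hold before the action
    add: frozenset       # literals the action makes true
    delete: frozenset    # literals the action makes false

def applicable(state, action):
    """An action is applicable when all of its preconditions hold in the state."""
    return action.precond <= state

def result(state, action):
    """RESULT(s, a): remove the delete list from s, then add the add list."""
    return (state - action.delete) | action.add

# A ground blocks-world action: move block A from the table onto block B.
move_A_onto_B = Action(
    name="Move(A, Table, B)",
    precond=frozenset({"On(A, Table)", "Clear(A)", "Clear(B)"}),
    add=frozenset({"On(A, B)"}),
    delete=frozenset({"On(A, Table)", "Clear(B)"}),
)

s0 = frozenset({"On(A, Table)", "On(B, Table)", "Clear(A)", "Clear(B)"})
s1 = result(s0, move_A_onto_B)   # all preconditions hold in s0
```

After applying the action, On(A, B) holds in the new state and On(A, Table) no longer does, exactly as the add and delete lists dictate.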
The efficiency of the search algorithm greatly depends on the size of the state space,
and it is important to choose an appropriate representation and search strategy to search
the state space efficiently.
Exhaustiveness:
State space search explores all possible states of a problem to find a solution.
Completeness:
If a solution exists, state space search will find it.
Optimality:
Depending on the algorithm and heuristics used, state space search can return an
optimal (least-cost) solution.
Uninformed and Informed Search:
State space search in artificial intelligence is classified as uninformed if no
additional information about the problem is available beyond its definition, and as
informed if heuristic information is available to guide the search.
State: A state can be the initial state, a goal state, or any other possible state
generated by applying the rules of the problem.
Space: In an AI problem, space refers to the exhaustive collection of all conceivable
states.
Search: This technique moves from the initial state toward the goal state by
applying valid rules while traversing the space of all possible states.
Search Tree: To visualize the search issue, a search tree is used, which is a tree-
like structure that represents the problem. The initial state is represented by the root
node of the search tree, which is the starting point of the tree.
Transition Model:
This describes what each action does, while the path cost assigns a cost value to each
path. Search algorithms utilize the state space to find a sequence of moves that will
transform the initial state into the goal state.
An exhaustive algorithm guarantees a solution but can become very slow for larger
state spaces. Alternatively, other algorithms, such as A* search, use heuristics to
guide the search more efficiently.
Our objective is to move from the current state to the target state by sliding the
numbered tiles through the blank space. Let's look closer at reaching the target state from
the current state.
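The tile-sliding search described above can be sketched with breadth-first search, one uninformed choice, on the 8-puzzle. States are 9-tuples with 0 as the blank; the particular start and goal boards below are illustrative assumptions.

```python
from collections import deque

def neighbors(state):
    """Yield states reachable by sliding one adjacent tile into the blank (0)."""
    s = list(state)
    i = s.index(0)                       # position of the blank
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            t = s[:]
            t[i], t[j] = t[j], t[i]      # swap blank with the adjacent tile
            yield tuple(t)

def bfs(start, goal):
    """Return the sequence of states from start to goal, or None."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []                    # reconstruct path by walking parents
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:        # skip states already reached
                parent[nxt] = state
                frontier.append(nxt)
    return None

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)      # one slide away from the goal
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
path = bfs(start, goal)
```

Because BFS explores states level by level, the path it returns uses the fewest slides; for harder boards the frontier grows quickly, which is where heuristic search such as A* pays off.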
Level S0: It is the initial level of the planning graph, consisting of nodes that
each represent a state or condition that can be true initially.
Level A0: Level A0 consists of nodes representing all specific actions whose
preconditions are satisfied by the initial conditions described in S0.
Si: It represents the states or conditions that could hold at time i; both P
and ¬P may appear at the same level.
Ai: It contains the actions that could have their preconditions satisfied at time i.
1. Extending the Planning Graph: At stage i (the current level), Graphplan takes
the planning graph from stage i-1 (the previous stage) and extends it by one time
step. This adds the next action level, representing all possible actions given the
propositions (states) in the previous level, followed by the proposition level
representing the resulting states after those actions have been performed.
2. Valid Plan Found: If Graphplan finds a valid plan, it halts the planning process.
3. Proceeding to the Next Stage: If no valid plan is found, the algorithm determines
that the goals are not all achievable by level i and moves to the next stage.
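The extend-then-check loop of steps 1-3 can be sketched as follows. This is a heavily simplified illustration: mutex constraints and the backward solution-extraction search are omitted, actions are plain (name, preconditions, effects) triples, and the toy domain at the end is an assumption for the example.

```python
def extend(props, actions):
    """Given proposition level S_i, return action level A_i and level S_{i+1}."""
    applicable = [a for a in actions if a[1] <= props]
    next_props = set(props)              # persistence (no-op) actions carry
    for _name, _pre, effects in applicable:   # every proposition forward
        next_props |= effects
    return applicable, next_props

def goals_reachable_at(initial, goal, actions, max_levels=20):
    """Extend the graph until every goal literal appears, or it levels off."""
    props = set(initial)
    for level in range(max_levels):
        if goal <= props:
            return level                 # candidate level for plan extraction
        _, new_props = extend(props, actions)
        if new_props == props:
            return None                  # leveled off: goals unreachable
        props = new_props
    return None

# Toy domain: A1 achieves q from p, A2 achieves r from q.
acts = [("A1", frozenset({"p"}), frozenset({"q"})),
        ("A2", frozenset({"q"}), frozenset({"r"}))]
```

With this domain, starting from {p}, the goal {r} first appears at proposition level S2, after one application each of A1 and A2; a goal literal that no action achieves makes the graph level off, and the function reports failure.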
Currently the most popular and effective approaches to fully automated planning are:
translating to a Boolean satisfiability (SAT) problem; forward state-space search
with carefully crafted heuristics; and search using a planning graph.
These three approaches are not the only ones tried in the 40-year history of
automated planning. In this section we first describe the translation to a
satisfiability problem and then describe three other influential approaches: planning
as first-order logical deduction; as constraint satisfaction; and as plan refinement.
Define the initial state: assert F^0 for every fluent F in the problem’s initial state,
and ¬F^0 for every fluent not mentioned in the initial state.
Propositionalize the goal: for every variable in the goal, replace the literals that
contain the variable with a disjunction over constants. For example, the goal of
having block A on another block, On(A, x) ∧ Block(x), in a world with objects A, B,
and C, would be replaced by the goal (On(A, A) ∧ Block(A)) ∨ (On(A, B) ∧ Block(B))
∨ (On(A, C) ∧ Block(C)).
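This substitution step can be sketched with naive text replacement. The function below is an illustration only: it handles a single variable and assumes the variable name does not occur inside any constant.

```python
def propositionalize(goal_lits, var, objects):
    """Ground a goal containing one variable by substituting each constant,
    yielding a disjunction (outer list) of conjunctions (inner lists)."""
    return [[lit.replace(var, obj) for lit in goal_lits] for obj in objects]

# On(A, x) ∧ Block(x) over the objects {A, B, C} from the example above.
grounded = propositionalize(["On(A, x)", "Block(x)"], "x", ["A", "B", "C"])
```

Each inner list is one disjunct of the resulting goal, e.g. the conjunction On(A, B) ∧ Block(B).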
Add successor-state axioms: For each fluent F, add an axiom of the form
F^{t+1} ⇔ ActionCausesF^t ∨ (F^t ∧ ¬ActionCausesNotF^t), where ActionCausesF is a
disjunction of all the ground actions that have F in their add list, and
ActionCausesNotF is a disjunction of all the ground actions that have F in their
delete list.
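The truth conditions of the successor-state axiom can be checked directly. The small function below is a sketch that evaluates the right-hand side of the axiom for one fluent and one time step, given the truth values of its three components.

```python
def fluent_next(f_t, action_causes_f, action_causes_not_f):
    """F^{t+1} <=> ActionCausesF^t or (F^t and not ActionCausesNotF^t)."""
    return action_causes_f or (f_t and not action_causes_not_f)
```

It captures the intended reading: F becomes true if some action adds it, and stays true if it already held and no action deletes it.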
Planning combines the two major areas of AI we have covered so far: search and
logic. A planner can be seen either as a program that searches for a solution or as one that
(constructively) proves the existence of a solution.
The cross-fertilization of ideas from the two areas has led both to improvements in
performance amounting to several orders of magnitude in the last decade and to an
increased use of planners in industrial applications. Unfortunately, we do not yet have a
clear understanding of which techniques work best on which kinds of problems.
Quite possibly, new techniques will emerge that dominate existing methods.
Planning is foremost an exercise in controlling combinatorial explosion. If there are n
propositions in a domain, then there are 2^n states.
The classical planning representation talks about what to do, and in what order, but
the representation cannot talk about time: how long an action takes and when it occurs.
This is the subject matter of scheduling. The real world also imposes many resource
constraints; for example, an airline has a limited number of staff—and staff who are on one
flight cannot be on another at the same time. This section covers methods for representing
and solving planning problems that include temporal and resource constraints.
The approach we take in this section is “plan first, schedule later”: that is, we divide
the overall problem into a planning phase in which actions are selected, with some
ordering constraints, to meet the goals of the problem, and a later scheduling phase, in
which temporal information is added to the plan to ensure that it meets resource and
deadline constraints.
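The scheduling phase can be sketched with the critical-path idea: given action durations and the ordering constraints produced by the planning phase, compute each action's earliest start time. The car-assembly-style action names and durations below are illustrative assumptions.

```python
def earliest_starts(durations, before):
    """before[a] lists the actions that must finish before a starts.
    Assumes the ordering constraints form a DAG, as a partial-order plan does."""
    es = {}
    def start(a):
        if a not in es:
            # An action may start once all of its predecessors have finished.
            es[a] = max((start(p) + durations[p] for p in before.get(a, ())),
                        default=0)
        return es[a]
    for a in durations:
        start(a)
    return es

durations = {"AddEngine": 30, "AddWheels": 30, "Inspect": 10}
before = {"Inspect": ["AddEngine", "AddWheels"]}   # inspection waits for both
schedule = earliest_starts(durations, before)
makespan = max(schedule[a] + durations[a] for a in durations)
```

Here the two 30-minute actions are unordered with respect to each other, so they can run in parallel if resources permit, and the inspection starts at time 30, giving a 40-minute schedule; a resource constraint (say, a single worker) would force a longer one.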
Hierarchical Planning:
Hierarchical planning is a methodology that entails grouping tasks and actions
into several abstraction levels or hierarchies, with higher-level tasks being broken
down into a series of lower-level tasks. It offers a method for effectively using a
hierarchy of goals and sub-goals to reason and plan in complex contexts.
In hierarchical reinforcement learning (HRL), tasks are organized into a hierarchy
of sub-goals, and the agent learns policies for achieving these sub-goals at different
levels of abstraction. By learning hierarchies of policies, HRL enables more efficient
exploration and exploitation of the environment, leading to faster learning and
improved performance.
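The break-down of higher-level tasks into lower-level ones can be sketched as recursive refinement over a method library. The travel-domain tasks and methods below are hypothetical examples, and each task is given a single method for simplicity (real hierarchical planners choose among alternatives).

```python
# Hypothetical method library: each non-primitive task maps to an
# ordered list of subtasks that refine it.
methods = {
    "Travel(Home, Office)": ["Walk(Home, BusStop)", "TakeBus(BusStop, Office)"],
    "TakeBus(BusStop, Office)": ["Board(Bus)", "Pay(Fare)", "Alight(Office)"],
}

def decompose(task):
    """Recursively refine a task until only primitive actions remain."""
    if task not in methods:
        return [task]                    # primitive: no method refines it further
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

plan = decompose("Travel(Home, Office)")
```

The high-level task expands into the four primitive actions Walk, Board, Pay, and Alight; planning at the abstract level first is what keeps the search space small.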
Continuous Planning:-
A continuous planning agent is designed to persist over a lifetime.
It can handle unexpected circumstances in the environment, even if these
occur while the agent is in the middle of constructing a plan.
These agents can be embodied in various forms, including software agents, robots,
or human-AI hybrid systems.
The components of multi-agent planning can be broadly categorized into four parts:
Agents
Environment
Communication
Collaboration
1. Centralized Planning: It is the process where a single planner constructs the
plans for all agents using global information. This approach makes coordination
simple but scales poorly and creates a single point of failure.
2. Decentralized Planning: It is the process where each agent makes its own
decisions depending on the information available locally and the limited
communication with other agents. This approach tends to be more robust
and scalable, but it is hard to coordinate properly.
3. Distributed Planning: It is a hybrid method where agents share some
information and adjust their plans in order to achieve common objectives. This
mixture tries to bring the best of the centralized and decentralized approaches,
balancing the needs for both coordination and autonomy.
Multi agent Learning: Multi-agent learning is the process by which agents improve
their performance through experience and interaction with other agents. Techniques
such as reinforcement learning let agents adapt to changing environments and
changing goals.