Module 5 Notes
Backward Chaining
Backward chaining is an inference technique used in logic programming and automated theorem proving. It is commonly used in expert systems and in logic programming languages such as Prolog, and is a form of goal-driven reasoning where the search for a solution begins from the goal and works backwards through the logic to determine what facts or conditions need to be true.
• Goal-driven: Instead of starting from known facts and applying rules forward, backward
chaining starts with the goal and attempts to find the premises (facts) that will make the goal
true.
• Recursive: It works by recursively decomposing a goal into sub-goals (i.e., finding rules
whose conclusions match the goal and then trying to satisfy the preconditions of those
rules).
1. Start with the Goal:
• The goal (or query) is the target fact that we are trying to prove. The system tries to prove the goal by searching for supporting facts or rules.
2. Find a Rule to Support the Goal:
• If the goal matches the conclusion of a rule, the system looks at the preconditions
(the body) of the rule.
• For example, if the goal is Criminal(West) and there is a rule
Criminal(x) :- Enemy(x, America), the system will attempt to prove
Enemy(West, America).
3. Break Down Sub-goals:
• Each precondition of the rule becomes a sub-goal. The system then recursively tries
to prove these sub-goals.
• For instance, to prove Enemy(West, America), the system will check for facts
or rules that support Enemy(x, y) for x = West and y = America.
4. Base Facts:
• If a sub-goal is a fact that is already known (i.e., a fact in the knowledge base), the
goal is satisfied.
• If not, the system continues searching for more rules that can help prove the sub-
goal.
5. Repeat Until Goal is Proven:
• This process continues until all the sub-goals are either known facts or can be
derived from the rules. If the goal can be proven through this process, backward
chaining succeeds. If no valid path is found, it fails.
3. Example of Backward Chaining
Consider a simple rule-based system with the following rules:
1. Criminal(x) :- Enemy(x, America) (If x is an enemy of America, then x is a
criminal.)
2. Enemy(West, America) (West is an enemy of America.)
Goal: Criminal(West)
Backward chaining matches this goal against the conclusion of rule 1, producing the sub-goal Enemy(West, America); this sub-goal matches fact 2, so the goal Criminal(West) is proven.
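The procedure above can be sketched in a few lines of Python. This is a minimal, hedged illustration: the names `prove`, `rules`, and `facts` are invented for this sketch, and rules are pre-instantiated (no variable unification), which real backward chainers such as Prolog handle via unification.

```python
# Minimal backward-chaining sketch over pre-instantiated Horn rules.
# rules maps a conclusion to a list of premise lists; facts are known atoms.
rules = {
    "Criminal(West)": [["Enemy(West, America)"]],  # rule 1, instantiated for x = West
}
facts = {"Enemy(West, America)"}  # rule 2 is a base fact

def prove(goal):
    """True if goal is a known fact, or some rule's premises all hold (recursively)."""
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("Criminal(West)"))  # True: rule 1 reduces the goal to a known fact
```

Note how the recursion mirrors steps 2–5: the goal is reduced to sub-goals (the premises), and each sub-goal is either a base fact or is decomposed further.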
1. What is Resolution?
Resolution is a refutation-based inference rule used to prove that a set of logical sentences is unsatisfiable by deriving a contradiction. To prove a goal, its negation is added to the knowledge base; if a contradiction (the empty clause) can be derived, the augmented set is inconsistent, which means the original knowledge base entails the goal.
Resolution operates on conjunctive normal form (CNF), where sentences are expressed as a
conjunction of disjunctions (i.e., a conjunction of clauses, where each clause is a disjunction of
literals). The rule of resolution is applied to pairs of clauses that contain complementary literals.
2. Resolution Rule
Given two clauses:
• Clause 1: A∨B
• Clause 2: ¬A∨C
Where A is a literal and ¬A is its negation, the resolution of these two clauses results in a new clause (the resolvent): B∨C. The complementary literals A and ¬A cancel, and the remaining literals are combined.
4. Examples of Resolution
Consider the following clauses:
• Clause 1: P∨Q
• Clause 2: ¬P∨R
Resolution would work as follows: the complementary literals P and ¬P are removed, and the remaining literals are combined into the resolvent Q∨R.
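For propositional clauses, the resolution rule is mechanical enough to sketch directly. In this hedged example, clauses are Python sets of literal strings and negation is marked with a leading `~`; the function name `resolve` is illustrative, not from any library.

```python
from itertools import product

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literal strings).
    A literal's negation is written with a leading '~'."""
    resolvents = []
    for lit1, lit2 in product(c1, c2):
        # Complementary pair found: drop both literals, union the rest.
        if lit1 == "~" + lit2 or lit2 == "~" + lit1:
            resolvents.append((c1 - {lit1}) | (c2 - {lit2}))
    return resolvents

# Clause 1: P ∨ Q, Clause 2: ¬P ∨ R  →  one resolvent, Q ∨ R
print(resolve({"P", "Q"}, {"~P", "R"}))
```

Resolving `{"P"}` with `{"~P"}` yields the empty set, i.e. the empty clause that signals a contradiction in a refutation proof.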
5. Unification in Resolution
In First-Order Logic, resolution requires unification. Unification is the process of making two
terms identical by substituting variables with terms. For example, in the resolution of the clauses:
• Clause 1: P(x)∨Q(y)
• Clause 2: ¬P(a)∨R(b)
We would unify P(x) with ¬P(a) by substituting x = a, resulting in the resolvent Q(y)∨R(b).
• Unification becomes more complex in FOL due to the need to unify predicates and terms,
such as P(x) and P(a).
• The resolution process involves unifying predicates and applying the resolution rule to
produce new clauses, which is repeated until a contradiction is found or no further resolution
is possible.
For example, consider the following set of clauses:
• ¬P(x)∨Q(x)
• P(a)
• ¬Q(a)
Resolution works as follows:
1. Resolve P(a) with ¬P(x)∨Q(x), resulting in Q(a).
2. Resolve Q(a) with ¬Q(a), resulting in the empty clause, indicating a contradiction.
Thus, the set of clauses is unsatisfiable.
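The unification step underlying this refutation can be sketched as a small recursive function. This is a simplified illustration (no occurs-check): terms are strings or tuples, variables are marked with a leading `?`, and the names `unify` and `unify_var` are invented for this sketch.

```python
def unify(x, y, s=None):
    """Unify two terms; variables are strings starting with '?'.
    Terms are strings (constants/variables) or tuples like ('P', '?x').
    Returns a substitution dict, or None on failure."""
    if s is None:
        s = {}
    if x == y:
        return s
    if isinstance(x, str) and x.startswith("?"):
        return unify_var(x, y, s)
    if isinstance(y, str) and y.startswith("?"):
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):  # unify argument lists pairwise
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def unify_var(var, t, s):
    """Bind var to t, respecting any existing binding in substitution s."""
    if var in s:
        return unify(s[var], t, s)
    s = dict(s)
    s[var] = t
    return s

# Unifying P(x) with P(a) yields the substitution {x = a}.
print(unify(("P", "?x"), ("P", "a")))  # {'?x': 'a'}
```

With this substitution in hand, the resolution step on P(a) and ¬P(x)∨Q(x) produces Q(a), exactly as in step 1 above.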
9. Applications of Resolution
• Automated Theorem Proving: Resolution is a fundamental technique in many automated
theorem proving systems.
• Logic Programming: Languages like Prolog use resolution as their primary inference
mechanism.
• AI and Knowledge Representation: Resolution is used to derive new facts from existing
knowledge in expert systems and knowledge-based systems.
Types of Algorithms:
• Forward Search: Starting from the initial state, explore the state space by applying actions
until a goal state is reached.
• Backward Search: Starting from the goal, work backward to find a sequence of actions that
lead to the initial state.
• Heuristic Search: Incorporates heuristics to guide the search process more efficiently by
evaluating the cost of reaching the goal from any given state.
Classical planning methods, while efficient for some domains, struggle in environments that are
non-deterministic, partially observable, or dynamic.
• Actions: Actions or operators are the transitions between states. Each action has:
• Preconditions: The conditions that must be true before an action can be applied.
• Effects: The changes that will occur to the world after applying the action.
• Goal: The set of conditions that need to be satisfied in the final state.
• Plan: A sequence of actions that transforms the initial state into a state that satisfies the goal.
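The vocabulary above (preconditions, effects, goal, plan) maps directly onto a simple STRIPS-style representation. This is a minimal sketch, assuming states are sets of ground facts; the names `Action`, `applicable`, and `apply` are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before the action
    add_effects: frozenset    # facts the action makes true
    del_effects: frozenset    # facts the action makes false

def applicable(state, action):
    """An action is applicable if all its preconditions hold in the state."""
    return action.preconditions <= state

def apply(state, action):
    """Applying an action removes its delete effects and adds its add effects."""
    return (state - action.del_effects) | action.add_effects

remove_spare = Action(
    "Remove(Spare, Trunk)",
    preconditions=frozenset({"At(Spare, Trunk)"}),
    add_effects=frozenset({"At(Spare, Ground)"}),
    del_effects=frozenset({"At(Spare, Trunk)"}),
)
s0 = frozenset({"At(Spare, Trunk)", "At(Flat, Axle)"})
s1 = apply(s0, remove_spare)  # spare is now on the ground, not in the trunk
```

A plan is then just a sequence of such actions whose successive application turns the initial state into one satisfying the goal conditions.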
• Advantages:
• Memory Efficiency: DFS only needs to store the current path, making it more
memory-efficient than BFS.
• Disadvantages:
• Non-Optimal: DFS is not guaranteed to find the shortest path to the goal, and it may
get stuck in deep or infinite branches.
• Completeness: DFS may fail to find a solution if the state space is infinitely deep or
contains loops.
• Advantages:
• Optimality: UCS is guaranteed to find the least costly solution, provided that the
costs are non-negative.
• Completeness: UCS is complete as long as the state space is finite and all actions
have non-negative costs.
• Disadvantages:
• Time and Space Complexity: UCS can be computationally expensive and memory-
intensive, as it needs to consider all paths in terms of their cost.
• Advantages:
• Speed: Best-First Search can be faster than BFS and UCS, especially when the
heuristic is good.
• Disadvantages:
• Non-Optimal: Since it does not take the cost of the path into account, it may not find
the least-cost solution.
• Completeness: Best-First Search is not always complete if the heuristic is poorly
designed.
E. A* Search Algorithm
• Description: A* search is an informed search algorithm that combines the advantages
of UCS and Best-First Search. It uses a heuristic function h(n) that estimates the cost to
the goal and an actual cost function g(n) that keeps track of the cost to reach the
current state. A* aims to minimize the total cost f(n)=g(n)+h(n).
• Advantages:
• Optimality and Completeness: With an admissible heuristic, A* is both optimal and complete.
• A good heuristic can dramatically reduce the search space and speed up the process of finding a solution.
• Heuristics are often domain-specific, meaning that they are tailored to particular types of planning problems.
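The f(n) = g(n) + h(n) bookkeeping described above can be sketched with a priority queue. This is a hedged, minimal implementation: `neighbors` and `h` are caller-supplied assumptions, and the toy graph at the bottom is invented for demonstration (with h = 0, A* degenerates to uniform-cost search).

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A* sketch. neighbors(n) yields (next_state, step_cost) pairs;
    h(n) estimates remaining cost to the goal. Returns (cost, path) or None."""
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # expand cheapest f first
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy weighted graph: the direct edge A→C costs 4, but A→B→C costs only 2.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(a_star("A", "C", lambda n: graph[n], lambda n: 0))  # (2, ['A', 'B', 'C'])
```

The `best_g` map is what distinguishes A* from plain Best-First Search: paths are re-opened whenever a cheaper g(n) is discovered, which is what preserves optimality.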
5. Conclusion
Algorithms for planning as state-space search are central to classical planning in AI. These
algorithms help generate plans by searching through all possible states and actions in the problem
domain. Depending on the characteristics of the problem and the computational resources available,
different search algorithms can be used. While algorithms like A* are optimal and complete, others
like DFS and Best-First Search may be more memory-efficient but less reliable in finding the
optimal solution.
Key considerations when choosing a planning algorithm include completeness, optimality, and time and space complexity.
GRAPHPLAN Algorithm
Inconsistent effects: Remove(Spare, Trunk) is mutex with LeaveOvernight because one has the effect At(Spare, Ground) and the other has its negation.
Interference: Remove(Flat, Axle) is mutex with LeaveOvernight because one has the precondition At(Flat, Axle) and the other has its negation as an effect.
Competing needs: PutOn(Spare, Axle) is mutex with Remove(Flat, Axle) because one has At(Flat, Axle) as a precondition and the other has its negation.
Inconsistent support: At(Spare, Axle) is mutex with At(Flat, Axle) in S2 because the only way of achieving At(Spare, Axle) is by PutOn(Spare, Axle), and that is mutex with the persistence action that is the only way of achieving At(Flat, Axle). Thus, the mutex relations detect the immediate conflict that arises from trying to put two objects in the same place at the same time.
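The three action-level mutex tests above can be checked mechanically. In this hedged sketch, an action is a plain dict with `pre`, `add`, and `del` literal sets, and `actions_mutex` is an invented name; a real GRAPHPLAN implementation would also track literal mutexes level by level.

```python
def actions_mutex(a, b, literal_mutex=None):
    """True if actions a and b are mutex at an action level.
    literal_mutex: optional set of frozensets of literal pairs that are
    mutex at the preceding state level (used for competing needs)."""
    # Inconsistent effects: one deletes what the other adds.
    if a["add"] & b["del"] or b["add"] & a["del"]:
        return True
    # Interference: one deletes a precondition of the other.
    if a["del"] & b["pre"] or b["del"] & a["pre"]:
        return True
    # Competing needs: some precondition of a is mutex with one of b.
    if literal_mutex:
        for p in a["pre"]:
            for q in b["pre"]:
                if frozenset({p, q}) in literal_mutex:
                    return True
    return False

remove_flat = {"pre": {"At(Flat, Axle)"}, "add": {"At(Flat, Ground)"},
               "del": {"At(Flat, Axle)"}}
leave_overnight = {"pre": set(), "add": set(),
                   "del": {"At(Flat, Axle)", "At(Spare, Trunk)", "At(Spare, Ground)"}}

# Interference: LeaveOvernight deletes Remove(Flat, Axle)'s precondition.
print(actions_mutex(remove_flat, leave_overnight))  # True
```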
1. Components of the Planning Graph
• State Levels (S0, S1, S2...):
• Represent sets of literals (facts) that describe the possible states of the world at each
level.
• Example:
• In S0, we have At(Spare, Trunk) (the spare tire is in the trunk) and
¬At(Spare, Ground) (the spare tire is not on the ground).
• Action Levels (A0, A1...):
• Contain all possible actions that can be applied given the current state.
• Actions are determined based on their preconditions (must match the current state)
and produce effects (which are added to the next state level).
• Example:
• In A0, actions include Remove(Spare, Trunk) and Remove(Flat,
Axle).
• Mutex Links (Mutual Exclusion):
• Represent conflicts between states or actions. If two states or actions are mutually
exclusive, they cannot co-occur in the same plan.
• In the graph, these are shown as crossed lines or gray links.
• S0 (Initial State Level):
• Represents the initial conditions of the problem, such as where the spare tire is (At(Spare, Trunk)) or the flat tire's location.
• A0 (First Action Level):
• Contains the actions whose preconditions hold in S0, such as Remove(Spare, Trunk) and Remove(Flat, Axle).
3. Mutex Relationships
• Mutex Between Actions:
• Two actions are mutex if they have inconsistent effects, interfere with each other, or have competing needs, as in the spare-tire examples above.
• Mutex Between States (Literals):
• Two literals are mutually exclusive if no valid sequence of actions can lead to both being true at the same time.
• Graph Expansion and Solution Extraction:
• The graph alternates between action and state levels, growing until a solution is found or the graph "levels off" (no new states or actions can be added).
• A solution plan can be extracted when all goals are satisfied in a state level with no mutual exclusions.
• Use of Preconditions and Effects:
• Actions are only added if their preconditions are satisfied in the preceding state level.
• Effects of actions are used to build the next state level.
Each action depends on the state of the world and produces new states, which are shown in the
subsequent levels.
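The level-by-level expansion just described can be sketched as follows. This is a deliberately simplified illustration: mutex bookkeeping is omitted, actions are plain dicts with `pre` and `add` sets, and `expand_level` is an invented name.

```python
def expand_level(literals, actions):
    """Expand one GRAPHPLAN level: literals is the set of ground literals
    in state level S_i; actions is a list of dicts with 'pre' and 'add'.
    Returns (applicable_actions, next_literals) for A_i and S_{i+1}."""
    applicable = [a for a in actions if a["pre"] <= literals]
    next_literals = set(literals)  # persistence (no-op) actions carry literals forward
    for a in applicable:
        next_literals |= a["add"]
    return applicable, next_literals

s0 = {"At(Spare, Trunk)", "At(Flat, Axle)"}
acts = [
    {"name": "Remove(Spare, Trunk)", "pre": {"At(Spare, Trunk)"},
     "add": {"At(Spare, Ground)"}},
    {"name": "Remove(Flat, Axle)", "pre": {"At(Flat, Axle)"},
     "add": {"At(Flat, Ground)"}},
]
a0, s1 = expand_level(s0, acts)  # A0 contains both actions; S1 gains their effects
```

Because persistence actions copy every literal forward, `s1` contains all of `s0` plus the new effects, which is exactly why literals increase monotonically as the graph grows.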
Termination of GRAPHPLAN
The properties are as follows:
• Literals increase monotonically: Once a literal appears at a given level, it will appear
at all subsequent levels. This is because of the persistence actions; once a literal shows
up, persistence actions cause it to stay forever.
• Actions increase monotonically: Once an action appears at a given level, it will appear
at all subsequent levels. This is a consequence of the monotonic increase of literals; if
the preconditions of an action appear at one level, they will appear at subsequent levels,
and thus so will the action.
• Mutexes decrease monotonically: If two actions are mutex at a given level Ai, then they will also be mutex for all previous levels at which they both appear. The same holds for mutexes between literals. It might not always appear that way in the figures, because the figures have a simplification: they display neither literals that cannot hold at level Si nor actions that cannot be executed at level Ai. We can see that "mutexes decrease monotonically" is true if you consider that these invisible literals and actions are mutex with everything.
The proof can be handled by cases: if actions A and B are mutex at level Ai, it must be because of one of the three types of mutex. The first two, inconsistent effects and interference, are properties of the actions themselves, so if the actions are mutex at Ai, they will be mutex at every level. The third case, competing needs, depends on conditions at level Si: that level must contain a precondition of A that is mutex with a precondition of B. Now, these two preconditions can be mutex if they are negations of each other (in which case they would be mutex in every level) or if all actions for achieving one are mutex with all actions for achieving the other. But we already know that the available actions are increasing monotonically, so, by induction, the mutexes must be decreasing.
• No-goods decrease monotonically: If a set of goals is not achievable at a given level,
then they are not achievable in any previous level. The proof is by contradiction: if they
were achievable at some previous level, then we could just add persistence actions to
make them achievable at a subsequent level.