
Artificial Intelligence model question paper solution

Module -1

Q.01 a Define Artificial Intelligence. Explain the Turing Test approach with an example

Artificial Intelligence (AI)


AI is the field of computer science focused on creating machines or software that can perform tasks that typically require human
intelligence. These tasks include learning from experience, making decisions, understanding language, recognizing objects, and solving
problems.
Turing Test
The Turing Test, proposed by Alan Turing in 1950, is used to measure a machine's ability to exhibit intelligent behavior. The test involves
a human interrogator who converses with both a machine and a human, without knowing which is which. If the interrogator cannot
distinguish between the human and the machine based on their responses, the machine is said to have passed the Turing Test.

The computer would need to possess the following capabilities:

1. Natural Language Processing (NLP): The machine must understand and generate human language.
2. Knowledge Representation: The machine should store and recall information.
3. Automated Reasoning: The machine must make decisions and draw conclusions from its knowledge.
4. Machine Learning: The machine should adapt and learn from new experiences.

Total Turing Test (Extended)


The Total Turing Test adds the ability for the machine to:
• Recognize and interpret visual information (Computer Vision).
• Manipulate objects in the environment (Robotics).

Example of the Turing Test


Imagine you're having a text chat with two people. One of them is a human, and the other is a machine (AI). Your job is to figure out
which one is the machine, just by chatting.
Example Chat:
• You (Interrogator): "What is your favorite hobby?"
• Human: "I love playing guitar, especially in a band."
• AI: "I enjoy learning new things, like reading and solving puzzles."
• You: "What did you have for lunch?"
• Human: "I had a sandwich and some fruit."
• AI: "I had a salad with some chicken."
In this case, the AI gives answers that sound like something a human would say. If you can't tell who is the AI and who is the human just
by the conversation, the AI has passed the Turing Test.

Q.01 b Explain the concept of Rationality in detail

Rationality in AI
Rationality in artificial intelligence (AI) refers to the behavior of an agent that consistently makes the best possible decisions to achieve its
goals. A rational agent always aims to maximize its performance measure based on the information it has about the environment and its
own actions.

Definition of a Rational Agent


A rational agent is one that selects actions that are expected to maximize its performance measure, given its percept sequence and built-
in knowledge of the environment. It does what is "best" according to the information it has at each step.
Four Factors Defining Rationality
Rationality depends on the following four key factors:
• Performance Measure: This defines what success looks like. For example, in the case of a vacuum-cleaning robot, the performance
measure could be the number of clean squares in the environment.
• Knowledge of the Environment: The agent’s understanding of its surroundings plays a critical role in making decisions. For instance,
a robot that knows where dirt is located can clean more efficiently.
• Available Actions: Rationality also depends on the actions that the agent can take. In the case of the vacuum agent, these actions
could include moving, cleaning, or stopping.
• Percept Sequence: The agent uses its sensors to gather information about the environment, which guides its decision-making
process.

Example of a Rational Agent


Consider a simple vacuum-cleaner robot. The robot performs the following steps:
• If a square is dirty, the robot cleans it.
• If the square is clean, the robot moves to the next square.
This behavior is rational because the robot maximizes its performance measure by keeping the environment as clean as possible, given
the available actions (cleaning and moving) and the information it gathers (whether a square is clean or dirty).

Ideal vs. Bounded Rationality


• Ideal Rationality: In an ideal world, a rational agent would always make the perfect decision using all available knowledge and
resources.
• Bounded Rationality: In practice, agents are limited by computational constraints and incomplete information, so they make the
best decision they can under these limitations. This means they aim for the "best" solution, but not necessarily the perfect one.

Q.02 a Explain various types of Environment in Artificial Intelligence.


b Define Agent. Explain any three types of Agents in Artificial Intelligence

Definition of Agent:
An Agent in Artificial Intelligence (AI) is an entity that perceives its environment through sensors and acts
upon it through actuators to achieve specific goals. It can be a physical robot, a software program, or
even a virtual assistant. The behavior of an agent is determined by its programming and its interactions
with the environment.

Types of Agents in Artificial Intelligence:


1. Simple Reflex Agent:
○ A Simple Reflex Agent is the most basic type of agent. It selects actions based on the current
percept, ignoring the history of past percepts.
○ It operates using a set of condition-action rules (also called if-then rules) that directly map
percepts to actions.
○ For example, in the vacuum world, if the agent perceives dirt, it performs the action of
cleaning (Suck). If it perceives a clean environment, it may move in a random direction.
Example: A thermostat that switches on the heating system when the temperature goes
below a set threshold.
Characteristics:
○ No memory of past actions or percepts.
○ Works well in fully observable and static environments.
○ Can face limitations in partially observable environments (like infinite loops).
(A minimal code sketch of this condition-action mapping appears after this list.)
2. Model-Based Reflex Agent:
○ Model-Based Reflex Agents extend the idea of simple reflex agents by maintaining an
internal model (or state) of the environment.
○ They use this internal state, which is updated based on their actions and new percepts, to
make decisions.
○ These agents can handle partial observability because they store information about the
environment they can’t see directly.
○ The agent maintains internal states that help it track what it has observed and allows it to
make better decisions.
Example: A robot vacuum cleaner that remembers its previous movements and dirt locations
to avoid getting stuck and to clean the entire area.
Characteristics:
○ Uses memory to improve decision-making.
○ Better suited for partially observable environments.
○ Maintains an internal state of the environment.

3. Goal-Based Agent:
○ A Goal-Based Agent selects actions based on the goal it is trying to achieve, in addition to
considering the current environment's state.
○ It uses a model of the world and plans ahead to decide on actions that will lead to the
desired goal.
○ Goal-based agents are capable of reasoning about the future and making decisions that will
help them reach the goal in the most efficient way.
○ These agents can plan and explore multiple options, considering long-term consequences.
Example: An autonomous car navigating through traffic to reach a destination while avoiding
obstacles and following traffic rules.
Characteristics:
○ Has a specific goal and works towards achieving it.
○ Can consider multiple future states and plan ahead.
○ Requires search algorithms and planning techniques.
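
To make the condition-action idea of the simple reflex agent concrete, here is a minimal Python
sketch for the two-square vacuum world. The percept format and action names are assumptions
made for illustration, not a prescribed implementation.

# A simple reflex agent maps the current percept directly to an action
# through condition-action rules; it keeps no memory of past percepts.
def simple_reflex_vacuum_agent(percept):
    location, status = percept      # e.g., ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'               # rule: dirty square -> clean it
    elif location == 'A':
        return 'Right'              # rule: clean at A -> move right
    else:
        return 'Left'               # rule: clean at B -> move left

# Example: simple_reflex_vacuum_agent(('A', 'Dirty')) returns 'Suck'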

Module -2

Q. 03 a Explain problem solving agent in detail with suitable example.

A problem-solving agent is an intelligent agent that attempts to solve a problem by formulating
the problem, searching through a space of possible solutions, and executing the actions that
lead to the goal. The agent follows a structured process to solve the problem, which typically
includes:
1. Formulating a goal: Defining what needs to be achieved.
2. Problem formulation: Describing the problem in terms of initial state, actions, state
space, and goal test.
3. Search: Searching through the state space to find a solution.
4. Execution: Carrying out the action sequence to reach the goal.
The Vacuum Cleaner Problem
Imagine a simple scenario where a vacuum cleaner agent is tasked with cleaning a two-room
house. Each room can either be clean or dirty, and the vacuum cleaner can move between the
rooms and clean them.
Problem Setup:
• The vacuum cleaner has the goal of cleaning both rooms.
• The agent has a basic set of actions to perform, such as moving between rooms and
cleaning the current room.

1. States:
Each state is defined by:
• Agent’s location: It can either be in the Left or Right square.
• Dirt status: Each square can be either dirty or clean.
For 2 locations, there are 2 × 2 × 2 = 8 possible states (agent position × dirt status of each square).
2. Initial State:
Any of the 8 states can be designated as the starting state. For example:
○ Agent starts at Left, and both squares are dirty.

3. Actions:
The vacuum cleaner can perform one of three actions:
○ Left: Move to the left square (if not already there).
○ Right: Move to the right square (if not already there).
○ Suck: Clean the current square.

4. Transition Model:
○ Left: Moves the agent to the left square unless it's already at the leftmost position.
○ Right: Moves the agent to the right square unless it's already at the rightmost position.
○ Suck: Cleans the current square. If the square is already clean, no effect occurs.

5. Goal Test:
The goal is achieved when both squares are clean, regardless of the agent's location.

6. Path Cost:
Each action (Left, Right, or Suck) costs 1. The total path cost is the number of steps taken to
achieve the goal.

How the Problem-Solving Agent Works


1. Formulate Goal:
○ The agent’s goal is to clean both squares.
2. Formulate Problem:
○ The agent starts at a specific state and considers actions that will lead to a state where
both squares are clean.
3. Search for a Solution:
○ The agent tries a sequence of actions like:
▪ Suck (clean the current square).
▪ Move to the other square (Left or Right).
▪ Suck again.
4. Execute Solution:
○ The agent follows the solution step by step to clean both squares.
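
The four steps above can be sketched in Python. The following is a minimal, assumed encoding of
the 8-state vacuum world that uses breadth-first search for the "search for a solution" step; the
state layout and action names are illustrative.

from collections import deque

# A state is (agent_location, left_dirty, right_dirty); the goal is both squares clean.
def successors(state):
    loc, dl, dr = state
    yield ('Left',  ('L', dl, dr))       # move left (no-op if already there)
    yield ('Right', ('R', dl, dr))       # move right (no-op if already there)
    yield ('Suck',  ('L', False, dr) if loc == 'L' else ('R', dl, False))

def is_goal(state):
    _, dl, dr = state
    return not dl and not dr

def solve(initial):
    frontier = deque([(initial, [])])    # FIFO queue of (state, plan)
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))

# solve(('L', True, True)) returns ['Suck', 'Right', 'Suck']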

Advantages of a Problem-Solving Agent


• Works systematically to achieve the goal.
• Can solve complex problems using structured methods.
• Efficient in well-defined environments.
Limitations
• Requires a clearly defined goal.
• May fail in dynamic or partially observable environments.

b Discuss how BFS and IDDFS work in Artificial Intelligence with examples

Breadth-First Search (BFS) and Iterative Deepening Depth-First Search (IDDFS) are two widely
used search algorithms in Artificial Intelligence (AI) for solving search problems in graphs or
trees. They differ in their approach to traversing nodes and handling memory constraints.

1. Breadth-First Search (BFS)


How It Works:
• BFS is a complete and uninformed search algorithm that explores all nodes at the current
depth level before moving to the next level.
• It uses a queue (FIFO) to keep track of nodes to be explored.
Steps of BFS:
1. Start at the root node and enqueue it.
2. Dequeue the front node from the queue, explore it, and enqueue its unvisited child
nodes.
3. Repeat until the goal is found or all nodes are explored.
Properties:
• Completeness: Guarantees finding a solution if one exists.
• Optimality: Provides the shortest path if all step costs are equal.
• Time Complexity: O(b^d), where b is the branching factor and d is the depth of the
shallowest solution.
• Space Complexity: O(b^d), as all nodes at a given level must be stored.
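
A minimal BFS sketch in Python, assuming the graph is given as an adjacency list (the graph and
node names below are illustrative):

from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths from the start
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # shallowest path to the goal
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
# bfs(graph, 'A', 'E') returns ['A', 'C', 'E']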

2. Iterative Deepening Depth-First Search (IDDFS)


How It Works:
• IDDFS combines the space efficiency of Depth-First Search (DFS) with the completeness of BFS.
• It performs DFS with a depth limit, starting at 0 and incrementally increasing the limit until the goal is found.
Steps of IDDFS:
1. Start with a depth limit of 0 and perform DFS up to that depth.
2. If the goal is not found, increment the depth limit by 1 and repeat the search.
3. Continue until the goal is reached.
Properties:
• Completeness: Guarantees finding a solution if one exists.
• Optimality: Provides the shortest path for uniform step costs.
• Time Complexity: O(b^d), similar to BFS.
• Space Complexity: O(bd) due to the depth-limited recursion stack.
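
A minimal IDDFS sketch under the same assumed adjacency-list representation as the BFS
example (cycle checking is omitted for brevity, so it assumes a tree or DAG):

def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff reached
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iddfs(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # limit = 0, 1, 2, ...
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
# iddfs(graph, 'A', 'E') returns ['A', 'C', 'E']
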
Q.04 a Explain how PSA solves 8-Puzzle Problem in detail

The 8-puzzle consists of a 3×3 grid with 8 numbered tiles and 1 blank space. The objective is to
rearrange the tiles into the goal configuration by sliding the blank into adjacent positions.

Formal Problem Formulation


1. States:
• A state is a description of the board's configuration, specifying the position of all 8 tiles and the
blank.
2. Initial State:
• Any valid configuration can be designated as the starting state.
3. Actions:
• The possible actions involve sliding the blank tile Up, Down, Left, or Right, depending on its
position. For example:
○ If the blank is in the middle, all four actions are valid.
○ If it is in the top-left corner, only Down and Right are valid.
4. Transition Model:
• Given a state and an action, the transition model determines the resulting state. For example:
○ If the blank is swapped with the tile numbered 5, the state changes accordingly.
5. Goal Test:
• The goal test checks whether the current state matches the specified goal configuration (e.g., tiles
in numerical order, with the blank at the bottom-right).
6. Path Cost:
• Each step (move) has a cost of 1. Therefore, the path cost is the total number of moves made to
reach the goal state.
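
The formulation above can be sketched in Python. The encoding below (a 9-tuple read row by
row, with 0 standing for the blank) is one common but assumed representation:

MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}   # index offsets on the 3x3 grid

def actions(state):
    i = state.index(0)                  # position of the blank (0..8)
    valid = []
    if i >= 3:     valid.append('Up')
    if i <= 5:     valid.append('Down')
    if i % 3 != 0: valid.append('Left')
    if i % 3 != 2: valid.append('Right')
    return valid

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]               # square the blank slides into
    s = list(state)
    s[i], s[j] = s[j], s[i]             # swap the blank with the adjacent tile
    return tuple(s)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal: tiles in order, blank at bottom-right
def goal_test(state):
    return state == GOAL

# actions((0, 1, 2, 3, 4, 5, 6, 7, 8)) returns ['Down', 'Right'] (blank in the top-left corner)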

A Problem-Solving Agent (PSA) operates in the following steps:


1. Goal Formulation:
The agent identifies the desired final state (goal configuration).
2. Problem Formulation:
The agent defines the problem using the components described above (states, actions, goal test,
etc.).
3. Search for a Solution:
The agent uses a search algorithm to find a sequence of actions that leads to the goal.
4. Execution:
The agent executes the action sequence step by step until the goal is reached.
b Explain how DFS and Uniform Cost Search algorithms work in AI with suitable examples

DFS
Depth-First Search (DFS) is a popular and fundamental search algorithm used in Artificial Intelligence (AI)
to explore graphs or trees. The core idea behind DFS is to explore as deep as possible down a branch
before backtracking. This makes it a LIFO (Last In, First Out) approach, meaning that the most recently
explored node is expanded first.
DFS is commonly used in problems like pathfinding, solving puzzles (e.g., 8-puzzle), and analyzing graphs.
Working of DFS:
1. Start at the Root Node: DFS begins the search at the root node (or the initial state of the
problem). From there, it explores the child nodes.
2. Expand Deeply: Starting with the root node, DFS explores down one branch of the tree, moving to
the deepest level.
3. Backtrack if No Further Nodes: If DFS reaches a node with no unexpanded child nodes, it
backtracks to the most recent node that still has unvisited children.
4. Repeat: This process continues until a goal node is found or all paths are exhausted.
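
A minimal recursive DFS sketch in Python, assuming an adjacency-list graph; the visited set
prevents revisiting nodes on graphs with cycles (names are illustrative):

def dfs(graph, node, goal, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for child in graph.get(node, []):
        if child not in visited:
            result = dfs(graph, child, goal, visited)
            if result is not None:
                return [node] + result
    return None                          # dead end: backtrack

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
# dfs(graph, 'A', 'F') explores A, B, D, E deeply before backtracking to C;
# it returns ['A', 'C', 'F']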

Key Properties of DFS:


• Space Efficiency: DFS only needs to store the current path from the root to the current node,
making it memory-efficient. It requires space proportional to the maximum depth of the tree.
• Non-Optimal: DFS does not guarantee the shortest path to the goal. It may explore unpromising
paths before reaching the goal.
• Non-Complete: DFS may not find a solution if the state space is infinite or if it explores paths that
never lead to a solution.
• Can Get Stuck: In the presence of cycles or infinite paths, DFS may revisit nodes and get stuck in
loops.

Advantages of DFS:
1. Memory Efficient: DFS requires only O(m) space (where m is the maximum depth of the tree) as it
needs to store only the current path.
2. Simple Implementation: DFS is straightforward to implement using recursion or a stack.
3. Good for Deep Searches: If the goal lies deep in the tree, DFS will eventually find it without
exploring shallow nodes first.

Disadvantages of DFS:
1. Non-Optimal: DFS does not guarantee the shortest path to the goal. It may explore long paths
before finding a shorter one.
2. Non-Complete: DFS is not guaranteed to find a solution if the search space is infinite or if there
are cycles.
3. Can Get Stuck: In cases with cycles (e.g., infinite loops), DFS can get stuck, failing to find the goal.

Applications of DFS:
• Puzzle Solving: DFS can be used to solve puzzles like the 8-puzzle or the 15-puzzle.
• Game Searching: In games, DFS can explore all possible moves from a given state.
• Web Crawling: DFS can be used to explore web pages or websites by following links.

Uniform-Cost Search (UCS)


Uniform-Cost Search (UCS) is a type of algorithm used to find the shortest or least expensive
path to a goal. It is particularly useful when each step or action in a problem has a different
cost, and the goal is to find the least costly way to reach the goal.
How UCS Works:
1. Priority Queue: UCS keeps a list of nodes (states) it needs to explore, and it orders these
nodes by their total cost to reach them. The node with the lowest cost is explored first.
2. Exploring Nodes: UCS explores the node with the least cost first. It checks if this node is
the goal. If it is, the algorithm stops. If not, it looks at all possible actions (or moves) from
that node and calculates the cost to reach those new nodes.
3. Optimal Path: UCS always finds the best (least expensive) path to the goal. It will continue
checking all possible paths, but it will always choose the least costly one.

How UCS Solves Problems:


1. Start at the Initial Node: The algorithm begins at the starting node and gives it a cost of 0.
2. Check Neighbors: UCS looks at all possible next moves (neighbors) and calculates the cost
to reach those neighbors.
3. Keep Expanding the Cheapest Path: UCS keeps exploring the neighbor that has the lowest
cost, and it repeats this process until it finds the goal.
4. Goal Test: Once UCS finds the goal node, it stops and provides the path that costs the
least.
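
A minimal UCS sketch in Python using a priority queue (heapq); the weighted graph below is
illustrative:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]     # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # cheapest path to the goal
        if node in explored:
            continue
        explored.add(node)
        for child, step_cost in graph.get(node, []):
            if child not in explored:
                heapq.heappush(frontier, (cost + step_cost, child, path + [child]))

graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1), ('D', 7)], 'C': [('D', 2)], 'D': []}
# uniform_cost_search(graph, 'A', 'D') returns (4, ['A', 'B', 'C', 'D']):
# even though A -> B -> D uses fewer steps, 1 + 1 + 2 is the cheapest total cost.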

Properties:
• Optimal: UCS always finds the cheapest path because it always picks the node with the
lowest total cost.
• Complete: It will always find a solution if one exists, as long as the costs of actions are
positive.
Disadvantages of UCS:
• Memory Usage: UCS can use a lot of memory because it needs to store all the nodes it
explores.
• Slower in Some Cases: If there are many low-cost steps, UCS may have to explore many
paths before it finds the best one.
Applications:
• GPS Navigation: Finding the shortest route from one place to another.
• Route Planning: Used in many systems where we need to find the least expensive way to
get from point A to point B.

Module -3

Q. 05 a Define Heuristic Function. Explain A*Search Algorithm for minimizing the total
estimated solution cost.
b With a neat diagram explain Knowledge based Agent.

A Knowledge-Based Agent is an intelligent system that makes decisions based on the
information it has about the world. It works by storing and processing knowledge, then using
that knowledge to take actions in its environment.
Key Components of a Knowledge-Based Agent:
1. Knowledge Base (KB):
○ This is the core component of the agent. It stores all the knowledge (facts, rules, and
beliefs) that the agent has about the world.
○ The knowledge is represented in a formal language, allowing the agent to reason
and infer new facts.
2. TELL:
○ The agent uses the TELL function to add new knowledge to its knowledge base. For
example, if the agent learns something new from its environment, it updates the KB
by using TELL.
3. ASK:
○ The agent uses the ASK function to query the knowledge base. It asks questions to
the KB to find out what action it should take or to get more information.
4. Inference:
○ Inference is the process of deriving new knowledge from existing knowledge. It
allows the agent to reason and make decisions based on the information it has
stored in the KB.
○ When the agent asks a question (ASK), the answer is derived from the knowledge
stored in the KB.
5. Percepts and Actions:
○ The agent interacts with the world by receiving percepts (data or observations from
the environment) and taking actions (doing things in the environment).
○ It continuously updates its knowledge base with the TELL function about what it
perceives, and it asks the KB (using ASK) what action it should perform based on the
current knowledge.
6. Declarative vs. Procedural Approaches:
○ Declarative Approach: The agent's behavior is specified by telling it what to know.
This approach focuses on what the agent knows, not how it behaves.
○ Procedural Approach: The behavior is encoded directly in the program code,
specifying how the agent behaves.
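
The TELL/ASK cycle can be sketched as follows. This is a schematic outline with assumed names;
a real knowledge base would use a formal language and an inference procedure instead of the
placeholder logic shown here.

class SimpleKB:
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)      # add new knowledge to the KB
    def ask(self, query):
        return 'Forward'                     # placeholder: real inference goes here

def make_percept_sentence(percept, t):
    return ('Percept', percept, t)

def make_action_query(t):
    return ('ActionQuery', t)

def make_action_sentence(action, t):
    return ('Action', action, t)

def kb_agent(kb, percept, t):
    kb.tell(make_percept_sentence(percept, t))   # TELL: record what was perceived at time t
    action = kb.ask(make_action_query(t))        # ASK: what action should be taken now?
    kb.tell(make_action_sentence(action, t))     # TELL: record the action that was chosen
    return action
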
Example:
Imagine a self-driving car:
• Knowledge Base: The KB stores facts like "The car is at point A," "The traffic light is red,"
"The destination is B," etc.
• TELL: The car receives sensory data (like traffic light color, nearby vehicles) and updates its
KB.
• ASK: The car might ask, "What is the best route to the destination?" or "Should I stop at
the light?"
• Inference: The agent uses the knowledge in the KB to infer that it should stop at the red
light or that the fastest route to the destination is via highway X.

Q. 06 a Explain the Wumpus world problem in detail.

Wumpus World Overview


• A 4x4 grid represents rooms connected by passageways.
• The agent starts at [1,1], facing right.
• The environment has:
○ A Wumpus (dangerous creature).
○ Pits (falling into them is deadly).
○ Gold (the goal for the agent).
Performance Measure
• +1000 for grabbing the gold and exiting the cave.
• -1000 for falling into a pit or being eaten by the Wumpus.
• -1 for each action.
• -10 for using the arrow to kill the Wumpus.
Environment
• The agent starts at [1,1].
• Gold and Wumpus are randomly placed.
• Pits have a 20% chance of being in any room (except the start room).
• Stench is sensed near the Wumpus.
• Breeze is sensed near pits.
• The agent can shoot an arrow to kill the Wumpus, but it only has one arrow.
Actuators (Actions the agent can take)
1. Turn Left (90°), Turn Right (90°).
2. Move Forward.
3. Grab: Pick up the gold.
4. Shoot: Use the arrow to kill the Wumpus.
5. Climb: Exit the cave with the gold.
Sensors (What the agent can sense)
• Stench: When the Wumpus is near.
• Breeze: When a pit is nearby.
• Glitter: When the agent is in the same room as the gold.
• Bump: If the agent runs into a wall.
• Scream: When the Wumpus is killed.
Characteristics of the Wumpus World
• Observable? No. The agent has local perception (it can only sense adjacent squares).
• Deterministic? Yes. Actions have predictable outcomes.
• Episodic? No. The agent’s actions depend on the history of its actions.
• Static? Yes. The Wumpus and pits don't move.
• Discrete? Yes. The environment consists of a finite number of states.
• Single-Agent? Yes. The Wumpus is not an agent but a part of the environment.
Goal
• Find the gold and exit the cave while avoiding the Wumpus and pits.
b Explain Propositional Logic syntax and semantics with suitable example.

Atomic Sentences
• Atomic sentences are the simplest sentences in propositional logic. They are just proposition
symbols, which can be either true or false.
• These symbols represent facts or statements. For example:
○ P, Q, R, or something more descriptive like North or W1,3 (which could mean "the wumpus
is in room [1,3]").
• Two special symbols are used in logic:
○ True: Always true.
○ False: Always false.
Complex Sentences
• Complex sentences are made by combining atomic sentences using logical connectives
(operators).
• The five standard connectives are: ¬ (negation), ∧ (conjunction), ∨ (disjunction),
⇒ (implication), and ⇔ (biconditional).
• These connectives allow us to build more complicated logical statements from simple ones. For
example, B1,1 ⇔ (P1,2 ∨ P2,1) says that square [1,1] is breezy if and only if a neighboring square
contains a pit.
• Semantics: a sentence's meaning is its truth value in a given model; the truth value of a complex
sentence is computed from the truth values of its atomic symbols using the connectives.
Module -4

Q. 07 a Explain syntax and semantics of First order Logic with suitable Examples

First-Order Logic (FOL)


First-Order Logic is an expressive logic used to represent commonsense knowledge and relations in the world.
Key Features:
1. Foundation: Builds on propositional logic and adds objects, relations, and functions.
2. Alternative Name: Also called First-Order Predicate Calculus (FOPC).
3. Purpose: Models facts about objects and their relationships in the world.

Basic Concepts in FOL


1. Objects: Things in the domain, e.g., people, houses, numbers.
2. Relations: Properties or connections between objects:
○ Unary: e.g., Red(x) (x is red).
○ Binary: e.g., Brother(x, y) (x is the brother of y).
○ n-ary: e.g., BiggerThan(x, y, z) (x is bigger than y and z).
3. Functions: Map one object to another, e.g., Father(John) refers to John’s father.
4. Ontology (Ontological Commitment): Defines "what exists" in the world (facts, objects, relations).
5. Epistemology (Epistemological Commitment): Deals with "what an agent believes" (truth, falsehood, or
uncertainty).
6. Quantifiers: ∀ (universal, "for all") and ∃ (existential, "there exists") let sentences range over
objects, e.g., ∀x King(x) ⇒ Person(x) ("every king is a person").
b Explain the steps involved in Knowledge Engineering with Example
Q. 08 a Explain forward chaining with example
b Explain Unification in Artificial Intelligence with Example

Unification is the process of finding a substitution that makes two logical expressions identical.
A substitution maps variables in one expression to constants, variables, or functions in another
expression.

Key Concepts in Unification


1. Substitution:
○ A mapping of variables to terms.
○ Example: {x/John} means the variable x is replaced by the constant John.
2. Unifier:
○ A substitution that makes two logical expressions identical.
○ Example: To unify Knows(John, x) and Knows(John, Jane), the unifier is {x/Jane}.
3. Most General Unifier (MGU):
○ The simplest unifier that works without adding unnecessary restrictions.
○ Example: For Knows(John, x) and Knows(y, z), the MGU is {y/John, x/z}.
4. Occur Check:
○ A variable cannot unify with an expression containing itself.
○ Example: S(x) cannot unify with S(S(x)).

Steps in Unification
1. Compare Expressions:
○ Compare the structures of the two expressions step by step.
2. Apply Substitution:
○ Replace variables with constants, other variables, or functions as needed.
3. Check for Conflicts:
○ Fail if constants do not match or if the occur check fails.
4. Return Unifier:
○ Provide the substitution set (MGU) or indicate failure.
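
These steps can be sketched in Python. The encoding below is an assumption made for
illustration: compound terms are tuples such as ('Knows', 'John', 'x'), lowercase strings are
variables, and capitalized strings are constants.

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def substitute(t, s):
    if is_variable(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def occurs(v, t, s):
    t = substitute(t, s)                 # occur check: v must not appear inside t
    return t == v or (isinstance(t, tuple) and any(occurs(v, a, s) for a in t))

def unify(x, y, s=None):
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_variable(x):
        return None if occurs(x, y, s) else {**s, x: y}
    if is_variable(y):
        return unify(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                          # mismatched constants: failure

# unify(('Knows', 'John', 'x'), ('Knows', 'John', 'Jane')) returns {'x': 'Jane'}
# unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y'))) returns
#   {'y': 'John', 'x': ('Mother', 'John')}, i.e. {y/John, x/Mother(John)}
# unify(('S', 'x'), ('S', ('S', 'x'))) returns None (the occur check fails)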

Example of Unification
Scenario:
You have a query: "Whom does John know?" represented as Knows(John, x).
Your knowledge base contains these sentences:
1. Knows(John, Jane)
2. Knows(y, Bill)
3. Knows(y, Mother(y))
4. Knows(x, Elizabeth)
Unification Results:
1. Expression: UNIFY(Knows(John, x), Knows(John, Jane))
○ Substitution: {x/Jane}
2. Expression: UNIFY(Knows(John, x), Knows(y, Bill))
○ Substitution: {x/Bill, y/John}
3. Expression: UNIFY(Knows(John, x), Knows(y, Mother(y)))
○ Substitution: {y/John, x/Mother(John)}
4. Expression: UNIFY(Knows(John, x), Knows(x, Elizabeth))
○ Result: Fails because x cannot simultaneously be John and Elizabeth.

Applications of Unification in AI
1. Inference Systems:
○ Used in forward chaining and backward chaining to derive new facts.
2. Logic Programming:
○ Core mechanism in Prolog for solving queries.
3. Pattern Matching:
○ Matching data against templates or rules.
4. Knowledge Representation:
○ Helps in reasoning with symbolic knowledge.

Module -5

Q. 09 a Explain Backward Chaining with example.

Backward chaining is a goal-driven inference method. It starts with the goal (the query to be
proved) and works backward: it looks for rules whose conclusion matches the goal and tries to
prove each of those rules' premises as subgoals, recursing until it reaches known facts.
Example: to prove CanFly(Tweety) from the rule Bird(x) ⇒ CanFly(x) and the fact Bird(Tweety),
the agent matches the goal against the rule's conclusion and reduces it to the subgoal
Bird(Tweety), which is a known fact, so the goal is proved.
Advantages of Backward Chaining
1. Goal-Oriented:
○ Works efficiently by focusing only on the goal.
2. Avoids Irrelevant Facts:
○ Does not process all facts, only those related to the goal.
3. Recursive Nature:
○ Easy to implement using recursion in programming.

Disadvantages of Backward Chaining


1. Complex Rules:
○ If the rules are complex or deeply nested, the process can be computationally expensive.
2. Requires Complete Knowledge:
○ Assumes the knowledge base contains all facts needed to prove the goal.
3. Infinite Loops:
○ May loop indefinitely if not carefully implemented (e.g., recursive dependencies).

Applications of Backward Chaining


1. Expert Systems:
○ Used in diagnostic systems like MYCIN for medical diagnosis.
2. Logic Programming:
○ Core inference mechanism in Prolog.
3. Rule-Based Systems:
○ Decision-making systems in AI.
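
As a minimal illustration of this goal-driven process, the sketch below implements propositional
backward chaining over a toy rule base; the rule and facts are invented for the example.

# Each rule maps a goal to one or more premise lists that together prove it.
rules = {'CanFly(Tweety)': [['Bird(Tweety)', 'NotPenguin(Tweety)']]}
facts = {'Bird(Tweety)', 'NotPenguin(Tweety)'}

def backward_chain(goal, visited=None):
    if visited is None:
        visited = set()
    if goal in facts:
        return True                      # the goal is a known fact
    if goal in visited:
        return False                     # guard against infinite recursion
    visited.add(goal)
    for premises in rules.get(goal, []):                     # rules concluding the goal
        if all(backward_chain(p, visited) for p in premises):
            return True
    return False

# backward_chain('CanFly(Tweety)') returns True: the goal reduces to the subgoals
# Bird(Tweety) and NotPenguin(Tweety), both of which are known facts.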

b Explain Graph Plan Algorithm

Planning Graphs in Artificial Intelligence


A planning graph is a data structure used in propositional planning to represent and reason about
possible actions and their effects over time. It helps estimate the feasibility and cost of achieving a goal.

Key Features of Planning Graphs


1. Structure:
○ A planning graph consists of levels that alternate between states (literals) and actions.
○ Level 0 represents the initial state.
2. Levels:
○ Literal Levels (States): Contain all literals that could be true at that time step.
○ Action Levels: Contain all actions whose preconditions could be satisfied at that time step.
3. Termination:
○ The graph is constructed until two consecutive levels are identical, at which point the graph
levels off.

Example: "Have Cake and Eat It Too"


• Initial State (Level 0):
○ Have(Cake) is true.
• Actions:
○ Eat(Cake) (results in ¬Have(Cake)).
○ Persistence Action: Represent inaction, meaning Have(Cake) can persist without being
eaten.
• Planning Graph Construction:
1. At Level 1 (Actions):
▪ Possible actions: Eat(Cake) or Persistence.
2. At Level 2 (Literals):
▪ Results: Either ¬Have(Cake) (if eaten) or Have(Cake) (if persisted).

Mutex Relations
A mutual exclusion (mutex) relation identifies pairs of actions or literals that cannot occur together.
Mutex Between Actions:
1. Inconsistent Effects:
○ One action negates the effect of another.
○ Example: Eat(Cake) and persistence of Have(Cake).
2. Interference:
○ One action’s effect negates the precondition of another.
○ Example: Eat(Cake) negates the precondition of persistence (Have(Cake)).
3. Competing Needs:
○ Preconditions of two actions are mutex.
○ Example: Bake(Cake) and Eat(Cake) (need ¬Have(Cake) and Have(Cake) simultaneously).
Mutex Between Literals:
1. Negation:
○ One literal is the negation of the other.
○ Example: Have(Cake) and ¬Have(Cake).
2. Mutex Between Actions Producing Them:
○ All actions achieving two literals are mutex.
Heuristic Estimation Using Planning Graphs
• Estimate the cost of achieving a goal based on the level where it appears in the graph.
Heuristic Methods:
1. Max-Level:
○ Take the maximum level where any goal appears. (Admissible)
2. Sum-Cost:
○ Take the sum of levels of all goals. (Inadmissible)
3. Set-Level:
○ Find the level where all goals appear without mutex. (Admissible, dominates max-level)
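
As a rough illustration of level-cost estimation, the sketch below builds the literal levels of a
planning graph with mutexes ignored (a relaxed graph) for the cake domain, then reads off the
level at which a goal literal first appears; the action encoding and the '-' prefix for negated
literals are assumptions.

actions = [
    # (name, preconditions, effects)
    ('Eat(Cake)',  {'Have(Cake)'},  {'Eaten(Cake)', '-Have(Cake)'}),
    ('Bake(Cake)', {'-Have(Cake)'}, {'Have(Cake)'}),
]

def literal_levels(initial, max_levels=10):
    level, levels = set(initial), []
    for _ in range(max_levels):
        levels.append(frozenset(level))
        new = set(level)                 # persistence: every literal carries forward
        for name, pre, eff in actions:
            if pre <= level:             # action applicable at this level
                new |= eff
        if new == level:                 # two identical levels: the graph has levelled off
            break
        level = new
    return levels

def level_cost(levels, goal):
    for i, lits in enumerate(levels):    # first level where the goal literal appears;
        if goal in lits:                 # max-level takes the maximum over all goals
            return i

levels = literal_levels({'Have(Cake)'})
# level_cost(levels, 'Eaten(Cake)') returns 1: Eaten(Cake) first appears at level 1.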

Graph Plan Algorithm Properties


1. Monotonicity:
○ Literals increase: Once a literal is in a level, it persists in all subsequent levels.
○ Actions increase: Once an action is possible, it remains possible.
○ Mutex relations decrease: if two actions or literals are mutex at a given level, they are also
mutex at all earlier levels; as levels increase, mutexes can only disappear, never appear.
2. Termination:
○ Because literals increase and mutexes decrease, the graph eventually levels off,
guaranteeing a level where all goals are non-mutex.

Advantages of Planning Graphs


1. Efficiency:
○ Polynomial complexity in the number of literals.
2. Heuristic Estimation:
○ Provides admissible heuristics (e.g., max-level, set-level).
3. Tracks Mutex:
○ Records impossibilities, making planning more accurate.

Q. 10 a Explain the concept of Resolution in AI.


b Discuss how state space search is done in AI

Planning can be treated as a search problem where we navigate through states (situations)
using actions (steps) to reach a goal. Two primary approaches are forward search and
backward search. Each has advantages, challenges, and uses heuristics for efficiency.

1. Forward (Progression) Search


• How it works:
Starts from the initial state and explores possible actions step by step until the goal state
is reached.
Challenges:
1. Irrelevant actions:
○ Many actions may not contribute to the goal.
○ Example: In buying a book online, there could be 10 billion possible actions for
different ISBNs, most of which are irrelevant.
2. Large state spaces:
○ Problems like air cargo planning involve a massive number of states and actions.
○ Example: Moving cargo between airports involves 2000 possible actions per state
on average, making the search graph enormous.
Why use it?
• When we can clearly define and evaluate possible actions from the start.
• Works well with strong heuristic guidance to prune irrelevant actions.
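
A minimal sketch of progression search with STRIPS-style actions (the one-block domain below is
invented for illustration; states are frozensets of the facts that currently hold):

from collections import deque

ACTIONS = [
    # (name, preconditions, add effects, delete effects)
    ('PickUp(A)',  {'OnTable(A)', 'HandEmpty'}, {'Holding(A)'}, {'OnTable(A)', 'HandEmpty'}),
    ('PutDown(A)', {'Holding(A)'}, {'OnTable(A)', 'HandEmpty'}, {'Holding(A)'}),
]

def plan(initial, goal):
    frontier = deque([(frozenset(initial), [])])     # BFS over states, tracking the plan
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                            # all goal facts hold
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                         # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))

# plan({'OnTable(A)', 'HandEmpty'}, {'Holding(A)'}) returns ['PickUp(A)']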

2. Backward (Regression) Search


• How it works:
Starts from the goal state and works backward, finding actions that could have led to the
current goal.
Key Feature:
Focuses only on relevant actions—those directly helping achieve the goal.
Example:
Goal: ¬Poor ∧ Famous (not poor and famous).
• Backward search identifies actions to achieve this state and works step by step to the
initial state.
Challenges:
1. State description complexity:
○ In some problems (e.g., n-queens), describing states one step away from the goal
can be difficult.
2. Action relevance:
○ Requires careful consideration of which actions are relevant to the goal.
Why use it?
• Efficient for well-defined goals with clearly reversible actions.
• Works best when the domain supports regression (e.g., described in PDDL format).

3. Heuristics for Planning


• Heuristics:
Functions estimating the cost or distance from the current state to the goal state.
Admissible Heuristics:
• Ensure the heuristic does not overestimate the cost to reach the goal.
• Example: Using a simplified version of the problem (relaxed problem) to calculate the
solution cost.
Why heuristics matter:
• Without heuristics, both forward and backward searches would explore too many
irrelevant states.
• A good heuristic reduces exploration and ensures efficient planning.
Domain-Specific vs. Domain-Independent Heuristics:
• Domain-Specific: Tailored to specific problems but require human expertise.
• Domain-Independent: Automatically generated for any problem using the factored
representation (states and actions described as logical facts).
