AI MQP Solution
Module -1
Q.01 a Define Artificial Intelligence. Explain the Turing test approach with an example.
Artificial Intelligence (AI) is the study of building agents that perceive their environment and act to achieve their goals. Under the Turing test approach, a computer passes the test if a human interrogator, after posing written questions, cannot tell whether the responses come from a person or a machine. For example, if an interrogator chats with a hidden computer and a hidden human and cannot reliably identify which is which, the computer is said to exhibit intelligent behavior. To pass the test, the computer would need the following capabilities:
1. Natural Language Processing (NLP): The machine must understand and generate human language.
2. Knowledge Representation: The machine should store and recall information.
3. Automated Reasoning: The machine must make decisions and draw conclusions from its knowledge.
4. Machine Learning: The machine should adapt and learn from new experiences.
Q.02 b Explain the concept of Rationality in detail.
Rationality in AI
Rationality in artificial intelligence (AI) refers to the behavior of an agent that consistently makes the best possible decisions to achieve its
goals. A rational agent always aims to maximize its performance measure based on the information it has about the environment and its
own actions.
Definition of Agent:
An Agent in Artificial Intelligence (AI) is an entity that perceives its environment through sensors and acts
upon it through actuators to achieve specific goals. It can be a physical robot, a software program, or
even a virtual assistant. The behavior of an agent is determined by its programming and its interactions
with the environment.
3. Goal-Based Agent:
○ A Goal-Based Agent selects actions based on the goal it is trying to achieve, in addition to
considering the current environment's state.
○ It uses a model of the world and plans ahead to decide on actions that will lead to the
desired goal.
○ Goal-based agents are capable of reasoning about the future and making decisions that will
help them reach the goal in the most efficient way.
○ These agents can plan and explore multiple options, considering long-term consequences.
Example: An autonomous car navigating through traffic to reach a destination while avoiding
obstacles and following traffic rules.
Characteristics:
○ Has a specific goal and works towards achieving it.
○ Can consider multiple future states and plan ahead.
○ Requires search algorithms and planning techniques.
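The characteristics above can be sketched in code. Below is a minimal goal-based agent on a small grid world; the grid size, goal cell, obstacles, and the BFS planning routine are illustrative assumptions, not part of the notes:

```python
from collections import deque

class GoalBasedAgent:
    """Minimal goal-based agent: plans a path to a goal cell on a grid.

    The 4-connected grid world and BFS planner are illustrative
    assumptions for demonstration only.
    """

    def __init__(self, goal, obstacles, size=4):
        self.goal = goal
        self.obstacles = obstacles
        self.size = size

    def neighbors(self, state):
        # Successor states: the four adjacent cells that are in bounds
        # and not blocked by an obstacle.
        x, y = state
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < self.size and 0 <= ny < self.size \
                    and (nx, ny) not in self.obstacles:
                yield (nx, ny)

    def plan(self, start):
        """Look ahead (BFS) for a sequence of states reaching the goal."""
        frontier = deque([(start, [start])])
        visited = {start}
        while frontier:
            state, path = frontier.popleft()
            if state == self.goal:
                return path
            for nxt in self.neighbors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None  # goal unreachable

agent = GoalBasedAgent(goal=(3, 3), obstacles={(1, 1), (2, 2)})
path = agent.plan((0, 0))  # shortest obstacle-avoiding route
```

The agent considers future states (the frontier) and plans ahead rather than reacting only to its current cell, which is exactly the distinction from a simple reflex agent.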
Module -2
1. States:
Each state is defined by:
• Agent’s location: It can either be in the Left or Right square.
• Dirt status: Each square can be either dirty or clean.
With 2 agent locations and 2 dirt states for each of the 2 squares, there are 2 × 2 × 2 = 8 possible states.
2. Initial State:
Any of the 8 states can be designated as the starting state. For example:
○ Agent starts at Left, and both squares are dirty.
3. Actions:
The vacuum cleaner can perform one of three actions:
○ Left: Move to the left square (if not already there).
○ Right: Move to the right square (if not already there).
○ Suck: Clean the current square.
4. Transition Model:
○ Left: Moves the agent to the left square unless it's already at the leftmost position.
○ Right: Moves the agent to the right square unless it's already at the rightmost position.
○ Suck: Cleans the current square. If the square is already clean, no effect occurs.
5. Goal Test:
The goal is achieved when both squares are clean, regardless of the agent's location.
6. Path Cost:
Each action (Left, Right, or Suck) costs 1. The total path cost is the number of steps taken to
achieve the goal.
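The six-part formulation above maps directly to code. A minimal sketch, assuming the state is encoded as (agent location, left-square dirty?, right-square dirty?) and solved with breadth-first search — since each action costs 1, BFS also minimizes path cost:

```python
from collections import deque

# State: (location, dirt_left, dirt_right); location is 'L' or 'R'.
ACTIONS = ('Left', 'Right', 'Suck')

def result(state, action):
    """Transition model for the two-square vacuum world."""
    loc, dl, dr = state
    if action == 'Left':
        return ('L', dl, dr)
    if action == 'Right':
        return ('R', dl, dr)
    # Suck: clean the current square (no effect if already clean)
    if loc == 'L':
        return (loc, False, dr)
    return (loc, dl, False)

def goal_test(state):
    return not state[1] and not state[2]  # both squares clean

def solve(initial):
    """BFS over the 8 states; unit action costs make BFS cost-optimal."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for a in ACTIONS:
            nxt = result(state, a)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [a]))
    return None

plan = solve(('L', True, True))  # agent at Left, both squares dirty
print(plan)  # ['Suck', 'Right', 'Suck']
```

The returned plan cleans the left square, moves right, and cleans the right square — a path cost of 3.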
b Discuss how BFS and IDDFS work in Artificial Intelligence with examples
Breadth-First Search (BFS) and Iterative Deepening Depth-First Search (IDDFS) are two widely
used search algorithms in Artificial Intelligence (AI) for solving search problems in graphs or
trees. They differ in their approach to traversing nodes and handling memory constraints.
The 8-puzzle consists of a 3×3 grid with 8 numbered tiles and 1 blank space. The objective is to
rearrange the tiles into the goal configuration by sliding the blank into adjacent positions.
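A minimal sketch of both algorithms on a small example tree (the tree itself is an illustrative assumption, not the 8-puzzle): BFS expands the shallowest nodes first via a FIFO queue, while IDDFS runs depth-limited DFS with limits 0, 1, 2, …:

```python
from collections import deque

# A small illustrative tree (adjacency list); not taken from the notes.
TREE = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': [], 'E': [], 'F': [], 'G': [],
}

def bfs(start, goal):
    """Expand shallowest nodes first using a FIFO queue of paths."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for child in TREE[path[-1]]:
            frontier.append(path + [child])
    return None

def depth_limited(node, goal, limit, path):
    """DFS that refuses to descend deeper than `limit`."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in TREE[node]:
        found = depth_limited(child, goal, limit - 1, path + [child])
        if found:
            return found
    return None

def iddfs(start, goal, max_depth=10):
    """IDDFS: repeat depth-limited DFS with increasing depth limits."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal, limit, [start])
        if found:
            return found
    return None

print(bfs('A', 'F'))    # ['A', 'C', 'F']
print(iddfs('A', 'F'))  # ['A', 'C', 'F']
```

Both return the same shallowest solution, but IDDFS uses only linear memory (one path at a time) at the price of re-expanding shallow nodes on each iteration.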
DFS
Depth-First Search (DFS) is a popular and fundamental search algorithm used in Artificial Intelligence (AI)
to explore graphs or trees. The core idea behind DFS is to explore as deep as possible down a branch
before backtracking. This makes it a LIFO (Last In, First Out) approach, meaning that the most recently
generated node is expanded first.
DFS is commonly used in problems like pathfinding, solving puzzles (e.g., 8-puzzle), and analyzing graphs.
Working of DFS:
1. Start at the Root Node: DFS begins the search at the root node (or the initial state of the
problem). From there, it explores the child nodes.
2. Expand Deeply: Starting with the root node, DFS explores down one branch of the tree, moving to
the deepest level.
3. Backtrack if No Further Nodes: If DFS reaches a node with no unexpanded child nodes, it
backtracks to the most recent node that still has unvisited children.
4. Repeat: This process continues until a goal node is found or all paths are exhausted.
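The four working steps above can be sketched with an explicit stack; the example graph is an illustrative assumption:

```python
def dfs(graph, start, goal):
    """Depth-first search using an explicit stack (LIFO frontier)."""
    stack = [[start]]          # each entry is a path from start
    visited = set()
    while stack:
        path = stack.pop()     # most recently generated path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children; reversed so the first-listed child is expanded
        # first, going as deep as possible before backtracking.
        for child in reversed(graph.get(node, [])):
            stack.append(path + [child])
    return None

# Illustrative graph (assumed, not from the notes)
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
print(dfs(graph, 'S', 'G'))  # ['S', 'B', 'D', 'G']
```

Here DFS first dives down S → A → C, backtracks when C has no unvisited children, and then explores S → B → D → G — exactly the expand-deeply-then-backtrack behavior described above.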
Advantages of DFS:
1. Memory Efficient: DFS requires only O(bm) space (b = branching factor, m = maximum depth), since it
stores just the current path and the unexpanded siblings along it; a backtracking variant needs only O(m).
2. Simple Implementation: DFS is straightforward to implement using recursion or a stack.
3. Good for Deep Searches: If the goal lies deep in the tree, DFS will eventually find it without
exploring shallow nodes first.
Disadvantages of DFS:
1. Non-Optimal: DFS does not guarantee the shortest path to the goal. It may explore long paths
before finding a shorter one.
2. Non-Complete: DFS is not guaranteed to find a solution if the search space is infinite or if there
are cycles.
3. Can Get Stuck: In cases with cycles (e.g., infinite loops), DFS can get stuck, failing to find the goal.
Applications of DFS:
• Puzzle Solving: DFS can be used to solve puzzles like the 8-puzzle or the 15-puzzle.
• Game Searching: In games, DFS can explore all possible moves from a given state.
• Web Crawling: DFS can be used to explore web pages or websites by following links.
Properties of Uniform-Cost Search (UCS):
• Optimal: UCS always finds the cheapest path because it always picks the node with the
lowest total cost.
• Complete: It will always find a solution if one exists, as long as the costs of actions are
positive.
Disadvantages of UCS:
• Memory Usage: UCS can use a lot of memory because it needs to store all the nodes it
explores.
• Slower in Some Cases: If there are many low-cost steps, UCS may have to explore many
paths before it finds the best one.
Applications:
• GPS Navigation: Finding the shortest route from one place to another.
• Route Planning: Used in many systems where we need to find the least expensive way to
get from point A to point B.
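The UCS behavior described above can be illustrated with a short sketch using a priority queue ordered by path cost; the weighted graph is an assumed example:

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the cheapest frontier node."""
    frontier = [(0, start, [start])]   # (path cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # first goal pop = cheapest path
        if node in explored:
            continue
        explored.add(node)
        for child, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, child, path + [child]))
    return None

# Illustrative weighted graph (assumed, not from the notes)
graph = {
    'A': [('B', 1), ('C', 5)],
    'B': [('C', 1), ('D', 4)],
    'C': [('D', 1)],
    'D': [],
}
print(ucs(graph, 'A', 'D'))  # (3, ['A', 'B', 'C', 'D'])
```

Note that UCS passes over the direct A → B → D route (cost 5) in favor of the longer but cheaper A → B → C → D route (cost 3), which is exactly the optimality property stated above.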
Module -3
Q. 05 a Define Heuristic Function. Explain the A* Search Algorithm for minimizing the total
estimated solution cost.
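A minimal sketch of A*, assuming the standard evaluation function f(n) = g(n) + h(n) (path cost so far plus a heuristic estimate of the remaining cost); the graph and heuristic values below are illustrative assumptions:

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: expand the node minimizing f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2   # keep only the cheapest known route
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None

# Illustrative graph and admissible heuristic values (assumed)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 2)], 'G': []}
h_values = {'S': 5, 'A': 4, 'B': 2, 'G': 0}
print(astar(graph, h_values.get, 'S', 'G'))  # (5, ['S', 'A', 'B', 'G'])
```

With an admissible heuristic (h never overestimates the true remaining cost), the first time A* pops the goal it has found the cheapest path, here cost 5 via S → A → B → G.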
b With a neat diagram, explain a Knowledge-Based Agent.
Atomic Sentences
• Atomic sentences are the simplest sentences in propositional logic. They are just proposition
symbols, which can be either true or false.
• These symbols represent facts or statements. For example:
○ P, Q, R, or something more descriptive like North or W1,3 (which could mean "the wumpus
is in room [1,3]").
• Two special symbols are used in logic:
○ True: Always true.
○ False: Always false.
Complex Sentences
• Complex sentences are made by combining atomic sentences using logical connectives
(operators).
• These connectives allow us to build more complicated logical statements from simple ones.
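The distinction between atomic and complex sentences can be sketched with a tiny evaluator; the nested-tuple representation and the W13/North symbols are illustrative choices, not part of the notes:

```python
# Sentences as nested tuples: a bare string is an atomic sentence
# (proposition symbol); ('not', s), ('and', a, b), ('or', a, b) and
# ('implies', a, b) build complex sentences with logical connectives.

def evaluate(sentence, model):
    """Truth value of a sentence under a model (symbol -> bool)."""
    if isinstance(sentence, str):
        return model[sentence]           # atomic sentence: look it up
    op = sentence[0]
    if op == 'not':
        return not evaluate(sentence[1], model)
    if op == 'and':
        return evaluate(sentence[1], model) and evaluate(sentence[2], model)
    if op == 'or':
        return evaluate(sentence[1], model) or evaluate(sentence[2], model)
    if op == 'implies':
        # a => b is equivalent to (not a) or b
        return (not evaluate(sentence[1], model)) or evaluate(sentence[2], model)
    raise ValueError(f'unknown connective: {op}')

# "If the wumpus is in room [1,3], the agent moves North": W13 => North
sentence = ('implies', 'W13', 'North')
print(evaluate(sentence, {'W13': True, 'North': True}))   # True
print(evaluate(sentence, {'W13': True, 'North': False}))  # False
```

The evaluator recurses on the connective structure, bottoming out at the atomic proposition symbols — mirroring how semantics are defined compositionally in propositional logic.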
Module -4
Q. 07 a Explain syntax and semantics of First order Logic with suitable Examples
Unification is the process of finding a substitution that makes two logical expressions identical.
A substitution maps variables in one expression to constants, variables, or functions in another
expression.
Steps in Unification
1. Compare Expressions:
○ Compare the structures of the two expressions step by step.
2. Apply Substitution:
○ Replace variables with constants, other variables, or functions as needed.
3. Check for Conflicts:
○ Fail if constants do not match or if the occur check fails.
4. Return Unifier:
○ Provide the substitution set (MGU) or indicate failure.
Example of Unification
Scenario:
You have a query: "Whom does John know?" represented as Knows(John, x).
Your knowledge base contains these sentences:
1. Knows(John, Jane)
2. Knows(y, Bill)
3. Knows(y, Mother(y))
4. Knows(x, Elizabeth)
Unification Results:
1. Expression: UNIFY(Knows(John, x), Knows(John, Jane))
○ Substitution: {x/Jane}
2. Expression: UNIFY(Knows(John, x), Knows(y, Bill))
○ Substitution: {x/Bill, y/John}
3. Expression: UNIFY(Knows(John, x), Knows(y, Mother(y)))
○ Substitution: {y/John, x/Mother(John)}
4. Expression: UNIFY(Knows(John, x), Knows(x, Elizabeth))
○ Result: Fails because x cannot simultaneously be John and Elizabeth.
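The four unification results above can be reproduced with a short sketch; the conventions that lowercase strings are variables and that compound terms are tuples are illustrative assumptions:

```python
def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def occurs(v, t):
    """Occur check: does variable v appear inside term t?"""
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a) for a in t)

def substitute(t, s):
    """Apply substitution s to term t (following variable chains)."""
    while is_variable(t) and t in s:
        t = s[t]
    if isinstance(t, tuple):
        return tuple(substitute(a, s) for a in t)
    return t

def unify(x, y, s=None):
    """Return the most general unifier of x and y, or None on failure."""
    if s is None:
        s = {}
    x, y = substitute(x, s), substitute(y, s)
    if x == y:
        return s
    if is_variable(x):
        return None if occurs(x, y) else {**s, x: y}
    if is_variable(y):
        return None if occurs(y, x) else {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):            # unify argument by argument
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None  # constant or structure mismatch

print(unify(('Knows', 'John', 'x'), ('Knows', 'John', 'Jane')))
# {'x': 'Jane'}
print(unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y'))))
# {'y': 'John', 'x': ('Mother', 'John')}
print(unify(('Knows', 'John', 'x'), ('Knows', 'x', 'Elizabeth')))
# None: x cannot be both John and Elizabeth
```

The last call fails exactly as in result 4 above: once x is bound to John, the second argument pair demands John = Elizabeth, a constant mismatch. (Standardizing apart — renaming one sentence's x — would avoid this clash.)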
Applications of Unification in AI
1. Inference Systems:
○ Used in forward chaining and backward chaining to derive new facts.
2. Logic Programming:
○ Core mechanism in Prolog for solving queries.
3. Pattern Matching:
○ Matching data against templates or rules.
4. Knowledge Representation:
○ Helps in reasoning with symbolic knowledge.
Module -5
Advantages of Backward Chaining
1. Goal-Oriented:
○ Works efficiently by focusing only on the goal.
2. Avoids Irrelevant Facts:
○ Does not process all facts, only those related to the goal.
3. Recursive Nature:
○ Easy to implement using recursion in programming.
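The recursive, goal-oriented behavior described above can be sketched for propositional Horn rules; the rule base below is an illustrative assumption:

```python
# Horn rules: goal -> list of alternative bodies (lists of subgoals);
# a fact is a rule with an empty body. Illustrative rule base only.
RULES = {
    'CanFly': [['IsBird', 'HasWings']],
    'IsBird': [['LaysEggs', 'HasFeathers']],
    'LaysEggs': [[]],       # facts: provable with no subgoals
    'HasFeathers': [[]],
    'HasWings': [[]],
}

def backward_chain(goal):
    """Prove `goal` recursively: succeed if some rule's body is provable."""
    for body in RULES.get(goal, []):
        if all(backward_chain(subgoal) for subgoal in body):
            return True    # goal established via this rule
    return False           # no rule establishes the goal

print(backward_chain('CanFly'))   # True
print(backward_chain('CanSwim'))  # False
```

The search starts from the goal and works backwards through rule bodies, touching only facts relevant to that goal — illustrating all three advantages listed above.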
Mutex Relations
A mutual exclusion (mutex) relation identifies pairs of actions or literals that cannot occur together.
Mutex Between Actions:
1. Inconsistent Effects:
○ One action negates the effect of another.
○ Example: Eat(Cake) and persistence of Have(Cake).
2. Interference:
○ One action’s effect negates the precondition of another.
○ Example: Eat(Cake) negates the precondition of persistence (Have(Cake)).
3. Competing Needs:
○ Preconditions of two actions are mutex.
○ Example: Bake(Cake) and Eat(Cake) (need ¬Have(Cake) and Have(Cake) simultaneously).
Mutex Between Literals:
1. Negation:
○ One literal is the negation of the other.
○ Example: Have(Cake) and ¬Have(Cake).
2. Mutex Between Actions Producing Them:
○ All actions achieving two literals are mutex.
Heuristic Estimation Using Planning Graphs
• Estimate the cost of achieving a goal based on the level where it appears in the graph.
Heuristic Methods:
1. Max-Level:
○ Take the maximum level where any goal appears. (Admissible)
2. Sum-Cost:
○ Take the sum of levels of all goals. (Inadmissible)
3. Set-Level:
○ Find the level where all goals appear without mutex. (Admissible, dominates max-level)
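The three heuristics can be sketched given each goal's first-appearance level and the mutex pairs remaining at each level of the planning graph; the cake-domain numbers below (including the Eaten(Cake) literal) are illustrative assumptions:

```python
from itertools import combinations

# Illustrative planning-graph data (assumed, not from the notes):
# level at which each goal literal first appears, and which literal
# pairs are still mutex at each level.
goal_levels = {'Have(Cake)': 2, 'Eaten(Cake)': 1}
mutex = {2: {frozenset({'Have(Cake)', 'Eaten(Cake)'})}, 3: set()}

def max_level(levels):
    """Admissible: level at which the last goal first appears."""
    return max(levels.values())

def level_sum(levels):
    """Sum-cost: sum of first-appearance levels; inadmissible because
    it ignores interactions between subgoals, but often accurate."""
    return sum(levels.values())

def set_level(levels, mutex):
    """First level where all goals appear together with no mutex pair.
    Admissible, and never smaller than max-level (dominates it)."""
    level = max_level(levels)
    goals = list(levels)
    while any(frozenset(p) in mutex.get(level, set())
              for p in combinations(goals, 2)):
        level += 1
    return level

print(max_level(goal_levels))         # 2
print(level_sum(goal_levels))         # 3
print(set_level(goal_levels, mutex))  # 3
```

Here set-level returns 3 rather than 2 because the two goals, though both present at level 2, remain mutex there — showing why it dominates the max-level estimate.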
Planning can be treated as a search problem where we navigate through states (situations)
using actions (steps) to reach a goal. Two primary approaches are forward search and
backward search. Each has advantages, challenges, and uses heuristics for efficiency.