Ai Mqpa-1
One major milestone in AI history was the failure of early neural networks to
generalize effectively. In the 1960s and 1970s, researchers were excited about the
potential of neural networks, which are systems inspired by the human brain.
However, these early networks faced a significant issue: they couldn’t generalize well.
This means that they would perform well on specific tasks they had been trained on
but struggled to apply the knowledge to new, unseen data. For example, a neural
network trained to recognize certain patterns might fail when faced with similar but
slightly different patterns. This led to a slowdown in research during the 1970s, often
referred to as the "AI Winter," when interest and funding for AI research decreased.
2. Advent of DENDRAL
The second milestone was DENDRAL, developed at Stanford in the 1960s as one of
the first expert systems. It used a knowledge base of chemistry rules to infer
molecular structures from mass-spectrometry data, demonstrating that AI could
achieve expert-level performance in a narrow, well-defined domain.
3. Emergence of Intelligent Agents
The next important milestone was the emergence of intelligent agents in the 1990s.
An intelligent agent is a system that perceives its environment, makes decisions
based on its perceptions, and takes actions to achieve specific goals. These agents
are designed to act autonomously, learn from experience, and adapt to changing
environments. The development of intelligent agents led to the creation of multi-
agent systems, where multiple intelligent agents interact with each other and work
towards a common goal. The rise of intelligent agents paved the way for
applications like autonomous vehicles, robotics, and smart assistants like Siri and
Alexa. These agents are now used in many real-world applications, showcasing AI’s
ability to think, learn, and act in dynamic environments.
Conclusion
These milestones represent key points in AI history. The neural networks' failure to
generalize marked a period of frustration and led to a reevaluation of AI techniques.
The advent of DENDRAL showed that AI could be powerful in specific, expert-
driven areas. Finally, the emergence of intelligent agents opened new possibilities
for AI, making it more dynamic and applicable to a wider range of industries. These
developments have helped AI evolve into the field we know today, with systems that
can think, learn, and act intelligently.
• Episodic: In an episodic task, the agent’s actions in one part of the task do
not affect the other parts. Each action or decision is independent of the others,
and the task is broken down into separate episodes. For example, when an AI
system classifies individual images in an image recognition task, each image is
treated separately, and the decision made for one image doesn’t affect the
next.
• Sequential: In a sequential task, the agent’s actions are dependent on what
happened before. Each decision impacts the next one. For example, in a maze
navigation task, each step taken by the agent affects the next one, as the
agent has to remember its path and make decisions based on what it has
learned so far.
2. Environment:
The environment for a biometric authentication system is the physical and digital
setup where it operates. This includes:
• User’s biometric data: Fingerprint, face, iris scan, voice, etc., are collected for
verification.
• Database: A storage of authorized biometric data for comparison.
• Authentication devices: Hardware like fingerprint scanners, cameras, or
sensors used to collect biometric data.
• Security systems: Systems that protect against unauthorized access to
biometric data and make decisions based on the comparisons.
3. Actuators:
Actuators are the components that perform actions in response to decisions made by
the system. In a biometric authentication system, actuators include:
• Access control: Granting or denying access to a system or secure area based
on the authentication result.
• User feedback: Displaying results (e.g., "Access Granted" or "Access Denied")
on a screen or through an audio system.
• Locking or unlocking: If the system controls physical access (like doors), the
actuator could be a lock that opens or closes.
4. Sensors:
Sensors are devices that gather biometric data from the user for processing. In
biometric authentication systems, sensors include:
• Fingerprint scanners: Capture the unique pattern of a person’s fingerprints.
• Facial recognition cameras: Capture images of a person’s face to compare with
stored data.
• Iris scanners: Capture the unique patterns in a person’s iris for identification.
• Voice recognition microphones: Capture a person's voice for voice biometrics.
• Thermal scanners: In some cases, thermal sensors might be used for face or
body temperature detection as part of a multi-factor authentication system.
7. Define five components of a problem. Write a complete state space for a
vacuum cleaner to clean 2 squares P and Q. Q is to the right of P.
The five components of a problem are: the initial state, the set of possible
actions, the transition model (the state each action produces), the goal test, and
the path cost function. For this vacuum world, each state can be written as
(position, P_status, Q_status), where the position is P or Q and each status is 1 if
the square is clean and 0 if it is dirty, giving 2 × 2 × 2 = 8 states.
Example Transitions:
• Starting at (P, 0, 0) (both squares dirty), the vacuum cleaner can:
o Clean P → Transition to (P, 1, 0) (P cleaned).
o Move Right → Transition to (Q, 0, 0) (vacuum moves to Q, both squares
still dirty).
• If the vacuum cleaner is at (Q, 0, 0), it can:
o Clean Q → Transition to (Q, 0, 1) (Q cleaned).
o Move Left → Transition to (P, 0, 0) (vacuum moves back to P, both squares
still dirty).
Goal State:
The goal state is when both squares are clean. So, the goal state is (P, 1, 1) or (Q, 1, 1),
where both P and Q are clean.
Path Cost:
The path cost can be counted as the number of actions the vacuum cleaner takes,
such as cleaning or moving between squares.
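The state space and transitions above can be enumerated directly. A minimal sketch in Python, using the same encoding as the transitions above (1 = clean, 0 = dirty):

```python
# State: (position, P_status, Q_status); 1 = clean, 0 = dirty
states = [(pos, p, q) for pos in ('P', 'Q') for p in (0, 1) for q in (0, 1)]

def successors(state):
    """Map each action applicable in `state` to the resulting state."""
    pos, p, q = state
    if pos == 'P':
        return {'Clean': ('P', 1, q), 'Right': ('Q', p, q)}
    return {'Clean': ('Q', p, 1), 'Left': ('P', p, q)}

print(len(states))  # 8 states in total
for s in states:
    print(s, successors(s))
```

Any state whose two status flags are both 1 satisfies the goal test, regardless of the vacuum's position.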
1. Time Complexity:
• BFS:
o O(b^d): BFS explores every level of the tree or graph, and in the worst
case, it needs to explore all possible nodes. Here, b is the branching factor
(average children per node) and d is the depth of the goal.
• DFS:
o O(b^m): DFS also explores all nodes in the worst case, but it may explore
deeply into one branch before considering others. Here m is the maximum
depth of the search tree; when the tree is no deeper than the goal (m = d),
this matches BFS.
Comparison: Both BFS and DFS take exponential time in the worst case: O(b^d) for
BFS and O(b^m) for DFS.
2. Space Complexity:
• BFS:
o O(b^d): BFS stores all the nodes at each level in the queue, so its space
requirement is high, especially when the tree or graph is large.
• DFS:
o O(bm): DFS uses a stack to keep track of the path it's currently exploring,
so it only needs to store the nodes on that path (at most m levels deep, with
b siblings recorded per level, where m is the maximum depth). This results in
much lower space complexity than BFS.
Comparison: DFS requires less memory (space) than BFS.
3. Optimality:
• BFS:
o Optimal: BFS guarantees the shortest path to the goal because it explores
all nodes at one level before moving to the next level.
• DFS:
o Not Optimal: DFS can find a solution but doesn't guarantee it's the
shortest. It might explore a long path before finding a goal.
Comparison: BFS is optimal for finding the shortest path, while DFS is not guaranteed
to find the best solution.
4. Completeness:
• BFS:
o Complete: If there is a solution, BFS will always find it, as it checks all nodes
level by level.
• DFS:
o Not Always Complete: If there are infinite paths or loops, DFS may get
stuck and fail to find a solution.
Comparison: BFS is complete, while DFS may not always find a solution in infinite or
cyclic spaces.
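These trade-offs can be seen in a short sketch; the four-node graph below is a made-up example, not one from the notes:

```python
from collections import deque

def bfs(start, goal, neighbors):
    # Explore level by level; the first path found has the fewest edges.
    frontier, parent = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for n in neighbors(node):
            if n not in parent:
                parent[n] = node
                frontier.append(n)
    return None

def dfs(node, goal, neighbors, visited=None):
    # Follow one branch to the end before backtracking; the path found
    # is not guaranteed to be the shortest.
    if visited is None:
        visited = {node}
    if node == goal:
        return [node]
    for n in neighbors(node):
        if n not in visited:
            visited.add(n)
            sub = dfs(n, goal, neighbors, visited)
            if sub:
                return [node] + sub
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs('A', 'D', graph.__getitem__))  # ['A', 'B', 'D']
print(dfs('A', 'D', graph.__getitem__))
```

Note that BFS stores a whole frontier level at once, while DFS only keeps the current branch on the call stack, which is the space difference described above.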
In IDDFS, the algorithm performs a series of DFS searches, starting with a depth of 0
(just the root), and increases the depth limit by one after each search, until it reaches
the solution or the maximum depth. This way, it explores all nodes at the shallowest
depth first, like BFS, but uses the memory efficiency of DFS.
Steps in IDDFS:
1. Start with depth limit 0: Perform a DFS up to depth 0 (just the starting node).
2. Increase depth limit: Increase the depth limit by 1 and perform a DFS again up
to this new limit.
3. Repeat: Continue this process until the goal is found or the maximum depth is
reached.
Example:
o Start at A. Explore B, C, D, E, F.
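The three steps above can be sketched with a depth-limited DFS helper; the tree labelled A–F below is a hypothetical stand-in, not the tree from the figure:

```python
def dls(node, goal, neighbors, limit):
    # Depth-limited DFS: return a path to `goal` using at most `limit` edges.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for n in neighbors(node):
        sub = dls(n, goal, neighbors, limit - 1)
        if sub:
            return [node] + sub
    return None

def iddfs(start, goal, neighbors, max_depth=20):
    # Run DLS with limits 0, 1, 2, ... so the shallowest solution is found first.
    for limit in range(max_depth + 1):
        path = dls(start, goal, neighbors, limit)
        if path:
            return path
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
        'D': [], 'E': [], 'F': []}
print(iddfs('A', 'F', tree.__getitem__))  # ['A', 'C', 'F']
```

Each iteration repeats the shallower levels, but because the tree grows exponentially with depth, that repeated work is a small fraction of the total.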
10. For an automatic taxi driver application, explain Goal and utility agents with
appropriate block diagrams
In the context of an automatic taxi driver application, Goal-Based Agents and Utility-
Based Agents are two types of intelligent agents that help the taxi achieve its
objectives effectively.
1. Goal-Based Agents
A Goal-Based Agent acts based on predefined goals. It selects actions that will lead
to the achievement of these goals. In the taxi driver application, the goal can be to
safely transport passengers from the pickup location to the destination.
Key Features:
• The agent decides actions by evaluating how well they achieve the goal.
• It does not consider the "quality" of the solution but only ensures the goal is met.
• Actions: Avoid obstacles, follow traffic rules, and stop at the destination.
• Perception: The taxi detects the current location, obstacles, and passenger
instructions.
• Decision-making: Based on the goal (destination), the taxi plans the route and
avoids obstacles.
• Action: Execute the planned actions, like turning, accelerating, or stopping.
2. Utility-Based Agents
A Utility-Based Agent goes a step further than a Goal-Based Agent. It evaluates not
just achieving the goal but also how "good" or "efficient" each action is, based on a
utility function. In the taxi driver application, it considers factors like fuel efficiency,
shortest route, time, and passenger comfort to optimize the journey.
Key Features:
• Perception: The taxi monitors current traffic conditions, fuel levels, and
passenger preferences.
• Decision-making: Evaluates possible routes and actions based on a utility
function.
• Action: Execute the action that maximizes overall utility (e.g., taking a less
congested route).
Conclusion:
In an automatic taxi driver application, Goal-Based Agents ensure the taxi meets its
primary objective (reaching the destination), while Utility-Based Agents optimize
the process, improving overall efficiency and passenger satisfaction. Both play crucial
roles in intelligent decision-making.
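The distinction is easy to state in code. A minimal sketch of a utility-based route choice; the utility weights and route data below are invented for illustration, not calibrated values:

```python
def utility(route):
    # Higher is better: penalize travel time, fuel use, and passenger discomfort.
    # The weights (2.0, 1.0, 0.5) are illustrative assumptions.
    return -(2.0 * route['time_min'] + 1.0 * route['fuel_l'] + 0.5 * route['discomfort'])

routes = [
    {'name': 'highway',   'time_min': 20, 'fuel_l': 3.0, 'discomfort': 1},
    {'name': 'congested', 'time_min': 35, 'fuel_l': 2.0, 'discomfort': 4},
]

# A goal-based agent would accept either route, since both reach the
# destination; a utility-based agent picks the one maximizing utility.
best = max(routes, key=utility)
print(best['name'])  # highway
```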
11. Given a 3×3 board with 8 tiles (each numbered from 1 to 8) and one empty
space in an 8-puzzle problem as shown in the figure below. Place the numbers
so as to match the final configuration using DFS and BFS. We can slide four
adjacent tiles (left, right, above, and below) into the empty space.
Initial State:
123
560
784
Goal State:
123
586
074
DFS Solution
1. Start at the Initial State: Place the initial state on the stack.
2. Pop and Expand: Pop the top state from the stack. Generate all possible
successor states by sliding tiles adjacent to the empty space.
3. Check for Goal: If a successor state matches the goal state, the solution is
found.
4. Push Successors: Push the generated successor states onto the stack.
5. Repeat: Continue steps 2-4 until the stack is empty or the goal state is reached.
BFS Solution
1. Start at the Initial State: Place the initial state in the queue.
2. Dequeue and Expand: Dequeue the front state from the queue. Generate all
possible successor states.
possible successor states.
3. Check for Goal: If a successor state matches the goal state, the solution is
found.
4. Enqueue Successors: Enqueue the generated successor states at the rear of the
queue.
5. Repeat: Continue steps 2-4 until the queue is empty or the goal state is
reached.
Analysis
• DFS: DFS might explore irrelevant paths and potentially get stuck in loops,
especially if the search space is large. It might not find the shortest solution.
• BFS: BFS guarantees to find the shortest solution path (if one exists) because it
explores all nodes at a given depth before moving to the next level. However,
it can require more memory as it needs to store all nodes at a particular level.
The provided image shows the sequence of moves leading to the final state. Each
step represents a move made by sliding a tile into the empty space.
Step 1: Initial
123
560
784
Step 2:
123
506
784
Step 3:
123
586
704
Step 4: Final
123
586
074
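The four-step solution above can be verified with a small BFS solver (a sketch; states are 9-tuples read row by row, with 0 marking the blank square):

```python
from collections import deque

def neighbors(state):
    # Yield every state reachable by sliding one adjacent tile into the blank.
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def bfs(start, goal):
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in neighbors(s):
            if n not in parent:
                parent[n] = s
                queue.append(n)
    return None

start = (1, 2, 3, 5, 6, 0, 7, 8, 4)
goal = (1, 2, 3, 5, 8, 6, 0, 7, 4)
path = bfs(start, goal)
print(len(path) - 1)  # 3 moves, matching Steps 1-4 above
```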
Note:
Depth Limited Search (DLS) is a modified version of Depth First Search (DFS),
designed to limit the search depth to a specified level, preventing it from going too
deep and potentially getting stuck in infinite loops. It is a type of uninformed
search algorithm used in AI for traversing or searching tree-like structures (such as
game trees, decision trees, or problem-solving spaces).
Algorithm Explanation:
1. Input:
o A depth limit L, which is the maximum depth the search can reach.
2. Process:
o Start at the root node.
o Explore each branch of the tree by recursively expanding the child nodes.
o Keep track of the current depth in the search. If the depth exceeds the
limit L, stop exploring further down that branch.
o If a solution (goal state) is found at or within the limit, return it. If the
limit is reached without finding a goal, backtrack and explore other
branches.
o If no solution is found within the depth limit, report that the search
failed.
3. Termination:
o The search ends when a goal is found, or when all nodes up to the depth
limit have been explored without finding a solution, in which case the
search reports failure.
Problem Statement:
We have 3 missionaries (M) and 3 cannibals (C) on one side of a river, along with a
boat that can carry at most 2 people. The goal is to get all the missionaries and
cannibals across the river without violating the rule that at no time can the number of
cannibals outnumber the missionaries on either side of the river.
The state is represented as (M_L, C_L, M_R, C_R), where M_L and C_L are the
numbers of missionaries and cannibals on the left bank, and M_R and C_R are the
numbers on the right bank.
Initially:
• (M_L, C_L, M_R, C_R) = (3, 3, 0, 0)
Where all the missionaries and cannibals are on the left side of the river.
Goal state:
• (M_L, C_L, M_R, C_R) = (0, 0, 3, 3)
Where all the missionaries and cannibals are safely on the right side of the river.
Constraints:
• At no point, on either side of the river, can the cannibals outnumber the
missionaries while any missionaries are present (i.e., M_L ≥ C_L whenever
M_L > 0, and M_R ≥ C_R whenever M_R > 0).
Possible Actions:
• One or two people (missionaries or cannibals) can cross the river in the boat.
• The boat can either go from the left bank to the right bank or from the right
bank to the left bank.
• At each step, one or two people move, and the state of the system changes
accordingly.
The state space can be visualized as a graph where each node represents a state, and
the edges represent valid actions (movements of one or two people between the
banks). Below is a simplified diagram of the initial states and possible actions.
The nodes are marked as (M_L, C_L, M_R, C_R), and arrows represent valid moves
between states.
The diagram would continue expanding in a similar fashion until the goal state is reached.
1. Initialization:
o Begin from the initial state (3, 3, 0, 0) and add it to the open list.
2. State exploration:
o At each step, explore all valid actions (i.e., possible boat movements that
do not violate the constraints).
o Generate the next states and check if they lead to the goal state (0, 0, 3, 3).
3. Goal test:
o The goal state is when all missionaries and cannibals are on the right side
of the river.
4. Termination:
o Once the goal state is found, backtrack to trace the optimal sequence of
moves.
Yes, it is a good idea to check for repeated states in this problem. Without checking
for repeated states, the search algorithm could re-explore the same states multiple
times, leading to inefficiency and longer computation time. This problem has a large
state space, and without pruning (checking for repeated states), the search would
become exponentially larger as it would revisit the same states multiple times. By
keeping track of visited states, the algorithm can avoid unnecessary work, ensuring
that it explores each state only once.
1. Start at (3, 3, 0, 0).
2. Generate all valid boat crossings from the current state.
3. Add valid states to the queue and continue until the goal state (0, 0, 3, 3) is
reached.
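The breadth-first search described above can be sketched as follows. The state here is compressed to (M_L, C_L, boat side), since the right-bank counts are determined by the left-bank counts:

```python
from collections import deque

def safe(m, c):
    # A bank is safe if it has no missionaries, or at least as many
    # missionaries as cannibals.
    return 0 <= m <= 3 and 0 <= c <= 3 and (m == 0 or m >= c)

def solve():
    start, goal = (3, 3, 0), (0, 0, 1)  # boat: 0 = left bank, 1 = right bank
    parent = {start: None}
    queue = deque([start])
    crossings = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # people in the boat
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, b = state
        for dm, dc in crossings:
            nm, nc = (m - dm, c - dc) if b == 0 else (m + dm, c + dc)
            nxt = (nm, nc, 1 - b)
            if safe(nm, nc) and safe(3 - nm, 3 - nc) and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

path = solve()
print(len(path) - 1)  # 11 crossings in the shortest solution
```

The `parent` dictionary doubles as the visited set, implementing the repeated-state check discussed above: each state is enqueued at most once.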
14.Define in your own words the following terms: state, state space, search
tree, search node, goal, action, transition model, and branching factor.
1. State: A state is a particular configuration or situation of the problem at a
given moment, capturing everything relevant about the world at that point.
2. State Space: The state space is the collection of all possible states that can be
reached from the initial state, by applying actions over time. It’s like a map that
shows all possible situations that can occur during the problem-solving
process.
3. Search Tree: The search tree is the tree a search algorithm builds, with the
initial state at the root and a branch for every action applied, so that each
path from the root represents a sequence of actions.
4. Search Node: A search node is a single point in the search tree. It records a
state along with bookkeeping details such as its parent node, the action that
produced it, and the path cost accumulated so far.
5. Goal: The goal is the target state or condition we aim to reach while solving
the problem. It represents the desired outcome or solution to the problem.
6. Action: An action is a step or operation that moves from one state to another.
It transforms the current state into a new state based on certain rules.
7. Transition Model: The transition model describes how actions move from one
state to another. It defines the rules or mechanisms that explain how states
evolve when an action is applied.
8. Branching Factor: The branching factor is the number of possible actions (or
child nodes) that can be taken from any given state. It’s a measure of how
many choices are available at each step in the search tree.
1. Perceive: The agent perceives the environment and gathers information using
sensors.
2. Reason: It processes this information using its knowledge base and reasoning
mechanisms to make inferences and conclusions.
3. Act: The agent takes action using actuators to affect the environment.
• Knowledge Base (KB): A collection of facts and rules that represent the
agent's knowledge about the world.
• Inference Mechanism: A process that helps the agent derive new knowledge
from the existing knowledge base.
• Learning: If applicable, the agent may improve its knowledge over time based
on experience.
PEAS stands for Performance measure, Environment, Actuators, and Sensors. Here's
the PEAS specification for the Wumpus World:
1. Performance Measure:
o The goal is to find the gold and bring it back to the starting point,
avoiding pits and the Wumpus. A good performance is when the agent
collects the gold, survives, and returns to the start.
o Negative points for falling into a pit or being eaten by the Wumpus.
o Positive points for collecting the gold and safely exiting.
2. Environment:
o A grid of rooms (commonly 4×4) containing the agent, the Wumpus, pits,
and a heap of gold.
o The agent starts at the entry square; the locations of the Wumpus, pits,
and gold are fixed but initially unknown to the agent.
3. Actuators:
o The agent can move in the four cardinal directions (North, South, East,
West).
o The agent can grab the gold when it is on the same square as the gold.
o The agent can shoot an arrow to kill the Wumpus (if the Wumpus is in a
line of sight).
4. Sensors:
o Stench: perceived in squares adjacent to the Wumpus.
o Breeze: perceived in squares adjacent to a pit.
o Glitter: perceived in the square containing the gold.
o Bump: perceived when the agent walks into a wall.
o Scream: perceived when the Wumpus is killed by the arrow.
This PEAS specification outlines how the knowledge-based agent functions in the
Wumpus World environment, using sensors to gather information, actuators to
perform actions, and a performance measure to evaluate success.
16. Apply the A* search to find the solution path from a to z. Heuristics are with
nodes, and cost is with edges. Write all steps as well as open and closed lists for
full marks.
• Heuristics (h): We'll use the estimated cost to reach the goal node 'z' from
each node. The heuristic values are provided next to the nodes in the image.
• Costs (c): The cost of traversing each edge is given by the values on the
edges.
A* Search Algorithm:
1. Initialization:
o Create an OPEN list and a CLOSED list. Initially, the OPEN list contains
only the start node 'a' with its f(n) value calculated as f(a) = g(a) + h(a),
where g(a) is the cost to reach 'a' from the start (0 in this case) and h(a)
is the heuristic estimate at 'a'.
2. Iteration:
o Select Node: Select the node with the lowest f(n) value from the OPEN
list. In this case, it's 'a'.
o Expand Node: Remove the selected node ('a') from the OPEN list and
add it to the CLOSED list. Generate all successor nodes of 'a' (nodes
connected to 'a' by an edge).
o Evaluate Successors: For each successor node:
▪ Calculate g(n) as the cost to reach the successor from the start node,
and f(n) = g(n) + h(n).
▪ If the successor is not on either list, add it to the OPEN list.
▪ If the successor is already in the OPEN list, check if the new path to
it has a lower cost. If so, update its g(n), f(n), and parent.
o Repeat: Repeat steps 2 and 3 until the goal node 'z' is selected from the
OPEN list.
Step-by-Step Solution:
1. Initial State:
o OPEN: {a(14)}
o CLOSED: {}
2. Expand a:
o CLOSED: {a}
3. Expand b:
o CLOSED: {a, b}
4. Expand c:
o CLOSED: {a, b, c}
5. Expand e:
o CLOSED: {a, b, c, e}
6. Expand d:
o OPEN: {f(27), z(31)}
o CLOSED: {a, b, c, e, d}
7. Expand f:
o OPEN: {z(31)}
o CLOSED: {a, b, c, e, d, f}
8. Expand z:
o OPEN: {}
o CLOSED: {a, b, c, e, d, f, z}
Solution Path:
The solution path is found to be: a -> c -> d -> e -> z with a total cost of 31.
Note:
• The heuristic values used in this example are admissible (i.e., they never
overestimate the actual cost to the goal). This is crucial for A* search to
guarantee finding the optimal solution.
• If multiple nodes have the same f(n) value, you can choose any of them to
expand next. This can lead to different solution paths, but the cost of the
optimal path will remain the same.
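Since the figure's edge costs and heuristics are not reproduced in the text, here is a generic sketch of the loop with an invented graph and admissible heuristic (the values below are stand-ins, not the figure's):

```python
import heapq

def a_star(graph, h, start, goal):
    # OPEN is a priority queue ordered by f = g + h; CLOSED is a set.
    open_heap = [(h[start], 0, start, [start])]  # (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_heap,
                               (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float('inf')

# Invented example graph: {node: [(neighbor, edge cost), ...]}
graph = {'a': [('b', 4), ('c', 3)], 'b': [('f', 5)], 'c': [('d', 7), ('e', 10)],
         'd': [('e', 2)], 'e': [('z', 5)], 'f': [('z', 16)]}
# Invented heuristic, admissible for this graph (never overestimates)
h = {'a': 14, 'b': 12, 'c': 11, 'd': 6, 'e': 4, 'f': 11, 'z': 0}

path, cost = a_star(graph, h, 'a', 'z')
print(path, cost)  # ['a', 'c', 'd', 'e', 'z'] 17
```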
First-Order Logic (FOL) is a widely used formal system for representing knowledge
and reasoning. However, despite its strengths, FOL faces several limitations,
especially in practical AI applications. Below are the various reasons for the failure or
limitations of First-Order Logic:
2. Handling Uncertainty:
• Explanation: FOL assumes that knowledge is either true or false, which works
well for certain problems but fails in uncertain or incomplete scenarios. Real-
world problems often involve probabilities or uncertain facts that FOL cannot
handle directly.
3. Inability to Deal with Exceptions:
• Example: A general rule might be "All birds can fly." However, an exception
exists with birds like penguins and ostriches, which cannot fly. FOL needs
explicit rules for every exception, which is inefficient.
4. Complexity of Computation:
• Explanation: FOL can become computationally expensive, especially when
reasoning with large knowledge bases. The process of inference can require
exhaustive searches through many possibilities, which becomes impractical for
large-scale problems.
• Example: In a situation with many rules and facts, FOL may take too long to
draw conclusions because it has to check many combinations of these rules.
5. Lack of Commonsense Reasoning:
• Explanation: FOL is rigid and does not support commonsense reasoning well.
Human reasoning often involves making assumptions, filling in missing
information, or using intuition—tasks that FOL struggles with.
• Example: If someone says, "John is in the kitchen," a human might infer that
John is inside the house, but FOL would need a strict rule or fact explicitly
stating that kitchens are inside houses.
6. Identity and Equality Problems:
• Explanation: FOL deals with objects and their properties, but it sometimes has
difficulty with identity (i.e., determining if two objects are the same) or equality
in complex domains. Handling this requires special care and definitions.
• Example: If you have two different representations of the same object, FOL
may fail to recognize them as the same object unless explicitly stated.
Given:
• There are 6 tokens, labelled T1, T2, T3, T4, T5, T6.
• Since we are not replacing the token after it is selected, the number of tokens
decreases after the first selection.
• The order in which the tokens are selected matters because the tokens are not
replaced.
1. For the first token, we have all 6 tokens to choose from.
2. After the first token is selected, for the second token, we have 5 remaining
tokens to choose from.
Each outcome is an ordered pair of distinct tokens, (Ti, Tj) with i ≠ j, giving
6 × 5 = 30 possible outcomes.
Step 4: Conclusion
The sample space consists of 30 possible ordered pairs (since the selection order
matters and tokens are not replaced).
This represents all the possible outcomes of selecting 2 tokens from the 6 available
tokens without replacement.
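The count can be checked directly with `itertools.permutations`, which generates exactly the ordered pairs without replacement:

```python
from itertools import permutations

tokens = ['T1', 'T2', 'T3', 'T4', 'T5', 'T6']
# Ordered pairs of distinct tokens: order matters, no replacement.
sample_space = list(permutations(tokens, 2))
print(len(sample_space))  # 30
print(sample_space[:3])   # [('T1', 'T2'), ('T1', 'T3'), ('T1', 'T4')]
```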
Summary:
• Failure of First-Order Logic: Issues such as incomplete knowledge
representation, handling uncertainty, inability to deal with exceptions, and
computational complexity are the main reasons for FOL's limitations in real-
world AI applications.
• Sample Space for Picking 2 Tokens from 6: When selecting 2 tokens from 6
tokens without replacement, the total number of possible ordered outcomes is
30, and the sample space consists of 30 pairs of tokens.
20. Define Universal and Existential Instantiation and give examples for both.
Prove the following using Backward and Forward chaining: "As per the law, it is
a crime for an American to sell weapons to hostile nations. Country E, an
enemy of America, has some missiles, and all the missiles were sold to it by
Solan, who is an American citizen." Prove that "Solan is a criminal."
21. Complete the following exercises about logical sentences:
What is Resolution?
Procedure:
o Negate the statement you want to prove and add it to the set of clauses.
b. Prove that if each of q1 and q2 has π as its stationary distribution, then the
sequential composition q = q1 ◦ q2 also has π as its stationary distribution.
28. Using the axioms of probability, prove that any probability distribution on a
discrete random variable must sum to 1.
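A sketch of the derivation, using Kolmogorov's axioms (non-negativity, normalization, and additivity):

```latex
\text{Let } X \text{ take the distinct values } x_1, x_2, \dots
\text{ The events } \{X = x_i\} \text{ are mutually exclusive and exhaustive, so }
\bigcup_i \{X = x_i\} = \Omega.

\sum_i P(X = x_i) = P\Big(\bigcup_i \{X = x_i\}\Big)
  \quad \text{(additivity over disjoint events)}

P(\Omega) = 1 \quad \text{(normalization axiom)}, \qquad
\text{hence } \sum_i P(X = x_i) = 1.
```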
Conclusion:
Using the axioms of probability, we have proved that the probabilities in a discrete random
variable's distribution must always sum to 1. This ensures that the total likelihood of all
possible outcomes covers the entire sample space.
29. We have a bag of three biased coins a, b, and c with probabilities of coming up
heads of 20%, 60%, and 80%, respectively. One coin is drawn randomly from the bag
(with equal likelihood of drawing each of the three coins), and then the coin is
flipped three times to generate the outcomes X1, X2, and X3.
a. Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
b. Calculate which coin was most likely to have been drawn from the bag if the
observed flips come out heads twice and tails once
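Part (b) can be checked numerically. The posterior for each coin is proportional to prior × p²(1 − p); the binomial coefficient C(3, 2) multiplies every hypothesis equally, so it cancels when comparing:

```python
priors = {'a': 1/3, 'b': 1/3, 'c': 1/3}   # equal chance of drawing each coin
p_heads = {'a': 0.2, 'b': 0.6, 'c': 0.8}  # P(heads) for each biased coin

# Unnormalized posterior for observing two heads and one tail
post = {coin: priors[coin] * p_heads[coin] ** 2 * (1 - p_heads[coin])
        for coin in priors}
total = sum(post.values())
post = {coin: v / total for coin, v in post.items()}

best = max(post, key=post.get)
print(best, round(post[best], 3))  # b 0.474
```

Coin b wins because its 60% heads rate matches the observed 2-out-of-3 frequency better than the more extreme biases of a and c.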