AI & ML Question Bank with Related Solutions
Question Bank
Module -1
Sl. No. | Question | Marks | Bloom's Level
1. What is Artificial Intelligence? Explain its evolution. 8 L2
Definition of AI (1 mark):
Artificial Intelligence (AI) is the branch of computer science concerned with building machines that perform tasks normally requiring human intelligence, such as reasoning, learning, perception, and language understanding.
Evolution of AI:
AI originated in the 1950s with pioneers like Alan Turing and John
McCarthy.
The Dartmouth Conference (1956) coined the term "Artificial
Intelligence."
2. Illustrate the importance of the Turing test and state how it can be considered a 5 L2
benchmark for AI algorithms.
Definition and concept (1 mark):
The Turing test, proposed by Alan Turing (1950), evaluates whether a machine can exhibit behaviour indistinguishable from a human's: a human interrogator converses by text with both a machine and a human, and if the interrogator cannot reliably tell which is which, the machine passes. Because it tests overall intelligent behaviour rather than any single skill, it serves as a benchmark for AI systems.
3. State the various types of intelligence with suitable examples for each category. 8 L2
The various types of intelligence, with suitable examples for each category:
1. Linguistic Intelligence:
o Ability to use language effectively, both verbally and in
writing
o Examples: Writers, poets, journalists, speakers, translators
2. Logical-Mathematical Intelligence:
o Capacity to understand and work with numbers, logical
reasoning, and abstractions
o Examples: Mathematicians, scientists, engineers,
accountants, programmers
3. Spatial Intelligence:
o Skill in visualizing and manipulating objects and spatial
dimensions
o Examples: Architects, artists, navigators, chess players,
surgeons
4. Musical Intelligence:
o Ability to produce and appreciate rhythm, pitch, and
timbre
o Examples: Musicians, composers, conductors, music
critics
5. Bodily-Kinesthetic Intelligence:
o Control of one's body movements and the capacity to
handle objects skillfully
o Examples: Athletes, dancers, actors, craftspeople, surgeons
6. Interpersonal Intelligence:
o Capacity to understand and interact effectively with others
o Examples: Teachers, counselors, politicians, salespeople,
managers
7. Intrapersonal Intelligence:
o Self-awareness and the ability to understand one's own
emotions, motivations, and inner states
o Examples: Psychologists, philosophers, spiritual leaders
8. Naturalistic Intelligence:
o Ability to recognize and categorize plants, animals, and
other elements of nature
o Examples: Biologists, ecologists, botanists,
environmentalists
9. Existential Intelligence:
o Sensitivity and capacity to tackle deep questions about
human existence
o Examples: Philosophers, theologians, cosmologists
10. Emotional Intelligence:
o Ability to perceive, control, and evaluate emotions
o Examples: Therapists, negotiators, effective leaders
4. Examine the AI literature to discover whether or not the following tasks can 10 L4
currently be solved by computers:
a. Playing a decent game of table tennis (ping-pong).
b. Giving tablets to a bedridden patient.
c. Discovering and proving new mathematical theorems.
e. Writing an intentionally funny story.
f. Giving competent legal advice in a specialized area of law. For the currently
infeasible tasks, try to find out what the difficulties are and estimate when they
will be overcome
Playing a decent game of table tennis (ping-pong): Feasible. Robot table-tennis players exist and can sustain rallies in controlled settings, though top human players still win.
Giving tablets to a bedridden patient: Currently infeasible as a fully autonomous task; it requires reliable perception, delicate manipulation, and safety guarantees around a vulnerable person.
Discovering and proving new mathematical theorems: Partially feasible. Automated theorem provers have settled open problems (e.g., the Robbins conjecture in 1996), but discovering interesting new theorems remains largely human-driven.
Writing an intentionally funny story: Currently infeasible. Humor depends on deep world knowledge, cultural context, and timing that AI systems capture only superficially.
Giving competent legal advice in a specialized area of law: Partially feasible. Systems can retrieve statutes and precedents and draft documents, but competent advice requires judgment and accountability that still demand a human lawyer.
For all these tasks, it's important to note that while AI can make
significant progress, human oversight and collaboration will likely remain
crucial for the foreseeable future, especially in sensitive areas like
healthcare and law.
The policy in question: always use your blinker (turn signal) before turning, regardless of whether other road users are present.
Reasoning:
1. Safety: Using your blinker always maximizes safety for all road
users.
2. Predictability: It makes your actions predictable to other drivers,
pedestrians, and cyclists.
3. Legal compliance: It's typically required by law in most
jurisdictions.
4. Habit formation: Consistently using your blinker builds a good
habit.
5. Consideration for unseen road users: There may be road users you
haven't noticed.
A reflex agent design is sufficient to carry out this policy. Here's why:
1. Simple rule: The policy "always use your blinker before turning"
is a straightforward rule that can be implemented as a reflex
action.
2. No complex decision-making: The agent doesn't need to evaluate
goals or utilities in real-time; it simply needs to activate the
blinker before any turn.
3. Direct mapping from perception to action: When the agent
perceives that a turn is imminent, it can directly trigger the action
of using the blinker.
4. No internal state required: The agent doesn't need to maintain any
internal state or model of the world to implement this policy.
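To make this concrete, here is a minimal Python sketch of such a simple reflex agent; the percept strings and action names are assumptions for illustration, not part of any standard API:

    # Simple-reflex-agent sketch for the "always signal before turning" policy.
    # Percept and action names are hypothetical, chosen for illustration.
    def reflex_blinker_agent(percept):
        # Direct percept-to-action mapping; no internal state or planning.
        if percept == "turn-left":
            return "activate-left-blinker"
        if percept == "turn-right":
            return "activate-right-blinker"
        return "no-op"

    print(reflex_blinker_agent("turn-left"))  # activate-left-blinker
    print(reflex_blinker_agent("straight"))   # no-op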
Categories of Agent Programs:
1. Simple Reflex Agents
Explanation: These agents select actions based only on the current percept, using condition-action rules, and ignore the rest of the percept history.
2. Model-Based Reflex Agents
Explanation: These agents maintain an internal state that tracks aspects of the world they cannot currently perceive, updated using a model of how the world evolves.
3. Goal-Based Agents
Explanation: These agents consider the future consequences of actions. They
use goal information to decide which situations are desirable.
4. Utility-Based Agents
Explanation: These agents have a more general performance measure
(utility function) to evaluate different possible scenarios, allowing for
more complex decision-making.
5. Learning Agents
Explanation: These agents can improve their performance over time
through experience. They have a learning component that modifies other
components to improve overall performance.
6. Hybrid Agents
Explanation: These agents combine two or more of the above
architectures to leverage the strengths of each.
Each category of agent program has its strengths and is suited to different
types of environments and problems. The choice of agent program
depends on the complexity of the environment, the nature of the task, and
the resources available for implementation.
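To make the distinction concrete, here is a minimal Python sketch contrasting goal-based and utility-based action selection; the result, goal_test, and utility functions are hypothetical placeholders, not a standard library:

    # Sketch: goal-based vs. utility-based action selection.
    def goal_based_choice(state, actions, result, goal_test):
        # Pick any action whose predicted outcome satisfies the goal.
        for a in actions:
            if goal_test(result(state, a)):
                return a
        return None

    def utility_based_choice(state, actions, result, utility):
        # Pick the action whose predicted outcome has the highest utility.
        return max(actions, key=lambda a: utility(result(state, a)))

    # Hypothetical demo: states are integers, actions add an offset.
    actions = [-1, 0, 1]
    result = lambda s, a: s + a
    print(goal_based_choice(3, actions, result, goal_test=lambda s: s == 4))         # 1
    print(utility_based_choice(3, actions, result, utility=lambda s: -abs(s - 10)))  # 1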
8 Explain the various properties of environments. 8 L2
Properties of Environments in AI
Understanding the properties of environments is crucial for designing
effective intelligent agents. The key properties are:
1. Fully Observable vs. Partially Observable: whether the agent's sensors give it access to the complete state of the environment at each point in time.
2. Deterministic vs. Stochastic: whether the next state is completely determined by the current state and the agent's action.
3. Episodic vs. Sequential: whether the agent's experience divides into independent episodes, or current decisions affect future ones.
4. Static vs. Dynamic: whether the environment can change while the agent is deliberating.
5. Discrete vs. Continuous: whether states, time, percepts, and actions are discrete or continuous.
6. Single-Agent vs. Multi-Agent: whether other agents are acting in the environment, cooperatively or competitively.
Example: a crossword puzzle is fully observable, deterministic, and static; taxi driving is partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
Evaluation Strategies for Search Algorithms
1. Completeness: Does the algorithm guarantee to
find a solution if one exists?
2. Optimality: Does the algorithm guarantee to
find the best solution?
3. Time Complexity: How does the execution
time grow with problem size?
4. Space Complexity: How does the memory
usage grow with problem size?
5. Branching Factor: Average number of
successors per state.
6. Depth of Solution: Length of the shortest path
to a solution.
7. Heuristic Accuracy: For informed search, how
well does the heuristic estimate the actual cost?
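For criteria 3-5, a quick back-of-the-envelope calculation shows how the worst-case node count, on the order of b^d, grows with the branching factor b and depth d; the values below are assumed purely for illustration:

    # Worst-case nodes generated by a breadth-first-style search:
    # roughly b + b^2 + ... + b^d. b and d are assumed example values.
    b, d = 10, 5
    nodes = sum(b**i for i in range(1, d + 1))
    print(nodes)  # 111110 nodes for b = 10, d = 5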
- DFS:
DFS will visit nodes in the following order until it finds
node 8: 1, 2, 4. It will then visit node 8, so DFS visits 3
nodes before finding node 8.
BFS:
BFS visits all nodes at the current level before
proceeding to the next. The nodes visited before reaching node 8
are 1, 2, 3, 4, 5, 6, 7, so BFS visits 7 nodes before finding
node 8.
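A short Python sketch reproduces these counts; the tree structure (1 → [2, 3], 2 → [4, 5], 3 → [6, 7], 4 → [8]) is an assumption reconstructed from the visit orders stated above:

    from collections import deque

    # Tree reconstructed (as an assumption) from the visit orders above.
    tree = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [8]}

    def dfs_order(root):
        # Stack-based DFS, keeping left-to-right child order.
        stack, order = [root], []
        while stack:
            node = stack.pop()
            order.append(node)
            stack.extend(reversed(tree.get(node, [])))
        return order

    def bfs_order(root):
        # Queue-based BFS: level by level.
        queue, order = deque([root]), []
        while queue:
            node = queue.popleft()
            order.append(node)
            queue.extend(tree.get(node, []))
        return order

    def visited_before(order, goal=8):
        # Nodes visited before the goal is reached.
        return order[:order.index(goal)]

    print(visited_before(dfs_order(1)))  # [1, 2, 4]  -> 3 nodes before 8
    print(visited_before(bfs_order(1)))  # [1, 2, 3, 4, 5, 6, 7]  -> 7 nodes before 8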
2. Problem-Solving Agent:
Key Differences:
Feature | Simple Reflex Agent | Problem-Solving Agent
Planning | No planning; acts based on current percept | Plans a sequence of actions based on goals
1. Approach:
DFS:
o Explores as far as possible along each branch before backtracking.
BFS:
o Explores all neighbors at the current depth before moving deeper.
2. Traversal Strategy:
DFS:
o Depth-wise: It explores deeper into the tree
or graph by following one path as far as
possible.
o Backtracking: After reaching a node with no
unvisited neighbors, it backtracks to the
most recent node with unexplored paths.
BFS:
o Level-wise: It explores all nodes at a given
level before moving to the next level.
o Queue-based: Uses a queue to keep track
of the nodes to visit in level order.
3. Data Structure:
DFS:
o Uses a stack (can be implemented with
recursion or explicitly with a stack) to
manage nodes to be visited next.
BFS:
o Uses a queue to manage nodes in the order
they were discovered to visit the next node
level-by-level.
4. Space Complexity:
DFS:
o Space complexity is O(d), where d is the
maximum depth of the tree or graph (the
deepest path). In the worst case (if the
graph is very deep), DFS can use a lot of
memory, especially for deep recursion.
BFS:
o Space complexity is O(b^d), where b is the
branching factor (the maximum number of
children per node) and d is the depth. BFS
can require a large amount of memory since
it needs to store all nodes at the current
level before moving to the next level.
5. Time Complexity:
DFS:
o Time complexity is O(V + E), where V is the
number of vertices (nodes) and E is the
number of edges. DFS needs to explore all
vertices and edges in the worst case.
BFS:
o Time complexity is also O(V + E). It explores
every vertex and every edge, making it
linear in terms of the graph size, similar to
DFS.
6. Completeness:
DFS:
o Not complete in the case of infinite-depth
graphs or trees. DFS can get stuck exploring
an infinitely deep path and might never find
the solution even if it exists at a shallow
level.
BFS:
o Complete in finite graphs or trees. BFS
guarantees that if a solution exists, it will be
found in the shallowest depth possible,
meaning it will explore all nodes level by
level until the goal is reached.
7. Optimality:
DFS:
o Not optimal. DFS does not necessarily find
the shortest path to a goal, as it may
explore a long path first even if a shorter
path exists.
BFS:
o Optimal (in unweighted graphs). BFS is
guaranteed to find the shortest path to the
goal in an unweighted graph or tree since it
explores all nodes at each depth level
before moving deeper.
8. Use Cases:
DFS:
o Useful when:
You want to explore all possible
solutions (e.g., solving a maze or
searching for all connected
components in a graph).
Memory is a concern (especially
when the branching factor is large).
You are looking for a path but not
necessarily the shortest path (e.g.,
puzzle-solving where any solution
will do).
BFS:
o Useful when:
You need the shortest path in an
unweighted graph or tree (e.g.,
finding the shortest route in a city
map).
You need to explore nodes level by
level (e.g., social networks, word
ladders).
You want to explore shallow
solutions before deep ones,
especially in finite graphs.
9. Example:
DFS:
o Example in a tree (pre-order): start at the root (1) and go deep into each branch.
Path: 1 → 2 → 4 → 8 → backtrack → 5 → backtrack → 3 → 6 → 7
BFS:
o Example in a tree (level-order): visit all nodes at the current level before going deeper.
Path: 1 → 2 → 3 → 4 → 5 → 6 → 7 → 8
Summary of Differences:
Feature | Depth-First Search (DFS) | Breadth-First Search (BFS)
Time Complexity | O(V + E) | O(V + E)
Backtracking | Yes | No
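As a sketch of one DFS use case listed above (finding connected components), the following Python snippet uses an explicit stack; the example graph is assumed for illustration:

    # DFS-based connected components of an undirected graph (example graph assumed).
    graph = {1: [2], 2: [1], 3: [4], 4: [3, 5], 5: [4]}

    def connected_components(graph):
        seen, components = set(), []
        for start in graph:
            if start in seen:
                continue
            stack, component = [start], []
            while stack:
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                component.append(node)
                stack.extend(graph[node])
            components.append(component)
        return components

    print(connected_components(graph))  # [[1, 2], [3, 4, 5]]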
Components of Problem Formulation:
1. State:
o A state represents a specific configuration
or condition of the problem at any point in
time. Each state can be thought of as a
node in the search space.
o The initial state is where the problem-
solving agent starts, and the goal state is
the desired outcome or solution.
o For example, in a maze, a state could
represent the current position of the agent
in the maze.
2. Initial State:
o The initial state is the starting point of the
problem-solving process. It represents the
configuration from which the search begins.
o The agent uses the initial state to begin
exploring the state space.
o Example: In a chess game, the initial state is
the starting arrangement of the pieces on
the board.
3. Goal State:
o The goal state is the target configuration
that the problem-solving agent seeks to
achieve. The search algorithm terminates
when the goal state is found.
o A well-defined problem has a clear goal
state or a goal condition that specifies when
a solution is found.
o Example: In a pathfinding problem, the goal
state is reaching a specific destination.
4. Actions:
o Actions are the set of possible moves or
operations that the agent can perform to
transition from one state to another.
o Each action changes the current state to a
new state by following the problem’s
defined rules.
o Example: In a navigation problem, actions
might include moving north, south, east, or
west to explore new locations.
5. Transition Model:
o The transition model defines how actions
transform the current state into a new
state. It describes the result of taking a
particular action from a given state.
o It is often represented as a function:
Result(s, a) returns the new state after
performing action a in state s.
o Example: In a puzzle, the transition model
specifies how swapping two pieces leads to
a new arrangement.
6. Path Cost:
o The path cost is a numerical value that
represents the total cost of a sequence of
actions from the initial state to the current
state. It allows the algorithm to evaluate
and compare different paths.
o The path cost might depend on factors like
distance, time, or any other resource. In
many algorithms, the goal is to minimize
this cost.
o Example: In a navigation problem, the path
cost might represent the total distance
traveled.
7. Search Space:
o The search space consists of all the possible
states and actions that can be explored to
solve the problem. It represents the entire
set of configurations the agent could
encounter while searching for a solution.
o The size and structure of the search space
impact the efficiency of the search
algorithm. Larger or more complex search
spaces are harder to navigate and require
more efficient strategies.
8. Search Strategy:
o A search strategy is the algorithm or
method that dictates how the agent
explores the search space. Different
strategies define the order in which nodes
are visited and how paths are evaluated.
o Common strategies include:
Depth-First Search (DFS): Explores
as deep as possible before
backtracking.
Breadth-First Search (BFS): Explores
nodes level by level.
Uniform-Cost Search: Expands the
least-cost node first.
A* Search: Uses heuristics to guide
the search toward the goal
efficiently.
o The choice of strategy determines the
performance in terms of time, space,
completeness, and optimality.
9. Solution:
o A solution is a sequence of actions (or a
path) that leads from the initial state to the
goal state.
o The search algorithm terminates when a
solution is found. Some algorithms aim to
find the first solution (like DFS), while others
focus on finding the optimal (least-cost)
solution (like BFS or A*).
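The components above map directly onto code. Below is a minimal Python sketch of a problem definition in this style; the Problem class and the 2x2-grid example are illustrative assumptions, not a standard library API:

    # Problem-formulation sketch: initial state, actions, transition model
    # (result), goal test, and path cost. All names here are illustrative.
    class Problem:
        def __init__(self, initial, goal):
            self.initial = initial
            self.goal = goal

        def actions(self, state):
            raise NotImplementedError  # actions applicable in `state`

        def result(self, state, action):
            raise NotImplementedError  # transition model: Result(s, a)

        def goal_test(self, state):
            return state == self.goal

        def path_cost(self, cost_so_far, state, action, next_state):
            return cost_so_far + 1     # default: each step costs 1

    class GridProblem(Problem):
        # Navigate a 2x2 grid; states are (x, y) coordinates.
        MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

        def actions(self, state):
            x, y = state
            return [a for a, (dx, dy) in self.MOVES.items()
                    if 0 <= x + dx <= 1 and 0 <= y + dy <= 1]

        def result(self, state, action):
            dx, dy = self.MOVES[action]
            return (state[0] + dx, state[1] + dy)

    p = GridProblem(initial=(0, 0), goal=(1, 1))
    print(p.actions((0, 0)))         # ['north', 'east']
    print(p.result((0, 0), "east"))  # (1, 0)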
State Space Concepts:
1. States:
o A state is a snapshot of the world or the
problem at a particular moment. It
encapsulates all the necessary information
the agent needs to make decisions.
o Example: In a chess game, a state includes
the positions of all pieces on the board.
2. Initial State:
o The initial state is where the agent begins
its search. It defines the starting conditions
from which the agent explores the state
space.
3. Goal State:
o The goal state is the desired configuration
that represents the solution to the problem.
The agent's objective is to navigate the
state space to reach this state.
4. Actions/Transitions:
o Actions (or operators) define the legal
moves that take the agent from one state
to a successor state, generating the edges
of the state space.
5. Search Space:
o The state space forms the search space for
the agent. The search space includes every
state and the connections (transitions)
between states that can be explored.
6. Path:
o A path is a sequence of states that the
agent transitions through as it explores the
state space. The path begins at the initial
state and ends at the goal state if a solution
is found.
Example:
In a maze-solving problem, the states are the positions in the maze, the initial state is the entrance, the goal state is the exit, the transitions are the legal moves between adjacent cells, and a path is the sequence of positions leading from the entrance to the exit.
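Tying these concepts together, here is a minimal BFS maze-solver sketch in Python; the maze layout, and the use of 'S', 'G', and '#' for start, goal, and walls, are assumptions for illustration:

    from collections import deque

    # States are (row, col) cells; '#' marks walls. Layout is assumed.
    MAZE = ["S.#",
            ".##",
            "..G"]

    def solve(maze):
        rows, cols = len(maze), len(maze[0])
        start = next((r, c) for r in range(rows)
                     for c in range(cols) if maze[r][c] == "S")
        frontier = deque([[start]])   # queue of paths: BFS explores level by level
        visited = {start}
        while frontier:
            path = frontier.popleft()
            r, c = path[-1]
            if maze[r][c] == "G":     # goal test
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # actions
                nr, nc = r + dr, c + dc                        # transition model
                if (0 <= nr < rows and 0 <= nc < cols
                        and maze[nr][nc] != "#" and (nr, nc) not in visited):
                    visited.add((nr, nc))
                    frontier.append(path + [(nr, nc)])
        return None

    print(solve(MAZE))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]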