
22AD406 ARTIFICIAL INTELLIGENCE

2 MARKS
UNIT I INTELLIGENT AGENTS AND BLIND SEARCH:

Q1: Define Artificial Intelligence.


A: Artificial Intelligence (AI) is the simulation of human intelligence in machines that can
perform tasks such as learning, problem-solving, and decision-making.

Q2: What is an Intelligent Agent?


A: An intelligent agent is an entity that perceives its environment through sensors and acts upon
it using actuators to achieve a specific goal.

Q3: Who is considered the father of AI?


A: John McCarthy is considered the father of AI, as he coined the term "Artificial Intelligence"
in 1956.

Q4: What was the significance of the Dartmouth Conference (1956) in AI?
A: The Dartmouth Conference marked the official beginning of AI as a field of study, bringing
researchers together to discuss machine intelligence.

Q5: What are the two main components of an intelligent system?


A: The two main components are the agent (which perceives and acts) and the environment
(where the agent operates).

Q6: What is a fully observable environment?


A: A fully observable environment is one where the agent has access to complete information
about the environment's state at any time.

Q7: What is a rational agent?


A: A rational agent is an entity that takes actions to maximize its performance measure based on
the given percepts and knowledge.

Q8: How is rationality different from intelligence in AI?


A: Rationality refers to making the best possible decision, while intelligence encompasses
learning, reasoning, and problem-solving.

Q9: What is a static environment in AI?


A: A static environment does not change while the agent is making a decision.

Q10: What is a stochastic environment?


A: A stochastic environment has unpredictable outcomes, meaning the same action may lead to
different results.

Q11: Name two types of intelligent agents.


A: (i) Simple Reflex Agent, (ii) Goal-Based Agent.
Q12: What is a utility-based agent?
A: A utility-based agent chooses actions that maximize its utility function, providing the best
possible outcome rather than just a goal-based result.

Q13: What is a state space in AI?


A: A state space is the set of all possible states an agent can be in while solving a problem.

Q14: What is the purpose of state space search?


A: State space search is used to explore different states systematically to find a solution to a
given problem.

Q15: What is the Generate and Test method?


A: It is a brute-force search technique where potential solutions are generated and tested to see if
they meet the goal criteria.
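
A minimal Generate and Test sketch in Python; the candidate generator and goal test below are illustrative assumptions, not part of the syllabus answer:

from itertools import permutations

def generate_and_test(candidates, goal_test):
    # Generate candidates one by one and test each against the goal criteria.
    for candidate in candidates:
        if goal_test(candidate):
            return candidate
    return None

# Toy example: find an ordering of [3, 1, 2] that is sorted.
solution = generate_and_test(permutations([3, 1, 2]),
                             lambda seq: list(seq) == sorted(seq))
print(solution)  # (1, 2, 3)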

Q16: What is the major drawback of Generate and Test search?


A: It can be inefficient and time-consuming, as it does not use heuristics to guide the search.

Q17: What is Simple Search in AI?


A: Simple Search is an uninformed search technique that systematically explores states without
considering the cost or efficiency of paths.

Q18: Give an example of a Simple Search algorithm.


A: Breadth-First Search (BFS) is an example of a simple search algorithm.

Q19: What is Depth-First Search (DFS)?


A: DFS is a search algorithm that explores as far as possible along a branch before backtracking.
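
As an illustration, a small recursive DFS over an adjacency-list graph in Python (the example graph is an assumption for demonstration):

def dfs(graph, node, goal, visited=None):
    # Explore one branch as deeply as possible before backtracking.
    if visited is None:
        visited = set()
    if node == goal:
        return [node]
    visited.add(node)
    for neighbour in graph.get(node, []):
        if neighbour not in visited:
            path = dfs(graph, neighbour, goal, visited)
            if path is not None:
                return [node] + path
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(dfs(graph, 'A', 'E'))  # ['A', 'C', 'E']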

Q20: What is the worst-case time complexity of DFS?


A: The worst-case time complexity of DFS is O(b^m), where b is the branching factor and m is
the maximum depth.

Q21: What is Breadth-First Search (BFS)?


A: BFS is a search algorithm that explores all nodes at the current level before moving deeper
into the search tree.
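
A minimal BFS sketch in Python using a FIFO queue (the example graph is assumed for illustration):

from collections import deque

def bfs(graph, start, goal):
    # Expand all nodes at the current depth before moving one level deeper.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A', 'D'))  # ['A', 'B', 'D'], a shortest path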

Q22: What is the advantage of BFS over DFS?


A: BFS guarantees finding the shortest path in an unweighted graph, whereas DFS does not.

Q23: Which search method uses more memory, DFS or BFS?


A: BFS uses more memory as it stores all nodes at the current level, whereas DFS only stores the
current path.

Q24: Which search method is more suitable for finding a solution in an infinite state space?
A: DFS can descend forever along an infinite branch, while BFS systematically explores level by
level, making it more suitable for infinite state spaces.
Q25: What is Depth-Bounded DFS?
A: Depth-Bounded DFS is a variation of DFS where a depth limit is set to prevent exploring
deep or infinite paths.

Q26: How does Depth-Bounded DFS solve the problem of infinite loops in DFS?
A: By setting a maximum depth, it ensures the search does not continue indefinitely in deep or
infinite state spaces.
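
A sketch of Depth-Bounded (depth-limited) DFS, assuming the same adjacency-list graph style used in the earlier examples:

def depth_bounded_dfs(graph, node, goal, limit):
    # Stop expanding once the depth limit is reached, so very deep or
    # infinite branches cannot trap the search.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for neighbour in graph.get(node, []):
        path = depth_bounded_dfs(graph, neighbour, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

graph = {'A': ['B'], 'B': ['C'], 'C': []}
print(depth_bounded_dfs(graph, 'A', 'C', limit=2))  # ['A', 'B', 'C']
print(depth_bounded_dfs(graph, 'A', 'C', limit=1))  # None (goal lies beyond the limit)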

Q27: Define an Intelligent Agent.


Answer: An intelligent agent is an entity that perceives its environment through sensors and acts
upon it using actuators to achieve specific goals.

Q28: What are the main components of an intelligent agent?


Answer: The main components are sensors, actuators, an agent function, and a performance
measure.

Q29: What is the difference between an agent and an environment?


Answer: An agent is an entity that perceives and acts, while the environment is the external
system in which the agent operates.

Q30: Define Rationality in AI.


Answer: Rationality refers to an agent’s ability to make decisions that maximize its performance
measure based on available information and possible actions.

Q31: What is the difference between BFS and DFS?


Answer: BFS explores all nodes at the current level before moving deeper, whereas DFS
explores as far as possible along a branch before backtracking.

Q32: What is Generate and Test search?


Answer: Generate and Test is a brute-force search technique where possible solutions are
generated and tested to check if they meet the goal criteria.

Q33: What is Depth Bounded DFS?


Answer: Depth Bounded DFS is a variation of DFS where the search is limited to a fixed depth
to avoid infinite loops in deep or infinite graphs.

Q34: What is the primary advantage of BFS over DFS?


Answer: BFS guarantees finding the shortest path in an unweighted graph, whereas DFS does
not.

Q35: What is the nature of a dynamic environment?


Answer: A dynamic environment changes while the agent is deliberating on an action.

Q36: What is meant by the structure of an intelligent agent?


Answer: The structure of an intelligent agent refers to how it is designed, including its
architecture and decision-making mechanisms (e.g., simple reflex, goal-based, utility-based).
UNIT II INFORMED SEARCH METHODS:

Q1: What is Heuristic Search?


A: Heuristic search is a search strategy that uses a heuristic function to estimate the best possible
path toward a goal, improving search efficiency.

Q2: How does heuristic search differ from uninformed search?


A: Heuristic search uses domain-specific knowledge to guide the search, while uninformed
search explores blindly without guidance.

Q3: What is a heuristic function in AI?


A: A heuristic function, denoted as h(n), estimates the cost or distance from a given node to the
goal state.

Q4: Give an example of a commonly used heuristic function.


A: The Manhattan distance (sum of horizontal and vertical distances) is a common heuristic
function used in grid-based pathfinding.
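
For example, a Manhattan-distance heuristic for grid positions could be written as follows (the coordinate tuples are assumptions for illustration):

def manhattan_distance(node, goal):
    # Sum of horizontal and vertical distances between two grid cells.
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan_distance((1, 2), (4, 6)))  # 3 + 4 = 7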

Q5: What is Best First Search?


A: Best First Search is a search algorithm that selects the next node to expand based on the
lowest heuristic cost h(n).
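
A minimal greedy Best First Search sketch in Python, ordering the frontier by h(n) with a priority queue; the graph and heuristic values are assumed for illustration:

import heapq

def best_first_search(graph, h, start, goal):
    # Always expand the frontier node with the lowest heuristic value h(n).
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(best_first_search(graph, h, 'A', 'D'))  # ['A', 'C', 'D']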

Q6: How does Best First Search differ from A* Search?


A: Best First Search uses only h(n), whereas A* Search uses f(n) = g(n) + h(n), considering
both path cost and heuristic estimates.

Q7: What is the Hill Climbing search algorithm?


A: Hill Climbing is an iterative search algorithm that moves toward the highest-valued
neighboring state, optimizing locally at each step.
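
A compact Hill Climbing sketch on a one-dimensional objective; the objective function and neighbour step are illustrative assumptions:

def hill_climbing(objective, start, step=1, max_iterations=1000):
    # Repeatedly move to the best neighbouring state; stop when no neighbour improves.
    current = start
    for _ in range(max_iterations):
        neighbours = [current - step, current + step]
        best = max(neighbours, key=objective)
        if objective(best) <= objective(current):
            return current  # a local (possibly not global) maximum
        current = best
    return current

# Maximise f(x) = -(x - 3)^2, whose global maximum is at x = 3.
print(hill_climbing(lambda x: -(x - 3) ** 2, start=0))  # 3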

Q8: What is the main drawback of Hill Climbing?


A: It can get stuck in local maxima, plateaus, or ridges, preventing it from reaching the global
optimum.

Q9: What is a Local Maximum in Hill Climbing?


A: A local maximum is a state where all neighboring states have lower values, but it is not the
best possible solution (global maximum).

Q10: Name one technique to overcome local maxima in Hill Climbing.


A: Random restarts, where the search is restarted from a new random position, can help escape
local maxima.

Q11: What is the Solution State Space in AI?


A: It refers to the entire set of possible states that an agent can explore while solving a problem.
Q12: What defines the goal state in a solution state space?
A: The goal state is a specific state that satisfies the problem’s objective or solution criteria.

Q13: What is Variable Neighbourhood Descent (VND)?


A: VND is an optimization technique that systematically changes the neighborhood structure to
escape local optima.

Q14: What is the main advantage of VND?


A: It avoids getting stuck in poor local optima by exploring multiple neighborhood structures.

Q15: What is Beam Search?


A: Beam Search is a heuristic search algorithm that keeps only the k best nodes at each level to
limit memory usage.

Q16: How does Beam Search differ from Best First Search?
A: Beam Search restricts the number of nodes explored at each step, while Best First Search
expands all promising nodes.

Q17: What is Tabu Search?


A: Tabu Search is a metaheuristic optimization technique that prevents revisiting recently
explored states using a tabu list.

Q18: Why is the tabu list important in Tabu Search?


A: It helps escape cycles and local optima by preventing the algorithm from revisiting previously
explored solutions.

Q19: What is the Peak to Peak method in AI?


A: Peak to Peak is an optimization technique that allows movement between local maxima using
a special transition strategy.

Q20: How does Peak to Peak search overcome local maxima?


A: It introduces strategies like random jumps or guided transitions to move between peaks (local
optima).

Q21: What is Brute Force Search?


A: Brute Force Search systematically explores all possible solutions until the correct one is
found.

Q22: What is the main disadvantage of Brute Force Search?


A: It is highly inefficient for large problems due to its exponential time complexity.

Q23: What is the Branch and Bound method?


A: Branch and Bound is an optimization algorithm that systematically explores branches of a
search tree while pruning non-optimal solutions.
Q24: How does Branch and Bound improve search efficiency?
A: It eliminates unnecessary branches using bounding functions, reducing the number of
explored states.

Q25: What is Refinement Search?


A: Refinement Search incrementally improves a partial solution until a complete and optimal
solution is found.

Q26: What is an example of Refinement Search?


A: Constraint Satisfaction Problems (CSPs), where a solution is refined step-by-step by adjusting
variable assignments.

UNIT III A* AND RANDOMIZED SEARCH METHODS

Q1: What is the A* algorithm?


A: A* is a best-first search algorithm that finds the shortest path using the evaluation function
f(n) = g(n) + h(n), where g(n) is the actual cost and h(n) is the heuristic estimate to the goal.
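
A short A* sketch in Python with f(n) = g(n) + h(n); the weighted graph and heuristic table below are assumptions for demonstration:

import heapq

def a_star(graph, h, start, goal):
    # graph[node] is a list of (neighbour, step_cost) pairs; h estimates cost to the goal.
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float('inf')

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(a_star(graph, h, 'A', 'D'))  # (['A', 'B', 'C', 'D'], 3)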

Q2: Why is A* better than BFS and DFS?


A: A* efficiently balances exploration and cost optimization, ensuring it finds the shortest path if
the heuristic function is admissible.

Q3: When is A* considered admissible?


A: A* is admissible if the heuristic function h(n) never overestimates the actual cost to the goal
(i.e., h(n) is optimistic).

Q4: Why does A* need an admissible heuristic?


A: An admissible heuristic ensures A* always finds the optimal path, avoiding suboptimal
solutions.

Q5: What is Recursive Best First Search (RBFS)?


A: RBFS is a memory-efficient version of A* that uses recursion to store only limited paths
instead of keeping all nodes in memory.

Q6: How does RBFS handle limited memory constraints?


A: RBFS replaces the least promising nodes with better alternatives, reducing memory usage
compared to standard A*.

Q7: What is a local maximum in search algorithms?


A: A local maximum is a suboptimal peak where all neighboring solutions have lower values,
preventing further improvement.

Q8: Name two methods to escape local maxima.


A: Simulated Annealing and Iterated Hill Climbing.
Q9: What is Iterated Hill Climbing?
A: Iterated Hill Climbing repeatedly runs Hill Climbing with different random starting points to
escape local maxima.

Q10: How does Iterated Hill Climbing improve standard Hill Climbing?
A: It avoids getting stuck in local maxima by restarting the search from different initial positions.

Q11: What is Simulated Annealing in AI?


A: Simulated Annealing is a probabilistic search method that allows occasional worse moves to
escape local maxima.

Q12: What is the role of temperature in Simulated Annealing?


A: Temperature controls the probability of accepting worse solutions, gradually decreasing over
time to refine the search.
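
A minimal Simulated Annealing sketch for a one-dimensional maximisation problem; the objective, cooling rate, and neighbour move are assumptions made for illustration:

import math
import random

def simulated_annealing(objective, start, temperature=10.0, cooling=0.95, steps=500):
    current = start
    for _ in range(steps):
        neighbour = current + random.uniform(-1, 1)      # small random move
        delta = objective(neighbour) - objective(current)
        # Always accept improvements; accept worse moves with probability e^(delta/T).
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = neighbour
        temperature *= cooling                           # gradually lower the temperature
    return current

random.seed(0)
print(round(simulated_annealing(lambda x: -(x - 3) ** 2, start=0.0), 2))  # close to 3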

Q13: What is a Genetic Algorithm?


A: A Genetic Algorithm (GA) is an optimization technique inspired by natural selection, using
selection, crossover, and mutation to evolve solutions.

Q14: Name three key operations in Genetic Algorithms.


A: Selection, Crossover, and Mutation.
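
A toy illustration of the three operators on bit-string individuals; the fitness function, population size, and mutation rate are assumptions, not a complete GA:

import random

def selection(population, fitness):
    # Tournament selection: pick the fitter of two randomly chosen individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    # Single-point crossover: splice the two parents at a random cut point.
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]

def mutation(individual, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in individual]

random.seed(1)
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
fitness = sum                     # toy fitness: number of 1-bits
parent1 = selection(population, fitness)
parent2 = selection(population, fitness)
child = mutation(crossover(parent1, parent2))
print(child)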

Q15: What is the Travelling Salesman Problem (TSP)?


A: The TSP is an optimization problem where a salesman must visit N cities exactly once and
return to the starting point while minimizing travel cost.

Q16: Why is TSP considered an NP-hard problem?


A: The number of possible routes grows exponentially with the number of cities, making it
computationally expensive to find the optimal solution.

Q17: How can Genetic Algorithms be used to solve TSP?


A: GA can optimize TSP by evolving better tour sequences through selection, crossover, and
mutation over multiple generations.

Q18: What type of crossover is commonly used in GA for TSP?


A: Order Crossover (OX) and Partially Mapped Crossover (PMX) are commonly used to
maintain valid city sequences.

UNIT IV GAME PLAYING, PLANNING AND CONSTRAINT SATISFACTION

Q1: What is an example of an AI-based board game?


A: Chess, Go, and Tic-Tac-Toe are examples of board games where AI is used to make
intelligent moves.
Q2: How does AI play board games?
A: AI plays board games using search algorithms like Minimax, Alpha-Beta pruning, and Monte
Carlo Tree Search (MCTS).

Minimax Algorithm

Q3: What is the Minimax algorithm?


A: Minimax is a decision-making algorithm used in two-player games to minimize the
opponent’s maximum possible gain.

Q4: What is the main assumption behind the Minimax algorithm?


A: It assumes that both players play optimally and try to maximize their own chances of
winning.
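
A minimal Minimax sketch over a small hand-built game tree; the tree and leaf values are assumptions for illustration, since real games generate moves dynamically:

def minimax(node, maximizing):
    # Leaves are numbers; internal nodes are lists of child subtrees.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX moves at the root; MIN chooses at the next level.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # 3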

Alpha-Beta Pruning

Q5: What is Alpha-Beta pruning?


A: Alpha-Beta pruning is an optimization technique for Minimax that eliminates unnecessary
branches, reducing computation time.

Q6: What is the advantage of Alpha-Beta pruning over Minimax?


A: It speeds up the Minimax algorithm by ignoring branches that will not affect the final
decision.
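
The same list-based game tree used in the Minimax sketch above, searched with Alpha-Beta pruning (again a sketch under the same assumptions):

def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # prune: MIN will never allow this branch
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                # prune: MAX will never allow this branch
    return value

game_tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(game_tree, maximizing=True))  # 3, examining fewer leaves than plain Minimax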

B* Search

Q7: What is B* Search?


A: B* Search is a best-first game-tree search algorithm that maintains optimistic and pessimistic
bounds on node values to decide which branches to explore and to prove that one move is best.

Q8: When is B* Search useful?


A: It is useful when the search space is large, and exact evaluation of all possible moves is
impractical.

Q9: What is the major limitation of search-based game-playing AI?


A: Search algorithms become computationally expensive as the game complexity increases.

Q10: How can heuristic functions help overcome search limitations?


A: Heuristics guide the search toward promising moves, reducing the number of nodes explored.

Q11: What does STRIPS stand for in AI?


A: STRIPS stands for Stanford Research Institute Problem Solver, a formal planning system.

Q12: What is the purpose of STRIPS in AI?


A: STRIPS represents planning problems using initial states, goal states, and operators to
achieve goals.
Q13: What is Forward State Space Planning?
A: It is a planning method that starts from the initial state and applies actions until the goal is
reached.

Q14: What is the main drawback of Forward State Space Planning?


A: It can be computationally expensive because it explores many possible future states.

Q15: What is Backward State Space Planning?


A: It starts from the goal state and works backward to find a sequence of actions leading to the
initial state.

Q16: When is Backward State Space Planning more efficient than Forward Planning?
A: When the number of backward possibilities is smaller than forward possibilities, reducing the
search space.

Q17: What is Goal Stack Planning?


A: Goal Stack Planning is a problem-solving approach where goals are pushed onto a stack and
solved sequentially.

Q18: What is a drawback of Goal Stack Planning?


A: It can lead to inefficient solutions if solving one goal undoes progress on another goal.

Q19: What is a Constraint Satisfaction Problem (CSP)?


A: A CSP is a problem where a solution must satisfy a set of constraints (e.g., Sudoku, Map
Coloring).

Q20: What are the three main components of a CSP?


A: Variables, Domains, and Constraints.

Q21: What is the N-Queens Problem?


A: The N-Queens problem requires placing N queens on an N×N chessboard so that no two
queens attack each other.

Q22: What is one approach to solving the N-Queens problem?


A: Backtracking is a common approach used to solve the N-Queens problem.
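
A compact backtracking sketch for N-Queens in Python; the board is represented as a list where the index is the row and the value is the column (a common but assumed representation):

def solve_n_queens(n):
    def safe(placement, row, col):
        # A position is safe if no earlier queen shares its column or a diagonal.
        return all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placement))

    def place(placement):
        row = len(placement)
        if row == n:
            return placement
        for col in range(n):
            if safe(placement, row, col):
                result = place(placement + [col])
                if result is not None:
                    return result
        return None   # backtrack

    return place([])

print(solve_n_queens(4))  # [1, 3, 0, 2]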

UNIT V PROPOSITIONAL LOGIC, FIRST ORDER LOGIC AND INFERENCING

Q1: What is formal logic in AI?


A: Formal logic is a system of reasoning using symbols and rules to represent and infer
knowledge systematically.

Q2: Name two types of formal logic used in AI.


A: Propositional Logic (PL) and First-Order Logic (FOL).
Q3: What is Propositional Logic (PL)?
A: PL is a logic system where statements (propositions) are either true or false, combined using
logical operators like AND (∧), OR (∨), and NOT (¬).

Q4: What is a truth table in Propositional Logic?


A: A truth table shows all possible truth values of a logical expression based on its inputs.
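
For instance, a truth table for (P ∧ Q) → R can be generated in Python with itertools; the formula chosen is an arbitrary example:

from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

print("P     Q     R     (P AND Q) -> R")
for p, q, r in product([True, False], repeat=3):
    print(p, q, r, implies(p and q, r), sep="  ")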

Q5: What is resolution in Propositional Logic?


A: Resolution is a rule of inference used to prove logical statements by eliminating
contradictions.

Q6: What is the purpose of the resolution rule?


A: It helps in automated theorem proving by deducing new clauses from existing ones.

Q7: What is First-Order Logic (FOL)?


A: FOL extends Propositional Logic by introducing quantifiers (∀, ∃) and predicates to express
complex relationships.

Q8: How does FOL differ from Propositional Logic?


A: FOL includes objects, predicates, and quantifiers, whereas Propositional Logic deals only
with simple statements.

Q9: What is Forward Chaining?


A: Forward Chaining starts from known facts and applies inference rules to derive new facts
until a goal is reached.
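
A tiny propositional forward-chaining sketch: rules are (premises, conclusion) pairs and facts are a set; the rule base below is an assumed example:

def forward_chaining(facts, rules, goal):
    # Repeatedly fire any rule whose premises are all known, adding its conclusion,
    # until the goal is derived or no new facts can be inferred.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
                if conclusion == goal:
                    return True
    return goal in facts

rules = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]
print(forward_chaining({"rain"}, rules, "slippery"))  # True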

Q10: Why is Forward Chaining incomplete in FOL?


A: Because it may generate an infinite number of conclusions, missing the most relevant ones
needed for the solution.

Q11: What is Resolution Refutation in FOL?


A: It is a proof technique that assumes the negation of a statement and derives a contradiction to
prove the original statement.

Q12: How does Resolution Refutation help in theorem proving?


A: It systematically eliminates clauses until a contradiction is found, proving the statement by
contradiction.

Q13: What is a Horn Clause in logic?


A: A Horn Clause is a disjunction of literals with at most one positive literal, commonly used in
logic programming.

Q14: What is SLD (Selective Linear Definite) Resolution?


A: SLD Resolution is a special case of resolution used in logic programming for efficient proof
search in Prolog.
Q15: What is Backward Chaining?
A: Backward Chaining starts from the goal and works backward by finding rules that support the
goal until known facts are reached.

Q16: How does Backward Chaining differ from Forward Chaining?


A: Backward Chaining starts from the goal and works backward, whereas Forward Chaining
starts from facts and moves forward.
