AI Answers

The document outlines four foundational approaches to AI: Acting Humanly (Turing Test), Thinking Humanly (Cognitive Modeling), Thinking Rationally (Laws of Thought), and Acting Rationally (Rational Agent). It also describes key components of AI systems including Perception, Knowledge Representation, Learning, Reasoning, Problem Solving, and Natural Language Processing. Additionally, it categorizes AI environments and discusses various search techniques such as Depth First Search and A* Search, along with their concepts, implementations, and performance measures.

1. Explain the four different approaches which form the foundations of AI.

1. Acting Humanly: The Turing Test Approach

 Goal: Create a system that behaves indistinguishably from a human.


 Turing Test: Proposed by Alan Turing, this test evaluates whether a machine can
exhibit intelligent behavior indistinguishable from a human. If a human interrogator
cannot tell whether they are interacting with a machine or a human, the machine is
said to have passed the test.
 Focus: External behavior (what the system does).
 Example: Chatbots like ChatGPT are designed to mimic human conversation.

2. Thinking Humanly: The Cognitive Modelling Approach

 Goal: Create a system that thinks like a human.


 Focus: Internal thought processes (how the system thinks).
 Method: Use cognitive science and psychology to model human reasoning, problem-
solving, and decision-making.
 Example: Cognitive architectures like ACT-R, which simulate human memory and
learning processes.

3. Thinking Rationally: The “Laws of Thought” Approach

 Goal: Create a system that uses logical reasoning to solve problems.


 Focus: Correct reasoning based on formal logic and rules.
 Method: Use mathematical logic to derive conclusions from premises.
 Example: Expert systems that use rule-based reasoning to diagnose diseases.

4. Acting Rationally: The Rational Agent Approach

 Goal: Create a system that acts to achieve the best possible outcome.
 Focus: Optimal decision-making based on goals and environment.
 Method: Use rationality to maximize performance measures.
 Example: Self-driving cars that make decisions to reach a destination safely and
efficiently.
2. How are the intelligent systems categorized?
3. What are the various components of an AI?

1. Perception

 AI systems scan the environment and identify objects within it.


 It uses sensors such as cameras, temperature sensors, and motion detectors to capture information.
 After gathering data, the system analyzes the objects and their relationships.
 Example: A self-driving car perceives roads, obstacles, and traffic lights using cameras and sensors.

2. Knowledge Representation

 The data collected from the environment needs to be stored in a structured format.
 This helps the AI system compare information, learn patterns, and make decisions.
 Example: A chatbot stores previous conversations to improve its responses.
 Techniques:
o Propositional Logic
o First-Order Logic

3. Learning

 Learning is a crucial part of AI, enabling it to adapt and improve over time.
 Types of Learning:
o Trial and Error (Reinforcement Learning): The system tries multiple actions, remembers successful
ones, and discards failures.
o Supervised Learning: The AI is trained using labeled data to generate correct outputs.
 Example: A spam filter learns from emails labeled as spam or non-spam.
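The supervised case can be sketched as a toy spam filter in pure Python; the tiny labeled dataset and the simple word-count scoring rule are assumptions for illustration, not a production method:

```python
# Toy supervised learning: count which words appear in spam vs. ham
# training emails, then label new text by the higher score.
from collections import Counter

training = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting at noon", "ham"),
    ("project status report", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    """Label the text by which class its words were seen in more often."""
    spam_score = sum(counts["spam"][w] for w in text.split())
    ham_score = sum(counts["ham"][w] for w in text.split())
    return "spam" if spam_score > ham_score else "ham"

print(classify("cheap money"))     # -> spam
print(classify("status report"))   # -> ham
```

A real filter would use a probabilistic model (e.g., naive Bayes) and far more data; the point here is only that labeled examples drive the learned behavior.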

4. Reasoning

 AI uses logic to generate conclusions based on facts and rules.


 Types of Reasoning:
o Deductive Reasoning: The AI applies general rules to reach conclusions.
o Inductive Reasoning: The AI observes patterns and derives general rules.
 Example: A medical diagnosis system analyzing symptoms to suggest a disease.
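Deductive reasoning can be sketched as simple forward chaining, where general rules are applied to known facts until no new conclusions follow; the symptom and conclusion names below are illustrative assumptions:

```python
# Forward chaining: each rule is (set of premises, conclusion).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "fatigue"}, "recommend_rest"),
]
facts = {"fever", "cough", "fatigue"}

changed = True
while changed:                      # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```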

5. Problem Solving

 AI solves problems by using search techniques and algorithms.


 It applies strategic moves to reach a goal efficiently.
 Example: Google Maps finds the shortest route by evaluating multiple paths.

6. Natural Language Processing (NLP)

 NLP enables AI to understand, process, and respond to human language.


 It involves speech recognition, text processing, and language translation.
 Example: Virtual assistants like Siri, Alexa, and Google Assistant use NLP to answer queries.

4. Define rational agent and explain the concept of rationality with suitable example.

 A rational agent is an agent that, for each possible percept sequence, selects the
action expected to maximize its performance measure, given the evidence from its
percepts and its built-in knowledge.
 Rationality refers to the ability of an agent to make decisions that lead to the
best possible outcome, i.e., maximize its performance measure, given the
information it has.
Example: a vacuum-cleaner agent.
 The vacuum cleaner detects dirt in a corner (percept).

 Using its knowledge base, it plans the most efficient path to clean the dirt while
avoiding obstacles.

 It moves to the corner and cleans the dirt (action).
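The percept-to-action cycle above can be sketched as a minimal reflex agent; the two-location world ("A" and "B") and the action names are assumptions for illustration:

```python
# Minimal vacuum agent: the percept is (location, status);
# the rational choice is to clean dirt, otherwise move to the other square.
def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(vacuum_agent(("A", "Dirty")))   # -> Suck
print(vacuum_agent(("A", "Clean")))   # -> Right
```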

5. Describe the different types of environments applicable to AI agents.

 Fully Observable vs. Partially Observable

 Fully Observable: The agent has complete access to all relevant information about the environment.
Example: Chess, where the entire board is visible.
 Partially Observable: The agent has limited or noisy perception of the environment.
Example: Self-driving cars, where sensors may not detect all obstacles.

 Single-Agent vs. Multi-Agent

 Single-Agent: Only one agent interacts with the environment.
Example: A robot vacuum cleaning a room.
 Multi-Agent: Multiple agents interact, which can be cooperative or competitive.
Example: A chess game with two competing AI players.

 Deterministic vs. Stochastic

 Deterministic: The outcome of an action is predictable.
Example: Solving a maze using a predefined algorithm.
 Stochastic: Outcomes are uncertain due to randomness or external influences.
Example: Self-driving cars facing unpredictable traffic.

 Episodic vs. Sequential


 Episodic: Each action is independent of past actions.
Example: Image classification, where each image is analyzed separately.
 Sequential: Past actions influence future decisions.
Example: Playing chess, where each move impacts the game’s future.

 Static vs. Dynamic

 Static: The environment does not change while the agent is making decisions.
Example: A Sudoku puzzle where numbers remain fixed.
 Dynamic: The environment changes while the agent operates.
Example: A self-driving car navigating changing traffic.

 Discrete vs. Continuous

 Discrete: The agent makes decisions from a finite set of options.
Example: Chess, where there are defined moves.
 Continuous: The agent has infinite possible actions.
Example: A robot arm adjusting its position continuously.

 Known vs. Unknown

 Known: The agent has full knowledge of how the environment works.
Example: Playing a board game with fixed rules.
 Unknown: The agent must learn how the environment works through exploration.
Example: A robot learning to navigate an unfamiliar terrain.

6. Classify the given agents according to their task environments.

7. What is meant by PEAS descriptor? Give PEAS description for a given agent.
8. Explain the Goal based agent with block diagram stating its pros and cons.

9. What are the basic building blocks of learning agent with a neat block diagram?

10. Explain the steps in problem formulation for state space with example.
11. Explain the concept, implementation, algorithm and performance measures for the
following search techniques with suitable examples:
(i) Depth First Search.
(ii) Breadth First Search.
(iii) Depth Limited Search.
(iv) Bidirectional Search.
(v) A* Search.

(i) Depth First Search (DFS)

Concept:

DFS is an algorithm for traversing or searching tree or graph data structures. It starts
at the root node (or any arbitrary node in the case of a graph) and explores as far as
possible along each branch before backtracking.

Implementation:
 Stack Data Structure: DFS uses a stack to keep track of the nodes to visit
next.

 Recursive Approach: DFS can be implemented recursively, where the call
stack serves as the stack.
Algorithm:
1. Start by pushing the root node onto the stack.

2. While the stack is not empty:


o Pop a node from the stack.

o If the node is the goal, return success.

o Push all unvisited neighbors of the node onto the stack.

3. If the stack is empty and the goal has not been found, return failure.

Performance Measures:
 Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges.

 Space Complexity: O(V), due to the stack storage.

Example:

Consider a graph with nodes A, B, C, D, E:

 Start at A.

 Visit A, then B, then D, then E, then C.
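The algorithm above can be sketched as an iterative, stack-based DFS; the adjacency list is an assumption, since the document only names the nodes A to E:

```python
# Assumed graph: A -> B, C;  B -> D, E  (C, D, E are leaves).
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": [],
    "D": [],
    "E": [],
}

def dfs(graph, start, goal):
    """Return the visit order up to the goal, or None if unreachable."""
    stack = [start]
    visited = set()
    order = []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:
            return order
        # Push neighbors in reverse so the first-listed neighbor is popped first.
        for nbr in reversed(graph[node]):
            if nbr not in visited:
                stack.append(nbr)
    return None

print(dfs(graph, "A", "C"))   # -> ['A', 'B', 'D', 'E', 'C']
```

With this adjacency list the visit order matches the example: A, B, D, E, then C.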

(ii) Breadth First Search (BFS)

Concept:

BFS is an algorithm for traversing or searching tree or graph data structures. It starts
at the root node and explores all the neighboring nodes at the present depth level
before moving on to nodes at the next depth level.

Implementation:
 Queue Data Structure: BFS uses a queue to keep track of the nodes to visit
next.

Algorithm:
1. Start by enqueueing the root node.

2. While the queue is not empty:


o Dequeue a node from the queue.

o If the node is the goal, return success.

o Enqueue all unvisited neighbors of the node.

3. If the queue is empty and the goal has not been found, return failure.

Performance Measures:
 Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges.
 Space Complexity: O(V), due to the queue storage.

Example:

Consider a graph with nodes A, B, C, D, E:

 Start at A.

 Visit A, then B and C, then D and E.
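The steps above can be sketched as a queue-based BFS; the adjacency list is an assumption, since the document only names the nodes A to E:

```python
from collections import deque

# Assumed graph: A -> B, C;  B -> D, E  (C, D, E are leaves).
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": [],
    "D": [],
    "E": [],
}

def bfs(graph, start, goal):
    """Return the visit order up to the goal, or None if unreachable."""
    queue = deque([start])
    visited = {start}
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        if node == goal:
            return order
        for nbr in graph[node]:
            if nbr not in visited:   # mark at enqueue time to avoid duplicates
                visited.add(nbr)
                queue.append(nbr)
    return None

print(bfs(graph, "A", "E"))   # -> ['A', 'B', 'C', 'D', 'E']
```

With this adjacency list the visit order matches the example: A, then B and C, then D and E.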

(iii) Depth Limited Search (DLS)

Concept:

DLS is a variation of DFS where the search is limited to a specified depth. This is
useful to prevent the search from going too deep in infinite or very large trees.

Implementation:
 Stack Data Structure: Similar to DFS but with a depth limit.

 Recursive Approach: Can be implemented recursively with a depth
parameter.

Algorithm:
1. Start by pushing the root node onto the stack with depth 0.

2. While the stack is not empty:


o Pop a node and its depth from the stack.

o If the node is the goal, return success.

o If the depth is less than the limit, push all unvisited neighbors onto the
stack with incremented depth.

3. If the stack is empty and the goal has not been found, return failure.

Performance Measures:
 Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges.

 Space Complexity: O(V), due to the stack storage.

Example:

Consider a graph with nodes A, B, C, D, E and a depth limit of 2:

 Start at A (depth 0).

 Visit A, then B (depth 1), then D (depth 2).
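A recursive sketch with a depth parameter, assuming the same kind of adjacency list as in the example (the document only names the nodes A to E):

```python
# Assumed graph: A -> B, C;  B -> D, E  (C, D, E are leaves).
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": [],
    "D": [],
    "E": [],
}

def dls(graph, node, goal, limit, depth=0, visited=None):
    """Return a path to the goal within the depth limit, or None."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    if depth == limit:
        return None          # cutoff: do not expand past the depth limit
    for nbr in graph[node]:
        if nbr not in visited:
            path = dls(graph, nbr, goal, limit, depth + 1, visited)
            if path is not None:
                return [node] + path
    return None

print(dls(graph, "A", "D", 2))   # -> ['A', 'B', 'D']
print(dls(graph, "A", "D", 1))   # -> None (D lies below the limit)
```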

(iv) Bidirectional Search


Concept:

Bidirectional search runs two simultaneous searches—one forward from the initial
state and the other backward from the goal—hoping that the two searches meet in
the middle.

Implementation:
 Two Queues: One for the forward search and one for the backward search.

 Intersection Check: Check if the current node in the forward search is in the
backward search's visited set and vice versa.

Algorithm:
1. Initialize two queues: one for the forward search and one for the backward
search.

2. While both queues are not empty:


o Perform a step in the forward search.

o Perform a step in the backward search.

o If the current node in the forward search is in the backward search's
visited set, return success.

o If the current node in the backward search is in the forward search's
visited set, return success.

3. If either queue is empty and the goal has not been found, return failure.

Performance Measures:
 Time Complexity: O(b^(d/2)), where b is the branching factor and d is the
depth of the shallowest solution.

 Space Complexity: O(b^(d/2)), due to the storage of both searches.

Example:

Consider a graph with nodes A, B, C, D, E:

 Forward search starts at A.

 Backward search starts at E.

 They meet at C.
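A sketch of the two alternating frontiers; the undirected graph below is an assumption chosen so that the searches meet at C, as in the example:

```python
from collections import deque

# Assumed undirected graph: A-B, A-C, B-D, C-E.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "E"],
    "D": ["B"],
    "E": ["C"],
}

def bidirectional_search(graph, start, goal):
    """Expand one node per frontier per round; return the meeting node."""
    if start == goal:
        return start
    fwd_visited, bwd_visited = {start}, {goal}
    fwd_q, bwd_q = deque([start]), deque([goal])
    while fwd_q and bwd_q:
        for q, visited, other in ((fwd_q, fwd_visited, bwd_visited),
                                  (bwd_q, bwd_visited, fwd_visited)):
            if not q:
                continue
            node = q.popleft()
            for nbr in graph[node]:
                if nbr in other:
                    return nbr          # the two searches meet here
                if nbr not in visited:
                    visited.add(nbr)
                    q.append(nbr)
    return None

print(bidirectional_search(graph, "A", "E"))   # -> C
```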

(v) A* Search

Concept:

A* is a best-first search algorithm that finds the least-cost path from a given initial
node to one goal node. It uses a heuristic function to estimate the cost from the
current node to the goal.
Implementation:
 Priority Queue: A* uses a priority queue to prioritize nodes with the lowest
estimated total cost.

 Heuristic Function: An admissible heuristic function h(n) that estimates the
cost to reach the goal from node n.

Algorithm:
1. Initialize the open list with the start node and its f-score (g + h).

2. While the open list is not empty:


o Pop the node with the lowest f-score.

o If the node is the goal, return the path.

o Generate the node's successors and calculate their f-scores.

o For each successor, if it is not already in the open list, or the new path
reaches it with a lower f-score than before, add or update it in the open list.

3. If the open list is empty and the goal has not been found, return failure.

Performance Measures:
 Time Complexity: O(b^d), where b is the branching factor and d is the depth
of the solution.

 Space Complexity: O(b^d), due to the storage of the open list.

Example:
Consider a graph with nodes A, B, C, D, E and heuristic values:
 Start at A.

 Visit A, then B (lowest f-score), then D, then E (goal).
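A priority-queue sketch of the algorithm; the edge costs and heuristic values below are assumptions (the document names the nodes but not the weights), with h chosen to be admissible:

```python
import heapq

# Assumed weighted graph and heuristic estimates of the cost to reach E.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 2), ("E", 5)],
    "C": [("E", 3)],
    "D": [("E", 1)],
    "E": [],
}
h = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}

def a_star(graph, start, goal):
    """Return (cost, path) of a least-cost path, or None."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # lowest f-score first
        if node == goal:
            return g, path
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):    # better path to nbr
                best_g[nbr] = g2
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

print(a_star(graph, "A", "E"))   # -> (4, ['A', 'B', 'D', 'E'])
```

With these assumed weights the expansion order matches the example: A, then B (lowest f-score), then D, then E.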

Each of these search techniques has its own strengths and weaknesses, making them
suitable for different types of problems and constraints.

12. Differentiate between informed search and uninformed search algorithms.


13. Explain Admissibility in A* search method.
A heuristic h(n) is admissible if it never overestimates the true cost of reaching the goal from
node n, i.e., h(n) ≤ h*(n) for every node n. When the heuristic is admissible, A* is guaranteed to
find an optimal (least-cost) solution. If the heuristic is not admissible, A* may not find the
optimal solution, as it could prioritize a path that appears shorter but is actually costlier.
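The admissibility condition can be checked directly; the true remaining costs h*(n) and the two candidate heuristics below are assumed values for illustration:

```python
# h*(n): true least cost from each node to the goal (assumed, computed by hand).
true_cost_to_goal = {"A": 4, "B": 3, "D": 1, "E": 0}
admissible_h = {"A": 4, "B": 2, "D": 1, "E": 0}     # never overestimates
inadmissible_h = {"A": 6, "B": 2, "D": 1, "E": 0}   # overestimates at A

def is_admissible(h, h_star):
    """A heuristic is admissible iff h(n) <= h*(n) for every node."""
    return all(h[n] <= h_star[n] for n in h_star)

print(is_admissible(admissible_h, true_cost_to_goal))    # -> True
print(is_admissible(inadmissible_h, true_cost_to_goal))  # -> False
```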

14. Explain the Memory bounded Heuristic searches.


15. Sums based on uninformed search techniques and informed search techniques.
