AI Assignment 1 Q and A

1) Explain different definitions of artificial intelligence according to different categories.


Artificial Intelligence (AI) is a broad field with various definitions, categorized based on
different perspectives. Here are some key definitions across different categories:
1. Based on Capabilities:
• Narrow/Weak AI: Systems designed and trained for a specific task or a narrow range
of tasks. Examples include virtual assistants, image recognition systems, and speech
recognition software.
• General/Strong AI: Hypothetical systems with the ability to understand, learn, and
apply knowledge across a wide range of tasks, similar to human intelligence. General
AI remains a theoretical goal.
2. Based on Functionality:
• Expert Systems: Rule-based systems that mimic the decision-making abilities of a
human expert in a particular domain. They use a knowledge base of facts and
heuristics to draw conclusions.
• Learning Systems: Systems that can learn from data and improve their performance
over time. This category includes machine learning and deep learning systems.
• Robotic Systems: Intelligent systems integrated with physical robots, capable of
performing tasks in the physical world. Examples include industrial robots and
autonomous vehicles.
• Natural Language Processing (NLP) Systems: Systems that can understand, interpret,
and generate human language. This includes chatbots, language translation, and
sentiment analysis applications.
3. Based on Architecture:
• Symbolic AI: Systems that rely on symbols and rules for representation and processing
of information. Expert systems often fall into this category.
• Connectionist AI: Systems that model intelligence based on the interconnected
processing of simple units, often inspired by neural networks. Deep learning is a
prominent example of connectionist AI.
4. Based on Learning Approach:
• Supervised Learning: Algorithms trained on labelled data, where the input and
corresponding output pairs are provided during training.
• Unsupervised Learning: Algorithms trained on unlabelled data, aiming to find
patterns and relationships within the data without explicit guidance.
• Reinforcement Learning: Agents learn by interacting with an environment, receiving
feedback in the form of rewards or penalties based on their actions.
5. Based on Application Domain:
• Medical AI: Systems designed for healthcare applications, such as disease diagnosis,
drug discovery, and personalized medicine.
• Financial AI: Systems applied to financial tasks, including algorithmic trading, fraud
detection, and risk assessment.
• Educational AI: Systems developed to enhance learning experiences, provide
personalized education, and support educational processes.
2) Differentiate Programming without and with AI.
• Problem-solving: without AI, programs rely on human problem-solving abilities; with AI, they utilize AI-based problem-solving techniques.
• Decision-making: without AI, decision-making processes are human-driven; with AI, decisions are driven by algorithms.
• Data processing: without AI, data is processed manually; with AI, data processing is automated using AI algorithms.
• Complexity: without AI, handling of complex problems is limited; with AI, complex tasks and problems are handled efficiently.
• Learning: without AI, learning capabilities are limited; with AI, continuous learning and improvement are facilitated by AI models.
• Predictability: without AI, predictability is limited in the absence of data analysis; with AI, predictability is enhanced through thorough data analysis.
• Customization: without AI, customization is manual and based on human input; with AI, customization is automated and driven by data and preferences.
• Scalability: without AI, scalability is limited without automation; with AI, scalability is achieved through automation.
• Speed: without AI, processing speed is constrained by human capabilities; with AI, execution and processing are faster.
• Innovation: without AI, progress is driven by human innovation and creativity; with AI, AI-powered innovation can lead to breakthroughs.

3) Define Intelligent Agent. What are the characteristics of an intelligent agent?


An Intelligent Agent is a fundamental concept in artificial intelligence and computer science.
It refers to a software entity that perceives its environment, reasons about the information it
receives, and takes actions to achieve specific goals or objectives. Intelligent agents are
designed to operate autonomously, making decisions and adapting their behaviour based on
their observations and interactions with the environment.
Characteristics of Intelligent Agents:
1. Perception:
Agents have the ability to perceive their environment using sensors. This could involve
gathering information through various means such as cameras, microphones, or other types
of sensors.
2. Reasoning:
Intelligent agents can reason about the information they have gathered and make decisions
based on their goals and objectives. This often involves some form of problem-solving or
decision-making algorithms.
3. Actuation:
Agents can take actions in their environment based on their reasoning and decision-making
processes. This might involve physical actions (in the case of robots) or virtual actions (in the
case of software agents).
4. Learning:
Many intelligent agents have the ability to learn from experience. This can include learning
from past interactions with the environment or adapting to changes in the environment over
time.
5. Autonomy:
Intelligent agents are often designed to operate autonomously, meaning they can make
decisions and take actions without direct human intervention. However, the level of
autonomy can vary depending on the specific application.
6. Adaptability:
Intelligent agents are designed to adapt to changes in their environment. This adaptability is
crucial for dealing with dynamic and unpredictable situations.
7. Mobility:
Some intelligent agents can move across computer networks. For example, agents engaged in e-commerce may travel from machine to machine, gathering information until their search parameters are satisfied.
8. Goal-Oriented:
Intelligent agents carry out the particular tasks specified by a user's statement of goals. They can move from one machine to another, react to their environment, and take the initiative needed to exhibit goal-directed behaviour.
9. Independent:
An intelligent agent is self-reliant, in the sense that it functions on its own without human intervention.
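The perceive-reason-act cycle behind these characteristics can be sketched in Python with a hypothetical thermostat agent. The class, goal, and action names below are illustrative assumptions, not taken from the document:

```python
# A minimal sketch of the perceive-reason-act cycle: the agent perceives a
# temperature, reasons against its goal, and returns an action.

class ThermostatAgent:
    """Hypothetical agent that keeps a room at a target temperature."""

    def __init__(self, target_temp):
        self.target_temp = target_temp  # the agent's goal

    def perceive(self, environment):
        # Perception: read a value from the environment (a sensor stand-in).
        return environment["temperature"]

    def reason(self, temperature):
        # Reasoning: compare the percept against the goal.
        if temperature < self.target_temp:
            return "heat_on"
        elif temperature > self.target_temp:
            return "heat_off"
        return "idle"

    def act(self, environment):
        # Actuation: return the chosen action (a real agent would apply it).
        return self.reason(self.perceive(environment))

agent = ThermostatAgent(target_temp=21)
print(agent.act({"temperature": 18}))  # heat_on
print(agent.act({"temperature": 25}))  # heat_off
```

This is only a sketch of the cycle; a real agent would add the learning and adaptability components described above.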
4) Define Rationality and Rational Agent. Give an example of rational action performed by any
intelligent agent.
Rationality:
Rationality is the cognitive capacity to make decisions that lead to the optimal outcome given
the available information and a set of logical principles. It involves consistent decision-making
while avoiding contradictions and irrational choices. Rationality is fundamental in decision
theory and is a key concept in artificial intelligence.
Rational Agent:
A rational agent is an autonomous entity capable of perceiving its environment, reasoning
about its actions, and selecting the actions that maximize its expected utility or achieve its
goals. Rational agents operate within dynamic environments where they must adapt and
make decisions in pursuit of their objectives.
Example of Rational Action by a Self-Driving Car:
Consider a self-driving car as a rational agent with the goal of safely transporting passengers
to their destination.
Here's a more detailed breakdown of its rational action:
1. Perception of the Environment:
• The self-driving car utilizes a variety of sensors including cameras, LiDAR, radar, and GPS
to gather real-time data about its surroundings.
• It identifies and tracks other vehicles, pedestrians, traffic signs, traffic lights, road
conditions, and obstacles.
2. Reasoning about Actions:
• The car processes the environmental data and constructs a representation of the current
traffic situation and road conditions.
• It assesses various factors such as traffic density, speed limits, the behavior of nearby
vehicles, and the presence of pedestrians.
3. Decision-Making Process:
• Drawing from its perception and reasoning, the self-driving car evaluates different courses
of action to reach its destination safely and efficiently.
• It considers factors like traffic regulations, the shortest or fastest route, potential hazards,
and the comfort of passengers.
4. Optimal Action Selection:
• Based on its evaluation, the self-driving car selects the most rational action to execute.
• For example, it may stop at red traffic lights, yield to pedestrians at crosswalks, merge into
traffic lanes when safe, and maintain a safe following distance from other vehicles.
5) Explain Model based Reflex agent with block diagram & Explain Goal Based with block
diagram.
❖ Model-Based Reflex Agents
It works by finding a rule whose condition matches the current situation. A model-based agent
can handle partially observable environments by the use of a model about the world. The
agent has to keep track of the internal state which is adjusted by each percept and that
depends on the percept history. The current state is stored inside the agent which maintains
some kind of structure describing the part of the world which cannot be seen.
Updating the state requires information about:
• How the world evolves independently of the agent.
• How the agent's actions affect the world.
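A minimal Python sketch of a model-based reflex agent, using a toy two-square vacuum world as the internal model. The locations, statuses, and rules are illustrative assumptions:

```python
# A model-based reflex agent: it keeps an internal state (a model of the
# squares it has seen), updates that state with each percept, and then
# finds a rule whose condition matches the current situation.

class ModelBasedReflexAgent:
    def __init__(self):
        self.model = {}  # internal state: location -> "clean" or "dirty"

    def update_state(self, percept):
        # Adjust the internal state with each percept.
        location, status = percept
        self.model[location] = status
        return location, status

    def rule_match(self, location, status):
        # Condition-action rules for a two-square world ("A" and "B").
        if status == "dirty":
            return "suck"
        return "move_right" if location == "A" else "move_left"

    def act(self, percept):
        location, status = self.update_state(percept)
        return self.rule_match(location, status)

agent = ModelBasedReflexAgent()
print(agent.act(("A", "dirty")))  # suck
print(agent.act(("A", "clean")))  # move_right
print(agent.model)                # the agent's remembered view of the world
```

The `model` dictionary is what lets the agent cope with partial observability: it remembers squares it can no longer see.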

❖ Goal-Based Agents
These kinds of agents take decisions based on how far they are currently from their goal
(description of desirable situations). Their every action is intended to reduce their distance
from the goal. This allows the agent a way to choose among multiple possibilities, selecting
the one which reaches a goal state. The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents more flexible. They usually require
search and planning. The goal-based agent’s behaviour can easily be changed.
6) Explain Utility based agent with block Diagram & Learning Agent with block diagram.
❖ Utility-Based Agents
Agents designed around the usefulness (utility) of their outcomes are called utility-based
agents. When there are multiple possible alternatives, utility-based agents are used to decide
which one is best: they choose actions based on a preference (utility) for each state.
Sometimes achieving the desired goal is not enough. We may look for a quicker, safer,
cheaper trip to reach a destination. Agent happiness should be taken into consideration.
Utility describes how “happy” the agent is. Because of the uncertainty in the world, a utility
agent chooses the action that maximizes the expected utility. A utility function maps a state
onto a real number which describes the associated degree of happiness.
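Choosing the action that maximizes expected utility can be sketched in a few lines. The routes, probabilities, and utility values below are invented purely for illustration:

```python
# Expected-utility maximization: for each action, weight the utility of
# each possible outcome by its probability, then pick the best action.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Three hypothetical routes to a destination, each with uncertain outcomes.
routes = {
    "highway":  [(0.9, 10), (0.1, -50)],  # fast, but small risk of a jam
    "backroad": [(1.0, 6)],               # slower but certain
    "shortcut": [(0.5, 12), (0.5, -5)],   # a gamble
}

print(choose_action(routes))  # backroad (expected utilities: 4.0, 6.0, 3.5)
```

Note how the certain but modest "backroad" beats the higher-payoff alternatives once their risks are weighed in, which is exactly the point of a utility function over uncertain outcomes.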

❖ Learning Agent
A learning agent in AI is an agent that can learn from its past experiences. It starts
acting with basic knowledge and is then able to act and adapt
automatically through learning. A learning agent has mainly four conceptual components,
which are:
1. Learning element: It is responsible for making improvements by learning from the
environment.
2. Critic: The learning element takes feedback from critics which describes how well the
agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action.
4. Problem Generator: This component is responsible for suggesting actions that will lead
to new and informative experiences.
7) Explain with example various uninformed searching Techniques.
Uninformed search techniques have no information about the goal node beyond what is
provided in the problem definition; plans to reach the goal state from the start state differ
only in the order and length of actions. In other words, an uninformed search algorithm does
not use additional information to guide the search process. Instead, it explores the search
space in a systematic but blind manner, without considering the cost of reaching the goal or
the likelihood of finding a solution.
Following are the various types of uninformed search algorithms:
1. Breadth-first Search
• Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first
search.
• The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
• Time Complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
• Space Complexity: O(b^d)
• Example:
In the below tree structure, we have shown the traversing of the tree using BFS
algorithm from the root node S to goal node K. BFS search algorithm traverse in layers,
so it will follow the path which is shown by the dotted arrow, and the traversed path
will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
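The traversal above can be sketched in Python with a FIFO queue. The adjacency list is reconstructed from the layer-by-layer order in the example, so the exact tree shape is an assumption:

```python
# Breadth-first search over a tree, expanding the shallowest node first
# using a FIFO queue of partial paths.

from collections import deque

def bfs(graph, start, goal):
    queue = deque([[start]])      # FIFO queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()    # shallowest frontier node comes out first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Tree reconstructed so that BFS visits S, A, B, C, D, G, H, E, F, I, K.
tree = {
    "S": ["A", "B", "C"],
    "A": ["D"], "B": ["G", "H"], "C": [],
    "D": [], "G": ["E", "F"], "H": ["I"],
    "E": [], "F": ["K"], "I": [],
}
print(bfs(tree, "S", "K"))  # ['S', 'B', 'G', 'F', 'K']
```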
2. Depth-first Search
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called the depth-first search because it starts from the root node and follows each
path to its greatest depth node before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to the BFS algorithm.
• Time Complexity: O(b^m), where m is the maximum depth of any node; m can be
much larger than d, the depth of the shallowest solution.
• Space Complexity: O(bm)
• Example:
In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:
Root node--->Left node ----> right node.
It will start searching from root node S and traverse A, then B, then D and E. After
traversing E, it backtracks, because E has no successors and the goal node has not yet
been found. After backtracking it traverses node C and then G, where it terminates
because the goal node is found.
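A minimal recursive DFS sketch matching the order described above. The tree shape is reconstructed from the example, so it is an assumption:

```python
# Depth-first search: follow each path to its greatest depth before
# moving to the next path (root -> left subtree -> right subtree).

def dfs(graph, node, goal, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    for child in graph.get(node, []):  # left child first, then right
        result = dfs(graph, child, goal, path)
        if result:                     # goal found somewhere below
            return result
    return None                        # dead end: triggers backtracking

# Tree reconstructed so DFS visits S, A, B, D, E, backtracks, then C, G.
tree = {
    "S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
    "C": ["G"], "D": [], "E": [], "G": [],
}
print(dfs(tree, "S", "G"))  # ['S', 'C', 'G']
```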

3. Depth-limited Search
A depth-limited search algorithm is similar to depth-first search with a predetermined
limit. Depth-limited search can solve the drawback of the infinite path in the Depth-first
search. In this algorithm, a node at the depth limit is treated as if it has no further
successors.
Depth-limited search can be terminated with two Conditions of failure:
• Standard failure value: It indicates that the problem does not have any solution.
• Cut-off failure value: It defines no solution for the problem within a given depth limit.
• Time Complexity: O(b^ℓ), where ℓ is the depth limit.
• Space Complexity: O(bℓ)
• Example:
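Since the original example figure is not reproduced, here is a minimal Python sketch of depth-limited search on an illustrative tree, showing both failure values described above:

```python
# Depth-limited search: DFS that treats nodes at the depth limit as
# having no successors. Returns a path, "cutoff" (limit prevented a
# solution), or None (standard failure: no solution at all).

def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"      # cut-off failure: limit reached before the goal
    cutoff = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff else None  # None = standard failure

tree = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dls(tree, "A", "D", limit=2))  # ['A', 'B', 'D']
print(dls(tree, "A", "D", limit=1))  # cutoff (goal lies below the limit)
print(dls(tree, "A", "Z", limit=3))  # None (no solution exists)
```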
4. Uniform cost search
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge. The
primary goal of the uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost. Uniform-cost search expands nodes according to their path costs
from the root node. It can be used to solve any graph/tree where the optimal cost is in
demand. A uniform-cost search algorithm is implemented by the priority queue. It gives
maximum priority to the lowest cumulative cost. Uniform cost search is equivalent to BFS
algorithm if the path cost of all edges is the same.
• Time Complexity: O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the minimum edge cost.
• Space Complexity: O(b^(1 + ⌊C*/ε⌋))
• Example:
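A minimal uniform-cost search sketch using a priority queue (Python's heapq), expanding the node with the lowest cumulative path cost first. The weighted graph is illustrative:

```python
# Uniform-cost search: a priority queue ordered by cumulative path cost,
# so the cheapest frontier node is always expanded first.

import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]   # (cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # cheapest first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(
                    frontier, (cost + step_cost, neighbour, path + [neighbour])
                )
    return None

# Illustrative weighted graph: the direct S->A->G route costs 10, but
# UCS finds the cheaper S->A->B->G route of cost 4.
graph = {
    "S": [("A", 1), ("B", 5)],
    "A": [("B", 2), ("G", 9)],
    "B": [("G", 1)],
    "G": [],
}
print(ucs(graph, "S", "G"))  # (4, ['S', 'A', 'B', 'G'])
```

With all edge costs equal, this behaves exactly like BFS, as the text notes.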

5. Iterative deepening depth-first search


• The iterative deepening algorithm is a combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit and does it by gradually increasing the
limit until a goal is found.
• This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.
• This search algorithm combines the completeness of breadth-first search with the
memory efficiency of depth-first search.
• Iterative deepening is a useful uninformed search strategy when the search space is
large and the depth of the goal node is unknown.
• Time Complexity: O(b^d)
• Space Complexity: O(bd)
• Example:

1st Iteration --> A
2nd Iteration --> A, B, C
3rd Iteration --> A, B, D, E, C, F, G
4th Iteration --> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will
find the goal node.
6. Bidirectional Search
Bidirectional search runs two simultaneous searches: one from the initial state (forward
search) and one from the goal node (backward search). It replaces a single search graph
with two smaller subgraphs, one starting from the initial vertex and the other from the
goal vertex. The search stops when the two frontiers intersect.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
• Time Complexity: O(b^(d/2))
• Space Complexity: O(b^(d/2))
• Example:
In the below search tree, bidirectional search algorithm is applied. This algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the
forward direction and starts from goal node 16 in the backward direction.
The algorithm terminates at node 9 where two searches meet.

8) Consider the graph given in the figure. Assume that the initial state is A and the goal state
is G. Find a path from the initial state to the goal state using DFS. also report the solution
cost.

To find a path from the initial state A to the goal state G using Depth-First Search (DFS), we
explore as far as possible along each branch before backtracking. Here's how the search
proceeds:
DFS Exploration:
1. Start at Root node A.
2. Visit unvisited neighbours: B, C.
3. Choose B and explore further: E, D, F, H.
• Reach a dead end at E (no unvisited neighbours).
• Backtrack to B.
• Traverse through D, F, H
• Reach a dead end at H (no unvisited neighbours).
• Backtrack to F then again backtrack to D.
4. Explore D: C, G.
• Traverse to C.
• Reach goal state G. (goal state found!).

Solution Path:

A -> B -> E -> B -> D -> F -> H -> F -> D -> C -> G


Solution Cost:

The cost is calculated by summing the weights of the edges in the path. However, the provided
information doesn't specify edge weights. Assuming all edges have a weight of 1, the solution
cost would be:
Cost = 1(A-B) + 1(B-E) + 1(E-B) + 1(B-D) + 1(D-F) + 1(F-H) + 1(H-F) + 1(F-D) + 1(D-C) + 1(C-G)
Cost = 10
The DFS algorithm is non-optimal: it can take many steps or incur a high cost to reach the
goal node. Even in this example, the shortest path is A -> C -> G, which costs only 2.

9) Define Heuristic function and value with example of block world problem.
A heuristic function, often denoted as h(n), is a function used in search algorithms to estimate
the cost or distance from a given state (node) to the goal state in a problem space. The
heuristic function provides a "guess" or "estimate" of how close a given state is to the goal,
helping guide the search towards more promising paths. The value returned by the heuristic
function is known as the heuristic value.
Example: Block World Problem
The Block World Problem is a classic artificial intelligence problem where the goal is to
rearrange a set of blocks on a table to match a target configuration. The blocks can be stacked
on top of each other, and the goal state specifies the desired arrangement of blocks.
Let's define a heuristic function for the Block World Problem:
• Heuristic Function (h(n)): The heuristic function estimates the number of blocks that are
out of place in the current state compared to the goal state.
• Heuristic Value: The heuristic value represents the estimated number of blocks that need
to be moved to reach the goal state.

Example:
Consider the following initial state and goal state in the Block World Problem:
Initial State and Goal State: (the block-configuration figures from the original are not reproduced here)

In this example, blocks A, B, C are out of place compared to the goal state. Therefore, the
heuristic function would estimate that the distance to the goal state is 3 (the number of
misplaced blocks).
So, the heuristic value (h(n)) for this initial state would be 3.
The search algorithm (such as A* search) uses this heuristic value to guide the exploration of
the problem space. It prioritizes states with lower heuristic values, as they are likely to be
closer to the goal state. This helps improve the efficiency of the search algorithm by focusing
on more promising paths towards the goal.

10) Difference between informed search and uninformed search.


• Known as: informed search is also known as heuristic search; uninformed search is also known as blind search.
• Use of knowledge: informed search uses domain knowledge to guide the searching process; uninformed search does not.
• Performance: informed search finds a solution more quickly; uninformed search finds a solution slowly in comparison.
• Completeness: informed search may or may not be complete, depending on the heuristic; uninformed strategies such as BFS and iterative deepening are complete.
• Cost factor: cost is low for informed search; cost is high for uninformed search.
• Time: informed search consumes less time because of quick searching; uninformed search consumes more time because of slow searching.
• Direction: informed search is given guidance about the solution; uninformed search receives no such suggestion.
• Implementation: informed search is less lengthy to implement; uninformed search is lengthier.
• Efficiency: informed search is more efficient, since it considers cost and performance; the incurred cost is less and solutions are found quickly. Uninformed search is comparatively less efficient, as the incurred cost is higher and finding a solution is slower.
• Computational requirements: lower for informed search; comparatively higher for uninformed search.
• Size of search problems: informed search has wide scope for handling large search problems; solving a massive search task with uninformed search is challenging.
• Example algorithms: informed: greedy search, A* search, AO* search, hill climbing. Uninformed: breadth-first search (BFS), depth-first search (DFS), uniform-cost search (UCS), depth-limited search, iterative deepening depth-first search, bidirectional search.
