AI Assignment 1 Q and A
❖ Goal-Based Agents
These agents make decisions based on how far they currently are from their goal
(a description of desirable situations). Every action is intended to reduce the distance
to the goal. This gives the agent a way to choose among multiple possibilities, selecting
the one that reaches a goal state. The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents more flexible. They usually require
search and planning, and the goal-based agent's behaviour can easily be changed.
6) Explain Utility based agent with block Diagram & Learning Agent with block diagram.
❖ Utility-Based Agents
Utility-based agents are agents whose design is built around a measure of usefulness
(utility). When there are multiple possible alternatives, a utility-based agent is used to
decide which one is best. They choose actions based on a preference (utility) for each
state. Sometimes achieving the desired goal is not enough; we may look for a quicker, safer,
or cheaper trip to reach a destination. Agent happiness should be taken into consideration:
utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility
agent chooses the action that maximizes the expected utility. A utility function maps a state
onto a real number that describes the associated degree of happiness.
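Picking the action with the highest expected utility can be sketched in a few lines. The states, actions, transition probabilities, and utility values below are illustrative assumptions, not taken from the text:

```python
# Hypothetical sketch of expected-utility maximization.

def expected_utility(action, transitions, utility):
    """Sum of P(s' | action) * U(s') over possible outcome states."""
    return sum(p * utility[s] for s, p in transitions[action].items())

def best_action(actions, transitions, utility):
    """Choose the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, transitions, utility))

# Two routes to a destination: "highway" is usually fast but risky,
# "back_road" is reliably moderate (assumed numbers).
transitions = {
    "highway":   {"fast": 0.7, "jam": 0.3},
    "back_road": {"moderate": 1.0},
}
utility = {"fast": 10, "jam": -5, "moderate": 6}

# highway: 0.7*10 + 0.3*(-5) = 5.5; back_road: 6.0
print(best_action(["highway", "back_road"], transitions, utility))
```

Even though the highway is often faster, its expected utility (5.5) is lower than the back road's (6), so the agent prefers the safer option.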
❖ Learning Agent
A learning agent in AI is an agent that can learn from its past experiences, i.e., it has
learning capabilities. It starts acting with basic knowledge and then adapts
automatically through learning. A learning agent has four main conceptual components,
which are:
1. Learning element: It is responsible for making improvements by learning from the
environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action.
4. Problem Generator: This component is responsible for suggesting actions that will lead
to new and informative experiences.
7) Explain with example various uninformed searching Techniques.
Uninformed search techniques have no additional information about the goal node other
than what is provided in the problem definition. The plans to reach the goal state from the
start state differ only in the order and length of actions. Uninformed search in AI refers to a
type of search algorithm that does not use additional information to guide the search process.
Instead, these algorithms explore the search space in a systematic but blind manner, without
considering the cost of reaching the goal or the likelihood of finding a solution.
Following are the various types of uninformed search algorithms:
1. Breadth-first Search
• Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first
search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor
nodes at the current level before moving to the nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
• Time Complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
• Space Complexity: O(b^d)
• Example:
In the below tree structure, we have shown the traversing of the tree using BFS
algorithm from the root node S to goal node K. BFS search algorithm traverse in layers,
so it will follow the path which is shown by the dotted arrow, and the traversed path
will be:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
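The layer-by-layer traversal above can be sketched with a FIFO queue. The tree below is an assumed shape consistent with the example's visiting order (S at the root, K the goal):

```python
from collections import deque

# Minimal BFS sketch; the tree structure is an assumption matching the
# example traversal S, A, B, C, D, G, H, E, F, I, K.
tree = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["G", "H"],
    "C": [], "D": [],
    "G": ["E", "F"],
    "H": ["I"],
    "E": [], "F": [],
    "I": ["K"],
    "K": [],
}

def bfs(start, goal, graph):
    queue = deque([start])
    visited = []
    while queue:
        node = queue.popleft()      # FIFO: oldest (shallowest) node first
        visited.append(node)
        if node == goal:
            return visited
        queue.extend(graph[node])   # expand all successors of this node
    return None

print(" ---> ".join(bfs("S", "K", tree)))
```

Because the queue is first-in first-out, all nodes of one level are expanded before any node of the next level, giving exactly the layered order shown above.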
2. Depth-first Search
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called the depth-first search because it starts from the root node and follows each
path to its greatest depth node before moving to the next path.
• DFS uses a stack (LIFO) data structure for its implementation.
• The overall process of the DFS algorithm is similar to BFS, but nodes are expanded in depth order rather than level order.
• Time Complexity: O(b^m), where m is the maximum depth of any node; this can be
much larger than d (the shallowest solution depth).
• Space Complexity: O(bm), since only the current path and its unexpanded siblings are stored.
• Example:
In the below search tree, we have shown the flow of depth-first search, and it will
follow the order as:
Root node--->Left node ----> right node.
It will start searching from root node S, and traverse A, then B, then D and E, after
traversing E, it will backtrack the tree as E has no other successor and still goal node is
not found. After backtracking it will traverse node C and then G, and here it will
terminate as it found goal node.
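The traversal described above can be sketched recursively. The tree below is an assumed shape consistent with the stated visiting order (S, A, B, D, E, backtrack, then C, G):

```python
# Minimal recursive DFS sketch; the tree is an assumption matching the
# example's order: S, A, B, D, E (dead end, backtrack), C, G (goal).
tree = {
    "S": ["A"],
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["G"],
    "D": [], "E": [], "G": [],
}

def dfs(node, goal, graph, order=None):
    if order is None:
        order = []
    order.append(node)
    if node == goal:
        return order
    for child in graph[node]:            # follow each path to its greatest depth
        result = dfs(child, goal, graph, order)
        if result is not None:           # goal found somewhere below
            return result
    return None                          # dead end: backtrack

print(dfs("S", "G", tree))
```

The recorded order includes the dead ends D and E, reflecting that DFS exhausts one branch before backtracking to try the next.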
3. Depth-limited Search
A depth-limited search algorithm is similar to depth-first search with a predetermined
depth limit. Depth-limited search removes the drawback of infinite paths in depth-first
search. In this algorithm, a node at the depth limit is treated as if it has no successor
nodes.
Depth-limited search can be terminated with two Conditions of failure:
• Standard failure value: It indicates that problem does not have any solution.
• Cut-off failure value: It defines no solution for the problem within a given depth limit.
• Time Complexity: O(b^ℓ), where ℓ is the depth limit.
• Space Complexity: O(bℓ).
• Example:
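The example figure is not reproduced here. As a minimal sketch on an assumed small tree, a depth-limited search that distinguishes the two failure values might look like:

```python
# Depth-limited search sketch; the tree is an illustrative assumption.
# Returns the path to the goal, "cutoff" when the depth limit was hit,
# or "failure" when the whole tree was searched without success.
tree = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E"],
    "C": [], "D": [],
    "E": ["G"],
    "G": [],
}

def dls(node, goal, graph, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                  # cut-off failure: limit reached
    cutoff_seen = False
    for child in graph[node]:
        result = dls(child, goal, graph, limit - 1)
        if result == "cutoff":
            cutoff_seen = True           # part of the tree lies below the limit
        elif result != "failure":
            return [node] + result
    # standard failure only when no subtree was cut off
    return "cutoff" if cutoff_seen else "failure"

print(dls("S", "G", tree, limit=2))      # goal is at depth 3, so the limit cuts it off
print(dls("S", "G", tree, limit=3))
```

With limit 2 the search returns the cut-off value (the goal lies deeper than the limit); with limit 3 it finds the path S, B, E, G.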
4. Uniform cost search
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge. The
primary goal of the uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost. Uniform-cost search expands nodes in order of their path cost
from the root node. It can be used to solve any graph/tree where the optimal cost is in
demand. A uniform-cost search algorithm is implemented with a priority queue that gives
highest priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS
algorithm when the path cost of all edges is the same.
• Time Complexity: O(b^(1 + [C*/ε])), where C* is the cost of the optimal solution and ε is the minimum edge cost.
• Space Complexity: O(b^(1 + [C*/ε]))
• Example:
1st Iteration --> A
2nd Iteration --> A, B, C
3rd Iteration --> A, B, D, E, C, F, G
4th Iteration --> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the goal node.
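The priority-queue mechanism can be sketched as follows. The weighted graph below is an illustrative assumption, not the graph from the example figure:

```python
import heapq

# Uniform-cost search sketch with a priority queue keyed on cumulative
# path cost. The weighted graph is an assumption for illustration.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 3), ("E", 1)],
    "C": [("G", 2)],
    "D": [],
    "E": [("G", 1)],
    "G": [],
}

def uniform_cost_search(start, goal, graph):
    frontier = [(0, start, [start])]     # (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest cost first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step in graph[node]:
            heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

print(uniform_cost_search("A", "G", graph))
```

Here the direct-looking route A -> C -> G costs 6, but UCS finds A -> B -> E -> G with cumulative cost 3, because it always pops the cheapest frontier node first.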
5. Bidirectional Search
The bidirectional search algorithm runs two simultaneous searches, one from the initial
state (the forward search) and the other from the goal node (the backward search), to find
the goal node. Bidirectional search replaces one single search graph with two small
subgraphs: one starts the search from the initial vertex and the other from the goal vertex.
The search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
• Time Complexity: O(b^(d/2)), since each of the two searches only needs to reach half the solution depth.
• Space Complexity: O(b^(d/2))
• Example:
In the below search tree, the bidirectional search algorithm is applied. This algorithm
divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the
forward direction and from goal node 16 in the backward direction.
The algorithm terminates at node 9 where two searches meet.
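A bidirectional BFS sketch follows. The undirected graph is an assumption loosely modelled on the example (forward from node 1, backward from node 16, meeting at node 9):

```python
# Bidirectional BFS sketch on an assumed undirected graph; the two
# frontiers are grown alternately until they intersect.
edges = [(1, 2), (1, 3), (2, 4), (3, 5), (4, 9), (5, 9),
         (9, 10), (10, 12), (12, 14), (14, 16), (16, 15)]
graph = {}
for u, v in edges:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def bidirectional_search(start, goal, graph):
    forward, backward = {start}, {goal}
    while forward and backward:
        meeting = forward & backward
        if meeting:                        # the two searches intersect
            return meeting.pop()
        # Grow the smaller side by one BFS layer (keeping visited nodes).
        if len(forward) <= len(backward):
            forward = {n for f in forward for n in graph[f]} | forward
        else:
            backward = {n for b in backward for n in graph[b]} | backward
    return None

print("searches meet at node", bidirectional_search(1, 16, graph))
```

On this assumed graph the forward and backward layers first share node 9, mirroring the example's termination point.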
8) Consider the graph given in the figure. Assume that the initial state is A and the goal state
is G. Find a path from the initial state to the goal state using DFS. Also report the solution
cost.
To find a path from the initial state A to the goal state G using Depth-First Search (DFS), we
explore as far as possible along each branch before backtracking. Here's how the search
proceeds:
DFS Exploration:
1. Start at the root node A.
2. Visit its unvisited neighbours: B, C.
3. Choose B and explore further: E, D, F, H.
• Reach a dead end at E (no unvisited neighbours).
• Backtrack to B.
• Traverse through D, F, H.
• Reach a dead end at H (no unvisited neighbours).
• Backtrack to F, then again backtrack to D.
4. Explore D: C, G.
• Traverse to C.
• Reach goal state G (goal state found!).
Solution Path:
A -> B -> E -> B -> D -> F -> H -> F -> D -> C -> G
Solution Cost:
The cost is calculated by summing the weights of the edges in the path. However, the provided
information doesn't specify edge weights. Assuming all edges have a weight of 1, the solution
cost would be:
Cost = 1(A-B) + 1(B-E) + 1(E-B) + 1(B-D) + 1(D-F) + 1(F-H) + 1(H-F) + 1(F-D) + 1(D-C) + 1(C-G)
Cost = 10
The DFS search algorithm is non-optimal, as it may take a large number of steps or incur a
high cost to reach the goal node; even in this example, the shortest path is A -> C -> G, which costs only 2.
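Since the original figure is not reproduced, the adjacency below is reconstructed from the traversal description, with unit edge weights as assumed in the cost calculation. A sketch that records every edge traversed, including backtracking moves, reproduces the walk above:

```python
# DFS walk sketch; the adjacency is an assumption reconstructed from the
# described traversal (unit edge weights assumed, as in the text).
graph = {
    "A": ["B", "C"],
    "B": ["E", "D"],
    "E": [],
    "D": ["F", "C", "G"],
    "F": ["H"],
    "H": [],
    "C": ["G"],
    "G": [],
}

def dfs_walk(node, goal, graph, visited=None, walk=None):
    if visited is None:
        visited, walk = set(), [node]
    visited.add(node)
    if node == goal:
        return walk
    for child in graph[node]:
        if child not in visited:
            walk.append(child)
            result = dfs_walk(child, goal, graph, visited, walk)
            if result is not None:
                return result
            walk.append(node)        # backtracking also traverses an edge
    return None

walk = dfs_walk("A", "G", graph)
print(" -> ".join(walk))
print("cost:", len(walk) - 1)        # 10 unit-weight edge traversals
```

Counting each traversed edge (forward and backtracking) at weight 1 gives the cost of 10 reported above.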
9) Define Heuristic function and value with example of block world problem.
A heuristic function, often denoted as h(n), is a function used in search algorithms to estimate
the cost or distance from a given state (node) to the goal state in a problem space. The
heuristic function provides a "guess" or "estimate" of how close a given state is to the goal,
helping guide the search towards more promising paths. The value returned by the heuristic
function is known as the heuristic value.
Example: Block World Problem
The Block World Problem is a classic artificial intelligence problem where the goal is to
rearrange a set of blocks on a table to match a target configuration. The blocks can be stacked
on top of each other, and the goal state specifies the desired arrangement of blocks.
Let's define a heuristic function for the Block World Problem:
• Heuristic Function (h(n)): The heuristic function estimates the number of blocks that are
out of place in the current state compared to the goal state.
• Heuristic Value: The heuristic value represents the estimated number of blocks that need
to be moved to reach the goal state.
Example:
Consider the following initial state and goal state in the Block World Problem:
Initial State:
Goal State:
In this example, blocks A, B, C are out of place compared to the goal state. Therefore, the
heuristic function would estimate that the distance to the goal state is 3 (the number of
misplaced blocks).
So, the heuristic value (h(n)) for this initial state would be 3.
The search algorithm (such as A* search) uses this heuristic value to guide the exploration of
the problem space. It prioritizes states with lower heuristic values, as they are likely to be
closer to the goal state. This helps improve the efficiency of the search algorithm by focusing
on more promising paths towards the goal.
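The misplaced-blocks heuristic can be sketched in a few lines. Since the original figures are not reproduced, the initial and goal configurations below are assumed; each list is a stack read from bottom to top:

```python
# Sketch of the "misplaced blocks" heuristic for the Block World Problem.
# The configurations are assumed examples in which A, B, C are all out
# of place, so h(n) = 3 as in the text.

def misplaced_blocks(state, goal):
    """Count blocks whose (stack, height) position differs from the goal."""
    positions = {block: (i, h) for i, stack in enumerate(state)
                 for h, block in enumerate(stack)}
    goal_pos = {block: (i, h) for i, stack in enumerate(goal)
                for h, block in enumerate(stack)}
    return sum(1 for block in positions
               if positions[block] != goal_pos[block])

initial = [["C", "A", "B"]]   # one stack: C at the bottom, B on top
goal    = [["A", "B", "C"]]   # target stack: A at the bottom, C on top

print(misplaced_blocks(initial, goal))
```

A search algorithm such as A* would call this function on each generated state and prefer states with lower heuristic values.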