QUESTION BANK
ARTIFICIAL INTELLIGENCE (KCS-071)
Unit 1
Introduction to AI
2. Humans store data and recall it by patterns. Machines store and recall information by using searching algorithms.
3. Humans can identify objects even if they are incomplete, partially missing, or distorted. Machines cannot identify an object correctly if some part of it is missing.
0. What are the different branches of AI? Discuss some of the branches and the progress made in their fields.
The different branches of AI and their progress in these fields are:
1. Machine Learning (ML): ML is a method where the target is defined and the steps to reach that target are learned by the ML algorithm. Example: identifying different objects such as apples and oranges. The goal is achieved when the machine correctly identifies the objects after training on multiple pictures.
2. Natural Language Processing (NLP): it is defined as the automatic manipulation of natural language, such as speech and text, by agents or software. Example: identification of spam mails, improving the mail system.
3. Computer Vision: This branch captures and identifies visual information using a camera, analog-to-digital conversion, and digital signal processing.
4. Robotics: this branch focuses on the design and manufacture of robots, which are used to make our lives easier and to reach places that are difficult for humans to reach. Examples: surgery at hospitals, cleaner robots, serving robots, the manufacturing industry, etc.
The organization of a computer vision system is highly application dependent. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation.
A boom of AI (1980-1987)
● Year 1980: After AI's winter duration, AI came back with an "Expert System". Expert
systems were programmed to emulate the decision-making ability of a human expert.
Additionally, Symbolics Lisp machines were brought into commercial use, marking the
onset of an AI resurgence. However, in subsequent years, the Lisp machine market
experienced a significant downturn.
The second AI winter (1987-1993)
● The period between 1987 and 1993 was the second AI winter.
The emergence of intelligent agents (1993-2011)
● Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating world chess champion Garry Kasparov, marking the first time a computer triumphed over a reigning world chess champion.
● Year 2002: for the first time, AI entered the home in the form of Roomba, a vacuum
cleaner.
● Year 2005: Stanley, the self-driving car developed by Stanford University, won the DARPA Grand Challenge.
● Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
● Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test."
● Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee Sedol
in Seoul, South Korea.
● Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of producing
images based on textual prompts.
● Year 2022 and onwards: In November 2022, OpenAI launched ChatGPT, offering a chat-oriented interface to its GPT-3.5 LLM.
0. What is an intelligent agent? Describe the basic kinds of agent programs.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Agents consist of sensors that perceive the environment; then, based on certain conditions, actuators take actions in the environment.
Percept - the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has ever perceived. If we can specify the agent's choice of action for every possible percept sequence, then we have said more or less everything there is to say about the agent. Mathematically, the agent's behaviour is described by an agent function that maps any given percept sequence to an action. Internally, the agent function for an artificial agent will be implemented by an agent program.
A Rational Agent is one that does the right thing: conceptually speaking, every entry in the table for the agent function is filled out correctly. Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing? As a first approximation, we will say that the right action is the one that will cause the agent to be most successful. A Performance Measure embodies the criteria for the success of an agent's behaviour.
Knowledge of the current state of the environment is not always sufficient to decide what an agent should do. The agent needs to know its goal, which describes desirable situations. For example, at a road junction, the taxi can turn left, turn right, or go straight; the correct decision depends on where the taxi is trying to go. Goal-based agents expand the capabilities of the model-based agent by having the "goal" information. They choose an action so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such considerations of different scenarios are called searching and planning, which make an agent proactive. The agent program can combine this
with information about the results of possible actions in order to choose actions that achieve
the goal. Sometimes goal-based action selection is straightforward, when goal satisfaction
results immediately from a single action. Sometimes it will be more tricky, when the agent has
to consider long sequences of twists & turns to find a way to achieve the goal.
A goal-based agent in principle could reason that if the car in front has its brake lights on, it will slow down. Given the way the world usually evolves, the only action that will achieve the goal of not hitting other cars is to brake. Although the goal-based agent appears less efficient, it is more
flexible because the knowledge that supports its decisions is represented explicitly & can be
modified. If it starts to rain, the agent can update its knowledge of how efficiently its brakes will
operate, this will automatically cause all of the relevant behaviour to be altered to suit the new
condition.
A utility function maps a state onto a real number which describes the associated degree of
happiness. A complete specification of the utility function allows rational decisions in two kinds
of cases where goals are inadequate. FIRST, when there are conflicting goals, only some of which
can be achieved, the utility function specifies the appropriate tradeoff. SECOND, when there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goal.
Obviously the program we choose should be appropriate for the architecture. E.g., if a program is going to recommend actions like Walk, the architecture had better have legs. In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.
The agent programs that we will design all share the same skeleton/structure:
1. They take the current percept as input from the sensors.
2. They return an action to the actuators.
The agent program takes the current percept as input because nothing more is available from the environment. If the agent's actions depend on the entire percept sequence, the agent will have to remember the percepts.
Goals help organize behavior by limiting the objectives that the agent is trying to achieve and
hence the actions it needs to consider. Problem formulation is the process of deciding what
actions and states to consider, given a goal. Problem formulation should follow goal formulation
because without a clear goal in mind, you cannot effectively define the parameters of a problem,
including which aspects are important to consider and which can be ignored
4. What is Problem Space? How can a problem be defined as state space search?
A "problem space" refers to the entire set of possible states and actions within a given problem,
essentially representing all the potential configurations a problem can be in, while "defining a
problem as a state space search" means representing that problem as a process of navigating
through this space of states to find a solution by moving from an initial state to a desired goal
state, using defined actions to transition between them; essentially, it's a method of systematically
exploring all possible options to reach the optimal solution within the problem space.
• Analyzing the Problem: The problem and its requirements must be analyzed carefully, as a few features can have an immense impact on the resulting solution.
• Identification of Solutions: This phase generates a reasonable number of candidate solutions to the given problem within a particular range.
• Choosing a Solution: From all the identified solutions, the best solution is chosen based on the results produced by the respective solutions.
6. What is Heuristic search? Give the desirable properties of Heuristic search algorithms.
Heuristic search strategies utilize domain-specific information, or heuristics, to guide the search towards more promising paths. Here we have knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.
· Accuracy:
The heuristic function should provide a close approximation of the actual cost to reach the goal,
leading to efficient search direction.
· Consistency:
As the search progresses towards the goal, the estimated cost should never increase, ensuring the
algorithm doesn't backtrack unnecessarily.
· Admissible Heuristic:
The heuristic function should never overestimate the cost to reach the goal, guaranteeing that the
algorithm will find an optimal solution.
· Completeness:
The algorithm should be able to find a solution if one exists, regardless of the search space
complexity.
· Optimality:
Ideally, the algorithm should find the optimal solution or a solution very close to optimal,
balancing efficiency with accuracy.
· No prior information
Uninformed search algorithms do not use any prior knowledge about the problem domain or
heuristics to guide their search.
· Systematic exploration
Uninformed search algorithms explore the problem space systematically, starting from an initial
state and generating all possible successors until reaching a goal state.
· Guaranteed solution
Uninformed search algorithms guarantee a solution if a possible solution exists for that problem.
· Easy to implement
Uninformed search algorithms are easy to implement because they do not require any additional
domain knowledge.
Some examples of uninformed search algorithms include: Breadth-first search (BFS), Depth-first
search (DFS), Uniform cost search (UCS), and Bidirectional search.
Yes. If there is more than one solution, BFS can find the minimal one, i.e., the one requiring the fewest steps, because BFS visits nodes in order of increasing depth; the first time it encounters the goal node, it has found the shortest path.
BFS (Breadth-First Search) uses a Queue data structure for finding the shortest path; DFS (Depth-First Search) uses a Stack data structure.
BFS works on the concept of FIFO (First In, First Out); DFS works on the concept of LIFO (Last In, First Out).
BFS is comparatively slower than DFS; DFS is comparatively faster than BFS.
BFS requires more memory than DFS; DFS requires less memory than BFS.
10. Prove that breadth-first search is a special case of uniform cost search.
Breadth-First Search (BFS) is considered a special case of Uniform Cost Search (UCS) because when every edge in the search graph has a uniform cost of 1, the behavior of UCS exactly matches BFS: it explores nodes level by level, prioritizing nodes at shallower depths first, just like BFS does (as the sketch below illustrates).
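The following minimal Python sketch (an illustration, not part of the original answer; the graph representation as a dict of neighbour lists is assumed) shows why: with every edge cost fixed at 1, the path cost g(n) equals the depth of n, so the priority queue dequeues nodes in exactly BFS's level-by-level order.

import heapq

def uniform_cost_search(graph, start, goal, cost=lambda u, v: 1):
    # graph: dict mapping node -> iterable of neighbours
    frontier = [(0, start, [start])]          # (path cost g, node, path)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for nbr in graph[node]:
            if nbr not in explored:
                heapq.heappush(frontier, (g + cost(node, nbr), nbr, path + [nbr]))
    return None

# With cost(u, v) == 1 for every edge, g(n) equals depth(n), so nodes are
# dequeued in order of increasing depth -- exactly the order BFS expands them.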
11. What is Depth-Limited Search (DLS), and what is its termination condition?
Depth-Limited Search (DLS) is a variation of Depth-First Search (DFS) that imposes a limit L on
the depth of the search. It explores the search tree up to depth L and stops further exploration
beyond this depth.
Depth-limited search removes the drawback of infinite paths in depth-first search. It can terminate with two kinds of failure:
• Standard failure value: indicates that the problem does not have any solution.
• Cutoff failure value: indicates that there is no solution for the problem within the given depth limit.
IDS overcomes the drawbacks of Depth-Limited Search by incrementally increasing the depth
limit L until a solution is found:
· No need for prior knowledge of L: Unlike DLS, IDS does not require an appropriate depth limit
to be known beforehand.
· Completeness: IDS guarantees finding a solution, even if the solution lies deeper than initially
expected.
· Revisits Nodes: Although it revisits nodes at shallower depths multiple times, the time cost is
justified by its completeness and optimality properties.
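Iterative deepening can be sketched in a few lines of Python (an illustration only; the graph is assumed to be a dict mapping each node to a list of successors):

def depth_limited_search(graph, node, goal, limit):
    # Depth-limited DFS: returns a path to the goal, or None on cutoff/failure.
    if node == goal:
        return [node]
    if limit == 0:
        return None                       # cutoff reached at this depth
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    # Retry DLS with limits L = 0, 1, 2, ... until a solution appears.
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None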
13. What is Bidirectional Search, and how does it differ from traditional search strategies?
Bidirectional Search is a graph search algorithm that simultaneously explores the search space from both the starting point and the goal point, effectively "meeting in the middle" to find the shortest path. Unlike traditional search strategies, which only explore from the start toward the goal, it can be significantly faster because it reduces the portion of the search space that must be explored.
14. Discuss A* search techniques. Prove that A* is complete and optimal. Justify with an
example.
Proof of Completeness:
· A* expands nodes in order of increasing f(n) = g(n) + h(n). If the branching factor is finite and every step cost is at least some ε > 0, then only finitely many nodes can have f(n) no greater than the optimal solution cost, so A* must eventually expand a goal node if a solution exists. Moreover, since h(n) is admissible, A* will expand every node on the optimal path before exploring nodes with a higher f(n).
A* is Optimal. The first condition we require for optimality is that h(n) be an admissible
heuristic.
•A heuristic function h(n) is admissible if for every node n, h(n)<=h*(n), where h*(n) is the true
cost to reach the goal state from n.
• An admissible heuristic h(n) is one that never overestimates the cost to reach the goal.
Admissible heuristics are by nature optimistic because they think the cost of solving the problem
is less than it actually is.
• For example, Straight-line distance is admissible because the shortest path between any two
points is a straight line, so the straight line cannot be an overestimate.
A second, slightly stronger condition called consistency (or monotonicity) is required only for
applications of A∗ to graph search.
• A heuristic h(n) is consistent if, for every node n and every successor n’ of n generated by any
action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting
to n’ plus the estimated cost of reaching the goal from n’: h(n) ≤ c(n, a, n’) + h(n’)
This is a form of the general triangle inequality, which stipulates that each side of a triangle
cannot be longer than the sum of the other two sides.
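A compact Python sketch of A* (illustrative only; it assumes the graph is a dict mapping each node to a list of (neighbour, step cost) pairs and that h is an admissible heuristic function):

import heapq

def a_star(graph, h, start, goal):
    frontier = [(h(start), 0, start, [start])]    # (f = g + h, g, node, path)
    best_g = {start: 0}                           # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                        # nodes expanded in increasing f-order
        for nbr, step in graph[node]:
            g2 = g + step
            if g2 < best_g.get(nbr, float("inf")):   # cheaper route to nbr found
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None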
15. Explain the AO* algorithm.
The AO* algorithm is based on AND-OR graphs, which break complex problems into smaller ones that are then solved. An AND branch of the graph represents tasks that must all be completed together to reach the goal, while an OR branch represents alternatives, any single one of which suffices.
The AO* algorithm uses knowledge-based search, where both the start and target states are predefined. By leveraging heuristics, it identifies the best path, significantly reducing time complexity. Compared to the A* algorithm, AO* is more efficient when solving problems represented as AND-OR graphs, as it evaluates multiple paths at once.
The search begins at the initial node and explores its child nodes based on the type (AND or OR).
The costs associated with nodes are calculated using the following principles:
· For OR Nodes: The algorithm considers the lowest cost among the child nodes. The cost for an
OR node can be expressed as:
C(n)=min{C(c1),C(c2),…,C(ck)}
where C(n) is the cost of node n and c1,c2,…,ck are the child nodes of n.
· For AND Nodes: The algorithm computes the cost of all child nodes and selects the maximum
cost, as all conditions must be met. The cost for an AND node can be expressed as:
C(n) = max{C(c1), C(c2), …, C(ck)}
where C(n) is the cost of node n, and c1,c2,…,ck are the child nodes of n.
f(n) = C(n) + h(n)
where:
· C(n) is the actual cost to reach node n from the start node, and
· h(n) is the estimated (heuristic) cost of reaching the goal from node n.
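As a tiny illustration of the two cost rules above (the numbers are made up; note that some AO* formulations sum the AND children's costs instead of taking the maximum):

def or_cost(child_costs):
    return min(child_costs)     # OR node: the cheapest alternative suffices

def and_cost(child_costs):
    return max(child_costs)     # AND node: all parts are required (max rule above)

print(or_cost([3, 7]), and_cost([3, 7]))   # prints: 3 7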
16. Define CSP and Discuss about backtracking search for CSPs.
CSP describes a way to solve a wide variety of problems efficiently by using a factored representation for states: a state is represented by a set of variables, each of which has a value. A problem is solved when each variable has a value that satisfies all the constraints on the variable. A problem described this way is called a constraint satisfaction problem.
A constraint satisfaction problem consists of three components, X, D, and C:
X is a set of variables, {X1, . . . , Xn}.
D is a set of domains, {D1, . . . , Dn}, one for each variable Xi.
C is a set of constraints that specify allowable combinations of values.
Depth-first search for CSPs with single-variable assignments is called backtracking search. The term is used for a depth-first search that chooses a value for one variable at a time and backtracks when a variable has no legal values left to assign. It repeatedly chooses an unassigned variable and then tries all values in that variable's domain in turn, trying to extend the assignment to a solution. If an inconsistency is detected, it returns failure, causing the previous call to try another value (a sketch follows below).
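A minimal Python sketch of backtracking search for CSPs (illustrative only: the variable ordering is naive and the constraint check is passed in as a function over partial assignments):

def backtracking_search(variables, domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):            # every variable assigned
        return assignment
    var = next(v for v in variables if v not in assignment)   # pick a variable
    for value in domains[var]:
        assignment[var] = value
        if constraints(assignment):                  # consistent so far: recurse
            result = backtracking_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                          # backtrack and try next value
    return None

# Example: 2-colouring the path A-B-C so that adjacent variables differ.
edges = [("A", "B"), ("B", "C")]
ok = lambda a: all(a[x] != a[y] for x, y in edges if x in a and y in a)
print(backtracking_search(["A", "B", "C"], {v: ["red", "green"] for v in "ABC"}, ok))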
Constraint Satisfaction Problems (CSPs) can be classified based on the types of constraints
imposed on the variables
1. Unary Constraints:
o Involve a single variable.
o Example: X1 ≠ 3 (the variable X1 cannot take the value 3).
2. Binary Constraints:
o Relate two variables.
o Example: X1 ≠ X2.
3. Higher-order Constraints:
o Involve three or more variables.
o Example: Alldiff(X1, X2, X3).
Based on strictness:
1. Hard Constraints:
o Constraints that must always be satisfied for a solution to be valid.
2. Soft Constraints:
o Constraints that can be violated but have a cost associated with violation.
o Example: Scheduling tasks such that no two workers overlap (some overlap may be allowed with a penalty).
Based on the nature of the domains:
1. Discrete Constraints:
o Example: X1 ∈ {1, 2, 3}
2. Continuous Constraints:
o Example: X1 ∈ [0, 1]
Problem:
SEND
+ MORE
……………
MONEY
Steps:
1. Since the sum of two four-digit numbers has five digits, M = 1.
2. In the thousands column, S + M (+carry) must produce the carry digit O, so S = 9 and O = 0.
3. Hundreds column: 1 + E + 0 = N, so N = E + 1 …(1)
4. Tens column: N + R (+1) = E + 10 …(2)
From (1) and (2): N + R (+1) = N - 1 + 10, so R (+1) = 9; therefore R = 8 (taking a carry of 1 from the units column).
5. D + E should be such that it generates a carry, and D + E >= 12 (since Y != 0, 1).
Assume Y = 2, so D + E = 12, with D != 8, 9.
6. N + 8 + 1 = 15, so N = 6; hence E = 5 and D = 7.
Letter Code
S      9
E      5
N      6
D      7
M      1
O      0
R      8
Y      2
19. Evaluate the Constraint Satisfaction problem with an algorithm for solving a
Cryptarithmetic problem
CROSS
+ ROADS
= DANGER
20. Evaluate the Constraint Satisfaction problem with an algorithm for solving a
Cryptarithmetic problem
BASE
+ BALL
= GAMES
EAT
+ THAT
= APPLE
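For puzzles like these, a brute-force Python sketch can serve as a baseline (illustrative only; enumerating digit permutations may take a few seconds, and a real CSP solver would instead prune with constraint propagation):

from itertools import permutations

def solve(word1, word2, total):
    letters = sorted(set(word1 + word2 + total))    # distinct letters (at most 10)
    leading = {word1[0], word2[0], total[0]}        # leading letters cannot be 0
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if any(a[ch] == 0 for ch in leading):
            continue
        value = lambda w: int("".join(str(a[ch]) for ch in w))
        if value(word1) + value(word2) == value(total):
            return a
    return None

print(solve("SEND", "MORE", "MONEY"))
# {'D': 7, 'E': 5, 'M': 1, 'N': 6, 'O': 0, 'R': 8, 'S': 9, 'Y': 2}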
22. Describe the concept of local search algorithms. Provide an example of an optimization
problem and explain how local search algorithms can be applied to solve it.
Local search in AI refers to a family of optimization algorithms that are used to find the best
possible solution within a given search space. Unlike global search methods that explore the
entire solution space, local search algorithms focus on making incremental changes to
improve a current solution until they reach a locally optimal or satisfactory solution. This
approach is useful in situations where the solution space is vast, making an exhaustive search
impractical.
Example: N-Queens
State: in local search, a state is a complete candidate solution, good or bad. Here we place at most one queen per column, so the number of possible states is 8^8.
· The best possible state (the real solution) will have 0 attacks.
So the objective function here should minimize the number of attacks until it becomes 0.
•Hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
•The algorithm does not maintain a search tree, so the data structure for the current node need
only record the state and the value of the objective function.
• Hill climbing is sometimes called greedy local search with no backtracking because it grabs
a good neighbor state without looking ahead beyond the immediate neighbors of the current
state.
• Hill climbing often makes rapid progress toward a solution because it is usually quite easy to
improve a bad state.
•Local Maximum: a state which is better than its neighbor states, but there is also another state which is higher than it.
•Global Maximum: the best possible state of the state space landscape. It has the highest value of the objective function.
•Current state: a state in a landscape diagram where an agent is currently present.
•Flat local maximum: a flat space in the landscape where all the neighbor states of the current state have the same value.
•Local Maximum: A local maximum is a peak that is higher than each of its neighboring
states but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of
a local maximum will be drawn upward toward the peak but will then be stuck with nowhere
else to go.
•Ridge : Ridges result in a sequence of local maxima that is very difficult for greedy
algorithms to navigate.
Plateau:
A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from
which no uphill exit exists, or a shoulder, from which progress is possible. A hill-climbing
search might get lost on the plateau. In each case, the algorithm reaches a point at which no
progress is being made.
•Solution: We can allow sideways moves in the hope that the plateau is really a shoulder. But, if we always allow sideways moves when there are no uphill moves, an infinite loop will occur whenever the algorithm reaches a flat local maximum that is not a shoulder. One common solution is to put a limit on the number of consecutive sideways moves allowed.
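A steepest-descent hill-climbing sketch for the 8-queens formulation above, written as minimisation of the attack count (illustrative only; in practice random restarts are added to escape local minima and plateaus):

import random

def attacks(state):
    # state[i] = row of the queen in column i; count attacking pairs
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n=8):
    state = [random.randrange(n) for _ in range(n)]    # random initial state
    while True:
        current = attacks(state)
        if current == 0:
            return state                  # objective reached: 0 attacks
        best, best_val = None, current
        for col in range(n):              # best neighbour: move one queen in its column
            for row in range(n):
                if row != state[col]:
                    nbr = state[:]
                    nbr[col] = row
                    val = attacks(nbr)
                    if val < best_val:
                        best, best_val = nbr, val
        if best is None:
            return None                   # stuck at a local minimum or plateau
        state = best

print(hill_climb())   # a solution such as [4, 6, 1, 5, 2, 0, 3, 7], or None if stuck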
24. What challenges arise when dealing with partial observations in search problems?
When dealing with partial observations in search problems, the primary challenge is that an
agent cannot fully understand the current state of the environment, leading to uncertainty
about which path to take, requiring complex strategies to manage this uncertainty, including:
· Maintaining belief states:
The agent needs to maintain a representation of all possible states it could be in based on its limited observations, which can become computationally expensive as the problem space grows.
· Decision-making under uncertainty:
Choosing the next action requires considering the potential outcomes of each option based on the current belief state, often involving probabilistic calculations to make informed decisions.
· Adapting search algorithms:
Standard search algorithms like BFS or DFS need modifications to handle partial observations, often requiring additional mechanisms to explore multiple possible paths and update the belief state as new information becomes available.
· Dealing with information gaps:
The agent may need to actively gather more information through sensing actions to reduce
uncertainty and make better decisions.
25. Compare and contrast Simple hill climbing with steepest Ascent hill climbing.
26. Explain the simulated annealing (SA) algorithm. How does it avoid local optima?
• In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state.
• Inspired by metallurgy, SA permits bad moves to states with a lower value, which lets the search escape states that lead only to local optima.
· When T is high, the probability of accepting a worse state is higher, allowing the algorithm to
explore more freely. As T decreases, the probability of accepting worse solutions decreases,
focusing more on exploitation and less on exploration.
· In the early stages (high temperature), the algorithm explores a wide range of solutions, even
accepting worse solutions to escape local optima. In the later stages (low temperature), it focuses
on refining the solution by accepting only better solutions.
Annealing Schedule:
· The schedule controls how the temperature T decreases over time. In the early stages, the
temperature is high, allowing more exploration (accepting worse solutions). Over time, as the
temperature lowers, the algorithm becomes more focused on improving the solution.When the
temperature reaches zero, the search ends, and the algorithm returns the current state as the
best-found solution.
· The rate at which the temperature decreases is critical. A slow decrease (cooling schedule) allows
more exploration, which may yield better results but takes longer. A fast decrease leads to quicker
convergence but increases the risk of getting stuck in a local optimum.
Probabilistic Acceptance:
· Unlike greedy algorithms, simulated annealing sometimes accepts worse solutions, making it less
likely to get trapped in local minima and more likely to find a global optimum.
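The behaviour described above is usually implemented with the Metropolis acceptance rule (a common formulation consistent with the text; here delta is the change in objective value for a maximisation problem, so delta < 0 means a worse move):

import math, random

def accept(delta, T):
    # Always accept improvements; accept a worse move with probability
    # e^(delta / T), which shrinks as the temperature T cools toward zero.
    return delta > 0 or random.random() < math.exp(delta / T)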
27. For the tic-tac-toe game, draw a game tree from the root node (initial state) to leaf nodes (win or lose) in AI.
28. What states are used to represent a game tree?
1. The board: the current configuration of game tokens on the board.
2. The current player: the player who will be making the next move.
3. The next available moves: for humans, a move involves placing a game token, while the computer selects the next game state.
4. The game state: the grouping of the three previous concepts.
5. Final game states: in final game states, the AI should select the winning move in such a way that each move assigns a numerical value based on its board state.
29. Define the water jug problem in AI. Also, suggest a solution to it.
Problem: You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marker on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
State Space Representation: we will represent a state of the problem as a tuple (x, y) where x
represents the amount of water in the 4-gallon jug and y represents the amount of water in the
3-gallon jug. Note that 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.
Assumptions: both jugs are initially empty; water may be poured away, and the pump can fill a jug completely.
Operators (Actions): fill the 4-gallon jug; fill the 3-gallon jug; empty either jug; pour water from one jug into the other until the source jug is empty or the destination jug is full.
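A BFS sketch of this problem in Python (illustrative only; states are (x, y) tuples under the representation above, and the operators are encoded in successors):

from collections import deque

def water_jug(goal=2):
    start = (0, 0)                                   # (4-gallon jug, 3-gallon jug)
    def successors(x, y):
        return {(4, y), (x, 3),                      # fill either jug
                (0, y), (x, 0),                      # empty either jug
                (x - min(x, 3 - y), y + min(x, 3 - y)),   # pour 4-gal into 3-gal
                (x + min(y, 4 - x), y - min(y, 4 - x))}   # pour 3-gal into 4-gal
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal:                         # 2 gallons in the 4-gallon jug
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)

print(water_jug())   # e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]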
30. What is alpha-beta pruning? How alpha-beta pruning can improve the MIN MAX
algorithm?
● The problem with minimax search is that the number of game states it has to examine is
exponential in the number of moves.
● By performing pruning, we can eliminate large part of the tree from consideration.
● Alpha beta pruning, when applied to a minimax tree, returns the same move as minimax
would, but prunes away branches that cannot possibly influence the final decision.
● Alpha Beta pruning gets its name from the following two parameters that describe bounds
o α : the value of the best (i.e., highest-value) choice we have found so far at any
choice point along the path of MAX.
a lower bound on the value that a max node may ultimately be assigned
● β: the value of best (i.e., lowest-value) choice we have found so far at any choice point
along the path of MIN.
an upper bound on the value that a minimizing node may ultimately be assigned
Alpha Beta search updates the values of α and β as it goes along and prunes the remaining
branches at a node(i.e., terminates the recursive call) as soon as the value of the current node is
known to be worse than the current α and β value for MAX and MIN, respectively.
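A compact Python sketch of minimax with alpha-beta pruning (illustrative only; children(node) and evaluate(node) are assumed callbacks supplied by the particular game):

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    succ = children(node)
    if depth == 0 or not succ:            # depth limit or terminal position
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)     # raise MAX's lower bound
            if alpha >= beta:
                break                     # beta cut-off: MIN will avoid this branch
        return value
    else:
        value = float("inf")
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)       # lower MIN's upper bound
            if beta <= alpha:
                break                     # alpha cut-off: MAX will avoid this branch
        return value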
Unit 3
Knowledge Representation
(iii) You can fool all of the people some of the time.
∃t (∀p Fool(p, t))
No purple mushroom is poisonous:
∀x (PurpleMushroom(x) → ¬Poisonous(x))
Every person has a heart:
∀x (Person(x) → ∃h (Heart(h) ∧ Has(x, h)))
2. For the given sentence "All Pompeians were Romans", write a well-formed formula in predicate logic.
∀x (Pompeian(x) → Roman(x))
3. Describe First Order Logic in AI.
Predicate logic in artificial intelligence, also known as first-order logic or first order predicate
logic in AI, is a formal system used in logic and mathematics to represent and reason about
complex relationships and structures.
It forms the foundation of many other representation languages and has been studied intensively for many decades.
o First-order logic (like natural language) assumes that the world contains not only facts, as propositional logic does, but also the following things:
o Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ......
o Relations: these can be unary relations such as red, round, is adjacent, or n-ary relations such as the sister of, brother of, has color, comes between.
o Functions: father of, best friend, third inning of, end of, ...
4. What is Propositional Logic? Define the various Inference Rules with the help of
examples.
Propositional logic is a branch of logic that deals with statements that are either true or false.
Inference rules are used to derive new propositions from a set of given propositions. Some
examples of inference rules in propositional logic include:
· Modus ponens: takes two premises, one of the form "If p then q" and another of the form "p", and returns the conclusion "q".
· Modus tollens: from "If p then q" and "not q", returns the conclusion "not p".
· And-elimination: from the conjunction "p and q", returns "p" (or "q").
KB ╞ α
Knowledge base KB entails sentence α if and only if α is true in all worlds where KB is true
– E.g., the KB containing “the Giants won” and “the Reds won” entails “Either the Giants won or
the Reds won”
– E.g., x+y = 4 entails 4 = x+y
This means: KB entails α exactly when every model (world) in which KB is true also makes α true.
9. As per the law, it is a crime for an American to sell weapons to hostile nations. Country
A, an enemy of America, has some missiles, and all the missiles were sold to it by Robert,
who is an American citizen." Justify "Robert is a criminal." By applying the
forward-chaining algorithm OR Backward-chaining algorithm.
Facts Conversion into FOL:
o It is a crime for an American to sell weapons to hostile nations. (Let p, q, and r be variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) …(1)
o Country A has some missiles.
Owns(A, T1) …(2)
Missile(T1) …(3)
o All of the missiles were sold to country A by Robert.
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) …(4)
o Missiles are weapons.
Missile(p) → Weapon(p) …(5)
o An enemy of America counts as "hostile".
Enemy(p, America) → Hostile(p) …(6)
o Country A is an enemy of America.
Enemy(A, America) …(7)
o Robert is American.
American(Robert) …(8)
Step-1:
In the first step we will start with the known facts and will choose the sentences which do not
have implications, such as: American(Robert), Enemy(A, America), Owns(A, T1), and
Missile(T1). All these facts will be represented as below.
Step-2:
At the second step, we will see those facts which infer from available facts and with satisfied
premises.
Rule-(1) does not have all its premises satisfied yet, so it will not be added in the first iteration.
Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the conjunction of facts (2) and (3).
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from fact (7).
Step-3:
At step-3, as we can check, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. And hence we have reached our goal statement.
Backward Chaining:
In Backward chaining, we will start with our goal predicate, which is Criminal(Robert), and then
infer further rules.
Step-1:
At the first step, we will take the goal fact. And from the goal fact, we will infer other facts, and at
last, we will prove those facts true. So our goal fact is "Robert is Criminal," so following is the
predicate of it.
Step-2:
At the second step, we will infer other facts from the goal fact which satisfy the rules. So as we can see in Rule-(1), the goal predicate Criminal(Robert) is present with substitution {Robert/p}. So we will add all the conjunctive facts below the first level and will replace p with Robert.
Step-3:
At step-3, we will extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule-(5). Weapon(q) is also true with the substitution of the constant T1 for q.
Step-4:
At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4), with the substitution of A in place of r. So these two statements are proved here.
Step-5:
At step-5, we can infer the fact Enemy(A, America) from Hostile(A) which satisfies Rule- 6.
And hence all the statements are proved true using backward chaining.
Skolemization is a process in formal logic used to eliminate existential quantifiers (∃) from logical formulas in first-order logic by introducing Skolem functions or Skolem constants. For example, ∀x ∃y Loves(x, y) becomes ∀x Loves(x, F(x)), where F is a Skolem function.
· Unification in First-Order Logic (FOL) is a process of making two logical expressions identical
by finding a substitution for their variables.
· Unification tries to determine if there exists a substitution that, when applied to the terms or
predicates, makes the two expressions syntactically identical.
· Let Ψ1 and Ψ2 be two atomic sentences and σ be a unifier such that Ψ1σ = Ψ2σ; then this can be expressed as UNIFY(Ψ1, Ψ2).
· For example, for the atoms P(x) and P(John), the substitution θ = {John/x} is a unifier: applying this substitution makes both expressions identical.
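A compact unification sketch in Python (an illustration, not the textbook's exact algorithm: atoms are nested tuples, variables are strings starting with a lowercase letter, and the occurs-check is omitted for brevity):

def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, theta=None):
    if theta is None:
        theta = {}
    if theta is False or x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):          # unify argument lists element-wise
            theta = unify(xi, yi, theta)
            if theta is False:
                return False
        return theta
    return False                          # mismatch: no unifier exists

def unify_var(var, t, theta):
    if var in theta:
        return unify(theta[var], t, theta)
    theta[var] = t
    return theta

print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane")))   # {'x': 'Jane'}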
Resolution is a single inference rule which can efficiently operate on the conjunctive
normal form or clausal form.
14. How is knowledge represented in ontological engineering, and what role does ontological
engineering play in building intelligent systems?
· Complex domains such as shopping on the Internet or driving a car in traffic require
more general and flexible representations.
· For instance, we’ll define general concepts like “physical object,” and specific
details about types of objects (such as robots, televisions, or books) can be added as
needed. This allows new knowledge to be integrated into the framework over time
without needing an exhaustive initial description.
· The general framework of concepts is called an upper ontology because of the
convention of drawing graphs with the general concepts at the top and the more
specific concepts below them
we need a model of the mental objects that are in someone’s head (or something’s
knowledge base) and of the mental processes that manipulate those mental objects.
Mental Objects are the static entities within an agent's mind, like ideas, beliefs, or
memories.
Mental Events refer to the occurrences that happen within an agent's mind, such as
thoughts, decisions, or intentions.
In artificial intelligence (AI), knowledge representation is a critical area that enables systems
to simulate human-like reasoning and decision-making. Various techniques are employed to
represent knowledge in AI systems, each with specific applications and advantages.
Logic-Based Representation:
• Propositional Logic: Uses simple statements that are either true or false.
• Predicate Logic: Extends propositional logic with predicates that can express relations
among objects and quantifiers to handle multiple entities.
Semantic Networks:
• Graph structures used to represent semantic relations between concepts. Nodes represent
objects or concepts, and edges represent the relations between them.
• Example: A node for "Socrates" linked by an "is a" edge to a "Human" node, and the "Human" node linked in turn to a "Mortal" node.
Frame-Based Representation:
• A frame is a record-like structure consisting of a collection of attributes and their values that together describe an entity in the world.
• Frames are an AI data structure that divides knowledge into substructures representing stereotyped situations. A frame consists of a collection of slots and slot values; the slots may be of any type and size. Slots have names and values, which are called facets.
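As a small illustration (the slot names and facets below are made up), a frame can be rendered as a nested record:

hotel_room = {
    "is_a": "Room",                                   # link to a more general frame
    "slots": {
        "location": {"value": "hotel", "type": "Place"},
        "contains": {"value": ["bed", "phone"], "cardinality": "multiple"},
        "rate": {"value": 150, "unit": "USD/night", "default": 100},
    },
}
print(hotel_room["slots"]["rate"]["value"])           # read one facet of a slot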
Production Rules:
• Consist of sets of rules in the form of IF-THEN constructs that are used to derive
conclusions from given data.
• Example: IF the patient has a fever and rash, THEN consider a diagnosis of measles.
Inference refers to the process of deriving a logical conclusion (a statement considered true) from a set of given premises, using specific rules of inference. These rules allow you to move from known information to new, logically valid conclusions, taking into account the relationships between objects and their properties within a domain. Essentially, it is the act of reasoning based on established logical principles within the framework of predicate logic.
Ques 1: Discuss about various types of agent architectures used in Artificial Intelligent
Answer: Based on the goals of the agent application, a variety of agent architectures exist to
help. This section will introduce some of the major architecture types and applications for
which they can be used:
1. Reactive architectures
2. Deliberative architectures
3. Blackboard architectures
4. BDI architectures
5. Hybrid architectures
6. Mobile architectures
1. REACTIVE ARCHITECTURES
1. In this architecture, agent behaviors are simply a mapping between stimulus and response.
2. The agent has no decision-making skills, only reactions to the environment in which it exists.
3. The agent simply reads the environment and then maps the state of the environment to one or more actions. Given the environment, more than one action may be appropriate, and therefore the agent must choose among the candidate actions.
The advantage of reactive architectures is that they are extremely fast. This kind of architecture can be implemented easily in hardware, or as a fast software lookup.
The disadvantage of reactive architectures is that they apply only to simple environments. Sequences of actions require the presence of state, which is not encoded into the mapping function.
2. DELIBERATIVE ARCHITECTURES
1. A deliberative architecture, as the name implies, is one that includes some deliberation over the
action to perform given the current set of inputs.
2. Instead of mapping the sensors directly to the actuators, the deliberative architecture considers
the sensors, state, prior results of given actions, and other information in order to select the best
action to perform.
3. The mechanism for action selection is left undefined here, because it could be any of a variety of mechanisms, including a production system, a neural network, or any other intelligent algorithm.
The advantage of the deliberative architecture is that it can be used to solve much more complex
problems than the reactive architecture. It can perform planning, and perform sequences of
actions to achieve a goal.
The disadvantage is that it is slower than the reactive architecture due to the deliberation for the
action to select.
3. BLACKBOARD ARCHITECTURES
1. The blackboard architecture is a very common architecture that is also very interesting.
2. The first blackboard architecture was HEARSAY-II, which was a speech-understanding system. This architecture operates around a global work area called the blackboard.
3. The blackboard is a common work area for a number of agents that work cooperatively to solve
a given problem.
4. The blackboard therefore contains information about the environment, but also intermediate
work results by the cooperative agents.
5. In this example, two separate agents are used to sample the environment through the available
sensors (the sensor agent) and also through the available actuators (action agent).
6. The blackboard contains the current state of the environment that is constantly updated by the
sensor agent, and when an action can be performed (as specified in the blackboard), the action
agent translates this action into control of the actuators.
7. The control of the agent system is provided by one or more reasoning agents.
8. These agents work together to achieve the goals, which would also be contained in the
blackboard.
9. In this example, the first reasoning agent could implement the goal definition behaviors, where
the second reasoning agent could implement the planning portion (to translate goals into
sequences of actions).
10. Since the blackboard is a common work area, coordination must be provided such that agents
don’t step over one another.
11. For this reason, agents are scheduled based on their need. For example, agents can monitor the
blackboard, and as information is added, they can request the ability to operate.
12. The scheduler can then identify which agents desire to operate on the blackboard, and then invoke them accordingly.
13. The blackboard architecture, with its globally available work area, is easily implemented with
a multithreading system.
14. Each agent becomes one or more system threads. From this perspective, the blackboard
architecture is very common for agent and non-agent systems.
4. BDI ARCHITECTURES
1. BDI, which stands for Belief-Desire-Intention, is an architecture that follows the theory of human reasoning as defined by Michael Bratman.
2. Belief represents the view of the world by the agent (what it believes to be the state of the
environment in which it exists). Desires are the goals that define the motivation of the agent (what
it wants to achieve).
3. The agent may have numerous desires, which must be consistent. Finally, Intentions specify
that the agent uses the Beliefs and Desires in order to choose one or more actions in order to meet
the desires.
4. As we described above, the BDI architecture defines the basic architecture of any deliberative
agent. It stores a representation of the state of the environment (beliefs), maintains a set of goals
(desires), and finally, an intentional element that maps desires to beliefs (to provide one or more
actions that modify the state of the environment based on the agent's needs).
5. HYBRID ARCHITECTURES
1. As in traditional software architecture, many agent architectures are hybrids that combine elements of several architectural styles.
2. For example, the architecture of a network stack is made up of a pipe-and-filter architecture and a layered architecture.
3. This same stack also shares some elements of a blackboard architecture, as there are global
elements that are visible and used by each component of the architecture.
4. The same is true for agent architectures. Based on the needs of the agent system, different
architectural elements can be chosen to meet those needs.
6. MOBILE ARCHITECTURES
1. The final architectural pattern that we’ll discuss is the mobile agent architecture.
2. This architectural pattern introduces the ability for agents to migrate themselves between hosts.
The agent architecture includes the mobility element, which allows an agent to migrate from one
host to another.
3. An agent can migrate to any host that implements the mobile framework.
4. The mobile agent framework provides a protocol that permits communication between hosts
for agent migration.
5. This framework also requires some kind of authentication and security, to prevent the mobile agent framework from becoming a conduit for viruses. Also implicit in the mobile agent framework is a means for discovery.
6. For example, which hosts are available for migration, and what services do they provide?
Communication is also implicit, as agents can communicate with one another on a host, or across
hosts in preparation for migration.
Ques 2: What role does bargaining play in resolving conflicts and reaching agreements
among intelligent agents?
Ans: Bargaining is a crucial mechanism for resolving conflicts and reaching agreements among
intelligent agents, whether human, organizational, or artificial. Its primary role is to create a
framework where agents with differing interests or objectives can negotiate terms that lead to
mutually acceptable outcomes. Here are some of the key roles bargaining plays:
1. Conflict Resolution
• Settling Disputes: Bargaining gives agents with opposing interests a structured way to settle disputes and converge on terms acceptable to all sides.
2. Information Exchange
• Revealing Preferences and Constraints: Through bargaining, agents share their priorities,
constraints, and trade-offs, making it easier to tailor agreements that satisfy all parties.
• Building Trust: Transparent negotiation fosters trust and reduces uncertainty, which is
critical for future interactions.
3. Incentive Alignment
• Balancing Interests: Bargaining ensures that agreements are perceived as fair and
beneficial by all parties, aligning incentives for cooperation.
• Avoiding Exploitation: It helps prevent one party from dominating or exploiting another,
especially when there is an imbalance of power or information.
4. Efficient Resource Allocation
• Bargaining helps allocate scarce resources to the agents that value them most, improving overall efficiency.
5. Dynamic Adaptation
• Negotiated agreements can be revisited and adjusted as circumstances or agent goals change.
6. Managing Power Dynamics
• Mitigating Power Imbalances: Bargaining frameworks can level the playing field by providing weaker agents with mechanisms to voice their interests.
• Exploiting Leverage: Agents with more information, resources, or bargaining power can strategically influence outcomes while adhering to negotiated norms.
7. Facilitating Decision-Making
• Achieving Consensus: It helps agents converge on a decision that might not be ideal for
everyone but is acceptable to all.
Practical Examples:
• Human Negotiations: Labor unions bargaining with employers over wages and benefits.
Ques 3: How do intelligent agents perceive and act within their environment in the context
of multi-agent systems?
Ans: In the context of multi-agent systems (MAS), intelligent agents perceive and act within their
environment using a combination of sensory inputs, decision-making processes, and actions
guided by their goals, roles, and interactions with other agents. Below is an outline of how these
agents operate in MAS environments:
1. Perception
Agents gather information from their environment to build an understanding of the current state
and make decisions.
• Sensors or Inputs: Agents rely on sensors (in physical systems) or data streams (in digital
systems) to perceive their surroundings.
o Example: A robot uses cameras and LiDAR to detect obstacles; a financial trading agent
uses market data feeds.
• State Representation: Agents represent the environment's state using data structures that encode the relevant information.
2. Decision-Making
Agents use the perceived data to decide their actions, guided by their objectives and constraints.
• Autonomy: Each agent operates independently to fulfill its assigned tasks or achieve its
goals.
• Rationality: Agents aim to act optimally, maximizing their utility or minimizing costs
based on a defined objective function.
• Decision Models:
o Reactive Models: Simple condition-action rules that respond directly to percepts.
o Deliberative Models: Advanced planning and reasoning, often based on search algorithms or reinforcement learning.
o Hybrid Models: Combine reactive and deliberative approaches for balance and flexibility.
• Multi-Agent Interaction: Agents consider the presence and behavior of other agents during
decision-making.
3. Action
Agents perform actions to modify the environment or their own states to achieve their goals.
• Actuators or Outputs: Physical agents (e.g., robots) use actuators like motors, while virtual
agents execute digital commands.
o Example: A smart thermostat adjusts room temperature; an e-commerce agent updates
product pricing.
• Action Types:
o Individual Actions: Independent behavior without explicit interaction with other agents.
o Collaborative Actions: Joint actions with other agents, such as synchronized tasks or
shared plans.
• Feedback Loops: Actions modify the environment, and the resulting state is perceived
again, creating a continuous feedback loop.
4. Environment Characteristics
The nature of the environment shapes how agents perceive and act:
• Static vs. Dynamic: Whether the environment changes over time independently of agents.
• Discrete vs. Continuous: The nature of the state space and actions (e.g., grid-based vs.
real-valued).
• Accessible vs. Partially Observable: The extent to which agents can perceive the entire
environment.
5. Communication
Agents often exchange messages to share observations, coordinate plans, and negotiate, supplementing what each agent can perceive directly.
Example: Traffic systems, where autonomous vehicles manage traffic flow through local and global decision-making.
Ques 4: Explain the role of the Belief-Desire-Intention (BDI) model in the architecture of
intelligent agents. How does it facilitate decision-making?
Ans: The BDI model structures an agent's reasoning around three components: beliefs, desires, and intentions.
Beliefs represent the agent's information about the world. They:
• Can be incomplete, uncertain, or dynamic, requiring updates as the agent gathers more data.
• Example: A delivery robot believes that a package is at location A based on sensor inputs.
Intentions are the desires the agent has committed to pursue.
• Commitments to intentions help the agent remain focused, even in the presence of distractions or new information.
The BDI model facilitates decision-making through a structured process that involves reasoning
about beliefs, selecting desires, and committing to intentions:
1. Belief Update
• The agent perceives the environment and updates its beliefs accordingly.
• This process ensures that the agent’s understanding of the world is as accurate and current
as possible.
2. Goal Selection
• It selects a subset of desires that are feasible and align with its current context and
priorities.
3. Intention Formation
• The agent deliberates among competing desires, choosing which ones to commit to as intentions.
• This step often involves reasoning about constraints, resource availability, and trade-offs.
• Example: If multiple delivery routes exist, the agent may choose the fastest one based on
its current belief about traffic conditions.
4. Action Execution
• Plans are predefined or dynamically generated sequences of actions aimed at achieving the
intended goals.
5. Re-Evaluation
• The agent continuously re-evaluates its beliefs, desires, and intentions as the environment
changes or new information becomes available.
• This dynamic adaptability allows the agent to remain effective in uncertain or dynamic
scenarios.
Human-Like Reasoning
• The BDI model mimics human cognitive processes, making it intuitive for modeling
decision-making in agents.
Adaptability
• By continuously updating beliefs and re-evaluating goals, the agent can respond effectively to changing environments.
Goal-Oriented Behavior
• The separation of desires and intentions ensures that agents pursue well-defined objectives
without being overwhelmed by distractions or less critical goals.
Scalability and Modularity
• The model’s modular nature allows it to be applied across a wide range of applications,
from simple to complex systems.
Applications
• Autonomous Systems: Autonomous vehicles using BDI to navigate and avoid obstacles.
• Virtual Assistants: Managing user tasks and adapting to changing user needs.
• Game AI: Modeling intelligent behavior for non-player characters (NPCs) in games.
Challenges
• Conflict Resolution: Resolving conflicting desires and managing competing intentions can be non-trivial.
The BDI model provides a structured framework for decision-making by enabling agents to:
• Reason Effectively: Integrate new information and dynamically adjust their course of
action.
• Focus on Goals: Prioritize and commit to objectives that align with current conditions and
long-term aims.
In summary, the BDI model empowers intelligent agents to act rationally and adaptively in
complex, real-world scenarios, enabling them to make decisions that balance their goals with
environmental constraints.
Ques 5: What is meant by argument, negotiation, and bargaining in the context of multi-agent systems?
Ans: In the context of multi-agent systems (MAS), the terms argument, negotiation, and
bargaining represent distinct but interrelated processes that enable software agents to resolve
conflicts, coordinate, and make decisions collaboratively or competitively. Here's how each
concept is defined and applied:
Argumentation
Definition:
Argumentation in MAS refers to the exchange of logical statements or evidence between agents to
persuade or justify their positions, beliefs, or actions. It involves reasoning and dialogue aimed at
reaching a consensus, resolving disputes, or supporting decisions.
Key Features:
1. Logical Justification: claims are backed by reasons, evidence, or rules.
2. Persuasion: the aim is to change another agent's beliefs or to justify one's own position.
Applications:
• Joint deliberation, dispute resolution, and explanation in agent dialogues.
Example:
Two autonomous agents in a delivery system debate the optimal route for shared resources, with
one agent arguing for a faster route and another arguing for a safer one.
Negotiation
Definition:
Negotiation is the process by which agents interact and exchange proposals to reach a mutually
acceptable agreement on shared issues or conflicts. It often involves iterative communication and
compromise.
Key Features:
1. Iterative Process: Agents make offers, counter-offers, or concessions over multiple rounds.
2. Autonomy: Each agent acts independently, aiming to maximize its own utility while
reaching a feasible agreement.
Applications:
• Resource allocation in cloud computing.
Example:
Agents in a smart grid system negotiate the allocation of limited energy resources, balancing
energy supply and demand while satisfying individual agent requirements.
Bargaining
Definition:
Bargaining is a specific type of negotiation where agents work to divide a limited resource or
resolve a conflict by reaching a compromise. The focus is on determining how resources, benefits,
or costs should be allocated.
Key Features:
1. Division of Resources: Typically involves the allocation of a scarce resource, such as time,
space, or money.
2. Strategic Interaction: Agents use strategies like concessions, threats, or offers to influence
the outcome.
3. Utility Maximization: Each agent seeks to maximize its own gain, subject to the
constraints of fairness or agreement protocols.
Applications:
• Allocation of shared resources such as charging stations, bandwidth, or time slots.
Example:
Two autonomous drones bargain over access to a shared charging station, with each proposing
schedules that balance their energy needs and operational deadlines.
Ques 6: Compare and contrast reactive and deliberative agent architectures. Provide
examples of scenarios where each is best suited.
Ans: Reactive and deliberative agent architectures represent two distinct paradigms in designing
intelligent agents, with each having unique strengths, limitations, and applications. Here's a
detailed comparison:
Comparison Table
Memory and History: Reactive architectures are typically stateless and do not maintain history; deliberative architectures maintain a memory of past states or decisions.
Decision-Making: Reactive architectures are reactive, rule-based, or reflexive; deliberative architectures are goal-driven, involving deliberation and search algorithms.
Ques 7: Discuss the importance of modularity in the design of agent architectures. How does
it enhance scalability and maintainability?
Ans: Modularity means structuring an agent architecture as a set of self-contained components (e.g., perception, decision-making, action) with well-defined interfaces. Its benefits show up in scalability and maintainability:
Enhancing Scalability
When new features or capabilities need to be added, modular architectures allow developers to
integrate additional modules without significantly altering the existing structure. For example,
adding a new skill or functionality to a modular agent simply involves developing a new module
and connecting it to the existing system.
By encapsulating specific tasks or processes, modules can be optimized individually for better
performance. This ensures that resources are allocated efficiently, which is crucial for large-scale
systems.
In distributed agent systems, modular design aligns well with the need to run different modules on
separate nodes or devices, promoting scalability across hardware and network boundaries.
Improving Maintainability
Modular design localizes functionality, making it easier to isolate and identify issues within
individual modules. Developers can test each module independently before integrating them into
the larger system.
Modules designed for one project can often be reused in another, reducing redundancy and effort.
This also ensures consistency and quality across different projects.
Modularity naturally divides a system into components with well-defined interfaces. This fosters
clearer documentation, aiding future developers in understanding and modifying the system.
Updates or improvements to a single module can often be implemented without affecting other
parts of the system. For example, upgrading a decision-making module to incorporate more
sophisticated algorithms won't disrupt perception or action modules.
5. Supports Robustness
In case of a module failure, the problem is confined to that module, and fallback or redundancy
strategies can be applied without bringing down the entire system.
Ques 8: How do multi-attribute utility functions impact the negotiation process among
agents? Provide a real-world example.
Ans: Multi-attribute utility functions significantly enhance the negotiation process among agents
by enabling a structured and quantitative evaluation of various trade-offs among multiple
attributes or objectives. These functions allow agents to evaluate proposals holistically,
accounting for the relative importance of different attributes in achieving their goals. This
approach fosters more rational, transparent, and mutually beneficial decision-making during
negotiations.
2. Preference Elicitation: These functions require agents to explicitly define their preferences
and the relative importance (weights) of each attribute. This clarity reduces ambiguity and
miscommunication during negotiations.
3. Optimization: Agents can evaluate the utility of different options systematically, leading to
decisions that maximize their overall satisfaction while considering the other party's preferences.
Scenario: A company is negotiating with suppliers to procure materials. The company evaluates
proposals based on cost, delivery time, quality, and sustainability practices.
2. Negotiation:
o Supplier A offers low cost and quick delivery but average quality and sustainability
practices.
o Supplier B offers moderate cost, excellent quality, and strong sustainability practices, but
longer delivery times.
3. Utility Calculation: each proposal is scored attribute by attribute and the scores are combined using the company's weights (a numerical illustration appears below).
Under weights that favour cost and delivery time, Supplier A's higher utility score makes it the preferred choice. However, if sustainability becomes more critical, the company can adjust the weights, potentially favoring Supplier B.
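A numerical illustration of the weighted-sum utility calculation (the weights and scores below are assumed, since the original omits the figures; they are chosen to favour cost and delivery time):

weights = {"cost": 0.4, "delivery": 0.3, "quality": 0.2, "sustainability": 0.1}
suppliers = {
    "A": {"cost": 0.9, "delivery": 0.9, "quality": 0.5, "sustainability": 0.5},
    "B": {"cost": 0.6, "delivery": 0.4, "quality": 0.9, "sustainability": 0.9},
}

def utility(scores, weights):
    # Weighted sum: each attribute score scaled by its relative importance.
    return sum(weights[attr] * scores[attr] for attr in weights)

for name, scores in suppliers.items():
    print(name, round(utility(scores, weights), 2))   # A 0.78, B 0.63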
Ques 9: What do trust and reputation mean in the context of multi-agent systems?
Ans: In the context of multi-agent systems (MAS), trust and reputation are key concepts that help agents assess the reliability and trustworthiness of other agents in the system, especially when there is incomplete or uncertain information. These concepts enable agents to make informed decisions about whether to collaborate, exchange information, or enter into agreements.
Trust refers to an agent's belief or confidence in another agent's behavior or actions, based on past
experiences, observations, or available evidence. It reflects an agent's willingness to depend on
another agent to perform specific tasks or fulfill commitments, even when the outcome is
uncertain. Trust is a dynamic and evolving concept in MAS, influenced by the history of
interactions and the agent's perception of the other’s intentions, capabilities, and reliability.
• Direct Trust: Built from personal experiences between two agents. For example, if agent A
has had several successful transactions with agent B, agent A will develop trust in agent B’s
reliability.
• Indirect Trust: Derived from recommendations or experiences from other agents. For
example, agent A may trust agent B based on information or referrals from trusted third parties or
agents in the network.
• Contextual Trust: Trust can vary depending on the context or environment. An agent might trust another agent for tasks within that agent's area of specialization but not for unrelated tasks.
Trust is often evaluated in multi-agent systems through factors like honesty, competence, and
consistency.
Reputation refers to the overall perception or evaluation of an agent's past behavior and
performance, typically based on feedback from other agents within the system. It is an aggregated
measure of an agent's reliability, credibility, and trustworthiness, built over time from the
experiences or evaluations of others. Reputation systems allow agents to evaluate each other
without requiring direct experience, relying instead on the collective judgment of the community.
• Public Reputation: The reputation is known by all agents in the system, usually shared or
recorded in a centralized or distributed reputation database. Reputation systems can use collective
assessments (ratings, reviews) to compute the overall standing of each agent.
Reputation is particularly important in scenarios where agents do not have direct interactions with
one another and must rely on the experiences and evaluations of others to decide whether to trust
a particular agent.
• Trust and Reputation as Complementary Concepts: Reputation helps agents make initial
decisions or form expectations about others, especially in the absence of direct experience. Trust,
however, is built through individual interactions and is more personal and specific to the agent's
history with others. An agent may trust another based on their own interactions, but that trust
could be informed or influenced by the agent's reputation in the broader system.
• Reputation Influences Trust: An agent may be more inclined to trust another if the latter
has a good reputation in the system. Reputation acts as a signal that guides trust-building
processes, especially in large-scale or anonymous systems where agents may not have direct
knowledge of one another.
• Trust Enhances Reputation: Agents with high trustworthiness in their interactions can
accumulate positive feedback, which, in turn, improves their reputation. A positive reputation
then helps them build trust with new or unfamiliar agents.
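As an illustration, the sketch below maintains direct trust as an exponential moving average of interaction outcomes and derives a public reputation score by aggregating the trust reports of witnesses; the update rule and parameter values are illustrative assumptions, not a standard model:

```python
class TrustModel:
    """Toy trust bookkeeping for one agent in a MAS."""

    def __init__(self, alpha=0.3, prior=0.5):
        self.alpha = alpha    # weight given to the most recent outcome
        self.prior = prior    # neutral score for agents never met
        self.direct = {}      # agent id -> direct trust in [0, 1]

    def record_interaction(self, agent_id, success):
        # Direct trust: moving average of outcomes (1 = success, 0 = failure).
        old = self.direct.get(agent_id, self.prior)
        self.direct[agent_id] = (1 - self.alpha) * old + self.alpha * (1.0 if success else 0.0)

    def direct_trust(self, agent_id):
        return self.direct.get(agent_id, self.prior)

def reputation(agent_id, witnesses):
    """Indirect/public reputation: mean of the trust scores reported by witnesses."""
    reports = [w.direct_trust(agent_id) for w in witnesses]
    return sum(reports) / len(reports) if reports else 0.5

# Usage: agent A builds direct trust in B; a newcomer consults the community.
a, c = TrustModel(), TrustModel()
for outcome in (True, True, False, True):
    a.record_interaction("B", outcome)
print(a.direct_trust("B"))        # personal, experience-based score
print(reputation("B", [a, c]))    # aggregated view available to newcomers
```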
Ques 10: Explain negotiation and bargaining in multi-agent systems. What attributes should a negotiation mechanism possess?
Ans: In a multi-agent system, negotiation is a form of interaction that occurs among agents with different goals.
Negotiation and bargaining is the process by which a joint decision is reached by two or more agents, each trying to reach an individual goal or objective.
The major challenge of negotiation and bargaining is to allocate scarce resources among agents representing self-interested parties. The resources can be bandwidth, commodities, money, processing power, etc. A resource is scarce when the competing claims on it cannot all be satisfied simultaneously.
II. The decision process that each agent uses to determine its positions, concessions and
criteria for agreement.
Any negotiation and bargaining mechanism should have the following attributes:
Symmetry: The mechanism should not be biased against any agent for arbitrary or inappropriate
reasons.
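To illustrate symmetry, here is a toy concession protocol in Python in which two agents bargain over dividing a scarce resource and concede at the same rate, so neither side is structurally favored; the initial demands, step size, and deal rule are illustrative assumptions, not a standard mechanism:

```python
def negotiate(total=100, step=5, max_rounds=20):
    """Toy symmetric-concession negotiation over dividing a resource."""
    demand_a, demand_b = 90, 90                # both agents start by over-claiming
    for _ in range(max_rounds):
        if demand_a + demand_b <= total:       # demands compatible: a deal is struck
            return demand_a, total - demand_a
        demand_a -= step                       # both agents concede equally,
        demand_b -= step                       # keeping the mechanism unbiased
    return None                                # conflict deal: no agreement reached

print(negotiate())                             # converges to (50, 50) here
```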
Unit 5
Applications
1. Healthcare: IE can extract patient information from clinical notes, aiding in medical
research, diagnosis, and treatment planning.
2. Finance: In finance, IE helps in extracting key information from financial reports,
news articles, and market analysis, supporting investment decisions and risk
management.
• Textual Reviews: Free-form text where customers write about their experience, including likes, dislikes, suggestions, etc.
5. What are language models, and how do they contribute to natural language
processing tasks?
Ans: Language models are a type of statistical or neural network-based framework designed to
understand and generate human language. They predict the likelihood of a sequence of words
occurring in a sentence, effectively capturing linguistic patterns, syntax, and semantics.
1. Text Generation: They can generate coherent and contextually relevant text, used in
applications like chatbots, content creation, and storytelling.
2. Machine Translation: By understanding context and semantics in both source and target
languages, they improve translation accuracy.
3. Sentiment Analysis: They help identify sentiment in texts, aiding in brand monitoring and
customer feedback analysis.
4. Summarization: Language models can condense documents into concise summaries while
retaining essential information.
5. Question Answering: They improve systems designed to answer questions based on provided
texts, enhancing search engines and virtual assistants.
6. Speech Recognition: By predicting likely sequences of words, they aid in converting spoken
language into text more accurately.
Hence, language models are fundamental to many NLP tasks, enhancing the ability of machines to understand and communicate in human language.
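To make the idea of "predicting the likelihood of a sequence of words" concrete, here is a minimal bigram language model in plain Python (the toy corpus and test sentence are illustrative):

```python
from collections import defaultdict, Counter

corpus = ["the cat sat on the mat", "the dog sat on the rug"]

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for sent in corpus:
    words = ["<s>"] + sent.split() + ["</s>"]
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1

def bigram_prob(w1, w2):
    # P(w2 | w1) estimated from bigram counts; 0 for unseen contexts.
    total = sum(counts[w1].values())
    return counts[w1][w2] / total if total else 0.0

def sentence_prob(sent):
    # Chain rule with the bigram (first-order Markov) assumption.
    words = ["<s>"] + sent.split() + ["</s>"]
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

print(sentence_prob("the cat sat on the rug"))   # small but nonzero probability
```

Modern neural language models replace these counts with learned parameters, but the underlying task, scoring and predicting word sequences, is the same.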
6. How does information retrieval play a crucial role in enhancing search engines and
recommendation systems?
Ans: Information retrieval (IR) is fundamental to the functionality and effectiveness of
search engines and recommendation systems. Here’s how it enhances both:
3. Relevance Ranking: IR models rank search results based on relevance, user behavior,
and context, ensuring that users receive the most pertinent information.
So, information retrieval is pivotal in ensuring that both search engines and
recommendation systems deliver relevant, timely, and personalized content to users,
enhancing overall user experience.
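As a sketch of relevance ranking (assuming scikit-learn is installed; the documents and query below are illustrative), TF-IDF vectors and cosine similarity can order documents by their relevance to a query:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "how to train a neural network",
    "stock market analysis and risk",
    "neural network architectures for vision",
]
query = "neural network training"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)      # learn vocabulary from the corpus
query_vector = vectorizer.transform([query])      # embed the query in the same space

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(scores[idx], 3), docs[idx])  # most relevant documents first
```

Production search engines layer many more signals (link structure, click behavior, freshness) on top, but a term-weighting model like this remains the classic IR baseline.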
7. Explain the importance of pre-trained language models in various AI applications.
Ans: Pre-trained language models have revolutionized the field of AI, particularly in
natural language processing (NLP). Their importance spans various applications and
functionalities:
1. Transfer Learning: Pre-trained models can be fine-tuned for specific tasks with
relatively small datasets, reducing training time and resource costs.
5. Reduced Need for Large Labeled Datasets: Since they are pre-trained on vast
amounts of unlabeled text, they mitigate the need for extensive labeled datasets, which
are often expensive and time-consuming to produce.
7. Continuous Improvement: Regular updates and new models (like GPT, BERT, etc.)
help keep applications at the forefront of technology, benefiting from advancements in
research and architecture.
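As an example of transfer learning (a sketch assuming the Hugging Face transformers and PyTorch packages are installed; the checkpoint name and label count are illustrative), a pre-trained encoder can be loaded with a fresh classification head and then fine-tuned on a small labeled dataset:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a pre-trained checkpoint; only the small classification head is new.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("The product arrived quickly and works well.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits     # head is untrained: scores are arbitrary
print(logits.softmax(dim=-1))           # fine-tuning would make these meaningful
```

The point of the sketch is that the expensive part, the pre-trained encoder, is reused as-is; only the small task-specific head needs to be trained on labeled data.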
2. Syntax Analysis:
- Part-of-Speech Tagging: Identifying the grammatical roles of words (nouns, verbs,
etc.).
- Parsing: Analyzing sentence structure to understand relationships between words.
3. Semantic Analysis:
- Named Entity Recognition (NER): Identifying and classifying key entities in text
(like names, dates, locations).
- Word Sense Disambiguation: Determining the correct meaning of a word based on
context.
4. Discourse Integration:
- Analyzing text to understand the context and flow of conversation, which is crucial
for applications like chatbots.
5. Sentiment Analysis:
- Evaluating the sentiment (positive, negative, neutral) expressed in the text, commonly
used in brand monitoring and customer feedback analysis.
NLP is fundamental in bridging the gap between human communication and machine
understanding, with a comprehensive process that transforms raw text into meaningful
insights and actions.
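A short sketch of the tagging and named-entity steps using NLTK (assuming the nltk package and its data files are installed; resource names can vary between NLTK versions, and the example sentence is illustrative):

```python
import nltk

# One-time downloads of tokenizer/tagger/chunker models.
for resource in ("punkt", "averaged_perceptron_tagger",
                 "maxent_ne_chunker", "words"):
    nltk.download(resource, quiet=True)

sentence = "Barack Obama visited Paris in July."
tokens = nltk.word_tokenize(sentence)      # split the text into words
tagged = nltk.pos_tag(tokens)              # syntax: part-of-speech tags
entities = nltk.ne_chunk(tagged)           # semantics: named-entity chunks

print(tagged)      # e.g. [('Barack', 'NNP'), ('Obama', 'NNP'), ...]
print(entities)    # tree with PERSON / GPE subtrees for Obama and Paris
```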
10. What is Robotics? Differentiate between Robotic System and Other AI Program.
Describe the various Components of a Robot. How does the computer vision
contribute in robotics?
Ans: What is Robotics?
Robotics is a multidisciplinary field that integrates engineering, computer science, and
various technologies to design, build, and operate robots. Robots are automated
machines that can perform tasks traditionally done by humans, often with greater
efficiency and precision, and can take on tasks that are dangerous or infeasible for
human workers.
Components of a Robot
1. Sensors: Devices that perceive environmental conditions (e.g., cameras for vision,
LIDAR for distance sensing).
2. Actuators: Motors or mechanisms that allow the robot to move or manipulate objects
(e.g., wheels, robotic arms).
3. Controller: The computer or microcontroller that processes input from sensors and
controls the actuators (often running algorithms for decision-making).
4. Power Supply: Provides energy to the robot's components (e.g., batteries, solar
panels).
5. Software: The program that integrates input and output to execute tasks (includes AI
algorithms for learning and adaptation).
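To show how these components interact (the sensor and actuator functions below are stand-ins, not drivers for real hardware), a robot controller typically runs a sense-plan-act loop:

```python
import random

def read_distance_sensor():
    # Stand-in for a real sensor driver (e.g., LIDAR or ultrasonic).
    return random.uniform(0.0, 2.0)            # metres to the nearest obstacle

def set_wheel_speed(left, right):
    # Stand-in for an actuator driver; here we just log the command.
    print(f"wheels: left={left:.1f} right={right:.1f}")

def control_step():
    distance = read_distance_sensor()           # 1. sense the environment
    if distance < 0.5:                          # 2. plan (a simple reactive rule)
        command = (0.2, -0.2)                   #    obstacle close: turn in place
    else:
        command = (1.0, 1.0)                    #    path clear: drive forward
    set_wheel_speed(*command)                   # 3. act on the decision

for _ in range(5):     # a real controller runs this loop continuously
    control_step()
```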
Hence, robotics is a complex field focused on creating intelligent machines with the
ability to interact with the physical world, distinguishable from typical AI programs
through their physical presence and real-time interaction. Various components work in
unison to perform tasks, with computer vision significantly enhancing the robots'
capabilities to perceive and respond to their environments.