AI Model Question Paper-4
SECTION-A
I. Answer any four questions. Each question carries two marks. (4*2=8)
1.Compare weak AI with strong AI.
Ans:
WEAK AI
• Narrow AI is a type of AI which is able to perform a dedicated task with intelligence. The most common and currently available AI in the world of Artificial Intelligence is Narrow AI.

STRONG AI
• Strong (Super) AI is a level of intelligence of systems at which machines could surpass human intelligence and can perform any task better than a human, with cognitive properties. It is an outcome of General AI.
• Direct Instruction (by being told)
• Analogy
• Induction
• Deduction
4.Justify “Probabilistic Reasoning in AI – A way to deal with Uncertainty”
Ans: Probabilistic Reasoning in AI - In probabilistic reasoning, probability theory helps us estimate how likely an event is to occur. In this theory, we find the probabilities of all the alternatives that are possible in an experiment. The sum of all these probabilities for an experiment is always 1, because all these events/alternatives can happen only within this experiment.
• The truth value of a proposition is extended from {0, 1} to [0, 1], with binary logic as its special case.
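For instance, a minimal Python sketch (assuming a fair six-sided die as the experiment):

# Minimal illustration: the probabilities of all alternatives of one experiment sum to 1.
outcomes = range(1, 7)                  # faces of a fair six-sided die
p = {face: 1 / 6 for face in outcomes}  # each alternative is equally likely
print(sum(p.values()))                  # 1.0 -- the sum over all alternatives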
5.What do you mean by chunking in NLP?
Ans: Chunking is used to collect individual pieces of information (tokens) and group them into bigger, meaningful pieces of a sentence (phrases).
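An illustrative sketch of chunking with NLTK (assuming the NLTK library and its tokenizer and tagger data are installed; the sentence and chunk grammar are just examples):

import nltk

# Assumes the relevant NLTK tokenizer and POS-tagger data have already been downloaded.
sentence = "The little yellow dog barked at the cat"
tokens = nltk.word_tokenize(sentence)      # individual pieces of information (words)
tagged = nltk.pos_tag(tokens)              # part-of-speech tag for each word
grammar = "NP: {<DT>?<JJ>*<NN>}"           # pattern describing a noun-phrase chunk
chunker = nltk.RegexpParser(grammar)
print(chunker.parse(tagged))               # words grouped into bigger NP chunks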
6. State the relationship between biological neural networks and artificial neural networks.
Ans:
Biological Neural Network    Artificial Neural Network
Dendrites                    Inputs
Cell nucleus                 Nodes
Synapse                      Weights
Axon                         Output
SECTION-B
II. Answer any four questions. Each question carries five marks. (4*5=20)
7.With an example, Explain A* search.
Ans: A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines features of UCS and greedy best-first search, by which it solves
the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence, we can combine both costs as f(n) = g(n) + h(n), and this sum is called the fitness number.
Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the
f(n) of each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to
reach any node from start state.
Here we will use OPEN and CLOSED list.
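Since the original graph and heuristic table are given as a figure, here is a minimal Python sketch of the same idea with a made-up graph, edge costs, and heuristic values (illustrative only):

import heapq

def a_star(graph, h, start, goal):
    # OPEN list: priority queue ordered by f(n) = g(n) + h(n); CLOSED: already expanded nodes.
    open_list = [(h[start], 0, start, [start])]
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for neighbour, cost in graph[node]:
            if neighbour not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None, float("inf")

# Hypothetical graph and heuristic values, for illustration only.
graph = {"S": [("A", 1), ("B", 4)], "A": [("C", 2)], "B": [("C", 1)], "C": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 3, "C": 2, "G": 0}
print(a_star(graph, h, "S", "G"))   # (['S', 'A', 'C', 'G'], 6)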
8.With neat diagram, Explain 8-queens problem.
Ans: 8-queens problem: The aim of this problem is to place eight queens on a chessboard in an arrangement where no queen attacks another. A queen can attack other queens either diagonally or in the same row or column. From the following figure, we can understand the problem as well as its correct solution.
2. Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around to escape the attacks.
Following steps are involved in this formulation:
• States: Arrangement of all 8 queens, one per column, with no queen attacking another queen.
• Actions: Move a queen to a location where it is safe from attacks.
This formulation is better than the incremental formulation as it reduces the state space from 1.8 × 10^14 to 2057, and it is easier to find the solutions.
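A minimal Python sketch of the incremental (column-by-column) formulation, which places one queen per column and backtracks whenever a new queen would be attacked:

def safe(queens, row):
    # queens[c] holds the row of the queen already placed in column c.
    col = len(queens)
    return all(r != row and abs(r - row) != abs(c - col)   # same row or same diagonal?
               for c, r in enumerate(queens))

def solve(queens=()):
    if len(queens) == 8:                 # all eight columns filled with no attacks
        return queens
    for row in range(8):
        if safe(queens, row):
            result = solve(queens + (row,))
            if result:
                return result
    return None                          # dead end: backtrack

print(solve())   # one solution, e.g. (0, 4, 7, 5, 2, 6, 1, 3)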
9.How Propositional logic in Artificial Intelligence represents Data to
Machine? Explain all logical connectives with truth table.
Ans: There are two types of propositions, atomic and complex, that can be represented through propositional logic.
1. Atomic means a single proposition like 'the sky is blue', 'hot days are humid', 'water is liquid', etc. It is made up of only one proposition symbol. These are sentences that must be either true or false.
2. Complex propositions are those which have been formed by connecting one, two, or more sentences. In propositional logic, there are five symbols (logical connectives) to create the syntax to represent the connection of two or more sentences.
Example: P1: It is raining today. P2: Street is wet. P = P1 ∧ P2: It is raining today and the street is wet.
Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:
The proposition symbols P, Q etc. are sentences
• If P is a sentence, ~P is a sentence (negation)
• If P and Q are sentences, P ∧ Q is a sentence (conjunction)
• If P and Q are sentences, P ∨ Q is a sentence (disjunction)
• If P and Q are sentences, P ⇒ Q is a sentence (implication)
• If P and Q are sentences, P ⇔ Q is a sentence (biconditional)
➢ Implication: A sentence such as P → Q, is called an implication.
Implications are also known as if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P = It is raining, and Q = Street is wet, so it is represented as P → Q.
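Since the truth tables themselves appear as figures, a small Python sketch that prints the truth table of all five connectives:

from itertools import product

print("P      Q      ~P     P^Q    PvQ    P=>Q   P<=>Q")
for P, Q in product([True, False], repeat=2):
    negation      = not P          # ~P
    conjunction   = P and Q        # P ^ Q
    disjunction   = P or Q         # P v Q
    implication   = (not P) or Q   # P => Q: false only when P is true and Q is false
    biconditional = P == Q         # P <=> Q: true when P and Q have the same value
    print(P, Q, negation, conjunction, disjunction, implication, biconditional)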
Such reasoning is characteristic of commonsense reasoning, where default
rules are applied when case-specific information is not available.
The conclusion of a nonmonotonic argument may turn out to be wrong. For example, if Tweety is a penguin, it is incorrect to conclude that Tweety flies.
Nonmonotonic reasoning often requires jumping to a conclusion and
subsequently retracting that conclusion as further information becomes
available.
• All systems of nonmonotonic reasoning are concerned with the
issue of consistency. Inconsistency is resolved by removing the
relevant conclusion(s) derived previously by default rules. Simply
speaking, the truth value of propositions in a nonmonotonic logic
can be classified into the following types:
1. facts that are definitely true, such as "Tweety is a bird"
2. default rules that are normally true, such as "Birds fly"
3. tentative conclusions that are presumably true, such as "Tweety flies"
When an inconsistency is recognized, only the truth value of the last type
is changed.
A related issue is belief revision. Revising a knowledge base often follows the
principle of minimal change: one conserves as much information as possible.
One approach towards this problem is the truth maintenance system, in which a "justification" for each proposition is stored, so that when some propositions are rejected, some others may need to be removed, too.
Remaining difficulties of nonmonotonic reasoning include:
➢ conflicts in defaults
➢ computational expense: maintaining consistency in a huge knowledge base is hard, if not impossible.
• For real-world systems such as robot navigation, we can use non-
monotonic reasoning.
• In non-monotonic reasoning, we can choose probabilistic facts or
can make assumptions.
Disadvantages of Non-monotonic Reasoning:
• In non-monotonic reasoning, the old facts may be invalidated by
adding new sentences.
• It cannot be used for theorem proving.
11.Explain the challenges of pragmatic analysis in NLP.
Ans: Challenges of Pragmatic Analysis in NLP
• Pragmatic analysis is a complex process that involves analyzing the context in which a text is used. It is challenging because the context can vary depending on various factors, such as the speaker's intention, the audience, the setting, and the cultural background. Therefore, developing an NLP system that uses pragmatic analysis requires a significant amount of data and a sophisticated analysis technique.
Another challenge of pragmatic analysis is that it can be computationally expensive. Pragmatic analysis involves analyzing the context of a text, which can be a time-consuming process. Therefore, developing an NLP system that uses pragmatic analysis requires a high-performance computing system that can analyze large amounts of data quickly.
In conclusion, pragmatic analysis is a crucial aspect of NLP that can help
overcome some of the limitations of NLP. It enables computers to understand
the meaning of a text beyond the surface level, leading to improved accuracy
and better performance of NLP systems. Developing an NLP system that uses
pragmatic analysis requires a significant amount of data and a sophisticated
analysis technique, but the benefits are well worth the effort. As the amount of
data available continues to increase, the need for accurate NLP systems that
use pragmatic analysis will only continue to grow.
12. Briefly explain the machine learning life cycle.
Ans: Machine Learning Life Cycle
The life cycle of a machine learning project involves a series of steps that
include:
1. Planning
2. Data Preparation
3. Model Engineering
4. Model Evaluation
5. Model Deployment
6. Monitoring and Maintenance
Each phase in the machine learning cycle follows a quality assurance framework for constant improvement and maintenance by strictly following requirements and constraints.
1. Planning: The planning phase involves assessing the scope, success metric, and feasibility of the application. You need to understand the business and how to use machine learning to improve the current process.
You need to create a feasibility report. It will consist of information about:
• Availability of the data: do we have enough data available to train the model? Can we get a constant supply of new and updated data? Can we use synthetic data to reduce the cost?
• Applicability: will this solution solve the problem or improve the current process? Can we even use machine learning to solve this issue?
2. Data Preparation:
The data preparation section is further divided into four parts: data procurement and labelling, cleaning, management, and processing.
➢ Data Collection and Labelling
We first need to decide how we will collect the data: by gathering internal data, using open-source data, buying it from vendors, or generating synthetic data. Each method has pros and cons, and in some cases we get the data from all four methodologies. After collection, we need to label the data. Buying cleaned and labelled data is not feasible for all companies, and you may also need to make changes to the data selection during the development process. That is why you cannot buy it in bulk, and why the data bought may eventually be useless for the solution. Data collection and labelling require most of the company's resources: money, time, professionals, subject matter experts, and legal agreements.
➢ Data Cleaning
Next, we will clean the data by imputing missing values, analyzing wrong-
labelled data, removing outliers, and reducing the noise. You will create a data
pipeline to automate this process and perform data quality verification.
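An illustrative scikit-learn sketch of such a cleaning pipeline (the columns and values are hypothetical), imputing missing values and scaling the features:

import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data with missing values.
df = pd.DataFrame({"age": [25, None, 40, 35], "income": [30000, 52000, None, 61000]})

cleaning = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # fill in missing values
    ("scale", StandardScaler()),                    # normalize/scale the features
])
print(cleaning.fit_transform(df))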
➢ Data Processing
The data processing stage involves feature selection, dealing with imbalanced
classes, feature engineering, data augmentation, and normalizing and scaling
the data. For reproducibility, we will store and version the metadata, data
modelling, transformation pipelines, and feature stores.
➢ Data Management
Finally, we will figure out data storage solutions, data versioning for reproducibility, storing metadata, and creating ETL pipelines. This part will ensure a constant data stream for model training.
3.Model Engineering:
In this phase, we will be using all the information from the planning phase to
build and train a machine learning model. For example: tracking model metrics,
ensuring scalability and robustness, and optimizing storage and compute
resources.
1. Build effective model architecture by doing extensive research.
2. Defining model metrics.
3. Training and validating the model on the training and validation dataset.
4. Tracking experiments, metadata, features, code changes, and machine
learning pipelines.
5. Performing model compression and ensembling.
6. Interpreting the results by incorporating domain knowledge experts.
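A minimal sketch of steps 2-3 above (defining a metric, then training and validating), using scikit-learn on a toy synthetic dataset for illustration:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, random_state=0)        # toy data, illustration only
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # training
val_accuracy = accuracy_score(y_val, model.predict(X_val))        # model metric on validation data
print(f"validation accuracy: {val_accuracy:.3f}")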
4. Model Evaluation:
Now that we have finalized the version of the model, it is time to test various
metrics. Why? So that we can ensure that our model is ready for production.
We will first test our model on a test dataset and make sure we involve subject
matter experts to identify the error in the predictions.
We also need to ensure that we follow industrial, ethical, and legal frameworks
for building Al solutions.
Furthermore, we will test our model for robustness on random and real-world data, making sure that the model infers fast enough to bring value.
Finally, we will compare the results with the planned success metrics and decide whether to deploy the model or not. In this phase, every process is recorded and versioned to maintain quality and reproducibility.
5.Model Deployment: In this phase, we deploy machine learning models to the
current system. For example, introducing automatic warehouse labeling using
the shape of the product. We will be deploying a computer vision model into
the current system, which will use the images from the camera to print the
labels.
Generally, the models can be deployed on the cloud, a local server, a web browser, packaged as software, or an edge device. After that, you can use an API, web app, plugin, or dashboard to access the predictions.
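A minimal sketch of serving predictions through an API (FastAPI here; the model file name and module name are hypothetical artefacts from the training phase):

import joblib
from fastapi import FastAPI

app = FastAPI()
model = joblib.load("model.joblib")    # hypothetical serialized model from training

@app.post("/predict")
def predict(features: list[float]):
    # Return the model's prediction for one feature vector sent in the request body.
    return {"prediction": int(model.predict([features])[0])}

# Assuming this lives in main.py, run with: uvicorn main:app --reload
# then POST a JSON list of numbers to /predict.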
In the deployment process, we define the inference hardware. We need to
make sure we have enough RAM, storage, and computing power to produce
fast results. After that, we will evaluate the model performance in production using A/B testing, ensuring user acceptability.
The deployment strategy is important. You need to make sure that the changes are seamless and that they have improved the user experience. Moreover, a project manager should prepare a disaster management plan. It should include a fallback strategy, constant monitoring, anomaly detection, and minimizing losses.
6.Monitoring and Maintenance:
After deploying the model to production, we need to constantly monitor and
improve the system. We will be monitoring model metrics, hardware and
software performance, and customer satisfaction.
The monitoring is done completely automatically, and the professionals are
notified about the anomalies, reduced model and system performance, and
bad customer reviews. After we get a reduced performance alert, we will
assess the issues and try to train the model on new data or make changes to
model architectures. It is a continuous process.
In rare cases, we have to revamp the complete machine learning life cycle to
improve the data processing and model training techniques, update new
software and hardware, and introduce a new framework for
continuous integration.
SECTION-C
III. Answer any four questions. Each question carries eight marks. (4*8=32)
13.Explain any 8 top technologies used in AI.
Ans: 1. Robotic Process Automation: It uses scripts that mimic a human process, and these are fed to a robot to complete the task effectively.
2. Speech Recognition: Siri is the best example of speech recognition which
understands and interacts with the voice response of human language by
mobile apps.
3. Virtual Agents: The Chatbot is a suitable example that is programmed to
interact with a human.
4. Machine Learning Platform: The main aim is to develop techniques that enable the computer to learn. These platforms are currently developed for prediction and act as audience management tools, which is most profitable for digital marketing.
5. AI Optimized Hardware: New graphics and processing units are designed and developed to perform AI-oriented tasks.
6. Decision Management: Intelligent machines are designed to introduce new rules and logic into AI systems for setup, prolonged maintenance, and optimum tuning, helping you run a profitable organization.
7. Deep Learning Platform: It is mainly used for classification and pattern
recognition for large scale data.
8. Biometrics: This technology is used to identify and analyze the human
attributes and physical features of a body's shape and form.
9. Content Writing: Artificial Intelligence helps in writing content such as articles, blogs, and reports by suggesting words that suit the sentences well, and it also provides spelling and grammar corrections for the online world.
10.Image Recognition: Image recognition is the method of recognizing and
distinguishing an object or trait in a digital image or video, and AI is frequently
being piled on top of this technology to great effect.
11. Peer to Peer Networks: Multiple systems are connected and share resources without the data going through a central server computer. This is also used in cryptocurrencies.
14. With a neat diagram, Explain the knowledge-Based System in AI.
Ans: Knowledge-based agents maintain searchable knowledge that can be reasoned over. These agents maintain an internal state of knowledge, take decisions regarding it, update the data, and perform actions on this data based on the decision. Basically, they are intelligent and respond to stimuli much like how humans react to different situations.
The above diagram represents a generalized architecture for a knowledge-based agent. The knowledge-based agent (KBA) takes input from the environment by perceiving it. The input is passed to the inference engine of the agent, which also communicates with the KB to decide on an action as per the knowledge stored in the KB. The learning element of the KBA regularly updates the KB by learning new knowledge.
A knowledge-based system comprises two distinguishable features, which are:
• A Knowledge base
• An Inference Engine.
Knowledge base: A Knowledge base represents the actual facts which exist
in the real world. It is the central component of a knowledge-based agent. It
is a set of sentences which describes the information related to the world.
Inference Engine: Inference means deriving new sentences from old.
Inference system allows us to add a new sentence to the knowledge base. A
sentence is a proposition about the world. Inference system applies logical
rules to the KB to deduce new information.
The inference system generates new facts so that an agent can update the KB. An inference system works mainly with two rules, which are given as:
• Forward chaining
• Backward chaining.
Actions performed by the Knowledge base Agent
When there is a need to add/ update some new information or sentences in
the knowledge-based system, we require an inference system. Also, to know
what information is already known to the agent, we require the inference
system. The technical words used for describing the mechanism of the
inference system are: TELL and ASK. When the agent solves a problem, it calls
the agent program each time. The agent program performs three things:
• It TELLS the knowledge base what it has perceived from the environment.
• It ASKS the knowledge base about the action it should take.
• It TELLS the action which is chosen, and finally, the agent
executes that action.
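A toy Python sketch of this TELL/ASK mechanism (the knowledge base is just a set of facts with simple forward-chaining rules; the facts and rules are made up for illustration):

class KnowledgeBase:
    def __init__(self, rules):
        self.facts = set()
        self.rules = rules                 # list of (premises, conclusion) pairs

    def tell(self, fact):                  # TELL: add what the agent has perceived
        self.facts.add(fact)
        self._forward_chain()

    def _forward_chain(self):              # inference engine: derive new sentences from old
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True

    def ask(self, query):                  # ASK: is this sentence entailed by the KB?
        return query in self.facts

kb = KnowledgeBase(rules=[({"raining"}, "street_wet"), ({"street_wet"}, "drive_slowly")])
kb.tell("raining")                         # the agent TELLs its percept
print(kb.ask("drive_slowly"))              # True -- the agent ASKs which action to take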
and knowledge level. At this level, an automated taxi agent actually implements its knowledge and logic so that it can reach the destination.
Approaches to designing a knowledge-based Agent
1. Declarative approach: We can create a knowledge-based agent by initializing it with an empty knowledge base and telling the agent all the sentences with which we want to start. This approach is called the declarative approach.
2. Procedural approach: In the procedural approach, we directly encode the desired behavior as program code, which means we just need to write a program that already encodes the desired behavior of the agent.
16.What is robotic engineering? With an Example, Explain forward kinematics
& inverse kinematics for 2-DOF. Use Geometrical approach.
Ans: Robotics is a sub-domain of engineering and science that includes mechanical engineering, electrical engineering, computer science, and others. This branch deals with the design, construction, and use of robots for control, sensory feedback, and information processing.
➢ Forward Kinematics: Forward kinematics involves determining the position and orientation of the end effector of a robot given the joint angles. It answers the question: "Where is the end effector located in the robot's workspace?" This information is crucial for tasks such as trajectory planning and motion control. To calculate the forward kinematics, engineers use kinematic equations and the Denavit-Hartenberg parameters. These parameters describe the relationship between adjacent robot links and joints, allowing for the transformation of joint angles into Cartesian coordinates.
➢ Inverse kinematics, on the other hand, involves finding the joint angles
required to achieve a desired position and orientation of the end
effector. The rotation matrix is crucial in this process as it helps us
determine the orientation of the end effector based on the given
position. By decomposing the rotation matrix, we can extract the joint
angles needed to achieve the desired orientation.
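A worked sketch for a planar 2-DOF arm with link lengths L1 and L2 (geometric approach; the link lengths and angles below are arbitrary example values). Forward kinematics: x = L1·cos(θ1) + L2·cos(θ1 + θ2), y = L1·sin(θ1) + L2·sin(θ1 + θ2). Inverse kinematics: cos(θ2) = (x^2 + y^2 − L1^2 − L2^2) / (2·L1·L2), θ1 = atan2(y, x) − atan2(L2·sin(θ2), L1 + L2·cos(θ2)).

import math

L1, L2 = 1.0, 0.8                       # link lengths (example values)

def forward(theta1, theta2):
    # Forward kinematics: end-effector position from the joint angles.
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    # Inverse kinematics (geometric approach, elbow-down solution).
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
    return theta1, theta2

x, y = forward(math.radians(30), math.radians(45))   # where does the arm end up?
print(inverse(x, y))                                  # recovers about (0.524, 0.785) radians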
17.With an Example, Explain Left most and Right most derivation. Construct
parse tree.
Ans: Left-most Derivation: In the left-most derivation, the sentential form of an
input is scanned and replaced from the left to the right. The sentential form in
this case is called the left-sentential form.
Ex: Consider the grammar X → X+X | X*X | a over the alphabet {a}.
The leftmost derivation for the string "a+a*a" may be:
X ⇒ X+X ⇒ a+X ⇒ a+X*X ⇒ a+a*X ⇒ a+a*a
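For the same grammar, the corresponding rightmost derivation (replacing the rightmost non-terminal at each step) would be:
X ⇒ X+X ⇒ X+X*X ⇒ X+X*a ⇒ X+a*a ⇒ a+a*a
Both derivations correspond to the same parse tree here: the root X branches into X, +, X; the right-hand X branches into X, *, X; and each remaining X derives a.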
18.Explain (a) CNN (b) RNN (c) LSTM
Ans:
a.) Convolutional Neural Networks (CNNs) are a powerful class of deep learning models widely applied in various tasks, including object detection, speech recognition, computer vision, image classification, and bioinformatics. They have also demonstrated success in time series prediction tasks. CNNs are feedforward neural networks that leverage
convolutional structures to extract features from data. Unlike traditional
methods, CNNs automatically learn and recognize features from the data
without the need for manual feature extraction by humans. The design of
CNNs is inspired by visual perception. The major components of CNNs
include the convolutional layer, pooling layer, and fully connected layer.
Below Figure presents a typical CNN architecture for image classification
tasks.
Convolutional Layer: The convolutional layer is a pivotal component of CNNs.
Through multiple convolutional layers, the convolution operation extracts
distinct features from the input.
➢ Pooling Layer: Typically following the convolutional layer, the pooling layer reduces the number of connections in the network by performing
down-sampling and dimensionality reduction on the input data. Its primary purpose is to alleviate the computational burden and address overfitting issues.
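A minimal Keras sketch of such an architecture (convolutional layer → pooling layer → fully connected layers); it assumes TensorFlow is installed, and the input shape and layer sizes are illustrative only:

import tensorflow as tf

# Illustrative CNN for 28x28 grayscale images and 10 output classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),  # convolutional layer
    tf.keras.layers.MaxPooling2D((2, 2)),                                            # pooling layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),                                    # fully connected layer
    tf.keras.layers.Dense(10, activation="softmax"),                                 # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()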
b.) Recurrent Neural Networks (RNNs) are a class of deep learning models that possess internal memory, enabling them to capture sequential dependencies. Unlike traditional neural networks that treat inputs as independent entities, RNNs consider the temporal order of inputs, making
them suitable for tasks involving sequential information. By employing a
loop, RNNs apply the same operation to each element in a series, with the
current computation depending on both the current input and the previous
computations.
The ability of RNNs to utilize contextual information is particularly valuable in tasks such as natural language processing, video classification, and speech recognition. For example, in language modelling, understanding the preceding words in a sentence is crucial for predicting the next word. RNNs excel at
capturing such dependencies due to their recurrent nature.
However, a limitation of simple RNNs is their short-term memory, which
restricts their ability to retain information over long sequences. To overcome
this, more advanced RNN variants have been developed, including Long Short-
Term Memory (LSTM), bidirectional LSTM, Gated Recurrent Unit (GRU),
bidirectional GRU, Bayesian RNN, and others.
c.) Long Short-Term Memory (LSTM)
Long Short-Term Memory (LSTM) is an advanced variant of Recurrent Neural Networks (RNN) that addresses the issue of capturing long-term dependencies. LSTM was initially introduced by Hochreiter and Schmidhuber in 1997 and further improved in 2013, gaining significant popularity in the deep learning community. Compared to standard RNNs, LSTM models have proven to be
more effective at retaining and utilizing information over longer sequences.
In an LSTM network, the current input at a specific time step and the output
from the previous time step are fed into the LSTM unit, which then generates
an output that is passed to the next time step. The final hidden layer of the last
time step, sometimes along with all hidden layers, is commonly employed for
classification purposes.
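A minimal Keras sketch of an LSTM classifier over sequences (assumes TensorFlow is installed; the sequence length, feature count, and layer sizes are illustrative only):

import tensorflow as tf

# Illustrative LSTM for sequences of 50 time steps with 8 features each.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(50, 8)),     # final hidden state of the last time step
    tf.keras.layers.Dense(1, activation="sigmoid"),    # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()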
Deepak .M
Assistant professor
Baldwin Methodist College