ML
Instructions to Candidates
Answer ALL the questions.
Use of a calculator is allowed. Use of a mobile phone is NOT allowed.
Q11. Explain the role of the kernel function in SVM. Discuss different types of kernel functions. (2)
Role of Kernel in SVM: 0.5 Marks
At least 3 different kernel functions with a one-line definition each: 0.5 * 3 = 1.5 Marks
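For reference, a minimal sketch of how the common kernels might be compared in practice with scikit-learn is given below; the toy dataset and every parameter value are illustrative assumptions, not part of the question.

# Sketch: trying common SVM kernels with scikit-learn (toy data and settings are assumed).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel)  # the kernel implicitly maps inputs to a higher-dimensional feature space
    clf.fit(X_train, y_train)
    print(kernel, round(clf.score(X_test, y_test), 3))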
Dimensionality reduction techniques are used to reduce the number of features or variables
in a dataset while still retaining the important information. This is particularly useful when
dealing with high-dimensional data where the number of variables is much larger than the
number of observations.
Principal Component Analysis (PCA): PCA is a linear dimensionality reduction technique that
aims to find a new set of uncorrelated variables, known as principal components, that
capture the maximum amount of variance in the original data. The first principal component
captures the direction of maximum variance in the data, the second captures the direction
of the maximum remaining variance, and so on. PCA is commonly used in data visualization,
feature extraction, and data compression.
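A minimal sketch of PCA with scikit-learn follows; the random data and the choice of two components are assumptions made only for illustration.

# Sketch: PCA with scikit-learn (illustrative data and settings).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))        # 100 observations, 10 features (assumed data)

pca = PCA(n_components=2)             # keep the two directions of maximum variance
X_reduced = pca.fit_transform(X)      # reduced data of shape (100, 2)
print(pca.explained_variance_ratio_)  # fraction of variance captured by each component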
t-Distributed Stochastic Neighbor Embedding (t-SNE): t-SNE is a nonlinear dimensionality
reduction technique that maps high-dimensional data onto a low-dimensional space
(typically 2D or 3D) by preserving the pairwise similarities between data points. It uses a
probabilistic approach to model the similarity between points in high-dimensional space and
low-dimensional space, with a focus on preserving the structure of the data. t-SNE is
particularly useful for visualizing high-dimensional data, as it can reveal the underlying
structure and relationships between the data points.
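A comparable sketch of a 2-D t-SNE embedding with scikit-learn is shown below; the digits dataset and the perplexity value are illustrative choices, not requirements.

# Sketch: t-SNE embedding with scikit-learn (illustrative dataset and perplexity).
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)  # 64-dimensional digit images

tsne = TSNE(n_components=2, perplexity=30, random_state=0)
X_2d = tsne.fit_transform(X)         # 2-D coordinates that preserve pairwise similarities
print(X_2d.shape)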
Uniform Manifold Approximation and Projection (UMAP): UMAP is a nonlinear dimensionality reduction technique similar to t-SNE, but it constructs the low-dimensional representation differently. UMAP builds a weighted nearest-neighbor graph of the data points in the high-dimensional space and then optimizes a low-dimensional embedding whose graph structure matches it as closely as possible. Because this optimization emphasizes the local structure of the data, UMAP is particularly useful for preserving the cluster structure of the data.
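The sketch below uses the third-party umap-learn package (an assumption; UMAP is not part of scikit-learn), with illustrative parameter values.

# Sketch: UMAP embedding via the umap-learn package (assumed to be installed).
from sklearn.datasets import load_digits
import umap

X, y = load_digits(return_X_y=True)

reducer = umap.UMAP(n_components=2, n_neighbors=15, random_state=0)  # n_neighbors controls how local the neighbor graph is
X_2d = reducer.fit_transform(X)
print(X_2d.shape)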
Locally Linear Embedding (LLE): LLE is a nonlinear dimensionality reduction technique that finds a low-dimensional representation of the data that preserves its local structure. LLE first expresses each data point as a weighted linear combination of its nearest neighbors, and then finds a low-dimensional embedding in which those same reconstruction weights still reproduce each point as closely as possible. Because it preserves this local neighborhood structure, LLE is useful for data visualization and anomaly detection.
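A minimal sketch with scikit-learn's LocallyLinearEmbedding follows; the S-curve dataset and the neighbor count are illustrative assumptions.

# Sketch: Locally Linear Embedding with scikit-learn (illustrative data and settings).
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=1000, random_state=0)  # points sampled from a 3-D S-shaped manifold

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
X_2d = lle.fit_transform(X)  # embedding that keeps each point close to the weighted combination of its neighbors
print(X_2d.shape)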
Q13. Predict whether the tuple (1.8, 2.1) belongs to Class A or Class B using the principles of
Maximum Likelihood Estimation. (3)
Formulae: 0.5 Marks
Correct Steps: 2.0 Marks
Correct Answer: 0.5 Marks
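For the "Formulae" step, a reasonable reading (an assumption, since the worked solution is not reproduced here) is the class-conditional Gaussian likelihood with independent features x and y, together with the maximum-likelihood decision rule over the two classes:

p(x, y \mid C) = \frac{1}{2\pi\,\sigma_x \sigma_y} \exp\!\left( -\frac{(x - \mu_x)^2}{2\sigma_x^2} - \frac{(y - \mu_y)^2}{2\sigma_y^2} \right), \qquad \hat{C} = \arg\max_{C \in \{A,\, B\}} p(1.8,\, 2.1 \mid C)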
µx µy σx σy