Model Questions - Bank - DataScience

Model questions (Big Data / Data Science), with indicative marks. For some problems, the
final answer is shown.

(1) For the given experimental data,

Sample number Variable 1 Variable 2


1 20 20
2 17 25
3 30 11
4 15 14

Calculate the covariance matrix. (8 marks)

Ans:
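A sketch of how the covariance matrix can be computed, using the n - 1 divisor (this choice is consistent with the eigenvalues quoted in problems 2 and 5):

```python
import numpy as np

# Data from problem 1: rows are samples, columns are the two variables
X = np.array([[20, 20],
              [17, 25],
              [30, 11],
              [15, 14]], dtype=float)

# np.cov treats rows as variables by default, so pass rowvar=False;
# it divides by (n - 1) unless told otherwise
S = np.cov(X, rowvar=False)
print(S)
# approximately [[ 44.33 -23.33]
#                [-23.33  39.  ]]
```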

(2) Calculate the eigenvalues (lambda) and the eigenvectors (w) of the covariance
matrix S given by:

(8 marks)

Ans:

w=

-0.6658 -0.7462
-0.7462 0.6658

lambda =

18.1814 0
0 65.1519
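Taking S to be the covariance matrix of the problem-1 data (its eigenvalues match those quoted above), the decomposition can be checked numerically; note that each eigenvector is only determined up to sign:

```python
import numpy as np

# Assumed S: the sample covariance matrix of the problem-1 data
S = np.array([[44.3333, -23.3333],
              [-23.3333, 39.0]])

# eigh is the appropriate routine for a symmetric matrix; it returns
# eigenvalues in ascending order and eigenvectors as columns of w
lam, w = np.linalg.eigh(S)
print(lam)   # approximately [18.18, 65.15]
print(w)     # columns match the quoted w up to sign
```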
(3) Briefly explain how many principal components can be obtained from data of
dimension (200 x 100). (2 Marks)
(4) The eigenvalues of a (2 x 2) covariance matrix are given by: lambda1 = 18.18, lambda2
= 65.15.

Based on the eigenvalues, rank the principal components. How many components are
required to represent more than 45 %, 80 % and 95 % of the variance in the
data? (4 Marks)
Ans: lambda2 > lambda1, so the first principal component is obtained from the eigenvector
that corresponds to lambda2.
We need one component to represent > 45% of the variance in the data (lambda2 alone
accounts for 65.15 / 83.33 = 78.2%), and two components to represent > 80% and > 95% of
the variance: a (2 x 2) covariance matrix has only two components, which together
represent 100% of the variance.
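A minimal check of the variance fractions, using only the two eigenvalues given:

```python
# Eigenvalues from problem 4 (a 2 x 2 covariance matrix has exactly two PCs)
lam1, lam2 = 18.18, 65.15
total = lam1 + lam2

# Fraction of variance explained by the dominant component (lambda2)
frac1 = lam2 / total             # ~0.782, i.e. 78.2% > 45%
# Both components together always explain 100% for a 2 x 2 matrix
frac2 = (lam2 + lam1) / total    # 1.0 > 0.80 and > 0.95
print(round(frac1, 3), frac2)
```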

(5) For the data given in problem 1, the eigenvalues and eigenvectors of the
corresponding covariance matrix are given by:

w=

-0.6658 -0.7462
-0.7462 0.6658

lambda =

18.1814 0
0 65.1519

Calculate the first principal component. Make a rough plot showing the first principal
component. (6 Marks)

Ans: First principal component:


pc1 =

-1.61
3.95
-15.06
-1.87
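The quoted pc1 values can be reproduced by projecting the problem-1 data onto the eigenvector for the larger eigenvalue (the second column of w). Note that this projects the raw data, as in the quoted answer; many PCA treatments mean-centre the data first, which shifts all the scores by a constant:

```python
import numpy as np

# Data from problem 1
X = np.array([[20, 20],
              [17, 25],
              [30, 11],
              [15, 14]], dtype=float)

# Eigenvector corresponding to the larger eigenvalue (65.1519)
w2 = np.array([-0.7462, 0.6658])

pc1 = X @ w2
print(pc1)   # approximately [-1.61, 3.96, -15.06, -1.87],
             # matching the quoted answer up to rounding
```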
(6) Explain the PCA algorithm and discuss its advantages and limitations. (10
Marks)
You have to check the lecture slides and your CW1.

(7) For the data shown in problem 1, compute the first two principal components. (20
Marks)
You must calculate the covariance matrix, S. Then you have to calculate the eigenvalues
and the corresponding eigenvectors. From the eigenvectors you can obtain the principal
components. (This is a very long problem; it takes roughly 15 minutes to solve.)

(8) Let's say we have data of dimension (m x n). Explain how you would reduce the
dimensions of this data.
Use your knowledge of the PCA algorithm to answer this question.
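As a sketch of the PCA-based reduction (the function name, k, and the example dimensions are illustrative choices, not part of the question):

```python
import numpy as np

def reduce_dim(X, k):
    """Project (m x n) data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)              # centre each variable
    S = np.cov(Xc, rowvar=False)         # (n x n) covariance matrix
    lam, W = np.linalg.eigh(S)           # eigenvalues in ascending order
    order = np.argsort(lam)[::-1]        # re-sort descending by variance
    Wk = W[:, order[:k]]                 # top-k eigenvectors, (n x k)
    return Xc @ Wk                       # reduced data, (m x k)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
print(reduce_dim(X, 5).shape)            # (200, 5)
```

The columns of the result are ordered by decreasing variance, so the first column is the first principal component.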

(9) Compare the advantages and limitations of PCA with the KNN algorithm.
Answer this based on the experience and knowledge you gained during the lectures,
computer hours and tutorials, plus what you gained while preparing CW1.
(10) Draw a simple perceptron and label the parts involved. Explain the working
principle of the simple perceptron. (6 Marks)
You must draw the perceptron with at least two inputs. Clearly label the weights and bias
with suitable notations. Show how you calculate the net input and the net output. Mention
what type of propagation function will be used in the hidden and output layers.
(11) Draw a model feedforward or backpropagation neural network and label
the parts involved.
You can draw an ANN of architecture 2-1 with at least two inputs. Clearly show how the
information flows from the input to the output layer and how the weights and bias are
adjusted. (Check the lecture slides.)
(12) Based on your understanding, discuss the advantages and limitations
of deep neural networks.
Answer this based on the experience and knowledge you gained during the lectures,
computer hours and tutorials, plus what you gained while preparing CW1.

(13) Draw a DNN of architecture 3 - 3 - 1. Assume the number of inputs is equal
to two. Clearly label the parts involved and any details that may be relevant to the
constructed network. (4 Marks)
You can draw the architecture. Clearly show how the information flows from the input to
the output layer and how the weights and bias are adjusted (check the lecture slides).
Note: you have to show the propagation function, the weights and bias involved, the net
input, the net output, and the target in the diagram.

(14) Based on your understanding, briefly write about the working principle of the
KNN algorithm. (8 Marks)
Answer this based on the experience and knowledge you gained during the lectures,
computer hours and tutorials, plus what you gained while preparing CW1.

(15) Using the KNN algorithm, show that we have at least two categories of crystals in
the given sample. (8 - 10 Marks)

Crystal number Variable 1 Variable 2


1 18 17
2 16 16
3 100 11
4 90 13

Ans: Crystals 1 and 2 belong to category 1, and crystals 3 and 4 belong to category
2. You have to prove this using the KNN algorithm.
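KNN in its usual form is a supervised classifier, so one minimal way to support the stated answer is to compute pairwise distances and group mutual nearest neighbours; crystals 1 and 2 pair up with each other, as do 3 and 4:

```python
import numpy as np

# Crystal data: (Variable 1, Variable 2) for crystals 1-4
X = np.array([[18, 17],
              [16, 16],
              [100, 11],
              [90, 13]], dtype=float)

# Pairwise Euclidean distance matrix
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)        # ignore self-distances

nearest = D.argmin(axis=1)
print(nearest + 1)                 # prints [2 1 4 3]
# Crystals 1 and 2 are mutual nearest neighbours (distance ~2.2),
# as are crystals 3 and 4 (distance ~10.2), while the two groups sit
# ~72-84 apart in Variable 1: two clear categories.
```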

(16) Solve the perceptron shown in the figure below, which is supposed to predict
the target value t for the given inputs xi. The inputs xi and the corresponding
target t can be found in the table below:

Inputs
x1 x2 x3 x4 target, t
2 1 2 2 1
Assume the initial weights w1 = w2 = w3 = w4 = 0 and the initial bias b = 0. The changes
in the weights and bias are given by Δwi = a·t·xi and Δb = a·t, respectively, where a is
the learning rate. Fix the number of epochs to 1.
The propagation function f is defined on the net input i as follows:
if i < 0, then the output o = -1
if i = 0, then the output o = 0
if i > 0, then the output o = 1
[4 Marks]
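A sketch of one epoch, assuming a learning rate a = 1 (the question leaves a unspecified): the net input starts at 0, so o = f(0) = 0, which misses the target t = 1, and the update Δwi = a·t·xi, Δb = a·t is applied once:

```python
import numpy as np

def f(i):
    """Propagation function from the question (sign function)."""
    return -1 if i < 0 else (0 if i == 0 else 1)

x = np.array([2, 1, 2, 2])      # inputs x1..x4
t = 1                           # target
w = np.zeros(4)                 # initial weights w1..w4
b = 0.0                         # initial bias
a = 1.0                         # assumed learning rate (not given)

# One epoch over the single training sample
net = w @ x + b                 # net input = 0
o = f(net)                      # output = 0, does not match t = 1
if o != t:
    w = w + a * t * x           # delta w_i = a * t * x_i
    b = b + a * t               # delta b   = a * t

print(w, b)                     # w = [2. 1. 2. 2.], b = 1.0
print(f(w @ x + b))             # net input is now 14, so output is 1
```

After this single update the perceptron outputs 1 for the training sample, matching the target, which is the observation problem 17 asks about.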
(17) In the above problem (problem 16), after one iteration (or when epoch = 1), do
you think the perceptron is fully trained? Whether your answer is 'yes' or 'no', provide
justification.
[2 Marks]
