NM (2020)
The figures in the right-hand margin indicate marks.

f) A quadratic equation x^2 - 4x + 4 = 0 is defined with …
k) When is Gauss elimination used?

5. Derive the second order Runge-Kutta formula. 5
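For reference, a compact sketch of one standard route to the second-order formula (Heun's form; the notation h, k_1, k_2 is conventional, not fixed by this paper):

```latex
% Heun's second-order Runge-Kutta scheme for y' = f(x, y):
k_1 = h\,f(x_n, y_n), \qquad
k_2 = h\,f(x_n + h,\; y_n + k_1), \qquad
y_{n+1} = y_n + \tfrac{1}{2}(k_1 + k_2)
% Taylor-expanding k_2 about (x_n, y_n) and comparing with
% y(x_n + h) = y_n + h\,y'_n + \tfrac{h^2}{2} y''_n + O(h^3)
% shows agreement through the h^2 term, hence "second order".
```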
GROUP-C
7. Answer any two of the following questions: 10x2=20
a) Find a root of the equation x^3 - 3x + 1.06 = 0 by
bisection method correct up to three decimal
places. Establish Newton forward interpolation
formula. 5+5=10
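A minimal Python sketch of the bisection iteration for the first half of (a); the bracket [0, 1] and the tolerance are illustrative choices, not part of the question:

```python
def f(x):
    return x**3 - 3*x + 1.06

def bisection(a, b, tol=0.5e-3):
    # Requires a sign change on [a, b].
    assert f(a) * f(b) < 0
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# f(0) = 1.06 > 0 and f(1) = -0.94 < 0, so a root lies in [0, 1].
print(round(bisection(0, 1), 3))
```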
b) Use Gauss elimination method to solve the
following system of linear equations: 10
3x - 2y + 2z = 12
x + 2y + 3z = 11
2x - 2y - z = 3
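A sketch of plain Gaussian elimination with back-substitution for (b); it assumes no row pivoting is needed, which holds for this system:

```python
def gauss_solve(A, b):
    n = len(b)
    # Forward elimination: zero out entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3.0, -2.0, 2.0], [1.0, 2.0, 3.0], [2.0, -2.0, -1.0]]
b = [12.0, 11.0, 3.0]
print(gauss_solve(A, b))
```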
c) Find the cube root of 10 by Newton-Raphson
method. 10
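For (c), Newton-Raphson on f(x) = x^3 - 10 gives the update x <- x - (x^3 - 10)/(3x^2); a quick Python sketch (the starting guess 2.0 and the tolerance are assumptions):

```python
def cube_root_newton(a, x=2.0, tol=1e-6):
    # Newton-Raphson on f(x) = x^3 - a, with f'(x) = 3x^2.
    while abs(x**3 - a) > tol:
        x = x - (x**3 - a) / (3 * x**2)
    return x

print(cube_root_newton(10))  # ~2.15443
```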
d) Apply the Runge-Kutta method of order 4 to solve dy/dx = x + y, where y(0) = 1; find the value of y at x = 0.1 and 0.2. 10
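A Python sketch of the classical RK4 step for (d), with h = 0.1 so that y(0.1) and y(0.2) land on grid points:

```python
def f(x, y):
    return x + y

def rk4(x, y, h, steps):
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h/2, y + k1/2)
        k3 = h * f(x + h/2, y + k2/2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
        print(f"y({x:.1f}) = {y:.6f}")
    return y

rk4(0.0, 1.0, 0.1, 2)  # exact solution is y = 2e^x - x - 1
```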
577/Comp.Sc. UG/5th Sem/COM.SC-H-DSE-L-502/20

U.G. 5th Semester Examination - 2020
COMPUTER SCIENCE [HONOURS]
Discipline Specific Elective (DSE)
Course Code : COM.SC-H-DSE-L-502
Full Marks : 60    Time : 24 Hours
The figures in the right-hand margin indicate marks.
Candidates are required to give their answers in their own words as far as practicable.
a) Briefly describe machine learning.
b) What do you mean by Posterior Probability?
c) "We can get multiple local optimum solutions if we solve a linear regression problem by minimizing the sum of squared errors using gradient descent." Justify whether this statement is true or false.
d) "Low dissimilarity between each member of a class and low similarity between members of different classes are followed in a supervised task of data mining." Exemplify the sentence.
e) Differentiate between supervised and unsupervised learning.
f) "In a linear least-squares regression problem, adding regularization can decrease the error of the solution on the training data." State whether this statement is true or false and justify your answer.
g) Write down Bayes' Theorem.
h) What is the oscillation effect in the Gradient Descent technique, and what is the reason behind it?
… inter-cluster similarity with suitable example.
k) Write down two drawbacks of a single-layer perceptron network.
l) How are artificial neurons different from biological neurons?
m) How is the residual sum of squares method useful for fitting a linear model?
n) Differentiate between Logistic Regression and Linear Regression.
o) Why is it called the 'Naive' Bayes classifier?
3. "XOR function is not linearly separable by a single decision boundary line." Explain.
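A brute-force Python sketch supporting question 3: it searches a coarse grid of line parameters (w1, w2, b) and finds none that classifies XOR correctly; the grid itself is an illustrative assumption (the non-separability holds for all real parameters):

```python
import itertools

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def separates(w1, w2, b):
    # A single line w1*x1 + w2*x2 + b = 0 must put class 1
    # strictly on one side and class 0 on the other.
    return all((w1*x1 + w2*x2 + b > 0) == (t == 1)
               for (x1, x2), t in XOR)

grid = [i / 4 for i in range(-8, 9)]  # -2.0 .. 2.0 in steps of 0.25
found = any(separates(w1, w2, b)
            for w1, w2, b in itertools.product(grid, repeat=3))
print(found)  # False: no line on this grid separates XOR
```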
4. Explain the concept of a Perceptron with a neat diagram and represent the Boolean function AND using a Perceptron. 2+3=5
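For question 4, a perceptron with weights (1, 1) and bias -1.5 realizes AND; a minimal sketch, where these particular weights are one standard choice, not the only one:

```python
def perceptron(x1, x2, w1=1.0, w2=1.0, b=-1.5):
    # Step activation: fire iff the weighted sum crosses the threshold.
    return 1 if w1*x1 + w2*x2 + b > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", perceptron(x1, x2))  # matches AND
```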
5. Explain the Minkowski distance and relate it with the Manhattan and Euclidean distances. What is clustering? 3+2=5
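A small sketch for question 5: the Minkowski distance of order p, which reduces to the Manhattan distance at p = 1 and the Euclidean distance at p = 2 (the sample vectors are illustrative):

```python
def minkowski(u, v, p):
    # (sum_i |u_i - v_i|^p)^(1/p)
    return sum(abs(a - b)**p for a, b in zip(u, v)) ** (1 / p)

u, v = (1, 2, 3), (4, 0, 3)
print(minkowski(u, v, 1))  # Manhattan: 5
print(minkowski(u, v, 2))  # Euclidean: ~3.606
```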
6. Illustrate the candidate elimination algorithm with a suitable example.
7. Explain the 'Naive' Bayes Classifier with an example.
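For question 7, a tiny categorical Naive Bayes sketch; the toy weather data is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy training set (illustrative only): (outlook, play?)
data = [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
        ("rain", "yes"), ("rain", "yes"), ("sunny", "yes"),
        ("overcast", "yes"), ("rain", "no")]

prior = Counter(label for _, label in data)
cond = defaultdict(Counter)
for x, label in data:
    cond[label][x] += 1

def posterior(x):
    # P(label | x) is proportional to P(x | label) * P(label);
    # with a single feature the "naive" product is one factor.
    scores = {c: (cond[c][x] / prior[c]) * (prior[c] / len(data))
              for c in prior}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(posterior("sunny"))  # P(yes|sunny) = 1/3, P(no|sunny) = 2/3
```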
GROUP - C
Answer any two of the following : 10x2=20
8. In the context of logistic regression, define the prediction and loss function. Show that if the class-conditional densities p(x|Ck), k = 1, 2, are Gaussian, with …
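For the first half of question 8, a minimal sketch of the logistic prediction (a sigmoid of a linear score) and the binary cross-entropy loss; the sample weights are arbitrary illustrative numbers:

```python
import math

def predict(w, b, x):
    # Logistic regression prediction: sigmoid of w.x + b.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def loss(y, y_hat):
    # Binary cross-entropy for a single example.
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

p = predict([0.5, -0.25], 0.1, [1.0, 2.0])
print(p, loss(1, p))
```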
10. Define the expected squared loss function for a regression problem y = f(x), where y ∈ R and x ∈ R^D. Derive the bias-variance decomposition of the expected squared loss function from first principles. 5+5=10
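For reference, the identity question 10 asks for, in the usual notation (here ŷ(x) is the learned predictor and the expectation runs over datasets and the additive noise; this notation is an assumption, not fixed by the paper):

```latex
% Expected squared loss at a point x, with y = f(x) + \varepsilon,
% \mathbb{E}[\varepsilon] = 0, \operatorname{Var}(\varepsilon) = \sigma^2:
\mathbb{E}\big[(y - \hat{y}(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}[\hat{y}(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{y}(x) - \mathbb{E}[\hat{y}(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```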
11. The values of the independent variable x and the dependent variable y are given below:

    x : 0  1  2  3  4
    y : 2  3  4  5  6

Calculate the least square regression line y = ax + b. Also estimate the value of y when x is 10. 4+6=10
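A sketch of the normal-equation solution for question 11; for this data the fit comes out to y = x + 2, so the estimate at x = 10 is 12:

```python
xs = [0, 1, 2, 3, 4]
ys = [2, 3, 4, 5, 6]
n = len(xs)

# Least-squares slope and intercept from the normal equations.
sx, sy = sum(xs), sum(ys)
sxx = sum(x*x for x in xs)
sxy = sum(x*y for x, y in zip(xs, ys))
a = (n*sxy - sx*sy) / (n*sxx - sx*sx)
b = (sy - a*sx) / n

print(a, b)      # 1.0 2.0  ->  y = x + 2
print(a*10 + b)  # 12.0
```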