FunAI-Assignment-Week-11
Assignment No. 11
10 Marks
Each question carries 01 Mark. Some questions have MORE than ONE correct option.
All correct options must be identified for the answer to be evaluated as correct.
Q2. Intra-cluster cohesion measures how close the data points in a cluster are to the cluster centroid
and is a reflection of ____________.
A. Similarity.
B. Dissimilarity.
C. Compactness.
D. Isolation.
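As a quick illustration, here is a minimal NumPy sketch of compactness: the closer the points sit to their centroid, the smaller the summed distance (the 2-D cluster below is made-up data):

    import numpy as np

    # Made-up 2-D cluster (illustrative data only).
    cluster = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.2], [0.9, 1.9]])

    centroid = cluster.mean(axis=0)
    distances = np.linalg.norm(cluster - centroid, axis=1)
    cohesion = distances.sum()  # smaller value => more compact cluster

    print(f"centroid = {centroid}, cohesion = {cohesion:.3f}")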
Q3. Which of the following statements are correct for a Maximum Margin Classifier?
I. The best hyperplane is the one that represents the largest separation, or margin,
between the two classes.
II. The linear classifier defined by the maximum-margin hyperplane is known as a
maximum-margin classifier.
A. Statements I and II
B. Only Statement II
C. Only Statement I
D. None of the above.
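To make statement I concrete, here is a minimal scikit-learn sketch that fits a linear SVM and reports the margin width 2/||w|| (the toy points and the large C value, which approximates a hard margin, are assumptions):

    import numpy as np
    from sklearn.svm import SVC

    # Two linearly separable classes (made-up points).
    X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
    y = np.array([0, 0, 1, 1])

    clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin

    w = clf.coef_[0]
    print("margin width:", 2.0 / np.linalg.norm(w))  # largest achievable separation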
Q4. Exclusive reliance on dot products enables a support vector machine to solve non-linear
problems because of the following:
I. Learning depends only on dot products of sample pairs.
II. Classification of an unknown sample depends only on its dot products with the training samples.
A. Only statement I.
B. Statements I and II.
C. Only statement II.
D. None of the above.
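Both statements rest on the same fact: in the dual formulation the data appear only inside dot products, so swapping the plain dot product for a kernel function implicitly computes a dot product in a higher-dimensional space. A small sketch comparing the two (the sample vectors and gamma are made up):

    import numpy as np

    def rbf_kernel(a, b, gamma=0.5):
        # Dot product in an implicit, high-dimensional feature space.
        return np.exp(-gamma * np.linalg.norm(a - b) ** 2)

    x1 = np.array([1.0, 2.0])
    x2 = np.array([2.0, 0.5])

    print("linear kernel (plain dot product):", x1 @ x2)
    print("RBF kernel (implicit feature space):", rbf_kernel(x1, x2))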
Q5. Least-squares regression is a statistical procedure to find the best fit for a set of data points.
Which of the following statements express the basic idea of the best fit, or the least-squares
regression line?
A. Minimizing the sum of the offsets or residuals of points from the plotted curve.
B. The intercept of the least-squares regression line describes some specific property of the
data.
C. The slope of the regression line describes how much we expect the dependent variable to
change, on average, for every unit change in the independent variable.
D. Least-squares regression line is the line that makes the sum of the squares of the vertical
distances of the data points from the line as small as possible.
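Option D states the standard definition. As a worked sketch, NumPy's polyfit minimises exactly that sum of squared vertical distances (the data points below are made up):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    # Fits y = slope * x + intercept by least squares.
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (slope * x + intercept)

    print(f"y = {slope:.3f}x + {intercept:.3f}")
    print("sum of squared residuals:", np.sum(residuals ** 2))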
Q6. Clustering is hard to evaluate, but very useful in practice. Cluster evaluation can be done using
the following:
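For context, one widely used internal evaluation measure is the silhouette coefficient; a minimal scikit-learn sketch follows (the synthetic blobs and the choice of three clusters are assumptions):

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    # Synthetic, well-separated blobs (illustrative data only).
    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("silhouette score:", silhouette_score(X, labels))  # closer to 1 is better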
Q8. Assertion: One of the rules of thumb for preparation of data when using ordinary Least
Squares Regression is to remove collinearity.
Reason: Linear regression will over-fit the data when we have highly correlated input
variables.
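A small simulation illustrating the Reason: when two inputs are almost perfectly correlated, ordinary least squares splits their shared effect arbitrarily, so the individual coefficients become unstable even though their sum stays near the true value (the synthetic data and noise levels are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.01, size=n)  # x2 ~ x1: near-perfect collinearity
    y = 3 * x1 + rng.normal(scale=0.1, size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    print("coefficients (intercept, x1, x2):", coef)  # x1, x2 weights unstable; sum ~ 3
    print("corr(x1, x2):", np.corrcoef(x1, x2)[0, 1])  # ~1 flags collinearity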
Q10. Assertion: The kernel trick is an efficient way to transform data into higher dimensions.
Reason: The kernel trick allows us to project data from a training set which isn't linearly
separable into a higher dimensional space where it becomes linearly separable.
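A minimal sketch supporting the Reason: concentric circles are not linearly separable in two dimensions, yet an RBF-kernel SVM separates them via the implicit projection (the dataset parameters below are assumptions):

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Concentric circles: no straight line separates the classes in 2-D.
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    linear = SVC(kernel="linear").fit(X, y)
    rbf = SVC(kernel="rbf").fit(X, y)

    print("linear kernel accuracy:", linear.score(X, y))  # poor: not separable
    print("RBF kernel accuracy:", rbf.score(X, y))        # ~1.0 via kernel trick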