Objective Set - 2

The document is a practice exam for a machine learning course. It contains 20 multiple choice and fill in the blank questions testing core machine learning concepts. Topics covered include features, kernels, evaluation metrics, naive Bayes assumptions, principal component analysis, decision trees, backpropagation, and more.

Uploaded by

Adeib Arief
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOC, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
12 views2 pages

Objective Set - 2

The document is a practice exam for a machine learning course. It contains 20 multiple choice and fill in the blank questions testing core machine learning concepts. Topics covered include features, kernels, evaluation metrics, naive Bayes assumptions, principal component analysis, decision trees, backpropagation, and more.

Uploaded by

Adeib Arief
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOC, PDF, TXT or read online on Scribd
You are on page 1/ 2

ACADEMIC YEAR: 2022-23 SET NO: 2

III B.TECH II SEMESTER (R18) CSE & CSD I MID EXAMINATIONS, MAY - 2023
MACHINE LEARNING
OBJECTIVE EXAM
NAME_____________________________ HALL TICKET NO A

Answer all the questions. All questions carry equal marks. Time: 20 min. Marks: 10.
I. Choose the correct alternative:
1. A feature can be used as a ________________ (CO2,K5) [ ]
A. predictor  B. binary split  C. All of the above  D. None of the above

2. The effectiveness of an SVM depends upon ________________ (CO2,k6) [ ]
A. kernel parameters  B. selection of kernel  C. soft margin parameter  D. All of the above

3. Which of the following evaluation metrics cannot be applied to compare logistic regression output with the target? (CO3,k1) [ ]
A. accuracy  B. auc-roc  C. logloss  D. mean-squared-error

4. A measurable property or parameter of the data-set is ______________ (CO1,k4) [ ]
A. training data  B. test data  C. feature  D. validation data

5. Which of the following can only be used when the training data are linearly separable? (CO3,k5) [ ]
A. linear logistic regression  B. linear soft margin svm  C. linear hard-margin svm  D. the centroid method
6. What is the impact of high variance on the training set? (CO3,k5) [ ]
A. underfitting  B. overfitting  C. both underfitting & overfitting  D. depends upon the dataset
7. The father of machine learning is _____________ (CO1,k1) [ ]
A. Geoffrey Everest Hinton  B. Geoffrey Hill  C. Geoffrey Chaucer  D. None of the above
8. Which of the following is true about Naive Bayes? (CO3,k1) [ ]
A. Assumes that all the features in a dataset are equally important
B. Assumes that all the features in a dataset are independent
C. Both A and B
D. None of the above options
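As background for question 8, the "naive" assumption is the conditional independence of the features given the class; in standard textbook notation (the symbols x_i and y are assumed here, not taken from this paper):

```latex
P(x_1, \dots, x_n \mid y) \;=\; \prod_{i=1}^{n} P(x_i \mid y),
\qquad
\hat{y} \;=\; \arg\max_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y)
```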
9. Which of the following is a reasonable way to select the number of principal components “k”? (CO3,k1) [ ]
A. Choose k to be the smallest value so that at least 99% of the variance is retained
B. Use the elbow method
C. Choose k to be 99% of m (k = 0.99*m, rounded to the nearest integer)
D. Choose k to be the largest value so that 99% of the variance is retained
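The selection rule described in option A of question 9 can be sketched numerically. The following is a minimal illustration on synthetic data (the matrix X and all names here are hypothetical, not from the exam): take the eigenvalues of the covariance matrix and choose the smallest k whose cumulative variance ratio reaches 99%.

```python
import numpy as np

# Hypothetical data matrix: 200 samples, 10 features, with
# deliberately unequal per-feature variances (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) * np.arange(1, 11)

# Centre the data and compute the covariance eigenvalues, largest first.
Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]

# Cumulative fraction of variance explained by the top components.
ratio = np.cumsum(eigvals) / eigvals.sum()

# Smallest k such that at least 99% of the variance is retained.
k = int(np.argmax(ratio >= 0.99)) + 1
print(k)  # smallest k retaining >= 99% of the variance
```

Scikit-learn's PCA offers the same behaviour directly via `PCA(n_components=0.99)`.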

10. ______________ is a widely used and effective machine learning algorithm based on the idea of bagging. (CO3,k1) [ ]
A. Regression  B. Classification  C. Decision Tree  D. Random Forest
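Question 10 turns on the idea of bagging (bootstrap aggregating). As a minimal sketch, assuming a tiny one-feature decision "stump" in place of a full tree (all names here are hypothetical): each base learner is trained on a bootstrap resample of the data, and the ensemble predicts by majority vote.

```python
import numpy as np
from collections import Counter

def fit_stump(X, y):
    """Pick the threshold on a single feature that minimises training error."""
    best = None
    for t in np.unique(X):
        err = np.mean((X >= t).astype(int) != y)
        if best is None or err < best[1]:
            best = (t, err)
    return best[0]

def bagged_predict(X_train, y_train, x, n_estimators=25, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_estimators):
        # Bootstrap sample: draw with replacement from the training set.
        idx = rng.integers(0, len(X_train), size=len(X_train))
        t = fit_stump(X_train[idx], y_train[idx])
        votes.append(int(x >= t))
    # Majority vote across the ensemble.
    return Counter(votes).most_common(1)[0][0]

X = np.array([1.0, 2.0, 3.0, 8.0, 9.0, 10.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(bagged_predict(X, y, 9.5))  # prints 1
print(bagged_predict(X, y, 1.5))  # prints 0
```

Random Forest extends this scheme by also sampling a random subset of features at each tree split.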

II. Fill in the blanks:

11. Missing data items are __________________ with the Bayes classifier. (CO3,k3)

12. Backpropagation is used to update each of the _____________________ in the network. (CO2,k1)


13. ______________________ is a machine learning algorithm based on supervised learning. (CO1,k2)
14. _________ algorithm finds the most specific hypothesis that fits all the positive examples. (CO1,k2)
15. ______________ learning is a logical approach to machine learning. (CO1,k4)
16. _____________ is basically the learning task of the machine. (CO1,k6)
17. ____________ is defined as a supposition or proposed explanation based on insufficient evidence. (CO2,k4)
18. __________ theory predictions are used to predict generative learning algorithms. (CO2,k5)

19. _________________ is considered a latent variable model used to find the local maximum likelihood parameters of a statistical model. (CO3,k3)

20. The Gibbs algorithm has at most _______________ the error of the Bayes optimal classifier. (CO3,k4)

-ooOoo-
