ML BIT Ans

The document contains 40 multiple choice questions testing knowledge of machine learning concepts. The questions cover topics like supervised and unsupervised learning, regression, classification, clustering, naive bayes, decision trees, and other algorithms like k-nearest neighbors and support vector machines.


ML BIT BANK

1. The probability that a particular hypothesis holds for a data set based on the Prior is called
a. Independent probabilities
b. Posterior probabilities
c. Interior probabilities
d. Dependent probabilities
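Q1's answer, the posterior probability, comes from Bayes' theorem: posterior = likelihood × prior / evidence. A minimal sketch, with hypothetical numbers chosen purely for illustration:

```python
# Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D).
# All numbers below are assumed, for illustration only.
prior_h = 0.3                          # P(H): prior probability of the hypothesis
likelihood = 0.8                       # P(D|H): probability of the data given H
evidence = 0.8 * 0.3 + 0.1 * 0.7       # P(D) via the law of total probability

posterior = likelihood * prior_h / evidence   # P(H|D): the posterior
print(round(posterior, 4))
```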

2. Predicting whether a tumour is malignant or benign is an example of?


a. Unsupervised Learning
b. Supervised Regression Problem
c. Supervised Classification Problem
d. Categorical Attribute

3. Price prediction in the domain of real estate is an example of?


a. Unsupervised Learning
b. Supervised Regression Problem
c. Supervised Classification Problem
d. Categorical Attribute

4. This is the first step in the supervised learning model.


a. Problem Identification
b. Identification of Required Data
c. Data Pre-processing
d. Definition of Training Data Set

5. This is the cleaning/transforming of the data set in the supervised learning model.
a. Problem Identification
b. Identification of Required Data
c. Data Pre-processing
d. Definition of Training Data Set
6. Training data run on the algorithm is called as?
a. Program
b. Training
c. Training Information
d. Learned Function

7. SVM is an example of?


a. Linear Classifier and Maximum Margin Classifier
b. Non-linear Classifier and Maximum Margin Classifier
c. Linear Classifier and Minimum Margin Classifier
d. Non-linear Classifier and Minimum Margin Classifier

8. Which of the following algorithms is an example of an ensemble learning algorithm?
a. Random Forest
b. Decision Tree
c. kNN
d. SVM

9. The distance between the hyperplane and the data points is called as:
a. Hyperplane
b. Margins
c. Error
d. Support Vectors
10. ---------- is a line that linearly separates and classifies a set of data.
a. Hyperplane
b. Soft Margin
c. Linear Margin
d. Support Vectors

11. Which of the following options is true about the kNN algorithm?
a. It can be used only for classification
b. It can be used only for regression
c. It can be used for both classification and regression
d. It cannot be used for both classification and regression
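As Q11 states, kNN serves both tasks: a majority vote over the k nearest neighbours for classification, and their mean target for regression. A toy one-dimensional sketch (data and parameters invented for illustration):

```python
from collections import Counter

def knn_predict(train, query, k=3, mode="classify"):
    """Toy kNN on 1-D points; train is a list of (x, label_or_value) pairs."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    targets = [t for _, t in nearest]
    if mode == "classify":                 # majority vote over the k neighbours
        return Counter(targets).most_common(1)[0][0]
    return sum(targets) / k                # regression: mean of the k targets

points = [(1.0, "A"), (1.5, "A"), (3.0, "B"), (3.2, "B"), (5.0, "B")]
print(knn_predict(points, 1.2))                        # classification
values = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]
print(knn_predict(values, 1.4, k=2, mode="regress"))   # regression
```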
12. Value to be predicted in machine learning is called as
a. Slope
b. Regression
c. Independent variable
d. Dependent variable
13. If the regression involves only one independent variable, it is called as
a. Multiple regression
b. One regression
c. Simple regression
d. Independent regression
14. This slope always moves upward on a graph from left to right.
a. Multilinear slope
b. No relationship slope
c. Negative slope
d. Positive slope
15. Which equation below is called as Explained Variation?
a. SSR (Sum of Squares due to Regression)
b. SSE (Sum of Squares due to Error)
c. SST (Sum of Squares Total)
d. R-square (R2)
16. ______measures the sum of the squared distances between each data point and its assigned centroid.
a. SSR (Sum of Squares due to Regression)
b. SSE (Sum of Squares due to Error)
c. SST (Sum of Squares Total)
d. R-square (R2)
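Q15 and Q16 both turn on the sum-of-squares decomposition. For an ordinary least-squares fit, SST = SSR + SSE and R-square = SSR / SST; a small sketch on invented data:

```python
# For an OLS fit: SST (total) = SSR (explained) + SSE (unexplained).
# The data points below are assumed, for illustration only.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
y_hat = [slope * x + intercept for x in xs]

sst = sum((y - my) ** 2 for y in ys)                  # total variation
ssr = sum((f - my) ** 2 for f in y_hat)               # explained variation (Q15)
sse = sum((y - f) ** 2 for y, f in zip(ys, y_hat))    # unexplained variation
print(round(ssr / sst, 3))                            # R-square
```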

17. The principle underlying the Market Basket Analysis is known as


a. Association rule
b. Bisecting rule
c. k-means
d. Bayes’ theorem
18. A Voronoi diagram is used in which type of clustering?
a. Hierarchical
b. Partitioning
c. Density based
d. Intuition based
19. k-means clustering algorithm is an example of which type of clustering method?
a. Hierarchical
b. Partitioning
c. Density based
d. Random
20. Which of the following clustering algorithms is most sensitive to outliers?
a. K-means clustering algorithm
b. K-medians clustering algorithm
c. K-medoids clustering algorithm
d. K-modes clustering algorithm
21. In which of the following situations the K-Means clustering fails to give good results?
a. Data points with outliers
b. Data points with different densities
c. Data points with round shapes
d. All of the above
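Q19 and Q21 concern k-means, a partitioning method that alternates nearest-centroid assignment with centroid updates; because centroids are means, outliers drag them (Q20, Q21). A minimal one-dimensional sketch of Lloyd's algorithm, with data and starting centroids assumed for illustration:

```python
# Minimal 1-D k-means (Lloyd's algorithm), k = 2.
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]
centroids = [0.0, 10.0]                   # arbitrary starting centroids

for _ in range(10):                       # fixed iteration budget
    clusters = [[], []]
    for p in points:                      # assignment step: nearest centroid
        idx = min((abs(p - c), i) for i, c in enumerate(centroids))[1]
        clusters[idx].append(p)
    centroids = [sum(c) / len(c) for c in clusters]   # update step: mean

print([round(c, 2) for c in centroids])
```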
22. The primary driver of clustering is knowledge discovery rather than ________.
a. Prediction
b. Analysis
c. Learning
d. All of the above
23. Which among the following algorithms are used in Machine learning?
a. Naïve Bayes
b. Support Vector Machine
c. K-Nearest Neighbor
d. All of the above
24. ___________ is a disadvantage of decision trees?
a. Decision trees are robust to outliers
b. Decision trees are prone to be overfit
c. Both a and b
d. None of the above
25. _________ is a part of machine learning that works with neural networks
a. Artificial Intelligence
b. Deep Learning
c. Both a and b
d. None of the above
26. Automatic Speech Recognition systems find a wide variety of applications in the _________ domains.
a. Medical Assistance
b. Industrial Robotics
c. Defence & Aviation
d. All of the above
27. Missing data items are ........................ with Bayes classifier
a. Ignored
b. Treated as equal compares
c. Treated as unequal compares
d. Replaced with a default value
28. The Bayes rule can be used in .............
a. Solving Queries
b. Increasing Complexity
c. Decreasing Complexity
d. Answering Probabilistic query
29. What is the average squared difference between the classifier's predicted output and the actual output called?
a. Mean Relative error
b. Mean squared error
c. Mean absolute error
d. Root mean squared error
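The metrics in Q29 differ only in how the residuals are aggregated; a short sketch contrasting MSE, MAE, and RMSE on hypothetical predictions:

```python
import math

# Hypothetical actual vs. predicted values, for illustration only.
actual = [3.0, 5.0, 2.0, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]
errors = [a - p for a, p in zip(actual, predicted)]

mse = sum(e ** 2 for e in errors) / len(errors)     # mean squared error (Q29)
mae = sum(abs(e) for e in errors) / len(errors)     # mean absolute error
rmse = math.sqrt(mse)                               # root mean squared error
print(mse, mae, round(rmse, 4))
```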
30. Logistic regression is a ........... regression technique that is used to model data having a ............ outcome.
a. Linear, Binary
b. Linear, Numeric
c. Nonlinear, Binary
d. Nonlinear, Numeric
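Per Q30, logistic regression is nonlinear: a linear score is squashed through the sigmoid so the output can model a binary outcome. A sketch with assumed, not fitted, coefficients:

```python
import math

def sigmoid(z):
    """Map any real score to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

w, b = 1.5, -3.0                 # hypothetical weight and intercept
x = 2.5
p = sigmoid(w * x + b)           # P(y = 1 | x)
label = 1 if p >= 0.5 else 0     # threshold to a binary outcome
print(round(p, 4), label)
```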
31. The effectiveness of an SVM depends upon ________________
a. kernel parameters
b. selection of kernel
c. soft margin parameter
d. All of the above
32. Support Vector Machine is_______________
a. geometric model
b. probabilistic model
c. logical model
d. none
33. Which of the following are powerful distance metrics used by the geometric model?
a. euclidean distance
b. manhattan distance
c. both a and b
d. square distance
34. Which of the following methods do we use to best fit the data in Logistic Regression?
a. least square error
b. maximum likelihood
c. jaccard distance
d. both a and b
35. Point out the wrong statement.
a. regression through the origin yields an equivalent slope if you center the data first
b. normalizing variables results in the slope being the correlation
c. least squares is not an estimation tool
d. none of the mentioned
36. If X and Y in a regression model are totally unrelated,
a. the correlation coefficient would be -1
b. the coefficient of determination would be 0
c. the coefficient of determination would be 1
d. the sse would be 0
37. Which of the following is a problem in multiple regression?
a. multicollinearity
b. overfitting
c. both multicollinearity & overfitting
d. underfitting
38. Which statement is true about the K-Means algorithm?
a. the output attribute must be categorical
b. all attribute values must be categorical
c. all attributes must be numeric
d. attribute values may be either categorical or numeric
39. The most general form of distance is
a. manhattan
b. euclidean
c. mean
d. minkowski
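Q39's answer, the Minkowski distance, generalises the metrics in Q33: p = 1 gives Manhattan and p = 2 gives Euclidean. A minimal sketch:

```python
def minkowski(a, b, p):
    """Minkowski distance; p=1 is Manhattan, p=2 is Euclidean (Q33, Q39)."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

u, v = (0.0, 0.0), (3.0, 4.0)
print(minkowski(u, v, 1))   # Manhattan: 7.0
print(minkowski(u, v, 2))   # Euclidean: 5.0
```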
40. Which one of these is not a tree based learner?
a. cart
b. id3
c. bayesian classifier
d. random forest
