Ktra Machine Learning (Machine Learning Test)

Temperature   Sunny   Cool
Low           T       T
Low           T       T
Medium        T       F
Medium        T       T
High          T       F
High          F       F

 Entropy H(Cool) = 1
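The entropy value can be checked directly: the Cool column has three T and three F values, so H = 1. A minimal sketch using the standard base-2 Shannon entropy:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# The Cool column from the table above: three T and three F values.
cool = ["T", "T", "F", "T", "F", "F"]
print(entropy(cool))  # → 1.0
```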

Decision trees apply the machine learning technique of inductive learning.

 True

Which of the following describes a situation where the Naïve Bayes algorithm performs poorly?

 When a zero-frequency situation occurs.
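The zero-frequency problem arises when a feature value never co-occurs with a class in the training data, so its estimated likelihood is zero and it zeroes out the whole product of probabilities. Laplace (add-one) smoothing is the standard fix; a minimal sketch with hypothetical counts:

```python
def smoothed_likelihood(count, class_total, n_values, alpha=1):
    """P(feature=value | class) with Laplace smoothing.

    count       - times this value appeared with the class
    class_total - total training examples of the class
    n_values    - number of possible values the feature can take
    """
    return (count + alpha) / (class_total + alpha * n_values)

# Unsmoothed, this estimate would be 0/10 = 0, zeroing the whole
# product of likelihoods; smoothed, it becomes small but non-zero.
print(smoothed_likelihood(0, 10, 3))  # → 1/13 ≈ 0.0769
```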

Which of the following claims regarding the k-Nearest Neighbor algorithm is untrue?
 Regression cannot be performed using it.

Which of the following statements about the Naïve Bayes classifier method is false?

 It is not applicable to multi-class classifications or binary classifications.

What is the purpose of a decision tree algorithm?

 Classification

What kind of machine learning algorithm uses labeled data for training in order to predict new,
unseen data?

 Supervised Learning

In the Bayes Theorem, which of the following terms is not used?

 Unlikelihood

Which of the following claims about k-Nearest Neighbor as a lazy learning algorithm is untrue?

 Every intermediate outcome is stored.
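As a lazy learner, kNN simply stores the training examples and defers all computation to query time; no model or intermediate results are built in advance. A minimal sketch with hypothetical toy data:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(train, query, k=3):
    """Lazy learner: just store (point, label) pairs and compute
    distances at query time, returning the majority label."""
    nearest = sorted(train, key=lambda pl: dist(pl[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

train = [((1, 1), "A"), ((1, 2), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(knn_predict(train, (1.5, 1.5)))  # → A
```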


The Manhattan distance and the Euclidean distance are:

 Manhattan distance (L1 distance): for two points (x1, y1) and (x2, y2), it is computed as
D_Manhattan = |x1 - x2| + |y1 - y2|

 Euclidean distance (L2 distance) is the straight-line distance:
D_Euclidean = sqrt((x1 - x2)^2 + (y1 - y2)^2)
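The two distances can be written directly from the formulas above; a minimal sketch for 2-D points:

```python
from math import sqrt

def manhattan(p, q):
    """L1 distance: sum of absolute coordinate differences."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """L2 distance: straight-line distance."""
    return sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

print(manhattan((1, 2), (4, 6)))  # → 7
print(euclidean((1, 2), (4, 6)))  # → 5.0
```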

Which machine learning method enables models to make decisions based on feedback from their
environment and prior experience?

 Reinforcement Learning

What specifically distinguishes supervised learning from unsupervised learning?

 Labeled data is necessary for supervised learning, but not for unsupervised learning.

Does the Naïve Bayes algorithm's assumption restrict its application?

 True

What process prepares raw data for machine learning by cleaning, converting, and standardizing it?

 Data Preprocessing

Which of the following is not an application of the Naïve Bayes algorithm?

 Projecting the stock market.

What is the Bayes theorem formula, where (X & Y) and (M & N) are events and P(Y), P(M) & P(N)
≠ 0?

 P(X|Y) = [P(Y|X) * P(X)] / P(Y)
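The formula can be sanity-checked numerically (the probabilities below are hypothetical, chosen only for illustration):

```python
# Hypothetical values: P(X) = 0.3, P(Y|X) = 0.8, P(Y) = 0.5.
p_x, p_y_given_x, p_y = 0.3, 0.8, 0.5

# Bayes theorem: P(X|Y) = [P(Y|X) * P(X)] / P(Y)
p_x_given_y = p_y_given_x * p_x / p_y
print(p_x_given_y)  # → 0.48
```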

What is machine learning's primary objective?

 To give computers the ability to learn from data and gradually improve performance.

.Text classification is the primary application of naïve Bayes classifier methods.

 True
Which of the following claims about kNN's lazy learning style is untrue?

 It uses the training data to construct a discriminative function.

What is the process of feeding data into a machine learning model to adjust its internal parameters
and improve its performance?

 Training the model

Which of the following is a parametric machine learning algorithm?

 Naïve Bayes

What is machine learning?


 The independent learning process made possible by computer programs.

Which of these is not a supervised machine learning algorithm?

 K-means

Which of the following claims regarding the Nearest Neighbor algorithm is untrue?

 This learning method is not instance-based.

Which of the following is a machine learning method that is not supervised?

 K-means

Decision tree uses the inductive learning machine learning approach.

 True

How far apart are a new query instance (3, 4) and a data point (9, 7) in Manhattan space?

 9

What assumptions underlie the Naïve Bayesian classifier?

 The model is generative and all input attributes are assumed to be independent of one
another.
The Manhattan distance and the Euclidean distance used in the kNN technique for distance
calculation are the same.

 False

Which method works well for a binary classification problem?


 Decision Trees

Which of the following claims regarding the categorization of k-Nearest Neighbors is untrue?

 The output is the object's property value.

Regarding the decision tree, which of the following claims is not true?

 It is limited to use with binary classification tasks.

During training, what is the objective of a decision tree algorithm?

 To minimize impurity
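A decision tree picks the split whose child nodes are as pure as possible. A common impurity measure is the Gini impurity; a minimal sketch of scoring a candidate split:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    total = len(labels)
    return 1 - sum((c / total) ** 2 for c in Counter(labels).values())

def split_impurity(left, right):
    """Weighted average impurity of the two child nodes."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# A perfectly separating split (impurity 0) beats a mixed one.
print(split_impurity(["T", "T"], ["F", "F"]))  # → 0.0
print(split_impurity(["T", "F"], ["T", "F"]))  # → 0.5
```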

Which probability-theoretic machine learning technique is especially well-suited for handling textual
data?

 Naive Bayes

A tree with n leaves, using a splitting rule at internal nodes of the tree based on thresholding the
value of a particular feature, can therefore shatter a set of n instances.

 True

Assume for the sake of simplification that every instance in a decision tree is a vector of y bits
(X = {0, 1}^y). Regarding the circumstances mentioned above, which of the following claims is not true?

 A decision tree with 2^(y+1) leaves and a depth of y + 1 may represent any
classifier from {0, 1}^y to {0, 1}.
Consider the decision tree depicted in the figure. If driver X starts at 6:30 AM with no other cars
on the road, and driver Q starts at 9 AM and there is an accident, what will be driver X's and
driver Q's travel times, respectively?

 SHORT, LONG

Regarding a splitting rule at internal nodes of the tree based on thresholding the value of a particular
feature, which of the following claims is untrue?

 Multivariate splits are another name for splits that are based on thresholding the value of a
particular feature.

Which of the following models is a machine learning generative model?

 Naïve Bayes
