Unit 4 MCQ

1. In a univariate decision tree, how many input dimensions does each internal node typically
test?
a) One
b) Two
c) Three
d) Depends on the dataset
Answer: a) One

2. What type of split does a univariate decision tree use for numeric input dimensions?
a) Binary split
b) Ternary split
c) Quadratic split
d) Exponential split
Answer: a) Binary split

3. What kind of search procedure is typically used to induce a decision tree from a training sample?
a) Supervised learning
b) Unsupervised learning
c) Heuristic-based local search
d) Reinforcement learning
Answer: c) Heuristic-based local search

4. How is the quality of a split assessed in a classification tree?
a) Using entropy
b) Using mean squared error
c) Using variance
d) Using Gini index or other impurity measures
Answer: d) Using Gini index or other impurity measures
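For reference, the Gini index named in the answer can be computed directly from class proportions; a minimal Python sketch (the function name is illustrative, not from any particular library):

```python
from collections import Counter

def gini_index(labels):
    """Gini impurity: 1 minus the sum of squared class proportions.
    0.0 for a pure node; 0.5 for an evenly mixed two-class node."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini_index(["a", "a", "a", "a"]))  # 0.0 (pure node)
print(gini_index(["a", "a", "b", "b"]))  # 0.5 (maximally mixed, two classes)
```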

5. What is the goal of postpruning in decision tree construction?
a) To halt tree growth early
b) To prevent overfitting
c) To simplify an already fully grown tree
d) To evaluate performance on a separate pruning set
Answer: c) To simplify an already fully grown tree

6. Which function measures the similarity between data points in a transformed feature space in
Support Vector Machines (SVMs)?
a) Radial basis function (RBF)
b) Sigmoid function
c) Gaussian function
d) Exponential function
Answer: a) Radial basis function (RBF)

7. What type of error function is typically used in regression trees to measure the goodness of a
split?
a) Mean Absolute Error (MAE)
b) Mean Squared Error (MSE)
c) Root Mean Squared Error (RMSE)
d) Variance
Answer: b) Mean Squared Error (MSE)
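The MSE criterion in the answer scores a candidate split by the weighted error of the two children around their own means; a minimal sketch (illustrative helper names, not any library's implementation):

```python
def mse(values):
    """Mean squared error around the node mean: the regression-tree impurity."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def split_mse(left, right):
    """Weighted MSE of a candidate split; lower is a better split."""
    n = len(left) + len(right)
    return (len(left) * mse(left) + len(right) * mse(right)) / n

# Separating [1, 1, 9, 9] into [1, 1] and [9, 9] drives the weighted MSE to 0.
print(split_mse([1, 1], [9, 9]))  # 0.0
```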

8. In a univariate decision tree, what happens at each step of the learning process?
a) Greedy selection of the best split
b) Exhaustive search for optimal splits
c) Random selection of split variables
d) Pruning of unnecessary branches
Answer: a) Greedy selection of the best split
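The greedy selection in the answer amounts to scanning candidate thresholds on one numeric feature and keeping the threshold with the lowest weighted impurity; a toy sketch (names illustrative, thresholds taken at observed values rather than midpoints for brevity):

```python
def best_split(xs, ys):
    """Greedily scan thresholds on one numeric feature; return the
    (threshold, weighted_gini) pair with the lowest impurity."""
    def gini(labels):
        n = len(labels)
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    best = None
    for t in sorted(set(xs))[:-1]:  # candidate thresholds at observed values
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[1]:
            best = (t, score)
    return best

xs = [1.0, 2.0, 10.0, 11.0]
ys = ["a", "a", "b", "b"]
print(best_split(xs, ys))  # (2.0, 0.0): x <= 2.0 separates the classes
```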

9. What is the primary purpose of prepruning in decision tree construction?
a) To evaluate performance on a separate pruning set
b) To prevent overfitting by halting tree growth early
c) To simplify an already fully grown tree
d) To assess the quality of splits based on impurity measures
Answer: b) To prevent overfitting by halting tree growth early

10. Which measure is used to quantify uncertainty about parameter estimates in Bayesian
estimation?
a) Variance
b) Standard deviation
c) Credible interval
d) All of the above
Answer: d) All of the above

11. In a univariate decision tree, how many branches does a node have for a discrete attribute
with three possible values?
a) 1
b) 2
c) 3
d) Depends on the impurity
Answer: c) 3

12. Which impurity measure is used in classification trees to assess the quality of a split?
a) Mean squared error
b) Entropy
c) Variance
d) Mean absolute error
Answer: b) Entropy
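Entropy, the impurity named in the answer, is also computed from class proportions; a minimal sketch in Python:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits: 0 for a pure node, 1 for a 50/50 binary node."""
    n = len(labels)
    probs = [c / n for c in Counter(labels).values()]
    return sum(-p * math.log2(p) for p in probs)

print(entropy(["+", "+", "+", "+"]))  # 0.0
print(entropy(["+", "+", "-", "-"]))  # 1.0
```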

13. What is the role of the complexity parameter in classification tree construction?
a) Controls the size of the tree
b) Determines the number of branches
c) Measures the impurity of nodes
d) None of the above
Answer: a) Controls the size of the tree

14. Which type of tree construction involves recursively splitting nodes based on attributes that
minimize impurity?
a) Prepruning
b) Postpruning
c) Greedy approach
d) Exhaustive search
Answer: c) Greedy approach

15. What is the main difference between prepruning and postpruning in decision tree
construction?
a) Prepruning occurs before data collection, while postpruning occurs after.
b) Prepruning simplifies the tree by removing overfitted subtrees, while postpruning halts tree
growth early.
c) Prepruning evaluates performance on a separate pruning set, while postpruning assesses
quality during tree construction.
d) Prepruning is faster, while postpruning tends to yield more accurate trees.
Answer: d) Prepruning is faster, while postpruning tends to yield more accurate trees.

16. Which function is used to measure the distance between the input vector and the center in a
radial basis function?
a) Gaussian function
b) Euclidean distance
c) Sigmoid function
d) Hyperbolic tangent
Answer: b) Euclidean distance
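The Gaussian RBF ties together the Euclidean distance in this answer and the "higher value closer to the center" behaviour asked about in question 19; a minimal sketch (parameter names illustrative):

```python
import math

def gaussian_rbf(x, center, sigma=1.0):
    """Gaussian RBF: decays with the Euclidean distance ||x - center||,
    so points closer to the center get higher values (maximum 1.0 at the center)."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-dist_sq / (2 * sigma ** 2))

center = (0.0, 0.0)
print(gaussian_rbf((0.0, 0.0), center))  # 1.0 at the center
# A point at distance 5 scores lower than a point at distance 1:
print(gaussian_rbf((3.0, 4.0), center) < gaussian_rbf((1.0, 0.0), center))  # True
```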

17. How does a regression tree differ from a classification tree?
a) Regression trees predict continuous outputs, while classification trees predict class labels.
b) Regression trees use entropy as the impurity measure, while classification trees use mean
squared error.
c) Regression trees have binary splits, while classification trees have ternary splits.
d) Regression trees use Gini index as the impurity measure, while classification trees use
variance.
Answer: a) Regression trees predict continuous outputs, while classification trees predict class
labels.

18. What is the primary goal of pruning in decision tree construction?
a) To increase the complexity of the tree
b) To prevent underfitting
c) To reduce computational resources
d) To simplify the tree and prevent overfitting
Answer: d) To simplify the tree and prevent overfitting

19. Which function assigns higher values to points closer to the center in a radial basis function?
a) Sigmoid function
b) Gaussian function
c) Hyperbolic tangent
d) Exponential function
Answer: b) Gaussian function

20. How does the Bayesian estimation process typically start?
a) By defining the model
b) By collecting data
c) By specifying prior distributions
d) By computing the likelihood
Answer: c) By specifying prior distributions
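The prior-first workflow in the answer can be illustrated with the conjugate Beta-Bernoulli model, where the posterior over a coin's bias is available in closed form (a toy sketch, not tied to the source material):

```python
def beta_bernoulli_update(alpha, beta, data):
    """Start from a Beta(alpha, beta) prior over a coin's bias, then fold in
    Bernoulli observations (1 = heads); by conjugacy the posterior is again Beta."""
    heads = sum(data)
    tails = len(data) - heads
    return alpha + heads, beta + tails

# Uniform Beta(1, 1) prior, then observe 7 heads and 3 tails.
a, b = beta_bernoulli_update(1, 1, [1] * 7 + [0] * 3)
print(a, b)         # 8 4
print(a / (a + b))  # posterior mean (~0.667)
```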

21. What is the primary purpose of recursive splitting in tree construction?
a) To increase impurity
b) To minimize computational resources
c) To simplify the tree
d) To find the best split based on attributes that minimize impurity
Answer: d) To find the best split based on attributes that minimize impurity

22. Which measure is used to assess the quality of a split in regression trees?
a) Entropy
b) Gini index
c) Mean squared error
d) Variance
Answer: c) Mean squared error

23. What is the main advantage of postpruning over prepruning in decision tree construction?
a) Postpruning is faster
b) Postpruning yields more accurate trees
c) Postpruning prevents underfitting
d) Postpruning simplifies the tree
Answer: b) Postpruning yields more accurate trees

24. How does a multivariate decision tree differ from a univariate decision tree?
a) Multivariate trees use only one input dimension for splitting
b) Univariate trees are more flexible
c) Multivariate trees can use multiple input dimensions for splitting
d) Univariate trees are more interpretable
Answer: c) Multivariate trees can use multiple input dimensions for splitting

25. What is the primary disadvantage of using complex nodes in decision tree models?
a) Increased interpretability
b) Decreased tree size
c) Increased overfitting
d) Decreased flexibility
Answer: c) Increased overfitting

26. What type of decision boundaries do univariate decision trees create?
a) Complex hyperplanes
b) Arbitrary orientations
c) Axis-aligned splits
d) Nonlinear boundaries
Answer: c) Axis-aligned splits

27. Which function measures the similarity between data points in a transformed feature space in
Support Vector Machines (SVMs)?
a) Radial basis function (RBF)
b) Sigmoid function
c) Gaussian function
d) Exponential function
Answer: a) Radial basis function (RBF)

28. How are impurity measures used in classification tree construction?
a) To maximize impurity at each split
b) To minimize impurity at each split
c) To assign impurity to leaf nodes
d) To evaluate tree complexity
Answer: b) To minimize impurity at each split

29. What is the main goal of rule extraction from decision trees?
a) To increase tree complexity
b) To decrease tree interpretability
c) To understand underlying data relationships
d) To complicate the decision-making process
Answer: c) To understand underlying data relationships
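Rule extraction in the answer means turning each root-to-leaf path into one if-then rule; a toy sketch over a hypothetical nested-dict tree representation (the format and field names are illustrative):

```python
def extract_rules(node, conditions=()):
    """Walk a small decision tree (nested dicts) and emit one if-then rule
    per leaf: each root-to-leaf path becomes a conjunction of conditions."""
    if "label" in node:  # leaf node
        body = " AND ".join(conditions) or "TRUE"
        return [f"IF {body} THEN {node['label']}"]
    feat, thr = node["feature"], node["threshold"]
    rules = extract_rules(node["left"], conditions + (f"{feat} <= {thr}",))
    rules += extract_rules(node["right"], conditions + (f"{feat} > {thr}",))
    return rules

tree = {"feature": "age", "threshold": 30,
        "left": {"label": "yes"},
        "right": {"feature": "income", "threshold": 50,
                  "left": {"label": "no"}, "right": {"label": "yes"}}}
for rule in extract_rules(tree):
    print(rule)
# IF age <= 30 THEN yes
# IF age > 30 AND income <= 50 THEN no
# IF age > 30 AND income > 50 THEN yes
```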

30. In Bayesian estimation, what do prior distributions represent?
a) The likelihood of observing the data
b) The posterior distribution of the parameters
c) The prior knowledge or beliefs about the parameters
d) The complexity parameter of the model
Answer: c) The prior knowledge or beliefs about the parameters

31. How does postpruning contribute to simplifying decision trees?
a) By halting tree growth early
b) By removing overfitted subtrees
c) By increasing tree complexity
d) By adding more branches
Answer: b) By removing overfitted subtrees

32. What is the primary purpose of prepruning in decision tree construction?
a) To simplify an already fully grown tree
b) To prevent overfitting by halting tree growth early
c) To remove overfitted subtrees
d) To evaluate tree performance on a separate pruning set
Answer: b) To prevent overfitting by halting tree growth early
