Machine Learning Unit 2 MCQ

The document contains 50 multiple choice questions about machine learning topics like backpropagation, decision trees, and artificial neural networks. The questions cover concepts like the objective of backpropagation, representations of different types of nodes in decision trees, advantages of decision trees, definitions of perceptrons and auto-associative networks, and properties of neural networks like learning by example and parallel computation. The document provides the question, possible answers, and an explanation for the correct answer for each multiple choice question.

Machine Learning: 50 Objective Questions (UNIT-2)

1. What is the objective of the backpropagation algorithm?

a) to develop learning algorithm for multilayer feedforward neural network

b) to develop learning algorithm for single layer feedforward neural network

c) to develop learning algorithm for multilayer feedforward neural network, so that network
can be trained to capture the mapping implicitly

d) none of the mentioned

Answer: c

Explanation: The objective of the backpropagation algorithm is to develop a learning algorithm for a multilayer feedforward neural network, so that the network can be trained to capture the mapping implicitly.

2. The backpropagation law is also known as the generalized delta rule. Is this true?

a) yes

b) no

Answer: a

Explanation: Because it fulfils the basic condition of the delta rule.

3. What is true regarding the backpropagation rule?

a) it is also called generalized delta rule

b) error in output is propagated backwards only to determine weight updates

c) there is no feedback of signal at any stage

d) all of the mentioned

Answer: d

Explanation: All of these statements describe the backpropagation algorithm.

4. Is there feedback in the final stage of the backpropagation algorithm?

a) yes

b) no
Answer: b

Explanation: No feedback is involved at any stage as it is a feedforward neural network.

5. What is true regarding the backpropagation rule?

a) it is a feedback neural network

b) actual output is determined by computing the outputs of units for each hidden layer

c) hidden layers' output is not all-important; they are only meant for supporting the input and output layers

d) none of the mentioned

Answer: b

Explanation: In the backpropagation rule, the actual output is determined by computing the outputs of units for each hidden layer.

6. What is meant by "generalized" in the statement "backpropagation is a generalized delta rule"?

a) because delta rule can be extended to hidden layer units

b) because delta is applied to only input and output layers, thus making it more simple and
generalized

c) it has no significance

d) none of the mentioned

Answer: a

Explanation: The term generalized is used because delta rule could be extended to hidden
layer units.

7. What are the general limitations of the backpropagation rule?

a) local minima problem

b) slow convergence

c) scaling

d) all of the mentioned


Answer: d

Explanation: All of these are general limitations of the backpropagation algorithm.

8. What are the general tasks that are performed with backpropagation algorithm?

a) pattern mapping

b) function approximation

c) prediction

d) all of the mentioned

Answer: d

Explanation: All of these tasks can be performed with the backpropagation algorithm in general.

9. Is backpropagation learning based on gradient descent along the error surface?

a) yes

b) no

c) cannot be said

d) it depends on gradient descent but not error surface

Answer: a

Explanation: The weight adjustment is proportional to the negative gradient of the error with respect to the weight.
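The idea can be sketched in a few lines of Python, using an illustrative one-dimensional error surface E(w) = (w - 3)^2; the error function, starting weight, and learning rate here are assumptions chosen for the example:

```python
# Gradient descent on a toy error surface E(w) = (w - 3)**2.
# The weight change is proportional to the negative gradient of the error.
def grad(w):
    return 2 * (w - 3)           # dE/dw

w, lr = 0.0, 0.1                 # initial weight, learning rate
for _ in range(100):
    w -= lr * grad(w)            # delta w = -lr * dE/dw
print(round(w, 4))               # converges to the minimum at w = 3
```

Each step moves the weight downhill on the error surface, which is exactly the adjustment rule described above.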

10. How can the learning process be stopped in the backpropagation rule?

a) there is convergence involved

b) no heuristic criteria exist

c) on basis of average gradient value

d) none of the mentioned

Answer: c

Explanation: If the average gradient value falls below a preset threshold, the process may be stopped.
# Decision Trees

11. A _________ is a decision support tool that uses a tree-like graph or model of
decisions and their possible consequences, including chance event outcomes,
resource costs, and utility.

a) Decision tree

b) Graphs

c) Trees

d) Neural Networks

Answer: a

Explanation: Refer to the definition of a Decision tree.

12. Decision Tree is a display of an algorithm.

a) True

b) False

Answer: a

Explanation: None.

13. What is Decision Tree?

a) Flow-Chart

b) Structure in which each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label

c) Flow-Chart & Structure in which each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label

d) None of the mentioned

Answer: c

Explanation: Refer to the definition of a Decision tree.

14. Decision Trees can be used for Classification Tasks.


a) True

b) False

Answer: a

Explanation: None.

15. Which of the following are Decision Tree nodes?

a) Decision Nodes

b) End Nodes

c) Chance Nodes

d) All of the mentioned

Answer: d

Explanation: None.

16. Decision Nodes are represented by ____________

a) Disks

b) Squares

c) Circles

d) Triangles

Answer: b

17. Chance Nodes are represented by __________

a) Disks

b) Squares

c) Circles

d) Triangles

Answer: c
Explanation: None.

18. End Nodes are represented by __________

a) Disks

b) Squares

c) Circles

d) Triangles

Answer: d

Explanation: None.

19. Which of the following are the advantage/s of Decision Trees?

a) Possible Scenarios can be added

b) Use a white box model, if a given result is provided by a model

c) Worst, best and expected values can be determined for different scenarios

d) All of the mentioned

Answer: d

Explanation: None.

20. Which of the following statement(s) is/are true for Gradient Descent (GD) and Stochastic Gradient Descent (SGD)?

1. In GD and SGD, you update a set of parameters in an iterative manner to minimize the error function.
2. In SGD, you have to run through all the samples in your training set for a single update of a parameter in each iteration.
3. In GD, you either use the entire data or a subset of training data to update a parameter in each iteration.

A) Only 1

B) Only 2

C) Only 3

D) 1 and 2

E) 2 and 3

F) 1, 2 and 3

Solution: (A)

In SGD, each iteration uses a single randomly chosen sample (or a small random batch) of the data, so statement 2 is false. In vanilla GD, each iteration uses all of the training observations, so statement 3 is false as well.
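The contrast can be sketched with a toy one-parameter least-squares fit; the data, learning rate, and iteration count below are illustrative assumptions. GD averages the gradient over the whole training set per update, while SGD updates from one randomly chosen sample at a time:

```python
import random

def grad(w, x, y):
    # Gradient of the squared error (w*x - y)**2 with respect to w
    return 2 * (w * x - y) * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # samples from y = 2x
w_gd = w_sgd = 0.0
lr = 0.05
for _ in range(200):
    # GD: one update per pass, using the average gradient over all samples
    w_gd -= lr * sum(grad(w_gd, x, y) for x, y in data) / len(data)
    # SGD: one update per randomly chosen sample
    x, y = random.choice(data)
    w_sgd -= lr * grad(w_sgd, x, y)
print(round(w_gd, 3), round(w_sgd, 3))        # both approach 2.0
```

On this toy problem both variants converge to the same fit; they differ in how much data each single update touches.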

21. Below are the 8 actual values of target variable in the train file.
 
[0,0,0,1,1,1,1,1]
 
What is the entropy of the target variable?
 
A) -(5/8 log(5/8) + 3/8 log(3/8))
 
B) 5/8 log(5/8) + 3/8 log(3/8)
 
C) 3/8 log(5/8) + 5/8 log(3/8)
 
D) 5/8 log(3/8) – 3/8 log(5/8)
 
Solution: (A)

The formula for entropy is Entropy = -Σ pᵢ log(pᵢ). Here p(1) = 5/8 and p(0) = 3/8, so the entropy is -(5/8 log(5/8) + 3/8 log(3/8)). So the answer is A.
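The computation can be checked with a short Python snippet (the helper function name is my own):

```python
from math import log2

def entropy(labels):
    # H = -sum(p_i * log2(p_i)) over the class proportions
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

target = [0, 0, 0, 1, 1, 1, 1, 1]
print(round(entropy(target), 4))   # -(5/8*log2(5/8) + 3/8*log2(3/8)) = 0.9544
```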

# ANN

22. A 3-input neuron is trained to output a zero when the input is 110 and a one
when the input is 111. After generalization, the output will be zero when and only
when the input is?
a) 000 or 110 or 011 or 101
b) 010 or 100 or 110 or 101
c) 000 or 010 or 110 or 100
d) 100 or 111 or 101 or 001

Answer: c
Explanation: The truth table before generalization is:

Inputs Output

000 $

001 $

010 $

011 $

100 $

101 $

110 0

111 1

where $ represents don't-know cases and the output is random.


After generalization, the truth table becomes:

Inputs Output

000 0

001 1

010 0

011 1

100 0

101 1

110 0

111 1

23. What is perceptron?


a) a single layer feed-forward neural network with pre-processing
b) an auto-associative neural network
c) a double layer auto-associative neural network
d) a neural network that contains feedback

Answer: a
Explanation: The perceptron is a single layer feed-forward neural network. It is not
an auto-associative network because it has no feedback and is not a multiple layer
neural network because the pre-processing stage is not made of neurons.

24. What is an auto-associative network?


a) a neural network that contains no loops
b) a neural network that contains feedback
c) a neural network that has only one loop
d) a single layer feed-forward neural network with pre-processing

Answer: b
Explanation: An auto-associative network is equivalent to a neural network that
contains feedback. The number of feedback paths(loops) does not have to be one.

25. A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with
the constant of proportionality being equal to 2. The inputs are 4, 10, 5 and 20
respectively. What will be the output?
a) 238
b) 76
c) 119
d) 123

Answer: a
Explanation: The output is found by multiplying the weights with their respective
inputs, summing the results and multiplying with the transfer function. Therefore:
Output = 2 * (1*4 + 2*10 + 3*5 + 4*20) = 238.
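The arithmetic can be reproduced directly (variable names are illustrative):

```python
# Linear transfer function f(s) = k*s with k = 2 (constant of proportionality)
weights = [1, 2, 3, 4]
inputs = [4, 10, 5, 20]
k = 2
output = k * sum(w * x for w, x in zip(weights, inputs))
print(output)   # 2 * (1*4 + 2*10 + 3*5 + 4*20) = 238
```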

26. Which of the following is true?


(i) On average, neural networks have higher computational rates than conventional
computers.
(ii) Neural networks learn by example.
(iii) Neural networks mimic the way the human brain works.
a) All of the mentioned are true
b) (ii) and (iii) are true
c) (i), (ii) and (iii) are true
d) None of the mentioned

Answer: a
Explanation: Neural networks have higher computational rates than conventional
computers because a lot of the operation is done in parallel. That is not the case
when the neural network is simulated on a computer. The idea behind neural nets
is based on the way the human brain works. Neural nets cannot be programmed; they can only learn by example.

27. Which of the following is true for neural networks?


(i) The training time depends on the size of the network.
(ii) Neural networks can be simulated on a conventional computer.
(iii) Artificial neurons are identical in operation to biological ones.
a) All of the mentioned
b) (ii) is true
c) (i) and (ii) are true
d) None of the mentioned

Answer: c
Explanation: The training time depends on the size of the network; the greater the number of neurons, the greater the number of possible 'states'. Neural networks can be simulated on a conventional computer, but the main advantage of neural networks, parallel execution, is then lost. Artificial neurons are not identical in operation to biological ones.

28. What are the advantages of neural networks over conventional computers?
(i) They have the ability to learn by example
(ii) They are more fault tolerant
(iii)They are more suited for real time operation due to their high ‘computational’
rates
a) (i) and (ii) are true
b) (i) and (iii) are true
c) Only (i)
d) All of the mentioned

Answer: d
Explanation: Neural networks learn by example. They are more fault tolerant
because they are always able to respond and small changes in input do not
normally cause a change in output. Because of their parallel architecture, high
computational rates are achieved.

29. Which of the following is true?


Single layer associative neural networks do not have the ability to:
(i) perform pattern recognition
(ii) find the parity of a picture
(iii)determine whether two or more shapes in a picture are connected or not
a) (ii) and (iii) are true
b) (ii) is true
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: Pattern recognition is what single layer neural networks are best at
but they don’t have the ability to find the parity of a picture or to determine whether
two shapes are connected or not.

30. Which is true for neural networks?


a) It has set of nodes and connections
b) Each node computes its weighted input
c) Node could be in excited state or non-excited state
d) All of the mentioned

Answer: d
Explanation: All of the mentioned are characteristics of neural networks.

31. What is Neuro software?


a) A software used to analyze neurons
b) It is a powerful and easy neural network software
c) Designed to aid experts in real world
d) It is software used by Neurosurgeon

Answer: b
Explanation: None.

32. Why is the XOR problem exceptionally interesting to neural network researchers?
a) Because it can be expressed in a way that allows you to use a neural network
b) Because it is a complex binary operation that cannot be solved using neural networks
c) Because it can be solved by a single layer perceptron
d) Because it is the simplest linearly inseparable problem that exists.

Answer: d
Explanation: None.

33. What is backpropagation?


a) It is another name given to the curvy function in the perceptron
b) It is the transmission of error back through the network to adjust the inputs
c) It is the transmission of error back through the network to allow weights to be
adjusted so that the network can learn
d) None of the mentioned

Answer: c
Explanation: Back propagation is the transmission of error back through the
network to allow weights to be adjusted so that the network can learn.
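A minimal sketch of this, assuming a two-weight network y = sigmoid(w2 * sigmoid(w1 * x)) and squared error (all names and values below are illustrative): the error at the output is propagated backwards, layer by layer, to give each weight's gradient, which can then be checked against a numerical gradient.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x):
    h = sigmoid(w1 * x)            # hidden unit
    y = sigmoid(w2 * h)            # output unit
    return h, y

def backprop(w1, w2, x, t):
    # Propagate the output error backwards to get dE/dw2 and dE/dw1
    h, y = forward(w1, w2, x)
    d2 = (y - t) * y * (1 - y)     # error term at the output unit
    g2 = d2 * h                    # dE/dw2
    d1 = d2 * w2 * h * (1 - h)     # error term propagated to the hidden unit
    g1 = d1 * x                    # dE/dw1
    return g1, g2

w1, w2, x, t = 0.5, -0.3, 1.0, 1.0
g1, g2 = backprop(w1, w2, x, t)

# Sanity check against a numerical gradient of E = 0.5*(y - t)**2
eps = 1e-6
loss = lambda a, b: 0.5 * (forward(a, b, x)[1] - t) ** 2
num_g1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)     # True
```

The gradients are then used in a weight update such as w -= lr * g, which is the learning step the question describes.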

34. Why are linearly separable problems of interest to neural network researchers?
a) Because they are the only class of problem that network can solve successfully
b) Because they are the only class of problem that Perceptron can solve
successfully
c) Because they are the only mathematical functions that are continuous
d) Because they are the only mathematical functions you can draw

Answer: b
Explanation: Linearly separable problems are of interest to neural network researchers because they are the only class of problem that a Perceptron can solve successfully.

35. Which of the following is not the promise of artificial neural network?
a) It can explain result
b) It can survive the failure of some nodes
c) It has inherent parallelism
d) It can handle noise

Answer: a
Explanation: An artificial Neural Network (ANN) cannot explain its results.

36. Neural Networks are complex ______________ with many parameters.


a) Linear Functions
b) Nonlinear Functions
c) Discrete Functions
d) Exponential Functions

Answer: b
Explanation: Neural networks are complex nonlinear functions with many parameters; the nonlinear activation at each unit is what gives them their representational power.

37. A perceptron adds up all the weighted inputs it receives, and if it exceeds a
certain value, it outputs a 1, otherwise it just outputs a 0.
a) True
b) False
c) Sometimes – it can also output intermediate values as well
d) Can’t say

Answer: a
Explanation: Yes, the perceptron works like that.

38. What is the name of the function in the following statement “A perceptron adds
up all the weighted inputs it receives, and if it exceeds a certain value, it outputs a
1, otherwise it just outputs a 0”?
a) Step function
b) Heaviside function
c) Logistic function
d) Perceptron function
Answer: b
Explanation: Also known as the step function, so option a) is also correct. It is a hard thresholding function, either on or off with no in-between.
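A minimal sketch of such a hard-thresholding perceptron; the weights and threshold here are assumptions chosen to implement the AND function:

```python
def heaviside(s, threshold):
    # Hard threshold: 1 if the weighted sum exceeds the threshold, else 0
    return 1 if s > threshold else 0

def perceptron(inputs, weights, threshold):
    s = sum(w * x for w, x in zip(weights, inputs))
    return heaviside(s, threshold)

# Weights (1, 1) with threshold 1.5 give the AND function
for a in (0, 1):
    for b in (0, 1):
        print(a, b, perceptron((a, b), (1, 1), 1.5))
```

Only the input (1, 1) produces a weighted sum above 1.5, so only that case outputs 1.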

39. Having multiple perceptrons can actually solve the XOR problem satisfactorily:
this is because each perceptron can partition off a linear part of the space itself,
and they can then combine their results.
a) True – this works always, and these multiple perceptrons learn to classify even
complex problems
b) False – perceptrons are mathematically incapable of solving linearly inseparable
functions, no matter what you do
c) True – perceptrons can do this but are unable to learn to do it – they have to be
explicitly hand-coded
d) False – just having a single perceptron is enough

Answer: c
Explanation: Multiple perceptrons can each carve out a linear region and their outputs can be combined, but the perceptron learning rule only trains a single layer, so the combining weights have to be hand-coded.
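A hand-coded example of the idea, assuming step-function perceptrons (the particular weights and thresholds below are one of many possible choices): two hidden perceptrons each cut off a half-plane (OR and NAND), and an output perceptron ANDs them together to give XOR.

```python
def step(s):
    return 1 if s > 0 else 0

def xor(a, b):
    h1 = step(a + b - 0.5)      # OR: fires unless both inputs are 0
    h2 = step(1.5 - a - b)      # NAND: fires unless both inputs are 1
    return step(h1 + h2 - 1.5)  # AND of the two half-planes

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # 1 only for (0,1) and (1,0)
```

Note that these weights were set by hand, not learned, which is exactly the limitation the answer points out.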

40. The network that involves backward links from output to the input and hidden
layers is called _________
a) Self organizing maps
b) Perceptrons
c) Recurrent neural network
d) Multi layered perceptron

Answer: c
Explanation: RNN (Recurrent neural network) topology involves backward links
from output to the input and hidden layers.

41. Which of the following is an application of NN (Neural Network)?


a) Sales forecasting
b) Data validation
c) Risk management
d) All of the mentioned

Answer: d
Explanation: All mentioned options are applications of Neural Network.

42. Internal nodes of a decision tree correspond to:


A. Attributes
B. Classes
C. Data instances
D. None of the above

Accepted Answers:
A. Attributes
43. Leaf nodes of a decision tree correspond to:
A. Attributes
B. Classes
C. Data instances
D. None of the above

Accepted Answers:
B. Classes

44. Which of the following criteria is not used to decide which attribute to split on next in a decision tree?

A. Gini index

B. Information gain

C. Entropy

D. Scatter

Accepted Answers:

D. Scatter

45. Which of the following is a valid logical rule for the decision tree below?

A. IF Business Appointment = No & Temp above 70 = No THEN Decision = wear slacks

B. IF Business Appointment = Yes & Temp above 70 = Yes THEN Decision = wear shorts

C. IF Temp above 70 = No THEN Decision = wear shorts

D. IF Business Appointment= No & Temp above 70 = No THEN Decision = wear jeans

Accepted Answers:

D. IF Business Appointment= No & Temp above 70 = No THEN Decision = wear jeans

46. A decision tree is pruned in order to:

A. improve classification accuracy on training set

B. improve generalization performance

C. reduce dimensionality of the data

D. make the tree balanced


Accepted Answers:

B. improve generalization performance

47. For parts (a)-(e) below, consider the following small data table for two classes of woods. Using information gain, construct a decision tree to classify the data set. Answer the following questions for the resulting tree.

Example Density Grain Hardness Class

Example #1 Heavy Small Hard Oak

Example #2 Heavy Large Hard Oak

Example #3 Heavy Small Hard Oak

Example #4 Light Large Soft Oak

Example #5 Light Large Hard Pine

Example #6 Heavy Small Soft Pine

Example #7 Heavy Large Soft Pine

Example #8 Heavy Small Soft Pine

47.(a)Which attribute would information gain choose as the root of the tree?

A. Density

B. Grain

C. Hardness

D. None of the above

Accepted Answers:
C. Hardness
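This can be verified by computing the information gain of each attribute on the table above (the helper code below is a sketch, not part of the original question):

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

# (Density, Grain, Hardness, Class) rows from the table above
data = [
    ("Heavy", "Small", "Hard", "Oak"),
    ("Heavy", "Large", "Hard", "Oak"),
    ("Heavy", "Small", "Hard", "Oak"),
    ("Light", "Large", "Soft", "Oak"),
    ("Light", "Large", "Hard", "Pine"),
    ("Heavy", "Small", "Soft", "Pine"),
    ("Heavy", "Large", "Soft", "Pine"),
    ("Heavy", "Small", "Soft", "Pine"),
]
classes = [row[3] for row in data]
gains = {}
for i, name in enumerate(["Density", "Grain", "Hardness"]):
    remainder = 0.0
    for v in {row[i] for row in data}:
        subset = [row[3] for row in data if row[i] == v]
        remainder += len(subset) / len(data) * entropy(subset)
    gains[name] = entropy(classes) - remainder
print(gains)  # Hardness has the highest gain (about 0.189)
```

Density and Grain both leave a 50/50 class split in every branch (zero gain), while Hardness separates 3-of-4 in each branch, so information gain selects Hardness as the root.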

47.(b) What class does the tree infer for the example {Density=Light, Grain=Small, Hardness=Hard}?
A. Oak
B. Pine
C. The example cannot be classified
D. Both classes are equally likely
Accepted Answers:
B. Pine

47.(c) What class does the tree infer for the example {Density=Light, Grain=Small, Hardness=Soft}?
A. Oak
B. Pine
C. The example cannot be classified
D. Both classes are equally likely
Accepted Answers:
A. Oak

47.(d) What class does the tree infer for the example {Density=Heavy, Grain=Small, Hardness=Soft}?
A. Oak
B. Pine
C. The example cannot be classified
D. Both classes are equally likely
Accepted Answers:
B. Pine

47.(e) What class does the tree infer for the example {Density=Heavy, Grain=Small, Hardness=Hard}?
A. Oak
B. Pine
C. The example cannot be classified
D. Both classes are equally likely
Accepted Answers:
A. Oak

48. A perceptron consists of:


A. one neuron
B. two neurons
C. three neurons
D. four neurons
Explanation: A perceptron consists of a single neuron.
Ans: A
49. A perceptron can correctly classify instances into two classes where the classes are:
A. Overlapping
B. Linearly separable
C. Non-linearly separable
D. None of the above
Explanation: A perceptron is a linear classifier.
Ans: B
50. The logic function that cannot be implemented by a perceptron having two inputs is?
A. AND
B. OR
C. NOR
D. XOR
Explanation: XOR is not linearly separable.
Ans: D
