Week 06 Assignment Solution

Data Mining: Assignment Week 6: ANN

1. Artificial neural networks can be used for:

A. Pattern Recognition

B. Classification

C. Clustering

D. All of the above

Ans: D

Explanation: ANNs are used for all of these tasks: pattern recognition, classification, and clustering.

2. A perceptron can correctly classify instances into two classes where the classes are:

A. Overlapping

B. Linearly separable

C. Non-linearly separable

D. None of the above

Ans: B

Explanation: A perceptron is a linear classifier, so it can correctly separate only linearly separable classes.

3. Which logic function cannot be implemented by a perceptron with two inputs?

A. AND

B. OR

C. NOR

D. XOR

Ans: D

Explanation: XOR is not linearly separable, so no single perceptron can compute it.


4. A training input x is used for the perceptron learning rule. The desired output is t and
the actual output is o. If the learning rate is η, the weight (w) update performed by the
perceptron learning rule is described by:

A. wi ← wi + η(t − o)

B. wi ← wi + η(t − o) xi

C. wi ← η(t − o) xi

D. wi ← wi + (t − o) xi

Ans: B

Explanation: Perceptron training rule: wi ← wi + Δwi, where Δwi = η(t − o) xi,
t is the target output for the current training example, o is the output generated
by the perceptron, and η is a positive constant called the learning rate.
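The update rule in option B can be sketched in a few lines. This is a minimal illustration, not a full training loop; the weights, input, and learning rate η = 0.1 below are made-up values for demonstration.

```python
def perceptron_update(w, x, t, o, eta=0.1):
    """Return updated weights: w_i <- w_i + eta * (t - o) * x_i."""
    return [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]

w = [0.5, -0.5]
x = [1.0, 2.0]
t, o = 1, 0  # target is 1 but the perceptron output 0, so weights move toward x
w_new = perceptron_update(w, x, t, o)
print(w_new)  # approximately [0.6, -0.3]
```

Note that when t = o the update term vanishes, so correctly classified examples leave the weights unchanged.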

5. A neuron with 3 inputs has the weight vector [0.2 -0.1 0.1]^T and a bias θ = 0. If
the input vector is X = [0.2 0.4 0.2]^T , then the total input to the neuron is:

A. 0.2

B. 0.02

C. 0.4

D. 0.10

Ans: B

Explanation: input to neuron = w1*x1 + w2*x2 + w3*x3 = 0.2*0.2 − 0.1*0.4 + 0.1*0.2 = 0.04 − 0.04 + 0.02 = 0.02
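The arithmetic above is just the dot product of the weight and input vectors plus the bias, which can be checked directly:

```python
# Worked check of question 5: total input = w . x + theta, with theta = 0.
w = [0.2, -0.1, 0.1]
x = [0.2, 0.4, 0.2]
theta = 0
total_input = sum(wi * xi for wi, xi in zip(w, x)) + theta
print(round(total_input, 2))  # 0.02
```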

6. Suppose we have n training examples xi, i = 1…n, whose desired outputs are ti,
i = 1…n. The outputs of a perceptron for these training examples are oi, i = 1…n. The
error function minimised by the gradient descent perceptron learning algorithm is:

A. E ≡ (1/2) Σ_{i=1..n} (ti − oi)

B. E ≡ (1/2) Σ_{i=1..n} (ti − oi)²

C. E ≡ (1/2) Σ_{i=1..n} (ti + oi)²

D. E ≡ (1/2) Σ_{i=1..n} (ti + oi)

Ans: B

Explanation: The error function is E ≡ (1/2) Σ_{i=1..n} (ti − oi)², where ti is the
target output for training example i and oi is the output generated by the perceptron.
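The sum-of-squares error in option B is straightforward to compute; the target and output vectors below are made-up values for illustration:

```python
# E = 1/2 * sum((t_i - o_i)^2) over all training examples.
t = [1, 0, 1, 1]  # desired outputs
o = [1, 1, 0, 1]  # perceptron outputs (two mistakes)
E = 0.5 * sum((ti - oi) ** 2 for ti, oi in zip(t, o))
print(E)  # 1.0
```

The factor of 1/2 is a convenience: it cancels the 2 produced by differentiating the squared term during gradient descent.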

7. The tanh activation function h(z) = 2 / (1 + e^(−2z)) − 1 is:

A. Discontinuous and not differentiable

B. Discontinuous but differentiable

C. Continuous but not differentiable

D. Continuous and differentiable

Ans: D

Explanation: tanh is continuous and differentiable.
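The expression in the question is an algebraic rewriting of the hyperbolic tangent, which can be checked numerically against the standard library:

```python
import math

# 2 / (1 + e^(-2z)) - 1 equals tanh(z); verify at a few sample points.
def tanh_from_sigmoid(z):
    return 2.0 / (1.0 + math.exp(-2.0 * z)) - 1.0

for z in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert abs(tanh_from_sigmoid(z) - math.tanh(z)) < 1e-12
print("matches math.tanh")
```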

8. The neural network given below takes two binary-valued inputs x1, x2 ∈ {0,1} and
the activation function is the binary threshold function (h(z) = 1 if z > 0; 0 otherwise). Which
of the following logical functions does it compute?

[Network diagram: inputs X1 and X2 with weights 5 and 5, bias weight −1, output h(X)]

A. OR

B. AND

C. NAND

D. NOR
Ans: A

Explanation: h(X) = threshold(5*X1 + 5*X2 − 1), where X1, X2 ∈ {0,1}.

The sum 5*X1 + 5*X2 − 1 is positive whenever at least one input is 1, so the outputs match the truth table of OR.

9. The neural network given below takes two binary-valued inputs x1, x2 ∈ {0,1} and
the activation function is the binary threshold function (h(z) = 1 if z > 0; 0 otherwise). Which
of the following logical functions does it compute?

[Network diagram: inputs X1 and X2 with weights 5 and 5, bias weight −8, output h(X)]

A. OR

B. AND

C. NAND

D. NOR

Ans: B

Explanation: h(X) = threshold(5*X1 + 5*X2 − 8), where X1, X2 ∈ {0,1}.

The sum 5*X1 + 5*X2 − 8 is positive only when both inputs are 1, so the outputs match the truth table of AND.
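Both networks from questions 8 and 9 can be verified by enumerating all four input combinations. This sketch uses the weights read off the diagrams (5, 5, with biases −1 and −8):

```python
# A binary threshold unit: h(z) = 1 if z > 0 else 0.
def threshold_unit(w1, w2, bias):
    def h(x1, x2):
        return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
    return h

net_q8 = threshold_unit(5, 5, -1)  # question 8: bias -1 -> OR
net_q9 = threshold_unit(5, 5, -8)  # question 9: bias -8 -> AND

for x1 in (0, 1):
    for x2 in (0, 1):
        assert net_q8(x1, x2) == (x1 or x2)
        assert net_q9(x1, x2) == (x1 and x2)
print("OR and AND truth tables verified")
```

The only difference between the two networks is the bias: −1 fires when either input is active (5 > 1), while −8 requires both (10 > 8 but 5 < 8).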

10. Overfitting is indicated when we observe that:

A. With training iterations, error on the training set as well as the test set decreases

B. With training iterations, error on the training set decreases but error on the test set increases

C. With training iterations, error on the training set as well as the test set increases

D. With training iterations, the training set as well as the test set error remains constant

Ans: B

Explanation: Overfitting occurs when training error keeps decreasing while test error increases: the model has begun to fit noise in the training data rather than the underlying pattern.
