Solvingmcq

The document contains a series of questions and answers related to neural networks, activation functions, NumPy operations, and clustering. It includes calculations for neuron outputs, weight updates, and array manipulations, along with the correct answers for each question. Additionally, it covers concepts like ReLU activation, Euclidean distance, and the use of functions like np.dot and np.nanmean.

1. For a neuron using the binary sigmoid activation function (λ = 1), if the net input is x = −0.4, what is the neuron's output?
a) 0.310
b) 0.401
c) 0.490
d) 0.599
[Hint: Use f(x) = 1 / (1 + e^(−λx))]
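The sigmoid value in question 1 can be verified with a short snippet (a minimal sketch; the function name is mine, not from the document):

```python
import math

def binary_sigmoid(x, lam=1.0):
    # Binary sigmoid: f(x) = 1 / (1 + e^(-lambda * x))
    return 1.0 / (1.0 + math.exp(-lam * x))

output = binary_sigmoid(-0.4)  # net input from question 1
print(round(output, 3))  # 0.401, matching option b
```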

2. Compute the derivative of the bipolar sigmoid function at x = 0.6 (λ=1).


a) 0.217
b) 0.358
c) 0.462
d) 0.532
[Hint: The derivative is f′(x) = ½(1 − f(x)²)]
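Question 2 can be checked by evaluating the bipolar sigmoid and its derivative directly (a sketch; the exact value rounds to 0.458, and option c (0.462) is the closest listed choice):

```python
import math

def bipolar_sigmoid(x, lam=1.0):
    # Bipolar sigmoid: f(x) = (1 - e^(-lambda*x)) / (1 + e^(-lambda*x))
    e = math.exp(-lam * x)
    return (1.0 - e) / (1.0 + e)

fx = bipolar_sigmoid(0.6)
deriv = 0.5 * (1.0 - fx ** 2)  # f'(x) = (1/2)(1 - f(x)^2)
print(round(deriv, 3))  # 0.458 -- closest to option c
```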

3. A neural network uses ReLU activation. If the weighted sum input to a neuron is -0.7, what
is its output?
a) -0.7
b) 0
c) 0.7
d) 1

4. Given that the hyperbolic tangent (tanh) output at x = 0.5 is 0.462, what is its derivative at the same point?
a) 0.786
b) 0.652
c) 0.540
d) 0.213
[Hint: f′(x) = 1 − tanh²(x)]

Backpropagation & Weight Updates

5. In a neural network, the error derivative w.r.t. a weight w_ij is calculated as −0.12. If the learning rate (α) is 0.1, what is the weight update Δw_ij?
a) -0.012
b) 0.012
c) -1.2
d) 0.12
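Question 5's gradient-descent update Δw = −α · ∂E/∂w can be checked in one line (a minimal sketch):

```python
alpha = 0.1    # learning rate from question 5
grad = -0.12   # error derivative dE/dw_ij
delta_w = -alpha * grad  # gradient descent moves against the gradient
print(round(delta_w, 3))  # 0.012, matching option b
```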

6. A neuron’s output is 0.8 (binary sigmoid), and the target is 1. What is the error signal (δ)
for this neuron?
a) 0.2
b) -0.16
c) 0.128
d) -0.2

MNIST Dataset & Keras


7. What is the correct input shape for a dense (fully connected) layer processing flattened
MNIST images?
a) (28, 28)
b) (784,)
c) (1, 784)
d) (256, 256)

8. Which activation function is typically used in the output layer for MNIST digit
classification?
a) ReLU
b) Sigmoid
c) Softmax
d) Tanh
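Question 8's answer can be illustrated with a small softmax sketch (the logits below are made up for illustration, not taken from the document):

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating, for numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # illustrative class scores
probs = softmax(logits)
print(probs.sum())  # probabilities sum to 1, as a classifier output requires
```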

NumPy Operations

9. Given arrays A = np.array([1, 2, 3]) and B = np.array([4, 5, 6]), what does np.dot(A,
B) return?
a) 32
b) [4, 10, 18]
c) 14
d) Error

10. Which NumPy function replaces all elements >5 in array X with 0?
a) np.replace(X, X>5, 0)
b) X[X > 5] = 0
c) np.where(X > 5, 0, X)
d) Both b and c
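Questions 9 and 10 can both be verified directly (X below is a made-up example array):

```python
import numpy as np

A = np.array([1, 2, 3])
B = np.array([4, 5, 6])
print(np.dot(A, B))  # 32 = 1*4 + 2*5 + 3*6 (question 9)

X = np.array([2, 6, 4, 9])    # illustrative array for question 10
print(np.where(X > 5, 0, X))  # [2 0 4 0] -- returns a new array
Y = X.copy()
Y[Y > 5] = 0                  # boolean-mask assignment, modifies in place
print(Y)                      # [2 0 4 0]
```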

Self-Organizing Maps (Kohonen)

11. Given the weight vector w = [0.2, 0.6] and input x = [0.4, 0.8], what is the squared Euclidean distance between them?
a) 0.04
b) 0.08
c) 0.10
d) 0.20

12. If the learning rate is 0.3 and the difference between input and weight is 0.5, what is the
weight update?
a) 0.15
b) 0.30
c) 0.03
d) 1.5
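Questions 11 and 12 reduce to two short NumPy calculations (a sketch using the values given):

```python
import numpy as np

w = np.array([0.2, 0.6])  # weight vector (question 11)
x = np.array([0.4, 0.8])  # input vector
sq_dist = float(np.sum((x - w) ** 2))
print(round(sq_dist, 2))  # 0.08, matching option b

alpha = 0.3  # learning rate (question 12)
diff = 0.5   # input-minus-weight difference
print(round(alpha * diff, 2))  # 0.15, matching option a
```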

Clustering (Euclidean Distance)


13. Point P(3, 4) is assigned to cluster center C1(1, 1). What is the Euclidean distance between
them?
a) 3.6
b) 5
c) 13
d) 25
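The distance in question 13 follows from the Pythagorean formula:

```python
import math

p = (3, 4)   # point P from question 13
c1 = (1, 1)  # cluster centre C1
dist = math.hypot(p[0] - c1[0], p[1] - c1[1])  # sqrt((3-1)^2 + (4-1)^2) = sqrt(13)
print(round(dist, 1))  # 3.6, matching option a
```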

Answer Key

1. b) 0.401

2. c) 0.462

3. b) 0

4. a) 0.786

5. b) 0.012

6. b) -0.16

7. b) (784,)

8. c) Softmax

9. a) 32

10. d) Both b and c

11. b) 0.08

12. a) 0.15

13. a) 3.6

Basic Operations

1. What does np.arange(3, 9, 2) return?


a) [3, 5, 7]
b) [3, 5, 7, 9]
c) [3, 9]
d) [3, 4, 5, 6, 7, 8]

2. Given A = np.array([1, 2, 3]) and B = np.array([4, 5, 6]), what is A + B?


a) [5, 7, 9]
b) [1, 2, 3, 4, 5, 6]
c) 24
d) Error
3. What is the output of np.zeros((2, 3))?
a) [[0, 0], [0, 0], [0, 0]]
b) [[0, 0, 0], [0, 0, 0]]
c) [0, 0, 0]
d) 0
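The first three answers can be reproduced in a Python session (a quick sketch):

```python
import numpy as np

print(np.arange(3, 9, 2))  # [3 5 7] -- start 3, stop before 9, step 2
A = np.array([1, 2, 3])
B = np.array([4, 5, 6])
print(A + B)               # [5 7 9] -- element-wise addition
print(np.zeros((2, 3)))    # 2 rows x 3 columns of zeros
```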

Matrix Operations

4. For matrices X = np.array([[1, 2], [3, 4]]) and Y = np.array([[5, 6], [7, 8]]), what is np.dot(X,
Y)?
a) [[19, 22], [43, 50]]
b) [[6, 8], [10, 12]]
c) [[5, 12], [21, 32]]
d) [[7, 10], [15, 22]]

5. What does np.transpose([[1, 2], [3, 4]]) return?


a) [[1, 3], [2, 4]]
b) [[1, 2], [3, 4]]
c) [1, 2, 3, 4]
d) [[4, 3], [2, 1]]

6. Which function computes the element-wise product of two arrays?


a) np.dot()
b) np.matmul()
c) np.multiply()
d) np.cross()
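Questions 4–6 contrast the matrix product with element-wise operations (a sketch):

```python
import numpy as np

X = np.array([[1, 2], [3, 4]])
Y = np.array([[5, 6], [7, 8]])
print(np.dot(X, Y))       # [[19 22] [43 50]] -- row-by-column matrix product
print(np.transpose(X))    # [[1 3] [2 4]] -- rows and columns swapped
print(np.multiply(X, Y))  # [[ 5 12] [21 32]] -- element-wise product
```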

Array Manipulation

7. Given arr = np.array([10, 20, 30, 40]), what does arr[1:3] return?
a) [20, 30]
b) [10, 20]
c) [20, 30, 40]
d) [30, 40]

8. How do you reshape np.array([1, 2, 3, 4, 5, 6]) into a (2, 3) matrix?


a) arr.reshape(3, 2)
b) np.reshape(arr, (2, 3))
c) arr.resize(2, 3)
d) Both a and b

9. What does np.where(arr > 5, 1, 0) do if arr = [4, 7, 2, 9]?


a) [0, 1, 0, 1]
b) [4, 1, 2, 1]
c) [False, True, False, True]
d) Error
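Questions 8 and 9 can be checked directly (a short sketch):

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6])
print(np.reshape(arr, (2, 3)))   # [[1 2 3] [4 5 6]] -- question 8

vals = np.array([4, 7, 2, 9])
print(np.where(vals > 5, 1, 0))  # [0 1 0 1] -- question 9
```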
Advanced Operations

10. What is the output of np.sum([[1, 2], [3, 4]], axis=1)?


a) [3, 7]
b) [4, 6]
c) [1, 2, 3, 4]
d) 10

11. Given arr = np.array([1, 2, np.nan, 4]), how do you compute the mean ignoring NaN?
a) np.mean(arr)
b) np.nanmean(arr)
c) arr.mean(skipna=True)
d) np.mean(arr[~np.isnan(arr)])
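The axis argument and NaN handling from questions 10 and 11 behave as follows (a sketch):

```python
import numpy as np

M = np.array([[1, 2], [3, 4]])
print(np.sum(M, axis=1))  # [3 7] -- axis=1 sums across each row

arr = np.array([1, 2, np.nan, 4])
print(np.nanmean(arr))               # ignores the NaN: (1 + 2 + 4) / 3
print(np.mean(arr[~np.isnan(arr)]))  # equivalent via boolean masking
```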

12. What does np.random.seed(42) ensure?


a) Generates 42 random numbers
b) Fixes the random number generator’s output for reproducibility
c) Creates a 42x42 matrix of random values
d) Sets all array values to 42
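Question 12's reproducibility claim is easy to demonstrate (a sketch using the legacy global-seed API):

```python
import numpy as np

np.random.seed(42)
a = np.random.rand(3)
np.random.seed(42)  # re-seeding restarts the generator from the same state
b = np.random.rand(3)
print(np.array_equal(a, b))  # True -- identical seeds give identical sequences
```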

Answer Key

1. a) [3, 5, 7]

2. a) [5, 7, 9]

3. b) [[0, 0, 0], [0, 0, 0]]

4. a) [[19, 22], [43, 50]]

5. a) [[1, 3], [2, 4]]

6. c) np.multiply()

7. a) [20, 30]

8. b) np.reshape(arr, (2, 3))

9. a) [0, 1, 0, 1]

10. a) [3, 7]

11. b) np.nanmean(arr)

12. b) Fixes the random number generator’s output for reproducibility
