ML Question Bank and Sol
1. Use the perceptron learning rule to train the network. The set of input training vectors is as follows:
X1 = [1, -2, 0, -1]^T, X2 = [0, 1.5, -0.5, -1]^T, X3 = [-1, 1, 0.5, -1]^T
The learning constant is c = 0.1. The desired responses are d1 = -1, d2 = -1, d3 = 1.
Calculate the weights after one complete training cycle.
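The one-cycle update can be checked numerically. A minimal sketch, assuming an initial weight vector w = [1, -1, 0, 0.5] (the question does not state the initial weights, so this is an assumption):

```python
import numpy as np

# Training vectors and desired responses from the question.
X = np.array([
    [1.0, -2.0,  0.0, -1.0],   # X1, d1 = -1
    [0.0,  1.5, -0.5, -1.0],   # X2, d2 = -1
    [-1.0, 1.0,  0.5, -1.0],   # X3, d3 = +1
])
d = np.array([-1.0, -1.0, 1.0])

c = 0.1                               # learning constant from the question
w = np.array([1.0, -1.0, 0.0, 0.5])  # ASSUMED initial weights (not given in the question)

def sgn(net):
    """Bipolar threshold output."""
    return 1.0 if net >= 0 else -1.0

# One complete cycle: present each pattern once and apply the perceptron rule
#   w <- w + c * (d - o) * x,  where o = sgn(w . x).
for x, target in zip(X, d):
    o = sgn(w @ x)
    w = w + c * (target - o) * x

print(w)  # final weights after one cycle
```

With these assumed initial weights the final vector is w = [0.6, -0.4, 0.1, 0.5]; a different initial vector gives a different (equally valid) one-cycle result.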
3.
The values of x and their corresponding values of y are shown in the table below:
x 0 1 2 3 4
y 2 3 5 4 6
x y xy x²
0 2 0 0
1 3 3 1
2 5 10 4
3 4 12 9
4 6 24 16
Σx = 10  Σy = 20  Σxy = 49  Σx² = 30
b) We now calculate a and b using the least-squares regression formulas:
b = (nΣxy - ΣxΣy) / (nΣx² - (Σx)²) = (5·49 - 10·20) / (5·30 - 10²) = 45/50 = 0.9
a = (Σy - bΣx) / n = (20 - 0.9·10) / 5 = 2.2
Now that we have the least-squares regression line y = 0.9x + 2.2, substitute x = 10 to find the corresponding value of y: y = 0.9·10 + 2.2 = 11.2.
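The arithmetic can be verified with a short script that uses only the data from the table and the same closed-form least-squares formulas:

```python
import numpy as np

# Data from the table.
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([2, 3, 5, 4, 6], dtype=float)
n = len(x)

# Closed-form least-squares formulas:
#   b = (n*Sxy - Sx*Sy) / (n*Sx2 - Sx**2),  a = (Sy - b*Sx) / n
Sx, Sy, Sxy, Sx2 = x.sum(), y.sum(), (x * y).sum(), (x * x).sum()
b = (n * Sxy - Sx * Sy) / (n * Sx2 - Sx**2)
a = (Sy - b * Sx) / n

print(b, a)        # slope 0.9, intercept 2.2
print(b * 10 + a)  # predicted y at x = 10 -> 11.2
```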
4.
The sales of a company (in million dollars) for each year are shown in the table below.
x (year) 2005 2006 2007 2008 2009
y (sales) 12 19 29 37 45
a) We now use the table to calculate a and b in the least-squares regression line formula, coding the years as x = 0 for 2005 through x = 4 for 2009.
x y xy x²
0 12 0 0
1 19 19 1
2 29 58 4
3 37 111 9
4 45 180 16
Σx = 10  Σy = 142  Σxy = 368  Σx² = 30
b) We now calculate a and b using the least-squares regression formulas:
b = (nΣxy - ΣxΣy) / (nΣx² - (Σx)²) = (5·368 - 10·142) / (5·30 - 10²) = 420/50 = 8.4
a = (Σy - bΣx) / n = (142 - 8.4·10) / 5 = 11.6
The estimated sales in 2012 (x = 7) are: y = 8.4·7 + 11.6 = 70.4 million dollars.
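The same check works for the sales data; the only change is the y values and the prediction point (x = 7 for 2012):

```python
import numpy as np

# Years coded as x = 0 (2005) ... x = 4 (2009), sales in million dollars.
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([12, 19, 29, 37, 45], dtype=float)
n = len(x)

# Closed-form least-squares formulas, as in the previous question.
Sx, Sy, Sxy, Sx2 = x.sum(), y.sum(), (x * y).sum(), (x * x).sum()
b = (n * Sxy - Sx * Sy) / (n * Sx2 - Sx**2)
a = (Sy - b * Sx) / n

print(b, a)       # slope 8.4, intercept 11.6
print(b * 7 + a)  # sales estimate for 2012 (x = 7) -> 70.4
```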
Ans: A perceptron can be viewed as the basic building block of a single layer in a neural network, made up of four different parts:
1. Input values
2. Weights and bias
3. Net sum (weighted sum of the inputs)
4. Activation function
Difference between the perceptron training rule and the gradient-descent (delta) rule:

Perceptron training rule:
- Converges after a finite number of iterations to a hypothesis that perfectly classifies the training data, provided the training examples are linearly separable.
- Updates weights based on the error in the thresholded output.

Gradient-descent (delta) rule:
- Converges only asymptotically toward the minimum-error hypothesis, possibly requiring unbounded time, but converges regardless of whether the training data are linearly separable.
- Updates weights based on the error in the unthresholded linear combination of inputs.

Standard (batch) gradient descent:
- The error is summed over all examples before the weights are updated, so each weight-update step requires more computation.
- Can fall into a local minimum because it follows the gradient of the total error E(w).

Stochastic (incremental) gradient descent:
- Weights are updated upon examining each individual training example, so each weight-update step requires less computation.
- Can sometimes avoid falling into a local minimum because it uses the gradient of the per-example error E_d(w) rather than of E(w).
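The thresholded-vs-unthresholded distinction is easiest to see side by side. A minimal sketch on a single made-up training pattern (the data here are illustrative, not from the question):

```python
import numpy as np

# One illustrative training pattern (not from the question).
x = np.array([1.0, 0.5, -1.0])
d = 1.0                          # target
c = 0.1                          # learning rate
w = np.array([0.2, -0.4, 0.1])  # illustrative starting weights

# Perceptron rule: the error term uses the THRESHOLDED output o = sgn(w . x).
o = 1.0 if w @ x >= 0 else -1.0
w_perceptron = w + c * (d - o) * x

# Delta rule: the error term uses the UNTHRESHOLDED linear output w . x.
w_delta = w + c * (d - w @ x) * x

print(w_perceptron, w_delta)
```

Note that the perceptron rule makes no update at all once the thresholded output matches the target, while the delta rule keeps nudging the weights as long as the linear output differs from the target.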
[1 mark]
1. In the Back-Propagation learning algorithm, what is the objective of the learning? Does the Back-Propagation learning algorithm guarantee finding the global optimum solution?
Ans:
The objective is to learn the weights of the interconnections between the inputs and the hidden units
and between the hidden units and the output units.
The algorithm attempts to minimize the squared error between the network output values and the target values of these outputs.
The learning algorithm does not guarantee finding the global optimum solution; it guarantees convergence only to a local minimum of the error function.
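One backpropagation step on a tiny network makes the objective concrete: the output error is propagated back through both weight layers, and a gradient step reduces the squared error. A minimal sketch with an assumed 2-2-1 sigmoid network and made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data and network (2 inputs, 2 hidden units, 1 output).
x = np.array([0.5, -0.2])
t = 0.8                       # target output
W1 = rng.normal(size=(2, 2)) # input -> hidden weights
W2 = rng.normal(size=2)      # hidden -> output weights
lr = 0.5

def forward(W1, W2):
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    return h, y

h, y = forward(W1, W2)
err_before = 0.5 * (t - y) ** 2

# Backward pass: propagate the output error to both weight layers.
delta_out = (y - t) * y * (1 - y)         # dE/d(net_out)
delta_hid = delta_out * W2 * h * (1 - h)  # dE/d(net_hidden), per hidden unit
W2 = W2 - lr * delta_out * h
W1 = W1 - lr * np.outer(delta_hid, x)

_, y_new = forward(W1, W2)
err_after = 0.5 * (t - y_new) ** 2
print(err_before, err_after)  # squared error shrinks after the step
```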
2. What is an Artificial Neural Network (ANN)?
An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. A common form is the multi-layer fully-connected network, which consists of an input layer, one or more hidden layers, and an output layer; every node in one layer is connected to every node in the next layer.
3. A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with the constant of
proportionality being equal to 2. The inputs are 4, 10, 5 and 20 respectively. What will be the output?
Ans. The output is found by multiplying each input by its corresponding weight, summing the results, and scaling the sum by the constant of proportionality of the linear transfer function. Therefore: Output = 2 * (1*4 + 2*10 + 3*5 + 4*20) = 2 * 119 = 238.