ML Question Bank and Sol

The document provides information about using the Perceptron Learning Rule and Delta Rule to train neural networks on multiple datasets. For each dataset, it provides the input vectors, initial weights, learning rate, and desired outputs. It asks to calculate the weights after one complete training cycle for each rule. It also provides information about linear regression, including calculating the regression line and using it to estimate values. It defines a perceptron and perceptron learning rule, and asks about backpropagation, gradient descent rules, and differences between supervised vs unsupervised and batch vs stochastic gradient descent.

[5 Marks]

1. Use the Perceptron Learning Rule to train the network. The set of input training vectors is:

   X1 = [1, -2, 0, -1]^T,  X2 = [0, 1.5, -0.5, -1]^T,  X3 = [-1, 1, 0.5, -1]^T

   and the initial weight vector is

   W1 = [1, -1, 0, 0.5]^T

   The learning constant is c = 0.1. The desired responses are d1 = -1, d2 = -1, d3 = 1.
   Calculate the weights after one complete cycle.
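A worked sketch of one training cycle in Python (assuming the sign activation returns +1 when the net input is ≥ 0; that convention is not actually exercised here, since no net input is exactly zero):

```python
import numpy as np

# Input vectors (bias input -1 as the last component) and desired responses
X = [np.array([1.0, -2.0, 0.0, -1.0]),
     np.array([0.0, 1.5, -0.5, -1.0]),
     np.array([-1.0, 1.0, 0.5, -1.0])]
d = [-1, -1, 1]
w = np.array([1.0, -1.0, 0.0, 0.5])   # initial weight vector W1
c = 0.1                               # learning constant

for x, target in zip(X, d):
    o = 1 if w @ x >= 0 else -1       # thresholded (sign) output
    w = w + c * (target - o) * x      # perceptron rule; no change when o == target

print(w)  # final weights after one cycle: [0.6, -0.4, 0.1, 0.5]
```

Note that the second example is already classified correctly (net = -1.6, so o = d2 = -1), so only the first and third examples change the weights.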
2. Use the Delta Rule to train the network. The set of input training vectors is:

   X1 = [1, -2, 0, -1]^T,  X2 = [0, 1.5, -0.5, -1]^T,  X3 = [-1, 1, 0.5, -1]^T

   and the initial weight vector is

   W1 = [1, -1, 0, 0.5]^T

   The learning constant is c = 0.1. The desired responses are d1 = -1, d2 = -1, d3 = 1.
   Calculate the weights after one complete cycle.
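A sketch of one cycle, assuming the unthresholded linear-unit form of the delta rule (o = w · x, as in Mitchell); texts that use a continuous sigmoid or bipolar activation include an extra derivative factor in the update and give different numbers:

```python
import numpy as np

# Same data as above: inputs with bias component -1, desired responses, W1
X = [np.array([1.0, -2.0, 0.0, -1.0]),
     np.array([0.0, 1.5, -0.5, -1.0]),
     np.array([-1.0, 1.0, 0.5, -1.0])]
d = [-1, -1, 1]
w = np.array([1.0, -1.0, 0.0, 0.5])
c = 0.1

for x, target in zip(X, d):
    o = w @ x                     # unthresholded linear output
    w = w + c * (target - o) * x  # delta rule update

print(w)  # final weights after one cycle
```

Unlike the perceptron rule, every example produces a (possibly small) update here, because the real-valued output o never equals the target exactly.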
3. The values of x and their corresponding values of y are shown in the table below:

   x:  0  1  2  3  4
   y:  2  3  5  4  6

a) Find the least square regression line y = a x + b.


b) Estimate the value of y when x = 10.

a) We use a table to calculate a and b.

   x        y        xy        x²
   0        2        0         0
   1        3        3         1
   2        5        10        4
   3        4        12        9
   4        6        24        16
   Σx = 10  Σy = 20  Σxy = 49  Σx² = 30

We now calculate a and b using the least squares regression formulas for a and b.

a = (nΣxy - ΣxΣy) / (nΣx² - (Σx)²) = (5*49 - 10*20) / (5*30 - 10²) = 0.9

b = (1/n)(Σy - a Σx) = (1/5)(20 - 0.9*10) = 2.2

b) Now that we have the least squares regression line y = 0.9x + 2.2, substitute x = 10 to find the corresponding value of y.

y = 0.9 * 10 + 2.2 = 11.2
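The same computation can be checked numerically; a minimal sketch using NumPy and the formulas above:

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([2, 3, 5, 4, 6], dtype=float)
n = len(x)

# Least squares regression formulas used in the worked solution
a = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
b = (np.sum(y) - a * np.sum(x)) / n

print(a, b)        # slope and intercept: approximately 0.9 and 2.2
print(a * 10 + b)  # estimate at x = 10: approximately 11.2
```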

4. The sales of a company (in million dollars) for each year are shown in the table below.

   x (year):  2005  2006  2007  2008  2009
   y (sales): 12    19    29    37    45

a) Find the least square regression line y = a x + b.


b) Use the least squares regression line as a model to estimate the sales of the company
in 2012.

a) We code the years as t = x - 2005 (so t = 0, 1, 2, 3, 4) and use the table to calculate a and b in the least squares regression line formula.

   t        y         ty         t²
   0        12        0          0
   1        19        19         1
   2        29        58         4
   3        37        111        9
   4        45        180        16
   Σt = 10  Σy = 142  Σty = 368  Σt² = 30

We now calculate a and b using the least squares regression formulas for a and b.

a = (nΣty - ΣtΣy) / (nΣt² - (Σt)²) = (5*368 - 10*142) / (5*30 - 10²) = 8.4

b = (1/n)(Σy - a Σt) = (1/5)(142 - 8.4*10) = 11.6

b) In 2012, t = 2012 - 2005 = 7

The estimated sales in 2012 are: y = 8.4 * 7 + 11.6 = 70.4 million dollars.
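As a cross-check, the same fit can be computed with np.polyfit on the shifted years (a minimal sketch; np.polyfit returns the highest-degree coefficient first):

```python
import numpy as np

years = np.array([2005, 2006, 2007, 2008, 2009], dtype=float)
sales = np.array([12, 19, 29, 37, 45], dtype=float)

t = years - 2005                  # code years as t = 0..4, as in the table
a, b = np.polyfit(t, sales, 1)    # degree-1 fit: sales ≈ a*t + b

print(a, b)                       # approximately 8.4 and 11.6
print(a * (2012 - 2005) + b)      # estimated 2012 sales: approximately 70.4
```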

5. What is a perceptron? Specify the Perceptron Learning Rule. What is its purpose?

Ans: A perceptron can be viewed as the basic building block of a single layer in a neural network, made up of four parts:

1. Input values (one input layer)
2. Weights and bias
3. Net sum
4. Activation function

Purpose:

- It is used as a binary classifier.

Perceptron Training Rule

- Each weight is updated as wi ← wi + η(t - o)xi, where t is the target output, o is the perceptron's thresholded output, and η is the learning rate.
- The perceptron training rule is proven to converge if:
  - the training instances in X are linearly separable, and
  - a sufficiently small η is used.
- The perceptron training rule does not converge if the training instances are not linearly separable.

6. Explain back-propagation for a multi-layer feed-forward network.


7. Derive a gradient descent training rule for a single unit with output
   o = w0 + w1x1 + w1x1² + … + wnxn + wnxn²
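A sketch of the derivation, assuming the usual squared-error objective E = ½ Σ_d (t_d - o_d)² over training examples d. Since o = w0 + Σi wi(xi + xi²), the partial derivatives are:

   ∂E/∂w0 = -Σ_d (t_d - o_d)
   ∂E/∂wi = -Σ_d (t_d - o_d)(x_id + x_id²),  for i = 1, …, n

so the gradient descent update rule (moving opposite the gradient with learning rate η) is:

   Δw0 = η Σ_d (t_d - o_d)
   Δwi = η Σ_d (t_d - o_d)(x_id + x_id²)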
8. Explain Gradient Descent algorithm.
[2.5 marks]
1. Write the differences between the Perceptron Learning Rule (PLR) and the Delta Rule (DR).

Perceptron Learning Rule (PLR):
- Converges after a finite number of iterations to a hypothesis that perfectly classifies the training data, provided the training examples are linearly separable.
- Requires linearly separable data.
- Updates weights based on the error in the thresholded output.

Delta Rule (DR):
- Converges only asymptotically toward the minimum-error hypothesis, possibly requiring unbounded time, but converges regardless of whether the training data are linearly separable.
- Works even on linearly non-separable data.
- Updates weights based on the error in the unthresholded linear combination of inputs.

2. What are the differences between Supervised and Unsupervised Learning?


3. What are the differences between Standard Gradient Descent and Stochastic Gradient Descent?

Standard/Batch Gradient Descent:
- The error is summed over all training examples before the weights are updated.
- Summing over all examples requires more computation per weight-update step.
- Can fall into a local minimum, because it follows the true gradient of the error over the whole training set.

Stochastic Gradient Descent:
- Weights are updated upon examining each training example.
- Less computation per weight-update step.
- Can sometimes avoid falling into a local minimum, because it follows the per-example gradients, which vary from step to step.
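The structural difference can be sketched for a single linear unit (the example data, learning rate, and zero initialization below are hypothetical, chosen only to make the two update schemes comparable):

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # hypothetical inputs
t = np.array([1.0, 2.0, 3.0])                       # hypothetical targets
eta = 0.1

# Batch gradient descent: one update from the error summed over ALL examples
w_batch = np.zeros(2)
grad = sum((ti - xi @ w_batch) * xi for xi, ti in zip(X, t))
w_batch = w_batch + eta * grad

# Stochastic gradient descent: one update PER example, in sequence
w_sgd = np.zeros(2)
for xi, ti in zip(X, t):
    w_sgd = w_sgd + eta * (ti - xi @ w_sgd) * xi

print(w_batch, w_sgd)  # the two schemes take different paths through weight space
```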

[1 mark]
1. In the Back-Propagation learning algorithm, what is the objective of the learning? Does the Back-Propagation learning algorithm guarantee to find the globally optimal solution?
Ans:
- The objective is to learn the weights of the interconnections between the inputs and the hidden units, and between the hidden units and the output units.
- The algorithm attempts to minimize the squared error between the network output values and the target values for these outputs.
- The learning algorithm does not guarantee finding the globally optimal solution.
- It guarantees only convergence toward a local minimum of the error function.
2. What is an Artificial Neural Network (ANN)?
- An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. A common form is the multi-layer fully-connected network, which consists of an input layer, one or more hidden layers, and an output layer. Every node in one layer is connected to every node in the next layer.

3. A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with the constant of proportionality equal to 2. The inputs are 4, 10, 5 and 20 respectively. What will be the output?
Ans. The output is found by multiplying each weight by its respective input, summing the results, and scaling by the transfer function's constant of proportionality. Therefore: Output = 2 * (1*4 + 2*10 + 3*5 + 4*20) = 2 * 119 = 238.
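The arithmetic can be verified directly:

```python
weights = [1, 2, 3, 4]
inputs = [4, 10, 5, 20]
k = 2  # constant of proportionality of the linear transfer function

net = sum(w * x for w, x in zip(weights, inputs))  # 4 + 20 + 15 + 80 = 119
output = k * net
print(output)  # 238
```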

4. In which learning the teacher returns reward and punishment to learner?


Ans. Reinforcement Learning
