Machine Learning Solution
3) Use the perceptron learning rule to train the network. The set of input training
vectors is as follows:

X1 = [1, -2, 0, -1]^T,   X2 = [0, 1.5, -0.5, -1]^T,   X3 = [-1, 1, 0.5, -1]^T

The learning constant is C = 0.1. The desired responses are d1 = -1, d2 = -1,
d3 = 1. Calculate the weights after one complete training cycle.
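The problem does not state the initial weight vector, so the sketch below assumes it starts at zero; with a different initial weight the resulting cycle changes accordingly. Each step applies the perceptron rule w ← w + C(d - sgn(wᵀx))x once per training vector:

```python
import numpy as np

# Training vectors from the problem (written here as flat arrays).
X = [np.array([1.0, -2.0, 0.0, -1.0]),
     np.array([0.0, 1.5, -0.5, -1.0]),
     np.array([-1.0, 1.0, 0.5, -1.0])]
d = [-1, -1, 1]   # desired responses d1, d2, d3
C = 0.1           # learning constant

# Initial weights are not given in the problem; zeros are an assumption.
w = np.zeros(4)

def sgn(v):
    """Bipolar activation: +1 for v >= 0, -1 otherwise (sign convention assumed)."""
    return 1.0 if v >= 0 else -1.0

# One complete cycle = one pass over all three training vectors.
for x, target in zip(X, d):
    o = sgn(w @ x)                # actual network output
    w = w + C * (target - o) * x  # perceptron learning rule

print(w)  # weights after one cycle, under the assumed zero initial weights
```

Note that the weight vector only changes on steps where the output disagrees with the desired response; with zero initial weights, all three steps here produce a correction.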
4.
The values of x and their corresponding values of y are shown in the table below:

x: 0  1  2  3  4
y: 2  3  5  4  6
x     y     xy     x²
0     2     0      0
1     3     3      1
2     5     10     4
3     4     12     9
4     6     24     16
Σx = 10   Σy = 20   Σxy = 49   Σx² = 30
We now calculate a and b using the least squares regression formulas:
a = (nΣxy - ΣxΣy) / (nΣx² - (Σx)²) = (5*49 - 10*20) / (5*30 - 10²) = 45/50 = 0.9
b = (1/n)(Σy - aΣx) = (1/5)(20 - 0.9*10) = 2.2
b) Now that we have the least squares regression line y = 0.9x + 2.2, substitute
x = 10 to find the corresponding value of y:
y = 0.9 * 10 + 2.2 = 11.2
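The hand calculation can be checked directly from the tabulated data; the snippet below recomputes the sums, the slope a, the intercept b, and the prediction at x = 10:

```python
# Least squares fit for the tabulated data, checking the hand calculation.
x = [0, 1, 2, 3, 4]
y = [2, 3, 5, 4, 6]
n = len(x)

sum_x  = sum(x)                                  # Σx  = 10
sum_y  = sum(y)                                  # Σy  = 20
sum_xy = sum(xi * yi for xi, yi in zip(x, y))    # Σxy = 49
sum_x2 = sum(xi ** 2 for xi in x)                # Σx² = 30

a = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # slope
b = (sum_y - a * sum_x) / n                                   # intercept

print(a, b, a * 10 + b)
```

This reproduces a = 0.9, b = 2.2, and y(10) = 11.2 (up to floating point rounding).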
5.a)
What is the back-propagation learning algorithm? What is the objective of the
learning? Does the back-propagation learning algorithm guarantee finding the
global optimum solution?
The algorithm is used to train a neural network efficiently by applying the
chain rule of calculus.
In simple terms, after each forward pass through a network,
backpropagation performs a backward pass while adjusting the model’s
parameters (weights and biases).
The objective is to learn the weights of the interconnections between
the inputs and the hidden units and between the hidden units and the
output units.
The algorithm attempts to minimize the squared error between the
network output values and the target values of these outputs.
The learning algorithm does not guarantee finding the global optimum
solution; it is only guaranteed to reach a local minimum of the error
function.
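The points above can be sketched concretely. The network below (2 inputs, 2 hidden units, 1 output, sigmoid activations, XOR data, learning rate 0.5) is an illustrative assumption, not part of the question; the backward pass propagates the squared-error gradient through both layers via the chain rule, and the error decreases to some minimum, not necessarily the global one:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs (XOR, assumed)
T = np.array([[0.], [1.], [1.], [0.]])                  # target outputs

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)   # input  -> hidden weights
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)   # hidden -> output weights
lr = 0.5                                         # learning rate (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)   # hidden activations
    o = sigmoid(h @ W2 + b2)   # network output
    return h, o

def error(o):
    return 0.5 * np.sum((o - T) ** 2)   # squared error being minimized

_, o = forward(X)
initial_error = error(o)

for _ in range(5000):
    # Forward pass.
    h, o = forward(X)
    # Backward pass: chain rule through the output and hidden layers.
    delta_o = (o - T) * o * (1 - o)            # dE/d(output pre-activation)
    delta_h = (delta_o @ W2.T) * h * (1 - h)   # dE/d(hidden pre-activation)
    # Gradient-descent updates on weights and biases.
    W2 -= lr * h.T @ delta_o; b2 -= lr * delta_o.sum(axis=0)
    W1 -= lr * X.T @ delta_h; b1 -= lr * delta_h.sum(axis=0)

_, o = forward(X)
final_error = error(o)
print(initial_error, final_error)
```

The error after training is lower than at the start, but depending on the initial weights the run can settle in a local minimum rather than the global one, which is exactly the caveat stated above.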