Week 7 - Lab
[Back propagation algorithm]
1. Aim/Objective:
State the objective of this experiment.
2. Theory
Explain the back propagation learning rule.
3. Procedure:
Give a step-by-step procedure for the implementation.
4. Program
Python code implementing the back propagation learning rule.
5. Result
State your inputs and their corresponding outputs, with figures.
6. Conclusion
Summarize what you have learned from the experiment.
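The weight-update rule that the sample program below implements can be stated compactly for the Theory section (a sketch in conventional backpropagation notation; the symbols are the textbook ones, not names from the code). For an output unit j with target t_j and output o_j:
delta_j = (t_j - o_j) * f'(net_j)
For a hidden unit h that feeds output units j:
delta_h = f'(net_h) * sum_j (w_hj * delta_j)
Every weight is then moved along the negative gradient of the error:
w_ij := w_ij + eta * delta_j * a_i
where eta is the learning rate, a_i is the activation entering the weight, and f is the activation function.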
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # derivative of the sigmoid, written in terms of the pre-activation x
    return sigmoid(x) * (1.0 - sigmoid(x))

def tanh(x):
    return np.tanh(x)

def tanh_prime(x):
    # derivative of tanh, written in terms of the activation x = tanh(net)
    return 1.0 - x**2
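One detail to notice before reading the network code: tanh_prime is written in terms of the activation value itself (if a = tanh(net), then d tanh/d net = 1 - a^2), while sigmoid_prime is written in terms of the pre-activation. The training loop below applies the derivative to stored activations, which matches the tanh form, so tanh is the activation used here. A quick numerical check (a small sketch, not part of the lab program):

import numpy as np

net = 0.7
a = np.tanh(net)
print(1.0 - a**2)  # derivative expressed through the activation
# central finite-difference approximation of d tanh / d net
eps = 1e-6
print((np.tanh(net + eps) - np.tanh(net - eps)) / (2 * eps))
# both lines print the same value (about 0.6347)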
class NeuralNetwork:
    def __init__(self, layers, activation='tanh'):
        # choose the activation function and its derivative
        if activation == 'sigmoid':
            self.activation = sigmoid
            self.activation_prime = sigmoid_prime
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_prime = tanh_prime

        # Set weights
        self.weights = []
        # layers = [2,2,1]
        # range of weight values (-1,1)
        # input and hidden layers - random((2+1, 2+1)) : 3 x 3
        for i in range(1, len(layers) - 1):
            r = 2 * np.random.random((layers[i-1] + 1, layers[i] + 1)) - 1
            self.weights.append(r)
        # output layer - random((2+1, 1)) : 3 x 1
        r = 2 * np.random.random((layers[i] + 1, layers[i+1])) - 1
        self.weights.append(r)
    def fit(self, X, y, learning_rate=0.2, epochs=100000):
        # add a bias column of ones to the input matrix
        ones = np.atleast_2d(np.ones(X.shape[0]))
        X = np.concatenate((ones.T, X), axis=1)

        for k in range(epochs):
            if k % 10000 == 0:
                print('epochs:', k)
            # pick one training sample at random (stochastic updating)
            i = np.random.randint(X.shape[0])
            a = [X[i]]

            # forward pass through every layer
            for l in range(len(self.weights)):
                dot_value = np.dot(a[l], self.weights[l])
                activation = self.activation(dot_value)
                a.append(activation)

            # output layer
            error = y[i] - a[-1]
            deltas = [error * self.activation_prime(a[-1])]

            # hidden layers: propagate the error term backwards, from the
            # layer before the output down to the first hidden layer
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_prime(a[l]))
            # reverse
            # [level3(output)->level2(hidden)] => [level2(hidden)->level3(output)]
            deltas.reverse()

            # backpropagation
            # 1. Multiply its output delta and input activation
            #    to get the gradient of the weight.
            # 2. Subtract a ratio (percentage) of the gradient from the weight.
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        # prepend the bias unit, then run a forward pass
        a = np.concatenate((np.ones(1), np.array(x)))
        for l in range(len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a
if __name__ == '__main__':
    nn = NeuralNetwork([2, 2, 1])
    X = np.array([[0, 0],
                  [0, 1],
                  [1, 0],
                  [1, 1]])
    y = np.array([0, 1, 1, 0])
    nn.fit(X, y)
    for e in X:
        print(e, nn.predict(e))
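After training, the network should map the XOR inputs to values close to their targets (approximations such as 0.98 rather than an exact 1). A quick quantitative check that can be appended to the main block (a sketch reusing nn, X and y from above):

preds = np.array([nn.predict(e)[0] for e in X])
print(np.round(preds, 3))             # expected: values near 0, 1, 1, 0
print('MSE:', np.mean((y - preds) ** 2))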
% partial derivatives of the net inputs with respect to the weights
% (power rule: d(net)/dw = input * w^(1-1) = input)
change_in_net_w5 = i3*w5^(1-1);
change_in_net_w6 = i3*w6^(1-1);
change_in_net_wb1 = b1*wb1^(1-1);
%% gradient descent: input to hidden layer
% each gradient is a chain-rule product:
% (error derivative at the node) x (activation derivative) x (input signal)
G1 = change_in_h1*change_in_h1net*change_in_net_w1;
G2 = change_in_h2*change_in_h2net*change_in_net_w2;
G3 = change_in_h1*change_in_h1net*change_in_net_w3;
G4 = change_in_h2*change_in_h2net*change_in_net_w4;
G5 = change_in_h1*change_in_h1net*change_in_net_w5;
G6 = change_in_h2*change_in_h2net*change_in_net_w6;
% bias gradient: sum of the contributions through both hidden units
Gb1 = change_in_error_1*change_in_o1net*w7*change_in_h1net*change_in_net_wb1 + change_in_error_2*change_in_o2net*w10*change_in_h2net*change_in_net_wb2;
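The MATLAB fragment above assembles each gradient as a product of chain-rule factors: an error derivative, an activation derivative, and the input signal feeding the weight. The same bookkeeping can be written out in Python for a single hidden-to-output weight; in this sketch the network values (i1, w1, wb1, w7, the target t1) are made-up numbers, since the full MATLAB script that defines them is not shown:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# forward pass through one hidden and one output unit (hypothetical values)
i1, w1, b1, wb1 = 0.05, 0.15, 1.0, 0.35
h1_net = i1 * w1 + b1 * wb1
h1 = sigmoid(h1_net)
w7 = 0.40
o1_net = h1 * w7
o1 = sigmoid(o1_net)
t1 = 0.01  # target output

# chain-rule factors, named after the MATLAB fragment
change_in_error_1 = o1 - t1          # dE/do1 for E = 0.5*(t1 - o1)^2
change_in_o1net = o1 * (1.0 - o1)    # do1/d(o1_net) for a sigmoid unit
change_in_net_w7 = h1                # d(o1_net)/dw7 = h1 * w7^(1-1) = h1
G7 = change_in_error_1 * change_in_o1net * change_in_net_w7  # dE/dw7
print('gradient dE/dw7 =', G7)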
Exercise 1.
Lab Activity: Simulation
Design and develop a neural network system for the following experiment.
Experiment 1: Back Propagation algorithm
1. Design and train a neural network system based on the back propagation
algorithm, using the hyperbolic tangent (sigmoidal) activation function; a
starter sketch is given after this list.
2. Tune the neural network model, minimize the error by updating the
weights, and perform the testing.
3. Run the simulation in groups and explain the working principles of the
algorithm.
4. Interpret the output of the designed neural network system by varying the
inputs.
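As a starting point for Experiment 1, the NeuralNetwork class from the Program section can be instantiated with the tanh activation explicitly; learning_rate and epochs below are arbitrary starting values meant to be tuned in step 2 (a sketch, assuming the class and the XOR data X, y defined above):

nn = NeuralNetwork([2, 2, 1], activation='tanh')
nn.fit(X, y, learning_rate=0.2, epochs=100000)
for e in X:
    print(e, nn.predict(e))  # compare against the XOR targets 0, 1, 1, 0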