Homework Week 1

The document implements and trains a feedforward neural network (FFNN) for multi-class classification. It defines functions for forward propagation, backpropagation, and error calculation, and trains the network over multiple iterations. Plots show the error decreasing over the iterations and compare the predicted outputs with the actual training outputs, demonstrating that the network learns the classification task. The trained model is then tested on new data points and its predictions are reported.


In [71]: import numpy as np

import matplotlib.pyplot as plt

In [88]: # Reading Data from file

fichier = "data_ffnn_3classes.txt"
x0, x1, y = np.loadtxt(fichier, delimiter=" ", unpack=True)

x = np.c_[x0, x1]                # feature matrix (x1, x2)

y = np.reshape(y, (len(y), 1))
Y = np.c_[y, 1 - y]              # two-column target encoding used in the rest of the notebook
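Since the data file is named data_ffnn_3classes.txt, the labels may actually span three classes; in that case a one-hot encoding with three output columns (and J = 3 output neurons) would be the usual choice. The following is only a hedged sketch of that alternative, assuming integer labels 0, 1, 2; the notebook itself keeps the two-column encoding above.

# Hypothetical alternative (not used below): one-hot encode labels {0, 1, 2}
labels = y.astype(int).ravel()
Y_onehot = np.zeros((len(labels), 3))
Y_onehot[np.arange(len(labels)), labels] = 1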

In [89]: # Initializing Network


X_barre = np.c_[np.ones((len(x), 1)), x]   # add the bias column to the inputs

N = len(x[0]) # Number of features


K = 5 # Number of hidden neurons
J = 2 # Number of output neurons

# Random initialisation of the hidden-layer parameters
V = np.random.rand(K, N+1)
# Random initialisation of the output-layer parameters
W = np.random.rand(J, K+1)

In [90]: def ActivationFunction(sigma):
    # Sigmoid activation
    return np.reciprocal(1 + np.exp(-sigma))
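A side note: np.exp(-sigma) can overflow and emit warnings when sigma is a large negative number. A numerically stable variant is sketched below; the name StableActivationFunction is hypothetical and not part of the original notebook.

def StableActivationFunction(sigma):
    # Numerically stable sigmoid: evaluate exp only on the side that cannot overflow
    out = np.empty_like(sigma, dtype=float)
    pos = sigma >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-sigma[pos]))
    exp_s = np.exp(sigma[~pos])
    out[~pos] = exp_s / (1.0 + exp_s)
    return out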

In [91]: def PartialDerivate(V, W, G, F_barre, F, Y, X_barre):
    # Gradient of the error with respect to the hidden-layer weights V
    dEdv = np.zeros((V.shape[0], V.shape[1]))
    for k in range(V.shape[0]):
        for n in range(V.shape[1]):
            for i in range(G.shape[0]):
                for j in range(G.shape[1]):
                    # W[j][k+1] because column 0 of W multiplies the bias term in F_barre
                    dEdv[k][n] += ((G[i][j] - Y[i][j]) * G[i][j] * (1 - G[i][j]) * W[j][k + 1]
                                   * F[i][k] * (1 - F[i][k]) * X_barre[i][n])

    # Gradient of the error with respect to the output-layer weights W
    dEdw = np.zeros((W.shape[0], W.shape[1]))
    for j in range(W.shape[0]):
        for k in range(W.shape[1]):
            for i in range(len(G)):
                dEdw[j][k] += (G[i][j] - Y[i][j]) * G[i][j] * (1 - G[i][j]) * F_barre[i][k]

    return dEdv, dEdw
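For reference, the same two gradients can be computed without explicit loops using matrix products. This is a hedged, equivalent sketch assuming the shapes used above (G and Y of shape I×J, F of shape I×K, F_barre of shape I×(K+1), X_barre of shape I×(N+1)); the name PartialDerivateVectorized is hypothetical, and the signature mirrors the loop version so it could be swapped in.

def PartialDerivateVectorized(V, W, G, F_barre, F, Y, X_barre):
    # Output-layer error term, shape (I, J)
    delta_out = (G - Y) * G * (1 - G)
    # Gradient w.r.t. the output-layer weights, shape (J, K+1)
    dEdw = delta_out.T @ F_barre
    # Back-propagate through W, skipping its bias column, shape (I, K)
    delta_hidden = (delta_out @ W[:, 1:]) * F * (1 - F)
    # Gradient w.r.t. the hidden-layer weights, shape (K, N+1)
    dEdv = delta_hidden.T @ X_barre
    return dEdv, dEdw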

In [92]: def BackwardPropagation(V, W, G, F_barre, F, Y, X_barre):
    alpha1 = 0.1   # learning rate for the output layer
    alpha2 = 0.1   # learning rate for the hidden layer
    dEdv, dEdw = PartialDerivate(V, W, G, F_barre, F, Y, X_barre)
    W = W - alpha1 * dEdw
    V = V - alpha2 * dEdv

    return V, W

In [93]: def ForwardPropagation(x_barre, V, W):
    # Hidden layer
    x_barrebarre = np.dot(x_barre, V.T)
    F = ActivationFunction(x_barrebarre)

    # Output layer: prepend a bias column to the hidden activations
    F_barre = np.c_[np.ones((len(F), 1)), F]
    F_barrebarre = np.dot(F_barre, W.T)
    G = ActivationFunction(F_barrebarre)

    return F, F_barre, G

In [94]: def SSE(y, g):
    # Sum of squared errors: E = (1/2) * sum((y - g)^2)
    E = (1/2) * np.sum(np.square(y - g))
    return E

In [95]: iteration = 2000

cost_history = np.zeros((iteration, 1))

for i in range(iteration):
    F, F_barre, G = ForwardPropagation(X_barre, V, W)
    V, W = BackwardPropagation(V, W, G, F_barre, F, Y, X_barre)
    cost_history[i] = SSE(Y, G)

Homework
1. Implement the back propagation of the above FFNN with the purpose to optimize the model parameters. That is, train your model to learn how to solve the above multi-classification problem.
2. Show that your algorithm converges by illustrating the error reduction at each iteration.

In [97]: plt.figure()
plt.plot(cost_history)
plt.title("Error reduction at each iteration")

Out[97]: Text(0.5, 1.0, 'Error reduction at each iteration')

3. What are the optimal parameter values for the hidden layer (v) and for the output layer (ω)?

In [81]: # v and w were initialised randomly; their shapes are fixed by the number of hidden/output
# neurons and the number of input features. The optimal values are the trained V and W obtained
# after the gradient-descent iterations above.
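As a minimal illustration (assuming the training cell above has been run), the learned parameters can simply be printed:

print("Optimal hidden-layer parameters V:\n", V)
print("Optimal output-layer parameters W:\n", W)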

4. Show that your classifier works properly by comparing the predicted output values to the actual training output values.

In [98]: plt.plot(G)
plt.show()

In [99]: plt.plot(Y)
plt.show()
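Beyond the raw plots of G and Y, a more direct check is to count how many training points are classified correctly; this is a sketch assuming the column with the largest activation is taken as the predicted class:

pred = np.argmax(G, axis=1)
actual = np.argmax(Y, axis=1)
print("Training accuracy:", np.mean(pred == actual))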

5. Test your optimized model by doing forward propagation over the following test data set: (x1, x2) = (2, 2), (x1, x2) = (4, 4), and (x1, x2) = (4.5, 1.5).

In [100]: X_test = np.array([[2,2],[4,4],[4.5,1.5]])


X_test = np.c_[np.ones((X_test.shape[0],1)),X_test]
print("X test :\n",X_test)
F_test , F_barre_test , G_test = ForwardPropagation(X_test, V , W)
print("G ",G_test)

X test :
[[1. 2. 2. ]
[1. 4. 4. ]
[1. 4.5 1.5]]
G [[8.39942832e-03 9.92125201e-01]
[9.96804280e-01 3.15335915e-03]
[9.99977342e-01 2.40937219e-05]]
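To read these activations as class decisions, the column with the largest activation in each row can be taken as the predicted class (a sketch, assuming that convention):

print("Predicted classes for the test points:", np.argmax(G_test, axis=1))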
