A Simple Neural Network From Scratch
Get Dataset
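A minimal loading sketch consistent with the shapes printed below, assuming scikit-learn's Iris dataset (4 features, 3 classes, 150 samples); the actual loading cell may differ:

In [ ]: #load dataset (assumption: scikit-learn's Iris dataset)
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
data = iris.data        #feature matrix, shape (150,4)
target = iris.target    #class labels in {0,1,2}, shape (150,)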
#print shape
print("Data Shape = {}".format(data.shape))
print("Target Shape = {}".format(target.shape))
print('Classes : {}'.format(np.unique(target)))
print('Sample data : {} , Target = {}'.format(data[70],target[70]))
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 1 of 9
a-simple-neural-network-from-scratch - Jupyter Notebook 24/11/2020, 13:10
Input Units = 4
Hidden Units = 8
Output Units = 3
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 2 of 9
a-simple-neural-network-from-scratch - Jupyter Notebook 24/11/2020, 13:10
In [ ]: #HYPERPARAMETERS
#define layer_neurons
input_units = 4 #neurons in input layer
hidden_units = 8 #neurons in hidden layer
output_units = 3 #neurons in output layer
#define hyper-parameters
learning_rate = 0.03
#regularization parameter
beta = 0.00001
#num of iterations
iters = 4001
Dimensions of Parameters
Shape of layer1_weights (Wxh) = (4,8)
Shape of layer1_biases (Bh) = (8,1)
Shape of layer2_weights (Why) = (8,3)
Shape of layer2_biases (By) = (3,1)
In [ ]: #PARAMETERS
def initialize_parameters(mean=0.0,std=0.01):   #wrapper name and init values are assumptions
    layer1_weights = np.random.normal(mean,std,(input_units,hidden_units))
    layer1_biases = np.ones((hidden_units,1))
    layer2_weights = np.random.normal(mean,std,(hidden_units,output_units))
    layer2_biases = np.ones((output_units,1))
    parameters = dict()
    parameters['layer1_weights'] = layer1_weights
    parameters['layer1_biases'] = layer1_biases
    parameters['layer2_weights'] = layer2_weights
    parameters['layer2_biases'] = layer2_biases
    return parameters
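A quick way to confirm the shapes listed above (hypothetical usage of the init function):

In [ ]: #sanity-check parameter shapes
params = initialize_parameters()
for name,value in params.items():
    print('{} : {}'.format(name,value.shape))
#expect (4,8), (8,1), (8,3), (3,1)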
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 3 of 9
a-simple-neural-network-from-scratch - Jupyter Notebook 24/11/2020, 13:10
Activation Function
Sigmoid
[Plot: the sigmoid curve over x from −6 to 6, rising from 0 to 1 and crossing 0.5 at x = 0]
In [ ]: #activation function
def sigmoid(X):
    return 1/(1+np.exp(-X))
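The output layer below also calls a softmax, whose cell is not reproduced here; a minimal sketch (row-wise, numerically stabilized by subtracting the per-row max):

In [ ]: #softmax for the output layer (sketch)
def softmax(X):
    exp_X = np.exp(X - np.max(X,axis=1,keepdims=True))   #subtract row max for stability
    return exp_X/np.sum(exp_X,axis=1,keepdims=True)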
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 4 of 9
a-simple-neural-network-from-scratch - Jupyter Notebook 24/11/2020, 13:10
1. Forward Propagation
2. Backward Propagation
m = len(train_dataset)
Store the derivatives in a derivatives dict
3. Update Parameters
In [ ]: #forward propagation
def forward_propagation(train_dataset,parameters):
    cache = dict()              #to store the intermediate values for backward propagation
    m = len(train_dataset)      #number of training examples
    #get parameters
    layer1_weights = parameters['layer1_weights']
    layer1_biases = parameters['layer1_biases']
    layer2_weights = parameters['layer2_weights']
    layer2_biases = parameters['layer2_biases']
    #forward prop (biases are stored as column vectors, hence the transpose)
    logits = np.matmul(train_dataset,layer1_weights) + layer1_biases.T
    activation1 = np.array(sigmoid(logits)).reshape(m,hidden_units)
    activation2 = np.array(np.matmul(activation1,layer2_weights) + layer2_biases.T)
    output = np.array(softmax(activation2)).reshape(m,output_units)
    #cache activations for backpropagation
    cache['activation1'] = activation1
    return cache,output
#backward propagation
def backward_propagation(train_dataset,train_labels,parameters,cache,output):
    derivatives = dict()        #to store the derivatives
    m = len(train_dataset)      #number of training examples
    #get parameters and cached activations
    layer1_weights = parameters['layer1_weights']
    layer2_weights = parameters['layer2_weights']
    activation1 = cache['activation1']
    #calculate errors
    error_output = output - train_labels
    error_activation1 = np.matmul(error_output,layer2_weights.T)
    error_activation1 = np.multiply(error_activation1,activation1)
    error_activation1 = np.multiply(error_activation1,1-activation1)
    #gradients (key names assumed; L2 term uses the beta defined above)
    derivatives['dW2'] = np.matmul(activation1.T,error_output)/m + beta*layer2_weights
    derivatives['dB2'] = np.sum(error_output,axis=0,keepdims=True).T/m
    derivatives['dW1'] = np.matmul(train_dataset.T,error_activation1)/m + beta*layer1_weights
    derivatives['dB1'] = np.sum(error_activation1,axis=0,keepdims=True).T/m
    return derivatives
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 6 of 9
a-simple-neural-network-from-scratch - Jupyter Notebook 24/11/2020, 13:10
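The parameter update (step 3) and the loss/accuracy calculation are sketched below, assuming plain gradient descent with the learning_rate and beta defined above, and cross-entropy loss with an L2 penalty; the names update_parameters, cal_loss_accuracy, and the dW/dB keys are assumptions:

#update parameters (vanilla gradient descent; function name assumed)
def update_parameters(derivatives,parameters):
    parameters['layer1_weights'] -= learning_rate*derivatives['dW1']
    parameters['layer1_biases'] -= learning_rate*derivatives['dB1']
    parameters['layer2_weights'] -= learning_rate*derivatives['dW2']
    parameters['layer2_biases'] -= learning_rate*derivatives['dB2']
    return parameters

#loss and accuracy (cross-entropy + L2 penalty; a common choice, assumed here)
def cal_loss_accuracy(train_labels,output,parameters):
    m = len(train_labels)
    loss = -np.sum(train_labels*np.log(output+1e-8))/m   #small epsilon avoids log(0)
    loss += (beta/2)*(np.sum(parameters['layer1_weights']**2)+np.sum(parameters['layer2_weights']**2))
    accuracy = np.mean(np.argmax(output,axis=1)==np.argmax(train_labels,axis=1))
    return loss,accuracy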
Train Function
1. Initialize Parameters
2. Forward Propagation
3. Backward Propagation
4. Calculate Loss and Accuracy
5. Update the parameters
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 7 of 9
a-simple-neural-network-from-scratch - Jupyter Notebook 24/11/2020, 13:10
#training function
def train(train_dataset,train_labels,iters=2):
    #To store loss after every iteration.
    J = []
    #1. initialize parameters
    parameters = initialize_parameters()
    #WEIGHTS
    global layer1_weights
    global layer1_biases
    global layer2_weights
    global layer2_biases
    layer1_weights = parameters['layer1_weights']
    layer1_biases = parameters['layer1_biases']
    layer2_weights = parameters['layer2_weights']
    layer2_biases = parameters['layer2_biases']
    for j in range(iters):
        #forward propagation
        cache,output = forward_propagation(train_dataset,parameters)
        #backward propagation
        derivatives = backward_propagation(train_dataset,train_labels,parameters,cache,output)
        #calculate loss and accuracy
        loss,accuracy = cal_loss_accuracy(train_labels,output,parameters)
        #update the parameters
        parameters = update_parameters(derivatives,parameters)
        #append loss
        J.append(loss)
    final_output = output
    return J,final_output
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 8 of 9
a-simple-neural-network-from-scratch - Jupyter Notebook 24/11/2020, 13:10
#one-hot encoding (the train_dataset/train_labels setup lines are assumptions)
train_dataset = np.array(data,dtype=float)
train_labels = np.zeros((len(target),output_units))
for i,label in enumerate(target):
    train_labels[i,label] = 1
#normalizations (standardize each feature column)
for i in range(input_units):
    mean = train_dataset[:,i].mean()
    std = train_dataset[:,i].std()
    train_dataset[:,i] = (train_dataset[:,i]-mean)/std
In [ ]: #train data
J,final_output = train(train_dataset,train_labels,iters=4001)
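A typical follow-up is to plot the stored losses (a sketch, assuming matplotlib):

In [ ]: #plot the loss curve
import matplotlib.pyplot as plt
plt.plot(J)
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()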
http://localhost:8889/notebooks/notebooks/ml-learn/nn/a-simple-neural-network-from-scratch.ipynb Page 9 of 9