Lab 4
import numpy as np
x = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)  # inputs: one row per training example
y = np.array(([92], [86], [89]), dtype=float)        # target outputs
x = x / np.amax(x, axis=0)  # scale each input feature to [0, 1]
y = y / 100                 # scale targets to [0, 1]
# Sigmoid Function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
# Derivative of sigmoid (x here is already a sigmoid output)
def derivatives_sigmoid(x):
    return x * (1 - x)
# Variable initialization
epoch = 5
lr = 0.1
inputlayer_neurons = 2
hiddenlayer_neurons = 3
output_neurons = 1
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward Propagation
    hinp1 = np.dot(x, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp = np.dot(hlayer_act, wout) + bout
    output = sigmoid(outinp)
    # Backpropagation
    EO = y - output                # error at the output layer
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)      # error propagated back to the hidden layer
    hiddengrad = derivatives_sigmoid(hlayer_act)
    d_hiddenlayer = EH * hiddengrad
    wout += hlayer_act.T.dot(d_output) * lr
    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += x.T.dot(d_hiddenlayer) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr

print("Input: \n", x)
print("Actual Output: \n", y)
print("Predicted Output: \n", output)
Problem Description
The model is a simple feedforward neural network, trained to map inputs to outputs using the data provided in the variables x (inputs) and y (target outputs).
1. Input Data (x): The dataset x contains three data points, each with two features (2D data). Example: [[2, 9], [1, 5], [3, 6]]. This is a matrix in which each row is a training example and each column is a feature.
2. Output Data (y): The dataset y contains the target output for each row in x. Example: [[92], [86], [89]]. This is a column vector in which each value is the target output for the corresponding input row in x.
3. Normalization:
o The output y is normalized by dividing each value by 100, bringing the targets into the range 0 to 1; for example, [[92], [86], [89]] becomes [[0.92], [0.86], [0.89]].
o The inputs x are likewise scaled by each feature's column-wise maximum (np.amax(x, axis=0)) so that every feature also lies between 0 and 1.
4. Model Structure:
o Input Layer: There are 2 neurons in the input layer, one per feature (inputlayer_neurons = 2).
o Hidden Layer: There are 3 neurons in the hidden layer (hiddenlayer_neurons = 3).
o Output Layer: The output layer has only 1 neuron (output_neurons = 1).
5. Activation Function: The model uses the sigmoid activation function, which squashes each neuron's weighted input into the range between 0 and 1 (a short numeric check of this function follows this list).
6. Backpropagation:
o The model performs forward propagation to compute the output of the network.
o It then computes the error by comparing the predicted output with the actual output (y).
o The model uses backpropagation to adjust the weights and biases, computing gradients from the error so that each update reduces it (the update equations are written out after this list).
7. Learning Rate: The learning rate (lr = 0.1) controls how much the weights and biases are adjusted in each iteration; for example, a gradient component of 0.5 moves the corresponding weight by 0.1 × 0.5 = 0.05.
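As noted in point 5, the sigmoid squashes any real input into the interval (0, 1). Below is a minimal stand-alone check of the function and its derivative, assuming only NumPy; the test value 0.0 is chosen purely for illustration:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# At x = 0 the sigmoid equals 0.5, and its slope sigmoid(x) * (1 - sigmoid(x))
# reaches its maximum value of 0.25.
a = sigmoid(0.0)
print(a)            # 0.5
print(a * (1 - a))  # 0.25, exactly what derivatives_sigmoid(a) returns in the program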
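For reference, the updates the training loop performs in point 6 can be written out explicitly. The symbols \( \hat{y} \) (network output), \( h \) (hidden activations), and \( \eta \) (learning rate) are notation introduced here, not names from the listing:

\[
\begin{aligned}
\delta_o &= (y - \hat{y})\,\hat{y}\,(1 - \hat{y}) && \text{(d\_output)} \\
\delta_h &= \bigl(\delta_o W_{\text{out}}^{\top}\bigr)\, h\,(1 - h) && \text{(d\_hiddenlayer)} \\
W_{\text{out}} &\leftarrow W_{\text{out}} + \eta\, h^{\top} \delta_o \\
W_h &\leftarrow W_h + \eta\, x^{\top} \delta_h
\end{aligned}
\]

The first two lines are the chain rule through the sigmoid; the last two correspond to the wout += and wh += lines in the program (the biases are updated analogously with the column sums of the deltas).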
Objective
The goal is to train the neural network to make accurate predictions for the given input data. By adjusting the weights and biases over multiple epochs (iterations), the network should improve its predictions of the target output (y).
The final result of the code shows how well the trained network can predict the output for the given input data after 5 epochs.
Flow of Execution
1. Initialization: The weights (wh, wout) and biases (bh, bout) are randomly initialized.
2. Training Loop: The model runs for 5 iterations (epochs); in each epoch:
o Forward propagation computes the network's predicted output.
o The error between the predicted output and the actual output is computed.
o Backpropagation updates the weights and biases using that error.
3. Final Output: After training, the predicted output is displayed to compare with the actual
target output.
Once the model runs, you can check how close the predicted values are to the actual target values in y. The end result is a trained network that has learned the relationship between the input features and the target output.
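As a quick optional check (not part of the original listing), the mean squared error condenses that comparison into a single number; this sketch assumes the arrays y and output from the program above are still in scope:

mse = np.mean((y - output) ** 2)  # average squared gap between targets and predictions
print("Mean squared error after training:", mse)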