Lab 4

import numpy as np

# Training data: three examples, each with two features
x = np.array([[2, 9], [1, 5], [3, 6]], dtype=float)
y = np.array([[92], [86], [89]], dtype=float)

# Normalize: divide each feature column of x by its maximum, and scale y into [0, 1]
x = x / np.amax(x, axis=0)
y = y / 100

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, written in terms of the already-activated value
def derivatives_sigmoid(x):
    return x * (1 - x)

# Variable initialization
epoch = 5                # Number of training iterations
lr = 0.1                 # Learning rate
inputlayer_neurons = 2   # Number of features in the dataset
hiddenlayer_neurons = 3  # Number of hidden-layer neurons
output_neurons = 1       # Number of neurons in the output layer

# Weight and bias initialization: uniform random values with the required dimensions
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward propagation
    hinp = np.dot(x, wh) + bh
    hlayer_act = sigmoid(hinp)
    outinp = np.dot(hlayer_act, wout) + bout
    output = sigmoid(outinp)

    # Backpropagation
    EO = y - output                               # Error at the output layer
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)                     # Error propagated back to the hidden layer
    hiddengrad = derivatives_sigmoid(hlayer_act)  # How much the hidden-layer outputs contributed to the error
    d_hiddenlayer = EH * hiddengrad

    # Weight and bias updates
    wout += hlayer_act.T.dot(d_output) * lr       # Dot product of next-layer error and current-layer output
    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += x.T.dot(d_hiddenlayer) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr

print("Input: \n", x)
print("Actual Output: \n", y)
print("Predicted Output: \n", output)


The code implements a basic feedforward neural network trained on a dataset with 2
features and 1 target. The model learns to predict a target output from the input
features through a training process that uses backpropagation and the sigmoid activation function.

Problem Description

The model is a simple feedforward neural network, which is being trained to map inputs to
outputs based on data provided in the variables x (inputs) and y (target outputs).

Let's break it down:

1. Input Data (x): The dataset x contains three data points, each with two features.
Example: [[2, 9], [1, 5], [3, 6]]. It is a matrix in which each row is a training
example and each column is a feature.

2. Output Data (y): The dataset y contains the target output for each row in x.
Example: [[92], [86], [89]]. It is a column vector with one target value per
training example.

3. Normalization:

o The input x is normalized by dividing each feature column by its column
maximum (np.amax(x, axis=0)).

o The output y is normalized by dividing each value by 100, scaling the targets
into the range 0 to 1.
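As a quick illustration of the input normalization (using the same x as in the code):

import numpy as np

x = np.array([[2, 9], [1, 5], [3, 6]], dtype=float)
print(np.amax(x, axis=0))      # [3. 9.] -- the per-column maxima
print(x / np.amax(x, axis=0))  # every feature value now lies in (0, 1]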

4. Model Structure:

o Input Layer: There are 2 input features (inputlayer_neurons = 2).

o Hidden Layer: There are 3 neurons in the hidden layer (hiddenlayer_neurons = 3).

o Output Layer: The output layer has only 1 neuron (output_neurons = 1).
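These layer sizes fix the shapes of every weight matrix and bias vector. The following sketch (using random placeholder data of the same shapes, not the lab's values) confirms that the forward-pass dimensions line up:

import numpy as np

wh = np.random.uniform(size=(2, 3))    # input -> hidden weights
bh = np.random.uniform(size=(1, 3))    # hidden-layer bias
wout = np.random.uniform(size=(3, 1))  # hidden -> output weights
bout = np.random.uniform(size=(1, 1))  # output-layer bias
x = np.random.uniform(size=(3, 2))     # 3 examples, 2 features

hidden = np.dot(x, wh) + bh            # (3, 2) . (2, 3) -> (3, 3)
out = np.dot(hidden, wout) + bout      # (3, 3) . (3, 1) -> (3, 1)
print(hidden.shape, out.shape)         # (3, 3) (3, 1)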

5. Activation Function: The model uses the sigmoid activation function, which squashes the
output of a neuron into a range between 0 and 1.
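One subtlety: derivatives_sigmoid(x) returns x * (1 - x), which equals the sigmoid's derivative only when x is an already-activated value, because sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)). A small numerical spot check of that identity:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = 0.7
s = sigmoid(z)
analytic = s * (1 - s)                                    # what derivatives_sigmoid(s) computes
numeric = (sigmoid(z + 1e-6) - sigmoid(z - 1e-6)) / 2e-6  # central-difference estimate
print(analytic, numeric)                                  # both are approximately 0.2217

This is why the training loop passes the activated values output and hlayer_act to derivatives_sigmoid rather than the pre-activation sums.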

6. Backpropagation:

o The model performs forward propagation to compute the output of the network.

o It then computes the error by comparing the predicted output with the actual
output (y).

o The model uses backpropagation to adjust the weights and biases by calculating
gradients based on the error. The weights and biases are updated to minimize the
error.
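To make the update rule concrete, here is one hand-computed output-layer step using the same expressions as the training loop (the numbers are illustrative, not taken from the lab run):

import numpy as np

y = np.array([[0.9]])                     # target for one example
output = np.array([[0.6]])                # current prediction
hlayer_act = np.array([[0.2, 0.5, 0.8]])  # hidden activations for that example
wout = np.zeros((3, 1))                   # start from zero so the update is easy to read
lr = 0.1

EO = y - output                           # error: 0.3
d_output = EO * output * (1 - output)     # 0.3 * 0.24 = 0.072
wout += hlayer_act.T.dot(d_output) * lr   # each weight moves by lr * activation * d_output
print(wout.ravel())                       # [0.00144 0.0036  0.00576]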

7. Learning Rate: The learning rate (lr = 0.1) controls how much the weights and biases are
adjusted in each iteration.

Objective

The model's goal is to train the neural network to make predictions based on the input data. By
adjusting the weights and biases over multiple epochs (iterations), the model should improve its
predictions of the target output (y).
The final result of the code shows how well the trained network predicts the output for
the given input data after 5 epochs. Note that 5 epochs at a learning rate of 0.1 is usually
far too few for the error to settle; raising epoch to a few thousand typically brings the
predictions much closer to y.
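A simple way to watch that improvement is to log the mean squared error once per epoch. This is a sketch rather than part of the lab code; it assumes x, y, lr, epoch, sigmoid, derivatives_sigmoid, and the weight/bias variables defined above are already in scope:

for i in range(epoch):
    hlayer_act = sigmoid(np.dot(x, wh) + bh)
    output = sigmoid(np.dot(hlayer_act, wout) + bout)
    print("epoch", i, "MSE:", np.mean((y - output) ** 2))
    d_output = (y - output) * derivatives_sigmoid(output)
    d_hiddenlayer = d_output.dot(wout.T) * derivatives_sigmoid(hlayer_act)
    wout += hlayer_act.T.dot(d_output) * lr
    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += x.T.dot(d_hiddenlayer) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr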

Flow of Execution

1. Initialization: The weights (wh, wout) and biases (bh, bout) are randomly initialized.

2. Training Loop: The model undergoes 5 iterations (epochs), where in each epoch:

o The forward propagation is performed to calculate the predicted output.

o Error between the predicted output and the actual output is computed.

o Backpropagation is used to update the weights and biases by calculating gradients.

3. Final Output: After training, the predicted output is displayed to compare with the actual
target output.

Once the model runs, compare the printed predicted values with the actual target values
in y: the closer they are, the better the network has learned the relationship between the
input features and the target output.
