
Name : Kola Vinay Kumar
H.T. No : 2403B05107
Course : M.Tech CSE
Subject : Deep Learning Techniques

Multilayer ANN for regression

QUESTION 1 :

Do calculations to update the weights and bias parameters of the ANN model shown in
Figure 1 for one iteration. Training data samples are shown in Table 1. Random weights and
bias parameters are shown in Figure 1. Assume the learning rate η = 0.1.

ANSWER :

To update the weights and biases of the given artificial neural network (ANN) for one iteration
using backpropagation, we need to follow these steps:
Step 1: Forward Propagation

1. Calculate the activations of the hidden-layer neurons.

2. Calculate the output-neuron activation. Since the output layer uses a linear activation function, the predicted output is simply the weighted sum of the hidden activations plus the output bias.
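The equations for these two steps appeared as images in the original; reconstructing them from the program given below, with σ denoting the sigmoid used in the hidden layer:

z1 = W11·x1 + W21·x2 + b1,   a1 = σ(z1) = 1 / (1 + e^(−z1))
z2 = W12·x1 + W22·x2 + b2,   a2 = σ(z2) = 1 / (1 + e^(−z2))
Yp = zo = W01·a1 + W02·a2 + b0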

Step 2: Compute Error
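The error function, matching error = 0.5 * (y_true - Y_p) ** 2 in the program, is the squared-error loss:

E = (1/2)·(Y − Yp)²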

where Y=0.01 (given in the table).

Step 3: Backpropagation (Gradient Calculation)
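For the output layer weights and bias (shown as an image in the original; reconstructed from the program), the linear output gives the error signal δo = (Yp − Y), so:

∂E/∂W01 = δo·a1,   ∂E/∂W02 = δo·a2,   ∂E/∂b0 = δo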


For the hidden layer weights and biases:
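Again reconstructing from the program, and using σ′(z) = a·(1 − a):

δ1 = δo·W01·a1·(1 − a1),   δ2 = δo·W02·a2·(1 − a2)

∂E/∂W11 = δ1·x1,   ∂E/∂W21 = δ1·x2,   ∂E/∂b1 = δ1
∂E/∂W12 = δ2·x1,   ∂E/∂W22 = δ2·x2,   ∂E/∂b2 = δ2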

Step 4: Weight and Bias Updates

Using gradient descent:
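The update equation (an image in the original) is the standard gradient-descent rule, applied to every weight W and bias b:

W_new = W − η·∂E/∂W,   b_new = b − η·∂E/∂b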

where η=0.1 (learning rate).

Now compute the numerical values for these updates.
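Substituting the given values (x1 = 0.05, x2 = 0.1, Y = 0.01) and the initial parameters gives the following intermediate values, rounded to four decimals (obtained by evaluating the program below; only the final results appeared in the original):

z1 = 0.15(0.05) + 0.20(0.1) + 0.2 = 0.2275  →  a1 ≈ 0.5566
z2 = 0.25(0.05) + 0.30(0.1) + 0.4 = 0.4425  →  a2 ≈ 0.6089
Yp ≈ 0.4(0.5566) + 0.45(0.6089) + 0.6 ≈ 1.0966
E ≈ 0.5(0.01 − 1.0966)² ≈ 0.5904
δo ≈ 1.0866,   δ1 ≈ 0.1073,   δ2 ≈ 0.1165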

Here are the updated weights and biases after one iteration using backpropagation:

• Updated Weights:

o W11 = 0.1495

o W12 = 0.2494

o W21 = 0.1989

o W22 = 0.2988
o W01 = 0.3395

o W02 = 0.3838

• Updated Biases:

o b1 = 0.1893

o b2 = 0.3884

o b0 = 0.4913

• Error for this iteration (computed in the forward pass, before the update): 0.5904

PROGRAM :

import numpy as np

# Given data
x1, x2, y_true = 0.05, 0.1, 0.01  # Inputs and target output
eta = 0.1  # Learning rate

# Given initial weights and biases
W11, W12, W21, W22 = 0.15, 0.25, 0.20, 0.3  # Input-to-hidden weights
W01, W02 = 0.4, 0.45  # Hidden-to-output weights
b1, b2, b0 = 0.2, 0.4, 0.6  # Hidden and output biases

# Sigmoid activation function and its derivative
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_derivative(a):
    # Takes the activation a = sigmoid(z), not z itself
    return a * (1 - a)

# Forward pass - Hidden layer
z1 = (x1 * W11) + (x2 * W21) + b1
z2 = (x1 * W12) + (x2 * W22) + b2
a1 = sigmoid(z1)
a2 = sigmoid(z2)

# Forward pass - Output layer (linear activation)
z_o = (a1 * W01) + (a2 * W02) + b0
Y_p = z_o  # Since it's linear

# Compute the squared-error loss for this iteration (before the update)
error = 0.5 * (y_true - Y_p) ** 2

# Backpropagation - Output layer
delta_o = Y_p - y_true  # Error signal; the derivative is 1 for linear activation

# Gradients for output layer
dE_dW01 = delta_o * a1
dE_dW02 = delta_o * a2
dE_db0 = delta_o

# Backpropagation - Hidden layer
delta_1 = delta_o * W01 * sigmoid_derivative(a1)
delta_2 = delta_o * W02 * sigmoid_derivative(a2)

# Gradients for hidden layer
dE_dW11 = delta_1 * x1
dE_dW12 = delta_2 * x1
dE_dW21 = delta_1 * x2
dE_dW22 = delta_2 * x2
dE_db1 = delta_1
dE_db2 = delta_2

# Update weights and biases using gradient descent
W11_new = W11 - eta * dE_dW11
W12_new = W12 - eta * dE_dW12
W21_new = W21 - eta * dE_dW21
W22_new = W22 - eta * dE_dW22
W01_new = W01 - eta * dE_dW01
W02_new = W02 - eta * dE_dW02
b1_new = b1 - eta * dE_db1
b2_new = b2 - eta * dE_db2
b0_new = b0 - eta * dE_db0

# Print results
print("Updated Weights and Biases:")
print(f"W11: {W11_new:.4f}, W12: {W12_new:.4f}, W21: {W21_new:.4f}, W22: {W22_new:.4f}")
print(f"W01: {W01_new:.4f}, W02: {W02_new:.4f}")
print(f"b1: {b1_new:.4f}, b2: {b2_new:.4f}, b0: {b0_new:.4f}")
print(f"Error before update: {error:.4f}")

OUTPUT :

Updated Weights and Biases:

W11: 0.1495, W12: 0.2494, W21: 0.1989, W22: 0.2988

W01: 0.3395, W02: 0.3838

b1: 0.1893, b2: 0.3884, b0: 0.4913

Error before update: 0.5904
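As a quick sanity check (not part of the original output), re-running the forward pass with the updated parameters should give a lower error than the 0.5904 computed with the initial weights, confirming that the gradient step reduced the loss. A minimal sketch, reusing the variables defined in the program above:

# Verification sketch: forward pass with the updated parameters
z1_v = (x1 * W11_new) + (x2 * W21_new) + b1_new
z2_v = (x1 * W12_new) + (x2 * W22_new) + b2_new
a1_v = sigmoid(z1_v)
a2_v = sigmoid(z2_v)
Y_p_v = (a1_v * W01_new) + (a2_v * W02_new) + b0_new  # Linear output
error_v = 0.5 * (y_true - Y_p_v) ** 2
print(f"Error after update: {error_v:.4f}")  # ≈ 0.4068, down from 0.5904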
