Ex. No: 2 Simulate the ANN Using the Back-Propagation Algorithm Date
OBJECTIVE:
To simulate an Artificial Neural Network (ANN) using the back-propagation algorithm in
MATLAB.
AIM:
To understand neural network fundamentals, training procedures, and weight adjustment
techniques by applying forward propagation, back-propagation, and weight updates to a given
dataset.
SOFTWARE REQUIRED:
MATLAB R2022a / OpenCV / Google Colab
PROCEDURE FOR MATLAB:
1. Click on the MATLAB icon on the desktop.
2. Click on the 'FILE' menu on the menu bar.
3. Click on NEW M-File from the File menu.
4. Save the file in the working directory.
5. Click on DEBUG from the menu bar and click Run.
6. Open the Command Window / Figure Window for the output.
THEORY:
An Artificial Neural Network (ANN) is a computational model inspired by the structure
and function of biological neural networks. It consists of interconnected layers of nodes (neurons),
including an input layer, one or more hidden layers, and an output layer. Each connection has an
associated weight, which is adjusted during training to minimize the error between the predicted
and actual outputs.
1. Structure of an ANN:
• Input Layer: Receives the input features of the dataset.
• Hidden Layers: Intermediate layers that perform non-linear transformations of the input
features.
• Output Layer: Produces the final output predictions.
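The three-layer structure above can be traced as a single forward pass. A minimal sketch with arbitrary layer sizes and random weights (the variable names here are illustrative and separate from the program below):

```matlab
x  = [0 1];                                 % one input sample with 2 features
W1 = rand(2, 4) - 0.5;  b1 = rand(1, 4) - 0.5;   % input -> hidden
W2 = rand(4, 1) - 0.5;  b2 = rand(1, 1) - 0.5;   % hidden -> output
sigmoid = @(z) 1 ./ (1 + exp(-z));

h = sigmoid(x * W1 + b1);   % hidden-layer activations
y = sigmoid(h * W2 + b2);   % output-layer prediction, in (0, 1)
```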
PROGRAM:
X = [0 0; 0 1; 1 0; 1 1];        % XOR inputs
T = [0; 1; 1; 0];                % XOR targets

input_neurons  = 2;
hidden_neurons = 4;
output_neurons = 1;
alpha  = 0.1;                    % learning rate
epochs = 10000;
rng('shuffle');

% Random initialization of weights and biases in [-0.5, 0.5]
W1 = rand(input_neurons, hidden_neurons) - 0.5;
b1 = rand(1, hidden_neurons) - 0.5;
W2 = rand(hidden_neurons, output_neurons) - 0.5;
b2 = rand(1, output_neurons) - 0.5;

sigmoid  = @(x) 1 ./ (1 + exp(-x));
dsigmoid = @(y) y .* (1 - y);    % derivative w.r.t. output y

mse_values = [];
for epoch = 1:epochs
    % Forward propagation
    H_input  = X * W1 + b1;
    H_output = sigmoid(H_input);          % hidden layer output
    O_input  = H_output * W2 + b2;        % output layer input
    O_output = sigmoid(O_input);          % network output

    % Back-propagation of the error
    error = T - O_output;
    dO = error .* dsigmoid(O_output);     % output-layer error term
    dH = dO * W2' .* dsigmoid(H_output);  % hidden-layer error term

    % Weight and bias updates (gradient descent)
    W2 = W2 + alpha * H_output' * dO;
    b2 = b2 + alpha * sum(dO);
    W1 = W1 + alpha * X' * dH;
    b1 = b1 + alpha * sum(dH);

    if mod(epoch, 1000) == 0
        mse = mean(error.^2);
        mse_values = [mse_values; mse];   % store MSE for plotting
        fprintf('Epoch %d, MSE: %.5f\n', epoch, mse);
    end
end

fprintf('\nTrained ANN Output:\n');
disp(round(O_output))
fprintf('\nExpected Output:\n');
disp(T)

figure;
plot(1000:1000:epochs, mse_values, '-o');
xlabel('Epochs');
ylabel('Mean Squared Error (MSE)');
title('Learning Curve');
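After training, the learned weights act as a stand-alone XOR predictor. A minimal sketch, assuming the variables W1, b1, W2, b2, and sigmoid from the program above are still in the workspace (the helper name `predict` is illustrative):

```matlab
% Hypothetical helper built from the trained parameters
predict = @(x) round(sigmoid(sigmoid(x * W1 + b1) * W2 + b2));
disp(predict([1 0]))   % should approximate XOR(1,0) = 1 after successful training
```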
2. Activation Function:
• Common activation functions include the sigmoid, tanh, and ReLU. For the
back-propagation algorithm, the sigmoid function is often used:
σ(x) = 1 / (1 + e^(-x)), with derivative σ'(x) = σ(x)·(1 - σ(x)).
3. Forward Propagation:
• The input data is passed through the network, layer by layer, to compute the output
predictions. The activations are calculated as follows:
H = σ(X·W1 + b1) for the hidden layer, and O = σ(H·W2 + b2) for the output layer.
4. Error Calculation:
• The difference between the predicted output and the actual output is measured using a loss
function, e.g. the Mean Squared Error:
MSE = (1/N) · Σ (t - o)², where t is the target and o is the network output.
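As a quick numeric illustration of this loss (the prediction values are arbitrary), the MSE can be computed directly:

```matlab
T_ex = [0; 1; 1; 0];               % targets
O_ex = [0.1; 0.9; 0.8; 0.2];       % hypothetical predictions
mse_ex = mean((T_ex - O_ex).^2);   % (0.01 + 0.01 + 0.04 + 0.04) / 4 = 0.025
```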
5. Back-Propagation Algorithm:
• The output error is propagated backwards through the network to compute an error term
for every unit:
δo = (t - o)·σ'(o) for the output layer, and δh = (δo·W2ᵀ)·σ'(h) for the hidden layer,
where δ is the error term for each unit and η is the learning rate.
6. Weight Update:
• The weights are updated using gradient descent to minimize the error:
W2 ← W2 + η·Hᵀ·δo and W1 ← W1 + η·Xᵀ·δh (and similarly for the biases).
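The update rule above can be traced by hand for a single scalar weight. A minimal sketch with arbitrary values (not taken from the program):

```matlab
eta   = 0.1;    % learning rate (η)
w     = 0.5;    % current weight
a     = 0.8;    % activation feeding this weight
delta = 0.05;   % error term at the receiving unit
w_new = w + eta * a * delta;   % 0.5 + 0.1*0.8*0.05 = 0.504
```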
PRELAB QUESTIONS:
1. What are the main components of an Artificial Neural Network (ANN)?
2. What is the role of the activation function in an ANN?
3. How is the error calculated in an ANN during training?
4. What is the purpose of the back-propagation algorithm in training an ANN?
5. Why is the sigmoid function commonly used as an activation function in ANNs?
6. How does the learning rate (α) affect the training of an ANN?
7. What are the potential issues of overfitting and underfitting in neural networks?
8. How can the performance of a trained ANN be evaluated?
POSTLAB QUESTIONS:
1. What was the impact of different learning rates on the training process and final
performance of the ANN?
2. Were there any challenges in implementing the back-propagation algorithm in MATLAB?
How were they addressed?
3. How did the choice of activation function influence the training and performance of the
ANN?
4. How did the hidden layer size affect the network's ability to learn and generalize from the
dataset?
5. Based on your results, what modifications or improvements would you recommend for the
ANN architecture or training process?
RESULT:
CORE COMPETENCY:
MARKS ALLOCATION:
Preparation 20
Conducting 20
Calculation / Graphs 15
Results 10
Viva 10
Record 10
Total 100
Signature of faculty
FLOW CHART: