
Experiment No-2

Aim: Implement a Multi-Layer Perceptron (MLP).

Theory: To implement a Multi-Layer Perceptron (MLP) neural network from scratch in Python, we'll extend the previous example to support multiple hidden layers. The structure will be as follows:

1. Input Layer: Takes in the input features.
2. Hidden Layers: Multiple hidden layers with activation functions (e.g., ReLU or Sigmoid).
3. Output Layer: The final layer that produces the output.

The primary components of an MLP include:

• Feedforward Propagation: Data is passed through the network, layer by layer, to produce an output.
• Backpropagation: The error is propagated backward through the network to adjust the weights, using the gradient of the error with respect to the weights (via the chain rule).
• Gradient Descent: Weights are updated based on the gradients computed from backpropagation to minimize the error.
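
To make these three steps concrete, here is a minimal sketch of one training step for a single sigmoid layer. The toy data, variable names, and learning rate are illustrative assumptions, not part of the experiment:

    import numpy as np

    # Toy data (illustrative): 4 samples, 2 features, scalar targets.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 1))    # layer weights
    b = np.zeros((1, 1))           # layer bias
    lr = 0.1                       # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 1. Feedforward propagation: pass the inputs through the layer.
    a = sigmoid(X @ W + b)

    # 2. Backpropagation: gradient of the MSE loss w.r.t. W and b via the
    #    chain rule, using sigmoid'(z) = a * (1 - a).
    delta = 2 * (a - y) / len(X) * a * (1 - a)
    dW = X.T @ delta
    db = delta.sum(axis=0, keepdims=True)

    # 3. Gradient descent: step the parameters against the gradient.
    W -= lr * dW
    b -= lr * db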

Key Components of the MLP:

1. Layer Sizes:
o The network can have multiple hidden layers. Each hidden layer has a configurable
number of neurons. The hidden_sizes list in the constructor specifies the number
of neurons in each hidden layer.
2. Activation Functions:
o We support both Sigmoid and ReLU activation functions (see the helper sketch after this list).
o The sigmoid function is commonly used for binary classification, while ReLU is often preferred in deep networks because it mitigates the vanishing-gradient problem.
3. Feedforward Propagation:
o The forward method propagates the input data through the network, layer by
layer, and applies the activation function at each layer.
4. Backpropagation:
o The backward method computes the gradients of the weights and biases by
propagating the error backward from the output layer to the input layer. These
gradients are then used to update the weights using gradient descent.
5. Loss Function:
o The Mean Squared Error (MSE) loss function measures the average squared difference between the predicted and actual values (also included in the helper sketch after this list).
6. Training:
o The train method performs forward and backward passes for each epoch,
updating weights and biases to minimize the loss.
7. Prediction:
o After training, the predict method is used to make predictions on new data.
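
As referenced in items 2 and 5 above, the activation functions, their derivatives, and the MSE loss can be written as small helpers. The function names below are our own choice:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_deriv(a):
        # Derivative of sigmoid, written in terms of its output a = sigmoid(z).
        return a * (1 - a)

    def relu(z):
        return np.maximum(0.0, z)

    def relu_deriv(z):
        # Subgradient of ReLU: 1 where z > 0, otherwise 0.
        return (z > 0).astype(float)

    def mse(y_pred, y_true):
        # Mean Squared Error: average squared difference between
        # predictions and targets.
        return np.mean((y_pred - y_true) ** 2)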
In this experiment, we implement the MLP using:

1. Multiple hidden layers.
2. Sigmoid or ReLU as activation functions.
3. Training using Mean Squared Error (MSE) loss and Gradient Descent.

Program:
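
One possible from-scratch implementation matching the description above is sketched below. The class and method names (MLP, forward, backward, train, predict) and the hidden_sizes constructor argument follow the theory section; the XOR data, random seed, and hyperparameters are our own illustrative choices, and an actual submission may differ:

    import numpy as np

    class MLP:
        def __init__(self, input_size, hidden_sizes, output_size,
                     activation="sigmoid", lr=0.1):
            # hidden_sizes is a list giving the number of neurons
            # in each hidden layer, e.g. [4, 4] for two hidden layers.
            sizes = [input_size] + hidden_sizes + [output_size]
            rng = np.random.default_rng(0)
            self.weights = [rng.normal(scale=0.5, size=(m, n))
                            for m, n in zip(sizes[:-1], sizes[1:])]
            self.biases = [np.zeros((1, n)) for n in sizes[1:]]
            self.activation = activation
            self.lr = lr

        def _act(self, z):
            if self.activation == "relu":
                return np.maximum(0.0, z)
            return 1.0 / (1.0 + np.exp(-z))

        def _act_deriv(self, a):
            # Derivative expressed in terms of the activation output a.
            if self.activation == "relu":
                return (a > 0).astype(a.dtype)
            return a * (1 - a)

        def forward(self, X):
            # Feedforward: propagate X through every layer, caching the
            # activations for use in the backward pass.
            self.activations = [X]
            a = X
            for W, b in zip(self.weights, self.biases):
                a = self._act(a @ W + b)
                self.activations.append(a)
            return a

        def backward(self, y):
            # Backpropagation: push the MSE error gradient from the output
            # layer back to the input layer, updating each layer's weights
            # and biases by gradient descent.
            n = len(y)
            a_out = self.activations[-1]
            delta = 2 * (a_out - y) / n * self._act_deriv(a_out)
            for i in reversed(range(len(self.weights))):
                a_prev = self.activations[i]
                dW = a_prev.T @ delta
                db = delta.sum(axis=0, keepdims=True)
                if i > 0:
                    # Propagate the error to the previous layer before
                    # this layer's weights are overwritten.
                    delta = (delta @ self.weights[i].T) * self._act_deriv(a_prev)
                self.weights[i] -= self.lr * dW
                self.biases[i] -= self.lr * db

        def train(self, X, y, epochs=10000):
            for epoch in range(epochs):
                out = self.forward(X)
                self.backward(y)
                if epoch % 1000 == 0:
                    loss = np.mean((out - y) ** 2)   # MSE loss
                    print(f"epoch {epoch}, loss {loss:.4f}")

        def predict(self, X):
            # After training, run a forward pass on new data.
            return self.forward(X)

    # Example: learn XOR with two hidden layers (illustrative data).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    net = MLP(input_size=2, hidden_sizes=[4, 4], output_size=1,
              activation="sigmoid", lr=0.5)
    net.train(X, y, epochs=10000)
    print(net.predict(X))

With these settings the network typically learns XOR; if the loss plateaus, try a different seed, learning rate, or number of epochs.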

Output:

Conclusion:
