Exp - No 2
1. Layer Sizes:
o The network can have multiple hidden layers, each with a configurable number of
neurons. The hidden_sizes list passed to the constructor specifies the number of
neurons in each hidden layer.
2. Activation Functions:
o The network supports both sigmoid and ReLU activation functions.
o The sigmoid function is commonly used for binary classification, while ReLU is often
used in deep networks because it mitigates the vanishing-gradient problem.
3. Feedforward Propagation:
o The forward method propagates the input data through the network layer by layer,
applying the chosen activation function after each layer's linear transformation.
4. Backpropagation:
o The backward method computes the gradients of the weights and biases by
propagating the error backward from the output layer to the input layer. These
gradients are then used to update the weights and biases via gradient descent (the
update equations are sketched after this list).
5. Loss Function:
o The Mean Squared Error (MSE) loss, MSE = (1/n) Σ (y_pred − y_true)², measures the
difference between the predicted and actual values.
6. Training:
o The train method performs forward and backward passes for each epoch,
updating weights and biases to minimize the loss.
7. Prediction:
o After training, the predict method is used to make predictions on new data.
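For reference, the forward pass (step 3) and the backpropagation updates (step 4) can be
written compactly as below. This is the standard derivation for an MLP trained with MSE
under gradient descent, not notation taken from the lab sheet itself; f is the chosen
activation, η the learning rate, n the number of samples, L the output layer, and ⊙ the
elementwise product:

\[
z^{(l)} = a^{(l-1)} W^{(l)} + b^{(l)}, \qquad a^{(l)} = f\bigl(z^{(l)}\bigr), \qquad a^{(0)} = x
\]
\[
\delta^{(L)} = \tfrac{2}{n}\,\bigl(a^{(L)} - y\bigr) \odot f'\bigl(z^{(L)}\bigr), \qquad
\delta^{(l)} = \bigl(\delta^{(l+1)} W^{(l+1)\top}\bigr) \odot f'\bigl(z^{(l)}\bigr)
\]
\[
W^{(l)} \leftarrow W^{(l)} - \eta\, a^{(l-1)\top} \delta^{(l)}, \qquad
b^{(l)} \leftarrow b^{(l)} - \eta \sum_i \delta^{(l)}_{i}
\]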
In this example, we implement the MLP as described above (see the sketch under Program below).
Program:
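The lab sheet leaves this section blank, so the following is a minimal NumPy sketch
consistent with the description above, not the original author's code. Only hidden_sizes,
the forward/backward/train/predict methods, and the sigmoid/ReLU/MSE choices come from
the text; the class name MLP, the learning_rate and seed parameters, and the XOR demo at
the bottom are illustrative assumptions.

import numpy as np


class MLP:
    """Multilayer perceptron with configurable hidden layers (illustrative sketch)."""

    def __init__(self, input_size, hidden_sizes, output_size,
                 activation="sigmoid", learning_rate=0.1, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [input_size] + list(hidden_sizes) + [output_size]
        # One weight matrix and bias row per layer; small random init, zero biases.
        self.weights = [rng.normal(0.0, 0.5, (m, n))
                        for m, n in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros((1, n)) for n in sizes[1:]]
        self.activation = activation
        self.lr = learning_rate

    # Activation function and its derivative w.r.t. the pre-activation z.
    def _act(self, z):
        if self.activation == "relu":
            return np.maximum(0.0, z)
        return 1.0 / (1.0 + np.exp(-z))  # sigmoid

    def _act_deriv(self, z):
        if self.activation == "relu":
            return (z > 0).astype(z.dtype)
        s = 1.0 / (1.0 + np.exp(-z))
        return s * (1.0 - s)

    def forward(self, X):
        """Propagate X layer by layer, caching z and activations for backprop."""
        self.zs, self.acts = [], [X]
        a = X
        for W, b in zip(self.weights, self.biases):
            z = a @ W + b
            a = self._act(z)
            self.zs.append(z)
            self.acts.append(a)
        return a

    def backward(self, y):
        """Backpropagate the MSE error; update weights and biases by gradient descent."""
        n = y.shape[0]
        # Output-layer delta: dL/dz = (2/n) * (a - y) * f'(z) for L = mean((a - y)^2).
        delta = (2.0 / n) * (self.acts[-1] - y) * self._act_deriv(self.zs[-1])
        for l in reversed(range(len(self.weights))):
            grad_W = self.acts[l].T @ delta
            grad_b = delta.sum(axis=0, keepdims=True)
            if l > 0:
                # Propagate the error to the previous layer before updating W.
                delta = (delta @ self.weights[l].T) * self._act_deriv(self.zs[l - 1])
            self.weights[l] -= self.lr * grad_W
            self.biases[l] -= self.lr * grad_b

    @staticmethod
    def mse(y_pred, y_true):
        return float(np.mean((y_pred - y_true) ** 2))

    def train(self, X, y, epochs=5000, print_every=1000):
        for epoch in range(1, epochs + 1):
            y_pred = self.forward(X)
            self.backward(y)
            if epoch % print_every == 0:
                print(f"epoch {epoch:5d}  MSE loss {self.mse(y_pred, y):.4f}")

    def predict(self, X):
        return self.forward(X)


if __name__ == "__main__":
    # Toy XOR problem: 2 inputs, two hidden layers of 4 neurons each, 1 output.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    net = MLP(input_size=2, hidden_sizes=[4, 4], output_size=1,
              activation="sigmoid", learning_rate=0.5)
    net.train(X, y, epochs=5000)
    print("predictions:", net.predict(X).ravel().round(3))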
Output:
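If the sketch above is run as-is, the printed MSE loss should decrease over the epochs and
the final predictions should approach the XOR targets [0, 1, 1, 0]; exact values depend on
the random weight initialization, so no specific numbers are reproduced here.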
Conclusion: