Data Mining

Neural Network Architecture and Data Preparation

The neural network implemented in this code has a 3-4-1 architecture, meaning it has 3 input nodes, 4 hidden nodes, and 1 output
node. This structure is suitable for binary classification tasks with 3-dimensional input data.

The training data (X_train) consists of 8 samples, each with 3 binary features. The corresponding labels (y_train) are binary values. A
single test sample (X_test) is provided to evaluate the model's performance after training.
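The exact arrays are not reproduced in this document, but data with the shapes described above might look like the following hypothetical sketch (the specific values are illustrative, not the original code's):

```python
import numpy as np

# Hypothetical data matching the described shapes:
# 8 samples, 3 binary features each, with binary labels.
X_train = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [0, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
    [1, 0, 1],
    [1, 1, 0],
    [1, 1, 1],
])
y_train = np.array([[0], [1], [1], [0], [1], [0], [0], [1]])  # shape (8, 1)

X_test = np.array([[1, 0, 1]])  # a single 3-feature test sample
```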

Network Parameters

• Input nodes: 3
• Hidden nodes: 4
• Output nodes: 1
• Learning rate: 0.1
• Momentum: 0.01 (unused in the current implementation)

Activation Function

The network uses the sigmoid activation function and its derivative for both the hidden and output layers. The sigmoid function maps input values to the range (0, 1), making it suitable for binary classification tasks.

Weight Initialization

Weights are initialized randomly using NumPy's random number generator with a fixed seed (42) for reproducibility. This ensures consistent results across multiple runs of the code.
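The activation function and seeded weight initialization described above can be sketched as follows (the weight-matrix names `W1` and `W2` are assumptions for illustration; the original code's identifiers are not shown here):

```python
import numpy as np

def sigmoid(x):
    # Maps any real input into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(s):
    # Derivative written in terms of the sigmoid output s = sigmoid(x):
    # d/dx sigmoid(x) = s * (1 - s)
    return s * (1.0 - s)

np.random.seed(42)  # fixed seed for reproducible weights

# 3-4-1 architecture: one weight matrix per layer transition
W1 = np.random.rand(3, 4)  # input (3)  -> hidden (4)
W2 = np.random.rand(4, 1)  # hidden (4) -> output (1)
```

Expressing the derivative in terms of the sigmoid's output avoids recomputing the exponential during backpropagation.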
Training Process and Model Evaluation
The neural network is trained using a simple implementation of the backpropagation algorithm. The training process involves forward propagation to compute
predictions, followed by backpropagation to update the weights based on the computed error.

1 Forward Propagation
Input data is propagated through the network, applying weights and activation functions at each layer to compute the final output.

2 Error Calculation
The difference between predicted and actual outputs is calculated to determine the model's error.

3 Backpropagation
The error is propagated backwards through the network, adjusting weights to minimize the loss function.

4 Weight Update
Weights are updated using the calculated gradients and the specified learning rate.
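Under the assumptions already stated (sigmoid activations, learning rate 0.1, seed 42), the four steps above can be sketched as a minimal NumPy training loop. The data, epoch count, and variable names here are illustrative, not taken from the original code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

np.random.seed(42)
W1 = np.random.rand(3, 4)  # input -> hidden weights
W2 = np.random.rand(4, 1)  # hidden -> output weights
lr = 0.1                   # learning rate

# Illustrative binary training data (4 samples, 3 features)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

for epoch in range(10000):
    # 1. Forward propagation: weights + activation at each layer
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # 2. Error calculation: difference between actual and predicted
    error = y - output

    # 3. Backpropagation: push the error back through the
    #    sigmoid derivatives at each layer
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # 4. Weight update: gradient step scaled by the learning rate
    W2 += lr * hidden.T @ d_output
    W1 += lr * X.T @ d_hidden

# Model evaluation: forward pass on an unseen sample
X_test = np.array([[1, 0, 0]])
prediction = sigmoid(sigmoid(X_test @ W1) @ W2)
```

Note that this sketch omits bias terms and the momentum term, consistent with the document's remark that momentum is unused in the current implementation.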

