Basic Artificial Neural Network
REPORT
ID: 22127385
Class: 22TGMT
TABLE OF CONTENTS
1. Experiment: Change the number of epochs (learning rate = 0.1)
• 100 Epochs: Significant improvement, rapid decrease in training loss.
• 500 Epochs: Continued improvement, slower rate of loss decrease.
• 1000 Epochs: Diminishing returns, training loss plateaus.
• 2000 Epochs: Potential overfitting, marginal decrease in training loss, risk
of poor generalization.
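The epoch comparison above can be reproduced with a minimal sketch. The report does not include its code, so the network below (a small 2-4-1 sigmoid network trained on XOR with full-batch gradient descent and mean-squared error) is an illustrative assumption, not the report's actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(epochs, lr=0.1, seed=0):
    """Train a tiny 2-4-1 sigmoid network on XOR; return final MSE loss."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass (MSE loss, computed before any weight update)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    # recompute the loss with the final weights
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return float(np.mean((out - y) ** 2))

# Compare the epoch counts used in the experiment
losses = {n: train(n) for n in (100, 500, 1000, 2000)}
```

Printing `losses` shows the pattern described above: the loss falls quickly at first, then the decrease per additional epoch shrinks.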
2. Experiment: Change the learning rate
1. Low Learning Rate (0.01):
- Slow convergence.
- Gradual decrease in training loss.
- Requires more epochs to reach optimal performance.
2. Moderate Learning Rate (0.05):
- Faster convergence compared to 0.01.
- Steady decrease in training loss.
- Balanced between speed and stability.
3. Optimal Learning Rate (0.1):
- Efficient convergence.
- Rapid decrease in training loss.
- Good balance between speed and accuracy.
4. High Learning Rate (0.3):
- Very fast initial convergence.
- Risk of overshooting the optimal point.
- Potential for oscillations in training loss.
5. Higher Learning Rate (0.5):
- Unstable training.
- Large oscillations in training loss.
- Difficulty in reaching optimal performance.
6. Very High Learning Rate (1.0):
- Highly unstable training.
- Training loss may diverge.
- Model fails to converge.
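The six regimes above can be demonstrated in closed form on a one-dimensional quadratic, where the stability threshold of gradient descent is known exactly. This is an illustrative sketch (the function f(w) = 2w² and the step counts are assumptions, not taken from the report's network):

```python
def gd(lr, steps=50, w0=1.0):
    """Gradient descent on f(w) = 2*w**2 (gradient 4w); return final |w|.

    Each step multiplies w by (1 - 4*lr), so the iterates converge
    iff |1 - 4*lr| < 1, i.e. 0 < lr < 0.5.
    """
    w = w0
    for _ in range(steps):
        w -= lr * 4.0 * w
    return abs(w)

# lr = 0.01 -> slow, smooth decrease
# lr = 0.1  -> fast, smooth decrease
# lr = 0.3  -> converges, but the sign of w oscillates (overshooting)
# lr = 0.5  -> |w| never shrinks: sustained oscillation at the threshold
# lr = 1.0  -> |w| grows each step: divergence
results = {lr: gd(lr) for lr in (0.01, 0.1, 0.3, 0.5, 1.0)}
```

The same qualitative regimes (slow, efficient, oscillating, unstable, divergent) appear in the network experiments, though the exact thresholds there depend on the loss surface rather than on a fixed quadratic.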
3. Experiment: Change the network architecture (more hidden neurons)
Observations with Increasing Hidden Neurons (Learning Rate = 0.01)
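Varying the hidden-layer width mainly changes the number of trainable parameters. As a sketch of what this experiment varies (the layer sizes and initialization scale below are assumptions, since the report does not show its architecture code):

```python
import numpy as np

def init_network(n_in, n_hidden, n_out, seed=0):
    """Build weight matrices and biases for a single-hidden-layer network."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(scale=0.5, size=(n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.5, size=(n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def n_params(net):
    """Total trainable parameters: grows linearly with the hidden width."""
    return sum(p.size for p in net.values())

# For a 2-input, 1-output network, a width-n hidden layer gives 4n + 1 parameters
widths = {n: n_params(init_network(2, n, 1)) for n in (2, 4, 8, 16)}
```

More hidden neurons increase the model's capacity, which can lower the training loss further but, as with too many epochs, raises the risk of overfitting, especially at a small learning rate such as 0.01, where training is also slower.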