The report details experiments on basic artificial neural networks, focusing on the effects of varying the number of training epochs, the learning rate, and the network architecture. Key findings: increasing the number of epochs yields diminishing returns beyond about 1000; a well-chosen learning rate balances convergence speed and accuracy; and the number of hidden neurons affects training loss, with too few causing underfitting and too many risking overfitting.

UNIVERSITY OF SCIENCE – VNUHCM

FACULTY OF INFORMATION TECHNOLOGY

REPORT

LAB 04: Basic Artificial Neural Networks

Student’s name: Nguyen Quoc Thang

ID: 22127385

Class: 22TGMT
TABLE OF CONTENTS

1. Experiment: Change the number of epochs

2. Experiment: Change the learning rate

3. Experiment: Change the network architecture (more hidden neurons)

1. Experiment: Change the number of epochs

[Training-loss plots for 100, 500, 1000, and 2000 epochs; learning rate = 0.1]
• 100 Epochs: Significant improvement, rapid decrease in training loss.
• 500 Epochs: Continued improvement, slower rate of loss decrease.
• 1000 Epochs: Diminishing returns, training loss plateaus.

• 2000 Epochs: Potential overfitting, marginal decrease in training loss, risk
of poor generalization.
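
The pattern above can be reproduced with a small NumPy sketch. This is an illustrative toy setup, not the lab's actual code: the XOR dataset, the 2-3-1 sigmoid architecture, and the train helper are all assumptions chosen to keep the example self-contained; only the learning rate of 0.1 matches the experiment's setting.

    import numpy as np

    # Toy dataset (XOR) -- an assumption for illustration, not the lab's data.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(n_epochs, lr=0.1, n_hidden=3, seed=0):
        # 2-input, n_hidden-unit, 1-output network with sigmoid activations.
        rng = np.random.default_rng(seed)
        W1 = rng.normal(scale=0.5, size=(2, n_hidden))
        b1 = np.zeros(n_hidden)
        W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
        b2 = np.zeros(1)
        for _ in range(n_epochs):
            # Forward pass.
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # Backward pass for the squared-error loss (up to a constant factor).
            grad_out = (out - y) * out * (1 - out)
            grad_h = (grad_out @ W2.T) * h * (1 - h)
            # Full-batch gradient-descent update.
            W2 -= lr * (h.T @ grad_out)
            b2 -= lr * grad_out.sum(axis=0)
            W1 -= lr * (X.T @ grad_h)
            b1 -= lr * grad_h.sum(axis=0)
        return float(np.mean((out - y) ** 2))

    for n_epochs in (100, 500, 1000, 2000):
        print(f"{n_epochs:>5} epochs -> final training loss {train(n_epochs):.4f}")

Running the sweep shows the loss dropping steeply over the first few hundred epochs and flattening out toward 1000, after which extra epochs buy little.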
2. Experiment: Change the learning rate

1. Low Learning Rate (0.01):
- Slow convergence.
- Gradual decrease in training loss.
- Requires more epochs to reach optimal performance.
2. Moderate Learning Rate (0.05):
- Faster convergence compared to 0.01.
- Steady decrease in training loss.
- Good balance between speed and stability.
3. Optimal Learning Rate (0.1):
- Efficient convergence.
- Rapid decrease in training loss.
- Good balance between speed and accuracy.
4. High Learning Rate (0.3):
- Very fast initial convergence.
- Risk of overshooting the optimal point.
- Potential for oscillations in training loss.
5. Higher Learning Rate (0.5):
- Unstable training.
- Large oscillations in training loss.
- Difficulty in reaching optimal performance.
6. Very High Learning Rate (1.0):
- Highly unstable training.
- Training loss may diverge.
- Model fails to converge.
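
These regimes are visible even on a one-variable problem. The sketch below is a minimal illustration, assuming the toy objective L(w) = 2*w**2 (not the lab's network); it only shows why too-large steps overshoot, oscillate, or diverge.

    # Each learning rate takes 50 gradient steps on L(w) = 2*w**2 starting
    # from w = 1. The gradient is 4*w, so one step multiplies w by (1 - 4*lr):
    # small rates shrink w slowly, lr = 0.1 shrinks it fast, lr = 0.5 gives a
    # factor of -1 (pure oscillation), and lr = 1.0 gives -3 (divergence).
    for lr in (0.01, 0.05, 0.1, 0.3, 0.5, 1.0):
        w = 1.0
        for _ in range(50):
            w -= lr * 4 * w  # gradient-descent update: w <- w - lr * dL/dw
        print(f"lr = {lr}: w after 50 steps = {w:.3e}")

The printed values mirror the observations above: 0.01 barely moves, 0.1 converges efficiently, 0.3 converges fast but with sign flips, 0.5 oscillates forever, and 1.0 blows up.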
3. Experiment: Change the network architecture (more hidden neurons)

Observations with Increasing Hidden Neurons (Learning Rate = 0.01)

1. Few Hidden Neurons (3 neurons):
- Slow decrease in training loss.
- Potential underfitting.

2. Moderate Hidden Neurons (5 neurons):
- Steady decrease in training loss.
- Better fit.

3. Many Hidden Neurons (10 neurons):
- Rapid decrease in training loss.
- Good fit, but potential overfitting.
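
A sketch of this sweep, reusing the same toy XOR setup as in the epoch experiment (an illustrative assumption, not the lab's code): only the learning rate of 0.01 comes from this experiment, and the fixed budget of 5000 epochs is an arbitrary choice for the demo.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def final_loss(n_hidden, lr=0.01, n_epochs=5000, seed=0):
        # Same 2-layer sigmoid network as before, with a variable hidden width.
        rng = np.random.default_rng(seed)
        W1 = rng.normal(scale=0.5, size=(2, n_hidden))
        b1 = np.zeros(n_hidden)
        W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
        b2 = np.zeros(1)
        for _ in range(n_epochs):
            h = sigmoid(X @ W1 + b1)                # forward pass
            out = sigmoid(h @ W2 + b2)
            grad_out = (out - y) * out * (1 - out)  # backward pass
            grad_h = (grad_out @ W2.T) * h * (1 - h)
            W2 -= lr * (h.T @ grad_out)             # gradient-descent update
            b2 -= lr * grad_out.sum(axis=0)
            W1 -= lr * (X.T @ grad_h)
            b1 -= lr * grad_h.sum(axis=0)
        return float(np.mean((out - y) ** 2))

    for n_hidden in (3, 5, 10):
        print(f"{n_hidden:>2} hidden neurons -> final training loss {final_loss(n_hidden):.4f}")

Wider hidden layers drive the training loss down faster, consistent with the observations above; on a dataset this small, the 10-neuron network's extra capacity is exactly what raises the overfitting risk.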
