Neural Networks: A Diffusion Model Changing The Landscape
The Core of Deep Learning
[Diagram: a single neuron with inputs x1, x2, weights w1, w2, and a bias b combining into an output, shown within an input layer, hidden layer, and output layer]
Mathematically:
y = f(Wx + b)
where W is the weight matrix, x the input vector, b the bias, and f a non-linear activation function. This lets neurons capture complex patterns instead of only linear relationships.
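As a minimal sketch of that formula (the input values, weights, and the choice of sigmoid here are illustrative, not from the source), a single neuron's forward pass in plain Python:

```python
import math

def neuron(x, w, b):
    """One artificial neuron: y = f(w·x + b), with sigmoid as f."""
    # Weighted sum of inputs plus bias
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    # Non-linear activation f squashes the result into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs x1, x2 with weights w1, w2 and a bias b (illustrative values)
y = neuron(x=[0.5, -1.2], w=[0.8, 0.3], b=0.1)
```

Without the final activation f, the function would collapse to a plain weighted sum, which is the linear case the text warns about.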
Activation Functions
Without activation functions, neural networks are just
stacked linear equations. They add non-linearity,
enabling networks to learn complex patterns like curves,
edges, and language structures.
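For illustration (a minimal sketch, not tied to any particular framework), two widely used activation functions and how one is applied element-wise to a layer's outputs:

```python
import math

def relu(z):
    # ReLU: passes positive values through, zeroes out negatives
    return max(0.0, z)

def sigmoid(z):
    # Sigmoid: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Applying a non-linearity element-wise to a layer's pre-activations
hidden = [relu(z) for z in [-2.0, 0.5, 3.0]]  # -> [0.0, 0.5, 3.0]
```

Stacking layers of such non-linear units is what lets the network bend its decision boundary into curves rather than straight lines.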
Loss Functions
A loss function measures how far the network's predictions are from the targets. Common types:
• MSE (Mean Squared Error): For regression (predicting numbers).
• Cross-Entropy Loss: For classification (assigning labels).
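A minimal sketch of both losses (the example targets and predictions are illustrative; cross-entropy here assumes a one-hot target and a probability vector):

```python
import math

def mse(y_true, y_pred):
    # Mean Squared Error: average squared gap between targets and predictions
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred):
    # Cross-entropy for a one-hot target: -sum(t * log(p)) over the classes
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred))

reg_loss = mse([3.0, 5.0], [2.5, 5.0])                # regression example
cls_loss = cross_entropy([0, 1, 0], [0.1, 0.8, 0.1])  # classification example
```

Training minimizes one of these quantities: the closer the predictions get to the targets, the smaller the loss.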
Feature Hierarchy
As data flows through the network, each layer learns increasingly abstract features. Example:
• Early layers in an image model detect edges.
• Middle layers recognize shapes.
• Deep layers identify objects (like faces or cars).
Overfitting
Overfitting happens when a model memorizes its training data instead of learning patterns that generalize to new data. To prevent it:
• Dropout: Randomly disables neurons during training.
• L1/L2 Regularization: Adds penalties to keep weights from growing too large.
• More Data: A bigger dataset helps the model generalize better.
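A minimal sketch of the first two techniques (the function names, rates, and the inverted-dropout scaling convention are illustrative assumptions, not from the source):

```python
import random

def dropout(activations, p=0.5, training=True):
    # Inverted dropout: during training, zero each activation with
    # probability p and scale survivors by 1/(1-p); at inference, do nothing
    if not training:
        return activations
    return [0.0 if random.random() < p else a / (1.0 - p) for a in activations]

def l2_penalty(weights, lam=0.01):
    # L2 regularization: add lam * sum(w^2) to the loss so that large
    # weights are penalized and the model stays simpler
    return lam * sum(w * w for w in weights)
```

In practice the L2 penalty is added to the loss before backpropagation, and dropout is switched off (training=False) when the model is evaluated.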