Neural Networks
Artificial Neural Networks (ANNs) are computing systems inspired by the way the
human brain works. They are made up of simple building blocks called artificial
neurons, or units, which are organized into layers. These layers work together to
process information and solve complex tasks, such as recognizing images,
predicting values, or understanding speech.
Structure of an Artificial Neural Network
An Artificial Neural Network is typically made up of three types of layers:
1. Input Layer
o The input layer is the starting point of the neural network.
o It receives raw data from the outside world.
o Each unit (or neuron) in the input layer represents one feature or
attribute of the data.
o Example: If we are building a network to recognize handwritten
digits, each pixel of the image would be an input unit.
2. Hidden Layers
o After the input layer, the data moves through one or more hidden
layers.
o These layers perform complex transformations and calculations to
detect patterns or relationships in the data.
o The hidden layers are called "hidden" because we do not directly see
their operations; they work behind the scenes.
o A simple network might have just one hidden layer, but modern deep
learning models often have many hidden layers.
3. Output Layer
o The final layer in the network is the output layer.
o It produces the result or prediction based on the processed
information.
o Example: In a digit recognition system, the output layer would tell
us whether the input image is a 0, 1, 2, etc. (see the sketch after this list).
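The following is a minimal sketch of the three-layer structure just described, applied to the handwritten-digit example: 784 input units (one per pixel of a 28x28 image), one hidden layer, and 10 output units (digits 0 through 9). The hidden-layer size of 32 and the sigmoid activation are illustrative assumptions, not values given in the text.

```python
import numpy as np

def sigmoid(z):
    # Squashes each value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Weights and biases connecting each layer to the next
# (randomly initialized; training would adjust them).
W_hidden = rng.normal(size=(784, 32))   # input layer  -> hidden layer
b_hidden = np.zeros(32)
W_output = rng.normal(size=(32, 10))    # hidden layer -> output layer
b_output = np.zeros(10)

def forward(pixels):
    """Pass one flattened 28x28 image through the network."""
    hidden = sigmoid(pixels @ W_hidden + b_hidden)   # hidden-layer activations
    output = sigmoid(hidden @ W_output + b_output)   # one score per digit
    return output

# Example: a fake image made of random pixel intensities.
image = rng.random(784)
scores = forward(image)
print("Predicted digit:", scores.argmax())
```

The output layer here simply reports the digit whose unit has the highest score, mirroring the example above.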
Connections and Weights
Units in one layer are connected to units in the next layer, and each connection
has an associated weight. The weight determines how much influence one neuron
has on another: higher weights mean stronger influence, while lower weights mean
weaker influence.
Example:
If a neural network is learning to recognize animals, the feature "has four legs"
might have a strong weight when identifying a dog, but a weaker weight when
identifying a bird.
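The toy calculation below illustrates this idea of weights as influence, following the animal example. The feature values and weight numbers are made up purely for illustration.

```python
# One input pattern: which features are present (1.0) or absent (0.0).
features = {"has_four_legs": 1.0, "can_fly": 0.0, "barks": 1.0}

# A neuron responding to "dog" might weight "has four legs" strongly...
dog_weights = {"has_four_legs": 0.8, "can_fly": -0.9, "barks": 1.2}
# ...while a neuron responding to "bird" weights it only weakly.
bird_weights = {"has_four_legs": 0.1, "can_fly": 1.5, "barks": -1.0}

def weighted_sum(features, weights):
    """Each input is multiplied by its weight; larger weights mean more influence."""
    return sum(features[name] * weights[name] for name in features)

print("dog score: ", weighted_sum(features, dog_weights))   # higher for this input
print("bird score:", weighted_sum(features, bird_weights))  # lower for this input
```

Because the "has four legs" and "barks" features carry large weights for the dog neuron and small or negative weights for the bird neuron, the same input produces a much higher dog score.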