ML 6
Only one layer of weights connects the input layer directly to the output layer. It is suitable for linearly separable problems but lacks the ability to solve complex, non-linear problems.
Mathematically:
y = f(∑ w_i x_i + b)
Where:
x_i: Input features
w_i: Weights
b: Bias
f: Activation function
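As a quick illustration, the formula above can be sketched in Python, using a step function as the activation f and hypothetical weights chosen so the unit computes logical AND:

```python
# Sketch of a single-layer perceptron: y = f(sum(w_i * x_i) + b).
# The step activation and the AND weights below are illustrative assumptions.

def step(z):
    # Step activation: outputs 1 when the weighted sum is non-negative.
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(z)

# Weights and bias chosen so the perceptron computes logical AND.
weights, bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], weights, bias))  # 1
print(perceptron([1, 0], weights, bias))  # 0
```

Because a single layer can draw only one linear decision boundary, the same structure cannot represent XOR, which is not linearly separable.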
Key points:
3. Backpropagation Learning
Backpropagation is an algorithm used for training neural networks by minimizing the error.
Steps:
3. Backward Pass: Propagate the error back through the network and update weights
using gradient descent.
w_new = w_old − η ∂E/∂w
Where:
η: Learning rate
E: Error function
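The update rule can be sketched on a toy error function; E(w) = (w − 3)² is an assumption chosen only so the gradient is easy to write down:

```python
# Sketch of gradient descent: w_new = w_old - eta * dE/dw.
# Toy error E(w) = (w - 3)^2, whose minimum is at w = 3 (illustrative assumption).

def dE_dw(w):
    return 2 * (w - 3)  # analytic gradient of (w - 3)^2

w = 0.0    # initial weight
eta = 0.1  # learning rate
for _ in range(100):
    w = w - eta * dE_dw(w)

print(round(w, 3))  # 3.0 (converged to the minimum)
```

Too large a learning rate makes the updates overshoot; for this toy function, any η above 1 diverges.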
Enhances the network's ability to capture non-linear patterns without additional layers.
Structure:
Hidden Layer: Computes the distance between input vectors and centroids,
applying the RBF kernel.
6. Activation Functions
Activation functions introduce non-linearity, enabling networks to learn complex patterns.
Common types:
Sigmoid: σ(x) = 1 / (1 + e^(−x))
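As a sketch, the common activations named in these notes can be written as plain Python functions (tanh already ships in the math module):

```python
import math

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x)): squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # ReLU: passes positive inputs through, zeroes out negatives
    return max(0.0, x)

print(sigmoid(0))      # 0.5
print(relu(-2.0))      # 0.0
print(math.tanh(0.0))  # 0.0 (tanh squashes into (-1, 1))
```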
Key Idea: Use previous hidden states to influence the current output.
Variants: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), which address vanishing gradients.
Components:
Pooling Layers: Reduce spatial dimensions and computational complexity (e.g., max
pooling).
Applications: Image recognition, object detection, and medical imaging.
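For example, 2×2 max pooling with stride 2 keeps only the largest value in each window, quartering the feature map's size; the sketch below assumes a plain list-of-lists feature map:

```python
# Sketch of 2x2 max pooling with stride 2 on a 4x4 feature map.

def max_pool_2x2(fmap):
    pooled = []
    for i in range(0, len(fmap), 2):
        row = []
        for j in range(0, len(fmap[0]), 2):
            # keep the maximum of each 2x2 window
            row.append(max(fmap[i][j], fmap[i][j + 1],
                           fmap[i + 1][j], fmap[i + 1][j + 1]))
        pooled.append(row)
    return pooled

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 5],
        [0, 1, 3, 2],
        [2, 6, 0, 1]]
print(max_pool_2x2(fmap))  # [[4, 5], [6, 3]]
```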
Architecture of ANN
The architecture of an ANN is typically composed of the following components:
1. Input Layer
2. Hidden Layer(s)
One or more layers situated between the input and output layers.
Each neuron in the hidden layer computes a weighted sum of inputs, adds a bias, and
applies an activation function to introduce non-linearity.
3. Output Layer
The number of neurons in this layer corresponds to the number of output classes (for
classification) or the output variables (for regression).
Components of ANN Architecture
1. Neurons (Nodes)
z = ∑_{i=1}^{n} w_i x_i + b
Where:
x_i: Input features
w_i: Weights
b: Bias
z: Weighted sum
The result is passed through an activation function.
Bias: Allows the activation function to shift to better fit the data.
3. Activation Function
Sigmoid
ReLU
Tanh
4. Connections (Edges)
Represent the flow of data between neurons.
Can be:
Fully Connected: Every neuron is connected to all neurons in the next layer.
1. Forward Propagation:
Each neuron calculates a weighted sum of its inputs, adds bias, applies the
activation function, and passes the result to the next layer.
2. Loss Calculation:
The difference between the predicted output and the actual target is measured
using a loss function (e.g., Mean Squared Error, Cross-Entropy Loss).
3. Backpropagation:
Gradients are calculated using the chain rule, and weights are updated using
gradient descent or its variants (e.g., Adam, RMSprop).
4. Iteration:
The process repeats for multiple epochs until the loss converges or meets a defined
threshold.
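The four steps above can be sketched end-to-end for a single linear neuron trained with MSE loss and plain gradient descent; the data and hyperparameters are illustrative assumptions:

```python
# Sketch of the training loop: forward pass, loss, backpropagation, iteration.
# Single linear neuron y = w*x + b; toy data follows y = 2x (an assumption).

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w, b, eta = 0.0, 0.0, 0.05

for epoch in range(2000):         # 4. iterate for multiple epochs
    grad_w = grad_b = loss = 0.0
    for x, y in zip(xs, ys):
        pred = w * x + b          # 1. forward propagation
        err = pred - y
        loss += err ** 2          # 2. loss calculation (squared error)
        grad_w += 2 * err * x     # 3. backpropagation: dL/dw via chain rule
        grad_b += 2 * err         #    and dL/db
    n = len(xs)
    w -= eta * grad_w / n         # gradient-descent weight update
    b -= eta * grad_b / n

print(round(w, 2), round(b, 2))   # 2.0 0.0
```

With a network of several layers, step 3 would propagate these gradients backward through each layer in turn; the structure of the loop stays the same.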
Diagram of ANN Architecture
Input Layer → Hidden Layer(s) → Output Layer
Artificial Neural Networks (ANNs) can be categorized based on the number and arrangement
of layers in their architecture. Here are the main types:
Key Feature: No hidden layers; the input is directly mapped to the output.
3. Feedforward Neural Network
Structure: Data flows in one direction (from input to output), with no cycles.
Use Case: Pattern classification and regression on fixed-size inputs; sequential data such as time-series, speech, and text are instead handled by recurrent networks.
7. Radial Basis Function Network (RBFN)
Structure: Hidden layer neurons use radial basis functions as activation functions.
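A hidden RBF neuron can be sketched with the Gaussian kernel φ(x) = exp(−‖x − c‖² / (2σ²)); the centroid and width below are illustrative assumptions:

```python
import math

# Sketch of an RBF hidden neuron: a Gaussian function of the distance
# between the input vector and a centroid (centroid/width are assumptions).

def rbf(x, center, sigma=1.0):
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2 * sigma ** 2))

print(rbf([0.0, 0.0], [0.0, 0.0]))  # 1.0 at the centroid
print(rbf([3.0, 4.0], [0.0, 0.0]))  # near 0 far from the centroid
```

The response is largest for inputs near the centroid and falls off with distance, which is what lets the hidden layer carve out localized non-linear regions.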