Soft Computing Unit-2
Perceptron Learning
The activation function is then used to transform this weighted sum z
into a value within a specific range. The activation function can be, for
example, a step function (which gives an output of -1 or 1) or a
sigmoid function (which gives an output between 0 and 1 that can be read
as a class probability).
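A minimal sketch of the two activations mentioned above, in Python with NumPy; the input, weight, and bias values here are made up purely for illustration:

```python
import numpy as np

def step(z):
    # Bipolar step: returns -1 or 1 depending on the sign of z
    return np.where(z >= 0, 1, -1)

def sigmoid(z):
    # Squashes z into (0, 1); the output can be read as a class probability
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # example inputs (illustrative)
w = np.array([0.4, 0.3, -0.2])   # example weights (illustrative)
b = 0.1                          # bias
z = np.dot(w, x) + b             # weighted sum
print(step(z), sigmoid(z))
```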
Single-Layer Perceptron:
This is the basic form of the perceptron, consisting of one layer of
neurons (or units).
It has an input layer, a set of weights, an activation function, and an
output layer. It is used for simple binary classification tasks where the
data is linearly separable.
Input Layer: Receives raw data (features) and passes it to the next layer.
Output Layer: Produces the final result (classification or prediction)
based on the processed data.
Hidden Layer: Present only in multi-layer perceptrons; it processes input
data using weights and activation functions, capturing patterns or
relationships in the data.
Applications of Perceptron:
Advantages:
Disadvantages:
w_i = w_i + η · (y − ŷ) · x_i
where:
w_i is the weight on input x_i, η is the learning rate, y is the true
output, and ŷ is the predicted output.
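A minimal sketch of this learning rule in Python, assuming 0/1 targets, a step activation, and a toy AND dataset chosen only for illustration:

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, epochs=10):
    """Perceptron learning rule: w_i = w_i + eta * (y - y_hat) * x_i."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            y_hat = 1 if np.dot(w, xi) + b >= 0 else 0  # step activation
            update = eta * (target - y_hat)
            w += update * xi
            b += update
    return w, b

# Linearly separable toy data: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)
```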
Architecture of Adaline
First, calculate the net input to the Adaline network and apply the activation
function to obtain the output. Compare this output with the target: if the two
are equal, report the output; otherwise, send the error back through the network
and update the weights according to the delta learning rule, i.e.
w_i = w_i + η · (t − y_in) · x_i
where w_i, y_in, and t are the weight, predicted output (the net input), and
true value respectively.
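A minimal sketch of this delta-rule training loop, assuming bipolar inputs and targets and an illustrative learning rate:

```python
import numpy as np

def train_adaline(X, y, eta=0.01, epochs=50):
    """Delta rule: w_i = w_i + eta * (t - y_in) * x_i, where y_in is the net input."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, t in zip(X, y):
            y_in = np.dot(w, xi) + b       # net input (linear activation)
            error = t - y_in               # compare with the true value
            w += eta * error * xi          # update weights by the delta rule
            b += eta * error
    return w, b

X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
y = np.array([1, -1, -1, -1])              # bipolar AND targets (illustrative)
print(train_adaline(X, y))
```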
Adaline Algorithm:
Applications of Adaline:
Advantages of Adaline:
Disadvantages of Adaline:
Architecture of Madaline
There are three types of a layer present in Madaline First input layer contains
all the input neurons, the Second hidden layer consists of an adaline layer, and
weights between the input and hidden layers are adjustable and the third layer is
the output layer the weights between hidden and output layer is fixed they are
not adjustable.
Madaline Algorithm:
1. Initialize: Set the adjustable weights to small random values.
2. Input Data: Prepare your training dataset with input features and target
outputs.
3. Feedforward Process:
- For each input sample, calculate the output of each neuron by taking the
weighted sum of inputs and applying an activation function.
- Pass the output to the next layer until you reach the final output layer.
4. Error Calculation: Compare the predicted output with the actual target output.
5. Weight Adjustment: Update the adjustable weights of the Adaline units to
reduce the error.
6. Iteration: Repeat the feedforward and weight adjustment steps for several
epochs until the weights stabilize or you reach a desired accuracy.
7. Final Prediction: After training, use the network to predict outputs for new
data by following the feedforward process.
This algorithm helps the Madaline network learn patterns in the data for
classification tasks.
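A minimal feedforward sketch of steps 3 and 7 for a Madaline with two Adaline hidden units; the weights here are illustrative, and the fixed output unit is taken to be a bipolar OR, one common choice for the fixed output layer:

```python
import numpy as np

def madaline_forward(x, W, b):
    """One Madaline pass: Adaline hidden units plus a fixed OR output unit."""
    z = np.dot(W, x) + b                   # net inputs of the hidden Adalines
    h = np.where(z >= 0, 1, -1)            # bipolar step activations
    # Fixed output layer: bipolar OR of the hidden activations
    return 1 if np.any(h == 1) else -1

W = np.array([[ 0.5, -0.5],                # adjustable hidden weights (illustrative)
              [-0.5,  0.5]])
b = np.array([-0.3, -0.3])
print(madaline_forward(np.array([1, -1]), W, b))
```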
Advantages of Madaline:
Disadvantages of Madaline:
Applications of Madaline:
Architecture:
Key Features of Backpropagation:
1. Forward Propagation:
o Input data is passed through the network, layer by layer, until the
final output is produced.
o Each neuron in the network performs a weighted sum of inputs,
applies an activation function, and passes the output to the next
layer.
2. Calculate Error:
o The error is calculated as the difference between the predicted
output and the actual target output.
3. Backward Propagation:
o The error is propagated backward from the output layer to the
hidden layers, adjusting weights to reduce the error.
o This is done using the gradient descent optimization algorithm to
minimize the error.
4. Weight Update:
o Weights are updated in the direction that reduces the error, based
on the gradient of the error with respect to the weights.
5. Repeat:
o The process is repeated for multiple iterations (epochs) until the
error is minimized or converges.
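A minimal sketch of these five steps for a tiny one-hidden-layer network, assuming sigmoid activations, a squared-error loss, and an illustrative XOR dataset; the network size, seed, and learning rate are assumptions, not fixed choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-4-1 network trained on XOR by plain gradient descent
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
eta = 0.5

for epoch in range(10000):
    # 1. Forward propagation, layer by layer
    h = sigmoid(X @ W1.T + b1)            # hidden activations
    out = sigmoid(h @ W2.T + b2)          # final output
    # 2. Error between prediction and target
    err = out - y
    # 3. Backward propagation of the error through the sigmoid derivatives
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2) * h * (1 - h)
    # 4. Weight update along the negative gradient
    W2 -= eta * (d_out.T @ h) / len(X)
    b2 -= eta * d_out.mean(axis=0)
    W1 -= eta * (d_h.T @ X) / len(X)
    b1 -= eta * d_h.mean(axis=0)

print(out.round(2))    # should approach [[0], [1], [1], [0]] as training proceeds
```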
Applications of Backpropagation:
Advantages:
Disadvantages:
1. Slow Convergence: It may take a long time to train, especially with deep
networks.
2. Prone to Overfitting: Can overfit the training data, especially if not
regularized properly.
3. Requires Large Datasets: Needs a substantial amount of labeled data for
effective training.
Training Algorithm:
1. Initialize: Set the number of hidden neurons and randomly select centers
from training data.
2. Calculate Spread: Define the spread (sigma) for the RBFs.
3. Compute Hidden Layer Output: For each input, calculate the output of
RBF neurons using:
- Output = exp(-((Input - Center)²) / (2 * sigma²))
4. Output Layer Calculation: Combine hidden layer outputs with weights
to get the final output.
5. Error Calculation: Compare predicted output with actual target.
6. Adjust Weights: Update weights based on error.
7. Repeat: Iterate until convergence.
8. Predict: Use the trained network for new data.
This allows the RBF network to learn and make predictions.
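A minimal sketch of this procedure, with the iterative weight updates of steps 5-7 replaced by a closed-form least-squares solve; the 1-D sine data, center choice, and spread value are illustrative only:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    # Gaussian RBF: exp(-||x - c||^2 / (2 * sigma^2)) for each input/center pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy 1-D function approximation
X = np.linspace(0, 2 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()
centers = X[::5]                       # step 1: select centers from the data
sigma = 0.5                            # step 2: spread of the RBFs
H = rbf_design(X, centers, sigma)      # step 3: hidden layer outputs
w, *_ = np.linalg.lstsq(H, y, rcond=None)   # steps 4-7 in closed form
print(np.abs(H @ w - y).max())         # training error of the fit
```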
Applications of RBF Networks:
1. Function Approximation: Approximating complex functions based on
available data.
2. Classification: Used for tasks like handwritten digit recognition or
categorizing data points into different classes.
3. Time Series Prediction: Predicting future values based on historical data
patterns.
4. Signal Processing: Filtering and analyzing signals in various applications
like audio and image processing.
5. Control Systems: In robotics and autonomous systems for real-time
control.
Advantages of RBF Networks:
1. Fast Learning: The network can learn quickly because the hidden layer
weights are determined by the proximity of the input to the centers.
2. Effective for Nonlinear Problems: RBF networks are well-suited for
handling complex, nonlinear decision boundaries.
3. Good Generalization: RBF networks often generalize well to unseen
data, especially for function approximation.
Disadvantages of RBF Networks:
1. Sensitive to Centers Selection: The choice of centers can affect
performance, and selecting them optimally can be challenging.
2. Need for Clustering: Finding the optimal centers (e.g., through
K-means) adds complexity to the training process.
3. Computationally Expensive for Large Datasets: RBF networks may
require significant computational resources for large datasets due to the
need to compute pairwise distances.
Applications of Neural Networks in Forecasting, Data Compression,
and Image Compression
Forecasting:
Neural networks are powerful tools for predicting future values or trends
based on historical data. Here’s how they are applied in different
forecasting domains:
Time Series Forecasting: Neural networks can model time-dependent
patterns and trends, making them ideal for predicting future values in
time series data (e.g., stock prices, weather conditions, sales, or demand
forecasting).
o Example: Predicting stock market trends using Recurrent Neural
Networks (RNNs) or Long Short-Term Memory (LSTM)
networks, which can handle sequential data and capture temporal
dependencies.
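A minimal sketch of how forecasting data is typically framed for such a network: a sliding window of past values predicts the next value. The synthetic series and the linear least-squares predictor standing in for the network are illustrative only; an RNN or LSTM would consume the same windows:

```python
import numpy as np

def make_windows(series, window=5):
    """Turn a 1-D series into (past-window, next-value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Synthetic series standing in for e.g. daily sales or prices
t = np.arange(200)
series = np.sin(0.1 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
X, y = make_windows(series)
# X could now be fed to an RNN/LSTM; here a linear fit stands in
# as the simplest possible one-step-ahead predictor
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.abs(X @ w - y).mean())        # mean absolute one-step error
```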
Data Compression:
Data compression is the process of reducing the size of data to save space
or transmission time. Neural networks are increasingly used in this
domain for both lossy and lossless compression.