Assignment 02
Name: Abhishek Yadav (Roll No. 33) and Ravindra Yadav (Roll No. 06)
Subject: SOFT COMPUTING
1. Supervised Learning
• Definition: Learning from labeled data,
where the input-output pairs are known.
• Process: The system is trained using a
dataset that includes input features and
the correct output. The model makes
predictions and is corrected by comparing
with actual outputs (see the sketch after this list).
• Goal: Minimize the error between
predicted and actual outputs.
• Examples:
o Artificial Neural Networks (ANNs)
o Support Vector Machines (SVM)
• Applications: Classification, regression,
speech recognition.
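To make the process concrete, here is a minimal Python sketch of supervised learning with the delta (LMS) rule; the tiny logical-AND dataset, learning rate, and epoch count are illustrative assumptions, not part of the assignment.

import random

# Labeled data: (input features, correct output) pairs for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
lr = 0.1  # learning rate (illustrative)

for epoch in range(50):
    for (x1, x2), target in data:
        y = w[0] * x1 + w[1] * x2 + b   # prediction from the current weights
        error = target - y              # compare with the known (labeled) output
        w[0] += lr * error * x1         # correct the model to reduce the error
        w[1] += lr * error * x2
        b += lr * error

print("learned weights:", w, "bias:", b)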
2. Unsupervised Learning
• Definition: Learning from data that has no
labels.
• Process: The system tries to find patterns,
structures, or relationships within the
input data without any output
supervision (a short sketch follows this section).
• Goal: Group or cluster similar data points.
• Examples:
o K-Means Clustering
o Self-Organizing Maps (SOMs)
• Applications: Market segmentation,
anomaly detection, pattern recognition.
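As an illustration, a minimal K-Means sketch in Python; the sample points, the choice of two clusters, and the fixed iteration count are illustrative assumptions.

# Unlabeled 2-D points; the algorithm must discover the grouping itself.
points = [(1.0, 1.2), (0.9, 0.8), (5.0, 5.1), (5.2, 4.9)]
centroids = [points[0], points[2]]   # naive initialisation (assumption)

def nearest(p, cs):
    # index of the centroid closest to point p
    return min(range(len(cs)),
               key=lambda i: (p[0] - cs[i][0]) ** 2 + (p[1] - cs[i][1]) ** 2)

for _ in range(10):
    clusters = [[] for _ in centroids]
    for p in points:
        clusters[nearest(p, centroids)].append(p)          # assignment step
    centroids = [(sum(p[0] for p in c) / len(c),           # update step
                  sum(p[1] for p in c) / len(c)) for c in clusters if c]

print("cluster centres:", centroids)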
3. Reinforcement Learning
• Definition: Learning through interaction
with an environment to achieve a goal.
• Process: An agent performs actions in an
environment, receives feedback in the
form of rewards or penalties, and learns
the optimal policy (illustrated in the sketch below).
• Goal: Maximize cumulative reward over
time.
• Examples:
o Q-Learning
o Temporal Difference Learning
• Applications: Robotics, game playing,
autonomous systems.
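For illustration, a minimal Q-learning sketch on a hypothetical five-state corridor: the agent starts at state 0 and receives a reward of 1 for reaching state 4. The environment, learning rate, discount factor, and exploration rate are illustrative assumptions.

import random

n_states, actions = 5, [0, 1]              # action 0 = move left, 1 = move right
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q-table: value of each (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != 4:
        if random.random() < eps:
            a = random.choice(actions)     # explore
        else:
            best = max(Q[s])               # exploit (ties broken randomly)
            a = random.choice([i for i in actions if Q[s][i] == best])
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0                            # reward feedback
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])  # Q-learning update
        s = s_next

print("learned Q-values:", Q)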
4. Evolutionary Learning
• Definition: Learning based on the
principles of natural evolution such as
selection, mutation, and crossover.
• Process: A population of solutions evolves
over time to find the best one (see the example after this list).
• Goal: Optimize a given fitness function.
• Examples:
o Genetic Algorithms (GA)
o Genetic Programming (GP)
• Applications: Optimization problems,
feature selection, machine learning model
design.
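A minimal genetic algorithm sketch in Python; the "count the ones in an 8-bit string" fitness function, population size, and mutation rate are illustrative assumptions.

import random

def fitness(ind):
    return sum(ind)                          # fitness function to be maximised (assumption)

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # selection: keep the fitter half
    children = []
    while len(children) < 10:
        p1, p2 = random.sample(parents, 2)
        cut = random.randint(1, 7)
        child = p1[:cut] + p2[cut:]          # crossover
        if random.random() < 0.1:            # mutation
            i = random.randrange(8)
            child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children                 # next generation

print("best individual:", max(pop, key=fitness))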
5. Hebbian Learning
• Definition: Based on the biological
principle: “Neurons that fire together,
wire together.”
• Process: Strengthens the connection
between two neurons if they are activated
simultaneously (a brief sketch follows this list).
• Goal: Enhance correlation between inputs
and outputs.
• Applications: Pattern recognition,
associative memory.
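A minimal Hebbian-learning sketch in Python: a weight grows only when its input and the output neuron are active together. The binary patterns and learning rate are illustrative assumptions.

# (input pattern, output activity) pairs -- illustrative assumption
patterns = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 1)]
w = [0.0, 0.0, 0.0]
lr = 0.1

for x, y in patterns:
    for i in range(len(w)):
        w[i] += lr * x[i] * y   # "fire together, wire together": update only when both are active

print("Hebbian weights:", w)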
6. Competitive Learning
• Definition: Neurons compete to become active; only one or a few "winning" neurons are updated.
• Process: For each input, the neuron whose weights best match the input wins and moves its weights toward that input, which supports clustering and categorization (see the sketch after this list).
• Goal: Group similar input patterns.
• Examples: Self-Organizing Maps (Kohonen
Networks)
• Applications: Data compression,
clustering.
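A minimal winner-take-all sketch in Python, in the spirit of (but much simpler than) a Kohonen network; the 2-D inputs, the two neurons, and the learning rate are illustrative assumptions.

inputs = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
weights = [[0.5, 0.0], [0.0, 0.5]]   # one weight (prototype) vector per neuron
lr = 0.2

for _ in range(20):
    for x in inputs:
        # the neuron whose weights are closest to the input wins the competition
        dists = [sum((x[i] - w[i]) ** 2 for i in range(2)) for w in weights]
        winner = dists.index(min(dists))
        for i in range(2):
            # only the winning neuron moves its weights toward the input
            weights[winner][i] += lr * (x[i] - weights[winner][i])

print("prototype vectors:", weights)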
QUES 3: Compare and contrast ADALINE and MADALINE.
ANS 3:
ADALINE (Adaptive Linear Neuron)
• Full Form: Adaptive Linear Neuron
• Developer: Bernard Widrow and Marcian Hoff (1960)
• Structure: Single-layer neural network with a single output neuron
• Input Type: Takes multiple inputs and produces one output
• Learning Algorithm: Uses the Least Mean Square (LMS) or delta rule for weight updates
• Activation Function: Linear in training (raw output), threshold applied for final output
• Use Case: Binary classification (linearly separable problems)
• Limitation: Cannot solve non-linearly separable problems (like XOR)
• Output Type: Binary (after applying threshold)
Key Differences:
• Architecture: ADALINE is single-layer; MADALINE is multi-layer.
• Complexity: ADALINE is simple; MADALINE is more complex.
• Problem Solving: ADALINE handles only linearly separable problems; MADALINE can solve non-linearly separable problems.
• Learning Algorithm: ADALINE uses the delta rule (LMS), sketched at the end of this answer; MADALINE uses the MADALINE rules (heuristic).
• Output Neurons: ADALINE has a single output neuron; MADALINE has multiple.
Analogy:
• ADALINE is like a single opinion based on a
few facts.
• MADALINE is like a team of experts
collaborating to form a better decision.
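For illustration, a minimal ADALINE training sketch in Python: training uses the raw linear output with the LMS/delta rule, and the threshold is applied only when producing the final binary class. The logical-OR data, the targets of ±1, and the learning rate are illustrative assumptions; a MADALINE would combine several such units in layers.

data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # logical OR (assumption)
w, b, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(50):
    for (x1, x2), target in data:
        net = w[0] * x1 + w[1] * x2 + b   # linear (raw) output used during training
        err = target - net                # LMS / delta-rule error
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

def classify(x1, x2):
    # threshold applied only for the final binary output
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

print([classify(x1, x2) for (x1, x2), _ in data])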
QUES 4: Discuss the error backpropagation algorithm (EBPA) and its characteristics.
ANS 4: Error Backpropagation Algorithm (EBPA)
The Error Backpropagation Algorithm is a
supervised learning algorithm used for
training multilayer feedforward neural
networks (especially multilayer perceptrons, MLPs). It is
one of the most widely used algorithms in
artificial neural networks.
Characteristics of EBPA:
• Type: Supervised learning
• Architecture Requirement: Multilayer feedforward network (minimum 1 hidden layer)
• Activation Function: Must be differentiable (e.g., sigmoid, tanh, ReLU)
• Error Function: Typically Mean Squared Error (MSE)
• Learning Rule: Gradient descent
• Training Method: Iterative (epoch-based)
• Weight Update: Based on the gradient of the error
• Convergence: May take many iterations; sensitive to the learning rate
• Overfitting: Can occur with too many layers or with training for too long
• Scalability: Suitable for deep networks but computationally expensive
• Optimization: Can be improved using techniques like momentum, learning rate decay, batch training, or adaptive methods (e.g., Adam, RMSprop)
Steps in EBPA:
1. Initialize weights and biases randomly.
2. Feedforward the input to compute the
output.
3. Compute the error at the output layer.
4. Backpropagate the error to calculate
gradients.
5. Update weights and biases using
gradient descent.
6. Repeat steps 2–5 for many iterations
(epochs) until the error is minimized; a code sketch of these steps follows.
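A minimal Python sketch of these six steps for a small 2-2-1 sigmoid network trained on XOR; the network size, learning rate, and number of epochs are assumptions for the example, not values prescribed by the algorithm.

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # hidden-layer and output activations for input x
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]          # XOR (assumption)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]   # step 1: random init
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5

for epoch in range(5000):                                  # step 6: repeat over epochs
    for x, t in data:
        h, y = forward(x)                                  # step 2: feedforward
        delta_out = (y - t) * y * (1 - y)                  # step 3: output-layer error term
        delta_h = [delta_out * W2[j] * h[j] * (1 - h[j])   # step 4: backpropagated gradients
                   for j in range(2)]
        for j in range(2):                                 # step 5: gradient-descent updates
            W2[j] -= lr * delta_out * h[j]
            W1[j][0] -= lr * delta_h[j] * x[0]
            W1[j][1] -= lr * delta_h[j] * x[1]
            b1[j] -= lr * delta_h[j]
        b2 -= lr * delta_out

# typically approaches [0, 1, 1, 0]; may occasionally stall in a local minimum
print([round(forward(x)[1], 2) for x, _ in data])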
Advantages:
• Can learn complex non-linear mappings.
• Widely used in practical applications like
image recognition, speech processing, etc.
• Forms the backbone of deep learning.
Limitations:
• Prone to local minima.
• Slow convergence if not optimized
properly.
• Requires large datasets for good
generalization.
• Sensitive to initial weight values and
hyperparameters.
QUES 5: Explain the architecture of Adaptive Resonance Theory (ART).
ANS 5: Architecture of Adaptive Resonance Theory (ART)
Adaptive Resonance Theory (ART) is a neural
network model developed by Stephen
Grossberg and Gail Carpenter in the 1980s. It
was designed to address the stability-
plasticity dilemma — the challenge of
learning new information (plasticity) without
forgetting previously learned knowledge
(stability).
ART networks are mainly used for pattern
recognition and clustering, especially when
the data is noisy or presented in an
online/real-time manner.
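A greatly simplified, ART-1-flavoured Python sketch of this idea: each binary input either resonates with an existing category (which is then refined) or starts a new category, so previously learned categories are not overwritten. The vigilance threshold and the sample inputs are illustrative assumptions; the full ART architecture additionally uses a comparison layer (F1), a recognition layer (F2), and a reset mechanism, which this sketch does not model.

vigilance = 0.7                     # vigilance threshold (illustrative assumption)
categories = []                     # learned prototype patterns

def match(proto, x):
    # fraction of the input's active bits that the prototype also has
    overlap = sum(p & v for p, v in zip(proto, x))
    return overlap / max(1, sum(x))

for x in [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1)]:
    for proto in categories:
        if match(proto, x) >= vigilance:            # resonance: input fits this category
            for i in range(len(proto)):
                proto[i] = proto[i] & x[i]          # refine only the matching category
            break
    else:
        categories.append(list(x))                  # no resonance: create a new category
                                                    # (plasticity) instead of overwriting
                                                    # old ones (stability)

print("categories:", categories)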
THANK YOU
(Ravindra Yadav, 06)
(Abhishek Yadav, 33)