CCS364-Soft Computing-Unit 5 - Applications - Lecture Notes

The document discusses modeling two-input and three-input functions using fuzzy logic, detailing the steps involved in defining fuzzy sets, rules, and implementing fuzzy inference systems. It also covers printed character recognition (PCR) using multilayer perceptrons, outlining the architecture, training process, and applications of PCR. Additionally, it touches on handwritten neural recognition, including definitions, types, techniques, and the training process involved.


Topic: Modeling a Two-Input Sine Function

Explain modeling a two-input sine function

1. Sine function definition
2. Definition of modeling a two-input sine function
3. Example - Two-Dimensional Sine Function
4. Fuzzy Logic Components - Input Variables, Output Variable, Membership Functions, Fuzzy Rules
5. Steps to Implement Fuzzy Logic to model the two-input sine function - Define Fuzzy Sets and Membership Functions, Define Fuzzy Rules, Create Fuzzy Inference System, Defuzzify the Output, Fuzzy Logic Model Implementation, Model Testing and Evaluation, Model Deployment
Sine Function:

 The sine function is a fundamental mathematical function in trigonometry, given by sin θ = opposite side / hypotenuse,
where θ is the angle between the hypotenuse and the adjacent side of a right triangle.
Modeling a Two-Input Sine Function:

 Modeling a two-input sine function using fuzzy systems involves creating a fuzzy system
model that maps two input variables to an output that approximates a sine function of
those input variables.
Example: Two-Dimensional Sine Function:

 This function takes two inputs, x and y, and produces an output that is the sine of the
Euclidean distance between the input points.
f(x, y) = sin(√(x^2 + y^2))

 Fuzzy Logic Components

1. Input Variables: x and y (e.g., angles in radians)
2. Output Variable: z (e.g., the sine value)
3. Membership Functions: define fuzzy sets for input and output
4. Fuzzy Rules: define relationships between input and output

 Steps to Implement Fuzzy Logic to model the two-input sine function:

Given two-input sine function: f(x, y) = sin(x² + y²)

Step 1: Define Fuzzy Sets and Membership Functions
- Input variables: x, y (set x = 0.5 and y = 0.5)
- Output variable: z (approximation of the sine function)
- Create membership functions for x and y:
  - Input membership functions: Gaussian or triangular for x and y; low, medium, high fuzzy sets for each input
  - Output membership functions: triangular or trapezoidal for z; negative, zero, positive fuzzy sets for the output
- Convert x = 0.5 and y = 0.5 into fuzzy sets

Step 2: Define Fuzzy Rules
- Create a set of fuzzy rules that describe the relationship between the inputs x and y and the output z
- IF-THEN rules, e.g.:
  (1) IF x is low AND y is low THEN z is negative
  (2) IF x is high AND y is high THEN z is positive
  (3) More rules can be added to cover all combinations and interactions

Step 3: Create Fuzzy Inference System
- Combine the fuzzy rules and membership functions to produce an output
- Calculate the output using fuzzy logic operations (AND, OR, NOT)
- Consider using Mamdani or Sugeno fuzzy models

Step 4: Defuzzify the Output
- Convert the fuzzy output back to a crisp value using a defuzzification method such as:
  1. Centroid
  2. Weighted average
  3. Mean of maximum

Step 5: Fuzzy Logic Model Implementation
- Implement the fuzzy logic model in a programming language (e.g., Python, MATLAB)
- Use a fuzzy logic library (e.g., scikit-fuzzy, MATLAB Fuzzy Logic Toolbox); a scikit-fuzzy sketch follows this step list

Step 6: Model Testing and Evaluation
- Test the model with sample inputs
- Evaluate model performance using metrics (e.g., mean squared error, mean absolute error)
- Refine the model as needed:
  1. Adjust membership functions
  2. Tune fuzzy rules
  3. Change the defuzzification method

Step 7: Model Deployment
- Deploy the fuzzy logic model in a suitable application
- Monitor and maintain the model
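Below is a minimal scikit-fuzzy sketch of Steps 1-4. The universes of discourse, the set names and the three rules are illustrative assumptions rather than values taken from the notes, so the crisp output only roughly tracks the true sine value printed alongside it.

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Step 1: fuzzy sets and membership functions for the inputs x, y and the output z
x = ctrl.Antecedent(np.linspace(0, 2, 101), 'x')        # assumed input range [0, 2]
y = ctrl.Antecedent(np.linspace(0, 2, 101), 'y')
z = ctrl.Consequent(np.linspace(-1, 1, 201), 'z')
x.automf(3, names=['low', 'medium', 'high'])            # triangular low/medium/high sets
y.automf(3, names=['low', 'medium', 'high'])
z['negative'] = fuzz.trimf(z.universe, [-1, -1, 0])
z['zero'] = fuzz.trimf(z.universe, [-1, 0, 1])
z['positive'] = fuzz.trimf(z.universe, [0, 1, 1])

# Step 2: IF-THEN rules relating the inputs to the output (only three illustrative rules)
rules = [
    ctrl.Rule(x['low'] & y['low'], z['positive']),      # sine of a small radius is small but positive
    ctrl.Rule(x['medium'] & y['medium'], z['positive']),
    ctrl.Rule(x['high'] & y['high'], z['negative']),
]

# Steps 3-4: Mamdani inference with centroid defuzzification (the library defaults)
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['x'] = 0.5
sim.input['y'] = 0.5
sim.compute()
print('fuzzy z  :', round(sim.output['z'], 3))
print('true sine:', round(np.sin(np.sqrt(0.5**2 + 0.5**2)), 3))   # worked example f(x, y) = sin(√(x² + y²))

Adding more rules and tuning the membership functions (the refinements listed in Step 6) would tighten the approximation.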

Topic: Modeling a Three-Input Non-Linear Function

Explain modeling a three-input non-linear function

1. Non-linear function definition
2. Steps to Implement Fuzzy Logic to model the three-input non-linear function - Define Fuzzy Sets and Membership Functions, Define Fuzzy Rules, Create Fuzzy Inference System, Defuzzify the Output, Fuzzy Logic Model Implementation, Model Testing and Evaluation, Model Deployment

Non-Linear Function:

 Non-linear functions are mathematical functions that do not follow a straight-line relationship
between inputs and outputs.
 Example:
A non-linear function with quadratic and interaction terms: f(x, y, z) = x² + 2y² + 2xy − z
 Graph:
The graph of this function would be a complex, non-linear surface.

Steps to Implement Fuzzy Logic to model the three-input Non-Linear function:

Given three-input non-linear function: f(x, y, z) = x² + 2y² + 2xy − z

Step 1: Define Fuzzy Sets and Membership Functions
- Input variables: x, y, z (e.g., variables affecting the system, such as fuel flow, air flow, and turbine speed in a power generation system)
- Output variable: output (system response = approximation of the non-linear function, e.g., electricity output)
- Create input and output membership functions:
  - Input membership functions: Gaussian or triangular for x, y, and z; low, medium, high fuzzy sets for each input
  - Output membership functions: triangular or trapezoidal for the output; negative, zero, positive fuzzy sets for the output

Step 2: Define Fuzzy Rules
- Create a set of fuzzy rules that describe the relationship between the inputs x, y, and z and the output
- IF-THEN rules, e.g.:
  (1) Rule 1: IF x is low AND y is low AND z is low THEN output is low
  (2) Rule 2: IF x is high AND y is high AND z is high THEN output is high
  (3) More rules can be added to cover all combinations and interactions

Step 3: Create Fuzzy Inference System
- Combine the fuzzy rules and membership functions to produce an output
- Calculate the output using fuzzy logic operations (AND, OR, NOT)
- Consider using Mamdani or Sugeno fuzzy models

Step 4: Defuzzify the Output
- Convert the fuzzy output back to a crisp value using a defuzzification method such as:
  1. Centroid
  2. Weighted average
  3. Mean of maximum

Step 5: Fuzzy Logic Model Implementation
- Implement the fuzzy logic model in a programming language (e.g., Python, MATLAB)
- Use a fuzzy logic library (e.g., scikit-fuzzy, MATLAB Fuzzy Logic Toolbox); a short sketch follows this step list

Step 6: Model Testing and Evaluation
- Test the model with sample inputs
- Evaluate model performance using metrics (e.g., mean squared error, mean absolute error)
- Refine the model as needed:
  1. Adjust membership functions
  2. Tune fuzzy rules
  3. Change the defuzzification method

Step 7: Model Deployment
- Deploy the fuzzy logic model in a suitable application
- Monitor and maintain the model
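The same scikit-fuzzy pattern extends to three inputs. In the sketch below only the two rules from the table are encoded, and the universes and set names are assumptions, so the defuzzified output is only a coarse approximation of the true function value.

import numpy as np
from skfuzzy import control as ctrl

x = ctrl.Antecedent(np.linspace(0, 1, 101), 'x')            # assumed input range [0, 1]
y = ctrl.Antecedent(np.linspace(0, 1, 101), 'y')
z = ctrl.Antecedent(np.linspace(0, 1, 101), 'z')
out = ctrl.Consequent(np.linspace(-1, 5, 301), 'output')    # assumed output range
for var in (x, y, z, out):
    var.automf(3, names=['low', 'medium', 'high'])

rules = [
    ctrl.Rule(x['low'] & y['low'] & z['low'], out['low']),      # Rule 1 from the table
    ctrl.Rule(x['high'] & y['high'] & z['high'], out['high']),  # Rule 2 from the table
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['x'], sim.input['y'], sim.input['z'] = 0.8, 0.8, 0.8
sim.compute()
true_value = 0.8**2 + 2 * 0.8**2 + 2 * 0.8 * 0.8 - 0.8          # f(0.8, 0.8, 0.8) = 2.4
print('fuzzy output:', round(sim.output['output'], 3), '  true f:', true_value)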

Topic: Printed Character Recognition (PCR)

Explain Printed Character Recognition using a Multilayer Perceptron

(1). Definition of PCR
(2). Types – OCR, ICR, Barcode recognition
(3). Application of Printed Character Recognition using Multilayer Perceptrons – Architecture / Key components (Artificial neurons/perceptrons, Activation functions, Weights, Biases)
(4). Training Process of printed character recognition using a Multilayer Perceptron (MLP) – Data collection, Data preprocessing, Feature Extraction (Edge Detection, Contour Detection, XOR Operation), Build the MLP model, Feed XOR Output to MLP Classifier for Training the Model, Evaluation and Testing, Post-processing for error correction and text formatting

Printed Character Recognition:

 Printed Character Recognition (PCR) is a technology used to recognize and classify printed
characters from images or scanned documents.
 Techniques used in PCR include neural networks such as Multilayer Perceptrons (MLPs), Convolutional
Neural Networks (CNNs), and Recurrent Neural Networks (RNNs).
Types of PCR:
1. Optical Character Recognition (OCR): Recognizes characters from scanned documents.
2. Intelligent Character Recognition (ICR): Recognizes handwritten characters.
3. Barcode Recognition: Recognizes barcode symbols.
Application of Printed Character Recognition using Multilayer Perceptrons:

 Multilayer Perceptrons (MLPs) are a type of feedforward neural network that can learn
patterns in character data.

Architecture / Key Components of MLP for Printed Character Recognition:

 Artificial Neurons (Perceptrons): Compute a weighted sum of the inputs (z = w1*x1 + w2*x2
+ ... + wn*xn) and apply an activation function to produce the output a = φ(z)
 Activation Functions: Introduce non-linearity (e.g., ReLU, Sigmoid, Tanh, Softmax) to
separate classes and enable learning of complex relationships
 Weight Matrix: Stores the connection strengths between neurons
 Bias Terms: Adjust the output of neurons; updated during training
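The NumPy fragment below is a small illustration of these components for a single layer; the layer sizes and the random weights are placeholders rather than trained values.

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)                        # one flattened 28x28 character image
W = rng.standard_normal((10, 784)) * 0.01  # weight matrix: 10 output neurons x 784 inputs
b = np.zeros(10)                           # bias terms, updated during training

z = W @ x + b                              # weighted sum z = w1*x1 + ... + wn*xn (plus bias)
relu = np.maximum(z, 0)                    # ReLU activation introduces non-linearity
softmax = np.exp(z - z.max()) / np.exp(z - z.max()).sum()   # probabilities over the classes
print(softmax.round(3), softmax.sum())     # the 10 class probabilities sum to 1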

Training Process of printed character recognition using a Multilayer Perceptron (MLP)

Step 1) Data Collection:

 Identify Sources: Collect samples from public datasets, digital archives, books, and scanned
documents.
 Example: The MNIST dataset consists of digit images, 28x28 pixels in size.
 Collect Samples: Gather printed samples in various fonts and sizes.
 Label Data: Annotate each sample with the correct character or word labels.

Step 2) Data Preprocessing:
 Resize images to a standard size (e.g., 28x28 pixels).
 Normalize pixel values to a range of 0 to 1 for faster training.
 Convert images to grayscale to reduce complexity.
Step 3) Feature Extraction:

 Feature extraction transforms raw image data into meaningful, distinguishable features that help
the neural network differentiate between characters or words (a small OpenCV sketch follows below).
 Edge detection
o Use edge detection algorithms such as Canny to highlight the edges of the characters in the
images.
 Contour detection
o Use OpenCV functions to extract contours that identify the shapes and structures of the
characters.
 Perform an XOR operation between the edges and contours
o To enhance the features and make them more distinguishable for the classifier.
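A small OpenCV sketch of this step is given below, assuming OpenCV 4.x; 'character.png' and the Canny thresholds are placeholder assumptions that would be tuned for a real dataset.

import cv2
import numpy as np

gray = cv2.imread('character.png', cv2.IMREAD_GRAYSCALE)   # placeholder preprocessed character image
gray = cv2.resize(gray, (28, 28))

edges = cv2.Canny(gray, 50, 150)                           # edge map of the character strokes

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour_map = np.zeros_like(edges)
cv2.drawContours(contour_map, contours, -1, 255, 1)        # shape/structure of the character

features = cv2.bitwise_xor(edges, contour_map)             # XOR highlights where the two maps differ
feature_vector = features.flatten() / 255.0                # 784-dim vector fed to the MLP classifier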

Step 4) Building the MLP Model

 Create Input Layer:

o Flattens each 28x28-pixel image into a 784-dimensional vector (784 input neurons)
o No activation function (simply passes the input to the hidden layers)

 Add one or more Hidden Layers:

o Multiple layers of artificial neurons (perceptrons): typically 2-3 layers with 128-256 neurons per
layer to process the input data

 Create Output Layer:

o Classification: the output layer produces the final classification; each neuron in this layer
represents a class
o The number of neurons should match the number of character classes (e.g., 10 for digits 0-9)
o Use the softmax activation function for the output layer
o Softmax converts the outputs into probabilities that the input belongs to a particular class of
digits/letters

Step 5) Feed XOR Output to MLP Classifier for Training the Model

 Forward Propagation: Apply the ReLU (Rectified Linear Unit) activation function to the weighted
sum of inputs to introduce non-linearity.
 Define Loss Function: Use categorical cross-entropy, which is suitable for multi-class
classification problems.
 Backpropagation and Optimizer to adjust weights and biases: The Adam optimizer is commonly
used due to its efficiency and good performance.
 Training: Train the model on the training dataset for several epochs (iterations); a Keras sketch of
Steps 4 and 5 follows below.
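Below is a minimal Keras sketch of Steps 4 and 5 under the architecture described above (784-dimensional input, two ReLU hidden layers, 10-way softmax, categorical cross-entropy, Adam). X_train and y_train stand for the XOR feature vectors and one-hot labels prepared earlier and are not defined here.

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),               # flattened 28x28 feature map, no activation
    layers.Dense(256, activation='relu'),       # hidden layers learn non-linear patterns
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),     # one neuron per character class (digits 0-9)
])

model.compile(optimizer='adam',                 # Adam adjusts weights and biases via backpropagation
              loss='categorical_crossentropy',  # multi-class classification loss
              metrics=['accuracy'])

# model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.1)
# model.evaluate(X_test, y_test)                # Step 6: accuracy on unseen test data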

Step 6) Evaluation and Testing

 Metrics: Evaluate the model using accuracy and a confusion matrix to understand its performance.
 Testing: Test the model on a separate test dataset to check its generalization ability (handling
new, unseen data).

Step 7) Post-processing for error correction and text formatting:

- Error correction (spelling, grammar, punctuation)
- Text formatting (font, spacing, capitalization, punctuation)
- Machine learning models (sequence-to-sequence models, language models)
- Post-processing pipelines (text preprocessing, error correction, text formatting)

Topic: Handwritten Neural Recognition

Explain Handwritten Neural Recognition

(1). Definition of Handwritten Neural Recognition
(2). Types – Online and Offline HCR
(3). Techniques – Preprocessing, Feature extraction, Classification
(4). Application of handwritten character recognition using Multilayer Perceptrons – Architecture / Key components (Artificial neurons/perceptrons, Activation functions, Weights, Biases)
(5). Training Process of handwritten character recognition using a Multilayer Perceptron (MLP) – Data collection, Data preprocessing, Feature Extraction (Edge Detection, Contour Detection, XOR Operation), Build the MLP model, Feed XOR Output to MLP Classifier for Training the Model, Evaluation and Testing, Post-processing for error correction and text formatting

Handwritten Neural Recognition:

 Also known as Handwritten Character Recognition (HCR)

 It is a field of research in machine learning and computer vision.
 It involves training neural networks to recognize and classify handwritten characters, digits,
or symbols.
Types of HCR
1. Online HCR: Recognizes handwriting in real-time, using touchscreen or stylus input.
2. Offline HCR: Recognizes handwriting from static images.
Techniques
1. Pre-processing: Normalization, binarization, and noise removal.
2. Feature extraction: Edge detection, contour analysis, and stroke direction.
3. Classification: Softmax, SVM, and k-NN.
Training Process of Handwritten Neural Recognition:

Step 1) Data Collection:

 Identify Sources: Collect samples from public datasets, digital archives, and scanned
handwritten documents.
 Example: The MNIST dataset consists of images of handwritten digits, 28x28 pixels in size.
 Collect Samples: Gather handwriting samples in a variety of writing styles and sizes.
 Label Data: Annotate each sample with the correct character or word labels.

Step 2) Data Preprocessing:
 Resize images to a standard size (e.g., 28x28 pixels).
 Normalize pixel values to a range of 0 to 1 for faster training (a loading and preprocessing sketch follows below).
 Convert images to grayscale to reduce complexity.
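A short sketch of Steps 1 and 2 using the MNIST loader that ships with Keras; flattening into 784-dimensional vectors anticipates the MLP input layer built in Step 4.

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()  # labeled 28x28 handwritten digits

x_train = x_train.astype('float32') / 255.0      # normalize pixel values to 0-1 for faster training
x_test = x_test.astype('float32') / 255.0
x_train = x_train.reshape(-1, 784)               # flatten each image for the MLP input layer
x_test = x_test.reshape(-1, 784)

y_train = tf.keras.utils.to_categorical(y_train, 10)   # one-hot labels for categorical cross-entropy
y_test = tf.keras.utils.to_categorical(y_test, 10)
print(x_train.shape, y_train.shape)              # (60000, 784) (60000, 10)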
Step 3) Feature Extraction:

 Feature extraction transforms raw image data into meaningful, distinguishable features that help
the neural network differentiate between characters or words.
 Edge detection
o Use edge detection algorithms such as Canny to highlight the edges of the characters in the
images.
 Contour detection
o Use OpenCV functions to extract contours that identify the shapes and structures of the
characters.
 Perform an XOR operation between the edges and contours
o To enhance the features and make them more distinguishable for the classifier.
Step 4) Building the MLP Model

 Create Input Layer:

o Flattens each 28x28-pixel image into a 784-dimensional vector (784 input neurons)
o No activation function (simply passes the input to the hidden layers)

 Add one or more Hidden Layers:

o Multiple layers of artificial neurons (perceptrons): typically 2-3 layers with 128-256 neurons per
layer to process the input data

 Create Output Layer:

o Classification: the output layer produces the final classification; each neuron in this layer
represents a class
o The number of neurons should match the number of character classes (e.g., 10 for digits 0-9)
o Use the softmax activation function for the output layer
o Softmax converts the outputs into probabilities that the input belongs to a particular class of
digits/letters

Step 5) Feed XOR Output to MLP Classifier for Training the Model

 Forward Propagation: Apply the ReLU (Rectified Linear Unit) activation function to the weighted
sum of inputs to introduce non-linearity.
 Define Loss Function: Use categorical cross-entropy, which is suitable for multi-class
classification problems.
 Backpropagation and Optimizer to adjust weights and biases: The Adam optimizer is commonly
used due to its efficiency and good performance.
 Training: Train the model on the training dataset for several epochs (iterations).

Step 6) Evaluation and Testing

 Metrics: Evaluate the model using accuracy and a confusion matrix to understand its performance.
 Testing: Test the model on a separate test dataset to check its generalization ability (handling
new, unseen data).

Step 7) Post-processing for error correction and text formatting:

- Error correction (spelling, grammar, punctuation)
- Text formatting (font, spacing, capitalization, punctuation)
- Machine learning models (sequence-to-sequence models, language models)
- Post-processing pipelines (text preprocessing, error correction, text formatting)

Topic: Fuzzy Filtered Neural Networks

Explain Fuzzy Filtered Neural Networks

1. Definition (Diagram compulsory)
2. Architecture / Structure and Workflow – Input data, Fuzzy Preprocessing (Filtering Mechanism with example), Neural network processing layer (Feedforward NN, Learning algorithm)
3. Example: Image Processing – Steps: Input, Fuzzy filtering, Fuzzy rules, Output of Fuzzy Filter, Neural networks, Output of Neural networks
4. Advantages

Definition of Fuzzy Filtered Neural Networks:

• Integration: Combines neural networks with fuzzy logic to filter noise and redundancy in
data, enhancing learning and performance.
• Fuzzy Filtering: Applies fuzzy rules to preprocess input data, ensuring only relevant and
important information is passed to the neural network. This reduces noise and redundancy.

Architecture / Structure and Workflow:

1. Input Data: Raw data, which includes noise and unwanted information, is fed into the system.
2. Fuzzy Preprocessing:
(1). Filtering Mechanism: Applies fuzzy rules to preprocess the data, reducing noise
and enhancing important features using fuzzy membership functions and rules.
Example of a Membership Function: A membership function can classify input data as
"noisy" or "clean".
Example Fuzzy Rule: IF the data is "noisy" THEN reduce its weight.
Example of the Filtering Mechanism: By reducing the weight given to noisy data, the
fuzzy filter suppresses noise and emphasizes important features.
3. Neural Network Processing Layer:
(1). Feedforward Neural Network: After fuzzy preprocessing, the data is fed into a neural
network for learning and prediction.
(2). Learning Algorithm: The neural network uses its learning algorithm (such as
backpropagation) to learn from the filtered data.
Example: In image processing, the neural network would learn to classify objects based
on the enhanced image features provided by the fuzzy filter.

Example: Image Processing

1. Objective: To enhance image quality and classify objects within images.

2. Process:
Step 1) Input: Raw images with noise and unwanted data.
Step 2) Fuzzy Filtering: Apply fuzzy logic to detect and reduce noise, enhancing
significant features using Membership functions. For example, pixel intensity
variations can be "low," "medium," or "high."
Step 3) Fuzzy Rule: IF pixel intensity variation is “high” THEN mark as “edge”.
Step 4) Output of Fuzzy Filter: Enhanced image with noise reduced and important
features, like edges, highlighted.
Step 5) Neural Network: Process the filtered image data for object classification.
Step 6) Output of Neural Network: Classified objects within the image, with
improved accuracy due to the preprocessing (a small sketch of this pipeline follows below).
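A compact sketch of this pipeline is shown below, assuming a random stand-in image, a simple gradient magnitude as the "intensity variation", and an illustrative 0.5 threshold for the fuzzy rule; the resulting feature map would then be passed to a neural network classifier such as the MLP described earlier.

import numpy as np
import skfuzzy as fuzz

image = np.random.default_rng(1).random((28, 28))           # stand-in for a noisy grayscale image

# Local intensity variation (simple gradient magnitude)
gy, gx = np.gradient(image)
variation = np.sqrt(gx**2 + gy**2)

# Fuzzy set "high variation" over the observed variation range
universe = np.linspace(0, variation.max(), 100)
high = fuzz.trimf(universe, [0.5 * variation.max(), variation.max(), variation.max()])

# Fuzzy rule: IF pixel intensity variation is high THEN mark the pixel as an edge
mu_high = np.interp(variation, universe, high)              # membership of each pixel in "high"
edge_map = (mu_high > 0.5).astype(float)                    # output of the fuzzy filter

filtered = np.stack([image, edge_map], axis=-1)             # enhanced features handed to the neural network
print('edge pixels found:', int(edge_map.sum()))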

Benefits / Advantages:

 Improved Accuracy: By reducing noise, the neural network can focus on relevant features,
improving prediction accuracy.
 Enhanced Learning: Filtering out irrelevant data helps the network learn more effectively.
 Flexibility: Can be applied to various domains like image processing, speech recognition, and
data classification.

Topic: Plasma Spectrum Analysis

Explain Plasma Spectrum Analysis

1. Definition of Plasma Spectrum Analysis
2. Role of Soft Computing in Plasma Spectrum Analysis – Fuzzy logic, NN, GA, Hybrid systems
3. Hybrid Approach for plasma spectrum analysis – (1). Fuzzy Logic for Preprocessing – Data filtering, (2). Neural Networks for Prediction – Learning Pattern, (3). Genetic Algorithms – For Optimization using Parameter Tuning
4. Workflow Example for plasma spectrum analysis – Step 1) Collect Raw Data, Step 2) Fuzzy Preprocessing, Step 3) Neural Network Training, Step 4) Optimization, Step 5) Analysis and Prediction
5. Steps Diagram
6. Advantages

Definition of Plasma Spectrum Analysis:

 Analyzes Plasma: Studies high-energy ionized gases.

 Elemental Composition: Identifies elements in the plasma.
 Properties: Determines properties like temperature and density.
 Diagnostic Tool: Used in research fields like astrophysics and materials science.

Role of Soft Computing in Plasma Spectrum Analysis:

 Fuzzy Logic: Handles uncertainty and noise in spectral data, improving data preprocessing and
filtration.
 Neural Networks: Learns complex patterns within spectral data, helping in the accurate
prediction of plasma parameters like temperature and density.
 Genetic Algorithms: Optimizes the parameters and models used in the analysis, ensuring more
robust and accurate prediction outputs.
 Hybrid Systems: Combines these techniques to provide a complete approach to analyse and
understand plasma spectra.
Hybrid Approach for plasma spectrum analysis

(1). Fuzzy Logic for Preprocessing – Data filtering: Uses fuzzy rules to handle noise and
uncertainty in the spectral data.
Example: Classify data points as "noisy" or "significant."
(2). Neural Networks for Prediction – Learning Pattern: Trains on the filtered data to identify
complex patterns and relationships.
Example: Predict plasma parameters like temperature and density based on spectral features
(3). Genetic Algorithms – For Optimization using Parameter Tuning: Optimizes the parameters
of the neural network and fuzzy system for better performance.
Example: Adjust weights and membership functions to minimize error and improve accuracy.

Workflow Example for plasma spectrum analysis:

Step 1) Collect Raw Data: Gather spectral data from the plasma.
Step 2) Fuzzy Preprocessing: Apply fuzzy logic to preprocess the data, reducing noise and
enhancing relevant features.
Step 3) Neural Network Training: Train a neural network with the pre-processed data to learn
patterns and predict plasma properties.
Step 4) Optimization: Use genetic algorithms to fine-tune the system's parameters for optimal
performance.
Step 5) Analysis and Prediction: Analyze new spectral data using the trained and optimized
hybrid system to accurately predict plasma parameters.
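A compact end-to-end sketch of the five steps on synthetic data is given below; the "spectra", the fuzzy noise rule, the network settings and the tiny genetic loop are all illustrative stand-ins rather than a real plasma-diagnostics pipeline (for brevity the test split also serves as the GA's validation set).

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Step 1: collect raw data - synthetic spectra (features) with a known plasma temperature (target)
X = rng.random((400, 20))
temperature = X[:, :5].sum(axis=1) * 1000 + rng.normal(0, 30, 400)

# Step 2: fuzzy preprocessing - grade how "noisy" each sample is and discard the worst ones
noise_score = np.abs(X - X.mean(axis=0)).mean(axis=1)                  # crude noise indicator
mu_noisy = (noise_score - noise_score.min()) / (noise_score.max() - noise_score.min())
keep = mu_noisy < 0.8                                                   # rule: IF sample is noisy THEN discard
X_train, X_test, y_train, y_test = train_test_split(X[keep], temperature[keep], random_state=0)

# Steps 3-4: neural-network training wrapped in a toy genetic search over two hyperparameters
def fitness(hidden, alpha):
    net = MLPRegressor(hidden_layer_sizes=(hidden,), alpha=alpha,
                       max_iter=500, random_state=0).fit(X_train, y_train)
    return -mean_squared_error(y_test, net.predict(X_test))             # higher fitness = lower error

population = [(int(rng.integers(8, 64)), 10 ** rng.uniform(-5, -1)) for _ in range(6)]
for _ in range(2):                                                       # a couple of generations
    parents = sorted(population, key=lambda p: fitness(*p), reverse=True)[:2]
    children = [(max(8, p[0] + int(rng.integers(-8, 9))),                # mutate the best individuals
                 p[1] * 10 ** rng.uniform(-0.5, 0.5)) for p in parents for _ in range(2)]
    population = parents + children
best_hidden, best_alpha = max(population, key=lambda p: fitness(*p))

# Step 5: analysis and prediction with the optimized model
final = MLPRegressor(hidden_layer_sizes=(best_hidden,), alpha=best_alpha,
                     max_iter=500, random_state=0).fit(X_train, y_train)
print('optimized hidden units:', best_hidden, ' test MSE:', round(mean_squared_error(y_test, final.predict(X_test)), 1))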

Benefits / Advantages:
 Improved Accuracy: Enhanced data quality and pattern recognition lead to more precise
predictions.
 Robustness: The system can adapt to different types of spectral data and varying conditions.
 Efficiency: Optimization reduces computational time and effort.

Topic: Soft Computing for Color Recipe Prediction

Explain Soft Computing for Color Recipe Prediction

1. Color Recipe Prediction definition
2. How it works
3. Soft Computing Techniques used in CRP – NN, DL, GA, FL
4. CANFIS (Co-Active Neuro-Fuzzy Inference System) for Color Recipe Prediction
5. Benefits / Advantages / Reasons for choosing CANFIS – Handling Uncertainty, Accurate Predictions, Reduced Design time, Supervised learning
6. Diagram – Compulsory
7. Architecture / Key components – Input Layer, Fuzzy layer, Neuro-Fuzzy layer, Inference layer, Output layer
8. Steps Diagram
9. Evaluation metrics – Accuracy, Harmony, Color difference

 Color recipes are combinations of colors used to achieve a specific visual effect or aesthetic.
Color Recipe Prediction:

 Automatically generates color palettes.

 Suggests matching colors.
 Predicts color trends.
 Creates visually appealing designs.
 Color Recipe Prediction (CRP) in soft computing refers to the use of computational
intelligence techniques to predict and generate color recipes
How It Works:

1. Computer analyzes existing color combinations.

2. Learns patterns and relationships between colors.
3. Predicts new, harmonious color recipes.

Soft Computing Techniques used in CRP:


CRP employs various soft computing techniques, including:
1. Neural Networks (NN)

2. Deep Learning (DL)


3. Genetic Algorithms (GA)
4. Fuzzy Logic (FL)

5. Evolutionary Computation (EC)

CANFIS (Co-Active Neuro-Fuzzy Inference System) for Color Recipe Prediction:

 CANFIS (Co-Active Neuro-Fuzzy Inference System) is a hybrid intelligent system that
combines the benefits of neural networks and fuzzy logic for Color Recipe Prediction (CRP).
Benefits / Advantages / Reasons for choosing CANFIS:

(1). Handling Uncertainty: Fuzzy logic in CANFIS efficiently handles color ambiguity and
uncertainty.
(2). Accurate Predictions: CANFIS predicts harmonious color recipes.
(3). Reduced Design Time: Quick and effective color recipe generation.
(4). High-Dimensional Data: CANFIS efficiently handles high-dimensional color data.
(5). Supervised Learning: CANFIS excels in supervised learning tasks like CRP.
Architecture / Key components:
(1). Input Layer: This layer takes color features like RGB (Red, Green, Blue) or HSV (Hue,
Saturation, Value) values as input
(2). Fuzzy Layer: Input colors are converted into fuzzy sets. This helps handle any uncertainties
in the color data.
(3). Neuro-Fuzzy Layer: This is an adaptive neural network where fuzzy weights are adjusted
based on learning from data.
(4). Inference Layer: Uses a Sugeno fuzzy inference system to predict color recipes.
(5). Output Layer: Outputs the predicted color recipe, essentially a mix of colors that achieve
the desired target.
Example: Predicting a Custom Shade of Blue for Fabric Dyeing (optional)

Step 1: Input Data
Target Color: RGB(0, 0, 255) - Bright Blue
Available Colorants: Direct Blue 85, Direct Blue 79, Direct Blue 1
Step 2: Fuzzification
 Convert the target color to fuzzy sets:
o Bright Blue: 0.8
o Deep Blue: 0.2
Step 3: Neuro-Fuzzy Layer
 Train the network with historical color data
 Adjust the fuzzy weights
Step 4: Inference Layer
Apply fuzzy rules to predict colorant concentrations:
Step 5: Output Layer
 Predicted Recipe: 60% Direct Blue 85, 30% Direct Blue 79, 10% Direct Blue 1
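A hand-rolled, Sugeno-style illustration of Steps 2-5 is given below. The two rule consequents (a colorant mix for "bright blue" and one for "deep blue") are hypothetical values chosen so that the firing strengths from the example (0.8 and 0.2) reproduce the 60/30/10 recipe; a real CANFIS model would learn these consequents from historical dyeing data.

import numpy as np

colorants = ['Direct Blue 85', 'Direct Blue 79', 'Direct Blue 1']

# Step 2: fuzzification of the target color (memberships taken from the example above)
mu = {'bright_blue': 0.8, 'deep_blue': 0.2}

# Steps 3-4: assumed rule consequents and weighted-average (Sugeno-style) inference
consequents = {
    'bright_blue': np.array([0.70, 0.25, 0.05]),   # IF target is bright blue THEN use mostly Direct Blue 85
    'deep_blue':   np.array([0.20, 0.50, 0.30]),   # IF target is deep blue THEN shift toward Direct Blue 79/1
}
weights = np.array([mu['bright_blue'], mu['deep_blue']])
recipe = (weights[:, None] * np.array(list(consequents.values()))).sum(axis=0) / weights.sum()

# Step 5: output layer - the predicted colorant concentrations
for name, share in zip(colorants, recipe):
    print(f'{name}: {share:.0%}')                  # 60% / 30% / 10%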

Evaluation Metrics

 Accuracy: Measures how closely the predicted color matches the target color.
 Harmony: Assesses the visual appeal and coherence of the color recipe.
 Color Difference (ΔE): Quantifies the difference between the predicted and target colors;
lower values indicate better matches.
 User Satisfaction: End-user's satisfaction with the predicted color recipe
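A small sketch of the ΔE metric using scikit-image's CIEDE2000 implementation; the predicted RGB value is a made-up stand-in for the color mixed from a predicted recipe.

import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

target_rgb = np.array([[[0.0, 0.0, 1.0]]])        # bright blue target, RGB scaled to 0-1
predicted_rgb = np.array([[[0.05, 0.02, 0.93]]])  # hypothetical color produced by the predicted recipe

delta_e = deltaE_ciede2000(rgb2lab(target_rgb), rgb2lab(predicted_rgb))
print(f'Color difference ΔE (CIEDE2000): {delta_e[0, 0]:.2f}')   # lower values indicate a closer match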
