Soft Computing Lab Record
NAME:
REGISTER NO:
ROLL NO:
PROGRAMME:
YEAR:
SEMESTER:
An Autonomous Institution
CERTIFICATE
Name …………………………………………………………………………………………………….
Certified that this is the bonafide record of work done by the above student in 191CSV78 – SOFT COMPUTING during the academic year 2024-25.
Signature of Head of the Department Signature of Course Incharge.
Submitted for the University Practical Examination held on.................................. at VEL TECH
MULTITECH Dr.RANGARAJAN Dr.SAKUNTHALA ENGINEERING COLLEGE, #42,
AVADI-VELTECH ROAD, AVADI, CHENNAI-600062.
Signature of Examiners
Date …………….
An Autonomous Institution
191CSV78
SOFT COMPUTING LABORATORY
Mission
M1 - To provide a good teaching and learning environment with a conducive research
atmosphere in the field of Computer Science and Engineering.
M2 - To propagate lifelong learning.
M3 - To impart the right proportion of knowledge, attitudes and ethics in students
to enable them to take up positions of responsibility in society and make
significant contributions.
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
PSO's PROGRAMME SPECIFIC OUTCOMES
PSO1 - An ability to apply, design and development of application oriented software systems and to test and document in accordance with Computer Science and Engineering.
PSO2 - The design techniques, analysis and the building, testing, operation and maintenance of networks, databases, security and computer systems (both hardware and software).
PSO3 - An ability to identify, formulate and solve hardware and software problems using sound computer engineering principles.
PROGRAMME OUTCOMES (POs)
PO’s PROGRAMME OUTCOMES
PO1 - Engineering Knowledge: Apply knowledge of mathematics, science, engineering fundamentals and an engineering specialization to the solution of complex engineering problems.
PO2 - Problem Analysis: Identify, formulate, review research literature and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO3 - Design / Development of Solutions: Design solutions for complex engineering problems and design system components or processes that meet specified needs with appropriate consideration for public health and safety, cultural, societal, and environmental considerations.
PO4 - Conduct Investigations of Complex Problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.
PO5 - Modern Tool Usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modeling to complex engineering activities with an understanding of the limitations.
PO6 - The Engineer and Society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
PO7 - Environment and Sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.
PO8 - Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.
PO9 - Individual and Team Work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.
PO10 - Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
PO11 - Project Management and Finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.
PO12 - Life-long Learning: Recognize the need for, and have the preparation and ability to engage in independent and life-long learning in the broadest context of technological change.
COURSE OBJECTIVES
COURSE OUTCOMES
Course Outcome    CO Statements
CO1
CO2
CO3
CO4
CO5

CO-PO/PSO MAPPING
CO    PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO2  PSO3
CO1
CO2
CO3
CO4
CO5
COURSE PLAN
Ex. No    Name of the Exercise                                                      CO
1         Implementation of fuzzy control inference system                         CO3
2         Programming exercise on classification with a discrete perceptron        CO1, CO2
3         Implementation of XOR with backpropagation algorithm                     CO1, CO2
4         Implementation of self-organizing maps for a specific application        CO4
5         Programming exercises on maximizing a function using Genetic algorithm   CO3, CO4
6         Implementation of two input sine function                                CO3, CO4
CONTENT
EXP 1 Implementation of fuzzy control inference system
Date:
AIM: To understand the concept of a fuzzy control inference system using the Python programming
language.
Algorithm:
Step 1: Define fuzzy sets for the input and output variables.
Step 2: Create Fuzzy Rules
Step 3: Perform Fuzzy Inference
Step 4: Defuzzify the output fuzzy sets to obtain a crisp output value.
Step 5: Use the defuzzified output as the control action.
Step 6: Implement Control Action.
Step 7: Repeat the above steps in a loop as needed for real-time
control. End of the fuzzy control algorithm.
First, you'll need to install the scikit-fuzzy library if you haven't already. You can install it using
the following command:
pip install scikit-fuzzy
Now, let's implement the fuzzy inference system:
PROGRAM:
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl
rule2 = ctrl.Rule(temperature['medium'], fan_speed['medium'])
rule3 = ctrl.Rule(temperature['high'], fan_speed['high'])
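The listing above is truncated in this record, so the fuzzy variables referenced by rule2 and rule3 are not shown. Below is a minimal runnable sketch of the complete system, assuming a temperature-to-fan-speed controller with triangular membership functions; the universes of discourse, the membership ranges and the test input of 28 are illustrative assumptions, not the exact values used in the original program.

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Step 1: fuzzy input and output variables (universes are assumed)
temperature = ctrl.Antecedent(np.arange(0, 41, 1), 'temperature')
fan_speed = ctrl.Consequent(np.arange(0, 101, 1), 'fan_speed')

# Triangular membership functions for the input and output
temperature['low'] = fuzz.trimf(temperature.universe, [0, 0, 20])
temperature['medium'] = fuzz.trimf(temperature.universe, [10, 20, 30])
temperature['high'] = fuzz.trimf(temperature.universe, [20, 40, 40])
fan_speed['low'] = fuzz.trimf(fan_speed.universe, [0, 0, 50])
fan_speed['medium'] = fuzz.trimf(fan_speed.universe, [25, 50, 75])
fan_speed['high'] = fuzz.trimf(fan_speed.universe, [50, 100, 100])

# Step 2: fuzzy rules (rule2 and rule3 match the fragment above)
rule1 = ctrl.Rule(temperature['low'], fan_speed['low'])
rule2 = ctrl.Rule(temperature['medium'], fan_speed['medium'])
rule3 = ctrl.Rule(temperature['high'], fan_speed['high'])

# Steps 3-5: build the control system, infer and defuzzify
fan_ctrl = ctrl.ControlSystem([rule1, rule2, rule3])
sim = ctrl.ControlSystemSimulation(fan_ctrl)
sim.input['temperature'] = 28   # crisp input (assumed test value)
sim.compute()                   # fuzzification, inference and defuzzification
print('Fan speed:', sim.output['fan_speed'])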
Output:
Result: Thus the above program for the fuzzy control inference system executed successfully with
the desired output.
EXP 2 Programming exercise on classification with a discrete perceptron
AIM: To understand the concept of classification with a discrete perceptron using the Python
programming language.
Algorithm:
Step 1: Initialize weights W and bias b to small random values
Step 2: Define learning rate
Step 3: Define the number of training epochs
Step 4: Define the training data (features and labels).
Step 5: Define the perceptron training algorithm
Step 6: The perceptron is now trained, and you can use it to make predictions
PROGRAM:
import numpy as np

class DiscretePerceptron:
    def __init__(self, input_size):
        self.weights = np.zeros(input_size)
        self.bias = 0

def main():
    # Generate some example data points for two classes
    class_0 = np.array([[2, 3], [3, 2], [1, 1]])
    class_1 = np.array([[5, 7], [6, 8], [7, 6]])

    # Combine the data points and create labels (0 for class 0, 1 for class 1)
    inputs = np.vstack((class_0, class_1))
    targets = np.array([0, 0, 0, 1, 1, 1])

    # Train the perceptron
    perceptron.train(inputs, targets)
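The train and predict methods, and the creation of the perceptron object, are not reproduced in the listing above. The following is a minimal runnable sketch of the same exercise; the step (hard-limit) activation, the learning rate of 0.1 and the epoch count of 20 are illustrative assumptions.

import numpy as np

class DiscretePerceptron:
    def __init__(self, input_size, learning_rate=0.1, epochs=20):
        self.weights = np.zeros(input_size)
        self.bias = 0.0
        self.learning_rate = learning_rate
        self.epochs = epochs

    def predict(self, x):
        # Discrete (step) activation: class 1 if the weighted sum exceeds 0, else class 0
        return 1 if np.dot(self.weights, x) + self.bias > 0 else 0

    def train(self, inputs, targets):
        # Classic perceptron learning rule
        for _ in range(self.epochs):
            for x, t in zip(inputs, targets):
                error = t - self.predict(x)
                self.weights += self.learning_rate * error * x
                self.bias += self.learning_rate * error

def main():
    class_0 = np.array([[2, 3], [3, 2], [1, 1]])
    class_1 = np.array([[5, 7], [6, 8], [7, 6]])
    inputs = np.vstack((class_0, class_1))
    targets = np.array([0, 0, 0, 1, 1, 1])

    perceptron = DiscretePerceptron(input_size=2)
    perceptron.train(inputs, targets)

    # Classify the training points with the trained perceptron
    for x, t in zip(inputs, targets):
        print(x, '-> predicted:', perceptron.predict(x), 'actual:', t)

if __name__ == '__main__':
    main()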
Output:
Result: Thus the above program for classification with a discrete perceptron executed successfully
with the desired output.
EXP 3 Implementation of XOR with backpropagation algorithm
AIM: To understand the concept of XOR with the backpropagation algorithm using the Python
programming language.
Algorithm:
1. Initialize the neural network with random weights and biases.
2. Define the training data for XOR
3. Set hyperparameters:
Learning rate (alpha)
Number of epochs (iterations)
Number of hidden layers and neurons per layer
Activation function (e.g., sigmoid)
4. Repeat for each epoch:
a. Initialize the total error for this epoch to 0.
b. For each training example in the dataset:
i. Forward propagation:
Compute the weighted sum of inputs and biases for each neuron in
the hidden layer(s) and output layer.
Apply the activation function to each neuron's output.
ii. Compute the error between the predicted output and the actual output for the
current training example.
iii. Update the total error for this epoch with the squared error from step ii.
iv. Backpropagation:
Compute the gradient of the error with respect to the output layer neurons.
Backpropagate the gradients through the hidden layers.
Update the weights and biases using the gradients and the learning rate.
c. Calculate the average error for this epoch by dividing the total error by the number
of training examples.
d. Check if the average error is below a predefined threshold or if the desired
accuracy is reached.
- If yes, exit the training loop.
5. Once training is complete, you can use the trained neural network to predict XOR
values for new inputs.
6. End.
PROGRAM:
import numpy as np

def sigmoid_derivative(x):
    return x * (1 - x)

# Training loop
for _ in range(epochs):
    # Forward propagation
    hidden_layer_activation = np.dot(input_data, hidden_weights)
    hidden_layer_output = sigmoid(hidden_layer_activation)

    # Calculate error
    error = target_data - predicted_output

    # Backpropagation
    output_delta = error * sigmoid_derivative(predicted_output)
    hidden_layer_error = output_delta.dot(output_weights.T)
    hidden_layer_delta = hidden_layer_error * sigmoid_derivative(hidden_layer_output)

    # Update weights
    output_weights += hidden_layer_output.T.dot(output_delta) * learning_rate
    hidden_weights += input_data.T.dot(hidden_layer_delta) * learning_rate
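The sigmoid function, the XOR training data, the weight initialization and the output-layer forward pass are missing from the listing above. The sketch below completes it; the single hidden layer of 4 sigmoid neurons, the learning rate of 0.5 and the 10,000 epochs are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is already a sigmoid output, so the derivative is x * (1 - x)
    return x * (1 - x)

# XOR truth table
input_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
target_data = np.array([[0], [1], [1], [0]])

np.random.seed(0)
hidden_weights = np.random.uniform(size=(2, 4))   # 2 inputs -> 4 hidden neurons
hidden_bias = np.random.uniform(size=(1, 4))
output_weights = np.random.uniform(size=(4, 1))   # 4 hidden neurons -> 1 output
output_bias = np.random.uniform(size=(1, 1))

learning_rate = 0.5
epochs = 10000

for _ in range(epochs):
    # Forward propagation
    hidden_layer_output = sigmoid(np.dot(input_data, hidden_weights) + hidden_bias)
    predicted_output = sigmoid(np.dot(hidden_layer_output, output_weights) + output_bias)

    # Backpropagation of the error
    error = target_data - predicted_output
    output_delta = error * sigmoid_derivative(predicted_output)
    hidden_layer_delta = output_delta.dot(output_weights.T) * sigmoid_derivative(hidden_layer_output)

    # Gradient-descent updates of weights and biases
    output_weights += hidden_layer_output.T.dot(output_delta) * learning_rate
    output_bias += np.sum(output_delta, axis=0, keepdims=True) * learning_rate
    hidden_weights += input_data.T.dot(hidden_layer_delta) * learning_rate
    hidden_bias += np.sum(hidden_layer_delta, axis=0, keepdims=True) * learning_rate

print(np.round(predicted_output, 3))   # should approach [0, 1, 1, 0]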
Output:
Result: Thus the above program for XOR with the backpropagation algorithm executed successfully with
the desired output.
EXP 4 Implementation of self-organizing maps for a specific application.
AIM: To understand the concept of self-organizing maps for a specific application using the
Python programming language.
Algorithm:
1. Initialize the SOM grid with random weight vectors.
2. For each input vector, find the Best Matching Unit (BMU), the grid node whose weight vector is closest to the input.
3. Update the weights of the BMU and its neighbours, pulling them toward the input vector according to the learning rate and the neighbourhood function.
4. Repeat the training process until convergence (or a predetermined number of epochs).
5. Decay the learning rate and the neighbourhood radius as training progresses.
6. Visualization (optional):
   - Visualize the trained SOM grid to understand the data distribution and clustering.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

# Generate some sample data (replace this with your own dataset)
np.random.seed(42)
data = np.random.rand(100, 2)

# SOM parameters
grid_size = (10, 10)   # Grid size of the SOM
input_dim = 2          # Dimensionality of the input data
learning_rate = 0.2
num_epochs = 1000

# Training loop
for epoch in range(num_epochs):
    for input_vector in data:
        # Find the Best Matching Unit (BMU)
        distances = np.linalg.norm(weight_matrix - input_vector, axis=-1)
        bmu_coords = np.unravel_index(np.argmin(distances), distances.shape)
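The initialization of weight_matrix and the neighbourhood update are not reproduced above, so the training loop as printed is incomplete. The following is a minimal runnable sketch of the whole experiment; the Gaussian neighbourhood function and the linear decay of the learning rate and radius are illustrative assumptions.

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)
data = np.random.rand(100, 2)             # sample 2-D data

grid_size = (10, 10)
input_dim = 2
learning_rate = 0.2
sigma = 2.0                               # initial neighbourhood radius (assumed)
num_epochs = 100                          # reduced from 1000 to keep the sketch fast

# One weight vector per grid node, initialized randomly
weight_matrix = np.random.rand(grid_size[0], grid_size[1], input_dim)

# Grid coordinates, used by the neighbourhood function
grid_i, grid_j = np.meshgrid(np.arange(grid_size[0]), np.arange(grid_size[1]), indexing='ij')

for epoch in range(num_epochs):
    # Linearly decay the learning rate and the neighbourhood radius
    lr = learning_rate * (1 - epoch / num_epochs)
    sg = sigma * (1 - epoch / num_epochs) + 0.5
    for input_vector in data:
        # Best Matching Unit: node whose weight vector is closest to the input
        distances = np.linalg.norm(weight_matrix - input_vector, axis=-1)
        bmu = np.unravel_index(np.argmin(distances), distances.shape)
        # Gaussian neighbourhood centred on the BMU
        grid_dist_sq = (grid_i - bmu[0]) ** 2 + (grid_j - bmu[1]) ** 2
        influence = np.exp(-grid_dist_sq / (2 * sg ** 2))
        # Pull every node toward the input, weighted by its influence
        weight_matrix += lr * influence[..., np.newaxis] * (input_vector - weight_matrix)

# Visualize data and the learned SOM nodes in input space
plt.scatter(data[:, 0], data[:, 1], s=10, alpha=0.4, label='data')
plt.scatter(weight_matrix[..., 0].ravel(), weight_matrix[..., 1].ravel(), c='red', s=15, label='SOM nodes')
plt.legend()
plt.show()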
Output:
Result: Thus the above program for self-organizing maps executed successfully with the desired
output.
EXP 5 Programming exercises on maximizing a function using Genetic algorithm.
AIM: To understand the concept of maximizing a function using a Genetic Algorithm in Python.
Algorithm:
1. Initialize the population with random solutions.
2. Define the fitness function to evaluate how good each solution is.
3. Set the maximum number of generations.
4. Set the mutation rate (probability of changing a gene in an individual).
5. Set the crossover rate (probability of two individuals mating).
6. Repeat for each generation:
a. Evaluate the fitness of each individual in the population using the fitness function.
b. Select the best individuals based on their fitness to become parents.
c. Create a new generation by crossover (mixing) the genes of the parents.
d. Apply mutation to some individuals in the new generation.
e. Replace the old population with the new generation.
7. Repeat for the specified number of generations.
8. Find and return the individual with the highest fitness as the best solution.
PROGRAM:
import random

# Genetic Algorithm
def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    population = new_population
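The initialization, selection, crossover and mutation helpers, as well as the objective being maximized, are not reproduced in this record. The sketch below is a complete minimal version; the objective f(x) = x * sin(x) on [0, 10], the elitist selection of the fitter half, the arithmetic crossover and the 10% mutation rate are all illustrative assumptions.

import random
import math

# Illustrative objective (assumed): maximize f(x) = x * sin(x)
def fitness(x):
    return x * math.sin(x)

def initialize_population(pop_size, lower_bound, upper_bound):
    return [random.uniform(lower_bound, upper_bound) for _ in range(pop_size)]

def select_parents(population, num_parents):
    # Keep the fittest individuals as parents
    return sorted(population, key=fitness, reverse=True)[:num_parents]

def crossover(parent1, parent2):
    # Arithmetic crossover: blend the two parent genes
    alpha = random.random()
    return alpha * parent1 + (1 - alpha) * parent2

def mutate(individual, mutation_rate, lower_bound, upper_bound):
    if random.random() < mutation_rate:
        individual += random.uniform(-0.5, 0.5)
    return max(lower_bound, min(upper_bound, individual))

def genetic_algorithm(generations, pop_size, lower_bound, upper_bound, mutation_rate=0.1):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    for gen in range(generations):
        parents = select_parents(population, pop_size // 2)
        new_population = []
        while len(new_population) < pop_size:
            p1, p2 = random.sample(parents, 2)
            child = mutate(crossover(p1, p2), mutation_rate, lower_bound, upper_bound)
            new_population.append(child)
        population = new_population
        best = max(population, key=fitness)
        print(f'Generation {gen + 1}: best x = {best:.4f}, fitness = {fitness(best):.4f}')
    return max(population, key=fitness)

best = genetic_algorithm(generations=30, pop_size=50, lower_bound=0, upper_bound=10)
print('Best solution:', best, 'fitness:', fitness(best))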
Output:
Result: Thus the above program for maximizing a function using a genetic algorithm executed
successfully with the desired output.
EXP 6 Implementation of two input sine function.
AIM: To understand the implementation of a two-input sine function using a Genetic Algorithm.
Algorithm:
# Genetic Algorithm for Two-Input Sine Function Optimization
1. Define the fitness function
2. Initialize the population
3. Define functions for genetic operations
4. Implement the main genetic algorithm loop
5. Print the final best solution found by the genetic algorithm.
PROGRAM
import random
import math

# Perform mutation in the population
def mutate(individual, mutation_prob=0.01):
    x, y = individual
    if random.random() < mutation_prob:
        x += random.uniform(-0.1, 0.1)
    if random.random() < mutation_prob:
        y += random.uniform(-0.1, 0.1)
    return x, y

# Genetic Algorithm
def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    population = new_population
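Only the mutation operator survives in the listing above. The sketch below completes the algorithm so it runs end to end; consistent with the printed output, the fitness is taken to be f(x, y) = sin(x) + sin(y), while the search range of [-2*pi, 2*pi], the elitist selection and the coordinate-swap crossover are illustrative assumptions.

import random
import math

# Two-input sine objective; assumed to be f(x, y) = sin(x) + sin(y), consistent with the output
def fitness(individual):
    x, y = individual
    return math.sin(x) + math.sin(y)

def initialize_population(pop_size, lower_bound, upper_bound):
    return [(random.uniform(lower_bound, upper_bound), random.uniform(lower_bound, upper_bound))
            for _ in range(pop_size)]

def crossover(parent1, parent2):
    # Take x from one parent and y from the other
    return (parent1[0], parent2[1])

def mutate(individual, mutation_prob=0.01):
    x, y = individual
    if random.random() < mutation_prob:
        x += random.uniform(-0.1, 0.1)
    if random.random() < mutation_prob:
        y += random.uniform(-0.1, 0.1)
    return x, y

def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    for gen in range(generations):
        # Keep the fitter half as parents (maximization)
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        new_population = []
        while len(new_population) < pop_size:
            p1, p2 = random.sample(parents, 2)
            new_population.append(mutate(crossover(p1, p2)))
        population = new_population
        best = max(population, key=fitness)
        print(f'Generation {gen + 1}: Best individual - {best}, Fitness - {fitness(best)}')
    return max(population, key=fitness)

best = genetic_algorithm(generations=30, pop_size=100, lower_bound=-2 * math.pi, upper_bound=2 * math.pi)
print('Best solution:', best, 'Fitness:', fitness(best))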
Output:
Generation 1: Best individual - (-5.806639394411164, 2.957052015269947), Fitness - 0.6422076600091893
Generation 2: Best individual - (-3.7004701839702663, 4.4413546380285975), Fitness - -0.43325964387284566
Generation 3: Best individual - (-3.7004701839702663, 5.464316418988149), Fitness - -0.20013884834113005
Generation 4: Best individual - (5.481791654037208, 3.3095163097626763), Fitness - -0.8854619344317294
Generation 5: Best individual - (4.897491323013819, 3.3095163097626763), Fitness - -1.150052992911647
Generation 6: Best individual - (4.976671184995054, 3.3095163097626763), Fitness - -1.1324158225088536
Generation 7: Best individual - (3.9420165382340246, 3.3095163097626763), Fitness - -0.8847869227205696
Generation 8: Best individual - (4.198534144176835, 5.481189847293816), Fitness - -1.5896010966615468
Generation 9: Best individual - (4.198534144176835, 5.481189847293816), Fitness - -1.5896010966615468
Generation 10: Best individual - (4.198534144176835, 5.481189847293816), Fitness - -1.5896010966615468
Generation 11: Best individual - (4.34542752972704, 5.481189847293816), Fitness - -1.6521667383260996
Generation 12: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 13: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 14: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 15: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 16: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 17: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 18: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 19: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 20: Best individual - (4.266603727856264, 5.481189847293816), Fitness - -1.621017281069609
Generation 21: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 22: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 23: Best individual - (4.976671184995054, 5.481189847293816), Fitness - -1.6840251615701645
Generation 24: Best individual - (4.897491323013819, 5.481189847293816), Fitness - -1.7016623319729578
Generation 25: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 26: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 27: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 28: Best individual - (-1.2170450032547304, 4.380981364013678), Fitness - -1.8836650637984946
Generation 29: Best individual - (-1.2170450032547304, 4.380981364013678), Fitness - -1.8836650637984946
Generation 30: Best individual - (-1.2170450032547304, 4.380981364013678), Fitness - -1.8836650637984946
Result: Thus the above program for the implementation of a two-input sine function using a genetic
algorithm executed successfully.
EXP 7 Implementation of three input nonlinear function
AIM: To understand the implementation of a three-input nonlinear function using a Genetic Algorithm.
Algorithm:
# Genetic Algorithm for Three-Input Nonlinear Function Optimization
1. Define the fitness function.
2. Initialize the population.
3. Define functions for genetic operations.
4. Implement the main genetic algorithm loop.
5. Print the final best solution found by the genetic algorithm.
PROGRAM
import random

# Genetic Algorithm
def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    population = new_population

generations = 50
pop_size = 100
lower_bound = -1
upper_bound = 1
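The fitness function and the genetic operators are not reproduced in this record, and the specific three-input nonlinear objective is not shown. The sketch below therefore uses a hypothetical placeholder objective, f(x, y, z) = x*y*z - (x^2 + y^2 + z^2), simply to make the loop runnable; the uniform crossover, the 5% mutation rate and the elitist selection are likewise illustrative assumptions.

import random

# Hypothetical placeholder objective; the original nonlinear function is not shown in this record
def fitness(individual):
    x, y, z = individual
    return x * y * z - (x ** 2 + y ** 2 + z ** 2)

def initialize_population(pop_size, lower_bound, upper_bound):
    return [tuple(random.uniform(lower_bound, upper_bound) for _ in range(3))
            for _ in range(pop_size)]

def crossover(parent1, parent2):
    # Uniform crossover: each gene is taken from either parent at random
    return tuple(random.choice(genes) for genes in zip(parent1, parent2))

def mutate(individual, mutation_prob=0.05):
    # Perturb each gene with a small random step, with probability mutation_prob
    return tuple(g + random.uniform(-0.1, 0.1) if random.random() < mutation_prob else g
                 for g in individual)

def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    for gen in range(generations):
        # Keep the fitter half as parents (maximization)
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        new_population = []
        while len(new_population) < pop_size:
            p1, p2 = random.sample(parents, 2)
            new_population.append(mutate(crossover(p1, p2)))
        population = new_population
        best = max(population, key=fitness)
        print(f'Generation {gen + 1}: Best individual - {best}, Fitness - {fitness(best)}')
    return max(population, key=fitness)

generations = 50
pop_size = 100
lower_bound = -1
upper_bound = 1
best = genetic_algorithm(generations, pop_size, lower_bound, upper_bound)
print('Best solution:', best, 'Fitness:', fitness(best))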
Output:
Generation 1: Best individual - (-0.05856140717606745, 0.031920444393859077, 0.1749430018353162), Fitness - 23.638248996079486
Generation 2: Best individual - (-0.0435664961811546, -0.21954873032302427, -0.16051643562429213), Fitness - 16.78431041370941
Generation 3: Best individual - (-0.08047256311183462, 0.08748607229595336, -0.033675015554337134), Fitness - 27.03730906535697
Generation 4: Best individual - (0.09173450837429278, 0.2701480951847052, -0.04923516012359691), Fitness - 16.563306003309293
Generation 5: Best individual - (0.06931525412773312, 0.05650887471327237, -0.6038978838220976), Fitness - 10.126283111803192
Generation 6: Best individual - (0.09551296321256389, 0.3037721680736508, 0.09828297902264888), Fitness - 12.980004103640377
Generation 7: Best individual - (0.36966788594966404, 0.020338697069605532, 0.0003553226927579256), Fitness - 12.951119752237492
Generation 8: Best individual - (-0.13483009446855374, 0.031089470762199117, -0.5756450454760131), Fitness - 7.188833165750407
Generation 9: Best individual - (-0.063607907585292, -0.04456979821453749, -0.45149742141484683), Fitness - 9.073275131295658
Generation 10: Best individual - (0.011176844816005782, 0.38683718057120575, -0.4586668963552707), Fitness - -7.626399941503013
Generation 11: Best individual - (0.42384530453390673, -0.017961106838255136, -0.4918457018441281), Fitness - -9.349262068882442
Generation 12: Best individual - (0.33991169237820235, 0.31387850909754794, -0.4929268418150241), Fitness - -19.707456018578803
Generation 13: Best individual - (0.4287928945546883, 0.5471800246826355, -0.2740060489612387), Fitness - -20.6405201860691
Generation 14: Best individual - (0.4287928945546883, 0.5471800246826355, -0.2740060489612387), Fitness - -20.6405201860691
Generation 15: Best individual - (0.4287928945546883, 0.5471800246826355, -0.2740060489612387), Fitness - -20.6405201860691
Generation 16: Best individual - (0.4112385766210301, 0.5040715286796309, -0.3178766342306278), Fitness - -23.14240113919352
Generation 17: Best individual - (0.4147812368137274, 0.5041212646314481, -0.33531860658801094), Fitness - -24.24331785854991
Generation 18: Best individual - (0.4147812368137274, 0.5041212646314481, -0.33531860658801094), Fitness - -24.24331785854991
Generation 19: Best individual - (0.4147812368137274, 0.5041212646314481, -0.33531860658801094), Fitness - -24.24331785854991
Generation 20: Best individual - (0.30622472006746704, 0.4950670130302236, -0.44378439110485673), Fitness - -23.373347854338835
Generation 21: Best individual - (0.30622472006746704, 0.4950670130302236, -0.44378439110485673), Fitness - -23.373347854338835
Generation 22: Best individual - (0.3063202575245222, 0.4950822379662878, -0.4437892287140689), Fitness - -23.379191958933983
Generation 23: Best individual - (0.33335312358816604, 0.49763301297219154, -0.43647155864564097), Fitness - -24.763115313088132
Generation 24: Best individual - (0.40994188262594977, 0.4096825578747779, -0.42523970750461587), Fitness - -26.30751096118025
Generation 25: Best individual - (0.39756456561304454, 0.48504626799164063, -0.4088104102096408), Fitness - -26.9186217943733
Generation 26: Best individual - (0.38380825053477785, 0.5006633169359437, -0.4288824223540211), Fitness - -27.051357458861553
Generation 27: Best individual - (0.4057387303749639, 0.5681504161728289, -0.4242177696903832), Fitness - -26.948969587424035
Generation 28: Best individual - (0.40511387532462084, 0.47856788928174143, -0.36781261632617945), Fitness - -25.457363863631336
Generation 29: Best individual - (0.40671661145515176, 0.4783203340919677, -0.3735290334571525), Fitness - -25.777462228224486
Generation 30: Best individual - (0.40696169049260167, 0.4789885169063051, -0.3799677735908236), Fitness - -26.080160960767703
Result:
Thus the above program for genetic algorithm based optimization of a three-input nonlinear function
executed successfully.