
SRI MUTHUKUMARAN INSTITUTE OF TECHNOLOGY

Chikkarayapuram, Near Mangadu, Chennai – 600 069.


Academic Year 2024-2025 / Even Semester

DEPARTMENT OF INFORMATION TECHNOLOGY

CCS 364 SOFT COMPUTING LABORATORY


LAB MANUAL
REGULATION: 2021

Prepared by
N. SENTHIL KUMAR, AP/IT
EXP 1: Implementation of fuzzy control/inference system
DATE:

AIM:
To implement a fuzzy control/inference system.

ALGORITHM:
Step 1: Define Input and Output Variables
Step 2: Define Membership Functions
Step 3: Define Rules
Step 4: Create Control System
Step 5: Create Simulation
Step 6: Pass Inputs and Compute Output
PROGRAM:
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Define input variables
temperature = ctrl.Antecedent(np.arange(0, 101, 1), 'temperature')
humidity = ctrl.Antecedent(np.arange(0, 101, 1), 'humidity')

# Define output variable
fan_speed = ctrl.Consequent(np.arange(0, 101, 1), 'fan_speed')

# Define membership functions for input variables
temperature['cold'] = fuzz.trimf(temperature.universe, [0, 0, 50])
temperature['medium'] = fuzz.trimf(temperature.universe, [20, 50, 80])
temperature['hot'] = fuzz.trimf(temperature.universe, [50, 100, 100])
humidity['low'] = fuzz.trimf(humidity.universe, [0, 0, 50])
humidity['medium'] = fuzz.trimf(humidity.universe, [20, 50, 80])
humidity['high'] = fuzz.trimf(humidity.universe, [50, 100, 100])

# Define membership functions for output variable
fan_speed['low'] = fuzz.trimf(fan_speed.universe, [0, 0, 50])
fan_speed['medium'] = fuzz.trimf(fan_speed.universe, [20, 50, 80])
fan_speed['high'] = fuzz.trimf(fan_speed.universe, [50, 100, 100])

# Define rules
rule1 = ctrl.Rule(temperature['cold'] & humidity['low'], fan_speed['low'])
rule2 = ctrl.Rule(temperature['medium'] | humidity['medium'], fan_speed['medium'])
rule3 = ctrl.Rule(temperature['hot'] & humidity['high'], fan_speed['high'])

# Create control system and simulation
fan_ctrl = ctrl.ControlSystem([rule1, rule2, rule3])
fan = ctrl.ControlSystemSimulation(fan_ctrl)

# Pass inputs and compute output
fan.input['temperature'] = 30
fan.input['humidity'] = 70
fan.compute()

# Print the computed fan speed
print(fan.output['fan_speed'])
OUTPUT:
50.0
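
The defuzzified result can also be inspected graphically. A minimal sketch, assuming matplotlib is installed alongside scikit-fuzzy (an optional check appended after fan.compute(), not part of the exercise):

import matplotlib.pyplot as plt
# Plot the output membership functions with the computed crisp value marked
fan_speed.view(sim=fan)
plt.show()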

RESULT:
Thus, the implementation of a fuzzy control/inference system has been verified.
EXP 2: Programming exercise on classification with a discrete perceptron using Python
DATE:

AIM:
To implement a programming exercise on classification with a discrete perceptron.

ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the perceptron class
Step 3: Define the dataset
Step 4: Initialize and train the perceptron
Step 5: Test the perceptron

PROGRAM:
import numpy as np

class DiscretePerceptronWithOutput:
    def __init__(self, num_features):
        self.weights = np.zeros(num_features + 1)  # additional weight for the bias
        self.learning_rate = 0.1

    def predict(self, features):
        # Add bias term
        features_with_bias = np.insert(features, 0, 1)
        activation = np.dot(self.weights, features_with_bias)
        return self.step_function(activation)

    def step_function(self, x):
        return 1 if x >= 0 else 0

    def train(self, features, target):
        prediction = self.predict(features)
        error = target - prediction
        # Update weights
        self.weights += self.learning_rate * error * np.insert(features, 0, 1)

# Sample dataset
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])
y = np.array([0, 1, 1, 1])  # OR gate

# Initialize perceptron
perceptron = DiscretePerceptronWithOutput(num_features=2)

# Train perceptron
num_epochs = 10
for epoch in range(num_epochs):
    for features, target in zip(X, y):
        perceptron.train(features, target)

# Test perceptron
for features, target in zip(X, y):
    prediction = perceptron.predict(features)
    print(f"Features: {features}, Target: {target}, Prediction: {prediction}")

OUTPUT:
Features: [0 0], Target: 0, Prediction: 0
Features: [0 1], Target: 1, Prediction: 1
Features: [1 0], Target: 1, Prediction: 1
Features: [1 1], Target: 1, Prediction: 1
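
Because the update rule is w <- w + eta * (t - y) * x (with the bias folded in as a constant input of 1), the same class learns any linearly separable gate. A minimal sketch for the AND gate, reusing the DiscretePerceptronWithOutput class and the inputs X defined above (the AND targets are added here only for illustration):

# Retrain on the AND gate: only [1, 1] should map to 1
y_and = np.array([0, 0, 0, 1])
and_perceptron = DiscretePerceptronWithOutput(num_features=2)
for epoch in range(10):
    for features, target in zip(X, y_and):
        and_perceptron.train(features, target)
for features, target in zip(X, y_and):
    print(f"Features: {features}, Target: {target}, Prediction: {and_perceptron.predict(features)}")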

RESULT:
Thus, the programming exercise on classification with a discrete perceptron has been verified.
EXP 3: Implementation of XOR with the backpropagation algorithm
DATE:

AIM:
To implement XOR with the backpropagation algorithm.

ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the activation function (sigmoid) and its derivative
Step 3: Define the input dataset (X) and output dataset (y) for XOR
Step 4: Initialize the weights randomly and set learning rate and number of epochs.
Step 5: Initialize weights randomly with mean 0
Step 6: Training the neural network
Step 7: Print the final output after training

PROGRAM:
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Initialize weights and biases
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size)
        self.bias_hidden = np.random.randn(self.hidden_size)
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size)
        self.bias_output = np.random.randn(self.output_size)
        # Learning rate
        self.learning_rate = 0.1

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is already a sigmoid activation, so the derivative is x * (1 - x)
        return x * (1 - x)

    def forward(self, X):
        # Forward pass through the network
        self.hidden_input = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = self.sigmoid(self.hidden_input)
        self.output = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        return self.output

    def backward(self, X, y, output):
        # Backpropagation
        self.output_error = y - output
        self.output_delta = self.output_error
        self.hidden_error = self.output_delta.dot(self.weights_hidden_output.T)
        self.hidden_delta = self.hidden_error * self.sigmoid_derivative(self.hidden_output)
        # Update weights and biases
        self.weights_hidden_output += self.hidden_output.T.dot(self.output_delta) * self.learning_rate
        self.bias_output += np.sum(self.output_delta) * self.learning_rate
        self.weights_input_hidden += X.T.dot(self.hidden_delta) * self.learning_rate
        self.bias_hidden += np.sum(self.hidden_delta) * self.learning_rate

    def train(self, X, y, epochs):
        for epoch in range(epochs):
            output = self.forward(X)
            self.backward(X, y, output)

    def predict(self, X):
        return self.forward(X)

# XOR input and output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Initialize and train the neural network
model = NeuralNetwork(input_size=2, hidden_size=3, output_size=1)
model.train(X, y, epochs=10000)

# Predictions
print("Predictions after training:")
for i in range(len(X)):
    prediction = model.predict(X[i])
    output = 1 if prediction >= 0.5 else 0
    print(f"Input: {X[i]}, Target: {y[i]}, Predicted: {output} (Output: {prediction})")

OUTPUT:

Predictions after training:
Input: [0 0], Target: [0], Predicted: 0 (Output: [5.55111512e-15])
Input: [0 1], Target: [1], Predicted: 1 (Output: [1.])
Input: [1 0], Target: [1], Predicted: 1 (Output: [1.])
Input: [1 1], Target: [0], Predicted: 0 (Output: [4.88498131e-15])

RESULT:
Thus, the implementation of XOR with the backpropagation algorithm has been verified.
EXP 4: Implementation of self-organizing maps for a specific application
DATE:

AIM:
To implement a self-organizing map for a specific application.

ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the SOM class
Step 3: Prepare and normalize the data
Step 4: Instantiate and train the SOM
Step 5: Visualize the SOM

PROGRAM:
import math

class SOM:
    # Compute the winning vector by Euclidean distance
    def winner(self, weights, sample):
        D0 = 0
        D1 = 0
        for i in range(len(sample)):
            D0 = D0 + math.pow((sample[i] - weights[0][i]), 2)
            D1 = D1 + math.pow((sample[i] - weights[1][i]), 2)
        # Select the cluster with the smallest distance as the winning cluster
        if D0 < D1:
            return 0
        else:
            return 1

    # Update the winning vector
    def update(self, weights, sample, J, alpha):
        # Iterate over the weights of the winning cluster and modify them
        for i in range(len(weights[0])):
            weights[J][i] = weights[J][i] + alpha * (sample[i] - weights[J][i])
        return weights

# Driver code
def main():
    # Training examples (m, n)
    T = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
    m, n = len(T), len(T[0])
    # Weight initialization (n, C)
    weights = [[0.2, 0.6, 0.5, 0.9], [0.8, 0.4, 0.7, 0.3]]
    # Training
    ob = SOM()
    epochs = 3
    alpha = 0.5
    for i in range(epochs):
        for j in range(m):
            # Training sample
            sample = T[j]
            # Compute winning vector
            J = ob.winner(weights, sample)
            # Update winning vector
            weights = ob.update(weights, sample, J, alpha)
    # Classify a test sample
    s = [0, 0, 0, 1]
    J = ob.winner(weights, s)
    print("Test Sample s belongs to Cluster : ", J)
    print("Trained weights : ", weights)

if __name__ == "__main__":
    main()

OUTPUT:
Test Sample s belongs to Cluster :  0
Trained weights :  [[0.6000000000000001, 0.8, 0.5, 0.9], [0.3333984375, 0.0666015625, 0.7, 0.3]]
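
As a worked check of the first update: for sample [1, 1, 0, 0], the squared distances to the initial weight vectors are D0 = 0.8^2 + 0.4^2 + 0.5^2 + 0.9^2 = 1.86 and D1 = 0.2^2 + 0.6^2 + 0.7^2 + 0.3^2 = 0.98, so cluster 1 wins, and with alpha = 0.5 its weights move halfway toward the sample: [0.8, 0.4, 0.7, 0.3] becomes [0.9, 0.7, 0.35, 0.15].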

RESULT:
Thus, the implementation of self-organizing maps for a specific application in Python has been verified.
EXP 5: Programming exercises on maximizing a function using a genetic algorithm
DATE:

AIM:
To implement a programming exercise on maximizing a function using a genetic algorithm in Python.

ALGORITHM:
Step 1: Randomly initialize population p
Step 2: Determine fitness of population
Step 3: Until convergence, repeat:
a) Select parents from population
b) Crossover and generate new population
c) Perform mutation on new population
d) Calculate fitness for new population
PROGRAM:
import numpy as np

# Define the function to be maximized.
# Each chromosome holds several candidate x values, so the fitness is
# reduced to a single score per chromosome (the mean of sin over its genes),
# which keeps tournament selection and argmax working on 1-D scores.
def fitness_function(x):
    return np.mean(np.sin(x))

# Define genetic algorithm parameters
population_size = 100
chromosome_length = 10
mutation_rate = 0.01
generations = 100

# Generate initial population
def initialize_population(population_size, chromosome_length):
    return np.random.uniform(-np.pi, np.pi, size=(population_size, chromosome_length))

# Evaluate fitness of each individual in the population
def calculate_fitness(population):
    return np.array([fitness_function(chromosome) for chromosome in population])

# Perform tournament selection
def tournament_selection(population, fitness_scores):
    indices = np.random.choice(len(population), size=2)
    return population[indices[np.argmax(fitness_scores[indices])]]

# Perform single-point crossover
def crossover(parent1, parent2):
    crossover_point = np.random.randint(len(parent1))
    child1 = np.concatenate((parent1[:crossover_point], parent2[crossover_point:]))
    child2 = np.concatenate((parent2[:crossover_point], parent1[crossover_point:]))
    return child1, child2

# Perform mutation
def mutate(chromosome, mutation_rate):
    for i in range(len(chromosome)):
        if np.random.rand() < mutation_rate:
            chromosome[i] += np.random.normal(0, 0.1)
    return chromosome

# Main genetic algorithm loop
population = initialize_population(population_size, chromosome_length)
for generation in range(generations):
    fitness_scores = calculate_fitness(population)
    next_generation = []
    for _ in range(population_size // 2):
        parent1 = tournament_selection(population, fitness_scores)
        parent2 = tournament_selection(population, fitness_scores)
        child1, child2 = crossover(parent1, parent2)
        child1 = mutate(child1, mutation_rate)
        child2 = mutate(child2, mutation_rate)
        next_generation.extend([child1, child2])
    population = np.array(next_generation)

# Find the best individual in the final population
best_individual = population[np.argmax(calculate_fitness(population))]
best_fitness = fitness_function(best_individual)
print("Best individual:", best_individual)
print("Best fitness:", best_fitness)

OUTPUT:
Best individual: [ 3.13960106  3.13960106  3.13960106  3.13960106  3.13960106
  3.13960106  3.13960106  3.13960106 -3.13960106  3.13960106]
Best fitness: 0.999999994148458
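
Single-point crossover splices two parents at one index. A minimal standalone illustration with a fixed split point (the parent arrays here are made up for demonstration; crossover() above draws the point at random):

import numpy as np
p1 = np.array([1.0, 1.0, 1.0, 1.0])
p2 = np.array([2.0, 2.0, 2.0, 2.0])
point = 2
c1 = np.concatenate((p1[:point], p2[point:]))  # [1. 1. 2. 2.]
c2 = np.concatenate((p2[:point], p1[point:]))  # [2. 2. 1. 1.]
print(c1, c2)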

RESULT:
Thus, the programming exercise on maximizing a function using a genetic algorithm has been verified.
EXP 6: Implementation of a two-input sine function
DATE:

AIM:
To implement a two-input sine function.

ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the two-input sine function
Step 3: Evaluate the function with example inputs
Step 4: Print the result

PROGRAM:
import numpy as np

# Define the two-input sine function
def two_input_sine(x1, x2):
    return np.sin(x1) * np.sin(x2)

# Example usage
x1 = np.pi / 4  # example value for x1
x2 = np.pi / 3  # example value for x2
result = two_input_sine(x1, x2)
print("Result of two-input sine function:", result)

OUTPUT:
Result of two-input sine function: 0.6123724356957945
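
Because numpy broadcasts elementwise, the same function can be evaluated over a whole grid of inputs at once. A minimal sketch (the 5-point grid is an arbitrary choice for illustration):

import numpy as np
x1 = np.linspace(0, np.pi, 5)
x2 = np.linspace(0, np.pi, 5)
X1, X2 = np.meshgrid(x1, x2)
Z = np.sin(X1) * np.sin(X2)  # evaluates the function at all 25 grid points
print(Z.max())  # 1.0, attained at (pi/2, pi/2), which this grid contains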

RESULT:
Thus, the implementation of a two-input sine function using Python has been verified.
EXP 7: Implementation of a three-input non-linear function
DATE:

AIM:
To implement a three-input non-linear function.

ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the non-linear function
Step 3: Evaluate the function with example inputs and print the result

PROGRAM:
import numpy as np

# Define the three-input non-linear function
def three_input_nonlinear(x1, x2, x3):
    return np.sin(x1) + np.cos(x2)**2 + np.exp(x3)

# Example usage
x1 = 0.5
x2 = 1.2
x3 = -0.8
result = three_input_nonlinear(x1, x2, x3)
print("Result of three-input nonlinear function:", result)

OUTPUT:
Result of three-input nonlinear function: 1.06005764497
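
For reference, the analytic partial derivatives of this function are df/dx1 = cos(x1), df/dx2 = -sin(2*x2), and df/dx3 = exp(x3). A minimal sketch that checks the first of these against a central finite difference (an illustration, not part of the exercise):

import numpy as np

def three_input_nonlinear(x1, x2, x3):
    return np.sin(x1) + np.cos(x2)**2 + np.exp(x3)

x1, x2, x3 = 0.5, 1.2, -0.8
eps = 1e-6
numeric = (three_input_nonlinear(x1 + eps, x2, x3)
           - three_input_nonlinear(x1 - eps, x2, x3)) / (2 * eps)
print(numeric, np.cos(x1))  # the two values agree closely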

RESULT:
Thus, the implementation of a three-input non-linear function using Python has been verified.
