Soft Computing
Prepared by
N. SENTHIL KUMAR, AP/IT
EXP 1: Implementation of a fuzzy control/inference system
DATE:
AIM:
To implement a fuzzy control/inference system.
ALGORITHM:
Step 1: Define Input and Output Variables
Step 2: Define Membership Functions
Step 3: Define Rules
Step 4: Create Control System
Step 5: Create Simulation
Step 6: Pass Inputs and Compute Output
PROGRAM:
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl
# Define rules (temperature, humidity, and fan_speed are fuzzy variables
# that must be created first; see the full sketch below)
rule1 = ctrl.Rule(temperature['cold'] & humidity['low'], fan_speed['low'])
rule2 = ctrl.Rule(temperature['medium'] | humidity['medium'], fan_speed['medium'])
rule3 = ctrl.Rule(temperature['hot'] & humidity['high'], fan_speed['high'])
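The listing above stops at Step 3 (the rule base); a minimal end-to-end sketch of Steps 1 through 6 follows. The universes of discourse, the three automf membership functions per variable, and the crisp test inputs (30 °C, 70 % humidity) are illustrative assumptions, not values from the manual.

import numpy as np
from skfuzzy import control as ctrl

# Steps 1-2: fuzzy variables over assumed universes, each with three
# auto-generated triangular membership functions
temperature = ctrl.Antecedent(np.arange(0, 41, 1), 'temperature')
humidity = ctrl.Antecedent(np.arange(0, 101, 1), 'humidity')
fan_speed = ctrl.Consequent(np.arange(0, 101, 1), 'fan_speed')
temperature.automf(3, names=['cold', 'medium', 'hot'])
humidity.automf(3, names=['low', 'medium', 'high'])
fan_speed.automf(3, names=['low', 'medium', 'high'])

# Step 3: the rules from the listing above
rule1 = ctrl.Rule(temperature['cold'] & humidity['low'], fan_speed['low'])
rule2 = ctrl.Rule(temperature['medium'] | humidity['medium'], fan_speed['medium'])
rule3 = ctrl.Rule(temperature['hot'] & humidity['high'], fan_speed['high'])

# Steps 4-6: build the control system, create a simulation,
# pass crisp inputs, and compute the defuzzified output
fan_ctrl = ctrl.ControlSystem([rule1, rule2, rule3])
fan_sim = ctrl.ControlSystemSimulation(fan_ctrl)
fan_sim.input['temperature'] = 30
fan_sim.input['humidity'] = 70
fan_sim.compute()
print("Fan speed:", fan_sim.output['fan_speed'])

Defuzzification here uses scikit-fuzzy's default centroid method.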
RESULT:
Thus, the implementation of a fuzzy control/inference system has been verified.
EXP 2: Programming exercise on classification with a discrete perceptron using Python
DATE:
AIM:
To implement classification with a discrete perceptron.
ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the perceptron class
Step 3: Define the dataset
Step 4: Initialize and train the perceptron
Step 5: Test the perceptron
PROGRAM:
import numpy as np

class DiscretePerceptronWithOutput:
    def __init__(self, num_features):
        self.weights = np.zeros(num_features + 1)  # additional weight for the bias
        self.learning_rate = 0.1
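The constructor above is only the start of the class; one possible completion continues it below. The predict and train method names and the OR-gate dataset are assumptions inferred from the output shown after the program.

    def predict(self, features):
        # Hard-threshold activation on the weighted sum (weights[0] is the bias)
        activation = self.weights[0] + np.dot(self.weights[1:], features)
        return 1 if activation >= 0 else 0

    def train(self, X, y, epochs=10):
        # Classic perceptron learning rule: nudge weights by the prediction error
        for _ in range(epochs):
            for features, target in zip(X, y):
                error = target - self.predict(features)
                self.weights[1:] += self.learning_rate * error * features
                self.weights[0] += self.learning_rate * error

# OR-gate dataset, inferred from the output shown below
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
perceptron = DiscretePerceptronWithOutput(num_features=2)
perceptron.train(X, y)
for features, target in zip(X, y):
    print(f"Features: {features}, Target: {target}, Prediction: {perceptron.predict(features)}")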
OUTPUT:
Features: [0 0], Target: 0, Prediction: 0
Features: [0 1], Target: 1, Prediction: 1
Features: [1 0], Target: 1, Prediction: 1
Features: [1 1], Target: 1, Prediction: 1
RESULT:
Thus, the programming exercise on classification with a discrete perceptron has been verified.
EXP 3: Implementation of XOR with the backpropagation algorithm
DATE:
AIM:
To implement XOR with the backpropagation algorithm.
ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the activation function (sigmoid) and its derivative
Step 3: Define the input dataset (X) and output dataset (y) for XOR
Step 4: Initialize the weights randomly with mean 0, and set the learning rate and number of epochs
Step 5: Train the neural network
Step 6: Print the final output after training
PROGRAM:
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Initialize weights and biases
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size)
        self.bias_hidden = np.random.randn(self.hidden_size)
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size)
        self.bias_output = np.random.randn(self.output_size)
        # Learning rate
        self.learning_rate = 0.1

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is assumed to already be a sigmoid output
        return x * (1 - x)

    def forward(self, X):
        # Forward pass through the network
        self.hidden_input = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = self.sigmoid(self.hidden_input)
        self.output = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        return self.output
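The backward pass and the training driver are not shown in the listing; a sketch that continues the NeuralNetwork class above could look like this. The backward and train method names, the hidden-layer size, and the epoch count are assumptions, not taken from the manual.

    def backward(self, X, y):
        # Error at the (linear) output layer
        output_error = y - self.output
        # Propagate the error back through the hidden layer
        hidden_error = np.dot(output_error, self.weights_hidden_output.T) * \
            self.sigmoid_derivative(self.hidden_output)
        # Gradient-descent updates for both layers
        self.weights_hidden_output += self.learning_rate * np.dot(self.hidden_output.T, output_error)
        self.bias_output += self.learning_rate * output_error.sum(axis=0)
        self.weights_input_hidden += self.learning_rate * np.dot(X.T, hidden_error)
        self.bias_hidden += self.learning_rate * hidden_error.sum(axis=0)

    def train(self, X, y, epochs=10000):
        for _ in range(epochs):
            self.forward(X)
            self.backward(X, y)

# XOR dataset and training driver
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
nn = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)
nn.train(X, y)
print("Predictions after training:")
for xi, ti in zip(X, y):
    out = nn.forward(xi)
    print(f"Input: {xi}, Target: {ti}, Predicted: {int(round(out[0]))} (Output: {out})")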
OUTPUT:
Predictions after training:
Input: [0 0], Target: [0], Predicted: 0 (Output: [5.55111512e-15])
Input: [0 1], Target: [1], Predicted: 1 (Output: [1.])
Input: [1 0], Target: [1], Predicted: 1 (Output: [1.])
Input: [1 1], Target: [0], Predicted: 0 (Output: [4.88498131e-15])
RESULT:
Thus, the implementation of XOR with the backpropagation algorithm has been verified.
EXP 4: Implementation of self-organizing maps for a specific application
DATE:
AIM:
To implement a self-organizing map for a specific application.
ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the SOM class
Step 3: Prepare and normalize the data
Step 4: Instantiate and train the SOM
Step 5: Visualize the SOM
PROGRAM:
import math

class SOM:
    # Compute the winning cluster by Euclidean distance
    def winner(self, weights, sample):
        D0 = 0
        D1 = 0
        for i in range(len(sample)):
            D0 = D0 + math.pow((sample[i] - weights[0][i]), 2)
            D1 = D1 + math.pow((sample[i] - weights[1][i]), 2)
        # Select the cluster with the smallest distance as the winner
        if D0 < D1:
            return 0
        else:
            return 1

    # Update the winning vector
    def update(self, weights, sample, J, alpha):
        # Move the winning cluster's weights toward the sample
        for i in range(len(weights[0])):
            weights[J][i] = weights[J][i] + alpha * (sample[i] - weights[J][i])
        return weights

# Driver code
def main():
    # Training examples (m samples, n features)
    T = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
    m, n = len(T), len(T[0])
    # Weight initialization (C clusters x n features)
    weights = [[0.2, 0.6, 0.5, 0.9], [0.8, 0.4, 0.7, 0.3]]
    # Training
    ob = SOM()
    epochs = 3
    alpha = 0.5
    for i in range(epochs):
        for j in range(m):
            # Training sample
            sample = T[j]
            # Compute winner vector
            J = ob.winner(weights, sample)
            # Update winning vector
            weights = ob.update(weights, sample, J, alpha)
    # Classify a test sample
    s = [0, 0, 0, 1]
    J = ob.winner(weights, s)
    print("Test Sample s belongs to Cluster : ", J)
    print("Trained weights : ", weights)

if __name__ == "__main__":
    main()
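Steps 3 and 5 of the algorithm (normalizing the data and visualizing the SOM) are not implemented in the listing above; a possible sketch using min-max scaling and a matplotlib heatmap follows. The plotted weight values are copied from the run shown below; everything else is an assumption.

import numpy as np
import matplotlib.pyplot as plt

# Step 3: min-max normalize each feature to [0, 1]
# (a no-op for the binary training data used above)
data = np.array([[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]], dtype=float)
ranges = np.ptp(data, axis=0)
data = (data - data.min(axis=0)) / np.where(ranges == 0, 1, ranges)

# Step 5: visualize the trained weight vectors as a heatmap
weights = np.array([[0.6, 0.8, 0.5, 0.9], [0.3334, 0.0666, 0.7, 0.3]])
plt.imshow(weights, cmap='viridis', aspect='auto')
plt.colorbar(label='weight value')
plt.xlabel('feature index')
plt.ylabel('cluster index')
plt.title('Trained SOM weight vectors')
plt.show()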
OUTPUT:
Test Sample s belongs to Cluster : 0
Trained weights : [[0.6000000000000001, 0.8, 0.5, 0.9], [0.3333984375, 0.0666015625, 0.7, 0.3]]
RESULT:
Thus, the implementation of a self-organizing map for a specific application has been verified.
EXP 5: Programming exercises on maximizing a function using a genetic algorithm
DATE:
AIM:
To implement a programming exercise on maximizing a function using a genetic algorithm in Python.
ALGORITHM:
Step 1: Randomly initialize the population P
Step 2: Determine the fitness of the population
Step 3: Until convergence, repeat:
a) Select parents from the population
b) Perform crossover and generate a new population
c) Perform mutation on the new population
d) Calculate the fitness of the new population
PROGRAM:
import numpy as np

# Define the function to be maximized (applied element-wise to a chromosome)
def fitness_function(x):
    return np.sin(x)

# Define genetic algorithm parameters
population_size = 100
chromosome_length = 10
mutation_rate = 0.01
generations = 100

# Generate initial population
def initialize_population(population_size, chromosome_length):
    return np.random.uniform(-np.pi, np.pi, size=(population_size, chromosome_length))

# Evaluate fitness of each individual in the population, aggregating the
# element-wise values into a single scalar per chromosome
def calculate_fitness(population):
    return np.array([np.sum(fitness_function(chromosome)) for chromosome in population])

# Perform tournament selection
def tournament_selection(population, fitness_scores):
    indices = np.random.choice(len(population), size=2)
    return population[indices[np.argmax(fitness_scores[indices])]]

# Perform single-point crossover
def crossover(parent1, parent2):
    crossover_point = np.random.randint(len(parent1))
    child1 = np.concatenate((parent1[:crossover_point], parent2[crossover_point:]))
    child2 = np.concatenate((parent2[:crossover_point], parent1[crossover_point:]))
    return child1, child2

# Perform mutation
def mutate(chromosome, mutation_rate):
    for i in range(len(chromosome)):
        if np.random.rand() < mutation_rate:
            chromosome[i] += np.random.normal(0, 0.1)
    return chromosome

# Main genetic algorithm loop
population = initialize_population(population_size, chromosome_length)
for generation in range(generations):
    fitness_scores = calculate_fitness(population)
    next_generation = []
    for _ in range(population_size // 2):
        parent1 = tournament_selection(population, fitness_scores)
        parent2 = tournament_selection(population, fitness_scores)
        child1, child2 = crossover(parent1, parent2)
        child1 = mutate(child1, mutation_rate)
        child2 = mutate(child2, mutation_rate)
        next_generation.extend([child1, child2])
    population = np.array(next_generation)

# Find the best individual in the final population
best_individual = population[np.argmax(calculate_fitness(population))]
best_fitness = np.sum(fitness_function(best_individual))
print("Best individual:", best_individual)
print("Best fitness:", best_fitness)
OUTPUT:
Best individual: [ 3.13960106 3.13960106 3.13960106 3.13960106 3.13960106 3.13960106 3.13960106 3.13960106 -3.13960106 3.13960106]
Best fitness: 0.999999994148458
RESULT:
Thus, the programming exercise on maximizing a function using a genetic algorithm has been verified.
EXP 6: Implementation of a two-input sine function
DATE:
AIM:
To implement a two-input sine function.
ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the two-input sine function
Step 3: Set example values for the two inputs
Step 4: Compute and print the result
PROGRAM:
import numpy as np

# The original listing omits the definition of two_input_sine, so the exact
# function that produced the output below is not recoverable; the product of
# sines used here is an assumed stand-in
def two_input_sine(x1, x2):
    return np.sin(x1) * np.sin(x2)

# Example usage
x1 = np.pi / 4  # Example value for x1
x2 = np.pi / 3  # Example value for x2
result = two_input_sine(x1, x2)
print("Result of two-input sine function:", result)
OUTPUT:
The two-input sine function: 0.191012986014
RESULT:
Thus, the implementation of a two-input sine function using Python has been verified.
EXP 7: Implementation of a three-input non-linear function
DATE:
AIM:
To implement a three-input non-linear function.
ALGORITHM:
Step 1: Import necessary libraries
Step 2: Define the three-input non-linear function
Step 3: Set example values for the three inputs
Step 4: Compute and print the result
PROGRAM:
import numpy as np

# Define the three-input nonlinear function
def three_input_nonlinear(x1, x2, x3):
    return np.sin(x1) + np.cos(x2)**2 + np.exp(x3)

# Example usage
x1 = 0.5
x2 = 1.2
x3 = -0.8
result = three_input_nonlinear(x1, x2, x3)
print("Result of three-input nonlinear function:", result)
OUTPUT:
The three-input nonlinear function: 3.12483372615
RESULT:
Thus, the implementation of a three-input non-linear function using Python has been verified.