Name : ………………………………………………………………………
Branch : ………………………………………………………………………
EXP 1 Implementation of fuzzy control inference system.
AIM: Understand the concept of a fuzzy control inference system using the Python programming language.
Algorithm:
Step 1: Define fuzzy sets for the input and output variables.
Step 2: Create fuzzy rules.
Step 3: Perform fuzzy inference.
Step 4: Defuzzify the output fuzzy sets to obtain a crisp output value.
Step 5: Use the defuzzified output as the control action.
Step 6: Implement the control action.
Step 7: Repeat the above steps in a loop as needed for real-time control. End of the fuzzy control algorithm.
First, you'll need to install the scikit-fuzzy library if you haven't already. You can install it using the following command:
pip install scikit-fuzzy
Now, let's implement the fuzzy inference system:
PROGRAM:
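The program listing is not reproduced here. As a minimal sketch, assuming the standard scikit-fuzzy control API and the classic tipping example (the variables and rules are illustrative, not necessarily the original's), a fuzzy inference system could look like this:

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Step 1: define fuzzy sets for the input and output variables
quality = ctrl.Antecedent(np.arange(0, 11, 1), 'quality')
service = ctrl.Antecedent(np.arange(0, 11, 1), 'service')
tip = ctrl.Consequent(np.arange(0, 26, 1), 'tip')
quality.automf(3)   # auto-generates 'poor', 'average', 'good' membership functions
service.automf(3)
tip['low'] = fuzz.trimf(tip.universe, [0, 0, 13])
tip['medium'] = fuzz.trimf(tip.universe, [0, 13, 25])
tip['high'] = fuzz.trimf(tip.universe, [13, 25, 25])

# Step 2: create fuzzy rules
rule1 = ctrl.Rule(quality['poor'] | service['poor'], tip['low'])
rule2 = ctrl.Rule(service['average'], tip['medium'])
rule3 = ctrl.Rule(service['good'] | quality['good'], tip['high'])

# Steps 3-5: perform inference, defuzzify, and read the crisp control action
tip_ctrl = ctrl.ControlSystem([rule1, rule2, rule3])
sim = ctrl.ControlSystemSimulation(tip_ctrl)
sim.input['quality'] = 6.5
sim.input['service'] = 9.8
sim.compute()
print(sim.output['tip'])   # crisp output value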
Output:
Result:
Thus the above program for the fuzzy control inference system was executed successfully with the desired output.
EXP 2 Programming exercise on classification with a discrete perceptron
AIM: Understand the concept of classification with a discrete perceptron using the Python programming language.
Algorithm:
Step 1: Initialize weights W and bias b to small random values.
Step 2: Define the learning rate.
Step 3: Define the number of training epochs.
Step 4: Define the training data (features and labels).
Step 5: Define the perceptron training algorithm.
Step 6: The perceptron is now trained, and you can use it to make predictions.
PROGRAM:
import numpy as np

class DiscretePerceptron:
    def __init__(self, input_size):
        self.weights = np.zeros(input_size)
        self.bias = 0

    def predict(self, x):
        # Step activation: output 1 if the weighted sum is positive, else 0
        return 1 if np.dot(self.weights, x) + self.bias > 0 else 0

    def train(self, inputs, targets, learning_rate=0.1, epochs=10):
        # Classic perceptron learning rule
        for _ in range(epochs):
            for x, t in zip(inputs, targets):
                error = t - self.predict(x)
                self.weights += learning_rate * error * x
                self.bias += learning_rate * error

def main():
    # Generate some example data points for two classes
    class_0 = np.array([[2, 3], [3, 2], [1, 1]])
    class_1 = np.array([[5, 7], [6, 8], [7, 6]])
    # Combine the data points and create labels (0 for class 0, 1 for class 1)
    inputs = np.vstack((class_0, class_1))
    targets = np.array([0, 0, 0, 1, 1, 1])
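    # Train the perceptron and echo its predictions
    # (a minimal assumed driver completing the sketch above)
    perceptron = DiscretePerceptron(input_size=2)
    perceptron.train(inputs, targets)
    for x in inputs:
        print(x, "->", perceptron.predict(x))

if __name__ == "__main__":
    main()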
Output:
Result:
Thus the above program for classification with a discrete perceptron was executed successfully with the desired output.
EXP 3 Implementation of XOR with backpropagation algorithm
AIM: Understand the concept of XOR with the backpropagation algorithm using the Python programming language.
Algorithm:
1. Initialize the neural network with random weights and biases.
2. Define the training data for XOR
3. Set hyperparameters:
   Learning rate (alpha)
   Number of epochs (iterations)
   Number of hidden layers and neurons per layer
   Activation function (e.g., sigmoid)
4. Repeat for each epoch:
a. Initialize the total error for this epoch to 0.
b. For each training example in the dataset:
i. Forward propagation:
Compute the weighted sum of inputs and biases for each neuron in the hidden layer(s) and output layer.
Apply the activation function to each neuron's output.
ii. Compute the error between the predicted output and the actual output for the current training example.
iii. Update the total error for this epoch with the squared error from step ii.
iv. Backpropagation:
Compute the gradient of the error with respect to the output layer neurons.
Backpropagate the gradients through the hidden layers.
Update the weights and biases using the gradients and the learning rate.
c. Calculate the average error for this epoch by dividing the total error by the number of training examples.
d. Check if the average error is below a predefined threshold or if the desired accuracy is reached.
- If yes, exit the training loop.
5. Once training is complete, you can use the trained neural network to predict XOR values for new inputs.
6. End.
PROGRAM:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is assumed to already be a sigmoid output
    return x * (1 - x)

# XOR training data
input_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
target_data = np.array([[0], [1], [1], [0]])

# Network parameters (2 inputs, 2 hidden neurons, 1 output), random weights and biases
np.random.seed(0)
hidden_weights = np.random.uniform(size=(2, 2))
hidden_bias = np.random.uniform(size=(1, 2))
output_weights = np.random.uniform(size=(2, 1))
output_bias = np.random.uniform(size=(1, 1))
learning_rate = 0.5
epochs = 10000

# Training loop
for _ in range(epochs):
    # Forward propagation
    hidden_layer_activation = np.dot(input_data, hidden_weights) + hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_activation)
    predicted_output = sigmoid(np.dot(hidden_layer_output, output_weights) + output_bias)

    # Calculate error
    error = target_data - predicted_output

    # Backpropagation
    output_delta = error * sigmoid_derivative(predicted_output)
    hidden_layer_error = output_delta.dot(output_weights.T)
    hidden_layer_delta = hidden_layer_error * sigmoid_derivative(hidden_layer_output)

    # Update weights and biases
    output_weights += hidden_layer_output.T.dot(output_delta) * learning_rate
    output_bias += np.sum(output_delta, axis=0, keepdims=True) * learning_rate
    hidden_weights += input_data.T.dot(hidden_layer_delta) * learning_rate
    hidden_bias += np.sum(hidden_layer_delta, axis=0, keepdims=True) * learning_rate
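# After training, print the network outputs for the four XOR inputs
# (an assumed check; values should approach 0, 1, 1, 0 as training succeeds)
print(predicted_output.round(3))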
Output:
Result:
Thus the above program for XOR with the backpropagation algorithm was executed successfully with the desired output.
EXP 4 Implementation of self-organizing maps for a specific application.
AIM: Understand the concept of self-organizing maps for a specific application using the Python programming language.
Algorithm:
1. Initialize the SOM weight matrix with random values.
2. For each input vector, find the Best Matching Unit (BMU): the grid node whose weight vector is closest to the input.
3. Update the weights of the BMU (and, optionally, its neighbours) toward the input vector, scaled by the learning rate.
4. Repeat the training process until convergence (or a predetermined number of epochs).
5. Map each input to its BMU to obtain the final clustering.
6. Visualization (optional):
   - Visualize the trained SOM grid to understand the data distribution and clustering.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

# Generate some sample data (replace this with your own dataset)
np.random.seed(42)
data = np.random.rand(100, 2)

# SOM parameters
grid_size = (10, 10)   # Grid size of the SOM
input_dim = 2          # Dimensionality of the input data
learning_rate = 0.2
num_epochs = 1000

# Initialize the weight matrix with random values
weight_matrix = np.random.rand(grid_size[0], grid_size[1], input_dim)

# Training loop
for epoch in range(num_epochs):
    for input_vector in data:
        # Find the Best Matching Unit (BMU)
        distances = np.linalg.norm(weight_matrix - input_vector, axis=-1)
        bmu_coords = np.unravel_index(np.argmin(distances), distances.shape)
        # Move the BMU's weights toward the input vector
        weight_matrix[bmu_coords] += learning_rate * (input_vector - weight_matrix[bmu_coords])
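# Visualization (optional, as in step 6 of the algorithm; an assumed sketch):
# plot the data and the learned SOM weight vectors
plt.scatter(data[:, 0], data[:, 1], s=10, label="data")
plt.scatter(weight_matrix[..., 0].ravel(), weight_matrix[..., 1].ravel(),
            marker="x", color="red", label="SOM weights")
plt.legend()
plt.show()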
Output:
Result:
Thus the above program for the self-organizing map was executed successfully with the desired output.
EXP 5 Programming exercises on maximizing a function using a genetic algorithm.
AIM: Understand the concept of maximizing a function using a genetic algorithm with Python programming.
Algorithm:
1. Initialize the population with random solutions.
2. Define the fitness function to evaluate how good each solution is.
3. Set the maximum number of generations.
4. Set the mutation rate (probability of changing a gene in an individual).
5. Set the crossover rate (probability of two individuals mating).
6. Repeat for each generation:
a. Evaluate the fitness of each individual in the population using the fitness function.
b. Select the best individuals based on their fitness to become parents.
c. Create a new generation by crossover (mixing) the genes of the parents.
d. Apply mutation to some individuals in the new generation.
e. Replace the old population with the new generation.
7. Repeat for the specified number of generations.
8. Find and return the individual with the highest fitness as the best solution.
PROGRAM:
import random

    # (tail of a crossover helper; its opening lines are not reproduced in the printed listing)
    else:
        return parent1, parent2

# Genetic Algorithm
def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    # ... selection, crossover and mutation build new_population each generation ...
    population = new_population
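The printed listing is fragmentary. A minimal self-contained sketch of the same loop, assuming an illustrative fitness function f(x) = -x^2 + 10x (the original's objective is not reproduced), could be:

import random

def fitness(x):
    # Illustrative objective to maximize; its peak is at x = 5
    return -x**2 + 10 * x

def initialize_population(pop_size, lower_bound, upper_bound):
    return [random.uniform(lower_bound, upper_bound) for _ in range(pop_size)]

def crossover(parent1, parent2, crossover_prob=0.8):
    if random.random() < crossover_prob:
        # Blend crossover between the two parents
        alpha = random.random()
        return (alpha * parent1 + (1 - alpha) * parent2,
                alpha * parent2 + (1 - alpha) * parent1)
    else:
        return parent1, parent2

def mutate(individual, mutation_prob=0.01):
    if random.random() < mutation_prob:
        individual += random.uniform(-0.1, 0.1)
    return individual

def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    for _ in range(generations):
        # Keep the fitter half as parents
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        new_population = []
        while len(new_population) < pop_size:
            c1, c2 = crossover(random.choice(parents), random.choice(parents))
            new_population += [mutate(c1), mutate(c2)]
        population = new_population[:pop_size]
    return max(population, key=fitness)

best = genetic_algorithm(generations=50, pop_size=100, lower_bound=0, upper_bound=10)
print(f"Best solution: x = {best:.4f}, f(x) = {fitness(best):.4f}")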
Output:
Result:
Thus the above program for maximizing a function using a genetic algorithm was executed successfully with the desired output.
EXP 6 Implementation of a two-input sine function.
AIM: Understand the concept of the implementation of a two-input sine function using a genetic algorithm.
Algorithm:
# Genetic Algorithm for Two-Input Sine Function Optimization
1. Define the fitness function
2. Initialize the population
3. Define functions for genetic operations
4. Implement the main genetic algorithm loop
5. Print the final best solution found by the genetic algorithm.
PROGRAM
import random
import math

# Perform mutation in the population
def mutate(individual, mutation_prob=0.01):
    x, y = individual
    if random.random() < mutation_prob:
        x += random.uniform(-0.1, 0.1)
    if random.random() < mutation_prob:
        y += random.uniform(-0.1, 0.1)
    return x, y

# Genetic Algorithm
def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    # ... selection, crossover and mutation build new_population each generation ...
    population = new_population
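The remaining pieces mirror the single-variable sketch in EXP 5; what changes is that individuals are (x, y) pairs and the objective involves both inputs. A hedged guess at the fitness, consistent with the printed outputs (which track sin(x) + sin(y)), is:

import math
import random

# Assumed fitness for the two-input problem: f(x, y) = sin(x) + sin(y)
def fitness(individual):
    x, y = individual
    return math.sin(x) + math.sin(y)

# Individuals are (x, y) pairs rather than single numbers
def initialize_population(pop_size, lower_bound, upper_bound):
    return [(random.uniform(lower_bound, upper_bound),
             random.uniform(lower_bound, upper_bound))
            for _ in range(pop_size)]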
16
Output:
Generation 1: Best individual - (-5.806639394411164, 2.957052015269947), Fitness - 0.6422076600091893
Generation 2: Best individual - (-3.7004701839702663, 4.4413546380285975), Fitness - -0.43325964387284566
Generation 3: Best individual - (-3.7004701839702663, 5.464316418988149), Fitness - -0.20013884834113005
Generation 4: Best individual - (5.481791654037208, 3.3095163097626763), Fitness - -0.8854619344317294
Generation 5: Best individual - (4.897491323013819, 3.3095163097626763), Fitness - -1.150052992911647
Generation 6: Best individual - (4.976671184995054, 3.3095163097626763), Fitness - -1.1324158225088536
Generation 7: Best individual - (3.9420165382340246, 3.3095163097626763), Fitness - -0.8847869227205696
Generation 8: Best individual - (4.198534144176835, 5.481189847293816), Fitness - -1.5896010966615468
Generation 9: Best individual - (4.198534144176835, 5.481189847293816), Fitness - -1.5896010966615468
Generation 10: Best individual - (4.198534144176835, 5.481189847293816), Fitness - -1.5896010966615468
Generation 11: Best individual - (4.34542752972704, 5.481189847293816), Fitness - -1.6521667383260996
Generation 12: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 13: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 14: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 15: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 16: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 17: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 18: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 19: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 20: Best individual - (4.266603727856264, 5.481189847293816), Fitness - -1.621017281069609
Generation 21: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 22: Best individual - (-1.2170450032547304, 5.481189847293816), Fitness - -1.6568246976897136
Generation 23: Best individual - (4.976671184995054, 5.481189847293816), Fitness - -1.6840251615701645
Generation 24: Best individual - (4.897491323013819, 5.481189847293816), Fitness - -1.7016623319729578
Generation 25: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 26: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Generation 27: Best individual - (-1.2185577577082327, 5.481189847293816), Fitness - -1.6573476714317006
Result:
Thus the above program for the implementation of a two-input sine function using a genetic algorithm was executed successfully.
EXP 7 Implementation of a three-input nonlinear function.
AIM: Understand the concept of the implementation of a three-input nonlinear function using a genetic algorithm.
Algorithm:
# Genetic Algorithm for Three-Input Nonlinear Function Optimization
1. Define the fitness function.
2. Initialize the population.
3. Define functions for genetic operations.
4. Implement the main genetic algorithm loop.
5. Print the final best solution found by the genetic algorithm.
PROGRAM
import random

        # (tail of a crossover helper; its opening lines are not reproduced in the printed listing)
        child2 = (crossover_point2 * parent1[0] + (1 - crossover_point2) * parent2[0],
                  crossover_point2 * parent1[1] + (1 - crossover_point2) * parent2[1],
                  crossover_point2 * parent1[2] + (1 - crossover_point2) * parent2[2])
        return child1, child2
    else:
        return parent1, parent2

# Genetic Algorithm
def genetic_algorithm(generations, pop_size, lower_bound, upper_bound):
    population = initialize_population(pop_size, lower_bound, upper_bound)
    # ... selection, crossover and mutation build new_population each generation ...
    population = new_population

generations = 50
pop_size = 100
lower_bound = -1
upper_bound = 1
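The fitness function and the remaining operators are not reproduced in the listing; the loop structure matches the sketches in EXP 5 and EXP 6. A purely illustrative placeholder fitness (an assumption, not the original objective) that would complete the fragment is:

import random

# Placeholder three-input nonlinear objective (assumed for illustration only)
def fitness(individual):
    x, y, z = individual
    return 1.0 / (0.01 + x**2 + y**2 + z**2)

# Individuals are (x, y, z) triples
def initialize_population(pop_size, lower_bound, upper_bound):
    return [tuple(random.uniform(lower_bound, upper_bound) for _ in range(3))
            for _ in range(pop_size)]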
20
Output:
Generation 1: Best individual - (-0.05856140717606745, 0.031920444393859077, 0.1749430018353162), Fitness - 23.638248996079486
Generation 2: Best individual - (-0.0435664961811546, -0.21954873032302427, -0.16051643562429213), Fitness - 16.78431041370941
Generation 3: Best individual - (-0.08047256311183462, 0.08748607229595336, -0.033675015554337134), Fitness - 27.03730906535697
Generation 4: Best individual - (0.09173450837429278, 0.2701480951847052, -0.04923516012359691), Fitness - 16.563306003309293
Generation 5: Best individual - (0.06931525412773312, 0.05650887471327237, -0.6038978838220976), Fitness - 10.126283111803192
Generation 6: Best individual - (0.09551296321256389, 0.3037721680736508, 0.09828297902264888), Fitness - 12.980004103640377
Generation 7: Best individual - (0.36966788594966404, 0.020338697069605532, 0.0003553226927579256), Fitness - 12.951119752237492
Generation 8: Best individual - (-0.13483009446855374, 0.031089470762199117, -0.5756450454760131), Fitness - 7.188833165750407
Generation 9: Best individual - (-0.063607907585292, -0.04456979821453749, -0.45149742141484683), Fitness - 9.073275131295658
Generation 10: Best individual - (0.011176844816005782, 0.38683718057120575, -0.4586668963552707), Fitness - -7.626399941503013
Generation 11: Best individual - (0.42384530453390673, -0.017961106838255136, -0.4918457018441281), Fitness - -9.349262068882442
Generation 12: Best individual - (0.33991169237820235, 0.31387850909754794, -0.4929268418150241), Fitness - -19.707456018578803
Generation 13: Best individual - (0.4287928945546883, 0.5471800246826355, -0.2740060489612387), Fitness - -20.6405201860691
Generation 14: Best individual - (0.4287928945546883, 0.5471800246826355, -0.2740060489612387), Fitness - -20.6405201860691
Generation 15: Best individual - (0.4287928945546883, 0.5471800246826355, -0.2740060489612387), Fitness - -20.6405201860691
Generation 16: Best individual - (0.4112385766210301, 0.5040715286796309, -0.3178766342306278), Fitness - -23.14240113919352
Generation 17: Best individual - (0.4147812368137274, 0.5041212646314481, -0.33531860658801094), Fitness - -24.24331785854991
Generation 18: Best individual - (0.4147812368137274, 0.5041212646314481, -0.33531860658801094), Fitness - -24.24331785854991
Generation 19: Best individual - (0.4147812368137274, 0.5041212646314481, -0.33531860658801094), Fitness - -24.24331785854991
Generation 20: Best individual - (0.30622472006746704, 0.4950670130302236, -0.44378439110485673), Fitness - -23.373347854338835
Generation 21: Best individual - (0.30622472006746704, 0.4950670130302236, -0.44378439110485673), Fitness - -23.373347854338835
Generation 22: Best individual - (0.3063202575245222, 0.4950822379662878, -0.4437892287140689), Fitness - -23.379191958933983
Generation 23: Best individual - (0.33335312358816604, 0.49763301297219154, -0.43647155864564097), Fitness - -24.763115313088132
Generation 24: Best individual - (0.40994188262594977, 0.4096825578747779, -0.42523970750461587), Fitness - -26.30751096118025
Generation 25: Best individual - (0.39756456561304454, 0.48504626799164063, -0.4088104102096408), Fitness - -26.9186217943733
Generation 26: Best individual - (0.38380825053477785, 0.5006633169359437, -0.4288824223540211), Fitness - -27.051357458861553
Generation 27: Best individual - (0.4057387303749639, 0.5681504161728289, -0.4242177696903832), Fitness - -26.948969587424035
Generation 28: Best individual - (0.40511387532462084, 0.47856788928174143, -0.36781261632617945), Fitness - -25.457363863631336
Generation 29: Best individual - (0.40671661145515176, 0.4783203340919677, -0.3735290334571525), Fitness - -25.777462228224486
Generation 30: Best individual - (0.40696169049260167, 0.4789885169063051, -0.3799677735908236), Fitness - -26.080160960767703
Result:
Thus the above program, a genetic algorithm for three-input nonlinear function optimization, was executed successfully.
EXP 8 Generate AND NOT function using McCulloch-Pitts neural net.
Aim: Understand the concept of the AND NOT function using a McCulloch-Pitts neural net with Python programming.
Algorithm:
Initialize Weights and Threshold:
o Set the weights for inputs A and B. Let's denote them as w_A and w_B.
o Set a threshold value.
Input Values:
o Provide binary input values for A and B.
Calculate Weighted Sum:
o Calculate the weighted sum of inputs: net = (w_A ⋅ A) + (w_B ⋅ B)
Apply Threshold:
o If the weighted sum is greater than or equal to the threshold, set the output to 1.
Otherwise, set the output to 0.
Output:
o The output is the result of the thresholding step.
PROGRAM:
class McCullochPittsNeuron:
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold

    def activate(self, inputs):
        # Weighted sum followed by a hard threshold
        net = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if net >= self.threshold else 0

# AND NOT(A, B): excitatory weight on A, inhibitory weight on B, threshold 1
neuron = McCullochPittsNeuron([1, -1], 1)

def andnot(a, b):
    # Input values
    inputs = [a, b]
    # Get the output of the neuron
    result = neuron.activate(inputs)
    return result
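# Print the full truth table (an assumed driver for the listing above)
for a in (0, 1):
    for b in (0, 1):
        print(f"ANDNOT({a}, {b}) = {andnot(a, b)}")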
OUTPUT:
ANDNOT(0, 0) = 0
ANDNOT(0, 1) = 0
ANDNOT(1, 0) = 1
ANDNOT(1, 1) = 0
Result:
Thus the above program for AND NOT using a McCulloch-Pitts neural net was executed successfully.
EXP 9 Write a program for solving linearly separable problem using Perceptron Model.
Aim: Understand the concept of a linearly separable problem using the Perceptron Model with the Python programming language.
Algorithm:
Initialize Weights and Bias:
o Initialize the weights and bias to small random values or zeros. These are the parameters that the perceptron will learn during training.
Provide Training Data:
o Prepare your training dataset. Each data point should have features and a
corresponding label indicating the class.
Add Bias to Inputs:
o Add a bias term (usually set to 1) to each input vector. This allows the model to learn a bias term as well.
Define the Perceptron Model:
o Create a perceptron class or function that includes methods for prediction and training.
Prediction:
o Implement the prediction function. Given the inputs and current weights, calculate the
weighted sum and apply an activation function (commonly a step function) to obtain the
predicted output (0 or 1).
Training Loop:
o Repeat the following steps for a specified number of epochs or until convergence:
o Iterate through each training data point.
o Compute the predicted output using the current weights.
o Update the weights based on the prediction error and the learning rate.
Weight Update:
o The weight update formula is often expressed as:
weight_i ← weight_i + learning_rate × (true_label − predicted_label) × input_i
This update adjusts the weights in the direction that reduces the error.
Convergence:
o Monitor the convergence by checking if the model's predictions match the true labels for all
training examples. If the model has converged, exit the training loop.
Testing:
o Use the trained perceptron to make predictions on new data to evaluate its
performance.
Adjust Hyperparameters:
o Fine-tune hyperparameters such as the learning rate and the number of epochs based on the model's performance on a validation set.
Iterate:
o If the model does not perform well, consider refining the features, adjusting the learning rate, or using a more complex model.
Final Model:
o Once satisfied with the model's performance, the final learned weights and bias can be used for making predictions on new, unseen data.
PROGRAM:
import numpy as np

class Perceptron:
    def __init__(self, input_size, learning_rate=0.1, epochs=100):
        # weights[0] acts as the bias term
        self.weights = np.zeros(input_size + 1)
        self.learning_rate = learning_rate
        self.epochs = epochs

    def predict(self, x):
        # Weighted sum plus bias, followed by a step activation
        summation = np.dot(x, self.weights[1:]) + self.weights[0]
        return 1 if summation > 0 else 0

    def train(self, training_data, labels):
        for _ in range(self.epochs):
            for x, label in zip(training_data, labels):
                error = label - self.predict(x)
                self.weights[1:] += self.learning_rate * error * x
                self.weights[0] += self.learning_rate * error

def main():
    # Example data for a linearly separable problem
    training_data = np.array([
        [1, 2],
        [2, 3],
        [3, 1],
        [4, 5],
        [5, 4],
    ])
    labels = np.array([0, 0, 0, 1, 1])

    # Create and train a Perceptron instance
    input_size = len(training_data[0])
    perceptron = Perceptron(input_size)
    perceptron.train(training_data, labels)

    test_data = np.array([[1, 2], [4, 4], [2, 3], [5, 5]])
    print("Test Results:")
    for test_input in test_data:
        prediction = perceptron.predict(test_input)
        print(f"Input: {test_input}, Prediction: {prediction}")
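# Run the demo (assumed entry point for the listing above)
if __name__ == "__main__":
    main()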
OUTPUT:
Test Results:
Input: [1 2], Prediction: 0
Input: [4 4], Prediction: 1
Input: [2 3], Prediction: 0
Input: [5 5], Prediction: 1
Result:
Thus the above program for solving a linearly separable problem using the Perceptron Model was executed successfully.
EXP 10 Perceptron net for an AND function with bipolar inputs and targets.
Aim: Understand the concept of a perceptron net for an AND function with bipolar inputs and targets.
Algorithm:
Define the Patterns:
o Choose 10 patterns to represent the digits (0 to 9). Each pattern should be a list of discrete values, typically +1 and -1.
Create a Hopfield Network Class:
o Define a Hopfield Network class that includes methods for training the network and recalling patterns.
Initialize the Hopfield Network:
o Create an instance of the Hopfield Network with a size that matches the length of the patterns.
Train the Network:
o Use the training method to train the network with the provided patterns.
Define a Test Input:
o Choose a test input, which may be a noisy or incomplete version of one of the patterns.
Recall the Pattern:
o Use the recall method to retrieve a pattern based on the test input.
Print the Original and Recalled Patterns:
o Print the original pattern and the pattern recalled by the network.
Run the Program:
o Put all the steps together and run the program.
Program:
import numpy as np
class HopfieldNetwork:
    def __init__(self, pattern_size):
        self.pattern_size = pattern_size
        self.weights = np.zeros((pattern_size, pattern_size))

# ... (the training and recall methods and the print_pattern helper are not reproduced in the printed listing) ...

print("\nRecalled Pattern:")
print_pattern(recalled_pattern.reshape(2, 5))
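Since the training and recall methods are missing from the printed listing, a minimal sketch of the standard Hebbian-learning versions (an assumption; the original methods are not reproduced) is:

import numpy as np

class HopfieldNetwork:
    def __init__(self, pattern_size):
        self.pattern_size = pattern_size
        self.weights = np.zeros((pattern_size, pattern_size))

    def train(self, patterns):
        # Hebbian rule: sum of outer products of the stored patterns, zeroed diagonal
        for p in patterns:
            self.weights += np.outer(p, p)
        np.fill_diagonal(self.weights, 0)

    def recall(self, pattern, steps=10):
        # Synchronous updates until the state (ideally) settles
        for _ in range(steps):
            pattern = np.where(self.weights @ pattern >= 0, 1, -1)
        return pattern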
OUTPUT:
Original Pattern:
+-++-
+++++
Recalled Pattern:
+-++-
++-++
Result:
Thus the above program for a perceptron net for an AND function with bipolar inputs and targets was executed successfully.