
Central University of Punjab

LAB FILE
Soft Computing (CST.527)

Submitted by

Fayiz
21mtcysc06
Dept. of Computer Science and Technology
Central University of Punjab

Submitted to

Er. Anam Bansal
Assistant Professor
Dept. of Computer Science and Technology
Central University of Punjab
INDEX

PRACTICAL NO    OBJECTIVE
1    Program to implement Single Layer Perceptron
2    Program to implement Multilayer Perceptron
3    Program to perform the following Fuzzy Operations
4    Implement AND gate function using perceptron networks for bipolar inputs and targets
5    Implement Single Point Crossover
6    Implement De Morgan's Law
7    To print the membership function of a Fuzzy set
8    Implement Hebb's learning rule
9    Implement a program for a crisp lambda cut set

LAB 1
Program to implement Single layer perceptron

from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import numpy as np

# Load the dataset and assign data and targets to variables
iris = datasets.load_iris()
x = iris.data
y = iris.target

# Print measurements for the first 5 samples
x[:5]

array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       [4.6, 3.1, 1.5, 0.2],
       [5. , 3.6, 1.4, 0.2]])

# Print all 150 target labels (the class of each sample)
y[:150]

array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

# Reload the dataset for plotting
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Plot graph
plt.figure(2, figsize=(8, 6))
plt.clf()
# Create scatter graph
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1, edgecolor='k')
# Axis labels
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xticks(())
plt.yticks(())
# Display graph
plt.show()

# Set aside 30% of the dataset for testing
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

# Scale features
sc = StandardScaler()
sc.fit(x_train)

# Apply the scaler to the split datasets
x_train_std = sc.transform(x_train)
x_test_std = sc.transform(x_test)

# Create a perceptron with at most 50 iterations over the dataset and a learning rate of 1
ppn = Perceptron(max_iter=50, eta0=1, verbose=1)

# Train the perceptron
ppn.fit(x_train_std, y_train)

-- Epoch 1
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 105, Avg. loss: 0.000000

Total training time: 0.00 seconds.
-- Epoch 2
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 210, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 3
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 315, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 4
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 420, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 5
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 525, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 6
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 630, Avg. loss: 0.000000
Total training time: 0.00 seconds.
Convergence after 6 epochs took 0.00 seconds
-- Epoch 1
Norm: 3.86, NNZs: 4, Bias: -2.000000, T: 105, Avg. loss: 0.710458
Total training time: 0.00 seconds.
-- Epoch 2
Norm: 3.55, NNZs: 4, Bias: -2.000000, T: 210, Avg. loss: 0.708580
Total training time: 0.00 seconds.
-- Epoch 3
Norm: 4.94, NNZs: 4, Bias: -3.000000, T: 315, Avg. loss: 0.546571
Total training time: 0.00 seconds.
-- Epoch 4
Norm: 3.17, NNZs: 4, Bias: -3.000000, T: 420, Avg. loss: 0.664591
Total training time: 0.00 seconds.
-- Epoch 5
Norm: 4.79, NNZs: 4, Bias: -2.000000, T: 525, Avg. loss: 0.646327
Total training time: 0.00 seconds.
-- Epoch 6
Norm: 5.21, NNZs: 4, Bias: -2.000000, T: 630, Avg. loss: 0.728946
Total training time: 0.00 seconds.
-- Epoch 7
Norm: 3.80, NNZs: 4, Bias: 0.000000, T: 735, Avg. loss: 0.480516
Total training time: 0.01 seconds.
-- Epoch 8
Norm: 4.60, NNZs: 4, Bias: -2.000000, T: 840, Avg. loss: 0.563502
Total training time: 0.01 seconds.
-- Epoch 9
Norm: 4.23, NNZs: 4, Bias: -2.000000, T: 945, Avg. loss: 0.616793
Total training time: 0.01 seconds.
-- Epoch 10
Norm: 4.60, NNZs: 4, Bias: -2.000000, T: 1050, Avg. loss: 0.673299
Total training time: 0.01 seconds.

-- Epoch 11
Norm: 5.09, NNZs: 4, Bias: -1.000000, T: 1155, Avg. loss: 0.472257
Total training time: 0.01 seconds.
-- Epoch 12
Norm: 5.17, NNZs: 4, Bias: -2.000000, T: 1260, Avg. loss: 0.589533
Total training time: 0.01 seconds.
-- Epoch 13
Norm: 5.15, NNZs: 4, Bias: -1.000000, T: 1365, Avg. loss: 0.619679
Total training time: 0.01 seconds.
-- Epoch 14
Norm: 4.86, NNZs: 4, Bias: -5.000000, T: 1470, Avg. loss: 0.587288
Total training time: 0.01 seconds.
-- Epoch 15
Norm: 4.90, NNZs: 4, Bias: -1.000000, T: 1575, Avg. loss: 0.659759
Total training time: 0.01 seconds.
-- Epoch 16
Norm: 5.90, NNZs: 4, Bias: -1.000000, T: 1680, Avg. loss: 0.602657
Total training time: 0.01 seconds.
Convergence after 16 epochs took 0.01 seconds
-- Epoch 1
Norm: 4.44, NNZs: 4, Bias: -3.000000, T: 105, Avg. loss: 0.167079
Total training time: 0.00 seconds.
-- Epoch 2
Norm: 4.58, NNZs: 4, Bias: -4.000000, T: 210, Avg. loss: 0.031087
Total training time: 0.00 seconds.
-- Epoch 3
Norm: 6.55, NNZs: 4, Bias: -3.000000, T: 315, Avg. loss: 0.021161
Total training time: 0.00 seconds.
-- Epoch 4
Norm: 5.66, NNZs: 4, Bias: -5.000000, T: 420, Avg. loss: 0.090071
Total training time: 0.00 seconds.
-- Epoch 5
Norm: 6.48, NNZs: 4, Bias: -5.000000, T: 525, Avg. loss: 0.092820
Total training time: 0.00 seconds.
-- Epoch 6
Norm: 6.33, NNZs: 4, Bias: -6.000000, T: 630, Avg. loss: 0.013560
Total training time: 0.00 seconds.
-- Epoch 7
Norm: 6.36, NNZs: 4, Bias: -6.000000, T: 735, Avg. loss: 0.022056
Total training time: 0.00 seconds.
-- Epoch 8
Norm: 6.41, NNZs: 4, Bias: -6.000000, T: 840, Avg. loss: 0.030255
Total training time: 0.00 seconds.
-- Epoch 9
Norm: 7.29, NNZs: 4, Bias: -7.000000, T: 945, Avg. loss: 0.047561
Total training time: 0.00 seconds.
-- Epoch 10

Norm: 8.10, NNZs: 4, Bias: -6.000000, T: 1050, Avg. loss: 0.018981
Total training time: 0.00 seconds.
-- Epoch 11
Norm: 8.05, NNZs: 4, Bias: -6.000000, T: 1155, Avg. loss: 0.042576
Total training time: 0.00 seconds.
Convergence after 11 epochs took 0.00 seconds
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.0s finished

Perceptron(eta0=1, max_iter=50, verbose=1)

# Apply the trained perceptron to the test data to make predictions
y_pred = ppn.predict(x_test_std)

# Print the predicted y test data
y_pred

array([2, 1, 1, 0, 2, 1, 1, 1, 2, 2, 0, 0, 2, 1, 2, 1, 1, 0, 0, 2, 2, 0,
       1, 2, 2, 1, 1, 0, 0, 2, 2, 1, 1, 1, 0, 0, 2, 2, 2, 2, 1, 0, 1, 2,
       0])

# Print the true y test data
y_test

array([2, 1, 1, 0, 2, 1, 1, 1, 2, 2, 0, 0, 2, 1, 2, 1, 1, 0, 0, 1, 2, 0,
       0, 2, 2, 0, 1, 0, 0, 2, 1, 1, 1, 1, 0, 0, 1, 2, 2, 2, 2, 0, 1, 2,
       0])

# Print the accuracy of the implementation
print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))

Accuracy: 0.87
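To see which classes the trained perceptron confuses, a confusion matrix can also be printed (an optional sketch, not part of the original lab):

from sklearn.metrics import confusion_matrix
# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, y_pred))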

# Import Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Plot the algorithm's decision boundary. Code based on:
# http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
def plot_decision_boundary(classifier, X, y, title):
    xmin, xmax = np.min(X[:, 0]) - 0.05, np.max(X[:, 0]) + 0.05
    ymin, ymax = np.min(X[:, 1]) - 0.05, np.max(X[:, 1]) + 0.05
    step = 0.01

    cm = plt.cm.coolwarm_r

    thr = 0.0
    xx, yy = np.meshgrid(np.arange(xmin - thr, xmax + thr, step),
                         np.arange(ymin - thr, ymax + thr, step))

    if hasattr(classifier, 'decision_function'):
        Z = classifier.decision_function(np.hstack((xx.ravel()[:, np.newaxis],
                                                    yy.ravel()[:, np.newaxis])))
    else:
        Z = classifier.predict_proba(np.hstack((xx.ravel()[:, np.newaxis],
                                                yy.ravel()[:, np.newaxis])))[:, 1]

    Z = Z.reshape(xx.shape)

    plt.contourf(xx, yy, Z, cmap=cm, alpha=0.8)
    plt.colorbar()
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=ListedColormap(['#FF0000', '#0000FF']), alpha=0.6)
    plt.xlim(xmin, xmax)
    plt.ylim(ymin, ymax)
    plt.xticks((0.0, 1.0))
    plt.yticks((0.0, 1.0))
    plt.title(title)

# Setting the input samples.
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]], dtype=np.double)
y_XOR = np.array([0, 1, 1, 0])

# Create a MLPClassifier.
mlp = MLPClassifier(hidden_layer_sizes=(5,),
                    activation='tanh',
                    max_iter=10000,
                    random_state=10)

# Train model
mlp.fit(X, y_XOR)

# Plot and display the decision boundary
plot_decision_boundary(mlp, X, y_XOR, 'XOR')
plt.show()

# Get predicted values and print
pred = mlp.predict_proba(X)
print("MLP's XOR probabilities:\n[class0, class1]\n{}".format(pred))

LAB 2
Program to implement Multilayer Perceptron

import sklearn.datasets
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
breast_cancer = sklearn.datasets.load_breast_cancer()  # Binary classification dataset
X = breast_cancer.data
y = breast_cancer.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, test_size=0.2)

# Standardize the features
sc_X = StandardScaler()
X_trainscaled = sc_X.fit_transform(X_train)
X_testscaled = sc_X.transform(X_test)

# Train an MLP with four hidden layers and ReLU activation
clf = MLPClassifier(hidden_layer_sizes=(256, 128, 64, 32),
                    activation="relu",
                    random_state=1).fit(X_trainscaled, y_train)
y_pred = clf.predict(X_testscaled)
print(clf.score(X_testscaled, y_test))

0.9736842105263158
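Since clf.score on a classifier is just accuracy, the same value can be recovered from the predictions already computed (a small optional check, not in the original lab):

from sklearn.metrics import accuracy_score
# Should print the same value as clf.score(X_testscaled, y_test)
print(accuracy_score(y_test, y_pred))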

LAB 3
Program to perform the following Fuzzy Operations:

a) Program to perform the Union of two fuzzy sets

A={"a":0.4,"b":0.6,"c":0.3,"d":0.7}
B={"a":0.2,"b":1.0,"c":0.3,"d":0.8}
U={}
print("The First Fuzzy Set is:",A)
print("The Second Fuzzy Set is:",B)
for A_key,B_key in zip(A,B):
A_value=A[A_key]
B_value=B[B_key]

if A_value>B_value:
U[A_key]=A_value
else:
U[B_key]=B_value
print("The Union of A and B is :",U)

The First Fuzzy Set is: {'a': 0.4, 'b': 0.6, 'c': 0.3, 'd': 0.7}
The Second Fuzzy Set is: {'a': 0.2, 'b': 1.0, 'c': 0.3, 'd': 0.8}
The Union of A and B is : {'a': 0.4, 'b': 1.0, 'c': 0.3, 'd': 0.8}
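Since both sets here are defined over the same elements, the same union can be written as a one-line dict comprehension (an equivalent sketch, not from the original lab):

# Max-based fuzzy union, assuming A and B share the same keys
U = {k: max(A[k], B[k]) for k in A}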

b) Program to perform the Intersection of two fuzzy sets

A = {"a": 0.4, "b": 0.6, "c": 0.3, "d": 0.7}
B = {"a": 0.2, "b": 1.0, "c": 0.3, "d": 0.8}
U = {}
print("The First Fuzzy Set is:", A)
print("The Second Fuzzy Set is:", B)

# Intersection: take the minimum membership value for each element
for A_key, B_key in zip(A, B):
    A_value = A[A_key]
    B_value = B[B_key]
    if A_value < B_value:
        U[A_key] = A_value
    else:
        U[B_key] = B_value
print("The Intersection of A and B is :", U)

The First Fuzzy Set is: {'a': 0.4, 'b': 0.6, 'c': 0.3, 'd': 0.7}
The Second Fuzzy Set is: {'a': 0.2, 'b': 1.0, 'c': 0.3, 'd': 0.8}
The Intersection of A and B is : {'a': 0.2, 'b': 0.6, 'c': 0.3, 'd': 0.7}

c) Program to perform the Complement of fuzzy set

A={"a":0.4,"b":0.6,"c":0.3,"d":0.7}
C={}
print("The First Fuzzy Set is:",A)
for A_key in A:
C[A_key]=1-A[A_key]
print("The Complement of A is:",C)

The First Fuzzy Set is: {'a': 0.4, 'b': 0.6, 'c': 0.3, 'd': 0.7}
The Complement of A is: {'a': 0.6, 'b': 0.4, 'c': 0.7, 'd':
0.30000000000000004}

d) Program to perform the Set Difference

A = {"a": 0.2, "b": 0.3, "c": 0.6, "d": 0.6}


B = {"a": 0.9, "b": 0.9, "c": 0.4, "d": 0.5}
D={}
print('The First Fuzzy Set is :', A)
print('The Second Fuzzy Set is :', B)

for A_key, B_key in zip(A, B):


A_value = A[A_key]
B_value = B[B_key]
B_value = 1 - B_value

if A_value < B_value:


D[A_key] = A_value
else:
D[B_key] = B_value

print('Fuzzy Set Difference is :', D)

The First Fuzzy Set is : {'a': 0.2, 'b': 0.3, 'c': 0.6, 'd': 0.6}
The Second Fuzzy Set is : {'a': 0.9, 'b': 0.9, 'c': 0.4, 'd': 0.5}
Fuzzy Set Difference is : {'a': 0.09999999999999998, 'b':
0.09999999999999998, 'c': 0.6, 'd': 0.5}

from pyit2fls import T1FS, trapezoid_mf
from numpy import linspace

domain = linspace(0., 1., 100)
mySet = T1FS(domain, trapezoid_mf, [0, 0.4, 0.6, 1., 1.])
mySet.plot()

12
from pyit2fls import T1FS, gaussian_mf, T1FS_plot
from numpy import linspace

domain = linspace(0., 1., 100)
Small = T1FS(domain, gaussian_mf, [0, 0.15, 1.])
Medium = T1FS(domain, gaussian_mf, [0.5, 0.15, 1.])
Large = T1FS(domain, gaussian_mf, [1., 0.15, 1.])
T1FS_plot(Small, Medium, Large, legends=["Small", "Medium", "large"])

from numpy import abs as npabs
from numpy import linspace
from pyit2fls import T1FS

# Generalized bell-shaped membership function
def gbell_mf(x, params):
    return params[3] / (1 + npabs((x - params[2]) / params[0]) ** (2 * params[1]))

domain = linspace(0., 1., 100)
mySet = T1FS(domain, gbell_mf, [0.2, 2., 0.5, 1.])
mySet.plot()

LAB 4
Implement AND GATE function using perceptron
networks for bipolar inputs and targets

import numpy as np

# Step activation function
def function(y):
    if y > 0:
        return 1
    else:
        return 0

# Perceptron model: weighted sum plus bias, passed through the activation
def model(x, w, b):
    y = np.dot(w, x) + b
    u = function(y)
    return u

def AND(X):
    w = np.array([0.1, 0.1])
    b = -0.1
    return model(X, w, b)

case1 = np.array([0, 1])
case2 = np.array([1, 0])
case3 = np.array([0, 0])
case4 = np.array([1, 1])
print("AND({},{})={}".format(0, 1, AND(case1)))
print("AND({},{})={}".format(1, 0, AND(case2)))
print("AND({},{})={}".format(0, 0, AND(case3)))
print("AND({},{})={}".format(1, 1, AND(case4)))

AND(0,1)=0
AND(1,0)=0
AND(0,0)=0
AND(1,1)=1
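The code above uses binary (0/1) inputs and outputs; a minimal sketch of the bipolar (-1/+1) version the title asks for might look like this (the weights and bias here are illustrative assumptions, not taken from the lab):

import numpy as np

# Hedged sketch: AND gate with bipolar inputs and targets (-1 and +1).
# The weights and bias below are illustrative assumptions.
def bipolar_and(x):
    w = np.array([1.0, 1.0])
    b = -1.0
    return 1 if np.dot(w, x) + b > 0 else -1

for x in ([-1, -1], [-1, 1], [1, -1], [1, 1]):
    print("AND({},{}) = {}".format(x[0], x[1], bipolar_and(np.array(x))))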

LAB 5
Implement Single Point Crossover

# Library to generate a random number
import random

# Function implementing single-point crossover
def crossover(l, q):
    # Convert the strings to lists to perform the crossover
    l = list(l)
    q = list(q)

    # Generate a random crossover point
    k = random.randint(0, 15)
    print("Crossover point :", k)

    # Interchange the genes after the crossover point
    for i in range(k, len(l)):
        l[i], q[i] = q[i], l[i]
    l = ''.join(l)
    q = ''.join(q)
    print(l)
    print(q, "\n\n")
    return l, q

# Parent chromosomes
s = '1100110110110011'
p = '1000110011011111'
print("Parents")
print("P1 :", s)
print("P2 :", p, "\n")

# Call the function and store the offspring for the next generation's crossover
for i in range(5):
    print("Generation ", i+1, "Children :")
    s, p = crossover(s, p)

Parents
P1 : 1100110110110011
P2 : 1000110011011111

Generation 1 Children :
Crossover point : 8
1100110111011111
1000110010110011

Generation 2 Children :
Crossover point : 7
1100110010110011
1000110111011111

Generation 3 Children :
Crossover point : 13
1100110010110111
1000110111011011

Generation 4 Children :
Crossover point : 7
1100110111011011
1000110010110111

Generation 5 Children :
Crossover point : 2
1100110010110111
1000110111011011
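Because the crossover point comes from random.randint, each run prints different children; seeding the generator makes runs reproducible (an optional addition, not in the original lab):

import random
random.seed(42)  # hypothetical seed; any fixed integer makes the crossover points repeatable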

LAB 6
Implement De Morgan’s law
def union(A, B):
    u = {}
    for i in A:
        if i in B:
            u[i] = max(A[i], B[i])
        else:
            u[i] = A[i]
    for i in B:
        if i not in A:
            u[i] = B[i]
    return u

def intersection(A, B):
    inter = {}
    for i in A:
        if i in B:
            inter[i] = min(A[i], B[i])
        else:
            inter[i] = A[i]
    for i in B:
        if i not in A:
            inter[i] = B[i]
    return inter

def difference(A, B):
    comp_b = complement(B)
    diff = intersection(A, comp_b)  # A - B = A intersection B'
    return diff

def complement(A):
    comp_a = {}
    for i in A:
        comp_a[i] = round(1 - A[i], 1)
    return comp_a

def morgan(A, B):
    # Case 1: (A n B)' = A' u B'
    p = intersection(A, B)    # p -> A intersection B
    p_bar = complement(p)     # p_bar -> (A intersection B)'
    comp_a = complement(A)
    comp_b = complement(B)
    q = union(comp_a, comp_b)
    if p_bar == q:
        print("(A n B)' = ", p_bar)
        print(" A' u B' = ", q)
        print("Thus, (A n B)' = A' u B'")
        print("Law 1 proved")

    # Case 2: (A u B)' = A' n B'
    p = union(A, B)
    p_bar = complement(p)
    comp_a = complement(A)
    comp_b = complement(B)
    q = intersection(comp_a, comp_b)
    if p_bar == q:
        print("\n(A u B)' = ", p_bar)
        print(" A' n B' = ", q)
        print("Thus, (A u B)' = A' n B'")
        print("Law 2 proved")
        print("\nHence De Morgan's Law is proved")

# Read set A and its membership values
a = []
n = input("Enter the elements of set A = ")
a = n.split(',')
print("Enter the membership value for each element")
mem_a = {}
for i in a:
    print(i, "= ", end='')
    mem_a[i] = float(input())
print("A = ", mem_a)

# Read set B and its membership values
b = []
n = input("\nEnter the elements of set B = ")
b = n.split(',')
print("Enter the membership value for each element")
mem_b = {}
for i in b:
    print(i, "= ", end='')
    mem_b[i] = float(input())
print("B = ", mem_b)

# Menu-driven loop
print("\n1.Union\n2.Intersection\n3.Difference\n4.Complement\n5.De Morgan's Law\n6.Exit")
n = int(input("Enter your choice = "))
while n < 6:
    if n == 1:
        u = union(mem_a, mem_b)
        print("Union = ", u)
    elif n == 2:
        inter = intersection(mem_a, mem_b)
        print("Intersection = ", inter)
    elif n == 3:
        diff = difference(mem_a, mem_b)
        print("Difference = ", diff)
    elif n == 4:
        comp_a = complement(mem_a)
        comp_b = complement(mem_b)
        print("A's complement = ", comp_a)
        print("B's complement = ", comp_b)
    else:
        morgan(mem_a, mem_b)
    print("\n1.Union\n2.Intersection\n3.Difference\n4.Complement\n5.De Morgan's Law\n6.Exit")
    n = int(input("Enter your choice = "))

Enter the elements of set A = 5
Enter the membership value for each element
5 = 2
A = {'5': 2.0}

Enter the elements of set B = 6
Enter the membership value for each element
6 = 3
B = {'6': 3.0}

1.Union
2.Intersection
3.Difference
4.Complement
5.De Morgan's Law
6.Exit
Enter your choice = 5
(A n B)' = {'5': -1.0, '6': -2.0}
A' u B' = {'5': -1.0, '6': -2.0}
Thus, (A n B)' = A' u B'
Law 1 proved

(A u B)' = {'5': -1.0, '6': -2.0}
 A' n B' = {'5': -1.0, '6': -2.0}
Thus, (A u B)' = A' n B'
Law 2 proved

Hence De Morgan's Law is proved
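A compact non-interactive sketch that checks both laws directly, using the functions defined above (the example sets here are illustrative assumptions, not from the lab run):

# Hedged sketch: verify both De Morgan laws on fixed example sets.
A = {"a": 0.4, "b": 0.6, "c": 0.3}
B = {"a": 0.2, "b": 1.0, "c": 0.5}

law1 = complement(intersection(A, B)) == union(complement(A), complement(B))
law2 = complement(union(A, B)) == intersection(complement(A), complement(B))
print("Law 1 holds:", law1)  # (A n B)' == A' u B'
print("Law 2 holds:", law2)  # (A u B)' == A' n B'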

LAB 7
To print the membership function of a Fuzzy set
from numpy import linspace
import matplotlib.pyplot as plt
from pyit2fls import tri_mf
domain = linspace(0., 1., 1001)
tri = tri_mf(domain, [0., 0.5, 1., 1.])
plt.figure()
plt.plot(domain, tri, label="Triangular MF")
plt.grid(True)
plt.legend()
plt.xlabel("Domain")
plt.ylabel("Membership function")
plt.show()

from pyit2fls import T1FS, trapezoid_mf
from numpy import linspace

domain = linspace(0., 1., 100)
mySet = T1FS(domain, trapezoid_mf, [0, 0.4, 0.6, 1., 1.])
mySet.plot()

from pyit2fls import T1FS, gaussian_mf, T1FS_plot

from numpy import linspace
domain = linspace(0., 1., 100)
Small = T1FS(domain, gaussian_mf, [0, 0.15, 1.])
Medium = T1FS(domain, gaussian_mf, [0.5, 0.15, 1.])
Large = T1FS(domain, gaussian_mf, [1., 0.15, 1.])
T1FS_plot(Small, Medium, Large, legends=["Small", "Medium", "large"])

from numpy import abs as npabs
from numpy import linspace
from pyit2fls import T1FS

# Generalized bell-shaped membership function
def gbell_mf(x, params):
    return params[3] / (1 + npabs((x - params[2]) / params[0]) ** (2 * params[1]))

domain = linspace(0., 1., 100)
mySet = T1FS(domain, gbell_mf, [0.2, 2., 0.5, 1.])
mySet.plot()

LAB 8
Implement Hebb's learning rule

import numpy as np

# Bipolar threshold (signum-style) activation
def threshold(x):
    if x >= 0:
        return 1
    else:
        return -1

def print_func(loop_var, net, sig_net, w, delta_w):
    print("i: " + str(loop_var))
    print("net: " + str(net))
    print("sig_net: " + str(sig_net))
    print("delta_w: " + str(delta_w))
    print("w: " + str(w))
    print("-------------------\n")

def compute():
    try:
        n = int(input("Enter number of input vectors: "))
        x = []
        r = 1  # Learning constant (c)

        # Read the input vectors
        for i in range(0, n):
            raw_str1 = str(input("Enter values for vector " + str(i+1) + ": "))
            input_vector = raw_str1.split(' ')
            ip_list = [float(ele) for ele in input_vector]
            x.append(np.array(ip_list, dtype=np.float64))

        # Read the initial weight vector
        raw_str3 = str(input("Enter initial weight vector: "))
        w_list = [float(ele) for ele in raw_str3.split(' ')]

        # Hebb's rule: w(new) = w(old) + c * f(net) * x
        delta_w = 0
        for i in range(0, n):
            net = np.transpose(np.asarray(w_list)).dot(np.asarray(x[i]))
            sig_net = threshold(net)
            delta_w = r * sig_net * x[i]
            w_list = np.add(np.asarray(w_list), delta_w)
            print_func(i, net, sig_net, w_list, delta_w)
    except Exception as e:
        print("Error.. " + str(e))

if __name__ == '__main__':
    compute()
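For a quick non-interactive check, the same update can be applied to fixed data (a hedged sketch; the example vectors and initial weights below are illustrative assumptions, not from the lab):

# Hedged sketch: Hebb's rule on fixed example data
import numpy as np

x = [np.array([1.0, -2.0, 1.5, 0.0]), np.array([1.0, -0.5, -2.0, -1.5])]
w = np.array([1.0, -1.0, 0.0, 0.5])
c = 1  # learning constant

for xi in x:
    net = w.dot(xi)
    f_net = 1 if net >= 0 else -1   # bipolar threshold, as in compute()
    w = w + c * f_net * xi          # Hebbian update
    print("net =", net, "-> w =", w)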

LAB 9
To create a crisp set using a lambda cut relation

import numpy as np

# Fuzzy relation matrix
rel = np.array([[0.1, 0.6, 0.8, 1], [1, 0.7, 0.4, 0.2], [0, 0.6, 1, 0.5], [0.1, 0.5, 1, 0.6]])
print("Input data:\n", rel)

x = 0.6  # lambda cut value

# Lambda cut: membership >= lambda maps to 1, otherwise 0
out = np.empty((4, 4), dtype=int)
for i in range(0, 4):
    for j in range(0, 4):
        if rel[i][j] >= x:
            out[i][j] = 1
        else:
            out[i][j] = 0
print("final output:\n", out)

Input data:
[[0.1 0.6 0.8 1. ]
[1. 0.7 0.4 0.2]
[0. 0.6 1. 0.5]
[0.1 0.5 1. 0.6]]
final output:
[[0 1 1 1]
[1 1 0 0]
[0 1 1 0]
[0 0 1 1]]
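The same lambda cut can be written as a single vectorized NumPy operation (a sketch equivalent to the nested loops above):

# Hedged sketch: vectorized lambda cut, equivalent to the loop version
out = (rel >= x).astype(int)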
