SC Lab File Fayiz PDF
SC Lab File Fayiz PDF
LAB FILE
Soft Computing (CST.527)
Submitted by
Fayiz
21mtcysc06
Dept Computer Science and Technology
Central University of Punjab
Submitted to
Er . Anam Bansal
Assistant Professor
Dept Computer Science and Technology
Central University of Punjab
INDEX
PRACTICAL OBJECTIVE
N0
1 Program to implement Single layer perceptron
2 Program to implement Multilayer Perceptron
3 Program to perform following Fuzzy Operations
4 Implement AND GATE function using perceptron networks for
bipolar inputs and targets
5 Implement Single Point Crossover
6 Implement De Morgan’s law
7 To print membership function of Fuzzy set
8 Implement Hebb's learning rule
9 Implement program for crisp lambda set
2
LAB 1
Program to implement Single layer perceptron
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
# Import dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target
3
# Plot graph
plt.figure(2, figsize=(8, 6))
plt.clf()
# Create scatter graph
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1, edgecolor='k')
# Axis labels
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xticks(())
plt.yticks(())
# Display graph
plt.show()
# Scale features
sc = StandardScaler()
sc.fit(x_train)
# Create a perceptron with 50 iterations over the dataset, and a learning rate of 0.3
ppn = Perceptron(max_iter=50, eta0=1, verbose=1)
-- Epoch 1
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 105, Avg. loss: 0.000000
4
Total training time: 0.00 seconds.
-- Epoch 2
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 210, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 3
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 315, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 4
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 420, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 5
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 525, Avg. loss: 0.000000
Total training time: 0.00 seconds.
-- Epoch 6
Norm: 0.70, NNZs: 4, Bias: -1.000000, T: 630, Avg. loss: 0.000000
Total training time: 0.00 seconds.
Convergence after 6 epochs took 0.00 seconds
-- Epoch 1
Norm: 3.86, NNZs: 4, Bias: -2.000000, T: 105, Avg. loss: 0.710458
Total training time: 0.00 seconds.
-- Epoch 2
Norm: 3.55, NNZs: 4, Bias: -2.000000, T: 210, Avg. loss: 0.708580
Total training time: 0.00 seconds.
-- Epoch 3
Norm: 4.94, NNZs: 4, Bias: -3.000000, T: 315, Avg. loss: 0.546571
Total training time: 0.00 seconds.
-- Epoch 4
Norm: 3.17, NNZs: 4, Bias: -3.000000, T: 420, Avg. loss: 0.664591
Total training time: 0.00 seconds.
-- Epoch 5
Norm: 4.79, NNZs: 4, Bias: -2.000000, T: 525, Avg. loss: 0.646327
Total training time: 0.00 seconds.
-- Epoch 6
Norm: 5.21, NNZs: 4, Bias: -2.000000, T: 630, Avg. loss: 0.728946
Total training time: 0.00 seconds.
-- Epoch 7
Norm: 3.80, NNZs: 4, Bias: 0.000000, T: 735, Avg. loss: 0.480516
Total training time: 0.01 seconds.
-- Epoch 8
Norm: 4.60, NNZs: 4, Bias: -2.000000, T: 840, Avg. loss: 0.563502
Total training time: 0.01 seconds.
-- Epoch 9
Norm: 4.23, NNZs: 4, Bias: -2.000000, T: 945, Avg. loss: 0.616793
Total training time: 0.01 seconds.
-- Epoch 10
Norm: 4.60, NNZs: 4, Bias: -2.000000, T: 1050, Avg. loss: 0.673299
Total training time: 0.01 seconds.
5
-- Epoch 11
Norm: 5.09, NNZs: 4, Bias: -1.000000, T: 1155, Avg. loss: 0.472257
Total training time: 0.01 seconds.
-- Epoch 12
Norm: 5.17, NNZs: 4, Bias: -2.000000, T: 1260, Avg. loss: 0.589533
Total training time: 0.01 seconds.
-- Epoch 13
Norm: 5.15, NNZs: 4, Bias: -1.000000, T: 1365, Avg. loss: 0.619679
Total training time: 0.01 seconds.
-- Epoch 14
Norm: 4.86, NNZs: 4, Bias: -5.000000, T: 1470, Avg. loss: 0.587288
Total training time: 0.01 seconds.
-- Epoch 15
Norm: 4.90, NNZs: 4, Bias: -1.000000, T: 1575, Avg. loss: 0.659759
Total training time: 0.01 seconds.
-- Epoch 16
Norm: 5.90, NNZs: 4, Bias: -1.000000, T: 1680, Avg. loss: 0.602657
Total training time: 0.01 seconds.
Convergence after 16 epochs took 0.01 seconds
-- Epoch 1
Norm: 4.44, NNZs: 4, Bias: -3.000000, T: 105, Avg. loss: 0.167079
Total training time: 0.00 seconds.
-- Epoch 2
Norm: 4.58, NNZs: 4, Bias: -4.000000, T: 210, Avg. loss: 0.031087
Total training time: 0.00 seconds.
-- Epoch 3
Norm: 6.55, NNZs: 4, Bias: -3.000000, T: 315, Avg. loss: 0.021161
Total training time: 0.00 seconds.
-- Epoch 4
Norm: 5.66, NNZs: 4, Bias: -5.000000, T: 420, Avg. loss: 0.090071
Total training time: 0.00 seconds.
-- Epoch 5
Norm: 6.48, NNZs: 4, Bias: -5.000000, T: 525, Avg. loss: 0.092820
Total training time: 0.00 seconds.
-- Epoch 6
Norm: 6.33, NNZs: 4, Bias: -6.000000, T: 630, Avg. loss: 0.013560
Total training time: 0.00 seconds.
-- Epoch 7
Norm: 6.36, NNZs: 4, Bias: -6.000000, T: 735, Avg. loss: 0.022056
Total training time: 0.00 seconds.
-- Epoch 8
Norm: 6.41, NNZs: 4, Bias: -6.000000, T: 840, Avg. loss: 0.030255
Total training time: 0.00 seconds.
-- Epoch 9
Norm: 7.29, NNZs: 4, Bias: -7.000000, T: 945, Avg. loss: 0.047561
Total training time: 0.00 seconds.
-- Epoch 10
6
Norm: 8.10, NNZs: 4, Bias: -6.000000, T: 1050, Avg. loss: 0.018981
Total training time: 0.00 seconds.
-- Epoch 11
Norm: 8.05, NNZs: 4, Bias: -6.000000, T: 1155, Avg. loss: 0.042576
Total training time: 0.00 seconds.
Convergence after 11 epochs took 0.00 seconds
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent
workers.
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 0.0s finished
Out[17]:
Perceptron(eta0=1, max_iter=50, verbose=1)
# Apply the trained perceptron on the X data to make predicts for the y test data
y_pred = ppn.predict(x_test_std)
array([2, 1, 1, 0, 2, 1, 1, 1, 2, 2, 0, 0, 2, 1, 2, 1, 1, 0, 0, 2, 2,
0,
1, 2, 2, 1, 1, 0, 0, 2, 2, 1, 1, 1, 0, 0, 2, 2, 2, 2, 1, 0, 1,
2,
0])
array([2, 1, 1, 0, 2, 1, 1, 1, 2, 2, 0, 0, 2, 1, 2, 1, 1, 0, 0, 1, 2,
0,
0, 2, 2, 0, 1, 0, 0, 2, 1, 1, 1, 1, 0, 0, 1, 2, 2, 2, 2, 0, 1,
2,
0])
Accuracy: 0.87
# Import Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
7
# Plot algorithm decision boundary.
# Code based on
http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
def plot_decision_boundary(classifier, X, y, title):
xmin, xmax = np.min(X[:, 0]) - 0.05, np.max(X[:, 0]) + 0.05
ymin, ymax = np.min(X[:, 1]) - 0.05, np.max(X[:, 1]) + 0.05
step = 0.01
cm = plt.cm.coolwarm_r
thr = 0.0
xx, yy = np.meshgrid(np.arange(xmin - thr, xmax + thr, step), np.arange(ymin - thr, ymax
+ thr, step))
if hasattr(classifier, 'decision_function'):
Z = classifier.decision_function(np.hstack((xx.ravel()[:, np.newaxis], yy.ravel()[:,
np.newaxis])))
else:
Z = classifier.predict_proba(np.hstack((xx.ravel()[:, np.newaxis], yy.ravel()[:,
np.newaxis])))[:, 1]
Z = Z.reshape(xx.shape)
8
# Get predicted values and print
pred = mlp.predict_proba(X)
print("MLP's XOR probabilities:\n[class0, class1]\n{}".format(pred))
9
LAB 2
Program to implement Multilayer Perceptron
import sklearn.datasets
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
breast_cancer = sklearn.datasets.load_breast_cancer() # Binary classification dataset
X = breast_cancer.data
y = breast_cancer.target
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=1, test_size=0.2)
sc_X = StandardScaler()
X_trainscaled=sc_X.fit_transform(X_train)
X_testscaled=sc_X.transform(X_test)
clf =
MLPClassifier(hidden_layer_sizes=(256,128,64,32),activation="relu",random_state=1).fit(X
_trainscaled, y_train)
y_pred=clf.predict(X_testscaled)
print(clf.score(X_testscaled, y_test))
0.9736842105263158
10
LAB 3
Program to perform following Fuzzy Operations:
A={"a":0.4,"b":0.6,"c":0.3,"d":0.7}
B={"a":0.2,"b":1.0,"c":0.3,"d":0.8}
U={}
print("The First Fuzzy Set is:",A)
print("The Second Fuzzy Set is:",B)
for A_key,B_key in zip(A,B):
A_value=A[A_key]
B_value=B[B_key]
if A_value>B_value:
U[A_key]=A_value
else:
U[B_key]=B_value
print("The Union of A and B is :",U)
The First Fuzzy Set is: {'a': 0.4, 'b': 0.6, 'c': 0.3, 'd': 0.7}
The Second Fuzzy Set is: {'a': 0.2, 'b': 1.0, 'c': 0.3, 'd': 0.8}
The Union of A and B is : {'a': 0.4, 'b': 1.0, 'c': 0.3, 'd': 0.8}
if A_value<B_value:
U[A_key]=A_value
else:
U[B_key]=B_value
print("The Interscetion of A and B is :",U)
The First Fuzzy Set is: {'a': 0.4, 'b': 0.6, 'c': 0.3, 'd': 0.7}
The Second Fuzzy Set is: {'a': 0.2, 'b': 1.0, 'c': 0.3, 'd': 0.8}
The Union of A and B is : {'a': 0.2, 'b': 0.6, 'c': 0.3, 'd': 0.7}
11
A={"a":0.4,"b":0.6,"c":0.3,"d":0.7}
C={}
print("The First Fuzzy Set is:",A)
for A_key in A:
C[A_key]=1-A[A_key]
print("The Complement of A is:",C)
The First Fuzzy Set is: {'a': 0.4, 'b': 0.6, 'c': 0.3, 'd': 0.7}
The Complement of A is: {'a': 0.6, 'b': 0.4, 'c': 0.7, 'd':
0.30000000000000004}
The First Fuzzy Set is : {'a': 0.2, 'b': 0.3, 'c': 0.6, 'd': 0.6}
The Second Fuzzy Set is : {'a': 0.9, 'b': 0.9, 'c': 0.4, 'd': 0.5}
Fuzzy Set Difference is : {'a': 0.09999999999999998, 'b':
0.09999999999999998, 'c': 0.6, 'd': 0.5}
12
from pyit2fls import T1FS, gaussian_mf, T1FS_plot
from numpy import linspace
13
14
LAB 4
Implement AND GATE function using perceptron
networks for bipolar inputs and targets
import numpy as np
def function(y):
if y>0:
return 1
else:
return 0
def model(x,w,b):
y=np.dot(w,x)+b
u= function(y)
return u
def AND(X):
w = np.array([0.1,0.1])
b = -0.1
return model(X,w,b)
case1 = np.array([0,1])
case2 = np.array([1,0])
case3 = np.array([0,0])
case4 = np.array([1,1])
print("AND({},{})={}".format(0,1, AND(case1)))
print("AND({},{})={}".format(1,0, AND(case2)))
print("AND({},{})={}".format(0,0, AND(case3)))
print("AND({},{})={}".format(1,1, AND(case4)))
AND(0,1)=0
AND(1,0)=0
AND(0,0)=0
AND(1,1)=1
15
LAB 5
Implement Single Point Crossover
# patent chromosomes:
s = '1100110110110011'
p = '1000110011011111'
print("Parents")
print("P1 :", s)
print("P2 :", p, "\n")
Parents
P1 : 1100110110110011
P2 : 1000110011011111
Generation 1 Childrens :
Crossover point : 8
1100110111011111
1000110010110011
16
Generation 2 Childrens :
Crossover point : 7
1100110010110011
1000110111011111
Generation 3 Childrens :
Crossover point : 13
1100110010110111
1000110111011011
Generation 4 Childrens :
Crossover point : 7
1100110111011011
1000110010110111
Generation 5 Childrens :
Crossover point : 2
1100110010110111
1000110111011011
17
LAB 6
Implement De Morgan’s law
def union(A,B):
u = {}
for i in A:
if i in B:
u[i]=max(A[i],B[i])
else:
u[i]=A[i]
for i in B:
if i not in A:
u[i]=B[i]
return(u)
def intersection(A,B):
inter = {}
for i in A:
if i in B:
inter[i]=min(A[i],B[i])
18
else:
inter[i]=A[i]
for i in B:
if i not in A:
inter[i]=B[i]
return(inter)
def difference(A,B):
comp_b = complement(B)
def complement(A):
comp_a={}
for i in A:
comp_a[i] = round((1-A[i]),1)
return(comp_a)
def morgan(A,B):
19
#case 1
p = intersection(A,B)
p_bar = complement(p)
comp_a = complement(A)
#p -> A intersection B
comp_b = complement(B)
q = union(comp_a,comp_b)
if(p_bar == q):
print("Law 1 proved")
#case 2
p = union(A,B)
20
p_bar = complement(p)
comp_a = complement(A)
comp_b = complement(B)
q = intersection(comp_a,comp_b)
if(p_bar == q):
print("Law 2 proved")
#p -> A intersection B
a=[]
a=n.split(',')
21
mem_a = {}
for i in a:
print(i,"= ",end='')
mem_a[i] = float(input())
print("A = ",mem_a)
b=[]
b=n.split(',')
mem_b = {}
for i in b:
print(i,"= ",end='')
mem_b[i] = float(input())
print("B = ",mem_b)
22
while(n < 6):
if(n == 1):
u = union(mem_a,mem_b)
print("Union = ",u)
elif(n == 2):
inter = intersection(mem_a,mem_b)
print("Intersection = ",inter)
elif(n == 3):
diff = difference(mem_a,mem_b)
print("Difference = ",diff)
elif(n == 4):
comp_a = complement(mem_a)
comp_b = complement(mem_b)
else:
morgan(mem_a,mem_b)
23
print("\n1.Union\n2.Intersection\n3.Difference\n4.Complement\n5.De Morgan's Law\
n6.Exit")
1.Union
2.Intersection
3.Difference
4.Complement
5.De Morgan's Law
6.Exit
Enter your choice = 5
(A n B)' = {'5': -1.0, '6': -2.0}
A' u B' = {'5': -1.0, '6': -2.0}
Thus, (A n B)' = A' u B'
Law 1 proved
24
LAB 7
To print membership function of Fuzzy set
from numpy import linspace
import matplotlib.pyplot as plt
from pyit2fls import tri_mf
domain = linspace(0., 1., 1001)
tri = tri_mf(domain, [0., 0.5, 1., 1.])
plt.figure()
plt.plot(domain, tri, label="Triangular MF")
plt.grid(True)
plt.legend()
plt.xlabel("Domain")
plt.ylabel("Membership function")
plt.show()
25
from numpy import linspace
domain = linspace(0., 1., 100)
Small = T1FS(domain, gaussian_mf, [0, 0.15, 1.])
Medium = T1FS(domain, gaussian_mf, [0.5, 0.15, 1.])
Large = T1FS(domain, gaussian_mf, [1., 0.15, 1.])
T1FS_plot(Small, Medium, Large, legends=["Small", "Medium", "large"])
26
LAB 8
Implement Hebb's learning rule
import numpy as np
import time
def threshold(x):
if x >=0:
return 1
else:
return -1
def print_func(loop_var, net, sig_net, w, delta_w):
print("i: "+ str(loop_var))
print("net: "+ str(net))
print("sig_net: "+ str(sig_net))
print("delta_w: "+ str(delta_w))
print("w: "+ str(w))
print("-------------------\n")
def compute():
try:
n = int(input("Enter number of input vectors: "))
x = []
r = 1 #Learning constant(c)
for i in range(0,n):
raw_str1 = str(input("Enter values for vector " + str(i+1) + ": "))
input_vector = raw_str1.split(' ')
#print(input_vector)
ip_list = []
for ele in input_vector:
ip_list.append(float(ele))
#print(ip_list)
np_list = np.array(ip_list, dtype=np.float64)
x.append(np_list)
raw_str3 = str(input("Enter initial weight vector: "))
w = raw_str3.split(' ')
w_list = []
for ele in w:
w_list.append(float(ele))
#np_wlist = np.array(w_list, dtype=np.float64)
#print(np_wlist)
#if len(np_wlist) != n:
# print("Init Weight Vector Error..")
delta_w = 0
for i in range(0,n):
net = np.transpose(np.asarray(w_list)).dot(np.asarray(x[i]))
#print(net)
sig_net = threshold(net)
#print(sig_net)
delta_w = r * sig_net * x[i]
#print(delta_w)
w_list = np.add(np.asarray(w_list),delta_w)
print_func(i, net, sig_net, w_list, delta_w)
except Exception as e:
print("Error.. "+(str(e)))
27
if __name__ == '__main__':
compute()
28
LAB 9
To create a crisp set using lambda cut relation
import numpy as np
set=np.array([[0.1, 0.6, 0.8,1], [1, 0.7, 0.4,0.2],[0,0.6,1,0.5],[0.1,0.5,1,0.6]])
print("Input data:\n",set)
x=0.6 #lamda_cut_value
out=np.empty((4,4),dtype=int)
i=0
j=0
for i in range(0,4):
for j in range(0,4):
if set[i][j]>= x:
out[i][j]=1
else:
out[i][j]=0
print("final output:\n",out)
Input data:
[[0.1 0.6 0.8 1. ]
[1. 0.7 0.4 0.2]
[0. 0.6 1. 0.5]
[0.1 0.5 1. 0.6]]
final output:
[[0 1 1 1]
[1 1 0 0]
[0 1 1 0]
[0 0 1 1]]
29