Ann 1
This document contains a Jupyter Notebook of assignments on neural network algorithms: 1. plotting the sigmoid activation function and its derivative; 2. implementing an AND-NOT logic gate with a threshold neuron; 3. converting a number to binary and classifying it as odd or even from its least significant bit; 4. applying a perceptron-style weight update to two-bit inputs and computing cosine similarity; 5. running forward and backward propagation through a bidirectional associative memory on sample patterns; 6. training a cosine-similarity-based classifier on the Iris dataset, reaching 70% test accuracy; 7. preprocessing the Iris data and splitting it by species; 8. training a dense network on MNIST and clustering binary patterns with an ART1 network.


Assignment 1
In [1]:

import matplotlib.pyplot as plt
import numpy as np
import math

In [8]:

def sigmoid(x):
    return 1 / (1 + np.exp(-0.5 * x))   # sigmoid with gain 0.5

x = np.linspace(-10, 10, 100)
y = sigmoid(x)

plt.plot(x, y)
plt.xlabel("Input")
plt.ylabel("Output")
plt.title("Sigmoid Activation Function")

Out[8]:

Text(0.5, 1.0, 'Sigmoid Activation Function')


In [9]:

x = np.linspace(-10, 10, 100)
dx = x[1] - x[0]
y = 1 / (1 + np.exp(-0.5 * x))
dydx = np.gradient(y, dx)   # numerical derivative of the sigmoid

plt.plot(x, dydx)
plt.xlabel("Input")
plt.ylabel("Output")
plt.title("Derivative of Sigmoid Activation Function")

Out[9]:

Text(0.5, 1.0, 'Derivative of Sigmoid Activation Function')
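
The numerical gradient can be cross-checked against the closed form: for the scaled sigmoid σ(x) = 1/(1 + e^(-0.5x)), the derivative is σ'(x) = 0.5 σ(x)(1 − σ(x)). A minimal sketch, reusing sigmoid, x and dydx from the cells above:

analytic = 0.5 * sigmoid(x) * (1 - sigmoid(x))   # closed-form derivative
print(np.max(np.abs(analytic - dydx)))           # should be close to 0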

Assignment 2
In [10]:

import numpy as np
a = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])   # all (x1, x2) input pairs
w = [1, -1]                                      # weights realizing x1 AND NOT x2
T = 1                                            # firing threshold
result = []

In [11]:

def AND_NOT():
    print("Aggregation of inputs and weights is: ")
    for i in range(0, 4):
        x = a[i] @ w
        print(x)
        if x < T:
            result.append(0)
        else:
            result.append(1)
    print("Output Array: ")
    return result


In [12]:

AND_NOT()

Aggregation of inputs and weights is:
0
1
-1
0
Output Array:

Out[12]:

[0, 1, 0, 0]
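
The same threshold unit realizes other gates just by changing the weights and threshold; the assignments below are one standard choice, not the only one. A quick sketch over the same input array a:

def gate(w, T):
    return [int(a[i] @ w >= T) for i in range(4)]

print(gate([1, 1], 2))   # AND -> [1, 0, 0, 0]
print(gate([1, 1], 1))   # OR  -> [1, 1, 1, 0]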

Assignment 3
In [13]:

import numpy as np
num=int(input("Enter the number:-"))

Enter the number:-45

In [14]:

x = []
def dec(num):
    if num >= 1:
        dec(num // 2)       # recurse on the quotient first (most significant bits)
        x.append(num % 2)   # then record this bit
    return x

In [17]:

dec(num)

Out[17]:

[1, 0, 1, 1, 0, 1]

In [18]:

w1 = w2 = 1
b = 1
def fun():
    a = (x[-1] * w1) and (b * w2)   # fires when the least significant bit is 1
    if a == 1:
        print("The number is odd")
        return 1
    else:
        print("The number is even")
        return 0


In [19]:

fun()

The number is odd

Out[19]:

1
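
The least significant bit alone decides parity, so the result can be verified directly; a one-line sanity check, reusing num from In [13]:

assert fun() == num % 2   # the LSB of the binary expansion equals num mod 2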

Assignment 4
In [33]:

import numpy as np
import matplotlib.pyplot as plt

In [34]:

X1 = np.array([0, 0, 1, 1])
X2 = np.array([0, 1, 0, 1])
X = [X1, X2]
W = np.array([[4,5]])

In [35]:

def aggregation(X, W):
    result = []
    for i in range(len(X[0])):
        result.append((X[0][i] * W[0][0]) + X[1][i] * W[0][1])   # weighted sum per sample
    return np.array(result)

In [36]:

X_ = aggregation(X,W)
print(X_)

[0 5 4 9]

In [37]:

def sigmoid(X):
    result = []
    for i in range(len(X)):
        result.append(1 / (1 + np.exp(-X[i])))
    return np.array(result)

In [38]:

Y = sigmoid(X_)
print(Y)

[0.5 0.99330715 0.98201379 0.99987661]


In [39]:

def check(Y, W, X):
    value = []
    for i in range(len(Y)):
        if Y[i] > 0.5:
            value.append(1)
            W = W - X[i]   # perceptron-style correction when the unit fires
        else:
            value.append(0)
            W = W + X[i]
    return np.array(value), np.array(W)

In [40]:

value, result = check(Y, W, X_)

In [41]:

value,result

Out[41]:

(array([0, 1, 1, 1]), array([[-14, -13]]))

In [44]:

from sklearn import preprocessing

cos_alpha = (W.T * X) / (preprocessing.normalize(W) @ preprocessing.normalize(X))

/tmp/ipykernel_4407/3241100043.py:2: RuntimeWarning: invalid value encountered in true_divide
  cos_alpha = (W.T * X) / (preprocessing.normalize(W) @ preprocessing.normalize(X))

In [45]:

plt.plot(cos_alpha)
plt.title('Cosine Similarity')
plt.xlabel('Data Points')
plt.ylabel('Cosine Similarity')
plt.show()
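
The warning above appears because the expression divides element-wise by row-normalized matrices rather than by vector norms, and the all-zero input produces a 0/0. A sketch of per-sample cosine similarity between each input column and the weight vector, under the shapes used here (W is 1×2, X stacks to 2×4); the zero column is guarded explicitly:

Xa = np.array(X).astype(float)   # shape (2, 4): one column per data point
w = W.ravel().astype(float)      # shape (2,)
norms = np.linalg.norm(Xa, axis=0) * np.linalg.norm(w)
cos_sim = np.divide(w @ Xa, norms, out=np.zeros_like(norms), where=norms > 0)
print(cos_sim)   # 0 for the [0, 0] input, then the usual dot/(|a||b|) values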


Assignment 5
In [46]:

import numpy as np

In [47]:

X1 = np.array([[1, -1, 1, -1]])
X2 = np.array([[-1, -1, 1, 1]])
X3 = np.array([[1, 1, -1, -1]])
X4 = np.array([[1, 1, 1, 1]])
X = [X1, X2, X3, X4]

Y1 = np.array([[1, 0, 1]])
Y2 = np.array([[1, 1, 1]])
Y3 = np.array([[0, 1, 1]])
Y4 = np.array([[1, 1, 0]])
Y = [Y1, Y2, Y3, Y4]

In [48]:

def calcWeight(X, Y):
    return np.dot(X.T, Y)   # outer product of an input pattern with its target

In [49]:

def weights(X, Y):
    weis = []
    for i in range(4):
        weis.append(calcWeight(X[i], Y[i]))
    return weis

In [50]:

weis=weights(X,Y)

In [51]:

def forward_aggregation(X, W):
    result = []
    for i in range(len(X)):
        result.append(X[i] @ W[i])
    return np.squeeze(result)

In [52]:

def activation(X):
    X = np.array(X)
    X[X > 0] = 1
    X[X == 0] = 0
    X[X < 0] = -1
    return X


In [53]:

for_aggri = forward_aggregation(X, weis)

In [54]:

for_aggri

Out[54]:

array([[4, 0, 4],
[4, 4, 4],
[0, 4, 4],
[4, 4, 0]])

In [55]:

print(activation(for_aggri))

[[1 0 1]
[1 1 1]
[0 1 1]
[1 1 0]]

In [56]:

def backward_aggregation(X, W):
    result = []
    for i in range(len(X)):
        result.append(W[i] @ X[i].T)
    return np.squeeze(result)

In [57]:

back_aggri = backward_aggregation(Y, weis)

In [58]:

back_aggri

Out[58]:

array([[ 2, -2,  2, -2],
       [-3, -3,  3,  3],
       [ 2,  2, -2, -2],
       [ 2,  2,  2,  2]])

In [59]:

print(activation(back_aggri))

[[ 1 -1 1 -1]
[-1 -1 1 1]
[ 1 1 -1 -1]
[ 1 1 1 1]]
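
Here each pattern pair is recalled through its own weight matrix. A classical bidirectional associative memory instead sums the pair-wise outer products into a single matrix and recalls every pattern through it; a minimal sketch, noting that the classical formulation uses bipolar targets, so the 0/1 Y patterns are mapped to -1/+1 first:

W_bam = sum(np.dot(X[i].T, 2 * Y[i] - 1) for i in range(4))   # one summed weight matrix
recall = activation(np.squeeze([X[i] @ W_bam for i in range(4)]))
print(recall)   # bipolar recall; correlated patterns may show cross-talk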

Assignment 7


In [60]:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Step 1: Preprocess the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)   # seed value truncated in the source; 42 is an assumed placeholder

# Step 2: Implement the cosine similarity function
def cosine_similarity(a, b):
    dot_product = np.dot(a, b)
    norm_a = np.linalg.norm(a)
    norm_b = np.linalg.norm(b)
    return dot_product / (norm_a * norm_b)

# Step 3: Define the neural network architecture
input_size = X_train.shape[1]
output_size = len(np.unique(y_train))

W = np.random.randn(input_size, output_size)

# Forward propagation function
def forward_propagation(X):
    # Input layer to output layer
    y_hat = np.zeros((X.shape[0], output_size))
    for i in range(X.shape[0]):
        for j in range(output_size):
            y_hat[i, j] = cosine_similarity(X[i], W[:, j])
    return y_hat

# Step 4: Train the neural network
num_epochs = 1000
for epoch in range(num_epochs):
    y_hat = forward_propagation(X_train)
    for i in range(output_size):
        y_i = np.where(y_train == i, 1, -1)
        W[:, i] += 0.1 * np.dot(X_train.T, y_i - y_hat[:, i])

# Step 5: Test the neural network
y_pred = np.argmax(forward_propagation(X_test), axis=1)
accuracy = np.mean(y_pred == y_test)

In [61]:

accuracy

Out[61]:

0.7


In [62]:

y_pred

Out[62]:

array([2, 0, 2, 2, 2, 0, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 2, 2, 0, 2,
       0, 2, 2, 2, 2, 2, 0, 0])
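
Note the predictions only ever contain classes 0 and 2, which is what caps accuracy near 70%. One plausible remedy, an assumption rather than part of the original notebook, is to L2-normalize the inputs so the cosine-based updates are better conditioned:

from sklearn.preprocessing import normalize   # hypothetical tweak, not in the original
Xn_train, Xn_test = normalize(X_train), normalize(X_test)
# retraining the same loop on Xn_train may separate the overlapping classes better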

In [63]:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

In [68]:

data

Out[68]:

      Id  SepalLengthCm  SepalWidthCm  PetalLengthCm  PetalWidthCm         Species
0      1            5.1           3.5            1.4           0.2     Iris-setosa
1      2            4.9           3.0            1.4           0.2     Iris-setosa
2      3            4.7           3.2            1.3           0.2     Iris-setosa
3      4            4.6           3.1            1.5           0.2     Iris-setosa
4      5            5.0           3.6            1.4           0.2     Iris-setosa
..   ...            ...           ...            ...           ...             ...
145  146            6.7           3.0            5.2           2.3  Iris-virginica
146  147            6.3           2.5            5.0           1.9  Iris-virginica
147  148            6.5           3.0            5.2           2.0  Iris-virginica
148  149            6.2           3.4            5.4           2.3  Iris-virginica
149  150            5.9           3.0            5.1           1.8  Iris-virginica

150 rows × 6 columns


In [72]:

data = pd.read_csv('Iris.csv')
groups = data.groupby('Species')
setosa = groups.get_group("Iris-setosa")
veriscolor = groups.get_group("Iris-versicolor")
virginica = groups.get_group("Iris-virginica")

def test(data):
    feature = ["SepalLengthCm", "SepalWidthCm", "PetalLengthCm", "PetalWidthCm"]
    target = ["Species"]
    X = data[feature].values
    y = data[target].values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
    return X_train, X_test, y_train, y_test

SX_train, SX_test, Sy_train, Sy_test = test(setosa)
VX_train, VX_test, Vy_train, Vy_test = test(virginica)
vX_train, vX_test, vy_train, vy_test = test(veriscolor)

In [73]:

val = SX_test[10]
print(val)
val=np.array(val)

[4.4 3.2 1.3 0.2]

In [74]:

new_input = np.array([[4.9, 3.1, 1.5, 0.2]])

# Make prediction

def predict(X):
    y_hat = forward_propagation(X)
    return np.where(y_hat[0] == np.amax(y_hat[0]))[0][0]   # index of the most similar class

class_name = iris.target_names[predict(new_input)]
print('Class name:', class_name)

Class name: setosa
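
The same helper can be pointed at the held-out sample val extracted in In [73]; a small usage sketch (the reshape is needed because forward_propagation expects a 2-D batch):

print(iris.target_names[predict(val.reshape(1, -1))])   # expected to print 'setosa'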

Assignment 11


In [75]:

import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np

2023-05-24 17:32:03.712792: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-24 17:32:04.494517: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-05-24 17:32:04.494548: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-05-24 17:32:04.592595: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-05-24 17:32:06.399729: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-05-24 17:32:06.400118: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-05-24 17:32:06.400154: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

In [76]:

(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()


In [77]:

plt.matshow(X_train[5])

Out[77]:

<matplotlib.image.AxesImage at 0x7fc4eed70490>

In [78]:

y_train[5]

Out[78]:

2

In [79]:

X_train = X_train / 255
X_test = X_test / 255

In [80]:

X_train[0]

Out[80]:

[28×28 array of pixel values scaled to [0, 1]; full printout truncated]


In [81]:

X_train_flattened = X_train.reshape(len(X_train), 28*28)
X_test_flattened = X_test.reshape(len(X_test), 28*28)


In [87]:

model = keras.Sequential([
    keras.layers.Dense(100, input_shape=(784,), activation='relu'),
    keras.layers.Dense(10, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(X_train_flattened, y_train, epochs=20)

Epoch 1/20

2023-05-24 17:34:12.974939: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 188160000 exceeds 10% of free system memory.


1875/1875 [==============================] - 5s 3ms/step - loss: 0.2705 - accuracy: 0.9235
Epoch 2/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.1226 - accuracy: 0.9637
Epoch 3/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0860 - accuracy: 0.9742
Epoch 4/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0666 - accuracy: 0.9800
Epoch 5/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0531 - accuracy: 0.9839
Epoch 6/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0432 - accuracy: 0.9864
Epoch 7/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0345 - accuracy: 0.9897
Epoch 8/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0277 - accuracy: 0.9918
Epoch 9/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0247 - accuracy: 0.9921
Epoch 10/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0200 - accuracy: 0.9941
Epoch 11/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0173 - accuracy: 0.9948
Epoch 12/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0161 - accuracy: 0.9948
Epoch 13/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0118 - accuracy: 0.9965
Epoch 14/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0117 - accuracy: 0.9964
Epoch 15/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0094 - accuracy: 0.9971
Epoch 16/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0082 - accuracy: 0.9977
Epoch 17/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0075 - accuracy: 0.9976
Epoch 18/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0075 - accuracy: 0.9976
Epoch 19/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0065 - accuracy: 0.9980
Epoch 20/20
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0067 - accuracy: 0.9977

Out[87]:

<keras.callbacks.History at 0x7fc4be7259d0>

In [88]:

model.evaluate(X_test_flattened, y_test)

313/313 [==============================] - 1s 2ms/step - loss: 0.1048 - accuracy: 0.9784

Out[88]:

[0.1047787219285965, 0.9783999919891357]


In [89]:

y_predicted = model.predict(X_test_flattened)
y_predicted_labels = [np.argmax(i) for i in y_predicted]
cm = tf.math.confusion_matrix(labels=y_test, predictions=y_predicted_labels)
cm

313/313 [==============================] - 1s 2ms/step

Out[89]:

<tf.Tensor: shape=(10, 10), dtype=int32, numpy=
array([[ 974,    0,    1,    1,    0,    0,    2,    0,    2,    0],
       [   0, 1122,    3,    1,    0,    0,    1,    1,    7,    0],
       [   2,    1, 1011,    6,    0,    0,    2,    4,    4,    2],
       [   0,    0,    1,  996,    0,    3,    0,    3,    5,    2],
       [   2,    0,    9,    0,  948,    1,    2,    3,    3,   14],
       [   2,    0,    0,   13,    0,  870,    2,    1,    4,    0],
       [   4,    2,    3,    1,    2,    5,  940,    0,    1,    0],
       [   2,    2,    9,    4,    0,    0,    1, 1004,    2,    4],
       [   3,    0,    3,    8,    0,    2,    1,    3,  954,    0],
       [   3,    2,    1,   12,    5,    7,    0,    3,   14,  962]],
      dtype=int32)>
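
A heatmap makes the confusion matrix easier to read; a minimal sketch, assuming seaborn is installed (it is not imported anywhere in the original notebook):

import seaborn as sns
plt.figure(figsize=(10, 7))
sns.heatmap(cm, annot=True, fmt='d')   # annotate each cell with its count
plt.xlabel('Predicted')
plt.ylabel('Truth')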

In [90]:

plt.matshow(X_test[9])

Out[90]:

<matplotlib.image.AxesImage at 0x7fc4bdf524c0>


In [91]:

np.argmax(y_predicted[9])

Out[91]:

9

Assignment 8
In [92]:

import numpy as np

In [93]:

X = [
[0,0,0,1],
[0,1,0,1],
[0,0,1,1],
[1,0,0,0]
]

In [94]:

rho = 0.4          # vigilance parameter
alpha = 2          # learning rate
n_clusters = 3
n_inputs = 4
flg = 0
bij = [[1 / (1 + n_inputs) for j in range(n_clusters)] for i in range(n_inputs)]   # bottom-up weights
global_bij = bij.copy()
tji = [[1 for j in range(n_inputs)] for i in range(n_clusters)]                    # top-down weights
global_tji = tji.copy()
old_bij = np.array(bij.copy())
old_tji = np.array(tji.copy())
temp = np.array(bij)

In [95]:

bij

Out[95]:

[[0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2]]


In [96]:

def buttom_up(inputs, global_bij):
    norm_s = sum(inputs)
    norm_x = sum(inputs)
    temp2 = np.array(global_bij)
    x = np.array(inputs)
    y_list = []
    for l in range(len(temp2.T)):
        a = x @ temp2.T[l]           # bottom-up activation of cluster l
        y_list.append(a)
    print(y_list)
    j = y_list.index(max(y_list))    # winning cluster
    print('WINNER IS ', j)
    tension, t1 = top_down(inputs, j, norm_s)
    return tension, t1

In [97]:

def top_down(si, j, norm_s):
    si = np.array(si)
    xi = si * tji[j]             # gate the input through the winner's top-down weights
    norm_x = sum(xi)
    answer = norm_x / norm_s     # match ratio |x| / |s|
    tension2, t2 = vigilance_check(si, answer, norm_x, j)
    return tension2, t2

In [98]:

def vigilance_check(inputs, answer, norm_x, j):
    if answer > rho:
        tension3, t3 = weight_adjust(inputs, norm_x, j)
        return tension3, t3
    else:
        return 0, 0   # fixed: the callers unpack two values, so the original bare `return 0` would raise

In [99]:

def weight_adjust(inputs, norm_x, j):
    new_bij = [(alpha * i) / ((alpha - 1) + norm_x) for i in inputs]
    temp.T[j] = new_bij
    bij = temp
    tji[j] = new_bij
    print("this is bij = ", bij)
    print("this is tji = ", np.array(tji))
    tension4 = bij
    t4 = tji
    flg = stop(old_bij, old_tji, bij, tji)
    return tension4, t4


In [100]:

def stop(old_bij, old_tji, bij, tji):
    bij = np.array(bij)
    tji = np.array(tji)
    if bij.all() == old_bij.all() and tji.all() == old_tji.all():
        flg = 1   # note: binds a local name only; the global flg read by the training loop never changes
    else:
        pass

In [101]:

epochs = 10000
cnt = 0
for i in range(epochs):
    print("this is epoch = ", i)
    for j in X:
        global_bij, global_tji = buttom_up(j, global_bij)
    if flg == 1:
        break
    else:
        pass
[... training log truncated ...]
[1.0, 0.0, 0.4]
WINNER IS  0
this is bij =  [[0.  1.  0.2]
 [0.  0.  0.2]
 [1.  0.  0.2]
 [1.  0.  0.2]]
this is tji =  [[0. 0. 1. 1.]
 [1. 0. 0. 0.]
 [1. 1. 1. 1.]]
[0.0, 1.0, 0.2]
WINNER IS  1
this is bij =  [[0.  1.  0.2]
 [0.  0.  0.2]
 [1.  0.  0.2]
 [1.  0.  0.2]]
this is tji =  [[0. 0. 1. 1.]
 [1. 0. 0. 0.]
 [1. 1. 1. 1.]]


In [102]:

def clustering(inputs, global_bij, y):
    norm_s = sum(inputs)
    norm_x = sum(inputs)
    temp2 = np.array(global_bij)
    x = np.array(inputs)
    y_list = []
    for l in range(len(temp2.T)):
        a = x @ temp2.T[l]
        y_list.append(a)
    print(y_list)
    j = y_list.index(max(y_list))
    print('WINNER IS ', j)
    y[j].append(inputs)     # assign the pattern to the winning cluster
    return y

In [103]:

y = [[],[],[]]
for i in X:
a = clustering(i,global_bij,y)
print(a)

[1.0, 0.0, 0.2]
WINNER IS  0
[[[0, 0, 0, 1]], [], []]
[1.0, 0.0, 0.4]
WINNER IS 0
[[[0, 0, 0, 1], [0, 1, 0, 1]], [], []]
[2.0, 0.0, 0.4]
WINNER IS 0
[[[0, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]], [], []]
[0.0, 1.0, 0.2]
WINNER IS 1
[[[0, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]], [[1, 0, 0, 0]], []]
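
The final grouping can be read straight off y; a small usage sketch printing one line per cluster:

for idx, members in enumerate(y):
    print(f"cluster {idx}: {members}")   # patterns assigned to each of the 3 clusters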

In [ ]:
