
CNN MODEL

Importing Necessary Libraries


In [20]:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPool1D, Flatten, Dense, Dropout, BatchNormalization
from tensorflow.keras.optimizers import Adam

In [21]:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

In [22]:

from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

Loading the Built-in Sklearn Breast Cancer Dataset


In [23]:

cancerData = datasets.load_breast_cancer()
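
load_breast_cancer() returns a scikit-learn Bunch object whose main fields are data (the 569 × 30 feature matrix), target (the labels, where 0 = malignant and 1 = benign) and feature_names. A quick inspection, as a sketch (not part of the original run):

print(cancerData.data.shape)    # (569, 30)
print(cancerData.target_names)  # ['malignant' 'benign'], i.e. labels 0 and 1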

In [24]:
X = pd.DataFrame(data=cancerData.data, columns=cancerData.feature_names)
X.head()

Out[24]:

   mean radius  mean texture  mean perimeter  mean area  ...  worst concave points  worst symmetry  worst fractal dimension
0        17.99         10.38          122.80     1001.0  ...                0.2654          0.4601                  0.11890
1        20.57         17.77          132.90     1326.0  ...                0.1860          0.2750                  0.08902
2        19.69         21.25          130.00     1203.0  ...                0.2430          0.3613                  0.08758
3        11.42         20.38           77.58      386.1  ...                0.2575          0.6638                  0.17300
4        20.29         14.34          135.10     1297.0  ...                0.1625          0.2364                  0.07678

5 rows × 30 columns

In [25]:

y = cancerData.target

In [26]:

X.shape

Out[26]:

(569, 30)

Splitting into Train and Test datasets


In [27]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, stratify=y)
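
Since no random_state is passed, the split (and therefore the exact numbers reported below) will differ between runs, while stratify=y keeps the malignant/benign ratio equal in both subsets. A reproducible variant would fix the seed (42 here is an arbitrary choice):

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42)  # fixed seed for repeatability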

In [28]:

X_train.shape

Out[28]:

(512, 30)

In [29]:

y_test.shape

Out[29]:

(57,)

Applying StandardScaler()
In [30]:

scaler = StandardScaler()

In [31]:

# Fit the scaler on the training data only, then apply the same
# transformation to the test set to avoid information leakage
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

Reshaping the dataset to 3-D to pass it through the CNN


In [32]:

X_train = X_train.reshape(512,30,1)
X_test = X_test.reshape(57,30,1)
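
Conv1D expects input of shape (samples, steps, channels), hence the extra trailing dimension. Hard-coding 512 and 57 works for this split, but a shape-agnostic sketch of the same reshape avoids breaking if test_size changes:

X_train = X_train.reshape(X_train.shape[0], 30, 1)  # (samples, 30 features, 1 channel)
X_test = X_test.reshape(X_test.shape[0], 30, 1)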

Preparing the Model


In [33]:

model = Sequential()
model.add(Conv1D(filters=16, kernel_size=2, activation='relu', input_shape=(30,1)))
model.add(BatchNormalization())
model.add(Dropout(0.2))

model.add(Conv1D(32,2,activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.2))

model.add(Flatten())
model.add(Dense(32,activation='relu'))
model.add(Dropout(0.2))

model.add(Dense(1,activation='sigmoid'))

In [34]:

model.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_2 (Conv1D) (None, 29, 16) 48
_________________________________________________________________
batch_normalization_2 (Batch (None, 29, 16) 64
_________________________________________________________________
dropout_3 (Dropout) (None, 29, 16) 0
_________________________________________________________________
conv1d_3 (Conv1D) (None, 28, 32) 1056
_________________________________________________________________
batch_normalization_3 (Batch (None, 28, 32) 128
_________________________________________________________________
dropout_4 (Dropout) (None, 28, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 896) 0
_________________________________________________________________
dense_2 (Dense) (None, 32) 28704
_________________________________________________________________
dropout_5 (Dropout) (None, 32) 0
_________________________________________________________________
dense_3 (Dense) (None, 1) 33
=================================================================
Total params: 30,033
Trainable params: 29,937
Non-trainable params: 96
_________________________________________________________________
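
These counts can be verified by hand: a Conv1D layer has (kernel_size × input_channels + 1) × filters parameters, a BatchNormalization layer has 4 per channel (gamma and beta trainable, the two moving statistics not, which accounts for the 96 non-trainable parameters), and a Dense layer has inputs × units + units. A quick check:

conv1 = (2 * 1 + 1) * 16      # 48
bn1 = 4 * 16                  # 64
conv2 = (2 * 16 + 1) * 32     # 1056
bn2 = 4 * 32                  # 128
dense1 = 896 * 32 + 32        # 28704 (Flatten output: 28 * 32 = 896)
dense2 = 32 * 1 + 1           # 33
print(conv1 + bn1 + conv2 + bn2 + dense1 + dense2)  # 30033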

In [35]:

model.compile(optimizer=Adam(learning_rate=0.0001), loss='binary_crossentropy', metrics=['accuracy'])

In [36]:

history = model.fit(X_train, y_train, epochs=35, verbose=1, validation_data=(X_test, y_test))

Train on 512 samples, validate on 57 samples


Epoch 1/35
512/512 [==============================] - 1s 2ms/sample - loss: 0.9711 - accuracy: 0.5352 - val_loss: 0.6497 - val_accuracy: 0.7193
Epoch 2/35
512/512 [==============================] - 0s 159us/sample - loss: 0.6452 - accuracy: 0.6797 - val_loss: 0.5717 - val_accuracy: 0.8421
Epoch 3/35
512/512 [==============================] - 0s 150us/sample - loss: 0.4521 - accuracy: 0.7852 - val_loss: 0.5118 - val_accuracy: 0.8246
Epoch 4/35
512/512 [==============================] - 0s 156us/sample - loss: 0.3574 - accuracy: 0.8438 - val_loss: 0.4674 - val_accuracy: 0.8070
Epoch 5/35
512/512 [==============================] - 0s 147us/sample - loss: 0.2879 - accuracy: 0.8770 - val_loss: 0.4324 - val_accuracy: 0.7895
Epoch 6/35
512/512 [==============================] - 0s 147us/sample - loss: 0.2909 - accuracy: 0.8770 - val_loss: 0.4050 - val_accuracy: 0.7719
Epoch 7/35
512/512 [==============================] - 0s 155us/sample - loss: 0.2365 - accuracy: 0.8984 - val_loss: 0.3823 - val_accuracy: 0.7719
Epoch 8/35
512/512 [==============================] - 0s 152us/sample - loss: 0.2099 - accuracy: 0.9062 - val_loss: 0.3623 - val_accuracy: 0.7895
Epoch 9/35
512/512 [==============================] - 0s 152us/sample - loss: 0.2105 - accuracy: 0.9043 - val_loss: 0.3465 - val_accuracy: 0.7895
Epoch 10/35
512/512 [==============================] - 0s 151us/sample - loss: 0.1847 - accuracy: 0.9297 - val_loss: 0.3308 - val_accuracy: 0.8070
Epoch 11/35
512/512 [==============================] - 0s 151us/sample - loss: 0.1689 - accuracy: 0.9336 - val_loss: 0.3141 - val_accuracy: 0.8246
Epoch 12/35
512/512 [==============================] - 0s 160us/sample - loss: 0.1684 - accuracy: 0.9336 - val_loss: 0.3000 - val_accuracy: 0.8246
Epoch 13/35
512/512 [==============================] - 0s 149us/sample - loss: 0.1634 - accuracy: 0.9395 - val_loss: 0.2873 - val_accuracy: 0.8246
Epoch 14/35
512/512 [==============================] - 0s 145us/sample - loss: 0.1443 - accuracy: 0.9414 - val_loss: 0.2708 - val_accuracy: 0.8421
Epoch 15/35
512/512 [==============================] - 0s 149us/sample - loss: 0.1383 - accuracy: 0.9414 - val_loss: 0.2512 - val_accuracy: 0.8421
Epoch 16/35
512/512 [==============================] - 0s 155us/sample - loss: 0.1584 - accuracy: 0.9434 - val_loss: 0.2300 - val_accuracy: 0.8772
Epoch 17/35
512/512 [==============================] - 0s 147us/sample - loss: 0.1269 - accuracy: 0.9531 - val_loss: 0.2153 - val_accuracy: 0.8772
Epoch 18/35
512/512 [==============================] - 0s 151us/sample - loss: 0.1179 - accuracy: 0.9629 - val_loss: 0.2010 - val_accuracy: 0.8772
Epoch 19/35
512/512 [==============================] - 0s 156us/sample - loss: 0.1173 - accuracy: 0.9629 - val_loss: 0.1852 - val_accuracy: 0.8947
Epoch 20/35
512/512 [==============================] - 0s 152us/sample - loss: 0.1160 - accuracy: 0.9590 - val_loss: 0.1707 - val_accuracy: 0.9123
Epoch 21/35
512/512 [==============================] - 0s 161us/sample - loss: 0.1264 - accuracy: 0.9629 - val_loss: 0.1584 - val_accuracy: 0.9474
Epoch 22/35
512/512 [==============================] - 0s 164us/sample - loss: 0.1054 - accuracy: 0.9629 - val_loss: 0.1440 - val_accuracy: 0.9474
Epoch 23/35
512/512 [==============================] - 0s 155us/sample - loss: 0.1103 - accuracy: 0.9590 - val_loss: 0.1344 - val_accuracy: 0.9474
Epoch 24/35
512/512 [==============================] - 0s 154us/sample - loss: 0.1114 - accuracy: 0.9629 - val_loss: 0.1219 - val_accuracy: 0.9474
Epoch 25/35
512/512 [==============================] - 0s 156us/sample - loss: 0.1000 - accuracy: 0.9648 - val_loss: 0.1121 - val_accuracy: 0.9474
Epoch 26/35
512/512 [==============================] - 0s 152us/sample - loss: 0.0971 - accuracy: 0.9707 - val_loss: 0.1022 - val_accuracy: 0.9474
Epoch 27/35
512/512 [==============================] - 0s 159us/sample - loss: 0.0933 - accuracy: 0.9707 - val_loss: 0.0948 - val_accuracy: 0.9649
Epoch 28/35
512/512 [==============================] - 0s 160us/sample - loss: 0.0938 - accuracy: 0.9707 - val_loss: 0.0884 - val_accuracy: 0.9649
Epoch 29/35
512/512 [==============================] - 0s 144us/sample - loss: 0.0852 - accuracy: 0.9707 - val_loss: 0.0821 - val_accuracy: 0.9649
Epoch 30/35
512/512 [==============================] - 0s 151us/sample - loss: 0.0958 - accuracy: 0.9668 - val_loss: 0.0736 - val_accuracy: 0.9649
Epoch 31/35
512/512 [==============================] - 0s 148us/sample - loss: 0.0945 - accuracy: 0.9648 - val_loss: 0.0691 - val_accuracy: 0.9649
Epoch 32/35
512/512 [==============================] - 0s 147us/sample - loss: 0.0930 - accuracy: 0.9727 - val_loss: 0.0640 - val_accuracy: 0.9649
Epoch 33/35
512/512 [==============================] - 0s 150us/sample - loss: 0.0857 - accuracy: 0.9766 - val_loss: 0.0615 - val_accuracy: 0.9649
Epoch 34/35
512/512 [==============================] - 0s 151us/sample - loss: 0.0885 - accuracy: 0.9590 - val_loss: 0.0601 - val_accuracy: 0.9649
Epoch 35/35
512/512 [==============================] - 0s 147us/sample - loss: 0.0758 - accuracy: 0.9746 - val_loss: 0.0597 - val_accuracy: 0.9825
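
Validation loss falls steadily across all 35 epochs, so the fixed epoch count is harmless here. For longer runs, a common safeguard (a sketch, not part of the original notebook) is an EarlyStopping callback that halts training once val_loss stops improving:

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True)
# history = model.fit(X_train, y_train, epochs=100,
#                     validation_data=(X_test, y_test),
#                     callbacks=[early_stop])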

Plots of Accuracy and Loss


In [37]:

def plotLearningCurve(history, epochs):
    epochRange = range(1, epochs + 1)
    plt.plot(epochRange, history.history['accuracy'])
    plt.plot(epochRange, history.history['val_accuracy'])
    plt.title('Model Accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend(['Train', 'Validation'], loc='upper left')
    plt.show()

    plt.plot(epochRange, history.history['loss'])
    plt.plot(epochRange, history.history['val_loss'])
    plt.title('Model Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend(['Train', 'Validation'], loc='upper left')
    plt.show()

In [38]:

plotLearningCurve(history,35)
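
The metrics module imported at the start is never used above; a natural closing step (a sketch, not part of the original run) is to threshold the sigmoid outputs at 0.5 and report test-set metrics. Note that because X_test also served as validation data during training, these numbers are not a fully unbiased estimate of generalisation:

y_prob = model.predict(X_test)               # sigmoid outputs in [0, 1]
y_pred = (y_prob > 0.5).astype(int).ravel()  # threshold at 0.5
print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred,
                                    target_names=cancerData.target_names))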
