
# CancerPeau

December 12, 2022

# Importing essential libraries and tools


[1]: import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from tensorflow.keras.metrics import BinaryAccuracy, Precision, Recall
import warnings
warnings.filterwarnings("ignore")
tf.keras.backend.clear_session()

# Loading the data

Info: We load the data with the image_dataset_from_directory utility. It fetches the data from the
appropriate directory, labels it automatically, shuffles it, batches it (here into batches of 32) and
resizes the images to 256 by 256; the sketch after the loading cell below spells out these implicit
defaults.
[2]: data_path = 'skin-cancer-malignant-vs-benign/'
test_data = keras.utils.image_dataset_from_directory('skin-cancer-malignant-vs-benign/test')

train_data = keras.utils.image_dataset_from_directory('skin-cancer-malignant-vs-benign/train')

Found 660 files belonging to 2 classes.


Found 2637 files belonging to 2 classes.
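
For reference, here is a minimal sketch of the same loading call with the implicit Keras defaults written out explicitly (these are the documented default values, not extra options used in the notebook):

from tensorflow import keras

train_data = keras.utils.image_dataset_from_directory(
    'skin-cancer-malignant-vs-benign/train',
    labels='inferred',        # labels are taken from the sub-directory names
    label_mode='int',         # benign -> 0, malignant -> 1 (alphabetical order)
    batch_size=32,            # groups the images into batches of 32
    image_size=(256, 256),    # resizes every image to 256 x 256
    shuffle=True)             # shuffles the files
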
# Plotting a few example images

Info: If a skin cancer is malignant, it is labelled 1; otherwise it is labelled 0.

[3]: batch = train_data.as_numpy_iterator().next()

[4]: fig, ax = plt.subplots(3, 5, figsize=(15,10))

ax = ax.flatten()
for idx, img in enumerate(batch[0][:15]):
    ax[idx].imshow(img.astype(int))
    ax[idx].title.set_text(batch[1][idx])

# Scaling the data

Info: Since our data consists of images and images are made of pixels, we divide every pixel value
by 255 (each pixel takes a value in [0, 255]) so that all pixel values end up on the same scale,
namely [0, 1].

[5]: train_data = train_data.map(lambda x,y: (x/255, y))


test_data = test_data.map(lambda x,y: (x/255, y))

[6]: batch = train_data.as_numpy_iterator().next()

[7]: print("Minimum value of the scaled data:", batch[0].min())

print("Maximum value of the scaled data:", batch[0].max())

Minimum value of the scaled data: 0.0
Maximum value of the scaled data: 1.0
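
An alternative sketch of the same scaling step: a Rescaling layer could be placed inside the model instead of mapping x/255 over the datasets, so that raw [0, 255] images can be fed directly at inference time (an option, not what this notebook does):

from tensorflow.keras import layers

rescale = layers.Rescaling(1./255)   # divides every pixel value by 255
scaled_batch = rescale(batch[0])     # values now lie in [0, 1]
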
# Data augmentation

Info: Because our training set has a relatively small number of images, we can apply data
augmentation, which replicates the images by applying transformations such as random rotation,
random flipping, random zoom and random contrast. This can potentially improve the model's
accuracy. Since we apply the augmentation at the very start of the neural-network architecture,
we have to pass the input shape.
Note: Data augmentation is inactive at test time. Input images are only augmented during calls
to model.fit (not model.evaluate or model.predict); the sketch below illustrates this behaviour.
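
As a quick illustration of that note, here is a minimal sketch (using a throwaway augmentation pipeline, not the one defined further down) showing that Keras preprocessing layers only transform images when called with training=True, which is what model.fit does internally:

import tensorflow as tf
from tensorflow.keras import layers

demo_aug = tf.keras.Sequential([layers.RandomFlip("horizontal"),
                                layers.RandomRotation(0.2)])

images = tf.random.uniform((1, 256, 256, 3))   # dummy batch of one image

augmented = demo_aug(images, training=True)    # random transforms applied, as inside model.fit
unchanged = demo_aug(images, training=False)   # layers act as the identity, as in evaluate/predict
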

[8]: batch = train_data.as_numpy_iterator().next()

[9]: data_augmentation = Sequential([
    layers.RandomFlip("horizontal_and_vertical", input_shape=(256,256,3)),
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
    layers.RandomRotation(0.2)
])

image = batch[0]

plt.figure(figsize=(10, 10))
for i in range(9):
    augmented_image = data_augmentation(image)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image[0])
    plt.axis("off")

WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformFullIntV2
WARNING:tensorflow:Using a while_loop for converting StatelessRandomGetKeyCounter
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2
WARNING:tensorflow:Using a while_loop for converting AdjustContrastv2
[... the same block of while_loop conversion warnings repeats for each of the nine augmented
images, together with tf.function retracing warnings ...]

# Building a deep learning model
[10]: model = Sequential([
    data_augmentation,
    Conv2D(16, (3,3), 1, activation="relu", padding="same"),
    Conv2D(16, (3,3), 1, activation="relu", padding="same"),
    MaxPooling2D(),
    Conv2D(32, (5,5), 1, activation="relu", padding="same"),
    Conv2D(32, (5,5), 1, activation="relu", padding="same"),
    MaxPooling2D(),
    Conv2D(16, (3,3), 1, activation="relu", padding="same"),
    Conv2D(16, (3,3), 1, activation="relu", padding="same"),
    MaxPooling2D(),

    Flatten(),
    Dense(128, activation="relu"),
    Dense(1, activation="sigmoid")
])

WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip


WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting
StatelessRandomUniformFullIntV2
WARNING:tensorflow:Using a while_loop for converting
StatelessRandomGetKeyCounter
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2
WARNING:tensorflow:Using a while_loop for converting AdjustContrastv2

[11]: model.compile(loss='binary_crossentropy',   # binary cross-entropy is used for the 2-class problem
    optimizer='adam',
    metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall(), "acc"])

[12]: model.summary()

Model: "sequential_1"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 sequential (Sequential)         (None, 256, 256, 3)       0

 conv2d (Conv2D)                 (None, 256, 256, 16)      448

 conv2d_1 (Conv2D)               (None, 256, 256, 16)      2320

 max_pooling2d (MaxPooling2D)    (None, 128, 128, 16)      0

 conv2d_2 (Conv2D)               (None, 128, 128, 32)      12832

 conv2d_3 (Conv2D)               (None, 128, 128, 32)      25632

 max_pooling2d_1 (MaxPooling2D)  (None, 64, 64, 32)        0

 conv2d_4 (Conv2D)               (None, 64, 64, 16)        4624

 conv2d_5 (Conv2D)               (None, 64, 64, 16)        2320

 max_pooling2d_2 (MaxPooling2D)  (None, 32, 32, 16)        0

 flatten (Flatten)               (None, 16384)             0

 dense (Dense)                   (None, 128)               2097280

 dense_1 (Dense)                 (None, 1)                 129

=================================================================
Total params: 2,145,585
Trainable params: 2,145,585
Non-trainable params: 0
_________________________________________________________________

[13]: history = model.fit(train_data, epochs=100, validation_data=test_data)

Epoch 1/100
WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting
StatelessRandomUniformFullIntV2
WARNING:tensorflow:Using a while_loop for converting
StatelessRandomGetKeyCounter
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2
WARNING:tensorflow:Using a while_loop for converting AdjustContrastv2
WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting
StatelessRandomUniformFullIntV2
WARNING:tensorflow:Using a while_loop for converting
StatelessRandomGetKeyCounter
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2
WARNING:tensorflow:Using a while_loop for converting AdjustContrastv2
83/83 [==============================] - 867s 10s/step - loss: 0.5894 -
precision: 0.6470 - recall: 0.6951 - acc: 0.6894 - val_loss: 0.5182 -
val_precision: 0.6703 - val_recall: 0.8267 - val_acc: 0.7364
Epoch 2/100
83/83 [==============================] - 729s 9s/step - loss: 0.5225 -
precision: 0.6680 - recall: 0.8538 - acc: 0.7410 - val_loss: 0.4745 -
val_precision: 0.6751 - val_recall: 0.9833 - val_acc: 0.7773
Epoch 3/100
83/83 [==============================] - 711s 9s/step - loss: 0.5308 -
precision: 0.6558 - recall: 0.8563 - acc: 0.7308 - val_loss: 0.5219 -

val_precision: 0.6728 - val_recall: 0.7267 - val_acc: 0.7152
Epoch 4/100
83/83 [==============================] - 701s 8s/step - loss: 0.5061 -
precision: 0.6805 - recall: 0.8739 - acc: 0.7565 - val_loss: 0.5405 -
val_precision: 0.7117 - val_recall: 0.6667 - val_acc: 0.7258
Epoch 5/100
83/83 [==============================] - 723s 9s/step - loss: 0.4817 -
precision: 0.6815 - recall: 0.9081 - acc: 0.7656 - val_loss: 0.4449 -
val_precision: 0.6988 - val_recall: 0.9433 - val_acc: 0.7894
Epoch 6/100
83/83 [==============================] - 706s 8s/step - loss: 0.4590 -
precision: 0.6952 - recall: 0.9240 - acc: 0.7816 - val_loss: 0.4215 -
val_precision: 0.6884 - val_recall: 0.9500 - val_acc: 0.7818
Epoch 7/100
83/83 [==============================] - 724s 9s/step - loss: 0.4606 -
precision: 0.6894 - recall: 0.9198 - acc: 0.7755 - val_loss: 0.4232 -
val_precision: 0.6931 - val_recall: 0.9333 - val_acc: 0.7818
Epoch 8/100
83/83 [==============================] - 711s 8s/step - loss: 0.4518 -
precision: 0.6932 - recall: 0.9231 - acc: 0.7797 - val_loss: 0.5108 -
val_precision: 0.6778 - val_recall: 0.7433 - val_acc: 0.7227
Epoch 9/100
83/83 [==============================] - 747s 9s/step - loss: 0.4668 -
precision: 0.6846 - recall: 0.9014 - acc: 0.7668 - val_loss: 0.4301 -
val_precision: 0.6917 - val_recall: 0.9500 - val_acc: 0.7848
Epoch 10/100
83/83 [==============================] - 1032s 12s/step - loss: 0.4347 -
precision: 0.7002 - recall: 0.9524 - acc: 0.7933 - val_loss: 0.3958 -
val_precision: 0.7103 - val_recall: 0.9400 - val_acc: 0.7985
Epoch 11/100
83/83 [==============================] - 851s 10s/step - loss: 0.4454 -
precision: 0.7069 - recall: 0.9148 - acc: 0.7892 - val_loss: 0.3901 -
val_precision: 0.6973 - val_recall: 0.9600 - val_acc: 0.7924
Epoch 12/100
83/83 [==============================] - 718s 9s/step - loss: 0.4232 -
precision: 0.7143 - recall: 0.9357 - acc: 0.8009 - val_loss: 0.3912 -
val_precision: 0.7060 - val_recall: 0.9367 - val_acc: 0.7939
Epoch 13/100
83/83 [==============================] - 699s 8s/step - loss: 0.4352 -
precision: 0.7004 - recall: 0.9315 - acc: 0.7880 - val_loss: 0.4004 -
val_precision: 0.6954 - val_recall: 0.9667 - val_acc: 0.7924
Epoch 14/100
83/83 [==============================] - 711s 9s/step - loss: 0.4102 -
precision: 0.7169 - recall: 0.9373 - acc: 0.8036 - val_loss: 0.3951 -
val_precision: 0.6966 - val_recall: 0.9567 - val_acc: 0.7909
Epoch 15/100
83/83 [==============================] - 714s 9s/step - loss: 0.4025 -
precision: 0.7238 - recall: 0.9131 - acc: 0.8024 - val_loss: 0.3915 -

val_precision: 0.7225 - val_recall: 0.9200 - val_acc: 0.8030
Epoch 16/100
83/83 [==============================] - 681s 8s/step - loss: 0.3855 -
precision: 0.7219 - recall: 0.9348 - acc: 0.8070 - val_loss: 0.3693 -
val_precision: 0.7292 - val_recall: 0.9333 - val_acc: 0.8121
Epoch 17/100
83/83 [==============================] - 677s 8s/step - loss: 0.3860 -
precision: 0.7133 - recall: 0.9373 - acc: 0.8005 - val_loss: 0.3590 -
val_precision: 0.7289 - val_recall: 0.9500 - val_acc: 0.8167
Epoch 18/100
83/83 [==============================] - 676s 8s/step - loss: 0.3911 -
precision: 0.7403 - recall: 0.8956 - acc: 0.8100 - val_loss: 0.3759 -
val_precision: 0.7261 - val_recall: 0.9367 - val_acc: 0.8106
Epoch 19/100
83/83 [==============================] - 674s 8s/step - loss: 0.3816 -
precision: 0.7220 - recall: 0.9114 - acc: 0.8005 - val_loss: 0.3606 -
val_precision: 0.7224 - val_recall: 0.9367 - val_acc: 0.8076
Epoch 20/100
83/83 [==============================] - 682s 8s/step - loss: 0.3847 -
precision: 0.7248 - recall: 0.9131 - acc: 0.8032 - val_loss: 0.3499 -
val_precision: 0.7428 - val_recall: 0.9433 - val_acc: 0.8258
Epoch 21/100
83/83 [==============================] - 675s 8s/step - loss: 0.4309 -
precision: 0.7184 - recall: 0.8822 - acc: 0.7895 - val_loss: 0.3872 -
val_precision: 0.7528 - val_recall: 0.8933 - val_acc: 0.8182
Epoch 22/100
83/83 [==============================] - 674s 8s/step - loss: 0.3762 -
precision: 0.7491 - recall: 0.8780 - acc: 0.8111 - val_loss: 0.3526 -
val_precision: 0.7579 - val_recall: 0.8767 - val_acc: 0.8167
Epoch 23/100
83/83 [==============================] - 712s 9s/step - loss: 0.3638 -
precision: 0.7512 - recall: 0.9031 - acc: 0.8203 - val_loss: 0.3356 -
val_precision: 0.7692 - val_recall: 0.9000 - val_acc: 0.8318
Epoch 24/100
83/83 [==============================] - 701s 8s/step - loss: 0.3603 -
precision: 0.7572 - recall: 0.8830 - acc: 0.8184 - val_loss: 0.3445 -
val_precision: 0.7759 - val_recall: 0.9000 - val_acc: 0.8364
Epoch 25/100
83/83 [==============================] - 673s 8s/step - loss: 0.3517 -
precision: 0.7758 - recall: 0.8847 - acc: 0.8316 - val_loss: 0.3643 -
val_precision: 0.7798 - val_recall: 0.8500 - val_acc: 0.8227
Epoch 26/100
83/83 [==============================] - 689s 8s/step - loss: 0.3506 -
precision: 0.7783 - recall: 0.8680 - acc: 0.8278 - val_loss: 0.3354 -
val_precision: 0.8117 - val_recall: 0.8333 - val_acc: 0.8364
Epoch 27/100
83/83 [==============================] - 680s 8s/step - loss: 0.3564 -
precision: 0.7681 - recall: 0.8580 - acc: 0.8180 - val_loss: 0.3799 -

val_precision: 0.7886 - val_recall: 0.7833 - val_acc: 0.8061
Epoch 28/100
83/83 [==============================] - 678s 8s/step - loss: 0.3647 -
precision: 0.7700 - recall: 0.8839 - acc: 0.8275 - val_loss: 0.3449 -
val_precision: 0.7803 - val_recall: 0.8167 - val_acc: 0.8121
Epoch 29/100
83/83 [==============================] - 674s 8s/step - loss: 0.3648 -
precision: 0.7742 - recall: 0.8563 - acc: 0.8214 - val_loss: 0.3897 -
val_precision: 0.7368 - val_recall: 0.9333 - val_acc: 0.8182
Epoch 30/100
83/83 [==============================] - 728s 9s/step - loss: 0.3534 -
precision: 0.7829 - recall: 0.8797 - acc: 0.8347 - val_loss: 0.3307 -
val_precision: 0.7868 - val_recall: 0.8733 - val_acc: 0.8348
Epoch 31/100
83/83 [==============================] - 707s 8s/step - loss: 0.3325 -
precision: 0.7935 - recall: 0.8797 - acc: 0.8415 - val_loss: 0.3617 -
val_precision: 0.7406 - val_recall: 0.8567 - val_acc: 0.7985
Epoch 32/100
83/83 [==============================] - 691s 8s/step - loss: 0.3547 -
precision: 0.7806 - recall: 0.8830 - acc: 0.8343 - val_loss: 0.3190 -
val_precision: 0.8131 - val_recall: 0.8700 - val_acc: 0.8500
Epoch 33/100
83/83 [==============================] - 693s 8s/step - loss: 0.3358 -
precision: 0.7917 - recall: 0.8638 - acc: 0.8350 - val_loss: 0.3355 -
val_precision: 0.7725 - val_recall: 0.9167 - val_acc: 0.8394
Epoch 34/100
83/83 [==============================] - 683s 8s/step - loss: 0.3411 -
precision: 0.7871 - recall: 0.8864 - acc: 0.8396 - val_loss: 0.3511 -
val_precision: 0.7575 - val_recall: 0.9267 - val_acc: 0.8318
Epoch 35/100
83/83 [==============================] - 672s 8s/step - loss: 0.3246 -
precision: 0.7964 - recall: 0.8889 - acc: 0.8464 - val_loss: 0.3524 -
val_precision: 0.7675 - val_recall: 0.9133 - val_acc: 0.8348
Epoch 36/100
83/83 [==============================] - 672s 8s/step - loss: 0.3073 -
precision: 0.8166 - recall: 0.8780 - acc: 0.8551 - val_loss: 0.3109 -
val_precision: 0.8018 - val_recall: 0.8900 - val_acc: 0.8500
Epoch 37/100
83/83 [==============================] - 674s 8s/step - loss: 0.3384 -
precision: 0.8065 - recall: 0.8772 - acc: 0.8487 - val_loss: 0.3196 -
val_precision: 0.7772 - val_recall: 0.9300 - val_acc: 0.8470
Epoch 38/100
83/83 [==============================] - 699s 8s/step - loss: 0.3213 -
precision: 0.7991 - recall: 0.8739 - acc: 0.8430 - val_loss: 0.3086 -
val_precision: 0.8075 - val_recall: 0.8667 - val_acc: 0.8455
Epoch 39/100
83/83 [==============================] - 719s 9s/step - loss: 0.3138 -
precision: 0.8083 - recall: 0.8914 - acc: 0.8548 - val_loss: 0.3407 -

val_precision: 0.7952 - val_recall: 0.8800 - val_acc: 0.8424
Epoch 40/100
83/83 [==============================] - 694s 8s/step - loss: 0.3242 -
precision: 0.8017 - recall: 0.8780 - acc: 0.8460 - val_loss: 0.3055 -
val_precision: 0.7941 - val_recall: 0.9000 - val_acc: 0.8485
Epoch 41/100
83/83 [==============================] - 672s 8s/step - loss: 0.2990 -
precision: 0.8173 - recall: 0.8855 - acc: 0.8582 - val_loss: 0.3272 -
val_precision: 0.8339 - val_recall: 0.8200 - val_acc: 0.8439
Epoch 42/100
83/83 [==============================] - 674s 8s/step - loss: 0.3020 -
precision: 0.8146 - recall: 0.8847 - acc: 0.8563 - val_loss: 0.3139 -
val_precision: 0.8190 - val_recall: 0.8600 - val_acc: 0.8500
Epoch 43/100
83/83 [==============================] - 676s 8s/step - loss: 0.3227 -
precision: 0.8080 - recall: 0.8755 - acc: 0.8491 - val_loss: 0.3374 -
val_precision: 0.7827 - val_recall: 0.8767 - val_acc: 0.8333
Epoch 44/100
83/83 [==============================] - 675s 8s/step - loss: 0.3053 -
precision: 0.8159 - recall: 0.8997 - acc: 0.8623 - val_loss: 0.3606 -
val_precision: 0.7726 - val_recall: 0.8833 - val_acc: 0.8288
Epoch 45/100
83/83 [==============================] - 681s 8s/step - loss: 0.3183 -
precision: 0.8164 - recall: 0.8730 - acc: 0.8532 - val_loss: 0.3068 -
val_precision: 0.8082 - val_recall: 0.8567 - val_acc: 0.8424
Epoch 46/100
83/83 [==============================] - 676s 8s/step - loss: 0.3157 -
precision: 0.8069 - recall: 0.8830 - acc: 0.8510 - val_loss: 0.3192 -
val_precision: 0.7797 - val_recall: 0.8967 - val_acc: 0.8379
Epoch 47/100
83/83 [==============================] - 683s 8s/step - loss: 0.3076 -
precision: 0.8076 - recall: 0.8839 - acc: 0.8517 - val_loss: 0.2967 -
val_precision: 0.8377 - val_recall: 0.8600 - val_acc: 0.8606
Epoch 48/100
83/83 [==============================] - 721s 9s/step - loss: 0.3107 -
precision: 0.8111 - recall: 0.8897 - acc: 0.8559 - val_loss: 0.3036 -
val_precision: 0.8674 - val_recall: 0.8067 - val_acc: 0.8561
Epoch 49/100
83/83 [==============================] - 686s 8s/step - loss: 0.3051 -
precision: 0.8157 - recall: 0.8872 - acc: 0.8578 - val_loss: 0.3164 -
val_precision: 0.8018 - val_recall: 0.8767 - val_acc: 0.8455
Epoch 50/100
83/83 [==============================] - 685s 8s/step - loss: 0.2950 -
precision: 0.8194 - recall: 0.8906 - acc: 0.8612 - val_loss: 0.3369 -
val_precision: 0.7853 - val_recall: 0.8900 - val_acc: 0.8394
Epoch 51/100
83/83 [==============================] - 671s 8s/step - loss: 0.3017 -
precision: 0.8233 - recall: 0.8797 - acc: 0.8597 - val_loss: 0.3052 -

val_precision: 0.8269 - val_recall: 0.8600 - val_acc: 0.8545
Epoch 52/100
83/83 [==============================] - 669s 8s/step - loss: 0.2891 -
precision: 0.8261 - recall: 0.8889 - acc: 0.8646 - val_loss: 0.3322 -
val_precision: 0.7867 - val_recall: 0.9100 - val_acc: 0.8470
Epoch 53/100
83/83 [==============================] - 679s 8s/step - loss: 0.2917 -
precision: 0.8283 - recall: 0.8947 - acc: 0.8680 - val_loss: 0.2950 -
val_precision: 0.7803 - val_recall: 0.9233 - val_acc: 0.8470
Epoch 54/100
83/83 [==============================] - 679s 8s/step - loss: 0.2829 -
precision: 0.8356 - recall: 0.8914 - acc: 0.8711 - val_loss: 0.3155 -
val_precision: 0.8220 - val_recall: 0.8467 - val_acc: 0.8470
Epoch 55/100
83/83 [==============================] - 682s 8s/step - loss: 0.2910 -
precision: 0.8357 - recall: 0.8797 - acc: 0.8669 - val_loss: 0.3432 -
val_precision: 0.8092 - val_recall: 0.8200 - val_acc: 0.8303
Epoch 56/100
83/83 [==============================] - 669s 8s/step - loss: 0.2966 -
precision: 0.8271 - recall: 0.8872 - acc: 0.8646 - val_loss: 0.3230 -
val_precision: 0.8024 - val_recall: 0.8800 - val_acc: 0.8470
Epoch 57/100
83/83 [==============================] - 674s 8s/step - loss: 0.2878 -
precision: 0.8381 - recall: 0.8822 - acc: 0.8692 - val_loss: 0.3431 -
val_precision: 0.8092 - val_recall: 0.8200 - val_acc: 0.8303
Epoch 58/100
83/83 [==============================] - 674s 8s/step - loss: 0.2985 -
precision: 0.8195 - recall: 0.8914 - acc: 0.8616 - val_loss: 0.3230 -
val_precision: 0.7982 - val_recall: 0.8967 - val_acc: 0.8500
Epoch 59/100
83/83 [==============================] - 693s 8s/step - loss: 0.2813 -
precision: 0.8234 - recall: 0.9114 - acc: 0.8711 - val_loss: 0.3141 -
val_precision: 0.7928 - val_recall: 0.8800 - val_acc: 0.8409
Epoch 60/100
83/83 [==============================] - 719s 9s/step - loss: 0.2855 -
precision: 0.8337 - recall: 0.8964 - acc: 0.8718 - val_loss: 0.3013 -
val_precision: 0.8146 - val_recall: 0.8933 - val_acc: 0.8591
Epoch 61/100
83/83 [==============================] - 681s 8s/step - loss: 0.2962 -
precision: 0.8282 - recall: 0.8739 - acc: 0.8604 - val_loss: 0.2925 -
val_precision: 0.8149 - val_recall: 0.9100 - val_acc: 0.8652
Epoch 62/100
83/83 [==============================] - 670s 8s/step - loss: 0.2844 -
precision: 0.8221 - recall: 0.8956 - acc: 0.8646 - val_loss: 0.3784 -
val_precision: 0.7982 - val_recall: 0.9100 - val_acc: 0.8545
Epoch 63/100
83/83 [==============================] - 670s 8s/step - loss: 0.2776 -
precision: 0.8346 - recall: 0.9023 - acc: 0.8745 - val_loss: 0.4005 -

val_precision: 0.7831 - val_recall: 0.8667 - val_acc: 0.8303
Epoch 64/100
83/83 [==============================] - 668s 8s/step - loss: 0.2724 -
precision: 0.8411 - recall: 0.8931 - acc: 0.8749 - val_loss: 0.3749 -
val_precision: 0.7890 - val_recall: 0.8600 - val_acc: 0.8318
Epoch 65/100
83/83 [==============================] - 675s 8s/step - loss: 0.2901 -
precision: 0.8242 - recall: 0.8931 - acc: 0.8650 - val_loss: 0.2792 -
val_precision: 0.8409 - val_recall: 0.8633 - val_acc: 0.8636
Epoch 66/100
83/83 [==============================] - 682s 8s/step - loss: 0.2877 -
precision: 0.8292 - recall: 0.8922 - acc: 0.8677 - val_loss: 0.3169 -
val_precision: 0.7884 - val_recall: 0.9067 - val_acc: 0.8470
Epoch 67/100
83/83 [==============================] - 702s 8s/step - loss: 0.2877 -
precision: 0.8336 - recall: 0.8914 - acc: 0.8699 - val_loss: 0.3367 -
val_precision: 0.8013 - val_recall: 0.8467 - val_acc: 0.8348
Epoch 68/100
83/83 [==============================] - 700s 8s/step - loss: 0.2729 -
precision: 0.8395 - recall: 0.8914 - acc: 0.8733 - val_loss: 0.3651 -
val_precision: 0.8137 - val_recall: 0.8300 - val_acc: 0.8364
Epoch 69/100
83/83 [==============================] - 674s 8s/step - loss: 0.2716 -
precision: 0.8442 - recall: 0.8830 - acc: 0.8730 - val_loss: 0.4153 -
val_precision: 0.7902 - val_recall: 0.9167 - val_acc: 0.8515
Epoch 70/100
83/83 [==============================] - 669s 8s/step - loss: 0.2732 -
precision: 0.8342 - recall: 0.8997 - acc: 0.8733 - val_loss: 0.2857 -
val_precision: 0.8267 - val_recall: 0.9067 - val_acc: 0.8712
Epoch 71/100
83/83 [==============================] - 670s 8s/step - loss: 0.2754 -
precision: 0.8348 - recall: 0.8989 - acc: 0.8733 - val_loss: 0.2948 -
val_precision: 0.8531 - val_recall: 0.8133 - val_acc: 0.8515
Epoch 72/100
83/83 [==============================] - 671s 8s/step - loss: 0.2889 -
precision: 0.8304 - recall: 0.8914 - acc: 0.8680 - val_loss: 0.2868 -
val_precision: 0.8024 - val_recall: 0.9067 - val_acc: 0.8561
Epoch 73/100
83/83 [==============================] - 672s 8s/step - loss: 0.3102 -
precision: 0.8295 - recall: 0.8822 - acc: 0.8642 - val_loss: 0.2941 -
val_precision: 0.8257 - val_recall: 0.9000 - val_acc: 0.8682
Epoch 74/100
83/83 [==============================] - 670s 8s/step - loss: 0.2795 -
precision: 0.8315 - recall: 0.8906 - acc: 0.8684 - val_loss: 0.2795 -
val_precision: 0.8562 - val_recall: 0.8533 - val_acc: 0.8682
Epoch 75/100
83/83 [==============================] - 730s 9s/step - loss: 0.2719 -
precision: 0.8419 - recall: 0.8897 - acc: 0.8741 - val_loss: 0.2997 -

val_precision: 0.8173 - val_recall: 0.8500 - val_acc: 0.8455
Epoch 76/100
83/83 [==============================] - 714s 9s/step - loss: 0.2687 -
precision: 0.8395 - recall: 0.8956 - acc: 0.8749 - val_loss: 0.3179 -
val_precision: 0.8199 - val_recall: 0.8500 - val_acc: 0.8470
Epoch 77/100
83/83 [==============================] - 723s 9s/step - loss: 0.2632 -
precision: 0.8406 - recall: 0.9073 - acc: 0.8798 - val_loss: 0.3132 -
val_precision: 0.7878 - val_recall: 0.9033 - val_acc: 0.8455
Epoch 78/100
83/83 [==============================] - 701s 8s/step - loss: 0.2599 -
precision: 0.8459 - recall: 0.9123 - acc: 0.8847 - val_loss: 0.3277 -
val_precision: 0.8531 - val_recall: 0.8133 - val_acc: 0.8515
Epoch 79/100
83/83 [==============================] - 698s 8s/step - loss: 0.2571 -
precision: 0.8454 - recall: 0.8997 - acc: 0.8798 - val_loss: 0.3074 -
val_precision: 0.8067 - val_recall: 0.8767 - val_acc: 0.8485
Epoch 80/100
83/83 [==============================] - 699s 8s/step - loss: 0.2601 -
precision: 0.8501 - recall: 0.8906 - acc: 0.8790 - val_loss: 0.3211 -
val_precision: 0.7976 - val_recall: 0.8800 - val_acc: 0.8439
Epoch 81/100
83/83 [==============================] - 699s 8s/step - loss: 0.2686 -
precision: 0.8369 - recall: 0.8914 - acc: 0.8718 - val_loss: 0.2900 -
val_precision: 0.8503 - val_recall: 0.8333 - val_acc: 0.8576
Epoch 82/100
83/83 [==============================] - 707s 9s/step - loss: 0.2568 -
precision: 0.8502 - recall: 0.9006 - acc: 0.8828 - val_loss: 0.3012 -
val_precision: 0.8493 - val_recall: 0.8267 - val_acc: 0.8545
Epoch 83/100
83/83 [==============================] - 745s 9s/step - loss: 0.2685 -
precision: 0.8469 - recall: 0.8964 - acc: 0.8794 - val_loss: 0.2998 -
val_precision: 0.8165 - val_recall: 0.8600 - val_acc: 0.8485
Epoch 84/100
83/83 [==============================] - 722s 9s/step - loss: 0.2478 -
precision: 0.8594 - recall: 0.8989 - acc: 0.8874 - val_loss: 0.2851 -
val_precision: 0.8297 - val_recall: 0.8767 - val_acc: 0.8621
Epoch 85/100
83/83 [==============================] - 715s 9s/step - loss: 0.2669 -
precision: 0.8530 - recall: 0.9014 - acc: 0.8847 - val_loss: 0.2954 -
val_precision: 0.8219 - val_recall: 0.8767 - val_acc: 0.8576
Epoch 86/100
83/83 [==============================] - 712s 9s/step - loss: 0.2614 -
precision: 0.8492 - recall: 0.8889 - acc: 0.8779 - val_loss: 0.3453 -
val_precision: 0.8179 - val_recall: 0.8233 - val_acc: 0.8364
Epoch 87/100
83/83 [==============================] - 718s 9s/step - loss: 0.2581 -
precision: 0.8595 - recall: 0.8839 - acc: 0.8817 - val_loss: 0.3103 -

val_precision: 0.8012 - val_recall: 0.9000 - val_acc: 0.8530
Epoch 88/100
83/83 [==============================] - 721s 9s/step - loss: 0.2502 -
precision: 0.8584 - recall: 0.9014 - acc: 0.8878 - val_loss: 0.2892 -
val_precision: 0.8272 - val_recall: 0.8933 - val_acc: 0.8667
Epoch 89/100
83/83 [==============================] - 705s 8s/step - loss: 0.2425 -
precision: 0.8578 - recall: 0.8972 - acc: 0.8859 - val_loss: 0.2874 -
val_precision: 0.8292 - val_recall: 0.8900 - val_acc: 0.8667
Epoch 90/100
83/83 [==============================] - 713s 9s/step - loss: 0.2419 -
precision: 0.8659 - recall: 0.9006 - acc: 0.8915 - val_loss: 0.3094 -
val_precision: 0.8228 - val_recall: 0.8667 - val_acc: 0.8545
Epoch 91/100
83/83 [==============================] - 751s 9s/step - loss: 0.2433 -
precision: 0.8627 - recall: 0.9081 - acc: 0.8927 - val_loss: 0.2851 -
val_precision: 0.8302 - val_recall: 0.8967 - val_acc: 0.8697
Epoch 92/100
83/83 [==============================] - 743s 9s/step - loss: 0.2473 -
precision: 0.8597 - recall: 0.8956 - acc: 0.8862 - val_loss: 0.3072 -
val_precision: 0.8185 - val_recall: 0.8567 - val_acc: 0.8485
Epoch 93/100
83/83 [==============================] - 742s 9s/step - loss: 0.2495 -
precision: 0.8656 - recall: 0.9089 - acc: 0.8946 - val_loss: 0.2968 -
val_precision: 0.8188 - val_recall: 0.8733 - val_acc: 0.8545
Epoch 94/100
83/83 [==============================] - 738s 9s/step - loss: 0.2437 -
precision: 0.8648 - recall: 0.9031 - acc: 0.8919 - val_loss: 0.2835 -
val_precision: 0.8287 - val_recall: 0.9033 - val_acc: 0.8712
Epoch 95/100
83/83 [==============================] - 751s 9s/step - loss: 0.2409 -
precision: 0.8656 - recall: 0.8931 - acc: 0.8885 - val_loss: 0.3013 -
val_precision: 0.8118 - val_recall: 0.9200 - val_acc: 0.8667
Epoch 96/100
83/83 [==============================] - 731s 9s/step - loss: 0.2486 -
precision: 0.8591 - recall: 0.9014 - acc: 0.8881 - val_loss: 0.3792 -
val_precision: 0.7629 - val_recall: 0.9333 - val_acc: 0.8379
Epoch 97/100
83/83 [==============================] - 727s 9s/step - loss: 0.2557 -
precision: 0.8435 - recall: 0.9048 - acc: 0.8805 - val_loss: 0.3140 -
val_precision: 0.8427 - val_recall: 0.8033 - val_acc: 0.8424
Epoch 98/100
83/83 [==============================] - 776s 9s/step - loss: 0.2358 -
precision: 0.8707 - recall: 0.9056 - acc: 0.8961 - val_loss: 0.3098 -
val_precision: 0.8237 - val_recall: 0.8567 - val_acc: 0.8515
Epoch 99/100
83/83 [==============================] - 763s 9s/step - loss: 0.2306 -
precision: 0.8609 - recall: 0.9048 - acc: 0.8904 - val_loss: 0.2914 -

val_precision: 0.8201 - val_recall: 0.8967 - val_acc: 0.8636
Epoch 100/100
83/83 [==============================] - 733s 9s/step - loss: 0.2590 -
precision: 0.8573 - recall: 0.8881 - acc: 0.8821 - val_loss: 0.3296 -
val_precision: 0.7981 - val_recall: 0.8567 - val_acc: 0.8364
# Results
[14]: bin_acc = BinaryAccuracy()
recall = Recall()
precision = Precision()

for batch in test_data.as_numpy_iterator():
    X, y = batch
    yhat = model.predict(X)
    bin_acc.update_state(y, yhat)
    recall.update_state(y, yhat)
    precision.update_state(y, yhat)

print("Accuracy:", bin_acc.result().numpy(), "\nRecall:", recall.result().numpy(), "\nPrecision:", precision.result().numpy())

1/1 [==============================] - 13s 13s/step


1/1 [==============================] - 2s 2s/step
1/1 [==============================] - 2s 2s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 2s 2s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 2s 2s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 2s 2s/step
1/1 [==============================] - 2s 2s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - 3s 3s/step
Accuracy: 0.8363636
Recall: 0.8566667
Precision: 0.79813665
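
Precision and recall can be combined into a single F1 score (their harmonic mean); a quick sketch using the metric objects computed above:

p = precision.result().numpy()   # ~0.798 on the test set
r = recall.result().numpy()      # ~0.857 on the test set

f1 = 2 * p * r / (p + r)
print("F1:", f1)                 # ~0.83
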

# Plotting the loss and the precision
[15]: losses = pd.DataFrame(history.history)
losses.head()
losses[['loss','val_loss']].plot()
losses[['precision','val_precision']].plot()
losses.plot()

[15]: <AxesSubplot:>

# Saving our CNN model for use in the API
[16]: model.save('CancerPeau1_model.h5')
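
A minimal sketch of how the API side would reload the saved file (assuming the .h5 file is available on the API's path):

from tensorflow import keras

loaded_model = keras.models.load_model('CancerPeau1_model.h5')
loaded_model.summary()   # same architecture and weights as the model trained above
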

[ ]:

# Manual tests

Info: We have already evaluated our model with various metrics and visualizations, but it is always
a good idea to test the model by hand to make sure everything works. In the code below, I picked
an image at random and plotted it with its true label in the title, so let's see whether our model
classifies this example correctly.
[17]: batch = test_data.as_numpy_iterator().next()

[18]: batch[0][15]

[18]: array([[[0.91764706, 0.6156863 , 0.654902 ],


[0.92720586, 0.6093137 , 0.6612745 ],
[0.93210787, 0.6132353 , 0.67083335],
…,
[0.91544116, 0.64215684, 0.6779412 ],
[0.90710783, 0.63504905, 0.65686274],
[0.92941177, 0.6509804 , 0.68235296]],

[[0.92720586, 0.6156863 , 0.6580882 ],


[0.9367647 , 0.61966914, 0.6748162 ],
[0.93509495, 0.6194087 , 0.6736213 ],
…,
[0.91165745, 0.6248315 , 0.6612132 ],
[0.9182598 , 0.62688416, 0.66343445],
[0.9198529 , 0.63504905, 0.6759804 ]],

[[0.94289213, 0.6102941 , 0.6696078 ],


[0.9458793 , 0.6276195 , 0.6821538 ],
[0.9448836 , 0.6344822 , 0.6862286 ],
…,
[0.87742037, 0.5817402 , 0.6226409 ],
[0.8921875 , 0.6037684 , 0.64928 ],
[0.8933824 , 0.62328434, 0.6610294 ]],

…,

[[0.847549 , 0.50759804, 0.54264706],
[0.84675246, 0.50759804, 0.5504136 ],
[0.86459863, 0.5274816 , 0.5648438 ],
…,
[0.8837316 , 0.64805454, 0.6475031 ],
[0.88730085, 0.65032166, 0.6525582 ],
[0.8779412 , 0.6389706 , 0.63284314]],

[[0.87083334, 0.5357843 , 0.5781863 ],


[0.8630668 , 0.5363817 , 0.5773897 ],
[0.85234374, 0.52809435, 0.56288296],
…,
[0.8771753 , 0.6408854 , 0.6393842 ],
[0.8684283 , 0.62875307, 0.6230392 ],
[0.8495098 , 0.6046569 , 0.5879902 ]],

[[0.85490197, 0.5294118 , 0.5686275 ],


[0.85490197, 0.532598 , 0.5781863 ],
[0.8656863 , 0.5468137 , 0.5857843 ],
…,
[0.88235295, 0.64705884, 0.64436275],
[0.875 , 0.6397059 , 0.6389706 ],
[0.84313726, 0.60784316, 0.6039216 ]]], dtype=float32)

[19]: batch[1][15]

[19]: 0

[20]: img, label = batch[0][15], batch[1][15]

plt.imshow(img)
if label==1:
    plt.title("Malin")
else:
    plt.title("Benin")
plt.show()

[21]: y_hat = model.predict(np.expand_dims(img, 0))

1/1 [==============================] - 1s 1s/step


Example: The predicted probability that this lesion is malignant is shown below. I set the
classification threshold to 0.5, which means that if the probability is below 0.5 the example is
classified as benign, otherwise it is classified as malignant.
[22]: y_hat

[22]: array([[0.00141959]], dtype=float32)

[23]: if y_hat < 0.5:
    print("Benin")
else:
    print("Malin")

Benin
# Building the prediction module from the test data for the API

[32]: class_dict = {1: 'malin', 0: 'benin'}   # malignant lesions were labelled 1, benign ones 0

[33]: from PIL import Image

file_path = 'test/malignant/1.jpg'
test_image = Image.open(data_path + file_path)
test_image = test_image.resize((256, 256))
plt.subplot(1, 2, 1)
plt.imshow(test_image)

# scale to [0, 1] like the training data, then add the batch dimension
test_image = np.expand_dims(np.asarray(test_image, dtype=np.float32) / 255.0, 0)

probs = model.predict(test_image)
# the model has a single sigmoid output, so threshold at 0.5 rather than using argmax
pred_class = class_dict[int(probs[0, 0] > 0.5)]
print('prediction: ', pred_class)

1/1 [==============================] - 0s 124ms/step


prediction: malin
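
To make this reusable in the API, the steps above can be wrapped in a small helper; a sketch (the function name and return format are illustrative assumptions, not part of the notebook):

import numpy as np
from PIL import Image

def predict_skin_lesion(image_path, model, threshold=0.5):
    """Return ('malin' or 'benin', probability of malignancy) for one image file."""
    img = Image.open(image_path).convert('RGB').resize((256, 256))
    x = np.asarray(img, dtype=np.float32) / 255.0          # same [0, 1] scaling as the training data
    prob = float(model.predict(np.expand_dims(x, 0))[0, 0])
    return ('malin' if prob >= threshold else 'benin'), prob

# example usage:
# label, prob = predict_skin_lesion(data_path + 'test/malignant/1.jpg', model)
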

[ ]:

