
Capstone Project

April 15, 2025

1 Capstone Project
1.1 Image classifier for the SVHN dataset
1.1.1 Instructions
In this notebook, you will create a neural network that classifies real-world images of digits. You
will use concepts from throughout this course in building, training, testing, validating and saving
your TensorFlow classifier model.
This project is peer-assessed. Within this notebook you will find instructions in each section
for how to complete the project. Pay close attention to the instructions as the peer review will be
carried out according to a grading rubric that checks key parts of the project instructions. Feel free
to add extra cells into the notebook as required.

1.1.2 How to submit


When you have completed the Capstone project notebook, you will submit a pdf of the notebook
for peer review. First ensure that the notebook has been fully executed from beginning to end, and
all of the cell outputs are visible. This is important, as the grading rubric depends on the reviewer
being able to view the outputs of your notebook. Save the notebook as a pdf (File -> Download as
-> PDF via LaTeX). You should then submit this pdf for review.

1.1.3 Let’s get started!


We’ll start by running some imports, and loading the dataset. For this project you are free to make
further imports throughout the notebook as you wish.

In [1]: import tensorflow as tf

        from scipy.io import loadmat
        from tensorflow.keras.models import Sequential
        # Conv2D and MaxPooling2D are completed here, as both are used in the CNN cells below
        from tensorflow.keras.layers import (Dense, Flatten, Softmax, BatchNormalization,
                                             Dropout, Conv2D, MaxPooling2D)
        from tensorflow.keras import regularizers
        from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
        from sklearn.preprocessing import OneHotEncoder

For the capstone project, you will use the SVHN dataset. This is an image dataset of over 600,000
digit images in all, and is a harder dataset than MNIST as the numbers appear in the context of
natural scene images. SVHN is obtained from house numbers in Google Street View images.

• Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu and A. Y. Ng. “Reading Digits in Natural
  Images with Unsupervised Feature Learning”. NIPS Workshop on Deep Learning and
  Unsupervised Feature Learning, 2011.

Your goal is to develop an end-to-end workflow for building, training, validating, evaluating
and saving a neural network that classifies a real-world image into one of ten classes.

In [2]: # Run this cell to load the dataset
        train = loadmat('data/train_32x32.mat')
        test = loadmat('data/test_32x32.mat')

Both train and test are dictionaries with keys X and y for the input images and labels respectively.
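Before preprocessing, it is worth confirming the array layout: the SVHN .mat files store images channels-last with the sample index as the final axis. A quick check (a sketch; the shapes in the comments are inferred from the outputs that appear later in this notebook):

import numpy as np

# Raw SVHN layout: (height, width, channels, num_samples)
print(train['X'].shape, train['y'].shape)  # expected: (32, 32, 3, 73257) (73257, 1)
print(test['X'].shape, test['y'].shape)    # expected: (32, 32, 3, 26032) (26032, 1)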

1.2 1. Inspect and preprocess the dataset


• Extract the training and testing images and labels separately from the train and test
  dictionaries loaded for you.
• Select a random sample of images and corresponding labels from the dataset (at least 10),
  and display them in a figure.
• Convert the training and test images to grayscale by taking the average across all colour
  channels for each pixel. Hint: retain the channel dimension, which will now have size 1. (A
  minimal sketch of this conversion follows the list.)
• Select a random sample of the grayscale images and corresponding labels from the dataset
  (at least 10), and display them in a figure.
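One minimal way to satisfy the grayscale hint is np.mean with keepdims=True, which retains the size-1 channel axis in a single step (a sketch only; train_gray is an illustrative name, and the cells below instead use np.average followed by np.newaxis, which is equivalent):

import numpy as np

# Average over the colour channel (axis 2 in the raw (32, 32, 3, N) layout);
# keepdims=True keeps a size-1 channel axis, as the hint requires.
train_gray = np.mean(train['X'].astype(np.float32), axis=2, keepdims=True)  # (32, 32, 1, N)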

In [3]: train_data = train['X']
        train_labels = train['y']
        test_data = test['X']
        test_labels = test['y']

In [4]: import numpy as np
        import matplotlib.pyplot as plt

        rows, columns = 5, 5
        fig, axs = plt.subplots(rows, columns, figsize=(15, 12))  # set figure size here
        for i, ax in enumerate(axs.flat):
            # pick a fresh random sample index for each subplot
            index_random_sample = np.random.choice(train_data.shape[3])
            ax.imshow(train_data[:, :, :, index_random_sample])
            ax.set_title(f"Sample {index_random_sample}")
        plt.tight_layout()
        plt.show()

<Figure size 1500x1200 with 25 Axes>

In [5]: # Average across the colour channel (axis 2), then restore a size-1 channel axis
        train_data_gray = np.average(train_data, axis=2)
        train_data_gray = train_data_gray[:, :, np.newaxis, :]

        rows, columns = 5, 5
        fig, axs = plt.subplots(rows, columns, figsize=(15, 12))  # set figure size here
        for i, ax in enumerate(axs.flat):
            index_random_sample = np.random.choice(train_data_gray.shape[3])
            ax.imshow(train_data_gray[:, :, 0, index_random_sample], cmap='gray')
            ax.set_title(f"Sample {index_random_sample}")
        plt.tight_layout()
        plt.show()

1.3 2. MLP neural network classifier
• Build an MLP classifier model using the Sequential API. Your model should use only Flatten
  and Dense layers, with the final layer having a 10-way softmax output.
• You should design and build the model yourself. Feel free to experiment with different MLP
  architectures. Hint: to achieve a reasonable accuracy you won’t need to use more than 4 or 5 layers.
• Print out the model summary (using the summary() method).
• Compile and train the model (we recommend a maximum of 30 epochs), making use of both
  training and validation sets during the training run.
• Your model should track at least one appropriate metric, and use at least two callbacks
  during training, one of which should be a ModelCheckpoint callback.
• As a guide, you should aim to achieve a final categorical cross-entropy training loss of less
  than 1.0 (the validation loss might be higher).
• Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and
  validation sets.
• Compute and display the loss and accuracy of the trained model on the test set.
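For reference, the categorical cross-entropy used as the training loss, for a one-hot label vector y and predicted class distribution ŷ, is

L(y, ŷ) = −Σ_{i=1}^{10} yᵢ log ŷᵢ,

averaged over the samples in a batch; a softmax output layer guarantees ŷ is a valid probability distribution.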

In [6]: # Grayscale the test images in the same way as the training images
        test_data_gray = np.average(test_data, axis=2)
        test_data_gray = test_data_gray[:, :, np.newaxis, :]

        # Move the sample axis first: (32, 32, 1, N) -> (N, 32, 32, 1), as Keras expects
        train_data_gray = np.transpose(train_data_gray, (3, 0, 1, 2))
        test_data_gray = np.transpose(test_data_gray, (3, 0, 1, 2))

        # SVHN uses label 10 for the digit 0; here the labels are left as 1..10
        # and the OneHotEncoder learns those ten categories directly.
        # train_labels[train_labels == 10] = 0
        # test_labels[test_labels == 10] = 0

        print(np.shape(train_labels))
        enc = OneHotEncoder().fit(train_labels)
        train_labels = enc.transform(train_labels).toarray()
        test_labels = enc.transform(test_labels).toarray()

        model = Sequential([Flatten(input_shape=[32, 32, 1]),
                            Dense(256, activation='relu'),
                            Dense(128, activation='relu'),
                            Dense(64, activation='relu'),
                            Dense(10, activation='softmax')
                           ])

        model.summary()
(73257, 1)
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 256) 262400
_________________________________________________________________
dense_1 (Dense) (None, 128) 32896
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 10) 650
=================================================================
Total params: 304,202
Trainable params: 304,202
Non-trainable params: 0
_________________________________________________________________

In [7]: model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                      loss='categorical_crossentropy', metrics=['accuracy'])

In [8]: t1 = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3)
        t2 = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, mode='min')
        checkpoints_best_only_path = 'checkpoints_best_only_MLP/checkpoint'
        checkpoints_best_only = ModelCheckpoint(checkpoints_best_only_path, save_weights_only=True,
                                                monitor='loss', save_best_only=True, verbose=1)

In [9]: history = model.fit(train_data_gray, train_labels, epochs=30, validation_split=0.2,
                            callbacks=[t1, t2, checkpoints_best_only])

Train on 58605 samples, validate on 14652 samples
Epoch 1/30  - loss: 6.5466 - accuracy: 0.1384 - loss improved, saving to checkpoints_best_only_MLP/checkpoint
Epoch 2/30  - loss: 2.1403 - accuracy: 0.2788 - loss improved, checkpoint saved
Epoch 3/30  - loss: 1.7479 - accuracy: 0.4179 - loss improved, checkpoint saved
Epoch 4/30  - loss: 1.5255 - accuracy: 0.5038 - loss improved, checkpoint saved
Epoch 5/30  - loss: 1.4037 - accuracy: 0.5500 - loss improved, checkpoint saved
Epoch 6/30  - loss: 1.3209 - accuracy: 0.5829 - loss improved, checkpoint saved
Epoch 7/30  - loss: 1.2465 - accuracy: 0.6111 - loss improved, checkpoint saved
Epoch 8/30  - loss: 1.2180 - accuracy: 0.6206 - loss improved, checkpoint saved
Epoch 9/30  - loss: 1.1639 - accuracy: 0.6389 - loss improved, checkpoint saved
Epoch 10/30 - loss: 1.1515 - accuracy: 0.6450 - loss improved, checkpoint saved
Epoch 11/30 - loss: 1.0878 - accuracy: 0.6678 - loss improved, checkpoint saved
Epoch 12/30 - loss: 1.0717 - accuracy: 0.6720 - loss improved, checkpoint saved
Epoch 13/30 - loss: 1.0337 - accuracy: 0.6842 - loss improved, checkpoint saved
Epoch 14/30 - loss: 1.0176 - accuracy: 0.6886 - loss improved, checkpoint saved
Epoch 15/30 - loss: 1.0009 - accuracy: 0.6930 - loss improved, checkpoint saved
Epoch 16/30 - loss: 0.9804 - accuracy: 0.7014 - loss improved, checkpoint saved
Epoch 17/30 - loss: 0.9603 - accuracy: 0.7067 - loss improved, checkpoint saved
Epoch 18/30 - loss: 0.9511 - accuracy: 0.7098 - loss improved, checkpoint saved
Epoch 19/30 - loss: 0.9445 - accuracy: 0.7099 - loss improved, checkpoint saved
Epoch 20/30 - loss: 0.9242 - accuracy: 0.7146 - loss improved, checkpoint saved
Epoch 21/30 - loss: 0.9177 - accuracy: 0.7178 - loss improved, checkpoint saved
Epoch 22/30 - loss: 0.9146 - accuracy: 0.7211 - loss improved, checkpoint saved
Epoch 23/30 - loss: 0.9017 - accuracy: 0.7236 - loss improved, checkpoint saved
Epoch 24/30 - loss: 0.8939 - accuracy: 0.7258 - loss improved, checkpoint saved
Epoch 25/30 - loss: 0.8913 - accuracy: 0.7281 - loss improved, checkpoint saved
Epoch 26/30 - loss: 0.8707 - accuracy: 0.7339 - loss improved, checkpoint saved
Epoch 27/30 - loss: 0.8744 - accuracy: 0.7328 - loss did not improve from 0.87074
(Training stopped early by the EarlyStopping callback after epoch 27.)

In [10]: plt.plot(history.history['loss'])
         plt.plot(history.history['val_loss'])
         plt.title('Loss vs. epochs')
         plt.ylabel('Loss')
         plt.xlabel('Epoch')
         plt.legend(['Training', 'Validation'], loc='upper right')
         plt.show()

In [11]: plt.plot(history.history['accuracy'])
         plt.plot(history.history['val_accuracy'])
         plt.title('Accuracy vs. epochs')
         plt.ylabel('Accuracy')
         plt.xlabel('Epoch')
         plt.legend(['Training', 'Validation'], loc='lower right')
         plt.show()

In [12]: test_history_mlp = model.evaluate(test_data_gray , test_labels , verbose=2)

26032/1 - 4s - loss: 0.9175 - accuracy: 0.6989

1.4 3. CNN neural network classifier


• Build a CNN classifier model using the Sequential API. Your model should use the Conv2D,
  MaxPool2D, BatchNormalization, Flatten, Dense and Dropout layers. The final layer should
  again have a 10-way softmax output.
• You should design and build the model yourself. Feel free to experiment with different
  CNN architectures. Hint: to achieve a reasonable accuracy you won’t need to use more than 2 or 3
  convolutional layers and 2 fully connected layers.
• The CNN model should use fewer trainable parameters than your MLP model.
• Compile and train the model (we recommend a maximum of 30 epochs), making use of both
  training and validation sets during the training run.
• Your model should track at least one appropriate metric, and use at least two callbacks
  during training, one of which should be a ModelCheckpoint callback.
• You should aim to beat the MLP model performance with fewer parameters!
• Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and
  validation sets.
• Compute and display the loss and accuracy of the trained model on the test set.

In [21]: model1 = Sequential([
             Conv2D(filters=16, kernel_size=3, padding='SAME', activation='relu',
                    input_shape=(32, 32, 1), name='conv_1'),
             BatchNormalization(),
             Dropout(0.2),
             Conv2D(filters=8, kernel_size=3, activation='relu', padding='SAME', name='conv_2'),
             MaxPooling2D(pool_size=(2, 2), name='pool_1'),
             Flatten(name='flatten'),
             Dense(16, activation='relu', name='dense_1'),
             BatchNormalization(),
             Dense(10, activation='softmax', name='dense_2'),
         ])

In [22]: model1.summary()
In [22]: model1.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv_1 (Conv2D) (None, 32, 32, 16) 160
_________________________________________________________________
batch_normalization_3 (Batch (None, 32, 32, 16) 64
_________________________________________________________________
dropout_2 (Dropout) (None, 32, 32, 16) 0
_________________________________________________________________
conv_2 (Conv2D) (None, 32, 32, 8) 1160
_________________________________________________________________
pool_1 (MaxPooling2D) (None, 16, 16, 8) 0
_________________________________________________________________
flatten (Flatten) (None, 2048) 0
_________________________________________________________________
dense_1 (Dense) (None, 16) 32784
_________________________________________________________________
batch_normalization_4 (Batch (None, 16) 64
_________________________________________________________________
dense_2 (Dense) (None, 10) 170
=================================================================
Total params: 34,402
Trainable params: 34,338
Non-trainable params: 64
_________________________________________________________________
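This satisfies the parameter budget: the CNN has 34,402 parameters in total against the MLP’s 304,202, roughly a ninth as many. The comparison can also be made programmatically (count_params() is a standard Keras model method):

print(model.count_params(), model1.count_params())  # 304202, 34402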

In [25]: model1.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                        loss='categorical_crossentropy', metrics=['accuracy'])


In [26]: t1 = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3)
         t2 = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, mode='min')
         checkpoints_best_only_path_CNN = 'checkpoints_best_only_CNN/checkpoint'
         checkpoints_best_only_CNN = ModelCheckpoint(checkpoints_best_only_path_CNN,
                                                     save_weights_only=True, monitor='loss',
                                                     save_best_only=True, verbose=1)
         history1 = model1.fit(train_data_gray, train_labels, epochs=5, validation_split=0.15,
                               callbacks=[t1, t2, checkpoints_best_only_CNN])
Train on 62268 samples, validate on 10989 samples
Epoch 1/5 - loss: 1.7666 - accuracy: 0.3628 - loss improved, saving to checkpoints_best_only_CNN/checkpoint
Epoch 2/5 - loss: 0.9422 - accuracy: 0.7011 - loss improved, checkpoint saved
Epoch 3/5 - loss: 0.7317 - accuracy: 0.7770 - loss improved, checkpoint saved
Epoch 4/5 - loss: 0.6625 - accuracy: 0.7987 - loss improved, checkpoint saved
Epoch 5/5 - loss: 0.6311 - accuracy: 0.8103 - loss improved, checkpoint saved

In [27]: plt.plot(history1.history['accuracy'])
         plt.plot(history1.history['val_accuracy'])
         plt.title('Accuracy vs. epochs')
         plt.ylabel('Accuracy')
         plt.xlabel('Epoch')
         plt.legend(['Training', 'Validation'], loc='lower right')
         plt.show()

In [28]: plt.plot(history1.history['loss'])
         plt.plot(history1.history['val_loss'])
         plt.title('Loss vs. epochs')
         plt.ylabel('Loss')
         plt.xlabel('Epoch')
         plt.legend(['Training', 'Validation'], loc='upper right')
         plt.show()

In [29]: test_history_cnn = model1.evaluate(test_data_gray , test_labels , verbose=2)

26032/1 - 51s - loss: 0.6687 - accuracy: 0.7851

1.5 4. Get model predictions


• Load the best weights for the MLP and CNN models that you saved during the training run
  (a sketch of restoring them follows this list).
• Randomly select 5 images and corresponding labels from the test set and display the images
  with their labels.
• Alongside the image and label, show each model’s predictive distribution as a bar chart, and
  the final model prediction given by the label with maximum probability.
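The cells below run predictions with the models as they stand after training; to address the first bullet explicitly, the best saved weights can be restored first. A minimal sketch, reusing the checkpoint path variables defined earlier in this notebook:

# Restore the best weights saved by the ModelCheckpoint callbacks above
model.load_weights(checkpoints_best_only_path)        # MLP: 'checkpoints_best_only_MLP/checkpoint'
model1.load_weights(checkpoints_best_only_path_CNN)   # CNN: 'checkpoints_best_only_CNN/checkpoint'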

In [30]: num_test_images = test_data_gray.shape[0]
         random_inx = np.random.choice(num_test_images, 5)

In [31]: random_test_images = test_data_gray[random_inx, ...]
         random_test_labels = test_labels[random_inx, ...]

         predictions_CNN = model1.predict(random_test_images)

         fig, axes = plt.subplots(5, 2, figsize=(20, 12))
         fig.subplots_adjust(hspace=0.4, wspace=-0.2)
         for i, (prediction_CNN, image, label) in enumerate(zip(predictions_CNN, random_test_images, random_test_labels)):
             axes[i, 0].imshow(np.squeeze(image), cmap='gray')
             axes[i, 0].get_xaxis().set_visible(False)
             axes[i, 0].get_yaxis().set_visible(False)
             # labels are one-hot over the classes 1..10, so recover the class value
             axes[i, 0].text(10., -3, f'Digit {np.argmax(label) + 1}')
             axes[i, 1].bar(np.arange(1, 11), prediction_CNN)
             axes[i, 1].set_xticks(np.arange(1, 11))
             axes[i, 1].set_title(f"Categorical distribution. Model prediction: {np.argmax(prediction_CNN) + 1}")
         plt.show()
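The rubric asks for each model’s predictive distribution, and the loop above plots only the CNN. A sketch of the analogous MLP pass over the same sampled images (reusing the names defined in the cells above):

# MLP predictive distributions for the same five test images
predictions_MLP = model.predict(random_test_images)

fig, axes = plt.subplots(5, 2, figsize=(20, 12))
for i, (p, image, label) in enumerate(zip(predictions_MLP, random_test_images, random_test_labels)):
    axes[i, 0].imshow(np.squeeze(image), cmap='gray')
    axes[i, 0].axis('off')
    axes[i, 0].text(10., -3, f'Digit {np.argmax(label) + 1}')
    axes[i, 1].bar(np.arange(1, 11), p)  # distribution over the ten classes 1..10
    axes[i, 1].set_title(f"MLP prediction: {np.argmax(p) + 1}")
plt.show()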

