Chest Cancer - 90.8 On Test Data Set Code
The outer for loop iterates over the first dimension of the sample array, and the inner
for loop iterates over the second dimension. For each iteration, a subplot is added to the
figure using the fig.add_subplot() method, with the xticks and yticks arguments set to []
to remove the x- and y-axis ticks. The subplot is then filled with an image using the
ax.imshow() method. The image is obtained from sample[i, m] and passed through
np.squeeze() to remove single-dimensional entries from the shape of the array. The shape
of each image is also appended to a list called shapes.
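A minimal sketch of this plotting loop, assuming sample is a grid-shaped array of images
(a random placeholder array stands in for the real data here):

import numpy as np
import matplotlib.pyplot as plt

# Placeholder for `sample`: a 3x4 grid of single-channel 64x64 images (assumed shape).
sample = np.random.rand(3, 4, 64, 64, 1)

shapes = []
fig = plt.figure(figsize=(8, 6))
rows, cols = sample.shape[0], sample.shape[1]
for i in range(rows):
    for m in range(cols):
        # One subplot per image; empty tick lists remove the axis ticks.
        ax = fig.add_subplot(rows, cols, i * cols + m + 1, xticks=[], yticks=[])
        img = np.squeeze(sample[i, m])      # drop single-dimensional entries
        ax.imshow(img, cmap='gray')
        shapes.append(img.shape)            # record each image's shape
plt.show()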
Code Description Cell 07
This code creates three Image Data Generators for a
machine learning task that involves images. The first
two, train_datagen and valid_datagen, are used to
preprocess the training and validation data,
respectively. The preprocessing steps include
converting the datatype to float32 and rescaling the
pixel values so that they lie between 0 and 1. The third
Image Data Generator, test_datagen, is used to
preprocess the test data.
The flow_from_directory method is then used to
generate batches of image data from the respective
directories for each Image Data Generator. The
batch_size argument specifies the number of images
in each batch, and the target_size argument specifies
the size that the images should be resized to (305x430
pixels in this case). The class_mode argument is set to
'categorical' since the images belong to multiple
classes.
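A minimal sketch of this generator setup; the directory paths and batch size are
assumptions, while the 305x430 target size and categorical class mode follow the
description above:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

BATCH_SIZE = 32                      # assumed batch size
IMG_SIZE = (305, 430)                # resize target from the description

train_datagen = ImageDataGenerator(dtype='float32', rescale=1.0 / 255)
valid_datagen = ImageDataGenerator(dtype='float32', rescale=1.0 / 255)
test_datagen = ImageDataGenerator(dtype='float32', rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    'data/train',                    # assumed directory layout
    batch_size=BATCH_SIZE,
    target_size=IMG_SIZE,
    class_mode='categorical')

valid_generator = valid_datagen.flow_from_directory(
    'data/valid',
    batch_size=BATCH_SIZE,
    target_size=IMG_SIZE,
    class_mode='categorical')

test_generator = test_datagen.flow_from_directory(
    'data/test',
    batch_size=BATCH_SIZE,
    target_size=IMG_SIZE,
    class_mode='categorical',
    shuffle=False)                   # keep order for evaluation on the test set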
Code Description Cell 08
This code defines a neural network using the Keras library.
The network is created using the Sequential model and is
composed of several layers:
• The first layer is a Conv2D layer with 8 filters, a kernel
size of 2, a padding of 'same', and a ReLU activation
function. This layer serves as a feature extraction layer,
where the filters are applied to the input image to detect
various features in the image.
• The second layer is a MaxPooling2D layer with a pool
size of 2. This layer serves as a down-sampling layer,
where the spatial resolution of the feature maps is
reduced to half its original size.
• The third and fourth layers are another Conv2D and
MaxPooling2D layer, respectively, with the same
properties as the first two layers but with 16 filters
instead of 8 (a code sketch of this stack follows the list).
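A minimal sketch of the convolution/pooling stack listed above, assuming an input shape
of 305x430x3 to match the generators' target size:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D

model = Sequential([
    # Feature extraction: 8 filters, 2x2 kernel, 'same' padding, ReLU activation.
    Conv2D(8, kernel_size=2, padding='same', activation='relu',
           input_shape=(305, 430, 3)),
    # Down-sampling: halves the spatial resolution of the feature maps.
    MaxPooling2D(pool_size=2),
    # Same pattern again, with 16 filters instead of 8.
    Conv2D(16, kernel_size=2, padding='same', activation='relu'),
    MaxPooling2D(pool_size=2),
])
model.summary()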
After adding the ResNet50 model, a dropout layer with a dropout rate of 0.6 is
added using the Dropout function. Dropout is a regularization technique used
to prevent overfitting in deep learning models. During training, the dropout
layer randomly sets the given fraction (here 60%) of its input units to 0, which
helps prevent the model from relying too heavily on any single feature.
Next, a Flatten layer is added, which reshapes the output of the previous
layer into a 1D tensor. This is followed by a Batch Normalization layer, which
normalizes the activations of the previous layer by subtracting the batch mean
and dividing by the batch standard deviation.
The Batch Normalization layer helps to improve the stability and speed of
training. Another dropout layer with a dropout rate of 0.6 is added, followed by
a dense (fully connected) layer with N_CLASSES output units and a softmax
activation function. The softmax activation function is used for multi-class
classification problems, as it returns a probability distribution over the
N_CLASSES classes, where the sum of all probabilities is 1.
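A minimal sketch of the ResNet50-based classification head described above; the value of
N_CLASSES, the use of ImageNet weights, and freezing the base are assumptions not stated
in the description:

from tensorflow.keras import Sequential
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import BatchNormalization, Dense, Dropout, Flatten

N_CLASSES = 4                                   # assumed number of classes

base = ResNet50(include_top=False, weights='imagenet',
                input_shape=(305, 430, 3))      # input size assumed from the generators
base.trainable = False                          # assumed: base kept frozen

model = Sequential([
    base,
    Dropout(0.6),            # regularization: randomly zero 60% of the units
    Flatten(),               # reshape the feature maps into a 1D tensor
    BatchNormalization(),    # normalize activations for stable, faster training
    Dropout(0.6),
    Dense(N_CLASSES, activation='softmax'),     # probability distribution over classes
])
model.summary()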