Deep Learning With Keras - Quick Guide
Deep Learning essentially means training an Artificial Neural Network (ANN) with a huge amount
of data. In deep learning, the network learns by itself and thus requires humongous amounts of data
for learning. Traditional machine learning, in contrast, is essentially a set of algorithms that parse
data, learn from it, and then use that learning to make intelligent decisions.
Now, coming to Keras, it is a high-level neural networks API that runs on top of TensorFlow, an
end-to-end open source machine learning platform. Using Keras, you can easily define complex ANN
architectures to experiment with on your big data. Keras also supports GPUs, which become essential
for processing huge amounts of data and developing machine learning models.
In this tutorial, you will learn the use of Keras in building deep neural networks. We shall look at
practical examples for teaching. The problem at hand is recognizing handwritten digits using
a neural network that is trained with deep learning.
Just to get you more excited about deep learning, below is a screenshot of Google Trends for deep
learning −
As you can see from the diagram, interest in deep learning has been growing steadily over the last
several years. There are many areas such as computer vision, natural language processing,
speech recognition, bioinformatics, drug design, and so on, where deep learning has been
successfully applied. This tutorial will get you quickly started on deep learning.
So keep reading!
Neural Networks
The idea of an artificial neural network was derived from the neural networks in our brain. A typical
neural network consists of three layers: input, output and hidden layer, as shown in the picture
below.
This is also called a shallow neural network, as it contains only one hidden layer. You can add more
hidden layers to the above architecture to create a more complex architecture.
Deep Networks
The following diagram shows a deep network consisting of four hidden layers, an input layer and
an output layer.
As more hidden layers are added to the network, its training becomes more complex in
terms of the resources required and the time it takes to fully train the network.
Network Training
After you define the network architecture, you train it for making certain kinds of predictions.
Training a network is the process of finding the proper weights for each link in the network. During
training, the data flows from the input to the output layer through the various hidden layers. As the
data always moves in one direction, from input to output, we call this network a feed-forward network
and we call the data propagation forward propagation.
Activation Function
At each layer, we calculate the weighted sum of the inputs and feed it to an activation function. The
activation function brings nonlinearity into the network; it is a mathematical function applied to the
weighted sum that determines the node's output. Some of the most commonly used activation
functions are sigmoid, hyperbolic tangent (tanh), ReLU and softmax.
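To make these functions concrete, the short sketch below (an illustrative addition, not part of the original text) evaluates them with NumPy on a small vector of weighted sums −
import numpy as np

def sigmoid(x):
    # Squashes values into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def relu(x):
    # Zeroes out negative values, passes positive values through
    return np.maximum(0, x)

def softmax(x):
    # Converts a vector of scores into a probability distribution
    e = np.exp(x - np.max(x))
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])   # example weighted sums
print(sigmoid(z), np.tanh(z), relu(z), softmax(z))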
Backpropagation
Backpropagation is an algorithm for supervised learning. In backpropagation, the errors
propagate backwards from the output to the input layer. Given an error function, we calculate the
gradient of the error function with respect to the weights assigned to each connection. The
calculation of the gradient proceeds backwards through the network: the gradient of the final
layer of weights is calculated first and the gradient of the first layer of weights is calculated last.
At each layer, the partial computations of the gradient are reused in the computation of the
gradient for the previous layer. The weights are then adjusted in the direction that reduces the
error; this update rule is called gradient descent.
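As a minimal illustration of gradient descent (not taken from the original text), the toy sketch below repeatedly updates a single weight to fit one data point using the gradient of a squared-error loss −
# Toy example: fit y = w * x to a single data point (x = 2, y = 4)
x, y = 2.0, 4.0
w = 0.0                               # initial weight
learning_rate = 0.1

for step in range(5):
    y_pred = w * x                    # forward propagation
    error = y_pred - y                # prediction error
    gradient = 2 * error * x          # derivative of (y_pred - y)**2 with respect to w
    w = w - learning_rate * gradient  # gradient descent update
    print(step, round(w, 4), round(error ** 2, 4))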
In this project-based tutorial you will define a feed-forward deep neural network and train it with
backpropagation and gradient descent techniques. Luckily, Keras provides us with all the high-level
APIs for defining the network architecture and training it using gradient descent. Next, you will learn
how to do this in Keras.
Project Description
The MNIST dataset consists of 70000 images of handwritten digits. A few sample images are
reproduced here for your reference.
Each image is of size 28 x 28 pixels, making a total of 784 pixels of various gray scale levels.
Most of the pixels tend towards the black shade, while only a few of them are towards white. We will
put the distribution of these pixels in an array or a vector. For example, the distribution of pixels for a
typical image of the digits 4 and 5 is shown in the figure below.
Clearly, you can see that the distribution of the pixels (especially those tending towards the white
tone) differs, and this distinguishes the digits they represent. We will feed this distribution of 784 pixel
values to our network as its input. The output of the network will consist of 10 categories representing
a digit between 0 and 9.
Our network will consist of 4 layers — one input layer, one output layer and two hidden layers.
Each hidden layer will contain 512 nodes. Each layer is fully connected to the next layer. When
we train the network, we will be computing the weights for each connection. We train the network
by applying backpropagation and gradient descent that we discussed earlier.
Setting Up Project
We will use Jupyter through Anaconda Navigator for our project. As our project uses TensorFlow
and Keras, you will need to install those in your Anaconda setup. To install TensorFlow, run the
following command in your console window −
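The exact command is not reproduced in the source; inside an Anaconda environment, either of the following is a typical way to install the packages (assumption) −
# Typical installation inside an Anaconda environment (either works):
conda install -c conda-forge tensorflow keras
# or, using pip:
pip install tensorflow keras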
Starting Jupyter
When you start Anaconda Navigator, you will see the following opening screen.
Click ‘Jupyter’ to start it. The screen will show the existing projects, if any, on your drive.
The screenshot of the menu selection is shown for your quick reference −
In your new notebook, start by importing the NumPy and Matplotlib packages −
import numpy as np
import matplotlib
import matplotlib.pyplot as plot
Suppressing Warnings
As both TensorFlow and Keras keep on being revised, if you do not sync their appropriate versions in
the project, you will see plenty of warning messages at runtime. As they distract your attention from
learning, we shall suppress all warnings in this project. This is done with the following
lines of code −
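The exact statements are not shown in the source; a common way to silence these warnings (an assumption) is −
import os
import warnings

# Hide Python-level warnings and TensorFlow's lower-level log messages (assumed approach)
warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'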
Keras
We use the Keras libraries to import the dataset. We will use the MNIST dataset of handwritten
digits. We import the required package using the following statement −
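The import statement itself is not reproduced in the source; with the Keras MNIST loader it would be −
from keras.datasets import mnist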
We will be defining our deep learning neural network using Keras packages. We import the
Sequential, Dense, Dropout and Activation packages for defining the network architecture. We
use the load_model package for saving and retrieving our model. We also use np_utils for a few
utilities that we need in our project. These imports are done with the following program
statements −
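The statements themselves are missing from the source; based on the packages named above, they would presumably be (module paths assume the classic standalone Keras layout) −
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Activation
from keras.utils import np_utils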
When you run this code, you will see a message on the console saying that Keras uses
TensorFlow as its backend. The screenshot at this stage is shown here −
Now, as we have all the imports required by our project, we will proceed to define the architecture
for our deep learning network. We start with an empty Sequential model and add layers to it one
by one −
model = Sequential()
Input Layer
We define the input layer, which is the first layer in our network, using the following program
statement −
model.add(Dense(512, input_shape=(784,)))
This creates a layer with 512 nodes (neurons) that receives input from 784 input nodes. This is
depicted in the figure below −
Note that all the input nodes are fully connected to Layer 1, that is, each input node is
connected to all 512 nodes of Layer 1.
Next, we need to add the activation function for the output of Layer 1. We will use ReLU as our
activation. The activation function is added using the following program statement −
model.add(Activation('relu'))
Next, we add a dropout of 20% using the statement below. Dropout is a technique used to prevent
the model from overfitting.
model.add(Dropout(0.2))
At this point, our input layer is fully defined. Next, we will add a hidden layer.
Hidden Layer
Our hidden layer will consist of 512 nodes. The input to the hidden layer comes from our
previously defined input layer. All the nodes are fully connected as in the earlier case. The output
of the hidden layer will go to the next layer in the network, which is going to be our final and
output layer. We will use the same ReLU activation as for the previous layer and a dropout of
20%. The code for adding this layer is given here −
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
Next, we will add the final layer to our network, which is the output layer. Note that you may add
any number of hidden layers using code similar to what you have used here. Adding more layers
makes the network more complex to train; however, it often, though not always, gives the definite
advantage of better results.
Output Layer
The output layer consists of just 10 nodes, as we want to classify the given images into 10 distinct
digits. We add this layer using the following statement −
model.add(Dense(10))
As we want to classify the output into 10 distinct classes, we use the softmax activation, which
converts the outputs into a probability distribution over those classes; ReLU, in contrast, would not
produce such a distribution. We add the activation using the following statement −
model.add(Activation('softmax'))
At this point, our network can be visualized as shown in the diagram below −
At this point, our network model is fully defined in the software. Run the code cell and if there are
no errors, you will get a confirmation message on the screen as shown in the screenshot below −
Before training, the model must be compiled. The compile method requires several parameters: the
loss parameter is set to 'categorical_crossentropy', the metrics parameter is set to 'accuracy', and
we use the adam optimizer for training the network.
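The compile call itself is not reproduced in the source; with the parameters just described, it would look like this sketch −
model.compile(loss='categorical_crossentropy',
   metrics=['accuracy'],
   optimizer='adam')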
Loading Data
As said earlier, we will use the MNIST dataset provided by Keras. When we load the data into our
system, we will split it into training and test data. The data is loaded by calling the load_data
method as follows −
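The call is not shown in the source; with the Keras MNIST loader, and using the variable names that appear later in the tutorial, it would be −
(X_train, y_train), (X_test, y_test) = mnist.load_data()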
The data provided to us consists of graphic images of size 28 x 28 pixels, each containing a
single digit between 0 and 9. We will display the first ten images on the console. The code for
doing so is given below −
for i in range(10):
   plot.subplot(3,5,i+1)
   plot.tight_layout()
   plot.imshow(X_train[i], cmap='gray', interpolation='none')
   plot.title("Digit: {}".format(y_train[i]))
   plot.xticks([])
   plot.yticks([])
In an iterative loop of 10 counts, we create a subplot on each iteration and show an image from the
X_train vector in it. We title each image with the corresponding label from the y_train vector. Note
that the y_train vector contains the actual values for the corresponding images in the X_train vector.
We remove the x and y axis markings by calling the two methods xticks and yticks with an empty list
as the argument. When you run the code, you will see the following output −
Now, our training vector will consist of 60000 data points, each a single-dimension vector of size
784. Similarly, our test vector will consist of 10000 data points, each a single-dimension vector of
size 784.
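The reshaping statements are not reproduced in the source; a sketch that produces the shapes described above is −
# Flatten each 28 x 28 image into a vector of 784 values (assumed step, matching the text above)
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)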
Normalizing Data
The data in the input vector currently has discrete values between 0 and 255 - the gray scale
levels. Normalizing these pixel values to the range 0 to 1 helps in speeding up the training. As we
are going to use stochastic gradient descent, normalizing the data will also help in reducing the
chance of getting stuck in local optima.
To normalize the data, we convert it to float type and divide it by 255, as shown in the following
code snippet −
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
plot.hist(X_train[0])
plot.title("Digit: {}".format(y_train[0]))
Here, we plot the histogram of the first element of the X_train vector. We also print the digit
represented by this data point. The output of running the above code is shown here −
You will notice a thick density of points having values close to zero. These are the black points
in the image, which obviously form the major portion of the image. The rest of the gray scale points,
which are close to the white color, represent the digit. You may check out the distribution of pixels for
another digit. The code below prints the histogram of the digit at index 2 in the training dataset.
plot.hist(X_train[2])
plot.title("Digit: {}".format(y_train[2]))
Comparing the above two figures, you will notice that the distribution of the white pixels in the two
images differs, indicating the representation of different digits - “5” and “4” in the above two pictures.
Next, we will examine the distribution of data in our full training dataset.
Use the following command to print the number of unique values and the number of occurrences
of each one −
print(np.unique(y_train, return_counts=True))
When you run the above command, you will see the following output −
It shows that there are 10 distinct values — 0 through 9. There are 5923 occurrences of digit 0,
6742 occurrences of digit 1, and so on. The screenshot of the output is shown here −
Encoding Data
We have ten categories in our dataset. We will thus encode our output into these ten categories
using one-hot encoding. We use the to_categorical method of the np_utils module to perform the
encoding. After the output data is encoded, each data point is converted into a single-dimensional
vector of size 10. For example, the digit 5 will now be represented as [0,0,0,0,0,1,0,0,0,0].
n_classes = 10
Y_train = np_utils.to_categorical(y_train, n_classes)
You may check out the result of encoding by printing the first 5 elements of the categorized
Y_train vector.
for i in range(5):
   print(Y_train[i])
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
The first element represents digit 5, the second represents digit 0, and so on.
Finally, you will have to categorize the test data too, which is done using the following statement −
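The statement is not reproduced in the source; by analogy with the training data above it would be −
Y_test = np_utils.to_categorical(y_test, n_classes)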
At this stage, your data is fully prepared for feeding into the network.
Next comes the most important part: training our network model.
The first two parameters to the fit method specify the features and the outputs of the training
dataset.
The epochs parameter is set to 20; we assume that the training will converge in a maximum of 20
epochs (iterations). The trained model is validated on the test data, as specified in the last parameter.
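The fit call itself is not reproduced in the source; a sketch consistent with this description (the batch size and verbosity setting are assumptions) is −
history = model.fit(X_train, Y_train,
   batch_size=128,        # assumed batch size
   epochs=20,
   verbose=2,
   validation_data=(X_test, Y_test))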
The screenshot of the output is given below for your quick reference −
Now that the model is trained on our training data, we will evaluate its performance.
We will print the loss and accuracy using the following statements −
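These statements are not reproduced in the source; a sketch of a typical way to evaluate and print the metrics (variable names are assumptions) is −
loss_and_metrics = model.evaluate(X_test, Y_test, verbose=2)
print("Test Loss", loss_and_metrics[0])
print("Test Accuracy", loss_and_metrics[1])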
When you run the above statements, you would see the following output −
This shows a test accuracy of 98%, which should be acceptable to us. It means that in 2% of the
cases, the handwritten digits would not be classified correctly. We will also plot the accuracy and
loss metrics to see how the model performs on the test data.
plot.subplot(2,1,1)
plot.plot(history.history['acc'])
plot.plot(history.history['val_acc'])
plot.title('model accuracy')
plot.ylabel('accuracy')
plot.xlabel('epoch')
plot.legend(['train', 'test'], loc='lower right')
As you can see in the diagram, the accuracy increases rapidly in the first two epochs, indicating
that the network is learning fast. Afterwards, the curve flattens, indicating that not too many more
epochs are required to train the model further. Generally, if the training data accuracy (“acc”)
keeps improving while the validation data accuracy (“val_acc”) gets worse, you are encountering
overfitting; it indicates that the model is starting to memorize the data.
We will also plot the loss metrics to check our model’s performance.
plot.subplot(2,1,2)
plot.plot(history.history['loss'])
plot.plot(history.history['val_loss'])
plot.title('model loss')
plot.ylabel('loss')
plot.xlabel('epoch')
plot.legend(['train', 'test'], loc='upper right')
As you can see in the diagram, the loss on the training set decreases rapidly for the first two
epochs. For the test set, the loss does not decrease at the same rate as the training set, but
remains almost flat for multiple epochs. This means our model is generalizing well to unseen
data.
Now, we will use our trained model to predict the digits in our test data.
predictions = model.predict_classes(X_test)
The method call returns the predictions in a vector of digit labels that can be compared against the
actual values. This is done using the following two statements −
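The two statements are not reproduced in the source; a sketch consistent with the description (the variable names are assumptions) is −
correct_indices = np.nonzero(predictions == y_test)[0]     # indices of correctly classified digits
incorrect_indices = np.nonzero(predictions != y_test)[0]   # indices of misclassified digits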
Finally, we will print the count of correct and incorrect predictions using the following two program
statements −
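These print statements are also missing from the source; assuming the index vectors sketched above, they would be −
print(len(correct_indices), " classified correctly")
print(len(incorrect_indices), " classified incorrectly")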
When you run the code, you will get the following output −
Now, as you have satisfactorily trained the model, we will save it for future use.
directory = "./models/"
name = 'handwrittendigitrecognition.h5'
path = os.path.join(save_dir, name)
model.save(path)
print('Saved trained model at %s ' % path)
Now, as you have saved a trained model, you may use it later on for processing your unknown
data.
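The loading statement is not shown in the source; a sketch using the path from the save step above would be −
# Reload the saved model into memory (path assumed to match the one used when saving)
model = load_model("./models/handwrittendigitrecognition.h5")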
Note that we are simply loading the .h5 file into memory. This sets up the entire neural network in
memory along with the weights assigned to each layer.
Now, to do your predictions on unseen data, load the data, be it one or more items, into
memory. Preprocess the data to meet the input requirements of our model, just as you did for
your training and test data above. After preprocessing, feed it to your network. The model will
output its prediction.