Deep Learning with Applications Using Python: Chatbots and Face, Object, and Speech Recognition with TensorFlow and Keras, 1st Edition, by Navin Kumar Manaswi (ISBN 1484235169, 9781484235164)
Table of Contents

Optimizers ................................................. 25
Loss Function Examples ..................................... 26
Common Optimizers .......................................... 27
Metrics .................................................... 28
Metrics Examples ........................................... 28
Common Metrics ............................................. 29
Index ..................................................... 213
Foreword
Deep Learning has come a really long way, from the birth of the idea of understanding the human mind and the concept of associationism (how we perceive things and how the relationships among objects and views influence our thinking and doing) to the modeling of associationism, which began in the 1870s when Alexander Bain introduced the first concept of artificial neural networks by grouping neurons.
Fast-forward to 2018, and we see how Deep Learning has dramatically improved and is present in all walks of life: from object detection, speech recognition, machine translation, and autonomous vehicles to face detection, whose uses range from mundane tasks such as unlocking your iPhone X to more profound ones such as crime detection and prevention.
Convolutional Neural Networks and Recurrent Neural Networks are shining brightly as they continue to help solve problems in virtually every industry, such as automotive and transportation, healthcare and medicine, and retail, to name a few. Great progress is being made in these areas, and metrics like the following say enough about the palpability of the deep learning industry:
And finally, the error rate of image classification has dropped from 28% in 2012 to 2.5% in 2017, and it is going down all the time!
About the Author
Navin Kumar Manaswi has been developing
AI solutions with the use of cutting-edge
technologies and sciences related to artificial
intelligence for many years. Having worked for
consulting companies in Malaysia, Singapore,
and the Dubai Smart City project, as well
as his own company, he has developed a
rare mix of skills for delivering end-to-end
artificial intelligence solutions, including
video intelligence, document intelligence, and
human-like chatbots. Currently, he solves B2B problems in the verticals of
healthcare, enterprise applications, industrial IoT, and retail at Symphony
AI Incubator as a deep learning AI architect. With this book, he wants to democratize cognitive computing and services for everyone, especially developers, data scientists, software engineers, database engineers, data analysts, and C-level managers.
About the Technical Reviewer
Sundar Rajan Raman has more than 14 years
of full stack IT experience in machine
learning, deep learning, and natural language
processing. He has six years of big data
development and architect experience,
including working with Hadoop and
its ecosystems as well as other NoSQL
technologies such as MongoDB and
Cassandra. In fact, he has been the technical
reviewer of several books on these topics.
He is also interested in strategizing using Design Thinking principles
and coaching and mentoring people.
CHAPTER 1
Basics of TensorFlow
This chapter covers the basics of TensorFlow, the deep learning
framework. Deep learning does a wonderful job in pattern recognition,
especially in the context of images, sound, speech, language, and time-
series data. With the help of deep learning, you can classify, predict,
cluster, and extract features. Fortunately, in November 2015, Google
released TensorFlow, which has been used in most of Google’s products
such as Google Search, spam detection, speech recognition, Google
Assistant, Google Now, and Google Photos. Explaining the basic
components of TensorFlow is the aim of this chapter.
TensorFlow has a unique ability to perform partial subgraph computation, which enables distributed training by partitioning a neural network across devices. In other words, TensorFlow supports both model parallelism and data parallelism. TensorFlow provides multiple APIs. The lowest-level API, TensorFlow Core, provides you with complete programming control.
Tensors
Before you jump into the TensorFlow library, let's get comfortable with the basic unit of data in TensorFlow. A tensor is a mathematical object and a generalization of scalars, vectors, and matrices. A tensor can be represented as a multidimensional array. A tensor of rank (order) zero is nothing but a scalar. A vector/array is a tensor of rank 1, whereas a matrix is a tensor of rank 2.
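To make ranks concrete, here is a minimal sketch (TensorFlow 1.x style; illustrative, not from the original listing):

import tensorflow as tf

scalar = tf.constant(7)                 # rank 0: a scalar
vector = tf.constant([1, 2, 3])         # rank 1: a vector
matrix = tf.constant([[1, 2], [3, 4]])  # rank 2: a matrix

sess = tf.Session()
print(sess.run([tf.rank(scalar), tf.rank(vector), tf.rank(matrix)]))  # [0, 1, 2]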
So, the structure of a TensorFlow program has two phases: first you build the computational graph, and then you run it.
To actually evaluate the nodes, you must run the computational graph
within a session.
A session encapsulates the control and state of the TensorFlow runtime.
The following code creates a Session object:
sess = tf.Session()
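Here is a minimal sketch of the two phases together (assuming TensorFlow 1.x throughout, as the chapter does): build a small graph, then evaluate it in the session:

import tensorflow as tf

node1 = tf.constant(3.0)
node2 = tf.constant(4.0)
node3 = tf.add(node1, node2)  # builds the graph; nothing is computed yet

sess = tf.Session()
print(sess.run(node3))        # 7.0; evaluation happens inside the session
sess.close()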
Generally, you have to deal with many images in deep learning, so you have to hold pixel values for each image and keep iterating over all the images.
To train the model, you need to be able to modify the graph to tune objects such as weights and biases. In short, variables enable you to add trainable parameters to a graph. They are constructed with a type and an initial value.
Let's create a constant in TensorFlow and print it.
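A minimal sketch of such a constant (illustrative, not the book's exact listing):

import tensorflow as tf

x = tf.constant(12.0, dtype=tf.float32)  # a constant node; its value can never change

sess = tf.Session()
print(sess.run(x))  # 12.0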
Now you will explore how you create a variable and initialize it. Here is
the code that does it:
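A minimal sketch of variable creation and initialization (illustrative, not the book's exact listing):

import tensorflow as tf

w = tf.Variable([0.3], dtype=tf.float32)  # a trainable parameter with a type and initial value
init = tf.global_variables_initializer()  # variables must be initialized explicitly

sess = tf.Session()
sess.run(init)
print(sess.run(w))  # [0.3]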
Placeholders
A placeholder is a tensor whose value you feed in at a later time; it is meant to accept external inputs. Placeholders can have one or multiple dimensions, meant for storing n-dimensional arrays.
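A minimal sketch of feeding a 1D placeholder (illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])  # a 1D placeholder of any length
y = x * 2

sess = tf.Session()
print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0, 4.0]}))  # [2. 4. 6. 8.]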
You can also consider a 2D array in place of the 1D array. Here is the
code:
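A minimal sketch of the 2D version (illustrative): shape [None, 4] accepts any number of rows, each with four columns.

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4])
y = x * 2

sess = tf.Session()
data = [[1, 2, 3, 4], [5, 6, 7, 8]]  # a 2x4 matrix fed at run time
print(sess.run(y, feed_dict={x: data}))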
This is a 2×4 matrix, so if you replace None with 2, you see the same output. But if you create a placeholder of shape [3, 4] and then feed it a 2×4 matrix at run time, you get a shape-mismatch error because the input does not match the declared shape.
Constants are initialized when you call tf.constant, and their values
can never change. By contrast, variables are not initialized when you call
tf.Variable. To initialize all the variables in a TensorFlow program, you
must explicitly call a special operation as follows.
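In TensorFlow 1.x, that special operation is tf.global_variables_initializer(); a minimal sketch:

import tensorflow as tf

w = tf.Variable([0.5], dtype=tf.float32)
init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)  # after this, every tf.Variable holds its initial value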
Creating Tensors
An image is a third-order tensor whose dimensions are height, width, and number of channels (red, green, and blue).
Here you can see how an image is converted into a tensor:
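A sketch of the conversion (the file name and the use of matplotlib to read the image are assumptions for illustration):

import matplotlib.image as mp_image
import tensorflow as tf

image = mp_image.imread("bird.jpg")  # hypothetical file; yields a height x width x 3 array
x = tf.convert_to_tensor(image)      # a rank-3 tensor: (height, width, channels)
print(x.shape)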
Fixed Tensors
Here is a fixed tensor:
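A minimal sketch of common fixed tensors (illustrative):

import tensorflow as tf

zeros = tf.zeros([2, 3], tf.float32)  # 2x3 tensor of zeros
ones = tf.ones([2, 3], tf.float32)    # 2x3 tensor of ones
filled = tf.fill([2, 3], 7.0)         # 2x3 tensor filled with 7.0

sess = tf.Session()
print(sess.run(zeros), sess.run(ones), sess.run(filled))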
Sequence Tensors
tf.range creates a sequence of numbers starting from the specified value
and having a specified increment.
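A minimal sketch (illustrative):

import tensorflow as tf

seq = tf.range(start=1, limit=10, delta=2)  # 1, 3, 5, 7, 9
lin = tf.linspace(10.0, 13.0, 4)            # 10.0, 11.0, 12.0, 13.0, evenly spaced

sess = tf.Session()
print(sess.run(seq))
print(sess.run(lin))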
Random Tensors
tf.random_uniform generates random values from a uniform distribution within a given range.
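A minimal sketch (illustrative):

import tensorflow as tf

uniform = tf.random_uniform([2, 3], minval=0, maxval=4, seed=10)  # uniform draws in [0, 4)
normal = tf.random_normal([2, 3], mean=0.0, stddev=1.0)           # Gaussian draws

sess = tf.Session()
print(sess.run(uniform))
print(sess.run(normal))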
If you are not able to get the result, please review the previous portion, where I discuss the creation of tensors.
Working on Matrices
Once you are comfortable creating tensors, you can enjoy working on
matrices (2D tensors).
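A minimal sketch of basic matrix operations (illustrative):

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

product = tf.matmul(a, b)     # matrix multiplication
transposed = tf.transpose(a)  # swap rows and columns

sess = tf.Session()
print(sess.run(product))
print(sess.run(transposed))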
Activation Functions
The idea of an activation function comes from the analysis of how a
neuron works in the human brain (see Figure 1-1). The neuron becomes
active beyond a certain threshold, better known as the activation potential.
It also attempts to put the output into a small range in most cases.
Sigmoid, hyperbolic tangent (tanh), ReLU, and ELU are the most popular activation functions.
Let’s look at the popular activation functions.
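A minimal sketch comparing them on the same inputs (illustrative):

import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])

sess = tf.Session()
print(sess.run(tf.nn.sigmoid(x)))  # squashes values into (0, 1)
print(sess.run(tf.nn.tanh(x)))     # squashes values into (-1, 1)
print(sess.run(tf.nn.relu(x)))     # max(0, x)
print(sess.run(tf.nn.elu(x)))      # exp(x) - 1 for x < 0, identity for x >= 0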
ReLU and ELU
Figure 1-3 shows the ReLU and ELU functions.
ReLU6
ReLU6 is similar to ReLU except that the output can never exceed 6.
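A minimal sketch (illustrative):

import tensorflow as tf

x = tf.constant([-3.0, 2.0, 8.0])

sess = tf.Session()
print(sess.run(tf.nn.relu6(x)))  # [0. 2. 6.]; the output is capped at 6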
Loss Functions
The loss function (cost function) is to be minimized to get the best values for each parameter of the model. For example, you need to find the best values of the weight (slope) and bias (y-intercept) to explain the target (y) in terms of the predictor (X). The way to achieve the best values of the slope and y-intercept is to minimize the cost function/loss function/sum of squares. For any model, there are numerous parameters, and the model structure in prediction or classification is expressed in terms of the values of the parameters.
You need to evaluate your model, and for that you need to define the cost function (loss function). The minimization of the loss function can be the driving force for finding the optimum value of each parameter. For convenience, TensorFlow provides a number of ready-made loss functions under tf.contrib.losses, listed here:
tf.contrib.losses.absolute_difference
tf.contrib.losses.add_loss
tf.contrib.losses.hinge_loss
tf.contrib.losses.compute_weighted_loss
tf.contrib.losses.cosine_distance
tf.contrib.losses.get_losses
tf.contrib.losses.get_regularization_losses
tf.contrib.losses.get_total_loss
tf.contrib.losses.log_loss
tf.contrib.losses.mean_pairwise_squared_error
tf.contrib.losses.mean_squared_error
tf.contrib.losses.sigmoid_cross_entropy
tf.contrib.losses.softmax_cross_entropy
tf.contrib.losses.sparse_softmax_cross_entropy
Here is an example of calling one of these losses:

tf.contrib.losses.log_loss(predictions, labels, weight=2.0)
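As a concrete sketch, here is one of these losses evaluated on toy data (illustrative):

import tensorflow as tf

predictions = tf.constant([1.0, 2.0, 3.0])
labels = tf.constant([1.5, 2.0, 2.5])

loss = tf.contrib.losses.mean_squared_error(predictions, labels)  # mean of squared differences

sess = tf.Session()
print(sess.run(loss))  # 0.1666...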
Optimizers
Now you should be convinced that you need to use a loss function to
get the best value of each parameter of the model. How can you get the
best value?
Initially you assume starting values for the weights and biases of the model (linear regression, etc.). The optimizer is the mechanism for moving those values toward the best ones: in each iteration, each value changes in a direction suggested by the optimizer. Suppose you have 16 weight values (w1, w2,
w3, …, w16) and 4 biases (b1, b2, b3, b4). Initially you can assume every
weight and bias to be zero (or one or any number). The optimizer suggests
whether w1 (and other parameters) should increase or decrease in the
next iteration while keeping the goal of minimization in mind. After many
iterations, w1 (and other parameters) would stabilize to the best value
(or values) of parameters.
In other words, TensorFlow, and every other deep learning framework,
provides optimizers that slowly change each parameter in order to
minimize the loss function. The purpose of the optimizers is to give
direction to the weight and bias for the change in the next iteration.
Assume that you have 64 weights and 16 biases; you try to change the
weight and bias values in each iteration (during backpropagation) so that
you get the correct values of weights and biases after many iterations while
trying to minimize the loss function.
Selecting the best optimizer for the model to converge fast and to learn
weights and biases properly is a tricky task.
Adaptive techniques (Adadelta, Adagrad, etc.) are good optimizers for converging faster on complex neural networks. Adam is supposedly the best optimizer for most cases; it also outperforms other adaptive techniques but is computationally costly. For sparse data sets, methods such as SGD, NAG, and momentum are not the best options; the adaptive learning rate methods are. An additional benefit is that you won't need to adjust the learning rate and can likely achieve the best results with the default value.
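As a concrete sketch, here is gradient descent fitting a tiny linear model (illustrative, not the book's exact listing):

import tensorflow as tf

W = tf.Variable([0.3], dtype=tf.float32)
b = tf.Variable([-0.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

linear_model = W * x + b
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of squared errors

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)  # each step nudges W and b in the suggested direction

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(1000):
    sess.run(train, feed_dict={x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
print(sess.run([W, b]))  # approaches W = -1, b = 1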
Common Optimizers
The following is a list of common optimizers:
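The following sketch shows representative TensorFlow 1.x optimizer constructors (an illustrative reconstruction, not the book's verbatim list):

import tensorflow as tf

sgd = tf.train.GradientDescentOptimizer(learning_rate=0.01)
momentum = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
adagrad = tf.train.AdagradOptimizer(learning_rate=0.01)
adadelta = tf.train.AdadeltaOptimizer(learning_rate=0.01)
adam = tf.train.AdamOptimizer(learning_rate=0.001)
rmsprop = tf.train.RMSPropOptimizer(learning_rate=0.001)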
Metrics
Having learned some ways to build a model, it is time to evaluate the model; that is, you need to evaluate your regressor or classifier.
There are many evaluation metrics, among which classification accuracy, logarithmic loss, and area under the ROC curve are the most popular.
Classification accuracy is the ratio of the number of correct predictions to the number of all predictions. When the observations for each class are not heavily skewed, accuracy can be considered a good metric.
tf.contrib.metrics.accuracy(actual_labels, predictions)
Metrics Examples
This section shows code demonstrating the accuracy metric.
Here you create actual values (calling them x) and predicted values (calling them y). Then you check the accuracy. Accuracy represents the ratio of the number of times the actual values equal the predicted values to the total number of instances.
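A minimal sketch of that check (illustrative):

import tensorflow as tf

x = tf.constant([1, 0, 1, 1, 0])  # actual values
y = tf.constant([1, 1, 1, 0, 0])  # predicted values

acc = tf.contrib.metrics.accuracy(x, y)  # values match in 3 of 5 positions

sess = tf.Session()
print(sess.run(acc))  # 0.6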
Common Metrics
The following is a list of common metrics:
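These tf.contrib.metrics functions are representative (an illustrative reconstruction, not the book's verbatim list):

import tensorflow as tf

tf.contrib.metrics.accuracy             # fraction of correct predictions
tf.contrib.metrics.streaming_accuracy   # running accuracy over batches
tf.contrib.metrics.streaming_precision  # running precision
tf.contrib.metrics.streaming_recall     # running recall
tf.contrib.metrics.streaming_auc        # running area under the ROC curve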
CHAPTER 2
Understanding and Working with Keras
Keras is a compact and easy-to-learn high-level Python library for deep
learning that can run on top of TensorFlow (or Theano or CNTK). It
allows developers to focus on the main concepts of deep learning, such
as creating layers for neural networks, while taking care of the nitty-gritty
details of tensors, their shapes, and their mathematical details. TensorFlow
(or Theano or CNTK) has to be the back end for Keras. You can use Keras
for deep learning applications without interacting with the relatively
complex TensorFlow (or Theano or CNTK). There are two major kinds of framework: the sequential API and the functional API. The sequential API is based on the idea of a sequence of layers; it is the most common and easiest way to use Keras. The sequential model can be considered a linear stack of layers.
In short, you create a sequential model where you can easily add
layers, and each layer can have convolution, max pooling, activation, dropout, and batch normalization. Let's go through the major steps to develop deep learning models in Keras.
3. Fit the model with training data. Here you train the model on the training data by calling the fit() function on the model.
Load Data
Here is how you load data:
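A sketch assuming the CIFAR-10 data implied by the model description below (ten labels from airplane to truck, 3,072 inputs per image); the exact listing is an assumption:

from keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.reshape(x_train.shape[0], 3072).astype('float32') / 255  # 32 x 32 x 3 = 3,072
x_test = x_test.reshape(x_test.shape[0], 3072).astype('float32') / 255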
Define the Model
Sequential models in Keras are defined as a sequence of layers. You
create a sequential model and then add layers. You need to ensure the
input layer has the right number of inputs. Assume that you have 3,072
input variables; then you need to create the first hidden layer with 512
nodes/neurons. In the second hidden layer, you have 120 nodes/neurons.
Finally, you have ten nodes in the output layer. For example, an image maps onto ten nodes that show the probability of its being label1 (airplane), label2 (automobile), label3 (cat), …, label10 (truck). The node with the highest probability gives the predicted class/label.
One image has three channels (RGB), and in each channel, the
image has 32×32 = 1024 pixels. So, each image has 3×1024 = 3072 pixels
(features/X/inputs).
With the help of these 3,072 features, you need to predict the probability of label1 (airplane), label2 (automobile), and so on. This means the model predicts ten outputs, where each output represents the probability of the corresponding label. The last activation function (sigmoid, as shown earlier) gives 0 for nine outputs and 1 for only one output. That label is the predicted class for the image (Figure 2-1).
For example, 3,072 features ➤ 512 nodes ➤ 120 nodes ➤ 10 nodes.
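A minimal sketch of that architecture (illustrative; the initializer and activations follow the description above):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(512, input_dim=3072, kernel_initializer='uniform', activation='relu'))
model.add(Dense(120, kernel_initializer='uniform', activation='relu'))
model.add(Dense(10, kernel_initializer='uniform', activation='sigmoid'))  # one probability per label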
The next question is, how do you know the number of layers to use and their types? No one has an exact answer. You decide the optimum number of layers, along with the parameters and steps in each layer, by what performs best on your evaluation metrics. A heuristic approach is also used: the best network structure is found through a process of trial-and-error experimentation. Generally, you need a network large enough to capture the structure of the problem.
In this example, you will use a fully connected network structure with three layers. The Dense class defines fully connected layers.
In this case, you initialize the network weights to small random numbers generated from a uniform distribution (uniform), here between 0 and 0.05, because that is the default uniform weight initialization in Keras. Another traditional alternative is normal, which generates small random numbers from a Gaussian distribution. The sigmoid output is then snapped to a hard classification of either class with a default threshold of 0.5. You can piece it all together by adding each layer.
In short, this step is aimed at tuning the weights and biases over many iterations: the loss function drives the updates, the optimizer chooses their direction, and metrics such as accuracy evaluate the result.
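A sketch of that tuning step (the loss, optimizer, and epoch count are illustrative choices, not the book's verbatim listing):

from keras.utils import to_categorical

y_train_cat = to_categorical(y_train, 10)  # one-hot labels for the ten classes

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train_cat, epochs=20, batch_size=128, verbose=1)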