Deep Learning in
Computer Vision
Digital Imaging and Computer Vision Series
Series Editor
Rastislav Lukac
Foveon, Inc./Sigma Corporation San Jose, California, U.S.A.
Dermoscopy Image Analysis
by M. Emre Celebi, Teresa Mendonça, and Jorge S. Marques
Semantic Multimedia Analysis and Processing
by Evaggelos Spyrou, Dimitris Iakovidis, and Phivos Mylonas
Microarray Image and Data Analysis: Theory and Practice
by Luis Rueda
Perceptual Digital Imaging: Methods and Applications
by Rastislav Lukac
Image Restoration: Fundamentals and Advances
by Bahadir Kursat Gunturk and Xin Li
Image Processing and Analysis with Graphs: Theory and Practice
by Olivier Lézoray and Leo Grady
Visual Cryptography and Secret Image Sharing
by Stelvio Cimato and Ching-Nung Yang
Digital Imaging for Cultural Heritage Preservation: Analysis,
Restoration, and Reconstruction of Ancient Artworks
by Filippo Stanco, Sebastiano Battiato, and Giovanni Gallo
Computational Photography: Methods and Applications
by Rastislav Lukac
Super-Resolution Imaging
by Peyman Milanfar
Deep Learning in
Computer Vision
Principles and Applications
Edited by
Mahmoud Hassaballah and Ali Ismail Awad
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have
been made to publish reliable data and information, but the author and publisher cannot assume responsibility
for the validity of all materials or the consequences of their use. The authors and publishers have attempted to
trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if
permission to publish in this form has not been obtained. If any copyright material has not been acknowledged
please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system, with-
out written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com
(http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive,
Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration
for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate
system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Foreword
Deep learning, while it has multiple definitions in the literature, can be defined as
“inference of model parameters for decision making in a process mimicking the
understanding process in the human brain”; or, in short: “brain-like model
identification”. We can say that deep learning is a way of data inference in machine
learning, and the two together are among the main tools of modern artificial
intelligence. Novel technologies away from traditional academic research have fueled
R&D in convolutional neural networks (CNNs); companies like Google, Microsoft,
and Facebook ignited the “art” of data manipulation, and the term “deep learning”
became almost synonymous with decision making.
Various CNN structures have been introduced and invoked in many computer
vision-related applications, with greatest success in face recognition, autonomous
driving, and text processing. The reality is: deep learning is an art, not a science.
This state of affairs will remain until its developers develop the theory behind its
functionality, which would lead to “cracking its code” and explaining why it works,
and how it can be structured as a function of the information gained with data. In
fact, with deep learning, there is good and bad news. The good news is that the indus-
try—not necessarily academia—has adopted it and is pushing its envelope. The bad
news is that the industry does not share its secrets. Indeed, industries are never inter-
ested in procedural and textbook-style descriptions of knowledge.
This book, Deep Learning in Computer Vision: Principles and Applications—as
a journey in the progress made through deep learning by academia—confines itself
to deep learning for computer vision, a domain that studies sensory information
used by computers for decision making, and has had its impacts and drawbacks for
nearly 60 years. Computer vision has been and continues to be a system: sensors,
computer, analysis, decision making, and action. This system takes various forms
and the flow of information within its components does not necessarily proceed in tandem. The
linkages between computer vision and machine learning, and between it and
artificial intelligence, are very fuzzy, as is the linkage between computer vision and
deep learning. Computer vision has moved forward, showing amazing progress in
its short history. During the sixties and seventies, computer vision dealt mainly with
capturing and interpreting optical data. In the eighties and nineties, geometric com-
puter vision added science (geometry plus algorithms) to computer vision. During
the first decade of the new millennium, modern computing contributed to the evolu-
tion of object modeling using multimodality and multiple imaging. By the end of
that decade, a lot of data became available, and so the term “deep learning” crept
into computer vision, as it did into machine learning, artifcial intelligence, and other
domains.
This book shows that traditional applications in computer vision can be solved
through invoking deep learning. The applications addressed and described in the
eleven different chapters have been selected in order to demonstrate the capabilities
of deep learning algorithms to solve various issues in computer vision. The content
of this book has been organized such that each chapter can be read independently
of the others. Chapters of the book cover the following topics: accelerating the CNN
inference on field-programmable gate arrays, fire detection in surveillance
applications, face recognition, action and activity recognition, semantic segmentation for
autonomous driving, aerial imagery registration, robot vision, tumor detection, and
skin lesion segmentation as well as skin melanoma classification.
From the assortment of approaches and applications in the eleven chapters, the
common thread is that CNN identification through deep learning provides better
accuracy than traditional approaches. This accuracy is attributed to the flexibility of
CNNs and the availability of large data to enable identification through the deep
learning strategy. I expect the content of this book to be welcomed worldwide by
graduate and postgraduate students and workers in computer vision, including
practitioners in academia and industry. Additionally, professionals who want to explore
the advances in concepts and implementation of deep learning algorithms applied to
computer vision may find in this book an excellent guide for that purpose. Finally,
I hope that readers will find the chapters presented in this book interesting and
inspiring for future research, from both theoretical and practical viewpoints, to spur
further advances in discovering the secrets of deep learning.
Preface
Mahmoud Hassaballah
Qena, Egypt
Contributors
Ahmad El Sallab
Valeo Company
Cairo, Egypt

Ahmed Nassar
IRISA Institute
Rennes, France

Alaa S. Al-Waisy
University of Bradford
Bradford, UK

Ali Ismail Awad
Luleå University of Technology
Luleå, Sweden
and
Al-Azhar University
Qena, Egypt

Amin Ullah
Sejong University
Seoul, South Korea

Ashraf A. M. Khalaf
Minia University
Minia, Egypt

François Berry
University Clermont Auvergne
Clermont-Ferrand, France

Guanghui Wang
University of Kansas
Kansas City, Kansas

Hesham F.A. Hamed
Egyptian Russian University
Cairo, Egypt
and
Minia University
Minia, Egypt

Javier Ruiz-Del-Solar
University of Chile
Santiago, Chile

Kaidong Li
University of Kansas
Kansas City, Kansas

Kamel Abdelouahab
Clermont Auvergne University
Clermont-Ferrand, France

Khalid M. Hosny
Zagazig University
Zagazig, Egypt

Khan Muhammad
Sejong University
Seoul, South Korea

Mahmoud Hassaballah
South Valley University
Qena, Egypt

Mahmoud Khaled Abd-Ellah
Al-Madina Higher Institute for Engineering and Technology
Giza, Egypt
1 Accelerating the CNN Inference on FPGAs
CONTENTS
1.1 Introduction ......................................................................................................2
1.2 Background on CNNs and Their Computational Workload ............................3
1.2.1 General Overview.................................................................................3
1.2.2 Inference versus Training ..................................................................... 3
1.2.3 Inference, Layers, and CNN Models ....................................................3
1.2.4 Workloads and Computations...............................................................6
1.2.4.1 Computational Workload .......................................................6
1.2.4.2 Parallelism in CNNs ..............................................................8
1.2.4.3 Memory Accesses ..................................................................9
1.2.4.4 Hardware, Libraries, and Frameworks ................................ 10
1.3 FPGA-Based Deep Learning.......................................................................... 11
1.4 Computational Transforms ............................................................................. 12
1.4.1 The im2col Transformation ................................................................ 13
1.4.2 Winograd Transform .......................................................................... 14
1.4.3 Fast Fourier Transform ....................................................................... 16
1.5 Data-Path Optimizations ................................................................................ 16
1.5.1 Systolic Arrays.................................................................................... 16
1.5.2 Loop Optimization in Spatial Architectures ...................................... 18
Loop Unrolling ................................................................................... 19
Loop Tiling .........................................................................................20
1.5.3 Design Space Exploration................................................................... 21
1.5.4 FPGA Implementations ...................................................................... 22
1.6 Approximate Computing of CNN Models ..................................................... 23
1.6.1 Approximate Arithmetic for CNNs.................................................... 23
1.6.1.1 Fixed-Point Arithmetic ........................................................ 23
1.6.1.2 Dynamic Fixed Point for CNNs...........................................28
1.6.1.3 FPGA Implementations ....................................................... 29
1.6.1.4 Extreme Quantization and Binary Networks....................... 29
1.6.2 Reduced Computations....................................................................... 30
1.6.2.1 Weight Pruning .................................................................... 31
1.6.2.2 Low Rank Approximation ................................................... 31
1.6.2.3 FPGA Implementations ....................................................... 32
1.7 Conclusions..................................................................................................... 32
Bibliography ............................................................................................................ 33
1.1 INTRODUCTION
The exponential growth of big data during the last decade has motivated innovative
methods to extract high-level semantic information from raw sensor data such as
videos, images, and speech sequences. Among the proposed methods, convolutional
neural networks (CNNs) [1] have become the de facto standard by delivering near-human
accuracy in many applications related to machine vision (e.g., classification [2],
detection [3], segmentation [4]) and speech recognition [5].
This performance comes at the price of a large computational cost, as CNNs
require up to 38 GOPs to classify a single frame [6]. As a result, dedicated hardware
is required to accelerate their execution. Graphics processing units (GPUs)
are the most widely used platform to implement CNNs, as they offer the best
performance in terms of pure computational throughput, reaching up to 11 TFLOPs
[7]. Nevertheless, in terms of power consumption, field-programmable gate array
(FPGA) solutions are known to be more energy efficient than GPUs. While GPU
implementations have demonstrated state-of-the-art computational performance,
CNN acceleration will soon be moving towards FPGAs for two reasons. First,
recent improvements in FPGA technology put FPGA performance within striking
distance of GPUs, with a reported performance of 9.2 TFLOPs for FPGAs [8].
Second, recent trends in CNN development increase the sparsity of CNNs and
use extremely compact data types. These trends favor FPGA devices, which are
designed to handle irregular parallelism and custom data types. As a result,
next-generation CNN accelerators are expected to deliver up to 5.4× better
computational throughput than GPUs [7].
As an inflection point in the development of CNN accelerators might be near, we
conduct a survey on FPGA-based CNN accelerators. While a similar survey can be
found in [9], we focus in this chapter on the recent techniques that were not covered
in the previous works. In addition to this chapter, we refer the reader to the works
of Venieris et al. [10], which review the toolflows automating the CNN mapping
process, and to the works of Sze et al., which focus on ASICs for deep learning
acceleration.
The amount and diversity of research on the subject of CNN FPGA acceleration
within the last 3 years demonstrate the tremendous industrial and academic interest.
This chapter presents a state-of-the-art review of CNN inference accelerators over
FPGAs. The computational workloads, their parallelism, and the involved memory
accesses are analyzed. At the level of neurons, optimizations of the convolutional
and fully connected (FC) layers are explained and the performances of the differ-
ent methods compared. At the network level, approximate computing and data-path
optimization methods are covered and state-of-the-art approaches compared. The
methods and tools investigated in this survey represent the recent trends in FPGA
CNN inference accelerators and will fuel future advances in efficient hardware for
deep learning.
TABLE 1.1
Tensors Involved in the Inference of a Given Layer ℓ with Their Dimensions

X  Input FMs        B × C × H × W        B      Batch size (number of input frames)
Y  Output FMs       B × N × V × U        W/H/C  Width/Height/Depth of input FMs
Θ  Learned filters  N × C × J × K        U/V/N  Width/Height/Depth of output FMs
β  Learned biases   N                    K/J    Horizontal/Vertical kernel size
A convolutional layer (conv) carries out the feature extraction process by applying – as
illustrated in Figure 1.1 – a set of three-dimensional convolution filters Θconv to a set
of B input volumes Xconv. Each input volume has a depth C and can be a color image
(in the case of the first conv layer) or an output generated by previous layers in the
network. Applying a three-dimensional filter to a three-dimensional input results in
a two-dimensional feature map (FM). Thus, applying N three-dimensional filters in
a layer results in a three-dimensional output with a depth N.
In some CNN models, a learned offset βconv – called a bias – is added to processed
feature maps. However, this practice has been discarded in recent models [6]. The
computations involved in feed-forward propagation of conv layers are detailed in
Equation 1.1.
$$Y^{\mathrm{conv}}[b,n,v,u] \;=\; \beta^{\mathrm{conv}}[n] \;+\; \sum_{c=1}^{C}\sum_{j=1}^{J}\sum_{k=1}^{K} X^{\mathrm{conv}}[b,c,v+j,u+k] \times \Theta^{\mathrm{conv}}[n,c,j,k] \qquad (1.1)$$
One may note that applying a depth convolution to a 3D input boils down to applying
a mainstream 2D convolution to each of the 2D channels of the input, then, at each
point, summing the results across all the channels, as shown in Equation 1.2.
FIGURE 1.1 Feed-forward propagation in conv, act, and pool layers (batch size B = 1, bias
β omitted).
Accelerating the CNN Inference on FPGAs 5
$$\forall n \in [1,N]: \quad Y^{\mathrm{conv}}[n] \;=\; \beta^{\mathrm{conv}}[n] \;+\; \sum_{c=1}^{C} \mathrm{conv2D}\left( X^{\mathrm{conv}}[c],\; \Theta^{\mathrm{conv}}[n,c] \right) \qquad (1.2)$$
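To make the indexing of Equations 1.1 and 1.2 concrete, the following minimal sketch implements the conv layer as a naive loop nest in NumPy. It is an illustration rather than the chapter's implementation: tensor names and shapes follow Table 1.1, and a stride of 1 with no padding is assumed.

```python
import numpy as np

def conv_layer(X, theta, beta):
    """Naive conv layer of Equation 1.1: X is (B, C, H, W), theta is (N, C, J, K),
    beta is (N,). Returns Y of shape (B, N, V, U) with V = H-J+1 and U = W-K+1."""
    B, C, H, W = X.shape
    N, _, J, K = theta.shape
    V, U = H - J + 1, W - K + 1
    Y = np.zeros((B, N, V, U))
    for b in range(B):
        for n in range(N):
            Y[b, n] += beta[n]                   # learned bias of output channel n
            for c in range(C):                   # channel-wise sum of Equation 1.2
                for v in range(V):
                    for u in range(U):
                        # 2D correlation of one input channel with one 2D filter slice
                        Y[b, n, v, u] += np.sum(X[b, c, v:v+J, u:u+K] * theta[n, c])
    return Y

# Tiny smoke test with hypothetical dimensions
X = np.random.rand(1, 3, 8, 8)        # B=1, C=3, H=W=8
theta = np.random.rand(4, 3, 3, 3)    # N=4 filters of size 3x3x3
beta = np.zeros(4)
print(conv_layer(X, theta, beta).shape)  # (1, 4, 6, 6)
```

The quadruple loop makes explicit the C × J × K MAC operations spent on each output value, which is the quantity the workload analysis later in this section builds upon.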
Each conv layer of a CNN is usually followed by an activation layer that applies a
nonlinear function to all the values of FMs. Early CNNs were trained with TanH
or Sigmoid functions, but recent models employ the rectified linear unit (ReLU)
function, which grants faster training times and less computational complexity, as
highlighted in Krizhevsky et al. [12].
$$Y^{\mathrm{act}}[b,n,h,w] = \mathrm{act}\left( X^{\mathrm{act}}[b,n,h,w] \right), \quad \mathrm{act} := \mathrm{TanH},\ \mathrm{Sigmoid},\ \mathrm{ReLU}, \ldots \qquad (1.3)$$
The convolutional and activation parts of a CNN are directly inspired by the
cells of the visual cortex in neuroscience [13]. This is also the case with pooling
layers, which are periodically inserted in between successive conv layers. As
shown in Equation 1.4, pooling sub-samples each channel of the input FM by
selecting either the average, or, more commonly, the maximum of a given neigh-
borhood K. As a result, the dimensionality of an FM is reduced, as illustrated
in Figure 1.1.
$$Y^{\mathrm{pool}}[b,n,v,u] = \max_{p,q \,\in\, [1:K]} X^{\mathrm{pool}}[b,n,v+p,u+q] \qquad (1.4)$$
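As a brief illustration of Equations 1.3 and 1.4, the sketch below applies a ReLU activation followed by K × K max pooling. It is a minimal sketch, not the chapter's code; the window size K and the non-overlapping stride are assumptions.

```python
import numpy as np

def relu(X):
    """Elementwise activation of Equation 1.3 with act := ReLU."""
    return np.maximum(X, 0.0)

def max_pool(X, K=2):
    """Non-overlapping K x K max pooling of Equation 1.4 on X of shape (B, N, H, W)."""
    B, N, H, W = X.shape
    V, U = H // K, W // K
    Y = np.zeros((B, N, V, U))
    for v in range(V):
        for u in range(U):
            window = X[:, :, v*K:(v+1)*K, u*K:(u+1)*K]
            Y[:, :, v, u] = window.max(axis=(2, 3))  # maximum of the K x K neighborhood
    return Y

X = np.random.randn(1, 4, 6, 6)
print(max_pool(relu(X)).shape)  # (1, 4, 3, 3): each channel is sub-sampled, as in Figure 1.1
```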
When deployed for classification purposes, the CNN pipeline is often terminated
by FC layers. In contrast with convolutional layers, FC layers do not implement
weight sharing and involve as many weights as input values (i.e., W = K, H = J, U = V = 1).
Moreover, as in conv layers, a nonlinear function is applied to the outputs of
FC layers.
$$Y^{\mathrm{fc}}[b,n] = \beta^{\mathrm{fc}}[n] + \sum_{c=1}^{C}\sum_{h=1}^{H}\sum_{w=1}^{W} X^{\mathrm{fc}}[b,c,h,w] \times \Theta^{\mathrm{fc}}[n,c,h,w] \qquad (1.5)$$
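Because an FC layer involves as many weights as input values, Equation 1.5 reduces to a matrix product once the input volumes and the filters are flattened. The following sketch shows this formulation; it is illustrative, with hypothetical dimensions, rather than the chapter's code.

```python
import numpy as np

def fc_layer(X, theta, beta):
    """FC layer of Equation 1.5: X is (B, C, H, W), theta is (N, C, H, W), beta is (N,)."""
    B, N = X.shape[0], theta.shape[0]
    Xmat = X.reshape(B, -1)       # flatten each input volume to a row of length C*H*W
    Tmat = theta.reshape(N, -1)   # one row of weights per output neuron
    return Xmat @ Tmat.T + beta   # (B, N) output activations before the nonlinearity

X = np.random.rand(2, 8, 4, 4)       # B=2, C=8, H=W=4 (hypothetical sizes)
theta = np.random.rand(10, 8, 4, 4)  # N=10 output neurons
beta = np.zeros(10)
print(fc_layer(X, theta, beta).shape)  # (2, 10)
```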
In some models, batch normalization (BN) layers normalize the FMs using per-channel statistics μ and σ² together with learned parameters γ and α:

$$Y^{\mathrm{BN}}[b,n,v,u] = \gamma\,\frac{X^{\mathrm{BN}}[b,n,v,u] - \mu}{\sqrt{\sigma^{2} + \epsilon}} + \alpha \qquad (1.7)$$
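A minimal inference-time sketch of Equation 1.7, assuming stored per-channel statistics and broadcasting over the batch and spatial dimensions (the function and variable names are illustrative, not taken from the chapter):

```python
import numpy as np

def batch_norm(X, gamma, alpha, mu, var, eps=1e-5):
    """Per-channel batch normalization of Equation 1.7.
    X is (B, N, V, U); gamma, alpha, mu, and var are vectors of length N."""
    shape = (1, -1, 1, 1)  # broadcast the per-channel parameters over batch and space
    x_hat = (X - mu.reshape(shape)) / np.sqrt(var.reshape(shape) + eps)
    return gamma.reshape(shape) * x_hat + alpha.reshape(shape)

X = np.random.randn(2, 3, 4, 4)
Y = batch_norm(X, gamma=np.ones(3), alpha=np.zeros(3),
               mu=X.mean(axis=(0, 2, 3)), var=X.var(axis=(0, 2, 3)))
print(round(Y.mean(), 3), round(Y.std(), 3))  # approximately 0.0 and 1.0
```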
TABLE 1.2
Popular CNN Models with Their Computational Workload*

Model                             AlexNet [12]  GoogleNet [16]  VGG16 [6]  VGG19 [6]  ResNet101 [17]  ResNet-152 [17]
Conv workload  Σℓ Cℓconv (MACs)   666 M         1.58 G          15.3 G     19.5 G     7.57 G          11.3 G
Conv weights   Σℓ Wℓconv          2.33 M        5.97 M          14.7 M     20 M       42.4 M          58 M
Act                               ReLU          ReLU            ReLU       ReLU       ReLU            ReLU
Pool layers                       3             14              5          5          2               2
FC layers (Lf)                    3             1               3          3          1               1
FC workload    Σℓ Cℓfc (MACs)     58.6 M        1.02 M          124 M      124 M      2.05 M          2.05 M
FC weights     Σℓ Wℓfc            58.6 M        1.02 M          124 M      124 M      2.05 M          2.05 M
The overall computational workload C of a CNN inference, expressed as a number of MAC operations, is the sum of the per-layer workloads of the conv and FC parts:

$$\mathcal{C} = \sum_{\ell=1}^{L_c} \mathcal{C}_\ell^{\mathrm{conv}} + \sum_{\ell=1}^{L_f} \mathcal{C}_\ell^{\mathrm{fc}} \qquad (1.8)$$

$$\mathcal{C}_\ell^{\mathrm{conv}} = N_\ell \times C_\ell \times J_\ell \times K_\ell \times U_\ell \times V_\ell \qquad (1.9)$$

$$\mathcal{C}_\ell^{\mathrm{fc}} = N_\ell \times C_\ell \times W_\ell \times H_\ell \qquad (1.10)$$
In a similar way, the number of weights, and consequently the size of a given CNN
model, can be expressed as follows:
$$\mathcal{W} = \sum_{\ell=1}^{L_c} \mathcal{W}_\ell^{\mathrm{conv}} + \sum_{\ell=1}^{L_f} \mathcal{W}_\ell^{\mathrm{fc}} \qquad (1.11)$$

$$\mathcal{W}_\ell^{\mathrm{conv}} = N_\ell \times C_\ell \times J_\ell \times K_\ell \qquad (1.12)$$

$$\mathcal{W}_\ell^{\mathrm{fc}} = N_\ell \times C_\ell \times W_\ell \times H_\ell \qquad (1.13)$$
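The per-layer formulas of Equations 1.9, 1.10, 1.12, and 1.13 are straightforward to evaluate. The sketch below computes the workload and weight count of one conv layer and one FC layer; the layer dimensions are assumed, VGG-like values chosen for illustration rather than figures quoted from Table 1.2.

```python
def conv_layer_cost(N, C, J, K, U, V):
    """Workload in MACs (Equation 1.9) and weight count (Equation 1.12) of a conv layer."""
    macs = N * C * J * K * U * V
    weights = N * C * J * K
    return macs, weights

def fc_layer_cost(N, C, H, W):
    """Workload (Equation 1.10) and weight count (Equation 1.13) of an FC layer."""
    macs = N * C * H * W
    return macs, macs  # for FC layers the MAC count equals the weight count

# Assumed VGG-like first conv layer: 64 filters of 3x3x3 producing 224x224 output FMs
print(conv_layer_cost(N=64, C=3, J=3, K=3, U=224, V=224))  # (86704128, 1728)
# Assumed FC layer mapping a 7x7x512 input volume to 4096 neurons
print(fc_layer_cost(N=4096, C=512, H=7, W=7))              # (102760448, 102760448)
```

Summing such per-layer terms over all conv and FC layers gives the totals C and W of Equations 1.8 and 1.11, i.e., figures of the kind reported in Table 1.2.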
For state-of-the-art CNN models, L_c, N_ℓ, and C_ℓ can be quite large. This makes
CNNs computationally and memory intensive: for instance, the classification of a
single frame using the VGG19 network requires 19.5 billion MAC operations.
It can be observed in Table 1.2 that most of the MACs occur in the convolutional
parts; consequently, 90% of the execution time of a typical inference is spent on
conv layers [18]. By contrast, FC layers contain most of the weights and thus
dominate the size of a given CNN model.
Moreover, the execution of the most computationally intensive parts (i.e., the conv
layers) exhibits four distinct types of concurrency.
Note that the fully connected parts of state-of-the-art models involve large values
of N_ℓ and C_ℓ, making the memory reading of weights the dominant factor, as
formulated in Equation 1.16. In this context, batch parallelism can significantly
accelerate the execution of CNNs with a large number of FC layers.
In the conv parts, the high number of MAC operations results in a high number of
memory accesses, as each MAC requires at least 2 memory reads and 1 memory
write*. This number of memory accesses accumulates with the high dimensions of
data manipulated by conv layers, as shown in Equation 1.18. If all these accesses are
towards external memory (for instance, DRAM), throughput and energy consumption
will be highly impacted, because DRAM access engenders high latency and energy
consumption, even more than the computation itself [21].

* This is the best-case scenario of a fully pipelined MAC, where intermediate results do not need to be
loaded.
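As a rough illustration of the access count discussed above (Equation 1.18 itself is not reproduced here, and the figures below are assumptions for the example), an accelerator with no on-chip data reuse would issue about three memory accesses per MAC:

```python
def naive_memory_accesses(macs_per_frame, reads_per_mac=2, writes_per_mac=1):
    """Lower bound on memory accesses when every MAC reads two operands and
    writes one result, with no on-chip data reuse."""
    return macs_per_frame * (reads_per_mac + writes_per_mac)

# VGG19 conv workload from Table 1.2: 19.5 GMACs per frame
print(naive_memory_accesses(19.5e9))  # about 5.9e10 accesses per frame
```

If all of these accesses were served by DRAM, the per-frame cost would be dominated by memory rather than arithmetic, which motivates the caching hierarchy discussed next.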
The number of these DRAM accesses, and thus the latency and energy consumption,
can be reduced by implementing a memory-caching hierarchy using on-chip memories.
As discussed in the next sections, state-of-the-art CNN accelerators employ register
files as well as several levels of caches; the former, being the fastest, are implemented
nearest to the computational units. The latency and energy consumption resulting
from these caches are lower by several orders of magnitude than those of external
memory accesses, as pointed out in Sze et al. [22].
1. A high density of hard-wired digital signal processor (DSP) blocks that are
able to achieve up to 20 TMACs (8 TFLOPs) [8].
2. A collection of in situ on-chip memories, located next to the DSPs, that can be
exploited to significantly reduce the number of external memory accesses.
The throughput of an accelerator, expressed in processed frames per second (FPS), is the ratio of its sustained computational throughput T, in MACs per second, to the workload C of a single frame, in MACs:

$$T(\mathrm{FPS}) = \frac{T(\mathrm{MACS})}{\mathcal{C}(\mathrm{MAC})} \qquad (1.19)$$
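As a numerical illustration of Equation 1.19 (the sustained throughput figure is an assumption for the example, not a measurement from the chapter), an accelerator sustaining 1 TMAC/s on the 19.5 GMAC conv workload of a VGG19 frame would deliver roughly 51 FPS:

```python
def throughput_fps(sustained_macs_per_s, workload_macs_per_frame):
    """Equation 1.19: frames per second delivered by an accelerator."""
    return sustained_macs_per_s / workload_macs_per_frame

# Assumed sustained throughput of 1 TMAC/s against VGG19's 19.5 GMACs per frame (Table 1.2)
print(round(throughput_fps(1e12, 19.5e9), 1))  # 51.3 FPS
```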
* At a similar number of memory accesses. These accesses typically play the dominant role in the
power consumption of an accelerator.
* https://www.openblas.net/
† https://developer.nvidia.com/cublas