MEAP Edition
Manning Early Access Program
Machine Learning Engineering in Action
Version 4
To get the most out of the lessons and topics in this book, it will help to have at least a
moderate knowledge of the applications of ML (algorithmic theory will be discussed in places,
but there is no requirement to understand the nuances of the underlying libraries) and the
ability to build an ML solution in either Python or Scala.
Throughout the book we’ll be covering the enormous elephant in the room for nearly all
companies working towards getting benefits out of Data Science and ML work: why so
many projects fail. Whether it be failure-to-launch scenarios, overly complex solutions to
problems that could be solved in simpler ways, fragile code, expensive solutions, or poorly
trained implementations that produce poor results, the end result in so many projects across
so many industries is a failure to live up to the promise of predictive modeling. As we go
through scenarios and code implementation demonstrations, we’ll be covering not only the
‘how’ of building resilient ML solutions, but the ‘why’ as well.
The goal of this book is not to show how to implement certain solutions through the application
of specific algorithms, nor to explain the theory behind how those algorithms work, nor to focus
on one particular sub-genre of ML. It is instead focused on the broad topic of successful ML work:
how to apply Agile fundamentals to ML, and how to build demonstrably production-ready code bases
that, if adhered to, ensure that you have a maintainable and robust solution for your projects.
With the dozens of ML theory and model-centric books that I’ve read and collected throughout
my career, I really wish that I had a copy of this book when I was getting started in this field
many years ago. The crippling failures that I’ve endured as I’ve learned these paradigms have
been both painful and formative, and being able to pass those lessons along to the next
generation of ML practitioners out there is truly a gift for me.
Thank you once again for taking the time to look at this work and to provide feedback in the
liveBook discussion forum, pointing out things that I might have missed or that you’d like me to cover.
Kindest Regards,
Ben Wilson
1
An Introduction to Machine Learning
Engineering
• Defining Machine Learning Engineering and why it is important to increase the chances of
successful ML project work
• Explaining why ML Engineering and the processes, tools, and paradigms surrounding it can
minimize the chances of ML project abandonment
• Discussing the six primary tenets of ML Engineering and how failing to follow them causes
project failure
Machine learning (ML) is exciting. To the layperson, it brings with it the promise of
seemingly magical abilities of soothsaying: uncovering mysterious and miraculous answers to
difficult problems. ML makes money for companies, it autonomously tackles overwhelmingly
large tasks, and it relieves people from the burden of the monotonous work involved in analyzing
data to draw conclusions. To state the obvious, though, it’s challenging. Between
thousands of algorithms and a diverse skill set ranging from Data Engineering (DE) to
advanced statistical analysis and visualization, the work required of a professional
practitioner of ML is truly intimidating and filled with complexity.
ML Engineering is the concept of applying a system around this staggering level of
complexity. It is a set of standards, tools, processes, and methodology that aims to minimize
the chances of abandoned, misguided, or irrelevant work being done in an effort to solve a
business problem or need. It, in essence, is the roadmap to creating ML-based systems that
can not only be deployed to production, but can be maintained and updated for years in the
future, allowing businesses to reap the rewards in efficiency, profitability, and accuracy that
ML in general has proven to provide (when done correctly).
This book is, at its essence, a roadmap to guide you through this system, as shown in
figure 1.1 below. It gives a proven set of processes (mostly a ‘lessons learned’ of things I’ve
screwed up in my career) about the planning phase of project work, navigating the difficult
and confusing translation of business needs into the language of ML work. From that, it
covers a standard methodology of experimentation work, focusing on the tools and coding
standards for creating an MVP that will be comprehensive and maintainable. Finally, it will
cover the various tools, techniques, and nuances involved in crafting production-grade
maintainable code that is both extensible and easy to troubleshoot.
Figure 1.1 The ML Engineering Roadmap, showing the proven stages of work involved in creating successful
ML solutions. While some projects may require additional steps (particularly if working with additional
Engineering teams), these are the fundamental steps that should be involved in any ML-backed project.
ML Engineering is not exclusively about the path shown in figure 1.1. It is also the
methodology within each of these stages that can make or break a project. It is the way in
which a Data Science team talks to the business about the problem, the manner in which
research is done, the details of experimentation, the way that code is written, and the
multitude of tools and technology that are employed while traveling along the roadmap path
that can greatly reduce the worst outcome that a project might have – abandonment.
The end goal of ML work is, after all, about solving a problem. By embracing the concepts
of ML Engineering and following the road of effective project work, the path to a
useful modeling solution can be shorter, far cheaper, and have a much higher probability of
succeeding than if you just ‘wing it’ and hope for the best.
Books that delve into the specifics of algorithm theory and implementations have already been
(and continue to be) written in great detail.
We’re here instead to talk about the giant elephant in the room when it comes to ML.
We’re going to be talking about why, with so many companies going all-in on ML, hiring
massive teams of highly compensated Data Scientists, and devoting massive amounts of financial
and temporal resources to projects, these projects end up failing at an incredibly high rate.
We’ll be covering the 6 major contributors to project failure, as shown in figure 1.2 below,
discussing how to identify why each of these causes so many projects to fail, be abandoned,
or take far longer than they should to reach production. In each section throughout the first
part of this book we will be discussing the solutions to each of these common failures and
covering the processes that make the chances of these derailing your projects very low.
(Figure 1.2 chart data: Planning 30 %, Scoping 25 %, Fragility 15 %, Technology 15 %, Cost 10 %, Hubris 5 %)
Figure 1.2 Primary reasons for ML project failures, drawn not only from my own early work but also from
many other projects that I’ve been involved in, in a reclamation capacity, since those early years.
Figure 1.2 shows some rough estimates of what I’ve come to see as the primary reasons
why projects fail (and the rates of these failures in any given industry are, from my experience,
truly surprising). Generally, failure is due to a DS team that is either inexperienced with
building a large-scale, production-grade modeling solution for a particular need or has simply
failed to understand what the desired outcome from the business is.
At no point in any project have I ever seen any of these issues arise due to malicious
intent; rather, these elements are due in large part to the fact that most ML projects are
incredibly challenging and complex, and are built on algorithmic software tooling that is
hard to explain to a layperson (hence the breakdowns in communication with business units
that most projects endure). With such complex solutions at play, so many moving parts, and
a world of corporations trying to win in this new data-focused arms race of profiting from ML
as quickly as possible, it’s no wonder that the perilous journey of taking a solution to a point
of stability in production fails so frequently.
This book isn’t intended as a doom-riddled treatise on how challenging ML is; rather, it’s
meant to show how these elements can put projects at risk and to teach the tools that help
minimize each risk. By focusing on each of these areas in a conscientious and
deliberate manner, many of these risks can be largely mitigated, if not eliminated entirely.
Below, in figure 1.3, we can see a representation of the path that all of us hope to travel
when we employ ML to solve a problem. From the outset of a project to its planned
successful, long-running, and maintainable state, the journey is fraught with detours that can
spell the termination of our hard work. However, by focusing on building up the knowledge,
skills, and utilization of processes and tooling, each of these 6 major areas can generally be
avoided (or, at the very least, addressed in a way that won’t cause a complete failure of the
project).
Figure 1.3 The branching paths of failure for the vast majority of ML projects. Nearly all ML solutions that don’t
plan to focus on these 6 core areas have a much higher chance of being abandoned either before production,
or shortly after running in production.
1.2.1 Planning
By far the largest cause of project failures, failing to plan out a project thoroughly is one
of the most demoralizing ways for a project to be cancelled. Imagine for a moment that
you’re the first-hired DS for a company. On your first week, an executive from marketing
approaches you, explaining (in their terms) a serious business issue that they are having.
They need to figure out an efficient means of communicating to customers through email to
let them know of upcoming sales that they might be interested in. With very little additional
detail provided to you, the executive merely says, “I want to see the click and open rates go
up.”
If this were the only information supplied, and repeated queries to members of the
marketing team simply restated the same end goal of ‘increasing the click and open rates’,
there would seem to be a limitless number of avenues to pursue. Left to your own devices, do
you:
• Focus on content recommendation and craft custom emails for each user?
• Provide predictions with an NLP-backed system that will craft relevant subject lines for
each user?
• Attempt to predict a list of products most relevant to the customer base to put on sale
each day?
With so many options of varying complexity and approach, and with very little guidance,
it is highly unlikely that you will create a solution that is aligned with the expectations of the
executive.
If a proper planning discussion were had, going into the right amount of detail while
avoiding the complexity of the ML side of things, the true expectation might be revealed:
the only thing that they are expecting is a prediction, for each user, of
when they would be most likely to read an email. They simply want to know when
someone is most likely not to be at work, commuting, or sleeping, so that they can send
batches of emails throughout the day to different cohorts of customers.
The sad reality is that many ML projects start off in this way. There is frequently very
little communication with regards to project initiation and the general expectation is that ‘the
DS team will figure it out’. However, without the proper guidance on what needs to be built,
how it needs to function, and what the end goal of the predictions is, the project is almost
doomed to failure.
After all, what would have happened if an entire content recommendation system were
built for that use case, with months of development and effort wasted when a very simple
analytics query based on IP geolocation was what was really needed? The project would not
only be cancelled, but there would likely be many questions from on-high as to why this
system was built and why the cost of development was so high.
If we were to look at a very simplified version of this initial planning discussion,
as shown in figure 1.4 below, we can see how just a few careful questions and
clear answers can give the one thing that every Data Scientist should be looking for in this
situation (being the first DS at a company, working on the first problem): a quick win.
Figure 1.4 A simplified planning discussion to get to the root of what an internal customer (in this case, the
marketing executive who wants high open rates on their emails) actually needs for a solution.
As figure 1.4 shows at the right (the DS internal monologue), the problem at hand
is not at all in the list of original assumptions that were made. There is no talk of the content of
the emails, nor the relevancy of the subject line or the items in the email. It’s a simple analytical
query to figure out which time zone customers are in and to analyze historic open times, in local
time, for each customer. By taking a few minutes to plan and understand the use case fully,
weeks (if not months) of wasted effort, time, and money were saved. The arguably more
important result from lines of questioning such as this is that it gives both parties (the DS
team and the business sponsor) an understanding of what will be built and why it’s
being built. Notice the complete lack of how it will be built. The business sponsor (usually)
won’t care and doesn’t need to know those details.
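To make the planning outcome concrete: the ‘quick win’ here is little more than an aggregation over historical email events. The sketch below is a hypothetical illustration only; the file path and column names (customer_id, open_timestamp_utc, time_zone) are assumptions for the example, not details from the scenario.

import pandas as pd

# Hypothetical historical email-event data, one row per email open.
# Assumed columns: customer_id, open_timestamp_utc, time_zone (IANA name).
events = pd.read_parquet("email_open_events.parquet")  # assumed source path
events["open_timestamp_utc"] = pd.to_datetime(events["open_timestamp_utc"], utc=True)

# Convert each open event from UTC to the customer's local time zone and keep the hour.
events["open_local_hour"] = events.apply(
    lambda row: row["open_timestamp_utc"].tz_convert(row["time_zone"]).hour,
    axis=1,
)

# For each customer, pick the local hour of day with the most historical opens.
best_send_hour = (
    events.groupby(["customer_id", "open_local_hour"])
    .size()
    .rename("open_count")
    .reset_index()
    .sort_values(["customer_id", "open_count"], ascending=[True, False])
    .drop_duplicates("customer_id")
    .rename(columns={"open_local_hour": "best_send_hour"})
    [["customer_id", "best_send_hour"]]
)

The point of the sketch isn’t the specific calls; it’s the scale of the work. This is analytics, not modeling, and that is exactly the kind of realization a short planning conversation surfaces before months of effort are spent.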
We will be covering the processes of planning, having project expectation discussions
with internal business customers, and general communications about ML work with a non-
technical audience at length and in much greater depth throughout Chapter 2.
1.2.2 Scoping and Research
Let’s take a look at another potentially familiar scenario to discuss the polar-opposite
ways that this stage of ML project development can go awry. For this example, there are two
separate DS teams at a company (team ‘A’ in figure 1.5 and team ‘B’ in figure 1.6), each
pitted against the other to develop a solution to an escalating incidence of fraud being
conducted through the company’s billing system.
Team A’s research and scoping process is illustrated in figure 1.5 below.
Figure 1.5 Research and scoping of a fraud detection problem for a junior team of well-intentioned but
inexperienced Data Scientists.
Team ‘A’ is made up mostly of junior Data Scientists, all of whom entered the workforce
without an extensive period in academia. Their first action, upon getting the details of the
project and what is expected of them, is to immediately go to blog posts. They search the
internet for ‘detecting payment fraud’ and ‘fraud algorithms’, finding hundreds of results
from consultancy companies, a few extremely high-level blog posts from similarly junior Data
Scientists who have likely never put a model into production, and some open-source datasets
with very rudimentary examples.
Team B’s research and scoping, in contrast, is shown below in figure 1.6.
Figure 1.6 Research and scoping for an academia-focused group of researchers for the fraud detection
problem.
Team ‘B’, on the other hand, is filled with a group of PhD academic researchers. With
their studious approach to research and vetting of ideas, their first actions are to dig into
published papers on the topic of fraud modeling. Spending several days reading through
journals and papers, they are now armed with a large collection of theory encompassing
some of the most cutting-edge research being done on detecting fraudulent activity.
If we were to ask either of these teams what the level of effort is to produce a solution,
we would get wildly divergent answers. Team ‘A’ would likely state that it would take about two
weeks to build their XGBoost binary classification model (they mistakenly believe that they
already have the code, after all, from the blog post that they found).
Team ‘B’ would tell a vastly different tale. They would estimate that it would take several
months to implement, train, and evaluate the novel deep learning architecture that they found
in a highly regarded white paper, whose proven accuracy in the research was significantly
better than any previously implemented algorithm for this use case.
The problem here with scoping and research is that these two polar opposites would both
have their projects fail, for two completely different reasons. Team ‘A’ would have a project
failure due to the fact that the solution to the problem is significantly more complex than
the example shown in the blog post (the class imbalance issue alone is too challenging a
topic to effectively document in the short space of a blog post). Team ‘B’, even
though their solution would likely be extremely accurate, would never be allocated the
resources to build the risky solution that they came up with as an initial fraud detection
service at the company (although it would be a great candidate for a version 2.0
implementation).
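As a hedged illustration of why the ‘two weeks and a blog post’ estimate falls apart, the sketch below shows roughly what even the minimal XGBoost approach involves once class imbalance is acknowledged. The synthetic data, hyperparameters, and scale_pos_weight heuristic are illustrative assumptions, not a prescribed solution for fraud detection.

import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for billing data: roughly 1% fraud to mimic heavy class imbalance.
X, y = make_classification(
    n_samples=50_000, n_features=20, weights=[0.99, 0.01], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Weight the positive (fraud) class by the imbalance ratio instead of training
# on the heavily skewed labels as-is.
imbalance_ratio = (y_train == 0).sum() / max((y_train == 1).sum(), 1)

model = xgb.XGBClassifier(
    n_estimators=400,
    max_depth=6,
    learning_rate=0.05,
    scale_pos_weight=imbalance_ratio,
    eval_metric="aucpr",  # precision-recall AUC is far more informative than accuracy here
)
model.fit(X_train, y_train)

# Evaluate with a metric that remains meaningful under heavy imbalance.
probs = model.predict_proba(X_test)[:, 1]
print("Average precision:", average_precision_score(y_test, probs))

Even this sketch glosses over threshold selection, cost-sensitive evaluation, and temporal leakage in the features, which is precisely the point: the blog-post version of the problem is not the production version.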
Project scoping for ML is incredibly challenging. Even for the most seasoned of ML
veterans, making a conjecture about how long a project will take, which approach is going to
be most successful, and the amount of resources that will need to be involved is a futile and
frustrating exercise. The risk associated with making erroneous claims is fairly high, but
there are means of structuring proper scoping and solution research that can help minimize
the chances of being wildly off on estimation.
Most companies have a mix of the types of people in this hyperbolic scenario.
There are academics whose sole goal is to further the advancement of knowledge and
research into algorithms, paving the way for future discoveries from within industry. There
are also ‘applications of ML’ engineers who just want to use ML as a tool to solve a business
problem. It’s very important to embrace and balance both of these philosophies
toward ML work, strike a compromise during the research and scoping phase of a project,
and know that the middle ground here is the best path to tread to ensure that a project
actually makes it to production.
1.2.3 Experimentation
In the experimentation phase, the largest cause of project failure is either the
experimentation taking too long (testing too many things or spending too long fine-tuning an
approach) or an under-developed prototype that is so abysmally bad that the business
decides to move on to something else.
Let’s use a similar example from section 1.2.2 to illustrate how these two approaches
might play out at a company that is looking to build an image classifier for detecting
products on retail store shelves. The experimentation paths that the two groups take
(showing the extreme polar opposites of experimentation) are shown in figures 1.7 and 1.8
below.
In their rush to create a demo showing how well they could classify their own images, team ‘A’
chose a too-simplistic test of only two classes. With cherry-picked results and a woefully inadequate
evaluation of the approach, this project would likely fail either early in the development process (if
someone on their leadership team was checking in on their progress) or late in the final
delivery phases before production scheduling (when the business unit’s internal customer
could see just how badly the approach was performing). Either way, this rushed and
lazy approach to testing will nearly always end in a project that is either
abandoned or cancelled.
Team ‘B’ and their approach to this problem is shown below in figure 1.8.
Figure 1.8 An overly thorough experimentation phase that effectively became the build-out of three separate
MVPs for the project.
Team ‘B’ in figure 1.8, on the other hand, is the polar opposite of team ‘A’. They’re an
example of the ‘pure researchers’: people who, even though they currently work for a
company, still behave as though they are conducting research in a university. Their approach
to solving this problem is to spend weeks searching for and devouring cutting-edge papers,
reading journals, and getting a solid understanding of the nuances of both the theory and
construction of the various convolutional neural network (CNN) approaches that might work
best for this project. They’ve settled on three broad approaches, each consisting of several tests
that need to be run and evaluated against their entire collection of training images.
It isn’t the depth of research that failed them in this case. The research was appropriate,
but the problem that they got themselves into was that they were trying too many things.
Varying the structure and depth of a custom-built CNN requires dozens (if not hundreds) of
iterations to ‘get right’ for the use case that they’re trying to solve. This is work that should
be scoped into the development stage of the project, when they have no distractions
other than developing this single approach. Instead of doing an abbreviated adjudication of
the custom CNN, they decided to test out transfer learning on three large pre-trained CNNs, as
well as building a Generative Adversarial Network (GAN) to get semi-supervised learning to
work on the extremely large set of classes that need to be classified.
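For a sense of what just one of those three transfer-learning experiments entails, here is a minimal Keras sketch. The ResNet50 backbone, image size, and class count are placeholder assumptions chosen for illustration, not details from the scenario.

import tensorflow as tf

NUM_CLASSES = 500   # assumed product-class count for the shelf-classification scenario
IMG_SHAPE = (224, 224, 3)

# Reuse an ImageNet-pretrained backbone and train only a small classification head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=IMG_SHAPE
)
base.trainable = False  # freeze the pretrained convolutional layers for the first pass

model = tf.keras.Sequential(
    [
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ]
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# train_ds and val_ds would be tf.data.Dataset objects built from the labeled shelf
# images; they are assumed to exist and are not shown here.
# model.fit(train_ds, validation_data=val_ds, epochs=10)

Multiply that by three pre-trained backbones, each evaluated against the full image corpus, plus a GAN-based semi-supervised experiment, and the resource drain of treating all of this as ‘experimentation’ becomes clear.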
Team ‘B’ quite simply took on too much work for an experimentation phase. What they’re
left with, at the point that they need to show demonstrations of their approaches, is nothing
more than decision paralysis and a truly staggering cloud-services GPU VM bill. With no real
conclusion on the best approach and such a large amount of money already spent on the
project, the chances that the entire project will be scrapped are incredibly high.
While not the leading cause of project failure, an experimentation phase that is done
incorrectly can stall or cancel an otherwise great project. There are patterns of
experimentation that have proven to work remarkably well for ML project work, though, the
details of which lie somewhere between the paths that these two teams took. We will be
covering this series of patterns, and the ways in which they can be adapted to any ML-based
project, at length in the remainder of Part 1 of this book.
1.2.4 Development
While not always a direct factor in getting a project cancelled, a poor
development practice for ML projects can manifest itself in a multitude of ways that can
completely kill a project. It’s usually not as visible as some of the other leading
causes, but having a fragile and poorly designed code base and poor development practices
can make a project harder to work on, easier to break in production, and far harder
to improve as time goes on.
For instance, let’s look at a rather simple and frequent modification situation that comes
up during the development of a modeling solution: changes to the feature engineering. In
figure 1.9 below, we see two Data Scientists attempting to make a set of changes in a
monolithic code base. In this development paradigm, all of the logic for the entire job is
written in a single notebook through scripted variable declarations and functions.
Figure 1.9 Editing a monolithic code base (a script) for ML project work.
Julie, in the monolithic code base, is likely going to have a lot of searching and scrolling
to do, finding each individual location where the feature vector is defined and adding her new
fields to those collections. Her encoding work will need to be correct and carried throughout the
script in the correct places as well. It’s a daunting amount of work for any sufficiently
complex ML code base (where the number of lines of code for feature engineering and
modeling combined can reach into the thousands if developed in a scripting paradigm) and is
prone to frustrating errors in the form of omissions, typos, and other transcription errors.
Joe, meanwhile, has far fewer edits to do, but is still subject to the act of searching
through the long code base and relying on editing the hard-coded values correctly.
The real problem with the monolithic approach comes when they try to incorporate each
of their changes into a single copy of the script. As they both have mutual dependencies on
one another’s work, they will both have to update their code and select one of their copies to
serve as a ‘master’ for the project, copying in the changes from the other’s work. It will be a
long and arduous process, wasting precious development time and likely requiring a
great deal of debugging to get correct.
Figure 1.10 below shows a different approach to maintaining an ML project’s code base,
utilizing modularized code architecture to separate the tight coupling that is present within
the large script from figure 1.9.
Figure 1.10 Updating of a modular ML code base to prevent rework and merge conflicts.
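To make the contrast concrete, a minimal sketch of the modular style is shown below. The module and function names (feature_engineering.py, build_feature_vector, encode_categoricals) and the column lists are illustrative assumptions rather than the book’s actual project structure; the point is that the feature vector and the encoding logic each live in exactly one place.

# feature_engineering.py -- hypothetical module illustrating the modular approach
from typing import List

import pandas as pd

# The feature vector is defined once, so adding a new field means editing one list.
NUMERIC_FEATURES: List[str] = ["age", "tenure_days", "avg_order_value"]
CATEGORICAL_FEATURES: List[str] = ["region", "membership_tier"]


def encode_categoricals(df: pd.DataFrame, columns: List[str]) -> pd.DataFrame:
    """One-hot encode categorical columns; the encoding logic lives in one function."""
    return pd.get_dummies(df, columns=columns, drop_first=True)


def build_feature_vector(raw: pd.DataFrame) -> pd.DataFrame:
    """Assemble the model's input features from the raw data."""
    features = raw[NUMERIC_FEATURES + CATEGORICAL_FEATURES].copy()
    return encode_categoricals(features, CATEGORICAL_FEATURES)

In a layout like this, Julie’s new fields touch a single constant and Joe’s encoding change touches a single function; each function can be unit tested on its own, and two developers working in different functions rarely hit the merge pain described above.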
1.2.5 Deployment
Perhaps the most confusing and complex part of ML project work comes at the point long
after a powerfully accurate model is built. The path between creating the model and serving
its predictions to the point where they can be used is nearly as difficult, and its possible
implementations are as varied as there are models to serve prediction needs.
Let’s take a company that provides analysis services to the fast food industry as an
example for this section. They’ve been fairly successful in serving predictions for inventory
management at region-level groupings for years, running large batch predictions for the per-
day demands of expected customer counts at a weekly level, submitting their forecasts as
bulk extracts each week.
The DS team up until this point has been accustomed to an ML architecture that
effectively looks like figure 1.11 below.
Figure 1.11 The relatively simple scheduled batch internal-facing prediction serving architecture.
This relatively standard architecture for serving up scheduled batch predictions, shown
above in figure 1.11, solely focused on exposing inference results to internal analytics
personnel, isn’t particularly complex and is a paradigm that they are very familiar with. With
the scheduled synchronous nature of the design, as well as the large amounts of time
between subsequent retraining and inference, the general sophistication of their technology
stack doesn’t have to be particularly high (which is a good thing; see sidenote below).
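A scheduled batch architecture of this sort often boils down to a job like the hedged sketch below: load a trained model, score the latest aggregated features, and write a bulk extract for the analysts. The file paths, artifact format, and column names are assumptions made purely for illustration.

import datetime

import joblib
import pandas as pd

# Runs on a weekly scheduler (cron, Airflow, etc.); nothing here needs to be real-time.
MODEL_PATH = "models/demand_forecaster.joblib"            # assumed artifact location
INPUT_PATH = "warehouse/region_weekly_features.parquet"   # assumed feature extract
OUTPUT_DIR = "exports"


def run_weekly_batch() -> str:
    """Score the latest regional features and write a bulk forecast extract."""
    model = joblib.load(MODEL_PATH)
    features = pd.read_parquet(INPUT_PATH)

    # Per-region expected customer counts for the coming week.
    features["forecast_customer_count"] = model.predict(
        features.drop(columns=["region_id"])
    )

    run_date = datetime.date.today().isoformat()
    output_path = f"{OUTPUT_DIR}/forecast_{run_date}.csv"
    features[["region_id", "forecast_customer_count"]].to_csv(output_path, index=False)
    return output_path


if __name__ == "__main__":
    print("Wrote forecast extract to", run_weekly_batch())

The simplicity is the feature: a failed run can be retried, the outputs can be inspected before anyone consumes them, and the serving ‘infrastructure’ is little more than a scheduler and a file share.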
Having realized the benefits of predictive modeling over time, when a new
business segment opens up, the business unit approaches the DS team to build a new
prediction system for it. This new service requires an approach to inventory
forecasting at a per-store level, with a requirement that the predictions be available in
near-real-time throughout the day. Realizing that they need to build a completely different
ensemble of models to solve this use case, the DS team focuses most of their time and
energy on the ML portion of the project. They don’t realize that the serving component of
this solution will need to rely not only on a REST API to serve the data to individual store
owners through an application, but also on updating the per-store forecasts frequently
throughout the day.
After coming up with an architecture that supports the business need (months after the
start of the project, well after the modeling portion had been finished), they
proceed to build it with the assistance of some Java software engineers. It isn’t until after
the first week of going live that the business realizes that the implementation’s cloud
computing costs are more than an order of magnitude higher than the revenue they are
getting for the service. The new architecture that is needed to serve the business need is
shown in figure 1.12 below.
Figure 1.12 The far more complex pseudo-real time serving architecture required to meet the business needs
for the project.
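The serving half of that new requirement, stripped to its bare bones, looks something like the Flask sketch below. The endpoint path, in-memory forecast store, and refresh cadence are illustrative assumptions; a production version would also need authentication, caching, autoscaling, and monitoring, which is exactly where the unplanned cost and complexity crept in.

import threading
import time

from flask import Flask, jsonify

app = Flask(__name__)

# In-memory store of the latest per-store forecasts, refreshed throughout the day.
latest_forecasts = {}


def refresh_forecasts_periodically(interval_seconds: int = 900) -> None:
    """Reload or re-score per-store forecasts on a fixed cadence (assumed 15 minutes)."""
    while True:
        # In a real system this would pull fresh features and re-score the ensemble;
        # here it is only a placeholder to illustrate the moving part.
        latest_forecasts.update({"store_001": {"expected_demand": 128}})
        time.sleep(interval_seconds)


@app.route("/forecast/<store_id>", methods=["GET"])
def get_forecast(store_id: str):
    """Serve the most recently computed forecast for a single store."""
    forecast = latest_forecasts.get(store_id)
    if forecast is None:
        return jsonify({"error": "no forecast available"}), 404
    return jsonify({"store_id": store_id, "forecast": forecast})


if __name__ == "__main__":
    threading.Thread(target=refresh_forecasts_periodically, daemon=True).start()
    app.run(host="0.0.0.0", port=8080)

Everything that has to sit around a sketch like this (feature refresh, retraining, load balancing, monitoring) adds cost that must be weighed against the revenue of the service, which is the lesson the team learned too late.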
It doesn’t take long for the project to get cancelled and a complete redesign of the
architecture and modeling approach to be commissioned to keep the costs down.
"No."
Mr. Kelly sank into profound silence. This was not his
usual mode; and Lettice became speedily aware that he had
something on his mind. He had been very friendly and
pleasant of late, and she enjoyed a call from him; but the
abstraction to-day became somewhat heavy. Lettice tried to
get up a conversation, and there was no response. She
spoke of Prue, and he only said, "Yes."
"No? Indeed?"
"Ah, yes!"
"Quite!"
"Then, could you tell me this? Is there the slightest
hope, that—say, under any circumstances—Miss Valentine
might marry?"
"Was she?"
"It was years ago, long before I first saw you. I knew
her then, well. In fact—though I had not meant to reveal
any personal interest in this question—I do not mind saying
that she made a very strong impression on me then. But I
was told that she was engaged; and I at once left the
place."
"Some do!"
The light that broke over his face was very singular.
Lettice had never seen anything exactly like it.
"PRUE."
"Can't tell."
"Go where?"
"Why 'ought'?"
"I had written to Prue, before you came in. I'll just add
a word of postscript . . . What a day this has been. I am so
glad for Prue! But oh, that poor little Keith! Felix, if you
knew how loving he was to me before I came away: and
how he said he would miss me. Oh, I hope we shall be in
time!"
"And Keith?"
"Yes; and the boy has seemed to turn from her in this
illness. It has been most painful. My dear, will you take off
your wraps; and then you must have supper. After that you
shall see him."
She bent over him, and he held her with his thin arms,
until they dropped through weakness. The breathing was
sorely oppressed. He seemed striving to say something, and
unable to bring it out.
"Yes: it is true," she said, "I did it! I wanted to get rid of
Lettice,—for Keith's sake! I could not have done such a
thing for anybody else's sake,—only for Keith! . . . Only for
Keith! . . . I know what I am saying! Don't stare at me—all
of you! I tell you—Lettice stole my boy's heart from me!
And she would have robbed him—robbed him of his rights! I
saw it all—and so—for his sake—my boy's sake—"
There was a bitter irony in the fact that her own idolised
boy should have been the one, of all others, to make known
her wrong-doing. Either, his resolution not to speak of it had
broken down under the weakness of suffering, or his
childish conscience had refused to let him pass away
without clearing Lettice from unjust accusation. Whichever
way it might be, Theodosia's cup was thereby filled to the
brim.
"Go away! Go and leave me!" she cried. "I want Keith!
Nobody but Keith! Lettice may have all your money now.
Now Keith is gone."
Dr. Bryant was still ashen pale, with the look of a man
who has received a severe blow. He came in front of Lettice,
and said, "My child, forgive me!"
Lettice was dumb, and Felix spoke for her. "She thought
she might divert suspicion upon me. I was in the room,
also, with the bank-note."
"Yes,—alone!"
"Always my child!"
Four days later, Keith was laid to rest in the little village
churchyard; and some who knew Theodosia well, said
plainly that it was a merciful stroke which had taken the boy
thus early away from her influence. Dr. Bryant, whatever he
might have felt, passed no such judgment. He uttered no
reproaches, and showed to his wife only a steadfast
compassion.
"If only I could live with them both!" she said often to
herself. But Felix was tied to the neighbourhood of London:
and that Dr. Bryant should be willing to quit his old home
was a notion which never so much as occurred to her
imagination. Everybody looked upon him as a fixture there.
"But—Felix!"
"I hope we shall not both get to care too much for
money," Lettice said seriously.
"Not much danger for you. That isn't your sort. And
mind—if you think I'm getting into an avaricious groove—
just speak and warn me, Lettice."
THE END.