Pro Tbb: C++ Parallel Programming with Threading Building Blocks 1st Edition Michael Voss pdf download
Pro TBB
C++ Parallel Programming with Threading Building Blocks
Michael Voss
Rafael Asenjo
James Reinders
Pro TBB: C++ Parallel Programming with Threading Building Blocks
Michael Voss, Austin, Texas, USA
Rafael Asenjo, Málaga, Spain
James Reinders, Portland, Oregon, USA
Table of Contents

Acknowledgments
Preface

Part 1

Chapter 1: Jumping Right In: “Hello, TBB!”
  Why Threading Building Blocks?
    Performance: Small Overhead, Big Benefits for C++
    Evolving Support for Parallelism in TBB and C++
    Recent C++ Additions for Parallelism
  The Threading Building Blocks (TBB) Library
    Parallel Execution Interfaces
    Interfaces That Are Independent of the Execution Model
    Using the Building Blocks in TBB
  Let’s Get Started Already!
    Getting the Threading Building Blocks (TBB) Library
    Getting a Copy of the Examples
    Writing a First “Hello, TBB!” Example
    Building the Simple Examples
    Building on Windows Using Microsoft Visual Studio
    Building on a Linux Platform from a Terminal
  A More Complete Example
    Starting with a Serial Implementation
    Adding a Message-Driven Layer Using a Flow Graph
    Adding a Fork-Join Layer Using a parallel_for
    Adding a SIMD Layer Using a Parallel STL Transform
cache_aligned_allocator
  Memory Pool Support: memory_pool_allocator
  Array Allocation Support: aligned_space
Replacing new and delete Selectively
Performance Tuning: Some Control Knobs
  What Are Huge Pages?
  TBB Support for Huge Pages
  scalable_allocation_mode(int mode, intptr_t value)
    TBBMALLOC_USE_HUGE_PAGES
    TBBMALLOC_SET_SOFT_HEAP_LIMIT
  int scalable_allocation_command(int cmd, void *param)
    TBBMALLOC_CLEAN_ALL_BUFFERS
    TBBMALLOC_CLEAN_THREAD_BUFFERS
Chapter 11: Controlling the Number of Threads Used for Execution
  A Brief Recap of the TBB Scheduler Architecture
  Interfaces for Controlling the Number of Threads
Chapter 12: Using Work Isolation for Correctness and Performance
  Work Isolation for Correctness
    Creating an Isolated Region with this_task_arena::isolate
  Using Task Arenas for Isolation: A Double-Edged Sword
    Don’t Be Tempted to Use task_arenas to Create Work Isolation for Correctness
Glossary
Index
About the Authors
Michael Voss is a Principal Engineer in the Intel Architecture, Graphics and Software
Group at Intel. He has been a member of the TBB development team since before the
1.0 release in 2006 and was the initial architect of the TBB flow graph API. He is also
one of the lead developers of Flow Graph Analyzer, a graphical tool for analyzing data
flow applications targeted at both homogeneous and heterogeneous platforms. He
has co-authored over 40 published papers and articles on topics related to parallel
programming and frequently consults with customers across a wide range of domains to
help them effectively use the threading libraries provided by Intel. Prior to joining Intel
in 2006, he was an Assistant Professor in the Edward S. Rogers Department of Electrical
and Computer Engineering at the University of Toronto. He received his Ph.D. from the
School of Electrical and Computer Engineering at Purdue University in 2001.
James Reinders is a consultant with more than three decades of experience in parallel
computing, and is an author/co-author/editor of nine technical books related to parallel
programming. He has had the great fortune to help make key contributions to two of
the world’s fastest computers (#1 on the Top500 list) as well as many other supercomputers
and software developer tools. James finished 10,001 days (over 27 years) at Intel in
mid-2016, and now continues to write, teach, program, and do consulting in areas related to
parallel computing (HPC and AI).
Acknowledgments
Two people offered their early and continuing support for this project – Sanjiv Shah and
Herb Hinstorff. We are grateful for their encouragement, support, and occasional gentle
pushes.
The real heroes are reviewers who invested heavily in providing thoughtful and
detailed feedback on draft copies of the chapters within this book. The high quality
of their input helped drive us to allow more time for review and adjustment than we
initially planned. The book is far better as a result.
The reviewers are a stellar collection of users of TBB and key developers of TBB. It
is rare for a book project to have such an energized and supportive base of help in
refining a book. Anyone reading this book can know it is better because of these kind
souls: Eduard Ayguade, Cristina Beldica, Konstantin Boyarinov, José Carlos Cabaleiro
Domínguez, Brad Chamberlain, James Jen-Chang Chen, Jim Cownie, Sergey Didenko,
Alejandro (Alex) Duran, Mikhail Dvorskiy, Rudolf (Rudi) Eigenmann, George Elkoura,
Andrey Fedorov, Aleksei Fedotov, Tomás Fernández Pena, Elvis Fefey, Evgeny Fiksman,
Basilio Fraguela, Henry Gabb, José Daniel García Sánchez, Maria Jesus Garzaran,
Alexander Gerveshi, Darío Suárez Gracia, Kristina Kermanshahche, Yaniv Klein, Mark
Lubin, Anton Malakhov, Mark McLaughlin, Susan Meredith, Yeser Meziani, David
Padua, Nikita Ponomarev, Anoop Madhusoodhanan Prabha, Pablo Reble, Arch Robison,
Timmie Smith, Rubén Gran Tejero, Vasanth Tovinkere, Sergey Vinogradov, Kyle Wheeler,
and Florian Zitzelsberger.
We sincerely thank all those who helped, and we apologize for any who helped us
and we failed to mention!
Mike (along with Rafa and James!) thanks all of the people who have been involved
in TBB over the years: the many developers at Intel who have left their mark on the
library, Alexey Kukanov for sharing insights as we developed this book, the open-source
contributors, the technical writers and marketing professionals that have worked on
documentation and getting the word out about TBB, the technical consulting engineers
and application engineers that have helped people best apply TBB to their problems, the
managers who have kept us all on track, and especially the users of TBB that have always
provided the feedback on the library and its features that we needed to figure out where
to go next. And most of all, Mike thanks his wife Natalie and their kids, Nick, Ali, and
Luke, for their support and patience during the nights and weekends spent on this book.
Rafa thanks his PhD students and colleagues for providing feedback regarding
making TBB concepts more gentle and approachable: José Carlos Romero, Francisco
Corbera, Alejandro Villegas, Denisa Andreea Constantinescu, Angeles Navarro;
particularly to José Daniel García for his engrossing and informative conversations about
C++11, 14, 17, and 20, to Aleksei Fedotov and Pablo Reble for helping with the OpenCL_
node examples, and especially his wife Angeles Navarro for her support and for taking
over some of his duties when he was mainly focused on the book.
James thanks his wife Susan Meredith – her patient and continuous support was
essential to making this book a possibility. Additionally, her detailed editing, which often
added so much red ink on a page that the original text was hard to find, made her one of
our valued reviewers.
As coauthors, we cannot adequately thank each other enough. Mike and James have
known each other for years at Intel and feel fortunate to have come together on this book
project. It is difficult to adequately say how much Mike and James appreciate Rafa! How
lucky his students are to have such an energetic and knowledgeable professor! Without
Rafa, this book would have been much less lively and fun to read. Rafa’s command of
TBB made this book much better, and his command of the English language helped
correct the native English speakers (Mike and James) more than a few times. The three
of us enjoyed working on this book together, and we definitely spurred each other on to
great heights. It has been an excellent collaboration.
We thank Todd Green who initially brought us to Apress. We thank Natalie Pao, of
Apress, and John Somoza, of Intel, who cemented the terms between Intel and Apress
on this project. We appreciate the hard work by the entire Apress team through contract,
editing, and production.
Thank you all,
Mike Voss, Rafael Asenjo, and James Reinders
Preface
Think Parallel
We have aimed to make this book useful for those who are new to parallel programming
as well as those who are expert in parallel programming. We have also made this book
approachable for those who are comfortable only with C programming, as well as those
who are fluent in C++.
In order to address this diverse audience without “dumbing down” the book, we
have written this Preface to level the playing field.
What Is TBB?
TBB is a solution for writing parallel programs in C++ that has become the most
popular and extensive support for parallel programming in the language. It is widely
used, and for good reason. More than 10 years old, TBB has stood the test
of time and has been influential in the inclusion of parallel programming support in
the C++ standard. While C++11 made major additions for parallel programming, and
C++17 and C++2x take that even further, most of what TBB offers goes well beyond
what belongs in a language standard. TBB was introduced in 2006, so it contains
support for pre-C++11 compilers. We have simplified matters by taking a modern
look at TBB and assuming C++11. Common advice today is “if you don’t have a
C++11 compiler, get one.” Compared with the 2007 book on TBB, we think C++11,
with lambda support in particular, makes TBB both richer and easier to understand
and use.
TBB is simply the best way to write a parallel program in C++, and we hope to help
you be very productive in using TBB.
Think Parallel
For those new to parallel programming, we offer this Preface to provide a foundation
that will make the remainder of the book more useful, approachable, and self-contained.
We have attempted to assume only a basic understanding of C programming and
introduce the key elements of C++ that TBB relies upon and supports. We introduce
parallel programming from a practical standpoint that emphasizes what makes parallel
programs most effective. For experienced parallel programmers, we hope this Preface
will be a quick read that provides a useful refresher on the key vocabulary and thinking
that allow us to make the most of parallel computer hardware.
After reading this Preface, you should be able to explain what it means to “Think
Parallel” in terms of decomposition, scaling, correctness, abstraction, and patterns.
You will appreciate that locality is a key concern for all parallel programming. You
will understand the philosophy of supporting task programming instead of thread
programming – a revolutionary development in parallel programming supported by TBB.
You will also understand the elements of C++ programming that are needed above and
beyond a knowledge of C in order to use TBB well.
The remainder of this Preface contains five parts:
(1) An explanation of the motivations behind TBB (begins on page xxi)
With these definitions in mind, a program written in terms of threads would have
to map each algorithm onto specific systems of hardware and software. This is not only
a distraction, it causes a whole host of issues that make parallel programming more
difficult, less effective, and far less portable.
Whereas, a program written in terms of tasks allows a runtime mechanism, for
example, the TBB runtime, to map tasks onto the hardware which is actually present at
runtime. This removes the distraction of worrying about the number of actual hardware
threads available on a system. More importantly, in practice this is the only method
which opens up nested parallelism effectively. This is such an important capability that
we will revisit and emphasize the importance of nested parallelism in several chapters.
When a program exposes available nonmandatory parallelism, the runtime is free to use
that information to match the capabilities of the machine in the most effective manner.
We have come to expect composability in our programming languages, but most
parallel programming models have failed to preserve it (fortunately, TBB does preserve
composability!). Consider “if” and “while” statements. The C and C++ languages allow
them to freely mix and nest as we desire. Imagine this was not so, and we lived in a world
where a function called from within an if statement was forbidden to contain a while
statement! Hopefully, any suggestion of such a restriction seems almost silly. TBB brings
this type of composability to parallel programming by allowing parallel constructs to be
freely mixed and nested without restrictions, and without causing issues.
WHAT IS SPEEDUP?
Speedup is formally defined to be the time to run sequentially (not in parallel) divided by the
time to run in parallel. If my program runs in 3 seconds normally, but in only 1 second on a
quad-core processor, we would say it has a speedup of 3×. Sometimes, we might speak of
efficiency which is speedup divided by the number of processing cores. Our 3× would be 75%
efficient at using the parallelism.
The ideal goal of a 16× gain in performance when moving from a quad-core machine
to one with 64 cores is called linear scaling or perfect scaling.
To accomplish this, we need to keep all the cores busy as we grow their
numbers – something that requires considerable available parallelism. We will dive
more into this concept of “available parallelism” starting on page xxxvii when we discuss
Amdahl’s Law and its implications.
For now, it is important to know that TBB supports high-performance programming
and helps significantly with performance portability. The high-performance support
comes because TBB introduces essentially no overhead which allows scaling to proceed
without issue. Performance portability lets our application harness available parallelism
as new machines offer more.
In our confident claims here, we are assuming a world where the slight additional
overhead of dynamic task scheduling is the most effective at exposing the parallelism
and exploiting it. This assumption has one fault: if we can program an application to
perfectly match the hardware, without any dynamic adjustments, we may find a few
percentage points gain in performance. Traditional High-Performance Computing
(HPC) programming, the name given to programming the world’s largest computers
for intense computations, has long had this characteristic in highly parallel scientific
computations. HPC developers who utilize OpenMP with static scheduling, and find that
it performs well for them, may find the dynamic nature of TBB to be a slight
reduction in performance. Any advantage previously seen from such static scheduling is
becoming rarer for a variety of reasons. All programming, including HPC programming,
is increasing in complexity in a way that demands support for nested and dynamic
parallelism. We see this in all aspects of HPC programming as well, including
growth to multiphysics models, introduction of AI (artificial intelligence), and use of ML
(machine learning) methods. One key driver of additional complexity is the increasing
diversity of hardware, leading to heterogeneous compute capabilities within a single
machine. TBB gives us powerful options for dealing with these complexities, including
its flow graph features which we will dive into in Chapter 3.
• Long lines: When you have to wait in a long line, you have
undoubtedly wished there were multiple shorter (faster) lines, or
multiple people at the front of the line helping serve customers more
quickly. Grocery store check-out lines, lines to get train tickets, and
lines to buy coffee are all examples.
• Lots of repetitive work: When you have a big task to do, which many
people could help with at the same time, you have undoubtedly wished for
more people to help you. Moving all your possessions from an old dwelling
to a new one, stuffing letters in envelopes for a mass mailing, and installing
the same software on each new computer in your lab are examples. The
proverb “Many hands make light work” holds true for computers too.
Once you dig in and start using parallelism, you will Think Parallel. You will learn to
think first about the parallelism in your project, and only then think about coding it.
Figure P-1. Parallel vs. Concurrent: Tasks (A) and (B) are concurrent relative to
each other but not parallel relative to each other; all other combinations are both
concurrent and parallel
Enemies of Parallelism
Bearing in mind the enemies of parallel programming will help us understand our advocacy
for particular programming methods. Key parallel programming enemies include
• Not “Thinking Parallel”: Use of clever bandages and patches will not
make up for a poorly thought out strategy for scalable algorithms.
Knowing where the parallelism is available, and how it can be
• Forgetting that algorithms win: This may just be another way to say
“Think Parallel.” The choice of algorithms has a profound effect on
the scalability of applications. Our choice of algorithms determines
how tasks can be divided, how data structures are accessed, and how results are
coalesced. The optimal algorithm is really the one which serves as the
basis for an optimal solution. An optimal solution is a combination of the
appropriate algorithm, with the best matching parallel data structure,
and the best way to schedule the computation over the data. The
search for, and discovery of, algorithms which are better is seemingly
unending for all of us as programmers. Now, as parallel programmers,
we must add scalable to the definition of better for an algorithm.
Terminology of Parallelism
The vocabulary of parallel programming is something we need to learn in order to
converse with other parallel programmers. None of the concepts are particularly hard,
but they are very important to internalize. A parallel programmer, like any programmer,
spends years gaining a deep intuitive feel for their craft, despite the fundamentals being
simple enough to explain.
We will discuss decomposition of work into parallel tasks, scaling terminology,
correctness considerations, and the importance of locality due primarily to cache effects.
When we think about our application, how do we find the parallelism?
At the highest level, parallelism exists either in the form of data to operate on
in parallel, or in the form of tasks to execute in parallel. And they are not mutually
exclusive. In a sense, all of the important parallelism is in data parallelism. Nevertheless,
we will introduce both because it can be convenient to think of both. When we discuss
scaling, and Amdahl’s Law, our intense bias to look for data parallelism will become
more understandable.
Discovering Diverse Content Through
Random Scribd Documents
“That’s so,” admitted Paul in a tone of deep disappointment.
“How much did you say the debt amounted to?” asked
Amesbury.
“Eighteen dollars for each of us,” answered Paul, “but we’ve
been here working two months with wages, and that takes off six
dollars from each debt, so the first of the month our debts’ll each be
down to twelve dollars.”
“Good arithmetic; worked it out right the first time,” Amesbury
nodded in approval. “Now if you each pay the old pirate twelve
dollars, how much will you owe him and how long can he hold you
at the post?”
“Why the debt would be squared and he couldn’t keep us at all.”
“Right again.”
“But we has no money to pay un,” broke in Dan.
“Just leave all that to me,” counseled Amesbury. “I’ll attend to
his case.”
“Oh, thank you, Mr. Amesbury,” and Paul grasped the trapper’s
hand.
“’Tis wonderful kind of you,” said Dan.
“Don’t waste your words thanking me,” cautioned Amesbury.
“Wait till I get you out in the bush. I’ll get my money’s worth out of
you chaps.”
T AMMAS, Samuel, and Amos, who had spent the day caribou
hunting, but had killed nothing, were gathered around the stove
engaged in a heated argument as to whether a caribou would or
would not charge a man when at close quarters, when Paul and Dan
entered with the visitors.
“Weel! Weel!” exclaimed Tammas, rising. “If ’tis no Charley
Amesbury and John Buck wi’ the laddies!”
Amesbury and Ahmik were old visitors at the post. Every one
knew them and gave them a most hearty welcome. Even Chuck,
who was mixing biscuit for supper, wiped his dough-debaubed right
hand upon his trousers, that he might offer it to the visitors, and
Jerry, who lived with his family in a little nearby cabin, and had seen
them pass, came over to greet them.
Amesbury warned the lads to say nothing of their plan to the
post folk. “I’ll break the news gently to Davy MacTavish when the
time is ripe for it,” said he. “You fellows keep right at your work as
though you were to stay here forever.” And therefore no mention
was made of the arrangement to Tammas and the others.
During the days that followed Amesbury and Ahmik made some
purchases at the post shop, including the provisions necessary for
the return journey to their trapping grounds. They had no debt here,
and therefore bartered pelts to pay for their purchases. Their trading
completed, Amesbury produced two particularly fine marten skins,
and laid them upon the counter. “I’ve got everything I need,” said
he, “but I don’t want to carry these back with me. How much’ll you
give?”
“Trade or cash?” asked MacTavish, examining them critically.
“Trade. Give me credit for ’em. I may want something more
before I go.”
“Ten dollars each.”
“Not this time. They’re prime, and they’re worth forty dollars
apiece in Winnipeg.”
“This isn’t Winnipeg.”
“Give them back. They’re light to pack, and I guess I’ll take
them to Winnipeg.”
But MacTavish was gloating over them. They were glossy black,
remarkably well furred, the flesh side clean and white.
“They are pretty fair martens,” he said finally, as though
weighing the matter. “I may do a little better; say fifteen dollars.”
“I’ll take them to Winnipeg.”
“You can’t get Winnipeg prices here.”
“No, but I don’t have to sell them here. I thought if you’d give
me half what they’re worth I’d let you have them. You can keep
them for twenty dollars each. Not a cent less.”
“Can’t do it, but I’ll say as a special favor to you eighteen
dollars.”
“Hand them back. I’m not an Indian.”
“You know I’d not give an Indian over five dollars.”
“I know that, but I don’t ask for a debt. You see I’m pretty free
to do as I please. Hand ’em back.”
But the pelts were too good for MacTavish to let pass him, and
after a show of hesitancy he placed them upon the shelf behind him
and said reluctantly:
“They’re not worth it, but I’ll allow you twenty dollars each for
them. But it’s a very special favor.”
“Needn’t if you don’t want them. I wouldn’t bankrupt the
company for the world.”
“I’ll take them.”
The bargain concluded, Amesbury strolled away, humming:
When they laid down the saw to place another stick on the
buck, he said:
“Never mind that. You chaps come along with me, and we’ll pay
our respects to Mr. MacTavish.”
“Oh, have you told him we were going? I was almost afraid
you’d forgotten it!” exclaimed Paul exultantly.
“Never a word. Reserved the entertainment for an audience,
and you fellows are to be the audience. Come along; he’s in his
office now,” and Amesbury strode toward the office, Paul and Dan
expectantly following.
MacTavish glanced up from his desk as they entered, and
nodding to Amesbury, who had advanced to the center of the room,
noticed Paul and Dan near the door.
“What are you fellows knocking off work at this time of day for?
Get back to work, and if you want anything, come around after
hours.”
“They’ve knocked off for good,” Amesbury answered for them,
his eyes reflecting amusement. “They’re going trapping with me up
Indian Lake way. I’m sorry to deprive you of them, but I guess I’ll
have to.”
“What!” roared MacTavish, jumping to his feet. “Are you
inducing those boys to desert? What does this nonsense mean?”
“Yes, they’re going. Sorry you feel so badly at losing their
society, but I don’t see any way out of it.”
“Well, they’re not going.” MacTavish spoke more quietly, but
with determination, glowering at Amesbury. “They have a debt here
and they will stay until it is worked out. They’ve signed articles to
remain here until the debt is worked out, and I will hold them under
the articles. You fellows go back to your work.”
“We’re not going to work for you any more,” said Paul, his anger
rising. “Mr. Amesbury has told you we’re going with him, and we
are.”
“Go back to your work, I say, or I’ll have you flogged!”
MacTavish was now in a rage, and he made for the lads as though to
strike them, only to find the ungainly figure of Amesbury in the way.
“Tut! Tut! Big Jack Blunderbuss trying to strike the little
Tiddledewinks! Fine display of courage! But not this time. No
pugilistic encounters with any one but me while I’m around, and my
hands have an awful itch to get busy.”
“None of your interference in the affairs of this post!” bellowed
MacTavish. “You’re breeding mutiny here, and I’ve a mind to run you
off the reservation.”
“Hey diddle diddle,” broke in Amesbury, who had not for a
moment lost his temper, and who fairly oozed good humor. “This
isn’t seemly in a man in your position, MacTavish. Now let’s be
reasonable. Sit down and talk the matter over.”
“There’s nothing to talk over with you!” shouted MacTavish, who
nevertheless resumed his seat.
“Well, now, we’ll see.” Amesbury drew a chair up, sat down in
front of MacTavish, and leaning forward assumed a confidential
attitude. “In the first place,” he began, “the lads owe a debt, you
say, and you demand that it be paid.”
“They can’t leave here until it is paid! They can’t leave anyhow!”
still in a loud voice.
“No, no; of course not. That’s what we’ve got to talk about. I’ll
pay the debt. Now, how much is it?”
“That won’t settle it. They both signed on here for at least six
months, at three dollars a month, and they’ve got to stay the six
months.”
“Now you know, MacTavish, they are both minors and under the
law they are not qualified to make such a contract with you. Even
were they of age, there isn’t a court within the British Empire but
would adjudge such a contract unconscionable, and throw it out
upon the ground that it was signed under duress. You couldn’t hire
Indians to do the work these lads have done under twelve dollars a
month. In all justice you owe them a balance, for they’ve more than
worked out their debt.”
“I’m the court here, and I’m the judge, and I’m going to keep
these fellows right here.”
“Wrong in this case. There’s no law or court here except the law
and the court of the strong arm. Now I’ve unanimously elected
myself judge, jury and sheriff to deal with this matter. In these
various capacities I’ve decided their debt is paid and they’re going
with me. As their friend and your friend, however, I’ve suggested for
the sake of good feeling that they pay the balance you claim is due
you under the void agreement, and I offer to make settlement in full
now. I believe you claim twelve dollars due from each—twenty-four
dollars in all?”
It was plain that Amesbury had determined to carry out the plan
detailed, with or without the factor’s consent, and finally MacTavish
agreed to release Paul and Dan, and charge the twenty-four dollars
which he claimed still due on their debt against the forty dollars
credited to Amesbury for the two marten skins. He declared,
however, that had he known Amesbury’s intention he would not have
accepted a pelt from him, nor would he have sold Amesbury the
provisions necessary to support him and the lads on their journey to
Indian Lake.
“You can never trade another shilling’s worth at this post,”
announced MacTavish as the three turned to the door, “not another
shilling’s worth.”
“Now, now, MacTavish,” said Amesbury, smiling, “you know
better. I’ve a credit here that I’ll come back to trade out, and I’ll
have some nice pelts that you’ll be glad enough to take from me.”
“Not a shilling’s worth,” repeated the factor, whose anger was
not appeased when he heard Amesbury humming, as he passed out
of the door:
THERE was yet no hint of dawn. Moon and stars shone cold and
white out of a cold, steel-blue sky. The moisture of the frozen
atmosphere, shimmering particles of frost, hung suspended in space.
The snow crunched and creaked under their swiftly moving
snowshoes.
They traveled in single file, after the fashion of the woods.
Amesbury led, then followed Ahmik, after him Paul, with Dan
bringing up the rear. Each hauled a toboggan, and though Paul’s and
Dan’s were much less heavily laden than Amesbury’s and Ahmik’s,
the lads had difficulty in keeping pace with the long, swinging half-
trot of the trapper and Indian.
Presently they entered the spruce forest of a river valley, dead
and cold, haunted by weird shadows, flitting ghostlike hither and
thither across ghastly white patches of moonlit snow. Now and again
a sharp report, like a pistol shot, startled them. It was the action of
frost upon the trees, a sure indication of extremely low temperature.
Dawn at length began to break—slowly—slowly—dispersing the
grotesque and ghostlike shadows. As dawn melted into day, the real
took the place of the unreal, and the frigid white wilderness that had
engulfed them presented its true face to the adventurous travelers.
Scarce a word was spoken as they trudged on. Amesbury and
Ahmik kept the silence born of long life in the wilderness where men
exist by pitting human skill against animal instinct, and learn from
the wild creatures they stalk the lesson of necessary silence and
acute listening. Dan, too, in his hunting experiences with his father,
had learned to some degree the same lesson, and Paul had small
inclination to talk, for he needed all his breath to hold the rapid
pace.
Rime had settled upon their clothing, and dawn revealed them
white as the snow over which they passed. The moisture from their
eyes froze upon their eyelashes, and now and again it was found
necessary to pick it off, painfully, as they walked.
The sun was two hours high when Amesbury and Ahmik
suddenly halted, and when Paul and Dan, who had fallen
considerably in the rear, overtook them, Ahmik was cutting wood,
while Amesbury, lighting a fire, was singing:
“‘Polly put the kettle on,
Polly put the kettle on,
Polly put the kettle on,
And let’s drink tea.’”