Parallel Computing Toolbox™
User's Guide
R2020a
How to Contact MathWorks
Phone: 508-647-7000
Contents

1  Getting Started
   Parallel Computing Toolbox Product Description ............ 1-2

2  Parallel for-Loops (parfor)
   Decide When to Use parfor ................................. 2-2
      parfor-Loops in MATLAB ................................. 2-2
      Deciding When to Use parfor ............................ 2-2
      Example of parfor With Low Parallel Overhead ........... 2-3
      Example of parfor With High Parallel Overhead .......... 2-4
   Temporary Variables ...................................... 2-48
      Uninitialized Temporaries ............................. 2-48
      Temporary Variables Intended as Reduction Variables ... 2-49
      ans Variable .......................................... 2-49

   Load Distributed Arrays in Parallel Using datastore ...... 3-10
   Alternative Methods for Creating Distributed and
      Codistributed Arrays .................................. 3-12

5  Programming Overview
   How Parallel Computing Products Run a Job ................. 5-2
      Overview ............................................... 5-2
      Toolbox and Server Components .......................... 5-3
      Life Cycle of a Job .................................... 5-6
   Apply Callbacks to MATLAB Job Scheduler Jobs and Tasks ... 5-21

6  Program Independent Jobs
   Program Independent Jobs .................................. 6-2

8  GPU Computing
   GPU Capabilities and Performance .......................... 8-2
      Capabilities ........................................... 8-2
      Performance Benchmarking ............................... 8-2
   Establish Arrays on a GPU ................................. 8-3
      Create GPU Arrays from Existing Data ................... 8-3
      Create GPU Arrays Directly ............................. 8-4
      Examine gpuArray Characteristics ....................... 8-4
      Save and Load gpuArrays ................................ 8-5

9  Objects

10 Functions
1  Getting Started
Parallel Computing Toolbox lets you solve computationally and data-intensive problems using
multicore processors, GPUs, and computer clusters. High-level constructs—parallel for-loops, special
array types, and parallelized numerical algorithms—enable you to parallelize MATLAB® applications
without CUDA or MPI programming. The toolbox lets you use parallel-enabled functions in MATLAB
and other toolboxes. You can use the toolbox with Simulink® to run multiple simulations of a model in
parallel. Programs and models can run in both interactive and batch modes.
The toolbox lets you use the full processing power of multicore desktops by executing applications on
workers (MATLAB computational engines) that run locally. Without changing the code, you can run
the same applications on clusters or clouds (using MATLAB Parallel Server™). You can also use the
toolbox with MATLAB Parallel Server to execute matrix calculations that are too large to fit into the
memory of a single machine.
Parallel Computing Support in MathWorks Products
Most MathWorks products enable you to run applications in parallel. For example, Simulink models
can run simultaneously in parallel, as described in “Run Multiple Simulations” (Simulink). MATLAB
Compiler™ and MATLAB Compiler SDK™ software let you build and deploy parallel applications; for
example, see the “Parallel Computing” section of MATLAB Compiler “Standalone Applications”
(MATLAB Compiler).
Several MathWorks products now offer built-in support for the parallel computing products, without
requiring extra coding. For the current list of these products and their parallel functionality, see:
https://www.mathworks.com/products/parallel-computing/parallel-support.html
Create and Use Distributed Arrays
In this section...
“Creating Distributed Arrays” on page 1-4
“Creating Codistributed Arrays” on page 1-5
If your data is currently in the memory of your local machine, you can use the distributed function
to distribute an existing array from the client workspace to the workers of a parallel pool.
Distributed arrays use the combined memory of multiple workers in a parallel pool to store the
elements of an array. For alternative ways of partitioning data, see “Distributing Arrays to Parallel
Workers” on page 3-10. You can use distributed arrays to scale up your big data computation.
Consider distributed arrays when you have access to a cluster, as you can combine the memory of
multiple machines in your cluster.
A distributed array is a single variable, split over multiple workers in your parallel pool. You can
work with this variable as one single entity, without having to worry about its distributed nature.
Explore the functionalities available for distributed arrays in the Parallel Computing Toolbox:
“Run MATLAB Functions with Distributed Arrays” on page 4-19.
When you create a distributed array, you cannot control the details of the distribution. On the
other hand, codistributed arrays allow you to control all aspects of distribution, including
dimensions and partitions. In the following, you learn how to create both distributed and
codistributed arrays.
• Use the distributed function to distribute an existing array from the client workspace to the
workers of a parallel pool.
• You can construct a distributed array directly on the workers, without first creating the
array in the client, which reduces client workspace memory requirements. The functions
available include eye(___,'distributed'), rand(___,'distributed'), and so on. For a full list,
see the distributed object reference page.
• Create a codistributed array inside an spmd statement; see “Single Program Multiple Data
(spmd)” on page 1-12. Then access it as a distributed array outside the spmd statement. This
lets you use distribution schemes other than the default.
In this example, you create an array in the client workspace, then turn it into a distributed array:
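The example's code can be sketched as follows (a minimal sketch; the pool size of four and the use of magic to create the client array are illustrative, matching the toolbox's typical examples):

```matlab
parpool('local', 4);   % start a pool of four local workers (if one is not already open)
A = magic(4);          % create an ordinary 4-by-4 array in the client workspace
B = distributed(A);    % distribute A over the workers of the pool as B
whos                   % B is listed in the client workspace as a distributed array
```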
You have created B as a distributed array, split over the workers in your parallel pool. This is
shown in the figure.
• “Partitioning a Larger Array” on page 4-6 — Start with a large array that is replicated on all
workers, and partition it so that the pieces are distributed across the workers. This is most useful
when you have sufficient memory to store the initial replicated array.
• “Building from Smaller Arrays” on page 4-6 — Start with smaller replicated arrays stored on
each worker, and combine them so that each array becomes a segment of a larger codistributed
array. This method reduces memory requirements as it lets you build a codistributed array from
smaller pieces.
• “Using MATLAB Constructor Functions” on page 4-7 — Use any of the MATLAB constructor
functions like rand or zeros with a codistributor object argument. These functions offer a quick
means of constructing a codistributed array of any size in just one step.
In this example, you create a codistributed array inside an spmd statement, using a nondefault
distribution scheme. First, define 1-D distribution along the third dimension, with 4 parts on worker
1, and 12 parts on worker 2. Then create a 3-by-3-by-16 array of zeros.
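That example can be sketched as follows (a minimal sketch; it assumes a pool with at least two workers):

```matlab
spmd
    % 1-D distribution along dimension 3: 4 slices on worker 1, 12 on worker 2
    codist = codistributor1d(3, [4, 12]);
    Z = zeros(3, 3, 16, codist);   % each worker holds its own portion of Z
end
Z   % outside the spmd block, Z is accessible from the client as a distributed array
```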
For more details on codistributed arrays, see “Working with Codistributed Arrays” on page 4-4.
See Also
Related Examples
• “Distributing Arrays to Parallel Workers” on page 3-10
• “Big Data Workflow Using Tall Arrays and Datastores” on page 5-46
• “Single Program Multiple Data (spmd)” on page 1-12
Determine Product Installation and Versions

To determine whether Parallel Computing Toolbox software is installed on your system, type this
command at the MATLAB prompt:

ver
When you enter this command, MATLAB displays information about the version of MATLAB you are
running, including a list of all toolboxes installed on your system and their version numbers.
If you want to run your applications on a cluster, see your system administrator to verify that the
version of Parallel Computing Toolbox you are using is the same as the version of MATLAB Parallel
Server installed on your cluster.
Interactively Run a Loop in Parallel Using parfor
This example calculates the spectral radius of a matrix and converts a for-loop into a parfor-loop.
Find out how to measure the resulting speedup.
1 In the MATLAB Editor, enter the following for-loop. Add tic and toc to measure the time
elapsed.
tic
n = 200;
A = 500;
a = zeros(n);
for i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
2 Run the script, and note the elapsed time.
Elapsed time is 31.935373 seconds.
3 In the script, replace the for-loop with a parfor-loop.
tic
n = 200;
A = 500;
a = zeros(n);
parfor i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
4 Run the new script, and run it again. Note that the first run is slower than the second run,
because the parallel pool takes some time to start and make the code available to the workers.
Note the elapsed time for the second run.
By default, MATLAB automatically opens a parallel pool of workers on your local machine.
Starting parallel pool (parpool) using the 'local' profile ... connected to 4 workers.
...
Elapsed time is 10.760068 seconds.
The parfor run on four workers is about three times faster than the corresponding for-loop
run. The speed-up is smaller than the ideal speed-up of a factor of four on four workers. This is
due to parallel overhead, including the time required to transfer data from the client to the
workers and back. This example shows a good speed-up with relatively small parallel overhead,
and benefits from conversion into a parfor-loop. Not all for-loop iterations can be turned into
faster parfor-loops. To learn more, see “Decide When to Use parfor” on page 2-2.
One key requirement for using parfor-loops is that the individual iterations must be independent.
Independent problems suitable for parfor processing include Monte Carlo simulations and
parameter sweeps. For next steps, see “Convert for-Loops Into parfor-Loops” on page 2-7.
In this example, you managed to speed up the calculation by converting the for-loop into a
parfor-loop on four workers. You might reduce the elapsed time further by increasing the number
of workers in your parallel pool; see “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21.
You can modify your cluster profiles to control how many workers run your loops, and whether the
workers are local or on a cluster. For more information on profiles, see “Discover Clusters and Use
Cluster Profiles” on page 5-11.
Modify your parallel preferences to control whether a parallel pool is created automatically, and how
long it remains available before timing out. For more information on preferences, see “Specify Your
Parallel Preferences” on page 5-9.
You can run Simulink models in parallel with the parsim command instead of using parfor-loops.
For more information and examples of using Simulink in parallel, see “Run Multiple Simulations”
(Simulink).
See Also
parfor | parpool | tic | toc
More About
• “Decide When to Use parfor” on page 2-2
• “Convert for-Loops Into parfor-Loops” on page 2-7
• “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21
Run Batch Parallel Jobs
5 batch does not block MATLAB, and you can continue working while computations take place. If
you need to block MATLAB until the job finishes, use the wait function on the job object.
wait(job)
6 After the job finishes, you can retrieve and view its results. The load command transfers
variables created on the worker to the client workspace, where you can view the results:
load(job,'A')
plot(A)
7 When the job is complete, permanently delete its data and remove its reference from the
workspace:
delete(job)
clear job
batch runs your code on a local worker or a cluster worker, but does not require a parallel pool.
You can use batch to run either scripts or functions. For more details, see the batch reference page.
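Both forms can be sketched as follows (a hedged sketch; the script name mywave and the use of @rand are illustrative):

```matlab
js = batch('mywave');          % run the script file mywave.m on a worker
jf = batch(@rand, 1, {3, 3});  % run rand(3,3) on a worker, requesting 1 output
wait(jf);
out = fetchOutputs(jf);        % cell array of outputs; out{1} is the 3-by-3 result
delete(js); delete(jf);        % clean up the jobs when finished
```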
Run a Batch Job with a Parallel Pool

You can combine the abilities to offload a job and run a loop in a parallel pool. The following
steps create a script containing a parfor-loop and run it as a batch job.
1 To create the script, type:
edit mywave
2 In the Editor, enter the following parfor-loop:
parfor i = 1:1024
A(i) = sin(i*2*pi/1024);
end
3 Save the file and close the Editor.
4 Run the script in MATLAB with the batch command. Indicate that the script should use a
parallel pool for the loop:
job = batch('mywave','Pool',3)
This command specifies that three workers (in addition to the one running the batch script) are
to evaluate the loop iterations. Therefore, this example uses a total of four local workers,
including the one worker running the batch script. Altogether, there are five MATLAB sessions
involved, as shown in the following diagram.
5 To view the results:
wait(job)
load(job,'A')
plot(A)
The results look the same as before; however, there are two important differences in execution:
• The work of defining the parfor-loop and accumulating its results is offloaded to another
MATLAB session by batch.
• The loop iterations are distributed from one MATLAB worker to another set of workers
running simultaneously ('Pool' and parfor), so the loop might run faster than having only
one worker execute it.
6 When the job is complete, permanently delete its data and remove its reference from the
workspace:
delete(job)
clear job
Running a script as a batch from the browser uses only one worker from the cluster. So even if the
script contains a parfor loop or spmd block, it does not open an additional pool of workers on the
cluster. These code blocks execute on the single worker used for the batch job. If your batch script
requires opening an additional pool of workers, you can run it from the command line, as described in
“Run a Batch Job with a Parallel Pool” on page 1-9.
When you run a batch job from the browser, this also opens the Job Monitor. The Job Monitor is a tool
that lets you track your job in the scheduler queue. For more information about the Job Monitor and
its capabilities, see “Job Monitor” on page 5-24.
See Also
batch
Related Examples
• “Run Batch Job and Access Files from Workers”
Distribute Arrays and Run SPMD
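The discussion below refers to arrays M and MM; their setup can be sketched as follows (a minimal sketch, assuming M is created with magic as in the toolbox's other examples):

```matlab
M = magic(4);         % create an ordinary 4-by-4 array on the client
MM = distributed(M);  % distribute M over the workers of the parallel pool as MM
```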
Now MM is a distributed array, equivalent to M, and you can manipulate or access its elements in the
same way as any other array.
M2 = 2*MM; % M2 is also distributed, calculation performed on workers
x = M2(1,1) % x on the client is set to first element of M2
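An spmd statement runs its body on every worker in the pool at once; for example (a minimal sketch):

```matlab
spmd
    R = rand(4, 4);   % each worker builds its own 4-by-4 random matrix
end
```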
This code creates an individual 4-by-4 matrix, R, of random numbers on each worker in the pool.
Composites
Following an spmd statement, in the client context, the values from the block are accessible, even
though the data is actually stored on the workers. On the client, these variables are called Composite
objects. Each element of a Composite is a symbol referencing the value (data) on a worker in the pool.
Note that because a variable might not be defined on every worker, a Composite might have
undefined elements.
Continuing with the example from above, on the client, the Composite R has one element for each
worker:
X = R{3}; % Set X to the value of R from worker 3.
The line above retrieves the data from worker 3 to assign the value of X. The following code sends
data to worker 3:
X = X + 2;
R{3} = X; % Send the value of X from the client to worker 3.
If the parallel pool remains open between spmd statements and the same workers are used, the data
on each worker persists from one spmd statement to another.
spmd
R = R + labindex % Use values of R from previous spmd.
end
A typical use for spmd is to run the same code on a number of workers, each of which accesses a
different set of data. For example:
spmd
INP = load(['somedatafile' num2str(labindex) '.mat']);
RES = somefun(INP)
end
Then the values of RES on the workers are accessible from the client as RES{1} from worker 1,
RES{2} from worker 2, etc.
There are two forms of indexing a Composite, comparable to indexing a cell array.
Although data persists on the workers from one spmd block to another as long as the parallel pool
remains open, data does not persist from one instance of a parallel pool to another. That is, if the pool
is deleted and a new one created, all data from the first pool is lost.
For more information about using distributed arrays, spmd, and Composites, see “Distributed
Arrays”.
• Accelerate your code using interactive parallel computing tools, such as parfor and parfeval
• Scale up your computation using interactive Big Data processing tools, such as distributed,
tall, datastore, and mapreduce
• Use gpuArray to speed up your calculation on the GPU of your computer
• Use batch to offload your calculation to computer clusters or cloud computing facilities
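The gpuArray workflow listed above can be sketched as follows (a hedged sketch; it requires a supported GPU, and the FFT computation is illustrative):

```matlab
G = gpuArray(rand(1000));   % copy data from client memory to the GPU
F = fft(G);                 % fft runs on the GPU because its input is a gpuArray
result = gather(F);         % copy the result back to the client workspace
```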
• Node: standalone computer, containing one or more CPUs / GPUs. Nodes are networked to form a
cluster or supercomputer
• Thread: smallest set of instructions that can be managed independently by a scheduler. On a GPU,
multiprocessor or multicore system, multiple threads can be executed simultaneously (multi-
threading)
• Batch: offload execution of a function or script to run in the background
• Scalability: increase in parallel speedup with the addition of more resources
• MATLAB workers: MATLAB computational engines that run in the background without a graphical
desktop. You use functions in the Parallel Computing Toolbox to automatically divide tasks and
assign them to these workers to execute the computations in parallel. You can run local workers to
take advantage of all the cores in your multicore desktop computer. You can also scale up to run
your workers on a cluster of machines, using the MATLAB Parallel Server. The MATLAB session
you interact with is known as the MATLAB client. The client instructs the workers with parallel
language functions.
• Parallel pool: a parallel pool of MATLAB workers created using parpool or functions with
automatic parallel support. By default, parallel language functions automatically create a parallel
pool for you when necessary. To learn more, see “Run Code on Parallel Pools” on page 2-56.
For the default local profile, the default number of workers is one per physical CPU core using a
single computational thread. This is because even though each physical core can have several
virtual cores, the virtual cores share some resources, typically including a shared floating point
unit (FPU). Most MATLAB computations use this unit because they are double-precision floating
point. Restricting to one worker per physical core ensures that each worker has exclusive access
to a floating point unit, which generally optimizes performance of computational code. If your
code is not computationally intensive, for example, it is input/output (I/O) intensive, then consider
using up to two workers per physical core.