Parallel Computing Toolbox™
User's Guide

R2020a
How to Contact MathWorks

Latest news: www.mathworks.com

Sales and services: www.mathworks.com/sales_and_services

User community: www.mathworks.com/matlabcentral

Technical support: www.mathworks.com/support/contact_us

Phone: 508-647-7000

The MathWorks, Inc.


1 Apple Hill Drive
Natick, MA 01760-2098
Parallel Computing Toolbox™ User's Guide
© COPYRIGHT 2004–2020 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied
only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form
without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through
the federal government of the United States. By accepting delivery of the Program or Documentation, the government
hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer
software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014.
Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain
to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and
Documentation by the federal government (or other entity acquiring for or through the federal government) and shall
supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is
inconsistent in any respect with federal procurement law, the government agrees to return the Program and
Documentation, unused, to The MathWorks, Inc.
Trademarks
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See
www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be
trademarks or registered trademarks of their respective holders.
Patents
MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for
more information.
Revision History
November 2004 Online only New for Version 1.0 (Release 14SP1+)
March 2005 Online only Revised for Version 1.0.1 (Release 14SP2)
September 2005 Online only Revised for Version 1.0.2 (Release 14SP3)
November 2005 Online only Revised for Version 2.0 (Release 14SP3+)
March 2006 Online only Revised for Version 2.0.1 (Release 2006a)
September 2006 Online only Revised for Version 3.0 (Release 2006b)
March 2007 Online only Revised for Version 3.1 (Release 2007a)
September 2007 Online only Revised for Version 3.2 (Release 2007b)
March 2008 Online only Revised for Version 3.3 (Release 2008a)
October 2008 Online only Revised for Version 4.0 (Release 2008b)
March 2009 Online only Revised for Version 4.1 (Release 2009a)
September 2009 Online only Revised for Version 4.2 (Release 2009b)
March 2010 Online only Revised for Version 4.3 (Release 2010a)
September 2010 Online only Revised for Version 5.0 (Release 2010b)
April 2011 Online only Revised for Version 5.1 (Release 2011a)
September 2011 Online only Revised for Version 5.2 (Release 2011b)
March 2012 Online only Revised for Version 6.0 (Release 2012a)
September 2012 Online only Revised for Version 6.1 (Release 2012b)
March 2013 Online only Revised for Version 6.2 (Release 2013a)
September 2013 Online only Revised for Version 6.3 (Release 2013b)
March 2014 Online only Revised for Version 6.4 (Release 2014a)
October 2014 Online only Revised for Version 6.5 (Release 2014b)
March 2015 Online only Revised for Version 6.6 (Release 2015a)
September 2015 Online only Revised for Version 6.7 (Release 2015b)
March 2016 Online only Revised for Version 6.8 (Release 2016a)
September 2016 Online only Revised for Version 6.9 (Release 2016b)
March 2017 Online only Revised for Version 6.10 (Release 2017a)
September 2017 Online only Revised for Version 6.11 (Release 2017b)
March 2018 Online only Revised for Version 6.12 (Release 2018a)
September 2018 Online only Revised for Version 6.13 (Release 2018b)
March 2019 Online only Revised for Version 7.0 (Release 2019a)
September 2019 Online only Revised for Version 7.1 (Release 2019b)
March 2020 Online only Revised for Version 7.2 (Release 2020a)
Contents

Getting Started
1
Parallel Computing Toolbox Product Description . . . . . . . . . . . . . . . . . . . . 1-2

Parallel Computing Support in MathWorks Products . . . . . . . . . . . . . . . . 1-3

Create and Use Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4


Creating Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Creating Codistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5

Determine Product Installation and Versions . . . . . . . . . . . . . . . . . . . . . . . 1-6

Interactively Run a Loop in Parallel Using parfor . . . . . . . . . . . . . . . . . . . 1-7

Run Batch Parallel Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9


Run a Batch Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Run a Batch Job with a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Run Script as Batch Job from the Current Folder Browser . . . . . . . . . . . . 1-11

Distribute Arrays and Run SPMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12


Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Single Program Multiple Data (spmd) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Composites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12

What Is Parallel Computing? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14

Choose a Parallel Computing Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16

Run MATLAB Functions with Automatic Parallel Support . . . . . . . . . . . . 1-20


Find Automatic Parallel Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20

Run Non-Blocking Code in Parallel Using parfeval . . . . . . . . . . . . . . . . . 1-22

Evaluate Functions in the Background Using parfeval . . . . . . . . . . . . . . 1-23

Use Parallel Computing Toolbox with Cloud Center clusters in MATLAB Online . . . . . . . 1-24

Parallel for-Loops (parfor)
2
Decide When to Use parfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
parfor-Loops in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Deciding When to Use parfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Example of parfor With Low Parallel Overhead . . . . . . . . . . . . . . . . . . . . . 2-3
Example of parfor With High Parallel Overhead . . . . . . . . . . . . . . . . . . . . 2-4

Convert for-Loops Into parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7

Ensure That parfor-Loop Iterations are Independent . . . . . . . . . . . . . . . 2-10

Nested parfor and for-Loops and Other parfor Requirements . . . . . . . . 2-13


Nested parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Convert Nested for-Loops to parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . 2-14
Nested for-Loops: Requirements and Limitations . . . . . . . . . . . . . . . . . . 2-16
parfor-Loop Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-17

Scale Up parfor-Loops to Cluster and Cloud . . . . . . . . . . . . . . . . . . . . . . . 2-21

Use parfor-Loops for Reduction Assignments . . . . . . . . . . . . . . . . . . . . . . 2-26

Use Objects and Handles in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . 2-27


Using Objects in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
Handle Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
Sliced Variables Referencing Function Handles . . . . . . . . . . . . . . . . . . . 2-27

Troubleshoot Variables in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29


Ensure That parfor-Loop Variables Are Consecutive Increasing Integers . . . . . . . . . . 2-29
Avoid Overflows in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29
Solve Variable Classification Issues in parfor-Loops . . . . . . . . . . . . . . . . 2-30
Structure Arrays in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31
Converting the Body of a parfor-Loop into a Function . . . . . . . . . . . . . . . 2-32
Unambiguous Variable Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33
Transparent parfor-loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33
Global and Persistent Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33

Loop Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35

Sliced Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37


Characteristics of a Sliced Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37
Sliced Input and Output Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-38
Nested for-Loops with Sliced Variables . . . . . . . . . . . . . . . . . . . . . . . . . . 2-39

Broadcast Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-41

Reduction Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-42


Notes About Required and Recommended Guidelines . . . . . . . . . . . . . . . 2-43
Basic Rules for Reduction Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-43
Requirements for Reduction Assignments . . . . . . . . . . . . . . . . . . . . . . . . 2-44
Using a Custom Reduction Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-45
Chaining Reduction Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-46

Temporary Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
Uninitialized Temporaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
Temporary Variables Intended as Reduction Variables . . . . . . . . . . . . . . . 2-49
ans Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-49

Ensure Transparency in parfor-Loops or spmd Statements . . . . . . . . . . . 2-50


Parallel Simulink Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-51

Improve parfor Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-52


Where to Create Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-52
Profiling parfor-loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-53
Slicing Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-54
Optimizing on Local vs. Cluster Workers . . . . . . . . . . . . . . . . . . . . . . . . . 2-55

Run Code on Parallel Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-56


What Is a Parallel Pool? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-56
Automatically Start and Stop a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . 2-56
Alternative Ways to Start and Stop Pools . . . . . . . . . . . . . . . . . . . . . . . . . 2-57
Pool Size and Cluster Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-59

Choose Between Thread-Based and Process-Based Environments . . . . . 2-61


Select Parallel Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-61
Compare Process Workers and Thread Workers . . . . . . . . . . . . . . . . . . . 2-64
Solve Optimization Problem in Parallel on Thread-Based Pool . . . . . . . . . 2-65
What Are Thread-Based Environments? . . . . . . . . . . . . . . . . . . . . . . . . . 2-67
What are Process-Based Environments? . . . . . . . . . . . . . . . . . . . . . . . . . 2-67
Check Support for Thread-Based Environment . . . . . . . . . . . . . . . . . . . . 2-68

Repeat Random Numbers in parfor-Loops . . . . . . . . . . . . . . . . . . . . . . . . 2-70

Recommended System Limits for Macintosh and Linux . . . . . . . . . . . . . 2-71

Single Program Multiple Data (spmd)


3
Run Single Programs on Multiple Data Sets . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
When to Use spmd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Define an spmd Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Display Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
MATLAB Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
spmd Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4

Access Worker Variables with Composites . . . . . . . . . . . . . . . . . . . . . . . . . 3-7


Introduction to Composites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Create Composites in spmd Statements . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Variable Persistence and Sequences of spmd . . . . . . . . . . . . . . . . . . . . . . 3-8
Create Composites Outside spmd Statements . . . . . . . . . . . . . . . . . . . . . . 3-9

Distributing Arrays to Parallel Workers . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10


Using Distributed Arrays to Partition Data Across Workers . . . . . . . . . . . 3-10

Load Distributed Arrays in Parallel Using datastore . . . . . . . . . . . . . . . . 3-10
Alternative Methods for Creating Distributed and Codistributed Arrays . 3-12

Math with Codistributed Arrays


4
Nondistributed Versus Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Nondistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Codistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3

Working with Codistributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4


How MATLAB Software Distributes Arrays . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Creating a Codistributed Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Local Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Obtaining Information About the Array . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Changing the Dimension of Distribution . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Restoring the Full Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Indexing into a Codistributed Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
2-Dimensional Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12

Looping Over a Distributed Range (for-drange) . . . . . . . . . . . . . . . . . . . . 4-16


Parallelizing a for-Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Codistributed Arrays in a for-drange Loop . . . . . . . . . . . . . . . . . . . . . . . 4-17

Run MATLAB Functions with Distributed Arrays . . . . . . . . . . . . . . . . . . . 4-19


Check Distributed Array Support in Functions . . . . . . . . . . . . . . . . . . . . 4-19
Support for Sparse Distributed Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19

Programming Overview
5
How Parallel Computing Products Run a Job . . . . . . . . . . . . . . . . . . . . . . . 5-2
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Toolbox and Server Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Life Cycle of a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6

Program a Job on a Local Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8

Specify Your Parallel Preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9

Discover Clusters and Use Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . 5-11


Create and Manage Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Discover Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Create Cloud Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Add and Modify Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Import and Export Cluster Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Edit Number of Workers and Cluster Settings . . . . . . . . . . . . . . . . . . . . . 5-19
Use Your Cluster from MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19

Apply Callbacks to MATLAB Job Scheduler Jobs and Tasks . . . . . . . . . . . 5-21

Job Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24


Typical Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Manage Jobs Using the Job Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Identify Task Errors Using the Job Monitor . . . . . . . . . . . . . . . . . . . . . . . 5-25

Programming Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26


Program Development Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Current Working Directory of a MATLAB Worker . . . . . . . . . . . . . . . . . . 5-27
Writing to Files from Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Saving or Sending Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Using clear functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Running Tasks That Call Simulink Software . . . . . . . . . . . . . . . . . . . . . . 5-28
Using the pause Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Transmitting Large Amounts of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Interrupting a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Speeding Up a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28

Control Random Number Streams on Workers . . . . . . . . . . . . . . . . . . . . . 5-29


Client and Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
Different Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Normally Distributed Random Numbers . . . . . . . . . . . . . . . . . . . . . . . . . 5-31

Profiling Parallel Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32


Profile Parallel Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Analyze Parallel Profile Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34

Troubleshooting and Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42


Attached Files Size Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
File Access and Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
No Results or Failed Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-43
Connection Problems Between the Client and MATLAB Job Scheduler . . 5-44
SFTP Error: Received Message Too Long . . . . . . . . . . . . . . . . . . . . . . . . 5-44

Big Data Workflow Using Tall Arrays and Datastores . . . . . . . . . . . . . . . . 5-46


Running Tall Arrays in Parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-47
Use mapreducer to Control Where Your Code Runs . . . . . . . . . . . . . . . . . 5-47

Use Tall Arrays on a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48

Use Tall Arrays on a Spark Enabled Hadoop Cluster . . . . . . . . . . . . . . . . 5-51


Creating and Using Tall Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-51

Run mapreduce on a Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54


Start Parallel Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54
Compare Parallel mapreduce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54

Run mapreduce on a Hadoop Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57


Cluster Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57
Output Format and Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57
Calculate Mean Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-57

Partition a Datastore in Parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-60

Program Independent Jobs
6
Program Independent Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2

Program Independent Jobs on a Local Cluster . . . . . . . . . . . . . . . . . . . . . . 6-3


Create and Run Jobs with a Local Cluster . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Local Cluster Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5

Program Independent Jobs for a Supported Scheduler . . . . . . . . . . . . . . . 6-7


Create and Run Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Manage Objects in the Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11

Share Code with the Workers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13


Workers Access Files Directly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Pass Data to and from Worker Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . 6-14
Pass MATLAB Code for Startup and Finish . . . . . . . . . . . . . . . . . . . . . . . 6-15

Plugin Scripts for Generic Schedulers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17


Sample Plugin Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17
Writing Custom Plugin Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-19
Adding User Customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24
Managing Jobs with Generic Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . 6-25
Submitting from a Remote Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-26
Submitting without a Shared File System . . . . . . . . . . . . . . . . . . . . . . . . 6-27

Program Communicating Jobs


7
Program Communicating Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2

Program Communicating Jobs for a Supported Scheduler . . . . . . . . . . . . 7-3


Schedulers and Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Code the Task Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Code in the Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4

Further Notes on Communicating Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6


Number of Tasks in a Communicating Job . . . . . . . . . . . . . . . . . . . . . . . . . 7-6
Avoid Deadlock and Other Dependency Errors . . . . . . . . . . . . . . . . . . . . . 7-6

GPU Computing
8
GPU Capabilities and Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Performance Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2

Establish Arrays on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Create GPU Arrays from Existing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Create GPU Arrays Directly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Examine gpuArray Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Save and Load gpuArrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5

Random Number Streams on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6


Client CPU and GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
Worker CPU and GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7
Normally Distributed Random Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7

Run MATLAB Functions on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9


MATLAB Functions with gpuArray Arguments . . . . . . . . . . . . . . . . . . . . . 8-9
Check or Select a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Use MATLAB Functions with a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10
Sharpen an Image Using the GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-12
Compute the Mandelbrot Set using GPU-Enabled Functions . . . . . . . . . . 8-13
Work with Sparse Arrays on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
Work with Complex Numbers on a GPU . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
Special Conditions for gpuArray Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-18

Identify and Select a GPU Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-19

Run CUDA or PTX Code on GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20
Create a CUDAKernel Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-20
Run a CUDAKernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-24
Complete Kernel Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26

Run MEX-Functions Containing CUDA Code . . . . . . . . . . . . . . . . . . . . . . . 8-28


Write a MEX-File Containing CUDA Code . . . . . . . . . . . . . . . . . . . . . . . . 8-28
Run the Resulting MEX-Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
Comparison to a CUDA Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Access Complex Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Compile a GPU MEX-File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-30

Measure and Improve GPU Performance . . . . . . . . . . . . . . . . . . . . . . . . . 8-31


Getting Started with GPU Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . 8-31
Improve Performance Using Single Precision Calculations . . . . . . . . . . . 8-31
Basic Workflow for Improving Performance . . . . . . . . . . . . . . . . . . . . . . . 8-31
Advanced Tools for Improving Performance . . . . . . . . . . . . . . . . . . . . . . 8-32
Best Practices for Improving Performance . . . . . . . . . . . . . . . . . . . . . . . 8-33
Measure Performance on the GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-34
Vectorize for Improved GPU Performance . . . . . . . . . . . . . . . . . . . . . . . . 8-35
Troubleshooting GPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-36

GPU Support by Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-38


Supported GPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-38
CUDA Toolkit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-39
Increase the CUDA Cache Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-39

Objects
9

Functions
10

1

Getting Started

• “Parallel Computing Toolbox Product Description” on page 1-2


• “Parallel Computing Support in MathWorks Products” on page 1-3
• “Create and Use Distributed Arrays” on page 1-4
• “Determine Product Installation and Versions” on page 1-6
• “Interactively Run a Loop in Parallel Using parfor” on page 1-7
• “Run Batch Parallel Jobs” on page 1-9
• “Distribute Arrays and Run SPMD” on page 1-12
• “What Is Parallel Computing?” on page 1-14
• “Choose a Parallel Computing Solution” on page 1-16
• “Run MATLAB Functions with Automatic Parallel Support” on page 1-20
• “Run Non-Blocking Code in Parallel Using parfeval” on page 1-22
• “Evaluate Functions in the Background Using parfeval” on page 1-23
• “Use Parallel Computing Toolbox with Cloud Center clusters in MATLAB Online” on page 1-24

Parallel Computing Toolbox Product Description


Perform parallel computations on multicore computers, GPUs, and computer clusters

Parallel Computing Toolbox lets you solve computationally and data-intensive problems using
multicore processors, GPUs, and computer clusters. High-level constructs—parallel for-loops, special
array types, and parallelized numerical algorithms—enable you to parallelize MATLAB® applications
without CUDA or MPI programming. The toolbox lets you use parallel-enabled functions in MATLAB
and other toolboxes. You can use the toolbox with Simulink® to run multiple simulations of a model in
parallel. Programs and models can run in both interactive and batch modes.

The toolbox lets you use the full processing power of multicore desktops by executing applications on
workers (MATLAB computational engines) that run locally. Without changing the code, you can run
the same applications on clusters or clouds (using MATLAB Parallel Server™). You can also use the
toolbox with MATLAB Parallel Server to execute matrix calculations that are too large to fit into the
memory of a single machine.


Parallel Computing Support in MathWorks Products


Parallel Computing Toolbox provides you with tools for a local cluster of workers on your client
machine. MATLAB Parallel Server software allows you to run as many MATLAB workers on a remote
cluster of computers as your licensing allows.

Most MathWorks products enable you to run applications in parallel. For example, Simulink models
can run simultaneously in parallel, as described in “Run Multiple Simulations” (Simulink). MATLAB
Compiler™ and MATLAB Compiler SDK™ software let you build and deploy parallel applications; for
example, see the “Parallel Computing” section of MATLAB Compiler “Standalone Applications”
(MATLAB Compiler).

Several MathWorks products now offer built-in support for the parallel computing products, without
requiring extra coding. For the current list of these products and their parallel functionality, see:
https://www.mathworks.com/products/parallel-computing/parallel-support.html


Create and Use Distributed Arrays

In this section...
“Creating Distributed Arrays” on page 1-4
“Creating Codistributed Arrays” on page 1-5

If your data is currently in the memory of your local machine, you can use the distributed function
to distribute an existing array from the client workspace to the workers of a parallel pool.
Distributed arrays use the combined memory of multiple workers in a parallel pool to store the
elements of an array. For alternative ways of partitioning data, see “Distributing Arrays to Parallel
Workers” on page 3-10. You can use distributed arrays to scale up your big data computation.
Consider distributed arrays when you have access to a cluster, as you can combine the memory of
multiple machines in your cluster.

A distributed array is a single variable, split over multiple workers in your parallel pool. You can
work with this variable as one single entity, without having to worry about its distributed nature.
Explore the functionalities available for distributed arrays in the Parallel Computing Toolbox:
“Run MATLAB Functions with Distributed Arrays” on page 4-19.

When you create a distributed array, you cannot control the details of the distribution. On the
other hand, codistributed arrays allow you to control all aspects of distribution, including
dimensions and partitions. In the following, you learn how to create both distributed and
codistributed arrays.

Creating Distributed Arrays


You can create a distributed array in different ways:

• Use the distributed function to distribute an existing array from the client workspace to the
workers of a parallel pool.
• You can directly construct a distributed array on the workers. You do not need to first create the
array in the client, so that client workspace memory requirements are reduced. The functions
available include eye(___,'distributed'), rand(___,'distributed'), etc. For a full list,
see the distributed object reference page.
• Create a codistributed array inside an spmd statement, see “Single Program Multiple Data
(spmd)” on page 1-12. Then access it as a distributed array outside the spmd statement. This
lets you use distribution schemes other than the default.
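
As a sketch of the second approach, a distributed array can be built directly on the pool workers without first creating it in the client workspace; the sizes here are arbitrary:

```matlab
% Construct distributed arrays directly on the workers of the current
% parallel pool; nothing is first materialized in the client workspace.
D = rand(1000,'distributed');     % 1000-by-1000 uniform random values
E = zeros(500,750,'distributed'); % distributed array of zeros
size(D)                           % on the client, D behaves like any array
```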

In this example, you create an array in the client workspace, then turn it into a distributed array:

parpool('local',4) % Create pool


A = magic(4); % Create magic 4-by-4 matrix
B = distributed(A); % Distribute to the workers
B % View results in client.
whos % B is a distributed array here.
delete(gcp) % Stop pool

You have created B as a distributed array, split over the workers in your parallel pool. This is
shown in the figure.


Creating Codistributed Arrays


Unlike distributed arrays, codistributed arrays allow you to control all aspects of distribution,
including dimensions and partitions. You can create a codistributed array in different ways:

• “Partitioning a Larger Array” on page 4-6 — Start with a large array that is replicated on all
workers, and partition it so that the pieces are distributed across the workers. This is most useful
when you have sufficient memory to store the initial replicated array.
• “Building from Smaller Arrays” on page 4-6 — Start with smaller replicated arrays stored on
each worker, and combine them so that each array becomes a segment of a larger codistributed
array. This method reduces memory requirements as it lets you build a codistributed array from
smaller pieces.
• “Using MATLAB Constructor Functions” on page 4-7 — Use any of the MATLAB constructor
functions like rand or zeros with a codistributor object argument. These functions offer a quick
means of constructing a codistributed array of any size in just one step.

In this example, you create a codistributed array inside an spmd statement, using a nondefault
distribution scheme. First, define 1-D distribution along the third dimension, with 4 parts on worker
1, and 12 parts on worker 2. Then create a 3-by-3-by-16 array of zeros.

parpool('local',2) % Create pool


spmd
codist = codistributor1d(3,[4,12]);
Z = zeros(3,3,16,codist);
Z = Z + labindex;
end
Z % View results in client.
whos % Z is a distributed array here.
delete(gcp) % Stop pool

For more details on codistributed arrays, see “Working with Codistributed Arrays” on page 4-4.

See Also

Related Examples
• “Distributing Arrays to Parallel Workers” on page 3-10
• “Big Data Workflow Using Tall Arrays and Datastores” on page 5-46
• “Single Program Multiple Data (spmd)” on page 1-12


Determine Product Installation and Versions


To determine if Parallel Computing Toolbox software is installed on your system, type this command
at the MATLAB prompt.

ver

When you enter this command, MATLAB displays information about the version of MATLAB you are
running, including a list of all toolboxes installed on your system and their version numbers.

If you want to run your applications on a cluster, see your system administrator to verify that the
version of Parallel Computing Toolbox you are using is the same as the version of MATLAB Parallel
Server installed on your cluster.


Interactively Run a Loop in Parallel Using parfor


In this example, you start with a slow for-loop, and you speed up the calculation using a parfor-
loop instead. parfor splits the execution of for-loop iterations over the workers in a parallel pool.

This example calculates the spectral radius of a matrix and converts a for-loop into a parfor-loop.
Find out how to measure the resulting speedup.
1 In the MATLAB Editor, enter the following for-loop. Add tic and toc to measure the time
elapsed.
tic
n = 200;
A = 500;
a = zeros(n);
for i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
2 Run the script, and note the elapsed time.
Elapsed time is 31.935373 seconds.
3 In the script, replace the for-loop with a parfor-loop.
tic
n = 200;
A = 500;
a = zeros(n);
parfor i = 1:n
a(i) = max(abs(eig(rand(A))));
end
toc
4 Run the new script, and run it again. Note that the first run is slower than the second run,
because the parallel pool takes some time to start and make the code available to the workers.
Note the elapsed time for the second run.

By default, MATLAB automatically opens a parallel pool of workers on your local machine.
Starting parallel pool (parpool) using the 'local' profile ... connected to 4 workers.
...
Elapsed time is 10.760068 seconds.


The parfor run on four workers is about three times faster than the corresponding for-loop
run. The speed-up is smaller than the ideal speed-up of a factor of four on four workers. This is
due to parallel overhead, including the time required to transfer data from the client to the
workers and back. This example shows a good speed-up with relatively small parallel overhead,
and benefits from conversion into a parfor-loop. Not all for-loop iterations can be turned into
faster parfor-loops. To learn more, see “Decide When to Use parfor” on page 2-2.

One key requirement for using parfor-loops is that the individual iterations must be independent.
Independent problems suitable for parfor processing include Monte Carlo simulations and
parameter sweeps. For next steps, see “Convert for-Loops Into parfor-Loops” on page 2-7.
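
A minimal parameter sweep of this kind might look like the following sketch; the model and the swept values are illustrative only:

```matlab
% Sweep a damping parameter b; each iteration is independent of the
% others, so the loop body qualifies for parfor.
bVals = linspace(0.1,2,100);
peakHeight = zeros(size(bVals));
parfor k = 1:numel(bVals)
    b = bVals(k);
    % Toy model: peak of a damped oscillation (illustrative only).
    t = 0:0.01:10;
    y = exp(-b*t).*sin(2*pi*t);
    peakHeight(k) = max(y);
end
```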

In this example, you managed to speed up the calculation by converting the for-loop into a parfor-
loop on four workers. You might reduce the elapsed time further by increasing the number of workers
in your parallel pool, see “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21.

You can modify your cluster profiles to control how many workers run your loops, and whether the
workers are local or on a cluster. For more information on profiles, see “Discover Clusters and Use
Cluster Profiles” on page 5-11.

Modify your parallel preferences to control whether a parallel pool is created automatically, and how
long it remains available before timing out. For more information on preferences, see “Specify Your
Parallel Preferences” on page 5-9.

You can run Simulink models in parallel with the parsim command instead of using parfor-loops.
For more information and examples of using Simulink in parallel, see “Run Multiple Simulations”
(Simulink).

See Also
parfor | parpool | tic | toc

More About
• “Decide When to Use parfor” on page 2-2
• “Convert for-Loops Into parfor-Loops” on page 2-7
• “Scale Up parfor-Loops to Cluster and Cloud” on page 2-21


Run Batch Parallel Jobs


Run a Batch Job
To offload work from your MATLAB session to run in the background in another session, you can use
the batch command inside a script.
1 To create the script, type:
edit mywave
2 In the MATLAB Editor, create a for-loop:
for i = 1:1024
A(i) = sin(i*2*pi/1024);
end
3 Save the file and close the Editor.
4 Use the batch command in the MATLAB Command Window to run your script on a separate
MATLAB worker:
job = batch('mywave')

5 batch does not block MATLAB, and you can continue working while computations take place. If
you need to block MATLAB until the job finishes, use the wait function on the job object.
wait(job)
6 After the job finishes, you can retrieve and view its results. The load command transfers
variables created on the worker to the client workspace, where you can view the results:
load(job,'A')
plot(A)
7 When the job is complete, permanently delete its data and remove its reference from the
workspace:
delete(job)
clear job

batch runs your code on a local worker or a cluster worker, but does not require a parallel pool.

You can use batch to run either scripts or functions. For more details, see the batch reference page.
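
For example, offloading a function rather than a script follows the pattern batch(fcn,N,{inputs}); the function and input used here are illustrative:

```matlab
% Offload a function call: request one output and pass inputs in a cell array.
job = batch(@sin,1,{pi/4});  % evaluate sin(pi/4) on a worker
wait(job)                    % block until the job finishes
out = fetchOutputs(job);     % cell array of outputs; out{1} holds the result
delete(job)                  % permanently delete the job data
```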

Run a Batch Job with a Parallel Pool


You can combine the abilities to offload a job and run a loop in a parallel pool. This example combines
the two to create a simple batch parfor-loop.
1 To create a script, type:
edit mywave


2 In the MATLAB Editor, create a parfor-loop:

parfor i = 1:1024
A(i) = sin(i*2*pi/1024);
end
3 Save the file and close the Editor.
4 Run the script in MATLAB with the batch command. Indicate that the script should use a
parallel pool for the loop:

job = batch('mywave','Pool',3)

This command specifies that three workers (in addition to the one running the batch script) are
to evaluate the loop iterations. Therefore, this example uses a total of four local workers,
including the one worker running the batch script. Altogether, there are five MATLAB sessions
involved, as shown in the following diagram.

5 To view the results:

wait(job)
load(job,'A')
plot(A)

The results look the same as before; however, there are two important differences in execution:

• The work of defining the parfor-loop and accumulating its results are offloaded to another
MATLAB session by batch.
• The loop iterations are distributed from one MATLAB worker to another set of workers
running simultaneously ('Pool' and parfor), so the loop might run faster than having only
one worker execute it.
6 When the job is complete, permanently delete its data and remove its reference from the
workspace:

delete(job)
clear job


Run Script as Batch Job from the Current Folder Browser


From the Current Folder browser, you can run a MATLAB script as a batch job by browsing to the
file’s folder, right-clicking the file, and selecting Run Script as Batch Job. The batch job runs on the
cluster identified by the default cluster profile. The following figure shows the menu option to run the
script file script1.m:

Running a script as a batch job from the browser uses only one worker from the cluster. So even if the
script contains a parfor loop or spmd block, it does not open an additional pool of workers on the
cluster. These code blocks execute on the single worker used for the batch job. If your batch script
requires opening an additional pool of workers, you can run it from the command line, as described in
“Run a Batch Job with a Parallel Pool” on page 1-9.

When you run a batch job from the browser, this also opens the Job Monitor. The Job Monitor is a tool
that lets you track your job in the scheduler queue. For more information about the Job Monitor and
its capabilities, see “Job Monitor” on page 5-24.

See Also
batch

Related Examples
• “Run Batch Job and Access Files from Workers”


Distribute Arrays and Run SPMD


Distributed Arrays
The workers in a parallel pool communicate with each other, so you can distribute an array among
the workers. Each worker contains part of the array, and all the workers are aware of which portion
of the array each worker has.

Use the distributed function to distribute an array among the workers:


M = magic(4) % a 4-by-4 magic square in the client workspace
MM = distributed(M)

Now MM is a distributed array, equivalent to M, and you can manipulate or access its elements in the
same way as any other array.
M2 = 2*MM; % M2 is also distributed, calculation performed on workers
x = M2(1,1) % x on the client is set to first element of M2

Single Program Multiple Data (spmd)


The single program multiple data (spmd) construct lets you define a block of code that runs in parallel
on all the workers in a parallel pool. The spmd block can run on some or all the workers in the pool.
spmd % By default creates pool and uses all workers
R = rand(4);
end

This code creates an individual 4-by-4 matrix, R, of random numbers on each worker in the pool.

Composites
Following an spmd statement, in the client context, the values from the block are accessible, even
though the data is actually stored on the workers. On the client, these variables are called Composite
objects. Each element of a Composite is a symbol referencing the value (data) on a worker in the pool.
Note that because a variable might not be defined on every worker, a Composite might have
undefined elements.

Continuing with the example from above, on the client, the Composite R has one element for each
worker:
X = R{3}; % Set X to the value of R from worker 3.

The line above retrieves the data from worker 3 to assign the value of X. The following code sends
data to worker 3:
X = X + 2;
R{3} = X; % Send the value of X from the client to worker 3.

If the parallel pool remains open between spmd statements and the same workers are used, the data
on each worker persists from one spmd statement to another.
spmd
R = R + labindex % Use values of R from previous spmd.
end


A typical use for spmd is to run the same code on a number of workers, each of which accesses a
different set of data. For example:

spmd
INP = load(['somedatafile' num2str(labindex) '.mat']);
RES = somefun(INP)
end

Then the values of RES on the workers are accessible from the client as RES{1} from worker 1,
RES{2} from worker 2, etc.

There are two forms of indexing a Composite, comparable to indexing a cell array:

• AA{n} returns the values of AA from worker n.


• AA(n) returns a cell array of the content of AA from worker n.
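
A small sketch of the difference, assuming a pool with at least two workers is open:

```matlab
spmd
    R = labindex;  % each worker stores its own index
end
v = R{2};          % value from worker 2 (here, the number 2)
c = R(2);          % 1-by-1 cell array wrapping that same value
isequal(v, c{1})   % true
```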

Although data persists on the workers from one spmd block to another as long as the parallel pool
remains open, data does not persist from one instance of a parallel pool to another. That is, if the pool
is deleted and a new one created, all data from the first pool is lost.

For more information about using distributed arrays, spmd, and Composites, see “Distributed
Arrays”.


What Is Parallel Computing?


Parallel computing allows you to carry out many calculations simultaneously. Large problems can
often be split into smaller ones, which are then solved at the same time.

The main reasons to consider parallel computing are to

• Save time by distributing tasks and executing these simultaneously


• Solve big data problems by distributing data
• Take advantage of your desktop computer resources and scale up to clusters and cloud computing

With Parallel Computing Toolbox, you can

• Accelerate your code using interactive parallel computing tools, such as parfor and parfeval
• Scale up your computation using interactive Big Data processing tools, such as distributed,
tall, datastore, and mapreduce
• Use gpuArray to speed up your calculation on the GPU of your computer
• Use batch to offload your calculation to computer clusters or cloud computing facilities

Here are some useful Parallel Computing concepts:

• Node: standalone computer, containing one or more CPUs / GPUs. Nodes are networked to form a
cluster or supercomputer
• Thread: smallest set of instructions that can be managed independently by a scheduler. On a GPU,
multiprocessor or multicore system, multiple threads can be executed simultaneously (multi-
threading)
• Batch: off-load execution of a functional script to run in the background
• Scalability: increase in parallel speedup with the addition of more resources

What tools do MATLAB and Parallel Computing Toolbox offer?

• MATLAB workers: MATLAB computational engines that run in the background without a graphical
desktop. You use functions in the Parallel Computing Toolbox to automatically divide tasks and
assign them to these workers to execute the computations in parallel. You can run local workers to
take advantage of all the cores in your multicore desktop computer. You can also scale up to run
your workers on a cluster of machines, using the MATLAB Parallel Server. The MATLAB session
you interact with is known as the MATLAB client. The client instructs the workers with parallel
language functions.
• Parallel pool: a parallel pool of MATLAB workers created using parpool or functions with
automatic parallel support. By default, parallel language functions automatically create a parallel
pool for you when necessary. To learn more, see “Run Code on Parallel Pools” on page 2-56.
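
A pool can also be opened and closed explicitly; the worker count here is an example:

```matlab
% Open a pool of four local workers, inspect it, and shut it down.
p = parpool('local',4);     % or parpool() to use the profile default
p.NumWorkers                % number of workers in this pool
delete(gcp('nocreate'))     % close the current pool, if any
```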

For the default local profile, the default number of workers is one per physical CPU core using a
single computational thread. This is because even though each physical core can have several
virtual cores, the virtual cores share some resources, typically including a shared floating point
unit (FPU). Most MATLAB computations use this unit because they are double-precision floating
point. Restricting to one worker per physical core ensures that each worker has exclusive access
to a floating point unit, which generally optimizes performance of computational code. If your
code is not computationally intensive, for example, it is input/output (I/O) intensive, then consider

1-14
Exploring the Variety of Random
Documents with Different Content
MR. HARVEY’S REMARKS.
“My patient, Mr. Banting, having published for the benefit of his
fellow sufferers, some account of the diet which I recommended him
to adopt with a view to relieve him of a distressing degree of
hypertrophy of the adipose tissue. I have been frequently urged by
him to explain the principles upon which I was enabled to treat with
success this inconvenient, and in some instances, distressing
condition of the system.
“The simple history of my finding occasion to investigate this subject
is as follows:—When in Paris, in the year 1856, I took the
opportunity of attending a discussion on the views of M. Bernard,
who was at that time propounding his now generally admitted
theory of the liver functions. After he had discovered by chemical
processes and physiological experiments, which it is unnecessary for
me to recapitulate here, that the liver not only secreted bile, but also
a peculiar amyloid or starch-like product which he called glucose,
and which in its chemical and physical properties appeared to be
nearly allied to saccharine matter, he further found that this glucose
could be directly produced in the liver by the ingestion of sugar and
its ally starch, and that in diabetes it existed there in considerable
excess. It had long been well known that a purely animal diet greatly
assisted in checking the secretion of diabetic urine; and it seemed to
follow, as a matter of course, that the total abstinence from
saccharine and farinaceous matter must drain the liver of this
excessive amount of glucose, and thus arrest in a similar proportion
the diabetic tendency. Reflecting on this chain of argument, and
knowing too that a saccharine and farinaceous diet is used to fatten
certain animals, and that in diabetes, the whole of the fat of the
body rapidly disappears, it occurred to me that excessive obesity
might be allied to diabetes as to its cause, although widely diverse in
its development: and that if a purely animal diet was useful in the
latter disease, a combination of animal food with such vegetable
matters as contained neither sugar nor starch, might serve to arrest
the undue formation of fat. I soon afterwards had an opportunity of
testing this idea. A dispensary patient, who consulted me for
deafness, and who was enormously corpulent, I found to have no
distinguishable disease of the ear. I therefore suspected that his
deafness arose from the great development of adipose matter in the
throat, pressing upon and stopping up the eustachian tubes. I
subjected him to a strict non-farinaceous and non-saccharine diet,
and treated him with the volatile alkali alluded to in his Pamphlet,
and occasional aperients, and in about seven months he was
reduced to almost normal proportions, his hearing restored, and his
general health immensely improved. This case seemed to give
substance and reality to my conjectures, which further experience
has confirmed.
“When we consider that fat is what is termed hydro carbon, and
deposits itself so insidiously and yet so gradually amongst the
tissues of the body, it is at once manifest that we require such
substances as contain a superfluity of oxygen and nitrogen to arrest
its formation and to vitalize the system. That is the principal upon
which the diet suggested in his Pamphlet works, and explains on the
one hand the necessity of abstaining from all vegetable roots which
hold a large quantity of saccharine matter, and on the other the
beneficial effects derivable from those vegetables, the fruits of which
are on the exterior of the earth, as they lose, probably by means of
the sun’s action, a large proportion of their sugar.
“With regard to the tables of Dr. Hutchinson, referred to in his
Pamphlet, it is no doubt difficult, as he says, to determine what is a
man’s proper weight, which must be influenced by various causes.
Those tables, however, were formed by him on the principle of
considering the amount of air which the lungs in their healthy state
can receive and apply to the oxydation of the blood. I gave them to
Mr. Banting as an indication only of what the approximate weight of
adult persons in proportion to their stature should be, and with the
view of proving to them the importance of keeping down the
tendency to grow fat; for, as that tendency increases, the capacity of
the lungs, and consequently the vitality and power of the whole
system must diminish. In conclusion, I would suggest the propriety
of advising a dietary such as this in diseases that are in any way
influenced by a disordered condition of the hepatic functions, as they
cannot fail to yield in some degree to this simple method of
treatment if fairly and properly carried out; it remains for me to
watch its progress in a more limited sphere.
“William Harvey, F.R.C.S.,
“Surgeon to the Royal Dispensary,
for Diseases of the Ear.”
2, Soho Square,
April, 1864.
*** END OF THE PROJECT GUTENBERG EBOOK LETTER ON
CORPULENCE, ADDRESSED TO THE PUBLIC ***

Updated editions will replace the previous one—the old editions will
be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying
copyright royalties. Special rules, set forth in the General Terms of
Use part of this license, apply to copying and distributing Project
Gutenberg™ electronic works to protect the PROJECT GUTENBERG™
concept and trademark. Project Gutenberg is a registered trademark,
and may not be used if you charge for an eBook, except by following
the terms of the trademark license, including paying royalties for use
of the Project Gutenberg trademark. If you do not charge anything
for copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free


distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund
from the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be


used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law
in the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name associated
with the work. You can easily comply with the terms of this
agreement by keeping this work in the same format with its attached
full Project Gutenberg™ License when you share it without charge
with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the
terms of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears,
or with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning
of this work.

1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the Project
Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or
expense to the user, provide a copy, a means of exporting a copy, or
a means of obtaining a copy upon request, of the work in its original
“Plain Vanilla ASCII” or other form. Any alternate format must
include the full Project Gutenberg™ License as specified in
paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:

• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™
electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for
the “Right of Replacement or Refund” described in paragraph 1.F.3,
the Project Gutenberg Literary Archive Foundation, the owner of the
Project Gutenberg™ trademark, and any other party distributing a
Project Gutenberg™ electronic work under this agreement, disclaim
all liability to you for damages, costs and expenses, including legal
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR
NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR
BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK
OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL
NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF
YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving
it, you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or
entity that provided you with the defective work may elect to provide
a replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.

1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation,
the trademark owner, any agent or employee of the Foundation,
anyone providing copies of Project Gutenberg™ electronic works in
accordance with this agreement, and any volunteers associated with
the production, promotion and distribution of Project Gutenberg™
electronic works, harmless from all liability, costs and expenses,
including legal fees, that arise directly or indirectly from any of the
following which you do or cause to occur: (a) distribution of this or
any Project Gutenberg™ work, (b) alteration, modification, or
additions or deletions to any Project Gutenberg™ work, and (c) any
Defect you cause.

Section 2. Information about the Mission of Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many
small donations ($1 to $5,000) are particularly important to
maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.

Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.