Introduction to
Parallel Processing
Algorithms and Architectures
PLENUM SERIES IN COMPUTER SCIENCE
Series Editor: Rami G. Melhem
University of Pittsburgh
Pittsburgh, Pennsylvania

FUNDAMENTALS OF X PROGRAMMING
Graphical User Interfaces and Beyond
Theo Pavlidis
INTRODUCTION TO PARALLEL PROCESSING
Algorithms and Architectures
Behrooz Parhami
Introduction to
Parallel Processing
Algorithms and Architectures

Behrooz Parhami
University of California at Santa Barbara
Santa Barbara, California

KLUWER ACADEMIC PUBLISHERS


NEW YORK, BOSTON, DORDRECHT, LONDON, MOSCOW
eBook ISBN 0-306-46964-2
Print ISBN 0-306-45970-1

©2002 Kluwer Academic Publishers


New York, Boston, Dordrecht, London, Moscow

All rights reserved

No part of this eBook may be reproduced or transmitted in any form or by any means, electronic,
mechanical, recording, or otherwise, without written consent from the Publisher

Created in the United States of America

Visit Kluwer Online at: http://www.kluweronline.com


and Kluwer's eBookstore at: http://www.ebooks.kluweronline.com
To the four parallel joys in my life,

for their love and support.


Preface

THE CONTEXT OF PARALLEL PROCESSING

The field of digital computer architecture has grown explosively in the past two decades.
Through a steady stream of experimental research, tool-building efforts, and theoretical
studies, the design of an instruction-set architecture, once considered an art, has been
transformed into one of the most quantitative branches of computer technology. At the same
time, better understanding of various forms of concurrency, from standard pipelining to
massive parallelism, and invention of architectural structures to support a reasonably efficient
and user-friendly programming model for such systems, has allowed hardware performance
to continue its exponential growth. This trend is expected to continue in the near future.
This explosive growth, linked with the expectation that performance will continue its
exponential rise with each new generation of hardware and that (in stark contrast to software)
computer hardware will function correctly as soon as it comes off the assembly line, has its
down side. It has led to unprecedented hardware complexity and almost intolerable devel-
opment costs. The challenge facing current and future computer designers is to institute
simplicity where we now have complexity; to use fundamental theories being developed in
this area to gain performance and ease-of-use benefits from simpler circuits; to understand
the interplay between technological capabilities and limitations, on the one hand, and design
decisions based on user and application requirements on the other.
In computer designers’ quest for user-friendliness, compactness, simplicity, high per-
formance, low cost, and low power, parallel processing plays a key role. High-performance
uniprocessors are becoming increasingly complex, expensive, and power-hungry. A basic
trade-off thus exists between the use of one or a small number of such complex processors,
at one extreme, and a moderate to very large number of simpler processors, at the other.
When combined with a high-bandwidth, but logically simple, interprocessor communication
facility, the latter approach leads to significant simplification of the design process. However,
two major roadblocks have thus far prevented the widespread adoption of such moderately
to massively parallel architectures: the interprocessor communication bottleneck and the
difficulty, and thus high cost, of algorithm/software development.


The above context is changing because of several factors. First, at very high clock rates,
the link between the processor and memory becomes very critical. CPUs can no longer be
designed and verified in isolation. Rather, an integrated processor/memory design optimiza-
tion is required, which makes the development even more complex and costly. VLSI
technology now allows us to put more transistors on a chip than required by even the most
advanced superscalar processor. The bulk of these transistors are now being used to provide
additional on-chip memory. However, they can just as easily be used to build multiple
processors on a single chip. Emergence of multiple-processor microchips, along with
currently available methods for glueless combination of several chips into a larger system
and maturing standards for parallel machine models, holds the promise for making parallel
processing more practical.
This is the reason parallel processing occupies such a prominent place in computer
architecture education and research. New parallel architectures appear with amazing regu-
larity in technical publications, while older architectures are studied and analyzed in novel
and insightful ways. The wealth of published theoretical and practical results on parallel
architectures and algorithms is truly awe-inspiring. The emergence of standard programming
and communication models has removed some of the concerns with compatibility and
software design issues in parallel processing, thus resulting in new designs and products with
mass-market appeal. Given the computation-intensive nature of many application areas (such
as encryption, physical modeling, and multimedia), parallel processing will continue to
thrive for years to come.
Perhaps, as parallel processing matures further, it will start to become invisible. Packing
many processors in a computer might constitute as much a part of a future computer
architect’s toolbox as pipelining, cache memories, and multiple instruction issue do today.
In this scenario, even though the multiplicity of processors will not affect the end user or
even the professional programmer (other than of course boosting the system performance),
the number might be mentioned in sales literature to lure customers in the same way that
clock frequency and cache size are now used. The challenge will then shift from making
parallel processing work to incorporating a larger number of processors, more economically
and in a truly seamless fashion.

THE GOALS AND STRUCTURE OF THIS BOOK

The field of parallel processing has matured to the point that scores of texts and reference
books have been published. Some of these books that cover parallel processing in general
(as opposed to some special aspects of the field or advanced/unconventional parallel systems)
are listed at the end of this preface. Each of these books has its unique strengths and has
contributed to the formation and fruition of the field. The current text, Introduction to Parallel
Processing: Algorithms and Architectures, is an outgrowth of lecture notes that the author
has developed and refined over many years, beginning in the mid-1980s. Here are the most
important features of this text in comparison to the listed books:

1. Division of material into lecture-size chapters. In my approach to teaching, a lecture


is a more or less self-contained module with links to past lectures and pointers to
what will transpire in the future. Each lecture must have a theme or title and must
proceed from motivation, to details, to conclusion. There must be smooth transitions


between lectures and a clear enunciation of how each lecture fits into the overall
plan. In designing the text, I have strived to divide the material into chapters, each
of which is suitable for one lecture (1–2 hours). A short lecture can cover the first
few subsections, while a longer lecture might deal with more advanced material
near the end. To make the structure hierarchical, as opposed to flat or linear, chapters
have been grouped into six parts, each composed of four closely related chapters
(see diagram on page xi).
2. A large number of meaningful problems. At least 13 problems have been provided
at the end of each of the 24 chapters. These are well-thought-out problems, many
of them class-tested, that complement the material in the chapter, introduce new
viewing angles, and link the chapter material to topics in other chapters.
3. Emphasis on both the underlying theory and practical designs. The ability to cope
with complexity requires both a deep knowledge of the theoretical underpinnings
of parallel processing and examples of designs that help us understand the theory.
Such designs also provide hints/ideas for synthesis as well as reference points for
cost–performance comparisons. This viewpoint is reflected, e.g., in the coverage of
problem-driven parallel machine designs (Chapter 8) that point to the origins of the
butterfly and binary-tree architectures. Other examples are found in Chapter 16
where a variety of composite and hierarchical architectures are discussed and some
fundamental cost–performance trade-offs in network design are exposed. Fifteen
carefully chosen case studies in Chapters 21–23 provide additional insight and
motivation for the theories discussed.
4. Linking parallel computing to other subfields of computer design. Parallel comput-
ing is nourished by, and in turn feeds, other subfields of computer architecture and
technology. Examples of such links abound. In computer arithmetic, the design of
high-speed adders and multipliers contributes to, and borrows many methods from,
parallel processing. Some of the earliest parallel systems were designed by re-
searchers in the field of fault-tolerant computing in order to allow independent
multichannel computations and/or dynamic replacement of failed subsystems.
These links are pointed out throughout the book.
5. Wide coverage of important topics. The current text covers virtually all important
architectural and algorithmic topics in parallel processing, thus offering a balanced
and complete view of the field. Coverage of the circuit model and problem-driven
parallel machines (Chapters 7 and 8), some variants of mesh architectures (Chapter
12), composite and hierarchical systems (Chapter 16), which are becoming increas-
ingly important for overcoming VLSI layout and packaging constraints, and the
topics in Part V (Chapters 17–20) do not all appear in other textbooks. Similarly,
other books that cover the foundations of parallel processing do not contain
discussions on practical implementation issues and case studies of the type found
in Part VI.
6. Unified and consistent notation/terminology throughout the text. I have tried very
hard to use consistent notation/terminology throughout the text. For example, n
always stands for the number of data elements (problem size) and p for the number
of processors. While other authors have done this in the basic parts of their texts,
there is a tendency to cover more advanced research topics by simply borrowing
the notation and terminology from the reference source. Such an approach has the
advantage of making the transition between reading the text and the original
reference source easier, but it is utterly confusing to the majority of the students
who rely on the text and do not consult the original references except, perhaps, to
write a research paper.

SUMMARY OF TOPICS

The six parts of this book, each composed of four chapters, have been written with the
following goals:

 Part I sets the stage, gives a taste of what is to come, and provides the needed
perspective, taxonomy, and analysis tools for the rest of the book.
 Part II delimits the models of parallel processing from above (the abstract PRAM
model) and from below (the concrete circuit model), preparing the reader for everything
else that falls in the middle.
 Part III presents the scalable, and conceptually simple, mesh model of parallel process-
ing, which has become quite important in recent years, and also covers some of its
derivatives.
 Part IV covers low-diameter parallel architectures and their algorithms, including the
hypercube, hypercube derivatives, and a host of other interesting interconnection
topologies.
 Part V includes broad (architecture-independent) topics that are relevant to a wide range
of systems and form the stepping stones to effective and reliable parallel processing.
 Part VI deals with implementation aspects and properties of various classes of parallel
processors, presenting many case studies and projecting a view of the past and future
of the field.

POINTERS ON HOW TO USE THE BOOK

For classroom use, the topics in each chapter of this text can be covered in a lecture
spanning 1–2 hours. In my own teaching, I have used the chapters primarily for 1-1/2-hour
lectures, twice a week, in a 10-week quarter, omitting or combining some chapters to fit the
material into 18–20 lectures. But the modular structure of the text lends itself to other lecture
formats, self-study, or review of the field by practitioners. In the latter two cases, the readers
can view each chapter as a study unit (for 1 week, say) rather than as a lecture. Ideally, all
topics in each chapter should be covered before moving to the next chapter. However, if fewer
lecture hours are available, then some of the subsections located at the end of chapters can
be omitted or introduced only in terms of motivations and key results.
Problems of varying complexities, from straightforward numerical examples or exercises
to more demanding studies or miniprojects, have been supplied for each chapter. These problems
form an integral part of the book and have not been added as afterthoughts to make the book
more attractive for use as a text. A total of 358 problems are included (13–16 per chapter).
Assuming that two lectures are given per week, either weekly or biweekly homework can
be assigned, with each assignment having the specific coverage of the respective half-part
(two chapters) or full part (four chapters) as its “title.” In this format, the half-parts, shown
in the diagram of the book’s structure (in parts, half-parts, and chapters), provide a focus for
the weekly lecture and/or homework schedule.
An instructor’s manual, with problem solutions and enlarged versions of the diagrams
and tables, suitable for reproduction as transparencies, is planned. The author’s detailed
syllabus for the course ECE 254B at UCSB is available at http://www.ece.ucsb.edu/courses/
syllabi/ece254b.html.
References to important or state-of-the-art research contributions and designs are
provided at the end of each chapter. These references provide good starting points for doing
in-depth studies or for preparing term papers/projects.
New ideas in the field of parallel processing appear in papers presented at several annual
conferences, known as FMPC, ICPP, IPPS, SPAA, SPDP (now merged with IPPS), and in
archival journals such as IEEE Transactions on Computers [TCom], IEEE Transactions on
Parallel and Distributed Systems [TPDS], Journal of Parallel and Distributed Computing
[JPDC], Parallel Computing [ParC], and Parallel Processing Letters [PPL]. Tutorial and
survey papers of wide scope appear in IEEE Concurrency [Conc] and, occasionally, in IEEE
Computer [Comp]. The articles in IEEE Computer provide excellent starting points for
research projects and term papers.

ACKNOWLEDGMENTS

The current text, Introduction to Parallel Processing: Algorithms and Architectures, is


an outgrowth of lecture notes that the author has used for the graduate course “ECE 254B:
Advanced Computer Architecture: Parallel Processing” at the University of California, Santa
Barbara, and, in rudimentary forms, at several other institutions prior to 1988. The text has
benefited greatly from keen observations, curiosity, and encouragement of my many students
in these courses. A sincere thanks to all of them! Particular thanks go to Dr. Ding-Ming Kwai
who read an early version of the manuscript carefully and suggested numerous corrections
and improvements.

GENERAL REFERENCES
[Akl89] Akl, S. G., The Design and Analysis of Parallel Algorithms, Prentice–Hall, 1989.
[Akl97] Akl, S. G., Parallel Computation: Models and Methods, Prentice–Hall, 1997.
[Alma94] Almasi, G. S., and A. Gottlieb, Highly Parallel Computing, Benjamin/Cummings, 2nd ed., 1994.
[Bert89] Bertsekas, D. P., and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods,
Prentice–Hall, 1989.
[Code93] Codenotti, B., and M. Leoncini, Introduction to Parallel Processing, Addison–Wesley, 1993.
[Comp] IEEE Computer, journal published by IEEE Computer Society: has occasional special issues on
parallel/distributed processing (February 1982, June 1985, August 1986, June 1987, March 1988,
August 1991, February 1992, November 1994, November 1995, December 1996).
[Conc] IEEE Concurrency, formerly IEEE Parallel and Distributed Technology, magazine published by
IEEE Computer Society.
[Cric88] Crichlow, J. M., Introduction to Distributed and Parallel Computing, Prentice–Hall, 1988.
[DeCe89] DeCegama, A. L., Parallel Processing Architectures and VLSI Hardware, Prentice–Hall, 1989.
[Desr87] Desrochers, G. R., Principles of Parallel and Multiprocessing, McGraw-Hill, 1987.
[Duat97] Duato, J., S. Yalamanchili, and L. Ni, Interconnection Networks: An Engineering Approach, IEEE
Computer Society Press, 1997.
[Flyn95] Flynn, M. J., Computer Architecture: Pipelined and Parallel Processor Design, Jones and Bartlett,
1995.
[FMPC] Proc. Symp. Frontiers of Massively Parallel Computation, sponsored by IEEE Computer Society and
NASA. Held every 1 1/2–2 years since 1986. The 6th FMPC was held in Annapolis, MD, October
27–31, 1996, and the 7th is planned for February 20–25, 1999.
[Foun94] Fountain, T. J., Parallel Computing: Principles and Practice, Cambridge University Press, 1994.
[Hock81] Hockney, R. W., and C. R. Jesshope, Parallel Computers, Adam Hilger, 1981.
[Hord90] Hord, R. M., Parallel Supercomputing in SIMD Architectures, CRC Press, 1990.
[Hord93] Hord, R. M., Parallel Supercomputing in MIMD Architectures, CRC Press, 1993.
[Hwan84] Hwang, K., and F. A. Briggs, Computer Architecture and Parallel Processing, McGraw-Hill, 1984.
[Hwan93] Hwang, K., Advanced Computer Architecture: Parallelism, Scalability, Programmability, McGraw-
Hill, 1993.
[Hwan98] Hwang, K., and Z. Xu, Scalable Parallel Computing: Technology, Architecture, Programming,
McGraw-Hill, 1998.
[ICPP] Proc. Int. Conference Parallel Processing, sponsored by The Ohio State University (and in recent
years, also by the International Association for Computers and Communications). Held annually since
1972.*
[IPPS] Proc. Int. Parallel Processing Symp., sponsored by IEEE Computer Society. Held annually since
1987. The 11th IPPS was held in Geneva, Switzerland, April 1–5, 1997. Beginning with the 1998
symposium in Orlando, FL, March 30–April 3, IPPS was merged with SPDP.**
[JaJa92] JaJa, J., An Introduction to Parallel Algorithms, Addison–Wesley, 1992.
[JPDC] Journal of Parallel and Distributed Computing, Published by Academic Press.
[Kris89] Krishnamurthy, E. V., Parallel Processing: Principles and Practice, Addison–Wesley, 1989.
[Kuma94] Kumar, V., A. Grama, A. Gupta, and G. Karypis, Introduction to Parallel Computing: Design and
Analysis of Algorithms, Benjamin/Cummings, 1994.
[Laks90] Lakshmivarahan, S., and S. K. Dhall, Analysis and Design of Parallel Algorithms: Arithmetic and
Matrix Problems, McGraw-Hill, 1990.
[Leig92] Leighton, F. T., Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes,
Morgan Kaufmann, 1992.
[Lerm94] Lerman, G., and L. Rudolph, Parallel Evolution of Parallel Processors, Plenum, 1994.
[Lipo87] Lipovski, G. J., and M. Malek, Parallel Computing: Theory and Comparisons, Wiley, 1987.
[Mold93] Moldovan, D. I., Parallel Processing: From Applications to Systems, Morgan Kaufmann, 1993.
[ParC] Parallel Computing, journal published by North-Holland.
[PPL] Parallel Processing Letters, journal published by World Scientific.
[Quin87] Quinn, M. J., Designing Efficient Algorithms for Parallel Computers, McGraw-Hill, 1987.
[Quin94] Quinn, M. J., Parallel Computing: Theory and Practice, McGraw-Hill, 1994.
[Reif93] Reif, J. H. (ed.), Synthesis of Parallel Algorithms, Morgan Kaufmann, 1993.
[Sanz89] Sanz, J. L. C. (ed.), Opportunities and Constraints of Parallel Computing (IBM/NSF Workshop, San
Jose, CA, December 1988), Springer-Verlag, 1989.
[Shar87] Sharp, J. A., An Introduction to Distributed and Parallel Processing, Blackwell Scientific Publica-
tions, 1987.
[Sieg85] Siegel, H. J., Interconnection Networks for Large-Scale Parallel Processing, Lexington Books, 1985.
[SPAA] Proc. Symp. Parallel Algorithms and Architectures, sponsored by the Association for Computing
Machinery (ACM). Held annually since 1989. The 10th SPAA was held in Puerto Vallarta, Mexico,
June 28–July 2, 1998.
[SPDP] Proc. Int. Symp. Parallel and Distributed Systems, sponsored by IEEE Computer Society. Held
annually since 1989, except for 1997. The 8th SPDP was held in New Orleans, LA, October 23–26,
1996. Beginning with the 1998 symposium in Orlando, FL, March 30–April 3, SPDP was merged
with IPPS.
[Ston93] Stone, H. S., High-Performance Computer Architecture, Addison–Wesley, 1993.
[TCom] IEEE Trans. Computers, journal published by IEEE Computer Society; has occasional special issues
on parallel and distributed processing (April 1987, December 1988, August 1989, December 1991,
April 1997, April 1998).
[TPDS] IEEE Trans. Parallel and Distributed Systems, journal published by IEEE Computer Society.
[Varm94] Varma, A., and C. S. Raghavendra, Interconnection Networks for Multiprocessors and Multicomput-
ers: Theory and Practice, IEEE Computer Society Press, 1994.
[Zoma96] Zomaya, A. Y. (ed.), Parallel and Distributed Computing Handbook, McGraw-Hill, 1996.

*The 27th ICPP was held in Minneapolis, MN, August 10–15, 1998, and the 28th is scheduled for September
21–24, 1999, in Aizu, Japan.
**The next joint IPPS/SPDP is scheduled for April 12–16, 1999, in San Juan, Puerto Rico.
Contents

Part I. Fundamental Concepts . . . . . . . . . . . . . . . . . . . . . . . 1

1. Introduction to Parallelism . . . . . . . . . . . . . . . . . . . . . 3
1.1. Why Parallel Processing? . . . . . . . . . . . . . . . . . . . . . . 5
1.2. A Motivating Example . . . . . . . . . . . . . . . . . . . . . . . 8
1.3. Parallel Processing Ups and Downs . . . . . . . . . . . . . . . . 13
1.4. Types of Parallelism: A Taxonomy . . . . . . . . . . . . . . . . . 15
1.5. Roadblocks to Parallel Processing . . . . . . . . . . . . . . . . . 16
1.6. Effectiveness of Parallel Processing . . . . . . . . . . . . . . . . 19
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 21
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 23

2. A Taste of Parallel Algorithms . . . . . . . . . . . . . . . . . . . 25


2.1. Some Simple Computations . . . . . . . . . . . . . . . . . . . . 27
2.2. Some Simple Architectures . . . . . . . . . . . . . . . . . . . . . 28
2.3. Algorithms for a Linear Array . . . . . . . . . . . . . . . . . . . 30
2.4. Algorithms for a Binary Tree . . . . . . . . . . . . . . . . . . . . 34
2.5. Algorithms for a 2D Mesh . . . . . . . . . . . . . . . . . . . . . 39
2.6. Algorithms with Shared Variables . . . . . . . . . . . . . . . . . 40
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 43

3. Parallel Algorithm Complexity . . . . . . . . . . . . . . . . . . . 45


3.1. Asymptotic Complexity . . . . . . . . . . . . . . . . . . . . . . . 47
3.2. Algorithm Optimality and Efficiency . . . . . . . . . . . . . . . . 50
3.3. Complexity Classes . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.4. Parallelizable Tasks and the NC Class . . . . . . . . . . . . . . . 55
3.5. Parallel Programming Paradigms . . . . . . . . . . . . . . . . . . 56
3.6. Solving Recurrences . . . . . . . . . . . . . . . . . . . . . . . . 58

xv
xvi INTRODUCTION TO PARALLEL PROCESSING

Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 63

4. Models of Parallel Processing . . . . . . . . . . . . . . . . . . . 65


4.1. Development of Early Models . . . . . . . . . . . . . . . . . . . 67
4.2. SIMD versus MIMD Architectures . . . . . . . . . . . . . . . . 69
4.3. Global versus Distributed Memory . . . . . . . . . . . . . . . . . 71
4.4. The PRAM Shared-Memory Model . . . . . . . . . . . . . . . . 74
4.5. Distributed-Memory or Graph Models . . . . . . . . . . . . . . . 77
4.6. Circuit Model and Physical Realizations . . . . . . . . . . . . . . 80
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 85

Part II. Extreme Models . . . . . . . . . . . . . . . . . . . . . . . . . . 87

5. PRAM and Basic Algorithms . . . . . . . . . . . . . . . . . . . . 89


5.1. PRAM Submodels and Assumptions . . . . . . . . . . . . . . . 91
5.2. Data Broadcasting . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.3. Semigroup or Fan-In Computation . . . . . . . . . . . . . . . . . 96
5.4. Parallel Prefix Computation . . . . . . . . . . . . . . . . . . . 98
5.5. Ranking the Elements of a Linked List . . . . . . . . . . . . . . 99
5.6. Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . 102
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 108

6. More Shared-Memory Algorithms . . . . . . . . . . . . . . . . . 109


6.1. Sequential Rank-Based Selection . . . . . . . . . . . . . . . . . 111
6.2. A Parallel Selection Algorithm . . . . . . . . . . . . . . . . . . . 113
6.3. A Selection-Based Sorting Algorithm . . . . . . . . . . . . . . . 114
6.4. Alternative Sorting Algorithms . . . . . . . . . . . . . . . . . . . 117
6.5. Convex Hull of a 2D Point Set . . . . . . . . . . . . . . . . . . . 118
6.6. Some Implementation Aspects . . . . . . . . . . . . . . . . . . . 121
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 127

7. Sorting and Selection Networks . . . . . . . . . . . . . . . . . . 129


7.1. What Is a Sorting Network? . . . . . . . . . . . . . . . . . . . . 131
7.2. Figures of Merit for Sorting Networks . . . . . . . . . . . . . . . 133
7.3. Design of Sorting Networks . . . . . . . . . . . . . . . . . . . . 135
7.4. Batcher Sorting Networks . . . . . . . . . . . . . . . . . . . . . 136
7.5. Other Classes of Sorting Networks . . . . . . . . . . . . . . . . . 141
7.6. Selection Networks . . . . . . . . . . . . . . . . . . . . . . . . . 142
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
References and Suggested Reading . . . . . . . . . . . . . . . . . . . 147
CONTENTS xvii

8. Other Circuit-Level Examples . . . . . . . . . . . . . . . . . . . 149


8.1. Searching and Dictionary Operations . . . . . . . . . . . . . . . . 151
8.2. A Tree-Structured Dictionary Machine . . . . . . . . . . . . . . . 152
8.3. Parallel Prefix Computation . . . . . . . . . . . . . . . . . . . . 156
8.4. Parallel Prefix Networks . . . . . . . . . . . . . . . . . . . . . . 157
8.5. The Discrete Fourier Transform . . . . . . . . . . . . . . . . . . 161
8.6. Parallel Architectures for FFT . . . . . . . . . . . . . . . . . . . 163
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 168

Part III. Mesh-Based Architectures . . . . . . . . . . . . . . . . . . . . . . . . 169

9. Sorting on a 2D Mesh or Torus . . . . . . . . . . . . . . . . . . . . 171

9.1. Mesh-Connected Computers . . . . . . . . . . . . . . . . . . . . 173


9.2. The Shearsort Algorithm . . . . . . . . . . . . . . . . . . . . . . 176
9.3. Variants of Simple Shearsort . . . . . . . . . . . . . . . . . . . . 179
9.4. Recursive Sorting Algorithms . . . . . . . . . . . . . . . . . . . 180
9.5. A Nontrivial Lower Bound . . . . . . . . . . . . . . . . . . . . . 183
9.6. Achieving the Lower Bound . . . . . . . . . . . . . . . . . . . . 186
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 190

10. Routing on a 2D Mesh or Torus . . . . . . . . . . . . . . . . . . . . 191


10.1. Types of Data Routing Operations . . . . . . . . . . . . . . . . 193
10.2. Useful Elementary Operations . . . . . . . . . . . . . . . . . . 195
10.3. Data Routing on a 2D Array . . . . . . . . . . . . . . . . . . . 197
10.4. Greedy Routing Algorithms . . . . . . . . . . . . . . . . . . . . 199
10.5. Other Classes of Routing Algorithms . . . . . . . . . . . . . . . 202
10.6. Wormhole Routing . . . . . . . . . . . . . . . . . . . . . . . . 204
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 210

11. Numerical 2D Mesh Algorithms . . . . . . . . . . . . . . . . . . . 211


11.1. Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . 213
11.2. Triangular System of Equations . . . . . . . . . . . . . . . . . . 215
11.3. Tridiagonal System of Linear Equations . . . . . . . . . . . . . 218
11.4. Arbitrary System of Linear Equations . . . . . . . . . . . . . . . 221
11.5. Graph Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 225
11.6. Image-Processing Algorithms . . . . . . . . . . . . . . . . . . . 228
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 233

12. Other Mesh-Related Architectures . . . . . . . . . . . . . . . . . 235


12.1. Three or More Dimensions . . . . . . . . . . . . . . . . . . . . 237
xviii INTRODUCTION TO PARALLEL PROCESSING

12.2. Stronger and Weaker Connectivities . . . . . . . . . . . . . . . 240


12.3. Meshes Augmented with Nonlocal Links . . . . . . . . . . . . . 242
12.4. Meshes with Dynamic Links . . . . . . . . . . . . . . . . . . . . . . . . . 245
12.5. Pyramid and Multigrid Systems . . . . . . . . . . . . . . . . . . . . . . . . 246
12.6. Meshes of Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 256

Part IV. Low-Diameter Architectures . . . . . . . . . . . . . . . . . . . . 257

13. Hypercubes and Their Algorithms . . . . . . . . . . . . . . . . . 259


13.1. Definition and Main Properties . . . . . . . . . . . . . . . . . . 261
13.2. Embeddings and Their Usefulness . . . . . . . . . . . . . . . . 263
13.3. Embedding of Arrays and Trees . . . . . . . . . . . . . . . . . . 264
13.4. A Few Simple Algorithms . . . . . . . . . . . . . . . . . . . . . 269
13.5. Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . 272
13.6. Inverting a Lower Triangular Matrix . . . . . . . . . . . . . . . 274
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 278

14. Sorting and Routing on Hypercubes . . . . . . . . . . . . . . . . 279


14.1. Defining the Sorting Problem . . . . . . . . . . . . . . . . . . . 281
14.2. Bitonic Sorting on a Hypercube . . . . . . . . . . . . . . . . . . 284
14.3. Routing Problems on a Hypercube . . . . . . . . . . . . . . . . 285
14.4. Dimension-Order Routing . . . . . . . . . . . . . . . . . . . . . 288
14.5. Broadcasting on a Hypercube . . . . . . . . . . . . . . . . . . . 292
14.6. Adaptive and Fault-Tolerant Routing . . . . . . . . . . . . . . . 294
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 298

15. Other Hypercubic Architectures . . . . . . . . . . . . . . . . . . 301


15.1. Modified and Generalized Hypercubes . . . . . . . . . . . . . . 303
15.2. Butterfly and Permutation Networks . . . . . . . . . . . . . . . 305
15.3. Plus-or-Minus-2ⁱ Network . . . . . . . . . . . . . . . . . . . . . 309
15.4. The Cube-Connected Cycles Network . . . . . . . . . . . . . . 310
15.5. Shuffle and Shuffle–Exchange Networks . . . . . . . . . . . . . 313
15.6. That’s Not All, Folks! . . . . . . . . . . . . . . . . . . . . . . . 316
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 320

16. A Sampler of Other Networks . . . . . . . . . . . . . . . . . . . 321


16.1. Performance Parameters for Networks . . . . . . . . . . . . . . 323
16.2. Star and Pancake Networks . . . . . . . . . . . . . . . . . . . . 326
16.3. Ring-Based Networks . . . . . . . . . . . . . . . . . . . . . . . 329
CONTENTS xix

16.4. Composite or Hybrid Networks . . . . . . . . . . . . . . . . . . 335


16.5. Hierarchical (Multilevel) Networks . . . . . . . . . . . . . . . . 337
16.6. Multistage Interconnection Networks . . . . . . . . . . . . . . . 338
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 343

Part V. Some Broad Topics . . . . . . . . . . . . . . . . . . . . . . . .. 345

17. Emulation and Scheduling . . . . . . . . . . . . . . . . . . . . . 347


17.1. Emulations among Architectures . . . . . . . . . . . . . . . . . 349
17.2. Distributed Shared Memory . . . . . . . . . . . . . . . . . . . . 351
17.3. The Task Scheduling Problem . . . . . . . . . . . . . . . . . . . 355
17.4. A Class of Scheduling Algorithms . . . . . . . . . . . . . . . . 357
17.5. Some Useful Bounds for Scheduling . . . . . . . . . . . . . . . 360
17.6. Load Balancing and Dataflow Systems . . . . . . . . . . . . . . 362
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 367

18. Data Storage, Input, and Output . . . . . . . . . . . . . . . . . . 369


18.1. Data Access Problems and Caching . . . . . . . . . . . . . . . . 371
18.2. Cache Coherence Protocols . . . . . . . . . . . . . . . . . . .. 374
18.3. Multithreading and Latency Hiding . . . . . . . . . . . . . . . . 377
18.4. Parallel I/O Technology . . . . . . . . . . . . . . . . . . . . . . 379
18.5. Redundant Disk Arrays . . . . . . . . . . . . . . . . . . . . . . 382
18.6. Interfaces and Standards . . . . . . . . . . . . . . . . . . . . . . 384
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 388

19. Reliable Parallel Processing . . . . . . . . . . . . . . . . . . . . 391


19.1. Defects, Faults, . . . , Failures . . . . . . . . . . . . . . . . . . . 393
19.2. Defect-Level Methods . . . . . . . . . . . . . . . . . . . . . . . 396
19.3. Fault-Level Methods . . . . . . . . . . . . . . . . . . . . . . . . 399
19.4. Error-Level Methods . . . . . . . . . . . . . . . . . . . . . . . 402
19.5. Malfunction-Level Methods . . . . . . . . . . . . . . . . . . . . 404
19.6. Degradation-Level Methods . . . . . . . . . . . . . . . . . . . . . . . 407
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 413

20. System and Software Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415


20.1. Coordination and Synchronization . . . . . . . . . . . . . . . . 417
20.2. Parallel Programming . . . . . . . . . . . . . . . . . . . . . . . . . 421
20.3. Software Portability and Standards . . . . . . . . . . . . . . . . . . . . 425
20.4. Parallel Operating Systems . . . . . . . . . . . . . . . . . . . . 427
20.5. Parallel File Systems . . . . . . . . . . . . . . . . . . . . . . . 430
xx INTRODUCTION TO PARALLEL PROCESSING

20.6. Hardware/Software Interaction . . . . . . . . . . . . . . . . . 431


Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
References and Suggested Reading . . . . . . . . . . . . . . . . . 435

Part VI. Implementation Aspects . . . . . . . . . . . . . . . . . . . . . 437

21. Shared-Memory MIMD Machines . . . . . . . . . . . . . . . . .. . . . 439


21.1. Variations in Shared Memory . . . . . . . . . . . . . . . . . . . 441
21.2. MIN-Based BBN Butterfly . . . . . . . . . . . . . . . . . . . . 444
21.3. Vector-Parallel Cray Y-MP . . . . . . . . . . . . . . . . . . . . 445
21.4. Latency-Tolerant Tera MTA . . . . . . . . . . . . . . . . . . . . 448
21.5. CC-NUMA Stanford DASH . . . . . . . . . . . . . . . . . . . 450
21.6. SCI-Based Sequent NUMA-Q . . . . . . . . . . . . . . . . . . 452
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 457

22. Message-Passing MIMD Machines . . . . . . . . . . . . . . . . . . . 459


22.1. Mechanisms for Message Passing . . . . . . . . . . . . . . . . 461
22.2. Reliable Bus-Based Tandem Nonstop . . . . . . . . . . . . . . 464
22.3. Hypercube-Based nCUBE3 . . . . . . . . . . . . . . . . . . . . 466
22.4. Fat-Tree-Based Connection Machine 5 . . . . . . . . . . . . . . 469
22.5. Omega-Network-Based IBM SP2 . . . . . . . . . . . . . . . . . 471
22.6. Commodity-Driven Berkeley NOW . . . . . . . . . . . . . . . . 473
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 477

23. Data-Parallel SIMD Machines . . . . . . . . . . . . . . . . . . . 479


23.1. Where Have All the SIMDs Gone? . . . . . . . . . . . . . . . . 481
23.2. The First Supercomputer: ILLIAC IV . . . . . . . . . . . . . . . 484
23.3. Massively Parallel Goodyear MPP . . . . . . . . . . . . . . . . . 485
23.4. Distributed Array Processor (DAP) . . . . . . . . . . . . . . . . 488
23.5. Hypercubic Connection Machine 2 . . . . . . . . . . . . . . . . 490
23.6. Multiconnected MasPar MP-2 . . . . . . . . . . . . . . . . . . . 492
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 497

24. Past, Present, and Future . . . . . . . . . . . . . . . . . . . . . . 499


24.1. Milestones in Parallel Processing . . . . . . . . . . . . . . . . . 501
24.2. Current Status, Issues, and Debates . . . . . . . . . . . . . . . . . 503
24.3. TFLOPS, PFLOPS, and Beyond . . . . . . . . . . . . . . . . . 506
24.4. Processor and Memory Technologies . . . . . . . . . . . . . . . 508
24.5. Interconnection Technologies . . . . . . . . . . . . . . . . . . . 510
CONTENTS xxi

24.6. The Future of Parallel Processing . . . . . . . . . . . . . . . . . 513


Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
References and Suggested Reading . . . . . . . . . . . . . . . . . . . . 517

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
I
Fundamental
Concepts

The field of parallel processing is concerned with architectural and algorithmic


methods for enhancing the performance or other attributes (e.g., cost-effective-
ness, reliability) of digital computers through various forms of concurrency. Even
though concurrent computation has been around since the early days of digital
computers, only recently has it been applied in a manner, and on a scale, that
leads to better performance, or greater cost-effectiveness, compared with vector
supercomputers. Like any other field of science/technology, the study of parallel
architectures and algorithms requires motivation, a big picture showing the
relationships between problems and the various approaches to solving them,
and models for comparing, connecting, and evaluating new ideas. This part,
which motivates us to study parallel processing, paints the big picture, and
provides some needed background, is composed of four chapters:

• Chapter 1: Introduction to Parallelism


• Chapter 2: A Taste of Parallel Algorithms
• Chapter 3: Parallel Algorithm Complexity
• Chapter 4: Models of Parallel Processing

1
Introduction to
Parallelism

This chapter sets the context in which the material in the rest of the book will
be presented and reviews some of the challenges facing the designers and users
of parallel computers. The chapter ends with the introduction of useful metrics
for evaluating the effectiveness of parallel systems. Chapter topics are

• 1.1. Why parallel processing?


• 1.2. A motivating example
• 1.3. Parallel processing ups and downs
• 1.4. Types of parallelism: A taxonomy
• 1.5. Roadblocks to parallel processing
• 1.6. Effectiveness of parallel processing


1.1. WHY PARALLEL PROCESSING?


The quest for higher-performance digital computers seems unending. In the past two
decades, the performance of microprocessors has enjoyed an exponential growth. The growth
of microprocessor speed/performance by a factor of 2 every 18 months (or about 60% per
year) is known as Moore’s law. This growth is the result of a combination of two factors:

1. Increase in complexity (related both to higher device density and to larger size) of
VLSI chips, projected to rise to around 10 M transistors per chip for microproces-
sors, and 1B for dynamic random-access memories (DRAMs), by the year 2000
[SIA94]
2. Introduction of, and improvements in, architectural features such as on-chip cache
memories, large instruction buffers, multiple instruction issue per cycle, multi-
threading, deep pipelines, out-of-order instruction execution, and branch prediction

Moore’s law was originally formulated in 1965 in terms of the doubling of chip complexity
every year (later revised to every 18 months) based only on a small number of data points
[Scha97]. Moore’s revised prediction matches almost perfectly the actual increases in the
number of transistors in DRAM and microprocessor chips.
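
The two growth rates quoted above are mutually consistent: doubling every 18 months compounds to a factor of 2^(12/18) ≈ 1.59, i.e., roughly 60% growth, per year. The one-line check below is written in Python, which is also used for the other illustrative sketches in this chapter; none of these sketches appear in the original text.

```python
# Doubling every 18 months, compounded over a 12-month span:
annual_factor = 2 ** (12 / 18)
print(f"annual growth: {annual_factor - 1:.0%}")   # -> annual growth: 59%
```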
Moore’s law seems to hold regardless of how one measures processor performance:
counting the number of executed instructions per second (IPS), counting the number of
floating-point operations per second (FLOPS), or using sophisticated benchmark suites
that attempt to measure the processor's performance on real applications. This is because
all of these measures, though numerically different, tend to rise at roughly the same rate.
Figure 1.1 shows that the performance of actual processors has in fact followed Moore’s
law quite closely since 1980 and is on the verge of reaching the GIPS (giga IPS = 10⁹
IPS) milestone.
Even though it is expected that Moore's law will continue to hold for the near future,
there is a limit that will eventually be reached. That some previous predictions about when
the limit will be reached have proven wrong does not alter the fact that a limit, dictated by
physical laws, does exist. The most easily understood physical limit is that imposed by the
finite speed of signal propagation along a wire. This is sometimes referred to as the
speed-of-light argument (or limit), explained as follows.

The Speed-of-Light Argument. The speed of light is about 30 cm/ns. Signals travel
on a wire at a fraction of the speed of light. If the chip diameter is 3 cm, say, any computation
that involves signal transmission from one end of the chip to another cannot be executed
faster than 10¹⁰ times per second. Reducing distances by a factor of 10 or even 100 will only
increase the limit by these factors; we still cannot go beyond 10¹² computations per second.
To relate the above limit to the instruction execution rate (MIPS or FLOPS), we need to
estimate the distance that signals must travel within an instruction cycle. This is not easy to
do, given the extensive use of pipelining and memory-latency-hiding techniques in modern
high-performance processors. Despite this difficulty, it should be clear that we are in fact not
very far from limits imposed by the speed of signal propagation and several other physical
laws.
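
The bound in the speed-of-light argument is easy to reproduce. The sketch below simply plugs in the figures quoted above (a 30 cm/ns signal speed and a 3-cm chip diameter); it is illustrative arithmetic, not a physical model.

```python
signal_speed = 30e9        # cm per second, i.e., 30 cm/ns
chip_diameter = 3.0        # cm

# A computation requiring one end-to-end signal traversal per step:
print(f"{signal_speed / chip_diameter:.0e} steps per second")          # -> 1e+10

# Shrinking distances 100-fold raises the bound only by the same factor:
print(f"{signal_speed / (chip_diameter / 100):.0e} steps per second")  # -> 1e+12
```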

Figure 1.1. The exponential growth of microprocessor performance, known as Moore’s law,
shown over the past two decades.

The speed-of-light argument suggests that once the above limit has been reached, the
only path to improved performance is the use of multiple processors. Of course, the same
argument can be invoked to conclude that any parallel processor will also be limited by the
speed at which the various processors can communicate with each other. However, because
such communication does not have to occur for every low-level computation, the limit is less
serious here. In fact, for many applications, a large number of computation steps can be
performed between two successive communication steps, thus amortizing the communica-
tion overhead.
Here is another way to show the need for parallel processing. Figure 1.2 depicts the
improvement in performance for the most advanced high-end supercomputers in the same
20-year period covered by Fig. 1.1. Two classes of computers have been included: (1)
Cray-type pipelined vector supercomputers, represented by the lower straight line, and (2)
massively parallel processors (MPPs) corresponding to the shorter upper lines [Bell92].
We see from Fig. 1.2 that the first class will reach the TFLOPS performance benchmark
around the turn of the century. Even assuming that the performance of such machines will
continue to improve at this rate beyond the year 2000, the next milestone, i.e., PFLOPS (peta
FLOPS = 10¹⁵ FLOPS) performance, will not be reached until the year 2015. With massively
parallel computers, TFLOPS performance is already at hand, albeit at a relatively high cost.
PFLOPS performance within this class should be achievable in the 2000–2005 time frame,
again assuming continuation of the current trends. In fact, we already know of one serious
roadblock to continued progress at this rate: Research in the area of massively parallel
computing is not being funded at the levels it enjoyed in the 1980s.
But who needs supercomputers with TFLOPS or PFLOPS performance? Applications
of state-of-the-art high-performance computers in military, space research, and climate
modeling are conventional wisdom. Lesser known are applications in auto crash or engine
combustion simulation, design of pharmaceuticals, design and evaluation of complex ICs,
scientific visualization, and multimedia. In addition to these areas, whose current computa-
tional needs are met by existing supercomputers, there are unmet computational needs in

Figure 1.2. The exponential growth in supercomputer performance over the past two decades
[Bell92].

aerodynamic simulation of an entire aircraft, modeling of global climate over decades, and
investigating the atomic structures of advanced materials.
Let us consider a few specific applications, in the area of numerical simulation for
validating scientific hypotheses or for developing behavioral models, where TFLOPS
performance is required and PFLOPS performance would be highly desirable [Quin94].
To learn how the southern oceans transport heat to the South Pole, the following model
has been developed at Oregon State University. The ocean is divided into 4096 regions E–W,
1024 regions N–S, and 12 layers in depth (50 M 3D cells). A single iteration of the model
simulates ocean circulation for 10 minutes and involves about 30B floating-point operations.
To carry out the simulation for 1 year, about 50,000 iterations are required. Simulation for
6 years would involve 10¹⁶ floating-point operations.
In the field of fluid dynamics, the volume under study may be modeled by a 10³ × 10³
× 10³ lattice, with about 10³ floating-point operations needed per point over 10⁴ time steps.
This too translates to 10¹⁶ floating-point operations.
As a final example, in Monte Carlo simulation of a nuclear reactor, about 10¹¹ particles
must be tracked, as about 1 in 10⁸ particles escape from a nuclear reactor and, for accuracy,
we need at least 10³ escapes in the simulation. With 10⁴ floating-point operations needed per
particle tracked, the total computation constitutes about 10¹⁵ floating-point operations.
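
Each of these estimates follows by multiplying out the quoted figures, as the short sketch below does; all numbers are the ones stated above.

```python
ocean   = 30e9 * 50_000 * 6    # flops/iteration x iterations/year x years ~ 9 x 10^15
fluid   = 1e3**3 * 1e3 * 1e4   # lattice points x flops/point x time steps = 10^16
reactor = 1e11 * 1e4           # particles tracked x flops/particle       = 10^15

for name, flops in [("ocean", ocean), ("fluid", fluid), ("reactor", reactor)]:
    print(f"{name}: {flops:.0e} floating-point operations")
```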
From the above, we see that 10¹⁵–10¹⁶ floating-point operations are required for many
applications. If we consider 10³–10⁴ seconds a reasonable running time for such computa-
tions, the need for TFLOPS performance is evident. In fact, researchers have already begun
working toward the next milestone of PFLOPS performance, which would be needed to run
the above models with higher accuracy (e.g., 10 times finer subdivisions in each of three
dimensions) or for longer durations (more steps).
The motivations for parallel processing can be summarized as follows:

1. Higher speed, or solving problems faster. This is important when applications have
“hard” or “soft” deadlines. For example, we have at most a few hours of computation
time to do 24-hour weather forecasting or to produce timely tornado warnings.
2. Higher throughput, or solving more instances of given problems. This is important
when many similar tasks must be performed. For example, banks and airlines,
among others, use transaction processing systems that handle large volumes of data.
3. Higher computational power, or solving larger problems. This would allow us to
use very detailed, and thus more accurate, models or to carry out simulation runs
for longer periods of time (e.g., 5-day, as opposed to 24-hour, weather forecasting).

All three aspects above are captured by a figure-of-merit often used in connection with
parallel processors: the computation speed-up factor with respect to a uniprocessor. The
ultimate efficiency in parallel systems is to achieve a computation speed-up factor of p with
p processors. Although in many cases this ideal cannot be achieved, some speed-up is
generally possible. The actual gain in speed depends on the architecture used for the system
and the algorithm run on it. Of course, for a task that is (virtually) impossible to perform on
a single processor in view of its excessive running time, the computation speed-up factor can
rightly be taken to be larger than p or even infinite. This situation, which is the analogue of
several men moving a heavy piece of machinery or furniture in a few minutes, whereas one
of them could not move it at all, is sometimes referred to as parallel synergy.
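
In symbols, if T(1) denotes the running time on a uniprocessor and T(p) the running time on p processors, the speed-up is S(p) = T(1)/T(p), and the fraction S(p)/p of the ideal speed-up p that is actually attained is commonly called the efficiency. A minimal helper follows; the sample numbers are hypothetical.

```python
def speedup(t1, tp):
    """Speed-up: uniprocessor time divided by p-processor time."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Fraction of the ideal speed-up p that is actually attained."""
    return speedup(t1, tp) / p

# Hypothetical example: a 100-second task finishes in 30 seconds on 4 processors.
print(speedup(100, 30))          # 3.33... (out of an ideal 4)
print(efficiency(100, 30, 4))    # 0.83...
```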
This book focuses on the interplay of architectural and algorithmic speed-up tech-
niques. More specifically, the problem of algorithm design for general-purpose parallel
systems and its “converse,” the incorporation of architectural features to help improve
algorithm efficiency and, in the extreme, the design of algorithm-based special-purpose
parallel architectures, are considered.

1.2. A MOTIVATING EXAMPLE

A major issue in devising a parallel algorithm for a given problem is the way in which
the computational load is divided between the multiple processors. The most efficient scheme
often depends both on the problem and on the parallel machine’s architecture. This section
exposes some of the key issues in parallel processing through a simple example [Quin94].
Consider the problem of constructing the list of all prime numbers in the interval [1, n]
for a given integer n > 0. A simple algorithm that can be used for this computation is the
sieve of Eratosthenes. Start with the list of numbers 1, 2, 3, 4, . . . , n represented as a “mark”
bit-vector initialized to 1000 . . . 00. In each step, the next unmarked number m (associated
with a 0 in element m of the mark bit-vector) is a prime. Find this element m and mark all
multiples of m beginning with m². When m² > n, the computation stops and all unmarked
elements are prime numbers. The computation steps for n = 30 are shown in Fig. 1.3.

Figure 1.4. Schematic representation of single-processor solution for the sieve of Eratosthenes.

Figure 1.4 shows a single-processor implementation of the algorithm. The variable
“current prime” is initialized to 2 and, in later stages, holds the latest prime number found.
For each prime found, “index” is initialized to the square of this prime and is then
incremented by the current prime in order to mark all of its multiples.
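
In code, the single-processor scheme of Fig. 1.4 might look as follows. This is a minimal
Python sketch of our own (the figure gives the algorithm only schematically); the two
variables mirror those of the figure:

    def sieve(n):
        """Return the list of primes in [1, n] via the sieve of Eratosthenes."""
        mark = [False] * (n + 1)        # mark[m] = True means m is known to be composite
        mark[0] = mark[1] = True        # 1 is not a prime
        current_prime = 2
        while current_prime * current_prime <= n:
            if not mark[current_prime]:
                index = current_prime * current_prime    # marking starts at the square
                while index <= n:
                    mark[index] = True
                    index += current_prime               # step by the current prime
            current_prime += 1
        return [m for m in range(2, n + 1) if not mark[m]]

    print(sieve(30))    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]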
Figure 1.5 shows our first parallel solution using p processors. The list of numbers and
the current prime are stored in a shared memory that is accessible to all processors. An idle
processor simply refers to the shared memory, updates the current prime, and uses its private
index to step through the list and mark the multiples of that prime. Division of work is thus
self-regulated. Figure 1.6 shows the activities of the processors (the prime they are working
on at any given instant) and the termination time for n = 1000 and 1 ≤ p ≤ 3. Note that using
more than three processors would not reduce the computation time in this control-parallel
scheme.
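
The self-regulated division of work can be sketched with threads standing in for the p
processors. The code below is a rough illustration of ours, not a transcription of the book’s
scheme: because workers overlap in time, a composite whose marking is still in progress may
occasionally be grabbed as if it were prime, which wastes work but leaves the result correct;
and, because of Python’s global interpreter lock, the sketch demonstrates the division of
work rather than an actual speed-up.

    import threading

    def control_parallel_sieve(n, p):
        mark = [False] * (n + 1)
        mark[0] = mark[1] = True
        state = {"current": 1}            # the last number handed out as a prime
        lock = threading.Lock()

        def worker():
            while True:
                with lock:                # an idle processor grabs the next unmarked number
                    m = state["current"] + 1
                    while m * m <= n and mark[m]:
                        m += 1
                    if m * m > n:
                        return            # all primes up to sqrt(n) have been handled
                    state["current"] = m
                for index in range(m * m, n + 1, m):
                    mark[index] = True    # mark multiples of m in the shared list

        threads = [threading.Thread(target=worker) for _ in range(p)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return [m for m in range(2, n + 1) if not mark[m]]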
We next examine a data-parallel approach in which the bit-vector representing the n
integers is divided into p equal-length segments, with each segment stored in the private
memory of one processor (Fig. 1.7). Assume that p < √n, so that all of the primes whose
multiples have to be marked reside in Processor 1, which acts as a coordinator: It finds the
next prime and broadcasts it to all other processors, which then proceed to mark the numbers
in their sublists. The overall solution time now consists of two components: the time spent
on transmitting the selected primes to all processors (communication time) and the time spent
by individual processors marking their sublists (computation time). Typically, communica-
tion time grows with the number of processors, though not necessarily in a linear fashion.
Figure 1.8 shows that because of the abovementioned communication overhead, adding more
processors beyond a certain optimal number does not lead to any improvement in the total
solution time or in attainable speed-up.
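
The marking step performed by each processor is worth making concrete. When a prime m
is broadcast, a processor holding the subrange that begins at number lo must start at the
first multiple of m that is both within its segment and at least m². A small sketch (the
function and its interface are our own invention):

    def mark_segment(mark, lo, m):
        """Mark multiples of the broadcast prime m in a local segment,
        where mark[i] stands for the number lo + i."""
        hi = lo + len(mark) - 1
        first = ((lo + m - 1) // m) * m          # smallest multiple of m that is >= lo
        for x in range(max(first, m * m), hi + 1, m):
            mark[x - lo] = True

Processor 1, which holds the low end of the list, additionally scans its own segment to find
each successive prime before broadcasting it to the others.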

Figure 1.5. Schematic representation of a control-parallel solution for the sieve of Eratosthenes.

Figure 1.7. Data-parallel realization of the sieve of Eratosthenes.

Finally, consider the data-parallel solution, but with data I/O time also included in the
total solution time. Assuming for simplicity that the I/O time is constant and ignoring
communication time, the I/O time will constitute a larger fraction of the overall solution time
as the computation part is speeded up by adding more and more processors. If I/O takes 100
seconds, say, then there is little difference between doing the computation part in 1 second
or in 0.01 second. We will later see that such “sequential” or “unparallelizable” portions of
computations severely limit the speed-up that can be achieved with parallel processing.
Figure 1.9 shows the effect of I/O on the total solution time and the attainable speed-up.
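
The arithmetic behind this limit is easily tabulated. Assuming, as in the text, 100 seconds
of I/O and, purely for illustration, 100 seconds of perfectly parallelizable computation:

    t_io, t_comp = 100.0, 100.0                  # seconds; illustrative values only
    for p in (1, 10, 100, 10_000):
        total = t_io + t_comp / p                # the I/O term does not shrink with p
        speedup = (t_io + t_comp) / total
        print(f"p = {p:6d}   total = {total:8.2f} s   speed-up = {speedup:.3f}")
    # The speed-up approaches (t_io + t_comp)/t_io = 2 and can never exceed it.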

Figure 1.8. Trade-off between communication time and computation time in the data-parallel
realization of the sieve of Eratosthenes.

Figure 1.9. Effect of a constant I/O time on the data-parallel realization of the sieve of
Eratosthenes.

1.3. PARALLEL PROCESSING UPS AND DOWNS


L. F. Richardson, a British meteorologist, was the first person to attempt to forecast the
weather using numerical computations. He started to formulate his method during the First
World War while serving in the army ambulance corps. He estimated that predicting the
weather for a 24-hour period would require 64,000 slow “computers” (humans + mechanical
calculators) and even then, the forecast would take 12 hours to complete. He had the
following idea or dream:

Imagine a large hall like a theater. . . . The walls of this chamber are painted to form a
map of the globe. . . . A myriad of computers are at work upon the weather on the part
of the map where each sits, but each computer attends to only one equation or part of an
equation. The work of each region is coordinated by an official of higher rank. Numerous
little ‘night signs’ display the instantaneous values so that neighbouring computers can
read them. . . . One of [the conductor’s] duties is to maintain a uniform speed of progress
in all parts of the globe. . . . But instead of waving a baton, he turns a beam of rosy light
upon any region that is running ahead of the rest, and a beam of blue light upon those
that are behindhand. [See Fig. 1.10.]

Figure 1.10. Richardson’s circular theater for weather forecasting calculations.

Parallel processing, in the literal sense of the term, is used in virtually every modern
computer. For example, overlapping I/O with computation is a form of parallel processing,
as is the overlap between instruction preparation and execution in a pipelined processor.
Other forms of parallelism or concurrency that are widely used include the use of multiple
functional units (e.g., separate integer and floating-point ALUs or two floating-point multi-
pliers in one ALU) and multitasking (which allows overlap between computation and
memory load necessitated by a page fault). Horizontal microprogramming, and its higher-
level incarnation in very-long-instruction-word (VLIW) computers, also allows some paral-
lelism. However, in this book, the term parallel processing is used in a restricted sense of
having multiple (usually identical) processors for the main computation and not for the I/O
or other peripheral activities.
The history of parallel processing has had its ups and downs (read company formations
and bankruptcies!) with what appears to be a 20-year cycle. Serious interest in parallel
processing started in the 1960s. ILLIAC IV, designed at the University of Illinois and later
built and operated by Burroughs Corporation, was the first large-scale parallel computer
implemented; its 2D-mesh architecture with a common control unit for all processors was
based on theories developed in the late 1950s. It was to scale to 256 processors (four
quadrants of 64 processors each). Only one 64-processor quadrant was eventually built, but
it clearly demonstrated the feasibility of highly parallel computers and also revealed some
of the difficulties in their use.
Commercial interest in parallel processing resurfaced in the 1980s. Driven primarily by
contracts from the defense establishment and other federal agencies in the United States,
numerous companies were formed to develop parallel systems. Established computer ven-
dors also initiated or expanded their parallel processing divisions. However, three factors led
to another recess:

1. Government funding in the United States and other countries dried up, in part related
to the end of the cold war between the NATO allies and the Soviet bloc.
2. Commercial users in banking and other data-intensive industries were either satu-
rated or disappointed by application difficulties.
3. Microprocessors developed so fast in terms of performance/cost ratio that custom-
designed parallel machines always lagged in cost-effectiveness.

Many of the newly formed companies went bankrupt or shifted their focus to developing
software for distributed (workstation cluster) applications.
Driven by the Internet revolution and its associated “information providers,” a third
resurgence of parallel architectures is imminent. Centralized, high-performance machines
may be needed to satisfy the information processing/access needs of some of these providers.

1.4. TYPES OF PARALLELISM: A TAXONOMY

Parallel computers can be divided into two main categories of control flow and data
flow. Control-flow parallel computers are essentially based on the same principles as the
sequential or von Neumann computer, except that multiple instructions can be executed at
any given time. Data-flow parallel computers, sometimes referred to as “non-von Neumann,”
are completely different in that they have no pointer to active instruction(s) or a locus of
control. The control is totally distributed, with the availability of operands triggering the
activation of instructions. In what follows, we will focus exclusively on control-flow parallel
computers.
In 1966, M. J. Flynn proposed a four-way classification of computer systems based on
the notions of instruction streams and data streams. Flynn’s classification has become
standard and is widely used. Flynn coined the abbreviations SISD, SIMD, MISD, and MIMD
(pronounced “sis-dee,” “sim-dee,” and so forth) for the four classes of computers shown in
Fig. 1.11, based on the number of instruction streams (single or multiple) and data streams
(single or multiple) [Flyn96]. The SISD class represents ordinary “uniprocessor” machines.
Computers in the SIMD class, with several processors directed by instructions issued from
a central control unit, are sometimes characterized as “array processors.” Machines in the
MISD category have not found widespread application, but one can view them as generalized
pipelines in which each stage performs a relatively complex operation (as opposed to
ordinary pipelines found in modern processors where each stage does a very simple
instruction-level operation).

Figure 1.11. The Flynn–Johnson classification of computer systems.
The MIMD category includes a wide class of computers. For this reason, in 1988, E. E.
Johnson proposed a further classification of such machines based on their memory structure
(global or distributed) and the mechanism used for communication/synchronization (shared
variables or message passing). Again, one of the four categories (GMMP) is not widely used.
The GMSV class is what is loosely referred to as (shared-memory) multiprocessors. At the
other extreme, the DMMP class is known as (distributed-memory) multicomputers. Finally,
the DMSV class, which is becoming popular in view of combining the implementation ease
of distributed memory with the programming ease of the shared-variable scheme, is some-
times called distributed shared memory. When all processors in a MIMD-type machine
execute the same program, the result is sometimes referred to as single-program multiple-
data [SPMD (spim-dee)].
Although Fig. 1.11 lumps all SIMD machines together, there are in fact variations
similar to those suggested above for MIMD machines. At least conceptually, there can be
shared-memory and distributed-memory SIMD machines in which the processors commu-
nicate by means of shared variables or explicit message passing.
Anecdote. The Flynn–Johnson classification of Fig. 1.11 contains eight four-letter
abbreviations. There are many other such abbreviations and acronyms in parallel processing,
examples being CISC, NUMA, PRAM, RISC, and VLIW. Even our journals (JPDC, TPDS)
and conferences (ICPP, IPPS, SPDP, SPAA) have not escaped this fascination with four-letter
abbreviations. The author has a theory that an individual cannot be considered a successful
computer architect until she or he has coined at least one, and preferably a group of two or
four, such abbreviations! Toward this end, the author coined the acronyms SINC and FINC
(Scant/Full Interaction Network Cell) as the communication network counterparts to the
popular RISC/CISC dichotomy [Parh95]. Alas, the use of these acronyms is not yet as
widespread as that of RISC/CISC. In fact, they are not used at all.

1.5. ROADBLOCKS TO PARALLEL PROCESSING

Over the years, the enthusiasm of parallel computer designers and researchers has been
counteracted by many objections and cautionary statements. The most important of these are
listed in this section [Quin87]. The list begins with the less serious, or obsolete, objections
and ends with Amdahl’s law, which perhaps constitutes the most important challenge facing
parallel computer designers and users.

1. Grosch’s law (economy of scale applies, or computing power is proportional to the
square of cost). If this law did in fact hold, investing money in p processors would
be foolish as a single computer with the same total cost could offer p² times the
performance of one such processor. Grosch’s law was formulated in the days of
giant mainframes and actually did hold for those machines. In the early days of
parallel processing, it was offered as an argument against the cost-effectiveness of
parallel machines. However, we can now safely retire this law, as we can buy more
MFLOPS computing power per dollar by spending on micros rather than on supers.
Note that even if this law did hold, one could counter that there is only one “fastest”
single-processor computer and it has a certain price; you cannot get a more powerful
one by spending more.
2. Minsky’s conjecture (speed-up is proportional to the logarithm of the number p of
processors). This conjecture has its roots in an analysis of data access conflicts
assuming random distribution of addresses. These conflicts will slow everything
down to the point that quadrupling the number of processors only doubles the
performance. However, data access patterns in real applications are far from
random. Most applications have a pleasant amount of data access regularity and
locality that help improve the performance. One might say that the log p speed-up
rule is one side of the coin that has the perfect speed-up p on the flip side. Depending
on the application, real speed-up can range from log p to p (p /log p being a
reasonable middle ground).
3. The tyranny of IC technology (because hardware becomes about 10 times faster
every 5 years, by the time a parallel machine with 10-fold performance is designed
and implemented, uniprocessors will be just as fast). This objection might be valid
for some special-purpose systems that must be built from scratch with “old”
technology. Recent experience in parallel machine design has shown that off-the-
shelf components can be used in synthesizing massively parallel computers. If the
design of the parallel processor is such that faster microprocessors can simply be
plugged in as they become available, they too benefit from advancements in IC
technology. Besides, why restrict our attention to parallel systems that are designed
to be only 10 times faster rather than 100 or 1000 times?
4. The tyranny of vector supercomputers (vector supercomputers, built by Cray,
Fujitsu, and other companies, are rapidly improving in performance and addition-
ally offer a familiar programming model and excellent vectorizing compilers; why
bother with parallel processors?). Figure 1.2 contains a possible answer to this
objection. Besides, not all computationally intensive applications deal with vectors
or matrices; some are in fact quite irregular. Note, also, that vector and parallel
processing are complementary approaches. Most current vector supercomputers do
in fact come in multiprocessor configurations for increased performance.
5. The software inertia (billions of dollars worth of existing software makes it hard to
switch to parallel systems; the cost of converting the “dusty decks” to parallel
programs and retraining the programmers is prohibitive). This objection is valid in
the short term; however, not all programs needed in the future have already been
written. New applications will be developed and many new problems will become
solvable with increased performance. Students are already being trained to think
parallel. Additionally, tools are being developed to transform sequential code into
parallel code automatically. In fact, it has been argued that it might be prudent to
develop programs in parallel languages even if they are to be run on sequential
computers. The added information about concurrency and data dependencies would
allow the sequential computer to improve its performance by instruction prefetch-
ing, data caching, and so forth.
6. Amdahl’s law (speed-up ≤ 1/[ƒ + (1 – ƒ)/p] = p/[1 + ƒ(p – 1)]; a small fraction ƒ of
inherently sequential or unparallelizable computation severely limits the speed-up
that can be achieved with p processors). This is by far the most important of the six
objections/warnings. A unit-time task, for which the fraction ƒ is unparallelizable
(so it takes the same time ƒ on both sequential and parallel machines) and the
remaining 1 – ƒ is fully parallelizable [so it runs in time (1 – ƒ)/p on a p-processor
machine], has a running time of ƒ + (1 – ƒ)/p on the parallel machine, hence
Amdahl’s speed-up formula.

Figure 1.12. The limit on speed-up according to Amdahl’s law.

Figure 1.13. Task graph exhibiting limited inherent parallelism.

Figure 1.12 plots the speed-up as a function of the number of processors for different values
of the inherently sequential fraction ƒ. The speed-up can never exceed 1/ƒ, no matter how
many processors are used. Thus, for ƒ = 0.1, speed-up has an upper bound of 10. Fortunately,
there exist applications for which the sequential overhead is very small. Furthermore, the
sequential overhead need not be a constant fraction of the job independent of problem size.
In fact, the existence of applications for which the sequential overhead, as a fraction of the
overall computational work, diminishes has been demonstrated.
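
The formula is easy to tabulate. A few lines of Python (ours) show the speed-up creeping
toward, but never reaching, the 1/ƒ ceiling:

    def amdahl_speedup(f, p):
        """Speed-up with p processors when a fraction f is inherently sequential."""
        return p / (1 + f * (p - 1))             # equivalently, 1 / (f + (1 - f) / p)

    for f in (0.01, 0.1, 0.25):
        print(f, [round(amdahl_speedup(f, p), 2) for p in (10, 100, 1000)])
    # 0.01 [9.17, 50.25, 90.99]    (ceiling 1/f = 100)
    # 0.1  [5.26, 9.17, 9.91]      (ceiling 1/f = 10)
    # 0.25 [3.08, 3.88, 3.99]      (ceiling 1/f = 4)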
Closely related to Amdahl’s law is the observation that some applications lack inherent
parallelism, thus limiting the speed-up that is achievable when multiple processors are used.
Figure 1.13 depicts a task graph characterizing a computation. Each of the numbered nodes
in the graph is a unit-time computation and the arrows represent data dependencies or the
prerequisite structure of the graph. A single processor can execute the 13-node task graph
shown in Fig. 1.13 in 13 time units. Because the critical path from input node 1 to output
node 13 goes through 8 nodes, a parallel processor cannot do much better, as it needs at least
8 time units to execute the task graph. So, the speed-up associated with this particular task
graph can never exceed 13/8 = 1.625, no matter how many processors are used.

1.6. EFFECTIVENESS OF PARALLEL PROCESSING

Throughout the book, we will be using certain measures to compare the effectiveness
of various parallel algorithms or architectures for solving desired problems. The following
definitions and notations are applicable [Lee80]:

p      Number of processors
W(p)   Total number of unit operations performed by the p processors; this is often
       referred to as computational work or energy
T(p)   Execution time with p processors; clearly, T(1) = W(1) and T(p) ≤ W(p)

S(p)   Speed-up = T(1)/T(p)

E(p)   Efficiency = T(1)/[pT(p)]

R(p)   Redundancy = W(p)/W(1)

U(p)   Utilization = W(p)/[pT(p)]

Q(p)   Quality = T³(1)/[pT²(p)W(p)]

The significance of each measure is self-evident from its name and defining equation given
above. It is not difficult to establish the following relationships between these parameters.
The proof is left as an exercise.

1 ≤ S(p) ≤ p
U(p) = R(p)E(p)
E(p) ≤ U(p) ≤ 1
Q(p) ≤ S(p) ≤ p
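
Given W(p) and T(p), all five measures follow mechanically from the defining equations
above. The small helper below (our own) reproduces the figures of the example that follows:

    def effectiveness(p, t1, wp, tp):
        """Measures of Section 1.6; t1 = T(1) = W(1), wp = W(p), tp = T(p)."""
        S = t1 / tp                        # speed-up
        E = t1 / (p * tp)                  # efficiency
        R = wp / t1                        # redundancy
        U = wp / (p * tp)                  # utilization
        Q = t1**3 / (p * tp**2 * wp)       # quality
        return S, E, R, U, Q

    print(effectiveness(8, 15, 15, 4))     # S = 3.75, E ≈ 0.47, R = 1, Q ≈ 1.76
    print(effectiveness(8, 15, 22, 7))     # S ≈ 2.14, E ≈ 0.27, R ≈ 1.47, Q ≈ 0.39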

Figure 1.14. Computation graph for finding the sum of 16 numbers.

Example. Finding the sum of 16 numbers can be represented by the binary-tree
computation graph of Fig. 1.14 with T(1) = W(1) = 15. Assume unit-time additions and ignore
all else. With p = 8 processors, we have

W(8) = 15        T(8) = 4        E(8) = 15/(8 × 4) ≈ 47%
S(8) = 15/4 = 3.75        R(8) = 15/15 = 1        Q(8) ≈ 1.76
Essentially, the 8 processors perform all of the additions at the same tree level in each time
unit, beginning with the leaf nodes and ending at the root. The relatively low efficiency is
the result of limited parallelism near the root of the tree.
Now, assuming that addition operations that are vertically aligned in Fig. 1.14 are to be
performed by the same processor and that each interprocessor transfer, represented by an
oblique arrow, also requires one unit of work (time), the results for p = 8 processors become

W(8) = 22        T(8) = 7        E(8) = 15/(8 × 7) ≈ 27%
S(8) = 15/7 ≈ 2.14        R(8) = 22/15 ≈ 1.47        Q(8) ≈ 0.39

The efficiency in this latter case is even lower, primarily because the interprocessor transfers
constitute overhead rather than useful operations.

PROBLEMS

1.1. Ocean heat transport modeling


Assume continuation of the trends in Figs. 1.1 and 1.2:
a. When will a single microprocessor be capable of simulating 10 years of global ocean
circulation, as described in Section 1.1, overnight (5:00 PM to 8:00 AM the following day),
assuming a doubling of the number of divisions in each of the three dimensions? You can
assume that a microprocessor’s FLOPS rating is roughly half of its MIPS rating.
b. When will a vector supercomputer be capable of the computation defined in part (a)?
c. When will a $240M massively parallel computer be capable of the computation of part (a)?
d. When will a $30M massively parallel computer be capable of the computation of part (a)?

1.2. Micros versus supers


Draw the performance trend line for microprocessors on Fig. 1.2, assuming that a microproc-
essor’s FLOPS rating is roughly half of its MIPS rating. Compare and discuss the observed
trends.

1.3. Sieve of Eratosthenes


Figure 1.6 shows that in the control-parallel implementation of the sieve of Eratosthenes
algorithm, a single processor is always responsible for sieving the multiples of 2. For n = 1000,
this is roughly 35% of the total work performed. By Amdahl’s law, the maximum possible
speed-up for p = 2 and ƒ = 0.35 is 1.48. Yet, for p = 2, we note a speed-up of about 2 in Fig.
1.6. What is wrong with the above reasoning?

1.4. Sieve of Eratosthenes


Consider the data-parallel implementation of the sieve of Eratosthenes algorithm for n = 10⁶.
Assume that marking of each cell takes 1 time unit and broadcasting a value to all processors
takes b time units.
a. Plot three speed-up curves similar to Fig. 1.8 for b = 1, 10, and 100 and discuss the results.
b. Repeat part (a), this time assuming that the broadcast time is a linear function of the number
of processors: b = αp + β, with (α, β) = (5, 1), (5, 10), (5, 100).

1.5. Sieve of Eratosthenes


Consider the data-parallel implementation of the sieve of Eratosthenes algorithm for n = 10⁶.
Assume that marking of each cell takes 1 time unit and broadcasting m numbers to all processors
takes b + cm time units, where b and c are constants. For each of the values 1, 10, and 100 for
the parameter b, determine the range of values for c where it would be more cost-effective for
Processor 1 to send the list of all primes that it is holding to all other processors in a single
message before the actual markings begin.

1.6. Sieve of Eratosthenes


a. Noting that 2 is the only even prime, propose a modification to the sieve of Eratosthenes
algorithm that requires less storage.
b. Draw a diagram, similar to Fig. 1.6, for the control-parallel implementation of the improved
algorithm. Derive the speed-ups for two and three processors.
c. Compute the speed-up of the data-parallel implementation of the improved algorithm over
the sequential version.
d. Compare the speed-ups of parts (b) and (c) with those obtained for the original algorithm.

1.7. Amdahl’s law


Amdahl’s law can be applied in contexts other than parallel processing. Suppose that a
numerical application consists of 20% floating-point and 80% integer/control operations (these
are based on operation counts rather than their execution times). The execution time of a
floating-point operation is three times as long as other operations. We are considering a redesign
of the floating-point unit in a microprocessor to make it faster.
a. Formulate a more general version of Amdahl’s law in terms of selective speed-up of a
portion of a computation rather than in terms of parallel processing.
b. How much faster should the new floating-point unit be for 25% overall speed improve-
ment?
c. What is the maximum speed-up that we can hope to achieve by only modifying the
floating-point unit?

1.8. Amdahl’s law


a. Represent Amdahl’s law in terms of a task or computation graph similar to that in Fig. 1.13.
Hint: Use an input and an output node, each with computation time ƒ/2, where ƒ is the
inherently sequential fraction.
b. Approximate the task/computation graph of part (a) with one having only unit-time nodes.

1.9. Parallel processing effectiveness


Consider two versions of the task graph in Fig. 1.13. Version U corresponds to each node
requiring unit computation time. Version E/O corresponds to each odd-numbered node being
unit-time and each even-numbered node taking twice as long.
a. Convert the E/O version to an equivalent V version where each node is unit-time.
b. Find the maximum attainable speed-up for each of the U and V versions.
c. What is the minimum number of processors needed to achieve the speed-ups of part (b)?
d. What is the maximum attainable speed-up in each case with three processors?
e. Which of the U and V versions of the task graph would you say is “more parallel” and
why?

1.10. Parallel processing effectiveness


Prove the relationships between the parameters in Section 1.6.

1.11. Parallel processing effectiveness


An image processing application problem is characterized by 12 unit-time tasks: (1) an input
task that must be completed before any other task can start and consumes the entire bandwidth
of the single-input device available, (2) 10 completely independent computational tasks, and
(3) an output task that must follow the completion of all other tasks and consumes the entire
bandwidth of the single-output device available. Assume the availability of one input and one
output device throughout.

a. Draw the task graph for this image processing application problem.
b. What is the maximum speed-up that can be achieved for this application with two
processors?
c. What is an upper bound on the speed-up with parallel processing?
d. How many processors are sufficient to achieve the maximum speed-up derived in part (c)?
e. What is the maximum speed-up in solving five independent instances of the problem on
two processors?
f. What is an upper bound on the speed-up in parallel solution of 100 independent instances
of the problem?
g. How many processors are sufficient to achieve the maximum speed-up derived in part (f)?
h. What is an upper bound on the speed-up, given a steady stream of independent problem
instances?

1.12. Parallelism in everyday life


Discuss the various forms of parallelism used to speed up the following processes:
a. Student registration at a university.
b. Shopping at a supermarket.
c. Taking an elevator in a high-rise building.

1.13. Parallelism for fame or fortune


In 1997, Andrew Beal, a Dallas banker and amateur mathematician, put up a gradually
increasing prize of up to U.S. $50,000 for proving or disproving his conjecture that if a^q + b^r
= c^s (where all terms are integers and q, r, s > 2), then a, b, and c have a common factor. Beal’s
conjecture is, in effect, a general form of Fermat’s Last Theorem, which asserts that a^n + b^n =
c^n has no integer solution for n > 2. Discuss how parallel processing can be used to claim the
prize.

REFERENCES AND SUGGESTED READING


[Bell92] Bell, G., “Ultracomputers: A Teraflop Before Its Time,” Communications of the ACM, Vol. 35, No.
8, pp. 27–47, August 1992.
[Flyn96] Flynn, M. J., and K. W. Rudd, “Parallel Architectures,” ACM Computing Surveys, Vol. 28, No. 1, pp.
67–70, March 1996.
[John88] Johnson, E. E., “Completing an MIMD Multiprocessor Taxonomy,” Computer Architecture News,
Vol. 16, No. 3, pp. 44–47, June 1988.
[Lee80] Lee, R. B.-L., “Empirical Results on the Speed, Efficiency, Redundancy, and Quality of Parallel
Computations,” Proc. Int. Conf. Parallel Processing, 1980, pp. 91–96.
[Parh95] Parhami, B., “The Right Acronym at the Right Time” (The Open Channel), IEEE Computer, Vol. 28,
No. 6, p. 120, June 1995.
[Quin87] Quinn, M. J., Designing Efficient Algorithms for Parallel Computers, McGraw-Hill, 1987.
[Quin94] Quinn, M. J., Parallel Computing: Theory and Practice, McGraw-Hill, 1994.
[Scha97] Schaller, R. R., “Moore’s Law: Past, Present, and Future,” IEEE Spectrum, Vol. 34, No. 6, pp. 52–59,
June 1997.
[SIA94] Semiconductor Industry Association, The National Roadmap for Semiconductors, 1994.
2
A Taste of Parallel Algorithms

In this chapter, we examine five simple building-block parallel operations
(defined in Section 2.1) and look at the corresponding algorithms on four simple
parallel architectures: linear array, binary tree, 2D mesh, and a simple shared-
variable computer (see Section 2.2). This exercise will introduce us to the nature
of parallel computations, the interplay between algorithm and architecture, and
the complexity of parallel computations (analyses and bounds). Also, the build-
ing-block computations are important in their own right and will be used
throughout the book. We will study some of these architectures and algorithms
in more depth in subsequent chapters. Chapter topics are

• 2.1. Some simple computations
• 2.2. Some simple architectures
• 2.3. Algorithms for a linear array
• 2.4. Algorithms for a binary tree
• 2.5. Algorithms for a 2D mesh
• 2.6. Algorithms with shared variables

25
Another Random Document on
Scribd Without Any Related Topics
on this fateful occasion, was rumored to be carrying nearly one 87
hundred thousand dollars in ingots from the Homestake, as
well as from other works; and although shipments of such size were
not altogether rare, they were sufficiently out of the ordinary to
suggest the services of additional shotgun messengers. It may well
have been the mere fact of the scheduling of additional guards that
called the attention of the bandits to this particular manifest.

The holdup took place in midafternoon, as the driver was stopping


the coach to water the horses. In the gunplay three men were killed,
and the bandits escaped with the loot from the treasure chest, which
they apparently managed to chisel open. Ten miles away one of the
guards came upon a party of horsemen, who returned to the scene
of the carnage; but upon their arrival they found the coach despoiled
of its gold.

In many such cases the bandits would have been recognized as local
or near-local citizens; but in this instance all of the desperadoes
appeared to be strangers to the Hills, and consequently the law
officers had very little except guesswork to guide them in their
pursuit. Guesswork coupled with just plain snooping soon uncovered
a trail, however, for one of the stage agents turned up a ranch 88
owner who gave the information that a small group of men
had, on the very evening of the holdup, bought a light spring wagon
from him. Such a transaction was unusual enough to indicate that the
purchasers were by no means individuals of legitimate calling, and in
all probability were the actual bandits. Setting out on this trail, the
agent managed to trace them and their wagon all the way to
Cheyenne, where the group had apparently turned to the east.

By that time persons who had seen them in passing had recognized
them, and their names were broadcast to the marshals and sheriffs
of all the eastern regions of the plains. Day after day the stage agent
followed their trail east, across the Missouri, across the border of
Nebraska, and on to the pleasant town of Atlantic, in Iowa. By that
time the wagon had been discarded and the gang had broken up,
and the agent was following only one spoor—the track of a young
man who was always seen with a strange, heavy pack on his back.

In the town of Atlantic the trail came to an abrupt end, and indeed
the mystery might never have been solved had it not been for a
strange display in the street window of a local bank. Pausing to see
what it might be that was engaging the attention of a crowd at 89
the window, the agent was astounded to behold part of the
very loot he was pursuing—two bullion bricks stamped with serial
numbers which identified them beyond a doubt as part of the Canyon
Springs treasure.

Upon questioning, the banker proudly asserted that his son had only
the day before returned from a successful adventure in the Black
Hills, and had, as a matter of fact, found a gold mine, which he had
sold for the very bricks making up the exhibit.

Gently the agent disabused the banker of this sad misapprehension,


and, enlisting the aid of the local Sheriff, had the prodigal son
arrested.

The unfortunate conclusion to this tale of detective expertness is that


although the gold was eventually returned to the Homestake, the
young bandit escaped from the train which was carrying him back to
Cheyenne, and was never thereafter apprehended. As for the other
four robbers and the rest of the treasure, no further trace of either
was ever discovered.

Although banditry and skulduggery played a very great part in the


tales of most of the other bonanza gold fields, the Black Hills story
was for the most part happily without extraordinary violence. 90
Much more conspicuous in the history of the Hills than the
desperate adventures of bandits are the exploits of the folk heroes
who rode the Deadwood legend into immortality. Wild Bill Hickok,
Calamity Jane, Deadwood Dick, Preacher Smith, all of these amazing
personalities achieved a lasting fame during the early days of this
later frontier, not for any deeds of derring-do in the Sam Bass
fashion, but for the old American custom of living and dying in a high
and wide manner.

In all probability Deadwood Dick has carried the saga of the Hills
farther and to a greater audience, both in this country and abroad,
than any of the others, and for that reason, as well as for the strange
circumstance that he never existed, it is perhaps well to tell his story
first.

Dick, who never had a last name, was nothing more nor less than the
happy creation of an overworked literary side-liner eking out a living
in the late seventies. Having exhausted the possible plot complexities
of such heroes as Seth Jones and Duke Darrall, Messrs. Beadle and
Adams, proprietors of that stupendous literary zoo, The Pocket
Library (published weekly at 98 William Street, New York, price 91
five cents), urged their hack, Edward L. Wheeler, to crank out a
new character. This Shakespeare of the sensational, having recently
heard of the brave doings in Deadwood, of the Black Hills, promptly
created a latter-day Leatherstocking, Deadwood Dick.

Dick’s success was instantaneous, for there was a sense of truth in


these stories which had theretofore been missing. Dispatches in
every post brought the news of Deadwood as it was happening, and
thus the weekly appearance of another Deadwood story was able to
hang itself firmly on the coattails of reality. In one episode Dick
courts Calamity Jane, who actually existed at the time, and finally
marries her. In another, Our Hero is a frontier detective, fighting
bravely on the side of law and order. In still another he has turned to
robbery, and at one point is actually strung by his neck from a
cottonwood gallows.

After exhausting the many plot possibilities of the Black Hills, Dick,
who had become as real to his readers as George Washington, began
to work both backward and forward in time and space. In one set of
adventures he is shown to be an active Indian fighter; in another he
turns up with Calamity Jane in the town of Leadville, Colorado, 92
which came into its glory not long after the strike in Deadwood.
At last the many loose ends of the story so entangled author Wheeler
that he gave up Deadwood Dick as a lost cause and out of nowhere
fetched him a son, Deadwood Dick, Jr., who marched on to the turn
of the century and down into our own time. Indeed, his noble
features can still occasionally be found staring gravely up from a pile
of old and dusty magazines in attic corners.

With such a heritage it is little wonder that as the town of Deadwood


grew away from its infancy, and as its modern Chamber of Commerce
turned to summer pageants as a source of tourist interest, Deadwood
Dick should be revived and paraded. Deadwood’s summer festivals,
the gay “Days of ’76,” are built around a town-wide re-creation of the
gold rush, with the natives chin-whiskered, booted, and costumed
within an inch of their lives. During this gusty week otherwise sober
and retiring citizens turn themselves out as stage coach drivers,
Indians, and pony express riders, and the nights are filled with such a
bubbling halloo that the tourists, who come in ever larger droves, are
able to go home and report that they have honestly spent time 93
in a frontier town.

To heighten the effect, the impresarios of this gay divertissement


many years ago decided to raise Deadwood Dick from Beadle’s pages
and put him on the street like all the other self-respecting Calamity
Janes and Wild Bills. Locating an oldster who looked not unlike the
artist’s original concept, they dressed him in an assortment of
western oddities and gave him time off from his duties as a stable
hand while the festival was in session. For several years this simple
pretense was carried on, and no sleep whatsoever was lost over the
fact that a mild fraud was being perpetrated on the visiting Iowans.

In 1927, though, when South Dakota was negotiating with Calvin


Coolidge to get him to spend his summer in the Hills, the stable
hand, whose name happened to be Dick Clarke, was sent to
Washington to extend a personal welcome to the President. Patently
a publicity stunt, it fooled nobody but old Dick himself. The rigors of
the trip and the succession of tongue-in-cheek honors heaped upon
him somehow tilted the old man’s mind, and from that day until his
death a decade later he fully believed that he was the original
Deadwood Dick. Frowning down any suggestions that he doff 94
his beaded finery and return to the care of the oat bins, he
betook himself far from the gentle safety of the Deadwood that he
knew and that knew him, and took to touring the backwoods with
fifth-rate medicine shows and Wild West pageants. Somewhere along
the line he got up a small pamphlet which he sold to the gawking
audiences who thought they were seeing a genuine frontiersman. In
this amazing tract he spelled out such of the facts of Wheeler’s
stories as were coherent and in logical time sequence. The rest,
including a date and place of birth, he soberly filled in for himself.

And that was Deadwood Dick. When he finally died, back in


Deadwood in the early forties, much of the town had come to believe
as he did that there had been a Deadwood Dick, just as there had
been a Calamity Jane, and that this gaffer had been the very person.
His cortege was solemnly followed, and to this day flowers are
sprinkled on his grave by confused but loyal residents of the Hills.

Wild Bill Hickok, on the other hand, actually lived, and actually died
exactly as the legend goes, with aces and eights in his hand. It was
this unfortunate occurrence, as a matter of fact, that gave to 95
that particular poker hand its gruesome name, Dead Man’s
Hand.

Somewhat in the manner of Deadwood Dick, Wild Bill achieved a


large part of his fame through the earnest efforts of Beadle & Adams.
That is to say, much of his renown came after his untimely demise,
and much of it was deliberately generated to satisfy the great
western-yearnings of the avid book-buying public. In addition to the
publishers’ efforts on Bill’s behalf, great impetus was given to his
posthumous repute by Calamity Jane. Nevertheless, in all probability
Hickok was actually the fearless and sterling character his legendeers
have depicted, and had he not been brutally done to death by
feckless Jack McCall he would doubtless have earned even greater
fame through his own efforts in later years.

James Hickok was born into a farming family in Illinois in the year
1837, and passed a quiet and respectable boyhood in the ordinary
pursuits of such an existence. In his nineteenth year he, like so many
other young men of that day, felt the urgent call of the Far West. He
hired himself out forthwith as a teamster in a wagon train to the
Pacific Coast.

Returning at the end of this one visit to the golden shores, he


managed to land in the Platte Valley of the eastern Rockies in 96
the very year when gold was being discovered in that region.
The following two years he spent in odd jobs around Denver and on
the high plains to the east of that new city. During all this time,
however, it seemed as if his heart were hungering for the lower
country. He let his drifting carry him slowly back into Kansas where,
at the beginning of the Civil War, he managed a station for Hinckley’s
Overland Express Company, which was then staging from St. Joseph,
Missouri, to Denver and into Central City.

All these adventures gave ample opportunity for any young man of
spleen to entangle himself a dozen times over in killings, brawls, and
assorted rough businesses, but through this entire period James
Hickok gave evidence of being nothing but a stalwart and well-
intentioned individual.

The harmlessness of his pursuits, though, came to an explosive end


after one year in this genial work, when he indulged in pistolry with a
certain McCanles gang. One version of the Wild Bill legend states that
the “gang” were cutthroats, and that Hickok was only defending his
company’s property. Another version, equally trustworthy, has it that
the McCanleses were Confederate sympathizers, attempting to 97
raise a cavalry unit in the region and thus offending Hickok
with his Unionist leanings. Whatever the reason, the outcome was
bloody. No one today knows for certain how many men were killed,
for eyewitness accounts have included reports ranging all the way
from one to six, all of them presumably slain by Hickok.

The doughty station manager, his helper, and another stage company
employee were speedily brought to trial for the affair, and just as
speedily acquitted of any crime. Shortly after that Hickok resigned his
express company affiliation and joined the Union army, fighting the
war out as a trusted though undistinguished scout.

After his discharge in 1865, he seems to have forsaken forever his


once peaceful way of life, and thereafter blood was more than
occasionally to be found upon his hands. His first postwar killing took
place in Springfield, Missouri, in a duel with a gambler; and later that
same year he was reported to have mortally wounded another card
player in Julesburg, Colorado Territory. In the next year another
report, unofficial like all the rest, had him killing three more men in
Missouri, and in 1867—this was official—he went to the booming cow
town of Hays City, Kansas, where he was shortly offered the 98
post of marshal.

That his reputation, whether truthful or legendary, was growing there


can be no question. By 1867 he was accounted to be one of the best
gunmen of his time and place, quite possibly for the simple reason
that he had survived so many fights. For all the shadowy overtones of
his story, he was also reputed to be a devotee of righteousness and
order, although this facet of his character may or may not actually
have existed. He was well known to be a gambler, and his victims
were all (except the McCanleses) supposed to be cheaters at cards.
Whether his vivacity with Mr. Colt’s revolver was intended to rid the
earth of dishonest men or merely to avenge a lost hand is beside the
point, for his acceptance of the position of marshal of Hays City
indicates that for a time, at least, his inclinations lay in the direction
of law and order.

From Hays City he went to a similar post in Abilene, where he bore


the star until 1872. During all this time he was forced to kill but a
bare minimum of unworthy citizens, his ever growing repute as a
dangerous man with a gun apparently frightening would-be
desperadoes out of his orbit. Three notches were all that he placed
upon his weapon during his service in those two hell cities of the
prairies—definitely a world’s record in reverse.
One of the Black Hills’ many streams
The Badlands: Desolate, empty, and seared

Apparently the inactivity came to bore him, for he soon gave 99


up police work to return to the army for two years as a scout.
This harsh calling also failed to satisfy whatever inner wants were
making themselves felt, and in 1874 he resigned to join a traveling
show with Buffalo Bill.

In 1875, however, he was to be found no longer behind the chemical


lights, but idling his time away in Cheyenne. During this restless
interlude he married a circus rider named Agnes Lake. Shortly after
the ceremony, which took place in 1876, he followed the trail to
Deadwood, arriving in April and setting up camp with another ex-
army scout. The motives which drew him to that thriving boom town
were, in all probability, those which drew the thousands of others—
mere curiosity and the hope that something might turn up. Indeed,
during the four months of his Deadwood hiatus he did very little but
play poker in the famed saloon known as Number Ten. That he was
as accomplished a gambler as he was a gunman was doubted by no
one, and through his ability with the pasteboards he apparently kept
himself in such funds as he needed. He did not attempt to 100
look for gold, nor did he seek any official post in the town. He
merely played the long hours away at cards.

One might expect such a man as Wild Bill Hickok to meet his nemesis
in open battle with a murderous cutthroat seeking to pay off an old
score. Western legend is filled with such fitting come-uppances. But
in this rare case our hero was killed in a peaceful moment by a total
stranger and for reasons which nobody was ever thereafter able to
discern.

On the fateful day of August 2, 1876, he entered Number Ten shortly


after the lunch hour to take up his everlasting hand of cards.
Normally, being a prudent man, he insisted on a seat with its back to
a wall, from which vantage point he could keep his eye cocked for
trouble; but on this day, for some reason, he arrived just too late to
take his customary position and had to accept a chair with its back to
the door. The game proceeded amiably enough for a while, and there
was nothing in the afternoon air to suggest violence of any sort. At
last a normally inoffensive deadbeat, one Jack McCall, turned from
the bar where he had been enjoying a quiet drink and, passing the
gaming table on his way to the door, suddenly and without a 101
word pulled his revolver from his vest and put a shot through
Wild Bill’s skull.

The effect was instantaneous. When the news spread that Wild Bill
had been killed, all work stopped in the city and men streamed in
from every corner, expecting at the very least to find a major battle in
progress. When finally the crowds were quieted down and it was
learned that the killing was nothing more than a mere murder, the
populace speedily hunted up the terrified McCall, whom they found
huddled in a near-by stable, and arranged a formal trial. The facts
that Deadwood was at that time still out of bounds to American
citizens and therefore under no legitimate civil jurisdiction and that
the judge, jury, and prosecuting attorney were elected on the spot by
a show of hands, having therefore no official standing, did not
dampen the ardor of the crowd. A trial was a trial, and its results
would presumably be fair and honest.

As a matter of fact, Jack McCall must have been the most surprised
individual of all at the ultimate fairness of the legal machinery which
had been set up in his honor. With the acceptance of his fumbling
plea that Hickok had, at a place unnamed and at a time unnamed,
killed his brother, McCall was acquitted and turned free, and 102
Wild Bill was sorrowfully buried by the admiring populace.

As soon as he was freed, McCall hurried back to Cheyenne to escape


the reach of any of Hickok’s friends. Unfortunately the story of the
killing followed him there, and under the mistaken impression that he
had undergone a legitimate trial and was therefore no longer subject
to additional jeopardy, McCall took no pains to deny the murder. This
was a most foolish tactic on his part, for he was speedily rearrested
and shipped to Yankton, the capital of South Dakota Territory, where
he was held for a session of the proper court. Inasmuch as he had
admitted before witnesses not only that he had killed Wild Bill, but
also that his earlier plea had been fabricated from whole cloth, he
had a very slender defense indeed, and was quickly found guilty and
banged.

To the very end no clue could be found to any sort of sound reason
for his having fired the fatal shot. It was quite definitely proved that
he had never had any dealings with his victim and had never been in
any way offended by him, and that he no more than knew vaguely
who he was. It was apparently a completely aimless killing, 103
the unhappy inspiration of the moment.

On the other hand, Justice seems forever determined to get to the


bottom of the matter, for The Trial of Jack McCall has become an
institution of the Black Hills, played, like Ten Nights in a Barroom, all
the summers long in a popular tavern. Where audiences elsewhere
hiss their Legrees and other purely fictional villains, the proud
residents of Deadwood have their very own and very real scoundrel
for the target of their malisons—the miserable McCall. Tourists are
cordially invited to join in the fun and thereby to spread ever farther
the legend of Wild Bill Hickok.

On June 21, 1951, the legend was further enhanced and improved
upon by the presentation to the city of Deadwood of a brand new
statue of Hickok carved out of a massive chunk of native granite by
the ebullient sculptor, Ziolkowski. An all-day celebration attended the
unveiling of this statue upon its pedestal at the foot of Mt. Moriah,
and the zenith of the day’s gaudy reverence was the reading of an
“epic” poem to the hushed populace of the town over a loud-speaker
system from the top of the mount. The statue is plain to be seen
about a block from the Adams Memorial Museum, and copies 104
of the epic can no doubt be had by soliciting the Deadwood
Chamber of Commerce.

Of a somewhat different character from Wild Bill, but, it is good to


report, no less revered in the Hills, was Preacher Smith.

Frontier towns have been notorious for their hallowing of persons,


both male and female, who were either expertly good or expertly
bad. This strange compounding of affections would suggest that the
vice or godliness in itself was unimportant, but that the rough and
crude citizens who populated our earlier settlements held a genuine
admiration and regard for anyone of any calling who demonstrated
authority and accomplishment.

And thus it was with the Reverend Henry W. Smith. A man of


exceptionally little luck in life, he gave up his dwindling congregations
in the States and journeyed into the frontier in 1875, partly because
of a zeal in his heart to bring the Word into the lawless and godless
gold camps, but also, it must be conjectured, to find some form of
weekday employment which would enable him to care for his wife
and two daughters. The wolf had been howling at many doors during
those years, and parsonages which carried even a bare 105
subsistence stipend were few and far between.
Smith went first to Custer, where he stayed but a short while, finding
little in the way of work and less in the way of souls to save, since
the rush to Deadwood was then in full force. Hiring onto a
merchandise train as a cook’s helper, he made his way to that newer
city, arriving early in May of 1876. In a town of such activity it was
not difficult to locate work, and shortly his hide began to fill out and
his purse to thicken. That purse, it was discovered after his death,
was to be used for the purpose of bringing his family out to join him.

Working diligently and, of course, soberly at his menial tasks from


Monday through Saturday, and bravely setting up his pulpit on the
main street on Sundays, Preacher Smith soon won the respect and
even the genial admiration of the roisterous townspeople. At first his
congregations contained more wandering dogs than people, but week
after week, as he determinedly kept after his work, an increasingly
large crowd gathered of a Sunday morning to listen to his sermons.

Thus the entire town was shocked when he was brutally killed by
Indians while walking to a near-by settlement to preach a sermon.
Indians were bad enough at best, but killing a harmless and 106
unarmed preacher was an act of violence which shook the
consciences of the whole citizen body. It was on those consciences
that the guilt began to press—the guilt of the knowledge that they
had driven him to his death by their slowness to accept him in their
own community and that he had gone to his rendezvous seeking a
congregation, no matter how small, that would house him and the
Master he served.

Belatedly gathering to his support, the citizens passed a sizable hat


for the benefit of the unfortunate man’s widow and daughters. In
addition to the gift of cash, the woman received an invitation to bring
her grieving family to the Hills, where care would be arranged for
them, including a teaching post for the eldest daughter.
Unfortunately, neither the widow nor the daughters were in good
enough health to be able to make the rigorous trip, and in
consequence they could not avail themselves of the hospitality and
generosity which were so late in coming.

Although they had failed to bring the parson’s family to Deadwood,


the worthy citizens were undaunted in their efforts to memorialize
this modest itinerant who had stumbled unwittingly into glory. 107
A great chunk of sandstone was quarried and a local artist of
more verve than ability proceeded to hack out the parson’s likeness.
The statue was eventually propped over his grave atop Mount
Moriah, the cemetery-museum where he lies alongside Wild Bill and
Calamity Jane. Unfortunately souvenir hunters carried on their
unworthy custom over the years, until finally the battered monument,
no longer even recognizable, collapsed. In the Adams Memorial Hall
of Deadwood, however, there can be seen a certificate signed in
Preacher Smith’s very writing, and thus his handiwork lives along with
his legend.

All stories of Deadwood in the Black Hills come, eventually, to the


great riddle of Martha Jane Cannary (sometimes spelled Canary),
known as Calamity Jane.

This gusty female, who rolled around the West for nearly half a
century, has been the subject of more controversy and speculation
than almost any other early-day character. In her lifetime she
circulated a brief autobiography which successfully managed to hide
the truth about practically every aspect of her history. In addition,
she manipulated her drab story in such a way that a whole
generation of legend-mongers accepted her as the “true love” 108
of Wild Bill Hickok, and thus by no means to be thought of as
the drunken harlot she most certainly was.

By dint of careful searching, however, some few definite facts of her


early life and adventures have been isolated, and upon them at least
the framework of her true story has been constructed. She appears
to have been born in the neighborhood of 1850—add or subtract a
year—in Missouri. Some accounts have it that her father was a
Baptist minister, which is an unimportant sidelight, for young Martha
Jane did not stay at home long enough for any such influence to
gnaw its way into her personality.
How she managed to get from Missouri to Wyoming while
still in her early teens remains a mystery, but nonetheless her
career as a camp follower started when, at the tender age of
fourteen, she arrived in the roaring outpost of Rawlins. Some tales
have it that she had gone west as the consort of a young army
lieutenant, and that her mother, remarried to a pioneer, found her in
that boisterous military town and took her to Utah. In any event she
came back into circulation two years later, for in 1866 she was duly
married to one George White in Cheyenne. Following this felicitous
turn of affairs she and her husband journeyed to Denver, where he
was able to support her in a fine, high style. Unfortunately, she did
not take to this pleasant existence, but shortly began to yearn after
the cavalry. Leaving her husband to his Denver duties, she
appeared all during 1867 and 1868 in various forts
throughout Wyoming. It was at this particular time in her career that
she was supposed to have earned the nickname of Calamity Jane.
Undoubtedly the title was bestowed upon her by barroom
companions who had learned the sad truth that Martha Jane’s
appearance on the scene boded a long and arduous night of
drinking; but in her maudlin and confused autobiography she tells of
assisting in an Indian fight and for her splendid services being
gratefully given the name by a Captain Pat Egan. In a later interview
Egan denied this, claiming that the only time he had ever seen the
woman was while escorting her out of a barracks so that the men
could get some sleep.

From Wyoming she went to Hays City, Kansas, still following the
Seventh Cavalry, her chosen military unit. Six years later she turned
up again, this time disguised as a man and marching with General
Crook’s police force, which was trying to keep settlers out of the
Hills. Her autobiography claims that she also accompanied Custer’s
command on its famous exploratory march, but this does not appear
to be true.
After the discovery of gold in Deadwood, she found the high
life in that town so completely to her liking that she made it
her home base. In time she fastened herself so securely among the
legends of the metropolis that she was thereafter known solely as
Calamity Jane of Deadwood.

Taking advantage of the high romance which surrounded Wild Bill’s
name after his death, Calamity made haste to pass the story around
that he had been her only true love; and although there was no
evidence of any sort that he even knew who she was, her last
words, when she died in 1903, were a plea to be buried next to him.

In the eighties she became restless again and forsook her beloved
Deadwood for two decades, roving as far south as El Paso, and on
one occasion being seen in California. Her activities at this time of
her life are mostly lost from sight, but it may be presumed that as
whatever charms she may earlier have had faded, her interest to
and in the soldiers waned. During this period she married again, this
time wedding a man named Burke, to whom she bore a daughter.
She soon tired of Burke, however, and drifted slowly north again,
passing considerable time in Colorado and then returning briefly to
Deadwood in 1895. Even at that late date the citizens of the
gold town had not forgotten her, nor had the esteem in which
she had earlier been held dwindled; when it was discovered that she
lacked funds to care for her daughter, the townspeople passed the
ever-present hat and arranged for the care of the child. This act of
generosity was purportedly to repay a great sacrifice which Calamity
Jane had made in the earlier days, braving the dangers of the
smallpox scourge of 1878 to nurse whoever was ill and without help.
This particular legend has had wide currency in the West, its closest
variant being the tale of Silver Heels, a dancing girl who visited the
mining camps of Colorado’s South Park in the sixties. Silver Heels is
popularly supposed to have ministered to the miners during a similar
plague, and for this bravery a near-by mountain was named in her honor.
After placing her child in a school, Calamity, who was destitute,
betook herself to the vaudeville circuit. Inasmuch as through the
dime novels she had already become a well-known national figure,
she was able for a while to draw large crowds. Had it not been for
her unfortunate habit of getting dead drunk before show time, she
might well have amassed a competence over the years. But
her first contract was not renewed, and after a brief whirl at
the Buffalo Exposition she returned to the West, spending the next
several years in Montana.

At last she came home to Deadwood, a sick and broken old
roustabout. By this time she was nothing more than a bar-fly, and
she lived out her last days panhandling food and liquor money from
strangers. At last, on August 2, 1903, she died of pneumonia.

Deadwood turned out in force for her funeral. As she had requested,
she was buried near Wild Bill Hickok on Mount Moriah, overlooking
the town. That she had never really known Wild Bill was quite beside
the point, and anyway, there was none present who knew whether
she had or not. The shoddy story of her “love” for Hickok was
nothing that interested the old-timers, but was saved for historians
to untangle. That she was no more than an alcoholic old harlot was
of no consequence, either, to the good citizens, for with her passing
the last of the great names of the frontier was coming home to rest.
That the townspeople were proud of her, and genuinely so, was not
to be denied, although there was most certainly nobody
present at that melancholy service who could have told why.
The truth of the matter was that they were burying not a broken old
woman, but the last of the Black Hills legends.

CHAPTER SIX

The White River Badlands


Any visit to the Black Hills must also be the occasion of a tour, at
least for a few hours, of the famous South Dakota Badlands. This
fantastic National Monument is not a part of the Hills, either
geographically or historically, but because the two regions lie so
close together—a scant fifty miles apart—they are expediently linked
as two great natural wonders in the same region.

The term “badlands” has a loose scientific acceptance, meaning any
region where a specific type of heavy erosion has taken place. Such
regions usually have subnormal rainfall and sparse vegetation. Those
rains that do occur, then, find little on the earth’s surface to prevent
almost complete runoff, which is so vigorous as to act as a powerful
cutting agent. The final ingredients of a badlands are rock
formations known as unconsolidated—lacking any general
unity of structure which might tend to withstand erosion. When all
these conditions exist, the devastation of the rushing flood waters is
without pattern, a great gash being carved in one spot while no
damage is visible on a near-by outcropping. The end result is an
almost frightening collection of gruesome stone monuments rising to
the sky and marking the heights once reached by a general plateau.

Actually, much of the high western plain abutting on the Rocky
Mountains is basic badland formation, and small pockets of distinct
erosion can be seen all through eastern Colorado, western Nebraska,
and eastern Wyoming, in addition to the vast depression in the
valleys of the White and Cheyenne rivers in South Dakota. This one
region, though, sixty-five miles long and five to fifteen miles wide, is
the largest and from the geologist’s point of view the most important