Parallel Programming with Microsoft .NET: Design Patterns for Decomposition and Coordination on Multicore Architectures (Patterns & Practices), 1st Edition
Author(s): Colin Campbell, Ralph Johnson, Ade Miller, Stephen Toub
ISBN(s): 9780735651593, 0735651590
Edition: 1
File Details: PDF, 3.74 MB
Year: 2010
Language: English
PARALLEL PROGRAMMING WITH MICROSOFT .NET®
Colin Campbell
Ralph Johnson
Ade Miller
Stephen Toub
Foreword by Tony Hey
a guide to parallel programming
Parallel Programming with Microsoft .NET®
Colin Campbell
Ralph Johnson
Ade Miller
Stephen Toub
ISBN 9780735640603
Microsoft, MSDN, Visual Basic, Visual C#, Visual Studio, Windows, Windows
Live, Windows Server, and Windows Vista are trademarks of the Microsoft
group of companies.
Foreword xi
Tony Hey
Preface xiii
Who This Book Is For xiii
Why This Book Is Pertinent Now xiv
What You Need to Use the Code xiv
How to Use This Book xv
Introduction xvi
Parallelism with Control Dependencies Only xvi
Parallelism with Control and Data Dependencies xvi
Dynamic Task Parallelism and Pipelines xvi
Supporting Material xvii
What Is Not Covered xviii
Goals xviii
Acknowledgments xix
1 Introduction 1
The Importance of Potential Parallelism 2
Decomposition, Coordination,
and Scalable Sharing 3
Understanding Tasks 3
Coordinating Tasks 4
Scalable Sharing of Data 5
Design Approaches 6
Selecting the Right Pattern 7
A Word About Terminology 7
The Limits of Parallelism 8
A Few Tips 10
Exercises 11
For More Information 11
2 Parallel Loops 13
The Basics 14
Parallel for Loops 14
Parallel ForEach 15
Parallel LINQ (PLINQ) 16
What to Expect 16
An Example 18
Sequential Credit Review Example 19
Credit Review Example Using Parallel.ForEach 19
Credit Review Example with PLINQ 20
Performance Comparison 21
Variations 21
Breaking Out of Loops Early 21
Parallel Break 21
Parallel Stop 23
External Loop Cancellation 24
Exception Handling 26
Special Handling of Small Loop Bodies 26
Controlling the Degree of Parallelism 28
Using Task-Local State in a Loop Body 29
Using a Custom Task Scheduler for a Parallel Loop 31
Anti-Patterns 32
Step Size Other than One 32
Hidden Loop Body Dependencies 32
Small Loop Bodies with Few Iterations 32
Processor Oversubscription and Undersubscription 33
Mixing the Parallel Class and PLINQ 33
Duplicates in the Input Enumeration 34
Design Notes 34
Adaptive Partitioning 34
Adaptive Concurrency 34
Support for Nested Loops and Server Applications 35
Related Patterns 35
Exercises 35
Further Reading 37
3 Parallel Tasks 39
The Basics 40
An Example 41
Variations 43
Canceling a Task 43
Handling Exceptions 44
Ways to Observe an Unhandled Task Exception 45
Aggregate Exceptions 45
The Handle Method 46
The Flatten Method 47
Waiting for the First Task to Complete 48
Speculative Execution 49
Creating Tasks with Custom Scheduling 50
Anti-Patterns 51
Variables Captured by Closures 51
Disposing a Resource Needed by a Task 52
Avoid Thread Abort 53
Design Notes 53
Tasks and Threads 53
Task Life Cycle 53
Writing a Custom Task Scheduler 54
Unobserved Task Exceptions 55
Relationship Between Data Parallelism
and Task Parallelism 56
The Default Task Scheduler 56
The Thread Pool 57
Decentralized Scheduling Techniques 58
Work Stealing 59
Top-Level Tasks in the Global Queue 60
Subtasks in a Local Queue 60
Inlined Execution of Subtasks 60
Thread Injection 61
Bypassing the Thread Pool 63
Exercises 64
Further Reading 65
4 Parallel Aggregation 67
The Basics 68
An Example 69
Variations 73
Using Parallel Loops for Aggregation 73
Using a Range Partitioner for Aggregation 76
Using PLINQ Aggregation with Range Selection 77
Design Notes 80
Related Patterns 82
Exercises 82
Further Reading 83
5 Futures 85
The Basics 86
Futures 86
Continuation Tasks 88
Example: The Adatum Financial Dashboard 89
The Business Objects 91
The Analysis Engine 92
Loading External Data 95
Merging 95
Normalizing 96
Analysis and Model Creation 96
Processing Historical Data 96
Comparing Models 96
View and View Model 97
Variations 97
Canceling Futures and Continuation Tasks 97
Continue When “At Least One” Antecedent Completes 97
Using .NET Asynchronous Calls with Futures 97
Removing Bottlenecks 98
Modifying the Graph at Run Time 98
Design Notes 99
Decomposition into Futures and Continuation Tasks 99
Functional Style 99
Related Patterns 100
Pipeline Pattern 100
Master/Worker Pattern 100
Dynamic Task Parallelism Pattern 100
Discrete Event Pattern 100
Exercises 101
Further Reading 101
7 Pipelines 113
The Basics 113
An Example 117
Sequential Image Processing 117
The Image Pipeline 119
Performance Characteristics 120
Variations 122
Canceling a Pipeline 122
Handling Pipeline Exceptions 124
Load Balancing Using Multiple Producers 126
Pipelines and Streams 129
Asynchronous Pipelines 129
Anti-Patterns 129
Thread Starvation 129
Infinite Blocking Collection Waits 130
Forgetting GetConsumingEnumerable() 130
Using Other Producer/Consumer Collections 130
Design Notes 131
Related Patterns 131
Exercises 132
Further Reading 132
Appendices
a Adapting Object-Oriented Patterns 133
Structural Patterns 133
Façade 134
Example 134
Guidelines 134
Decorators 134
Example 135
Guidelines 136
Adapters 136
Example 137
Guidelines 138
Repositories and Parallel Data Access 138
Example 139
Guidelines 139
Singletons and Service Locators 139
Implementing a Singleton with the Lazy<T> Class 140
Notes 141
Guidelines 141
Model-View-ViewModel 142
Example 143
The Dashboard’s User Interface 144
Guidelines 147
Immutable Types 148
Example 149
Immutable Types as Value Types 150
Compound Values 152
Guidelines 152
Shared Data Classes 153
Guidelines 153
Iterators 154
Example 154
Lists and Enumerables 155
Further Reading 156
Structural Patterns 156
Singleton 156
Model-View-ViewModel 157
Immutable Types 158
Glossary 177
References 187
Other Online Sources 189
Index 191
Foreword
Tony Hey
Corporate Vice President, Microsoft Research
Preface
figure 1
Parallel programming patterns. The figure groups the book’s chapters into patterns coordinated by control flow only and patterns coordinated by both control flow and data flow.
After the introduction, the book has one branch that discusses data
parallelism and another that discusses task parallelism.
Both parallel loops and parallel tasks use only the program’s
control flow as the means to coordinate and order tasks. The other
patterns use both control flow and data flow for coordination.
Control flow refers to the steps of an algorithm. Data flow refers to
the availability of inputs and outputs.
introduction
Chapter 1 introduces the common problems faced by developers
who want to use parallelism to make their applications run faster. It
explains basic concepts and prepares you for the remaining chapters.
There is a table in the “Design Approaches” section of Chapter 1 that
can help you select the right patterns for your application.
supporting material
In addition to the patterns, there are several appendices:
• Appendix A, “Adapting Object-Oriented Patterns.”
This appendix gives tips for adapting some of the common
object-oriented patterns, such as facades, decorators, and
repositories, to multicore architectures.
• Appendix B, “Debugging and Profiling Parallel Applications.”
This appendix gives you an overview of how to debug and
profile parallel applications in Visual Studio 2010.
• Appendix C, “Technology Roadmap.” This appendix describes
the various Microsoft technologies and frameworks for parallel
programming.
• Glossary. The glossary contains definitions of the terms used
in this book.
• References. The references cite the works mentioned in this
book.
Everyone should read Chapters 1, 2, and 3 for an introduction and
overview of the basic principles. Although the succeeding material is
presented in a logical order, each chapter, from Chapter 4 on, can be
read independently.
Callouts in a distinctive style, such as the one shown in the margin,
alert you to things you should watch out for.
Don’t apply the patterns in this book blindly to your applications.
It’s very tempting to take a new tool or technology and try and
use it to solve whatever problem is confronting you, regardless of the
tool’s applicability. As the saying goes, “when all you have is a hammer,
everything looks like a nail.” The “everything’s a nail” mentality can
lead to very unfortunate results, which one hopes the bunny in Figure
2 will be able to avoid.
You also want to avoid unfortunate results in your parallel pro-
grams. Adding parallelism to your application costs time and adds
complexity. For good results, you should only parallelize the parts of
your application where the benefits outweigh the costs.
figure 2
“When all you have is a hammer, everything looks like a nail.”
Goals
After reading this book, you should be able to:
• Answer the questions at the end of each chapter.
• Figure out if your application fits one of the book’s patterns
and, if it does, know whether there’s a good chance of producing
a straightforward parallel implementation.
• Understand when your application doesn’t fit one of these
patterns. At that point, you either have to do more reading
and research, or enlist the help of an expert.
• Have an idea of the likely causes, such as conflicting
dependencies or erroneously sharing data between tasks,
if your implementation of a pattern doesn’t work.
• Use the “Further Reading” sections to find more material.
Acknowledgments
1 Introduction
The CPU meter shows the problem. One core is running at 100 per-
cent, but all the other cores are idle. Your application is CPU-bound,
but you are using only a fraction of the computing power of your
multicore system. What next?
The answer, in a nutshell, is parallel programming. Where you once
would have written the kind of sequential code that is familiar to all
programmers, you now find that this no longer meets your perfor-
mance goals. To use your system’s CPU resources efficiently, you need
to split your application into pieces that can run at the same time.
Parallel programming uses multiple cores at the same time to improve your application’s speed.
This is easier said than done. Parallel programming has a
reputation for being the domain of experts and a minefield of subtle,
hard-to-reproduce software defects. Everyone seems to have a favor-
ite story about a parallel program that did not behave as expected
because of a mysterious bug.
These stories should inspire a healthy respect for the difficulty
of the problems you face in writing your own parallel programs.
Fortunately, help has arrived. The Microsoft® .NET Framework 4 in-
troduces a new programming model for parallelism that significantly
simplifies the job. Behind the scenes are supporting libraries with
sophisticated algorithms that dynamically distribute computations on
multicore architectures. In addition, the Microsoft Visual Studio® 2010
development system includes debugging and analysis tools to support
the new parallel programming model.
Writing parallel programs has the reputation of being hard, but help has arrived.
Proven design patterns are another source of help. This guide
introduces you to the most important and frequently used patterns
of parallel programming and gives executable code samples for them,
using the Task Parallel Library (TPL) and Parallel LINQ (PLINQ). When
thinking about where to begin, a good place to start is to review the
patterns in this book. See if your problem has any attributes that
match the six patterns presented in the following chapters. If it does,
delve more deeply into the relevant pattern or patterns and study the
sample code.
the programs you write today will run on computers with many more
cores within a few years. Focusing on potential parallelism helps to
“future proof” your program.
Hardware trends predict more cores instead of faster clock speeds.
Finally, you must plan for these contingencies in a way that does
not penalize users who might not have access to the latest hardware.
You want your parallel application to run as fast on a single-core com-
puter as an application that was written using only sequential code. In
other words, you want scalable performance from one to many cores.
Allowing your application to adapt to varying hardware capabilities,
both now and in the future, is the motivation for potential parallelism.
A well-written parallel program runs at approximately the same speed as a sequential program when there is only one core available.
An example of potential parallelism is the parallel loop pattern
described in Chapter 2, “Parallel Loops.” If you have a for loop that
performs a million independent iterations, it makes sense to divide
those iterations among the available cores and do the work in parallel.
It’s easy to see that how you divide the work should depend on the
number of cores. For many common scenarios, the speed of the loop
will be approximately proportional to the number of cores.
Decomposition, Coordination,
and Scalable Sharing
The patterns in this book contain some common themes. You’ll see
that the process of designing and implementing a parallel application
involves three aspects: methods for decomposing the work into dis-
crete units known as tasks, ways of coordinating these tasks as they
run in parallel, and scalable techniques for sharing the data needed to
perform the tasks.
The patterns described in this guide are design patterns. You can
apply them when you design and implement your algorithms and
when you think about the overall structure of your application.
Although the example applications are small, the principles they dem-
onstrate apply equally well to the architectures of large applications.
understanding tasks
Tasks are sequential operations that work together to perform a
larger operation. When you think about how to structure a parallel
program, it’s important to identify tasks at a level of granularity that
results in efficient use of hardware resources. If the chosen granular-
ity is too fine, the overhead of managing tasks will dominate. If it’s too
coarse, opportunities for parallelism may be lost because cores that
could otherwise be used remain idle. In general, tasks should be
as large as possible, but they should remain independent of each
other, and there should be enough tasks to keep the cores busy. You
may also need to consider the heuristics that will be used for task
scheduling.
Tasks are sequential units of work. Tasks should be large, independent, and numerous enough to keep all cores busy.
coordinating tasks
It’s often possible that more than one task can run at the same time.
Tasks that are independent of one another can run in parallel, while
some tasks can begin only after other tasks complete. The order of
execution and the degree of parallelism are constrained by the appli-
cation’s underlying algorithms. Constraints can arise from control
flow (the steps of the algorithm) or data flow (the availability of inputs
and outputs).
Various mechanisms for coordinating tasks are possible. The way
tasks are coordinated depends on which parallel pattern you use. For
example, the pipeline pattern described in Chapter 7, “Pipelines,” is
distinguished by its use of concurrent queues to coordinate tasks.
Regardless of the mechanism you choose for coordinating tasks, in
order to have a successful design, you must understand the dependen-
cies between tasks.
design approaches
It’s common for developers to identify one problem area, parallelize
the code to improve performance, and then repeat the process for the
next bottleneck. This is a particularly tempting approach when you
parallelize an existing sequential application. Although this may give
you some initial improvements in performance, it has many pitfalls,
and it may not produce the best results. A far better approach is to
understand your problem or application and look for potential
parallelism across the entire application as a whole. What you dis-
cover may lead you to adopt a different architecture or algorithm that
better exposes the areas of potential parallelism in your application.
Don’t simply identify bottlenecks and parallelize them. Instead, pre-
pare your program for parallel execution by making structural changes.
Think in terms of data structures and algorithms; don’t just identify bottlenecks.
Techniques for decomposition, coordination, and scalable sharing
are interrelated. There’s a circular dependency. You need to consider
all of these aspects together when choosing your approach for a
particular application.
After reading the preceding description, you might complain that
it all seems vague. How specifically do you divide your problem into
tasks? Exactly what kinds of coordination techniques should you use?
Questions like these are best answered by the patterns described
in this book. Patterns are a true shortcut to understanding. As you
begin to see the design motivations behind the patterns, you will also
develop your intuition about how the patterns and their variations can
be applied to your own applications. The following section gives more
details about how to select the right pattern.
Use patterns.
selecting the right pattern
One way to become familiar with the possibilities of the six patterns
is to read the first page or two of each chapter. This gives you an
overview of approaches that have been proven to work in a wide va-
riety of applications. Then go back and more deeply explore patterns
that may apply in your situation.
[Figures from “The Limits of Parallelism”: speedup curves plotted against the number of processors (0 to 16) and against the number of cores (1 to 5), with a key distinguishing the % Parallel and % Sequential portions of the workload.]
A Few Tips
Always try for the simplest approach. Here are some basic precepts:
• Whenever possible, stay at the highest possible level of abstrac-
tion and use constructs or a library that does the parallel work
for you.
• Use your application server’s inherent parallelism; for example,
use the parallelism that is incorporated into a web server or
database.
• Use an API to encapsulate parallelism, such as Microsoft Parallel
Extensions for .NET (TPL and PLINQ). These libraries were
written by experts and have been thoroughly tested; they help
you to avoid many of the common problems that arise in parallel
programming.
• Consider the overall architecture of your application when
thinking about how to parallelize it. It’s tempting to simply look
for the performance hotspots and focus on improving them.
While this may improve things, it does not necessarily give you
the best results.
• Use patterns, such as the ones described in this book.
• Often, restructuring your algorithm (for example, to eliminate
the need for shared data) is better than making low-level
improvements to code that was originally designed to run
serially.
• Don’t share data among concurrent tasks unless absolutely
necessary. If you do share data, use one of the containers
provided by the API you are using, such as a shared queue.
• Use low-level primitives, such as threads and locks, only as
a last resort. Raise the level of abstraction from threads to
tasks in your applications.
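To make the last tip concrete, here is a brief sketch (not taken from the book) that contrasts managing a thread directly with expressing the same work as a task; the Compute method is a placeholder.

using System;
using System.Threading;
using System.Threading.Tasks;

class AbstractionSketch
{
    static void Compute(string label)
    {
        Console.WriteLine("Running {0}", label);
    }

    static void Main()
    {
        // Low-level approach: create and manage a thread yourself.
        var thread = new Thread(() => Compute("on a dedicated thread"));
        thread.Start();
        thread.Join();

        // Higher-level approach: express the work as a task and let the
        // TPL scheduler decide how to map it onto threads and cores.
        Task task = Task.Factory.StartNew(() => Compute("as a task"));
        task.Wait();
    }
}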
Exercises
1. What are some of the tradeoffs between decomposing
a problem into many small tasks versus decomposing it
into larger tasks?
2. What is the maximum potential speedup of a program
that spends 10 percent of its time in sequential processing
when you move it from one to four cores?
3. What is the difference between parallelism and
concurrency?
2 Parallel Loops
Use the Parallel Loop pattern when you need to perform the same
independent operation for each element of a collection or for a fixed
number of iterations. The steps of a loop are independent if they
don’t write to memory locations or files that are read by other steps.
The syntax of a parallel loop is very similar to the for and foreach
loops you already know, but the parallel loop runs faster on a com-
puter that has available cores. Another difference is that, unlike a se-
quential loop, the order of execution isn’t defined for a parallel loop.
Steps often take place at the same time, in parallel. Sometimes, two
steps take place in the opposite order than they would if the loop
were sequential. The only guarantee is that all of the loop’s iterations
will have run by the time the loop finishes.
It’s easy to change a sequential loop into a parallel loop. However,
it’s also easy to use a parallel loop when you shouldn’t. This is because
it can be hard to tell if the steps are actually independent of each
other. It takes practice to learn how to recognize when one step is
dependent on another step. Sometimes, using this pattern on a loop
with dependent steps causes the program to behave in a completely
unexpected way, and perhaps to stop responding. Other times, it in-
troduces a subtle bug that only appears once in a million runs. In
other words, the word “independent” is a key part of the definition of
this pattern, and one that this chapter explains in detail.
The Parallel Loop pattern independently applies an operation to multiple data elements. It’s an example of data parallelism.
For parallel loops, the degree of parallelism doesn’t need to be
specified by your code. Instead, the run-time environment executes
the steps of the loop at the same time on as many cores as it can. The
loop works correctly no matter how many cores are available. If there
is only one core, the performance is close to (perhaps within a few
percentage points of) the sequential equivalent. If there are multiple
cores, performance improves; in many cases, performance improves
proportionately with the number of cores.
The Basics
The .NET Framework includes both parallel For and parallel ForEach
loops; the parallel loop pattern is also implemented in the Parallel
LINQ (PLINQ) query language. Use the Parallel.For method to iterate
over a range of integer indices and the Parallel.ForEach method to
iterate over user-provided values. Use PLINQ if you prefer a high-level,
declarative style for describing loops or if you want to take advantage
of PLINQ’s convenience and flexibility.
To make for and foreach loops with independent iterations run faster on multicore computers, use their parallel counterparts.
Parallel.For uses multiple cores to operate over an index range.
Parallel.For is a static method with overloaded versions. Here’s the
signature of the version of Parallel.For that’s used in the example.
Parallel.For(int fromInclusive,
int toExclusive,
Action<int> body);
In the example, the first two arguments specify the iteration limits.
The first argument is the lowest index of the loop. The second argu-
ment is the exclusive upper bound, or the largest index plus one. The
third argument is an action that’s invoked once per iteration. The ac-
tion takes the iteration’s index as its argument and executes the loop
body once for each index.
The Parallel.For method does not guarantee any particular order of execution. Unlike a sequential loop, some higher-valued indices may be processed before some lower-valued indices.
The Parallel.For method has additional overloaded versions.
These are covered in the section, “Variations,” later in this chapter and
in Chapter 4, “Parallel Aggregation.”
The example includes a lambda expression in the form args =>
body as the third argument to the Parallel.For invocation. Lambda
expressions are unnamed methods that can capture variables from
their enclosing scope.
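The sample that these paragraphs refer to does not appear in this excerpt. As a stand-in, here is a minimal, self-contained sketch of a Parallel.For call over an index range; the DoWork method and the results array are placeholders, not the book’s sample code.

using System;
using System.Threading.Tasks;

class ParallelForSketch
{
    static double DoWork(int i)
    {
        // Placeholder for an independent, CPU-bound computation.
        return Math.Sqrt(i);
    }

    static void Main()
    {
        int n = 1000;
        var results = new double[n];

        // The first two arguments are the inclusive lower bound and the
        // exclusive upper bound; the lambda runs once per index, possibly
        // on several cores at once and in no particular order.
        Parallel.For(0, n, i =>
        {
            results[i] = DoWork(i);
        });

        Console.WriteLine("Computed {0} results.", n);
    }
}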
Parallel.ForEach runs the loop body for each element in a collection.
Parallel.ForEach is a static method with overloaded versions. Here’s
the signature of the version of Parallel.ForEach that was used in the
example.
ForEach<TSource>(IEnumerable<TSource> source,
Action<TSource> body);
// LINQ
var query1 = from i in source select Normalize(i);
// PLINQ
var query2 = from i in source.AsParallel()
select Normalize(i);
This code example creates two queries that transform values of the
enumerable object source. The PLINQ version uses multiple cores if
they’re available.
You can also use PLINQ’s ForAll extension method in cases
where you want to iterate over the input values but you don’t want
to select output values to return. This is shown in the following code.
IEnumerable<MyObject> myEnumerable = ...
myEnumerable.AsParallel().ForAll(obj => DoWork(obj));
The ForAll extension method is the PLINQ equivalent of
Parallel.ForEach.
It’s important to use PLINQ’s ForAll extension method instead of giving a PLINQ query as an argument to the Parallel.ForEach method. For more information, see the section, “Mixing the Parallel Class and PLINQ,” later in this chapter.
what to expect
By default, the degree of parallelism (that is, how many iterations run
at the same time in hardware) depends on the number of available
cores. In typical scenarios, the more cores you have, the faster your
loop executes, until you reach the point of diminishing returns that
Amdahl’s Law predicts. How much faster depends on the kind of
work your loop does.
Adding cores makes your loop run faster; however, there’s always an upper limit.
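Amdahl’s Law itself can be stated as follows (a standard formulation, not a formula quoted from this book): if P is the fraction of the work that can run in parallel and N is the number of cores, then

maximum speedup = 1 / ((1 - P) + P / N)

For example, a program that spends 10 percent of its time in sequential processing (P = 0.9) can run at most about 3.1 times faster on four cores, and no more than 10 times faster no matter how many cores are added.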
You must choose the correct granularity. Too many small parallel loops can reach a point of over-decomposition where the multicore speedup is more than offset by the parallel loop’s overhead.
The .NET implementation of the Parallel Loop pattern ensures
that exceptions that are thrown during the execution of a loop body
are not lost. For both the Parallel.For and Parallel.ForEach methods
as well as for PLINQ, exceptions are collected into an AggregateEx-
ception object and rethrown in the context of the calling thread. All
exceptions are propagated back to you. To learn more about excep-
tion handling for parallel loops, see the section, “Variations,” later in
this chapter.
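No exception-handling sample appears in this part of the excerpt; the following is a small sketch, written for illustration rather than taken from the book, of catching the AggregateException that a parallel loop rethrows (DoWork is a placeholder).

using System;
using System.Threading.Tasks;

class LoopExceptionSketch
{
    static void DoWork(int i)
    {
        if (i == 42) throw new InvalidOperationException("bad input: " + i);
    }

    static void Main()
    {
        try
        {
            Parallel.For(0, 100, i => DoWork(i));
        }
        catch (AggregateException ae)
        {
            // Exceptions from all failed iterations are collected here.
            foreach (var ex in ae.Flatten().InnerExceptions)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }
}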
Robust exception handling is an important aspect of parallel loop processing.
Parallel loops have many variations. There are 12 overloaded
methods for Parallel.For and 20 overloaded methods for
Parallel.ForEach. PLINQ has close to 200 extension methods. Although there
are many overloaded versions of For and ForEach, you can think of
the overloads as providing optional configuration options. Two ex-
amples are a maximum degree of parallelism and hooks for external
cancellation. These options allow the loop body to monitor the prog-
ress of other steps (for example, to see if exceptions are pending) and
to manage task-local state. They are sometimes needed in advanced
scenarios. To learn about the most important cases, see the section,
“Variations,” later in this chapter.
Check carefully for dependencies between loop iterations! Not noticing dependencies between steps is by far the most common mistake you’ll make with parallel loops.
If you convert a sequential loop to a parallel loop and then find
that your program does not behave as expected, the most likely
problem is that the loop’s steps are not independent. Here are some
common examples of dependent loop bodies:
• Writing to shared variables. If the body of a loop writes to
a shared variable, there is a loop body dependency. This is a
common case that occurs when you are aggregating values.
Here is an example, where total is shared across iterations
(a dependency-free alternative is sketched after this list).
for(int i = 1; i < n; i++)
total += data[i];
• Using properties of an object model. If, for all values of i,
SomeObject[i].Parent is a reference to a single shared object,
it’s likely that the loop iterations are not independent.
You must be extremely cautious when getting data from properties and methods. Large object models are known for sharing mutable state in unbelievably devious ways.
• Referencing data types that are not thread safe. If the body of
the parallel loop uses a data type that is not thread safe, the
loop body is not independent (there is an implicit dependency
on the thread context). An example of this case, along with a
solution, is shown in “Using Task-Local State in a Loop Body” in
the section, “Variations,” later in this chapter.
• Loop-carried dependence. If the body of a parallel for loop
performs arithmetic on the loop index, there is likely to be a
dependency that is known as loop-carried dependence. This is
shown in the following code example. The loop body references
data[i] and data[i – 1]. If Parallel.For is used here, there’s no
guarantee that the loop body that updates data[i – 1] has
executed before the loop for data[i].
for(int i = 1; i < N; i++)
data[i] = data[i] + data[i - 1];
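As mentioned in the bullet on writing to shared variables, aggregating into a shared total creates a dependency. Purely as an illustration (Chapter 4, “Parallel Aggregation,” is the book’s full treatment of the topic), here is one dependency-free way to express that kind of sum with PLINQ.

using System;
using System.Linq;

class AggregationSketch
{
    static void Main()
    {
        double[] data = { 1.0, 2.0, 3.0, 4.0, 5.0 };

        // Each partition computes a partial sum and PLINQ combines them,
        // so no iteration writes to a variable shared with the others.
        double total = data.AsParallel().Sum();

        Console.WriteLine(total);
    }
}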
An Example
Here’s an example of when to use a parallel loop. Fabrikam Shipping
extends credit to its commercial accounts. It uses customer credit
trends to identify accounts that might pose a credit risk. Each cus-
tomer account includes a history of past balance-due amounts. Fabri-
kam has noticed that customers who don’t pay their bills often have
histories of steadily increasing balances over a period of several
months before they default.
To identify at-risk accounts, Fabrikam uses statistical trend analy-
sis to calculate a projected credit balance for each account. If the
analysis predicts that a customer account will exceed its credit limit
within three months, the account is flagged for manual review by one
of Fabrikam’s credit analysts.
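The credit-review implementations compared below appear on pages that are not part of this excerpt. As a rough illustration only, the following sketch shows the general shape such a Parallel.ForEach loop might take; the Account class and the PredictBalanceInThreeMonths method are invented placeholders, not Fabrikam’s or the book’s actual code.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class CreditReviewSketch
{
    // Hypothetical account type; the book's business objects differ.
    class Account
    {
        public List<double> Balances = new List<double>();
        public bool FlaggedForReview;
    }

    // Placeholder for the statistical trend analysis described in the text.
    static double PredictBalanceInThreeMonths(Account account)
    {
        int count = account.Balances.Count;
        return count > 0 ? account.Balances[count - 1] : 0.0;
    }

    static void ReviewAccounts(IEnumerable<Account> accounts, double creditLimit)
    {
        // Each iteration touches only its own account object, so the
        // iterations are independent and can safely run in parallel.
        Parallel.ForEach(accounts, account =>
        {
            if (PredictBalanceInThreeMonths(account) > creditLimit)
            {
                account.FlaggedForReview = true;
            }
        });
    }
}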
performance comparison
Running the credit review example on a quad-core computer shows
that the Parallel.ForEach and PLINQ versions run slightly less than
four times as fast as the sequential version. Timing numbers vary; you
may want to run the online samples on your own computer.
Variations
The credit analysis example shows a typical way to use parallel loops,
but there can be variations. This section introduces some of the most
important ones. You won’t always need to use these variations, but
you should be aware that they are available.
Parallel Break
Use Break to exit a loop early while ensuring that lower-indexed steps complete.
The Parallel.For method has an overload that provides a
ParallelLoopState object as a second argument to the loop body. You can
ask the loop to break by calling the Break method of the
ParallelLoopState object. Here’s an example.
int n = ...
var loopResult = Parallel.For(0, n, (i, loopState) =>
{
// ...
if (/* stopping condition is true */)
{
loopState.Break();
return;
}
});
if (!loopResult.IsCompleted &&
loopResult.LowestBreakIteration.HasValue)
{
Console.WriteLine("Loop encountered a break at {0}",
loopResult.LowestBreakIteration.Value);
}
The Break method ensures that data up to a particular iteration index
value will be processed. Depending on how the iterations are sched-
uled, it may be possible that some steps with a higher index value than
the one that called the Break method may have been started before
the call to Break occurs.
Be aware that some steps with index values higher than the step that called the Break method might be run. There’s no way of predicting when or if this might happen.
The Parallel.ForEach method also supports the loop state Break
method. The parallel loop assigns items a sequence number, starting
from zero, as it pulls them from the enumerable input. This sequence
number is used as the iteration index for the LowestBreakIteration
property.
Parallel Stop
Use Stop to exit a loop early when you don’t need all lower-indexed iterations to run before terminating the loop.
There are also situations, such as unordered searches, where you want
the loop to stop as quickly as possible after the stopping condition is
met. The difference between “break” and “stop” is that, with stop, no
attempt is made to execute loop iterations less than the stopping in-
dex if they have not already run. To stop a loop in this way, call the
ParallelLoopState class’s Stop method instead of the Break method.
Here is an example of parallel stop.
var n = ...
var loopResult = Parallel.For(0, n, (i, loopState) =>
{
if (/* stopping condition is true */)
{
loopState.Stop();
return;
}
result[i] = DoWork(i);
});
if (!loopResult.IsCompleted &&
!loopResult.LowestBreakIteration.HasValue)
{
Console.WriteLine("Loop was stopped");
}
When the Stop method is called, the index value of the iteration
that caused the stop isn’t available.
You cannot call both Break and Stop during the same parallel
loop. You have to choose which of the two loop exit behaviors you
want to use. If you call both Break and Stop in the same parallel loop,
an exception will be thrown.
Parallel programs use Stop more often than Break. Processing all
iterations with indices less than the stopping iteration is usually not
necessary when the loop bodies are independent of each other. It’s
also true that Stop shuts down a loop more quickly than Break.
You’ll probably use Stop more often than Break.
There’s no Stop method for a PLINQ query, but you can use the
WithCancellation extension method and then use cancellation as a
way to stop PLINQ execution. For more information, see the next
section, “External Loop Cancellation.”
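A WithCancellation sample is not shown in this excerpt; the sketch below, written for illustration, cancels a PLINQ query through a CancellationTokenSource. The query and the cancellation trigger are placeholders.

using System;
using System.Linq;
using System.Threading;

class PlinqCancellationSketch
{
    static void Main()
    {
        var source = Enumerable.Range(0, 10000000);
        var cts = new CancellationTokenSource();

        // Some other part of the program (a button handler, a timeout, and
        // so on) would request cancellation; here a worker does it after 50 ms.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Thread.Sleep(50);
            cts.Cancel();
        });

        try
        {
            var results = source.AsParallel()
                                .WithCancellation(cts.Token)
                                .Select(i => (long)i * i)
                                .ToArray();
            Console.WriteLine("Query completed with {0} results.", results.Length);
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Query was canceled.");
        }
    }
}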
“Ah, prow of Argo and the brine that flashed into whiteness! ah,
my two sons!” Her talk with them towards the end is a pathetic and
lovely passage equal to anything Euripides ever wrote in this kind.
Melanippe the Wise[809] appears to have been a drama of unusual
personal interest. Æolus espoused Hippe, whose daughter Melanippe
became by Poseidon mother of twin sons. The god bade her hide
them from Æolus, and they were discovered by grooms in the care
of a bull and a cow. They, supposing the children miraculous
offspring of these animals, reported their discovery to Æolus, who
decided to expiate the portent by burning the infants alive.
Melanippe was instructed to shroud them for death. In order to save
her children without revealing her own secret she denied the
possibility of such portentous births, but seems to have found herself
forced at length to confess in order to prove the natural origin of the
infants. Æolus condemned her to be blinded and imprisoned, her
offspring to be exposed. Her mother Hippe appeared as dea ex
machina[810] and saved her kin.
The great feature of this play was the heroine’s speech in which
she sought to convince her father that such a portent was
impossible. Lines from the opening of this argument are preserved:
“The story is not mine—from my mother have I learned how Heaven
and earth were once mingled in substance; when they separated
into twain they engendered and brought into the light of day all
creatures, the trees, birds, beasts, nurslings of the sea, and the race
of men”. The speech was an elaborate scientific sermon to disprove
the possibility of miracles. Similarly, according to a famous story, the
drama opened originally with the line: “Zeus, whoever Zeus may be,
for only by stories do I know of him ...”; but this open agnosticism
gave such offence that Euripides produced the play again with the
words: “Zeus, as Truth relates....” A different but closely-connected
source of interest is the fact that here Euripides veiled his own
personality less thinly than usual. That Melanippe was only his
mouthpiece appears to have been a recognized fact. Dionysius of
Halicarnassus[811] observes that it presents a double character, that
of the poet, and that of Melanippe; and Lucian[812] selects the
remark on Zeus in the prologue as a case where the poet is speaking
his own views. The “mother” from whom “Melanippe” learned her
philosophy has been identified with the great metaphysician and
scientist Anaxagoras, who was banished from Athens in 430 b.c.; and
it is natural to suppose that this Melanippe is not much later than
that year, perhaps much earlier[813] in view of the strongly didactic
manner.[814] Hartung refers to this play the splendid fragment:—
Probably it was Merope again who uttered the famous lines which
advise lament over the newly-born and a glad procession to
accompany the dead. The recognition-scene is singled out for
especial praise by Aristotle.[816]
The fragments of this tragedy include a perfect jewel of lyric
poetry, a prayer to Peace:—
Εἰρήνα βαθύπλουτε καὶ
καλλίστα μακάρων θεῶν,
ζῆλός μοι σέθεν, ὡς χρονίζεις.
δέδοικα δὲ μὴ πρὶν πόνοις
ὑπερβάλῃ με γῆρας,
πρὶν σὰν χαρίεσσαν ὥραν προσιδεῖν
καὶ καλλιχόρους ἀοιδὰς
φιλοστεφάνους τε κώμους.
ἴθι μοι, πότνα, πόλιν.
τὰν δ’ ἐχθρὰν στάσιν εἶργ’ ἀπ’ οἴ-
κων τὰν μαινομέναν τ’ ἔριν
θηκτῷ τερπομέναν σιδάρῳ.
But the student must at his leisure explore the marvels of these
rock-pools left by the retiring ocean. One majestic passage[821] from
the Cretans shall suffice to close this survey. The lines are from a
march sung by the Curetes or priests of the Cretan Zeus, and show
that even in the Hellenic world the monastic spirit was not unknown:
—
Æsch.: And now, by Jove, I’ll not smash each phrase word
by word, but with heaven’s aid I’ll ruin your prologues with—a
little oil-flask.
Eur.: An oil-flask? You ... my prologues?
Æsch.: Just one little flask. You write so that anything will
fit into your iambics—a little fleece, a little flask, a little bag.
I’ll show you on the spot.
Eur.: Oh! you will?
Æsch.: Yes.
Dion.: Now you must recite something.
Eur.:
“And she told me that the lady was a daughter of Zeus! What! is
there some person called Zeus living beside the Nile? There’s one in
Heaven, to be sure, but that’s another story.” Such a translation
gives perhaps the intention of the words and colloquial rhythm of
the last sentence. Here is comedy, but that of Congreve, not of
Aristophanes. The distinction is important. Euripides is less comic
than witty. As we turn his pages we rarely laugh, but a thousand
times we break into the slight smile of intellectual enjoyment; one
delight in reading an Euripidean play—tragedy though it be—is the
same as that aroused by the work of Meredith. Euripides’ sense of
the ludicrous is a part of his restlessness in conception. Again and
again he startles us by placing at some tragic moment a little
episode which passes the pathetic and becomes absurd. When
Clytæmnestra and Achilles bring each other into awkward perplexity
over the espousal of Iphigenia the effect is amusing, and the
intervention of the old slave who puts his head out of the tent-door
must provoke a smile, even though we realize that he has misery
and death on his lips.[847] After Creusa has given her instructions for
the assassination of Ion, it is, though natural, yet quaint for the
prospective murderer to reply: “Now do you retire to your hotel”.
[848] In the Medea the whole episode of Ægeus, to which Aristotle
objected as “irrational,” is tinged with the grotesque. That the
horrible story of Medea’s revenge must hang upon a slow-witted
amiable person like Ægeus is natural to the topsy-turviness of life as
the dramatist saw it. In fact, just as Euripides on the linguistic side
practically invents the prose-drama, so in the strictly dramatic sphere
he invents tragicomedy. Nothing can induce him to keep tears and
laughter altogether apart. The world is not made like that, and he
studies facts, depicting the phases of great happenings not as they
“ought to be” but “as they are”. He would have read with amused
delight that quaint sentence of Dostoevsky: “All these choruses sing
about something very indefinite, for the most part about somebody’s
curse, but with a tinge of the higher humour”.[849] It is indeed
significant that sparkles of incidental mirth are (so far as a modern
student can tell) commonest in that most heartbreaking play
Orestes. One dialogue between Orestes and Menelaus, to take a
single passage, is a blaze of wit—it exemplifies every possible grade
of witticism, from the downright pun[850] to subtle varieties of
iambic rhythm. Perhaps the most light-hearted and entertaining
example[851] is provided by Helen who (of all casuists!) evolves a
theory of sin as a method of putting her tigerish niece into good
humour and so inducing her to perform for Helen an awkward task.
Even more skilful, but ghastly in its half-farcical horror, is the
dialogue between Orestes and the escaped Phrygian slave.
Later ages of Greek civilization looked upon Euripides as a mighty
leader of thought, a great voice expressing all the wisdom of their
fathers, all the pains and perplexities familiar to themselves. After
generations had passed it was easy to dwell upon one side only of
his genius, and for Plutarch or Stobæus to regard him as the poet of
sad wisdom:—
Amongst us one,
Who most has suffer’d, takes dejectedly
His seat upon the intellectual throne;
And all his store of sad experience he
Lays bare of wretched days![852]
“I will not cease to mingle the Graces with the Muses—the sweetest
of fellowships. When the Muses desert me, let me die; may the
flower-garlands never fail me.” The Graces and the Muses—such is
his better way of invoking Beauty and Truth, the two fixed stars of
his life-long allegiance.
CHAPTER VI
METRE AND RHYTHM IN GREEK TRAGEDY
§ I. Introduction
Poetry is illuminating utterance consisting of words the successive
sounds of which are arranged according to a recurrent pattern. The
soul of poetry is this illumination, its body this recurrent pattern of
sounds; and it is with the body that we are now to deal. At the
outset we must distinguish carefully between rhythm and metre.
Rhythm is the recurrence just mentioned—the structure; metre is
the gathering together of sounds into masses upon which rhythm
shall do its work. Metre, so to put it, makes the bricks, while rhythm
makes the arch.
Greek metre is based, not upon stress-accent,[858] but upon
quantity—the length of time needed for the pronunciation of a
syllable. In English the line
⏑ – ⏑ – ⏑ – ⏑ – ⏑ – ⏑ –
τι δ ου | γυναιξ|ι ταυτ|α πρωτ|α παντ|αχου.
When is a syllable long and when short? A few rules will settle all
but a minority. All syllables are long—
(i) Which contain a necessarily long vowel (η or ω), e.g. μη̄ν, τω̄ν.
(ii) Which contain a diphthong or iota subscript, e.g. ο̅ι̅νος,
α̅ι̅νο̅υ̅μεν, ρᾳ̅διως; save that the first syllable of ποιῶ and τοιοῦτος
(and their parts) is often short.
(iii) Which end with a double consonant (ζ, ξ, ψ), e.g. ο̄ζος, ε̄ξω,
ε̄ψαυσα.
(iv) Which have the circumflex accent, e.g. υμῖ̅ν, μῦ̅ς.
Most syllables are long the vowel of which is followed by two
consonants. But there is some difficulty about this very frequent
case. It can arise in three ways:—
(a) Both consonants may be in the same word as the vowel. Then
the syllable is long, save when the consonants are (i) a voiced stop
(β, γ, δ) followed by ρ; or (ii) a voiceless stop or spirant (κ, π, τ; θ,
φ, χ) followed by a liquid or nasal (λ, ρ, μ, ν)—in both of which
cases the syllable can be counted long or short at pleasure. Thus
ε̄σμεν, μο̄ρφη, ᾱνδρος; but the first syllables of ιδρις, τεκνον, ποτμος
are “doubtful”—they can be either long or short as suits the poet.
(b) One of the consonants may end its word and the other begin
the next. Such syllables are all long. Thus, τηκτο̄ς μολυβδος, ανδρε̄ς
σοφοι, although both these long syllables are “short by nature” (see
below).
(c) Both consonants may occur at the beginning of the second
word. If the vowel is naturally short, the syllable is almost always
short, though such scansions as σε̄ κτενω are occasionally found.
But if the second word begins with a double consonant or σ followed
by another consonant, the syllable is always long. Thus ο̄ ξενος, τῑ
ζητεις, ταυτᾱ σκοπουμεν.
A vowel, naturally short, when thus lengthened is said to be
“lengthened by position.”
The following types of syllable are always short:—
(i) Those containing a naturally short vowel (ε or ο) not
lengthened by position, e.g. ε̆κων, ο̆λος.
(ii) Final α of the third declension neuter singular (σωμᾰ), third
declension accusative singular (ελπιδᾰ, δρασαντᾰ), and all neuters
plural (τᾰ, σωματᾰ, τοιαυτᾰ).
(iii) Final ι (e.g. εστῐ, τῐ), save, of course, when it is part of a
diphthong.
(iv) The accusative -ας of the third declension (ανδρᾰς,
πονουντᾰς). But μουσᾱς (first declension). The quantity in both
cases is that of the corresponding nominative.
Hiatus is practically unknown. That is, a word ending in a vowel is
not to be followed by a word beginning with a vowel, unless one
vowel or the other disappears. Almost always it is the first vowel
which is thus cut off, the process being called “elision.” In verse one
would not write πάντα εἶπε, but πάντ’ εἶπε; not ἔτι εἶναι, but ἔτ’ εἶναι.
When the first vowel is long and the second short, the latter is cut
off by “prodelision,” a much rarer occurrence. Thus τούτῳ ἀνεῖπε
would become τούτῳ ’νεῖπε. Two long vowels, as in καλὴ ἡμέρα, are
not used together at all. But the rule as to hiatus does not normally
apply at the end of a verse; usually one can end a verse with an
unelided vowel and begin the next with a vowel. If in any metrical
scheme this liberty is not allowed, it is said that “synapheia[859]
prevails.”
We are now in a position to discuss the various metres to be found
in Greek Tragedy.
– – ⏑– ⏑ – ⏑ – – – ⏑ –
δησαι | βιᾳ | φαραγγ|ι προς | δυσχειμ|ερῳ (Prom.
Vinctus, 15).
– – ⏑ – – – ⏑ – ⏑– ⏑ –
ω τεκν|α Καδμ|ου του | παλαι | νεα | τροφη (Œd.
Tyr., 1).
Next, the lightness and variety is often greatly increased by the
use of “resolved”[860] (or broken-up) feet. Each long syllable being
regarded as equal to two “shorts,” it follows that the iambus can be
“resolved” into ⏑⏑⏑, the spondee into –⏑⏑, ⏑⏑– (and ⏑⏑⏑⏑, but this
last is not employed in iambics).
Of these three the tribrach (⏑⏑⏑) is much the most frequent. As it
corresponds to the iambus, it can occur in any place, save the sixth;
it is exceedingly rare in the fifth place:—
– – ⏑ ⏑ ⏑ – – ⏑ – – – ⏑ –
φαιδρωπ|ον εδιδ|ου τοισ|ιν Αιγ|ισθου | φιλοις
(Orestes, 894).
⏑ – ⏑ – ⏑ – ⏑ ⏑ ⏑ – – ⏑–
περιξ | εγω | καλυψ|α βοτρυ|ωδει | χλοῃ (Bacchæ,
12).
– – ⏑ – – ⏑ ⏑ ⏑ – – – ⏑ –
ου φασ|ι πρωτ|ον Δανα|ον Αιγ|υπτῳ | δικας
(Orestes, 872).
⏑ – ⏑ – – ⏑ ⏑ ⏑ – – – ⏑ –
λογους | ελισσ|ων οτι | καθιστ|αιη | νομους
(Ibid., 892).
– – ⏑ – ⏑ – ⏑ – ⏑⏑ – ⏑ –
δεσποιν|α γαρ | κατ οικ|ον Ερμ|ιονην | λεγω
(Androm., 804).
– – ⏑ ⏑ ⏑ – ⏑ ⏑ ⏑ – ⏑ – ⏑ –
λουτροισ|ιν αλοχ|ου περι|πεσων | πανυστ|ατοις
(Orestes, 367).
– ⏑ ⏑ ⏑ – – ⏑ ⏑ ⏑ – ⏑ – ⏑ –
μητερα | το σωφρ|ον τ ελαβ|εν αντ|ι συμφ|ορας
(Ibid., 502).
⏑ ⏑ – ⏑ ⏑ ⏑ – ⏑ ⏑ ⏑ – ⏑ – ⏑ –
αναδελφ|ος απατ|ωρ αφιλ|ος ει | δε σοι | δοκει
(Ibid., 310).
Two licenses should be noted. The last syllable of the line may be
short; no doubt the pause[861] at the end was felt to help it out.
Lines of this kind are innumerable, e.g.:—
⏑⏑
Κρατος Βια τε σφῳν μεν εντολη | Διος (Prom.
Vinctus, 12)
– ⏑ – – ⏑ ⏑ ⏑ – ⏑ – ⏑ –
ως μ̅η̅ ̅ε̅ι̅δ̅|οθ ητ|ις μ ετεκ|εν εξ | οτου τ|
εφυν (Ion, 313).
– – ⏑ – – – ⏑ – ⏑ – ⏑ ⏑
σφαζ αιμ|ατου | θεας βωμ|ον η | μετεισ|ι σε
(Andromache, 260).
⏑ – ⏑ – – – ⏑ – – – ⏑ ⏑
απανθ | ο μακρ|ος ‖ καν|αριθμ|ητος | χρονος (Ajax,
646).
– – ⏑ – ⏑ – ⏑ – – – ⏑ ⏑
προς τησδ|ε της | γυναικ|ος ‖ οικτ|ειρω | δε νιν
(Ibid., 652).
– – ⏑ – – – ⏑ – – – ⏑ ⏑
φρουρας | ετει|ας ‖ μηκ|ος ‖ ην | κοιμωμ|ενος