Concepts of Concurrent Programming by David W. Bustard
Concepts of
Concurrent
Programming
David W. Bustard
University of Ulster
April 1990
The module is divided into three sections. The first deals with basic
concepts in concurrent programming, covering characteristic
attributes, formal properties, standard design problems, and execution
details. The second section discusses the steps in constructing
concurrent programs from specification to coding. The final section
briefly examines concurrency from the point of view of some common
application areas.
Acknowledgements
I am very grateful to Gary Ford for his guidance and encouragement
through every stage in the production of this module. Linda Pesante
also earns my sincere thanks for her infectious enthusiasm and her
ability to turn apparently neat phrases into much neater ones. I am
also indebted to Karola Fuchs, Sheila Rosenthal, and Andy Shimp, who
provided an excellent library service that managed to be both efficient
and friendly.
Finally, I would like to thank Norm Gibbs for the opportunity to produce
this module and for his concern that the experience should be enjoyable.
It was!
Author’s Address
Comments on this module are solicited, and may be sent to the SEI
Software Engineering Curriculum Project or to the author:
David W. Bustard
Department of Computing Science
University of Ulster
Coleraine BT52 1SA
Northern Ireland
3. Common Applications
3.1. Real-Time Systems
3.2. General-Purpose Operating Systems
3.3. Simulation Systems
3.4. Database Systems
1.1.5. Nondeterminism
A sequential program imposes a total ordering on the actions it
specifies. A concurrent program imposes a partial ordering, which
means that there is uncertainty over the precise order of occurrence of
some events; this property is referred to as nondeterminism. A
consequence of nondeterminism is that when a concurrent program
is executed repeatedly it may take different execution paths even
when operating on the same input data.
The classification of a program as deterministic or nondeterministic
depends on which of its actions are considered significant. Most
programs are nondeterministic when viewed at a low enough level in
their execution, but it is generally external behavior that dictates the
overall classification.
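The point can be sketched in a short program (Python is used here purely for illustration; the module itself is language-neutral). Two threads append events to a shared list: each run yields one of the many total orderings consistent with the two partial orders, yet the collection of events produced is always the same.

```python
# Illustrative sketch of nondeterminism: two threads append to a shared
# list; the interleaving may differ between runs, but the multiset of
# events is fixed by the program and its input.
import threading

events = []
lock = threading.Lock()

def worker(name, count):
    for i in range(count):
        with lock:                 # each append is atomic; the order is not
            events.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

# The observed total ordering in `events` is one of many valid
# interleavings of the partial orders (A,0)<(A,1)<(A,2) and
# (B,0)<(B,1)<(B,2); sorting recovers the run-independent content.
print(sorted(events))
```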
1.2.2. Deadlock
A process is said to be in a state of deadlock if it is waiting for an
event that will not occur. Deadlock usually involves several
processes and may lead to the termination of the program. A
deadlock can occur when processes communicate (e.g., two processes
attempt to send messages to each other simultaneously and
synchronously) but is a problem more frequently associated with
resource management. In this context there are four necessary
conditions for a deadlock to exist [Coffman71]:
1. Processes must claim exclusive access to resources.
2. Processes must hold some resources while waiting for others (i.e.,
acquire resources in a piecemeal fashion).
3. Resources may not be removed from waiting processes (no
preemption).
4. A circular chain of processes must exist, in which each process
holds a resource wanted by the next (circular wait).
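A standard way to prevent deadlock is to negate one of the necessary conditions of [Coffman71]. The sketch below (illustrative Python, not part of the original module) negates circular wait by imposing a single global acquisition order on resources, so no cycle of waiting processes can form.

```python
# Illustrative sketch: two processes each need resources r1 and r2.
# Because both acquire them in the same agreed order (r1 before r2),
# a waiting cycle r1 -> r2 -> r1 is impossible and deadlock cannot occur.
import threading

r1, r2 = threading.Lock(), threading.Lock()
log = []

def transfer(name):
    with r1:          # always first in the global order
        with r2:      # always second
            log.append(name)

threads = [threading.Thread(target=transfer, args=(n,)) for n in ("P1", "P2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(log))    # both processes complete
```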
1.2.4. Unfairness
It is generally (but not universally) believed that where competition
exists among processes of equal status in a concurrent program, some
attempt should be made to ensure that the processes concerned make
even progress; that is, to ensure that there is no obvious unfairness
when meeting the needs of those processes. Fairness in a concurrent
system can be considered at both the design and system
implementation levels. For the designer, it is simply a guideline to
observe when developing a program; any neglect of fairness may
lead to indefinite postponement, leaving the program incorrect.
For a system implementor it is again a guideline. Most concurrent
programming languages do not address fairness. Instead, the issue
is left in the hands of the compiler writers and the developers of the
run-time support software.
Generally, when the same choice of action is offered repeatedly in a
concurrent program it must not be possible for any particular action to
be ignored indefinitely. This is a weak condition for fairness. A
stronger condition is that when an open choice of action is offered,
any selection should be equally likely.
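The weak condition can be made concrete (the Python fragment below is an illustrative sketch, not part of the module): a scheduler that rotates its starting point each time the same choice of actions is offered guarantees that no action is ignored indefinitely.

```python
# Illustrative sketch of weak fairness: when the same choice is offered
# repeatedly, round-robin rotation ensures every action is eventually
# selected.  (Strong fairness would instead require each open choice to
# be equally likely.)
from collections import Counter

actions = ["a", "b", "c"]
executed = Counter()
start = 0

for _ in range(9):                  # nine scheduling decisions
    choice = actions[start]         # take the next action in rotation
    executed[choice] += 1
    start = (start + 1) % len(actions)

# No action is postponed indefinitely; progress is even.
print(dict(executed))
```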
1.3.1. Safety
Safety properties assert what a program is allowed to do, or
equivalently, what it may not do. Examples include:
• Mutual exclusion: no more than one process is ever present in a
critical region.
• No deadlock: no process is ever delayed awaiting an event that
cannot occur.
• Partial correctness: if a program terminates, the output is what is
required.
A safety property is expressed as an invariant of a computation; this is
a condition that is true at all points in the execution of a program.
Safety properties are proved by induction. That is, the invariant is
shown to hold true for the initial state of the computation and for every
transition between states of the computation.
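An invariant of this kind can also be monitored at run time, as the following sketch shows (illustrative Python; the variable names are invented for the example). Each process increments a counter on entering the critical region, and the mutual-exclusion invariant, that at most one process is ever inside, is tested at every entry.

```python
# Illustrative sketch: the mutual-exclusion safety property expressed as
# an invariant (inside <= 1) and checked at every state change observed
# during execution.
import threading

lock = threading.Lock()
inside = 0          # number of processes currently in the critical region
violations = 0      # count of observed invariant violations

def process():
    global inside, violations
    for _ in range(1000):
        with lock:
            inside += 1
            if inside > 1:          # the invariant: at most one inside
                violations += 1
            inside -= 1

threads = [threading.Thread(target=process) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(violations)   # the invariant held in every observed state
```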
1.3.2. Liveness
Liveness (or progress [Chandy88]) properties assert what a program
must do; they state what will happen (eventually) in a computation.
Examples include:
• Fairness (weak): a process that can execute will be executed.
• Reliable communication: a message sent by one process to
another will be received.
• Total correctness: a program terminates and the output is what is
required.
Liveness properties are expressed as a set of liveness axioms, and the
properties are proved by verifying these axioms. Safety properties can
be proved separately from liveness properties, but proofs of liveness
generally build on safety proofs.
2. Program Construction
Most aspects of concurrent program construction are covered by other
curriculum modules. This section provides an introduction to that material
and adds supplementary information where appropriate.
2.4.1.2. Semaphores
A semaphore can be regarded as a high-level abstraction for the
status variable mechanism described in Section 2.4.1.1. Entry to
and exit from a critical region is controlled by P and V
operations, respectively. The notation was proposed by Dijkstra
[Dijkstra68], and the operations can be read as “wait if necessary”
and “signal” (the letters actually represent Dutch words meaning
pass and release). Some semaphores are defined to give access to
competing processes in arrival order. The original definition,
however, does not stipulate an order, and even some definitions
appearing recently [BSI89] leave the order unspecified. The less strict definition
gives greater flexibility to the implementor but forces the program
designer to find other means of managing queues of waiting
processes.
Semaphores are the most commonly used mechanism for
controlling mutual exclusion. They are, however, insecure and also
too restrictive, because their values cannot be inspected directly
[Ben-Ari82].
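The behavior of P and V can be sketched with a counting semaphore initialized to one and used as a mutual-exclusion lock (illustrative Python; note that Python's threading.Semaphore, like the original definition, does not document an arrival-order guarantee for waiting threads).

```python
# Illustrative sketch: a semaphore guarding a critical region.
# P ("wait if necessary") corresponds to acquire(); V ("signal")
# corresponds to release().
import threading

s = threading.Semaphore(1)   # initial value 1 => at most one process inside
counter = 0

def worker():
    global counter
    for _ in range(10000):
        s.acquire()          # P operation: wait if necessary
        counter += 1         # critical region
        s.release()          # V operation: signal

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)               # all increments survive: no lost updates
```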
2.4.1.4. Monitors
The monitor concept was proposed by Brinch Hansen [Brinch
Hansen77] and Hoare [Hoare74]. A monitor improves on the
conditional critical region construct by combining regions that
Glossary
Each definition given in the glossary has a reference to the section(s) in
the outline where the term appears. Definitions of other terms and, in
some cases, alternative interpretations of the terms defined below may
be found in [IEEE87].
agenda parallelism
a design technique for parallel programs that introduces
parallelism in the sequence of steps in a computation [2.3].
array processor
a set of identical processing elements synchronized to perform the
same instruction simultaneously on different data [1.4.2].
asleep
a process state in which the process is suspended awaiting a
particular event [1.4.4].
asynchronous communication
communication among processes that is achieved by the sender of
the information leaving it in a buffer for the receiver to collect
[1.1.6.2].
awake
a process state in which the process is able to execute [1.4.4].
blocked
a synonym for asleep.
busy waiting
the action of a process that executes a loop awaiting a change of
program state [1.2.5].
close coupling
a point-to-point interprocessor link between computers [1.4.2].
communication channel
a logical link established between pairs of processes to enable them to
communicate [2.4.2].
concurrent program
(1) a program specifying two or more sequential programs [1.1.2];
(2) a program specifying actions that may be performed simultane-
ously [1.1.3].
concurrent system
a set of programs communicating through an agreed protocol [1.1.4].
critical region
a section of code that performs an operation that must not be
executed by more than one process at a time [1.2.1].
critical section
a synonym for critical region.
dataflow machine
a set of processing elements that are triggered to produce a result by
the presence of the required operands [1.4.2].
deadlock
a program state in which a process is delayed awaiting an event that
will not occur [1.2.2].
degree of granularity
in databases, the size of the data item on which locks are imposed
[3.4].
distributed processing
the sharing of the processors of a multicomputer among a set of
competing processes [1.4.2].
distributed program
a parallel program designed to be executed on a network of
autonomous processors that do not share main memory [1.1.3].
distributed system
a parallel system executed on a network of autonomous processors
that do not share main memory [1.1.4].
embedded system
a hardware/software system with a real-time control program as a
component [3.1].
event-driven program
a synonym for reactive program.
explicit interaction
the interaction among processes of a concurrent program that is
specified explicitly [1.1.6].
fairness
(1) (weak) a mechanism for ensuring that when a choice among
possible actions is made repeatedly, no action is ignored
indefinitely (i.e., there is no indefinite postponement present)
[1.2.4, 1.3.2].
(2) (strong) a mechanism for ensuring that when a choice among
possible actions is made, each action has an equal probability of
selection [1.2.4, 1.4.5].
fork-and-join concurrency
a model of process creation and termination in which a sequential
computation divides into parallel threads of execution that later
recombine [1.4.3].
grain of concurrency
the mean computation time between communications during the
execution of a concurrent program [1.4.1].
implicit concurrency
the concurrency present in a program that is designed to be
sequential but which includes actions that can be executed in
parallel [1.1.1].
implicit interaction
the interaction among processes of a concurrent program that
occurs implicitly as a consequence of sharing resources needed for
execution [1.1.6].
indefinite postponement
a program state in which a process is delayed awaiting an event that
may never occur [1.2.3].
induction
a method of proof in which a property is shown to hold for an initial
state of a computation and for each change of state within that
computation [1.3.1].
invariant
a condition that is true at all points in a computation [1.3.1].
kernel
(1) that portion of an operating system that is kept in main memory
at all times;
(2) a software module that encapsulates an elementary function or
functions of a system [1.4.5].
level of concurrency
the mean number of active processes present during the execution of
a program [1.4.1].
liveness
a program property that holds at some point in a computation [1.3.2].
lock
in databases, a mechanism for controlling read and write access to
data items; several processes may read a data item simultaneously
but only one at a time may modify it [3.4].
lockout
a synonym for indefinite postponement.
loose coupling
a network connection between computers [1.4.2].
message passing
a mechanism for enabling one process to make information
available to others by directing it to the processes concerned [1.1.6].
multicomputer
a computer with several processors, each of which has private main
memory [1.4.2].
multiprocessing
the sharing of the processors of a multiprocessor among a set of
competing processes [1.4.2].
multiprocessor
a computer with several processors sharing a common main
memory [1.4.2].
multiprogramming
the sharing of a single processor among a set of competing processes
[1.4.2].
multitasking
a synonym for multiprogramming.
mutual exclusion
a mechanism for ensuring that only one process at a time performs
a specified action [1.2.1].
nondeterminism
a property of a program in which there is a partial ordering,
rather than a total ordering, on the actions it specifies [1.1.5].
nucleus
a synonym for kernel.
optimistic scheduling
a mechanism for controlling concurrent access to a database; free
access is permitted and any transaction that causes conflict is rolled
back; the database is modified through a two-phase commit
procedure [3.4].
parallel program
a concurrent program designed for execution on parallel hardware
[1.1.3].
performance-oriented concurrency
the concurrency present in a program constructed to take advantage
of hardware supporting parallel processing [Preface].
Petri net
an abstract formal model of information flow, showing static and
dynamic properties of a system. A Petri net is usually represented
as a graph having two types of node (called places and transitions)
connected by arcs, and markings (called tokens) indicating dynamic
properties [2.2.2].
pipeline processor
a set of processing elements dedicated to performing the separate
low-level steps of an arithmetic operation in sequence [1.4.2].
process
(1) the execution of a sequential program [1.1.2];
(2) the name of a program construct used to describe the behavior of
a process [1.1.2].
pseudoparallel program
a concurrent program designed for execution by a single processor
[1.1.3].
quasiparallel program
a pseudoparallel program in which processes execute cooperatively
by transferring control to each other using a coroutine mechanism
[1.1.3].
reactive program
a program that carries out operations in response to the input it
receives [3.1].
ready
a process state in which the process is awake but unable to proceed
until it is assigned a processor [1.4.4].
reduction machine
a set of processing elements, each of which is triggered to obtain
operands by a request for a result [1.4.2].
resource
(1) a facility in a computing system;
(2) in the conditional critical region mechanism, a construct in
which the variables referenced in a critical region are identified
[2.4.1.3].
result parallelism
a design technique for parallel programs that introduces
parallelism in the construction of the data structure produced by a
program [2.3].
roll-back
the mechanism for undoing a partial change to a database to restore
it to a consistent state [3.4].
running
a process state in which the process is awake and has been assigned
a processor [1.4.4].
safety
a program property that holds at every point in a computation [1.3.1].
scale of concurrency
the mean duration of processes in the execution of a program [1.4.1].
scheduler
the software responsible for administering the allocation of
processors to processes during the execution of a program [1.4.1].
semaphore
a variable used to control access to a critical region [2.4.1.2].
sequential program
a program specifying statements that are intended to be executed in
sequence [1.1.2].
shared resource
a facility of a concurrent program that is accessible to two or more
processes of that program [1.1.6.1].
specialist parallelism
a design technique for parallel programs that introduces
parallelism at the level of autonomous program components [2.3].
spin-lock
a state variable on which busy waiting is performed.
starvation
a synonym for indefinite postponement.
state formula
a predicate evaluated for a particular state of a computation [2.2.1].
synchronization
(1) the control of the execution of two or more interacting processes
so that they perform the same operation simultaneously;
(2) the control of the execution of two or more interacting processes
so that they exclude each other from performing the same
operation simultaneously [1.1.6].
synchronous communication
message passing between processes achieved by synchronizing their
execution to perform the transfer of information from one to another
[1.1.6.2].
task
the name of a program construct used to describe the behavior of a
process [1.1.2].
temporal operator
a qualification of the range over which an assertion about the state of
a computation applies [2.2.1].
terminated
a process state in which the execution of the process is complete
[1.4.4].
time-dependent error
see transient error.
time slicing
a mode of concurrent program execution in which ready processes of
equal priority are allowed to execute in rotation for a small, fixed
period of processing time [1.4.5].
trace
a sequence of states and associated events in a computation [2.2.1].
transaction
the set of actions performed on a set of data items, transforming a
database from one consistent state to another [3.4].
transient error
an error that may not be repeatable because of nondeterministic
program behavior [1.2.6].
two-phase commit
a database transaction mechanism in which the transaction
changes are first determined and then committed if they are not in
conflict with any other transaction that has completed [3.4].
unit of concurrency
the language component on which program behavior is defined
[1.4.1].
vector processor
a pipeline processor that can execute the same instruction on a
vector of operands [1.4.2].
Using the Outline
This module deals with fundamental material that should be covered in
total, regardless of where it is used. The presentation time will vary
according to the type of course involved, the depth of coverage required,
3. Language-Specific Course
4. Application-Specific Course
Resources
There is no single textbook that covers all of the material outlined in this
module. Most textbooks have a particular focus, such as an application
area, a programming language, the formal expression of program
properties, machine architectures, software analysis and design, and so
on. References to textbooks with such emphases may be found in the
body of the outline, with some further detail presented in the
bibliography.
Andrews83
Andrews, G. R., and Schneider, F. B. “Concepts and Notations for
Concurrent Programming.” ACM Computing Surveys 15, 1 (Mar.
1983), 3-43. Reprinted in [Gehani88].
Abstract: Much has been learned in the last decade about concurrent
programming. This paper identifies the major concepts of concurrent
programming and describes some of the more important language notations
for writing concurrent programs. The roles of processes, communication
and synchronization are discussed. Language notations for expressing
concurrent execution and for specifying process interaction are surveyed.
Synchronization primitives based on shared variables and on message
passing are described. Finally, three general classes of concurrent
programming languages are identified and compared.
Bal89
Bal, H. E., Steiner, J. G., and Tanenbaum, A. S. “Programming
Languages for Distributed Computing Systems.” ACM Computing
Surveys 21, 3 (Sept. 1989), 261-322.
This is a very useful reference document. Note that the paper deals with
concurrency in general, despite its title. The only area not covered is
shared memory systems, details of which may be found in [Andrews83].
Bamberger89
Bamberger, J., Colket, C., Firth, R., Klein, D., and Van Scoy, V.
Kernel Facilities Definition (Version 3.0). Technical Report
CMU/SEI-88-TR-16, Software Engineering Institute, Carnegie
Mellon University, Pittsburgh, Pa., Dec. 1989.
Table of Contents:
1. Kernel Background: rationale, definitions, kernel functional areas; 2.
Requirements: general, processor, process, semaphore, scheduling,
communication, interrupt, time, alarm, tool; 3. Kernel Primitives (matching
requirements).
Ben-Ari82
Ben-Ari, M. Principles of Concurrent Programming. Prentice-Hall
International, 1982.
Table of Contents:
1. What is Concurrent Programming? (17); 2. The Concurrent
Programming Abstraction (11); 3. The Mutual Exclusion Problem (21);
4. Semaphores (23); 5. Monitors (20); 6. The Ada Rendezvous (16); 7. The
Dining Philosophers (10).
Ben-Ari90
Ben-Ari, M. Principles of Concurrent & Distributed Programming.
Prentice-Hall International, 1990.
Berztiss87
Berztiss, A. Formal Specification of Software. Curriculum Module
SEI-CM-8-1.0, Software Engineering Institute, Carnegie Mellon
University, Pittsburgh, Pa., Oct. 1987.
Birtwistle79
Birtwistle, G. M. Discrete Event Modeling on SIMULA. London:
Macmillan, 1979.
Booch86
Booch, G. “Object-Oriented Development.” IEEE Trans. Software
Eng. SE-12, 2 (Feb. 1986), 211-221.
Brackett90
Brackett, J. W. Software Requirements. Curriculum Module SEI-
CM-19-1.2, Software Engineering Institute, Carnegie Mellon
University, Pittsburgh, Pa., Jan. 1990.
Brinch Hansen77
Brinch Hansen, P. The Architecture of Concurrent Programs.
Englewood Cliffs, N. J.: Prentice-Hall, 1977.
Brinksma88
Brinksma, E., ed. Information Processing Systems - OSI - LOTOS - A
Formal Technique Based on Temporal Ordering of Observational
Behavior. Standard ISO IS 8807, 1988.
BSI89
BSI. Draft British Standard for Programming Language Modula 2:
Third Working Draft. Standard ISO/IEC DP 10514, British Standards
Institute, Nov. 1989.
Burns85
Burns, A. Concurrent Programming in Ada. Cambridge:
Cambridge University Press, 1985.
Table of Contents:
1. The Ada Language (16); 2. The Nature and Uses of Concurrent
Programming (10); 3. Inter-Process Communication (22); 4. Ada Task
Types and Objects (12); 5. Ada Inter-Task Communication (16); 6. The
Select Statement (22); 7. Task Termination, Exceptions and Attributes (12);
8. Tasks and Packages (14); 9. Access Types for Tasks (10); 10. Resource
Management (20); 11. Task Scheduling (12); 12. Low-Level Programming
(16); 13. Implementation of Ada Tasking (16); 14. Portability (4); 15.
Programming Style for Ada Tasking (12); 16. Formal Specifications (8);
17. Conclusion (6).
Bustard88
Bustard, D. W., Elder, J. W. G., and Welsh, J. Concurrent Program
Structures. Prentice-Hall International, 1988.
Bustard90
Bustard, D. W. “An Experience of Teaching Concurrency: Looking
Forward, Looking Back.” CSEE '90 Fourth SEI Conference on
Software Engineering Education, Lionel Deimel, ed. Springer-
Verlag, Apr. 1990.
Carriero89
Carriero, N., and Gelernter, D. “How to Write Parallel Programs: A
Guide to the Perplexed.” ACM Computing Surveys 21, 3 (Sept. 1989),
323-357.
Chandy88
Chandy, K. M., and Misra, J. Parallel Program Design: A
Foundation. Addison-Wesley, 1988.
The book has the following stated goal: "The thesis of this book is that the
unity of the programming task transcends differences between the
Coffman71
Coffman, E. G., Elphick, M. J., and Shoshani, A. “System Deadlocks.”
ACM Computing Surveys 3, 2 (June 1971), 67-78.
Collofello88
Collofello, J. Introduction to Software Verification and Validation.
Curriculum Module SEI-CM-13-1.1, Software Engineering Institute,
Carnegie Mellon University, Pittsburgh, Pa., Dec. 1988.
Crichlow88
Crichlow, J. M. An Introduction to Distributed and Parallel
Computing. Prentice-Hall International, 1988.
Table of Contents:
1. Introduction (14); 2. Computer Organization for Parallel and Distributed
Computing (31); 3. Communications and Computer Networks (36); 4.
Operating Systems for Distributed and Parallel Computing (30); 5. Servers
in the Client-Server Network Model (24); 6. Distributed Database Systems
(30); 7. Parallel Programming Languages (25).
Date86
Date, C. J. An Introduction to Database Systems. Addison-Wesley,
1986. (2 volumes).
Deitel84
Deitel, H. M. An Introduction to Operating Systems. Reading, Mass.:
Addison-Wesley, 1984.
Table of Contents:
There are 22 chapters divided into 8 parts. The chapters most relevant to
concurrency are: 2. Process Concepts (20); 3. Asynchronous Concurrent
Processes (21); 4. Concurrent Programming: monitors, the Ada rendezvous
(24); 6. Deadlock (28); 10. Job and Processor Scheduling (22); 11.
Multiprocessing (32); 16. Network Operating Systems (30).
Dijkstra68
Dijkstra, E. W. “Cooperating Sequential Processes.” Programming
Languages, F. Genuys, ed. Academic Press, 1968, 43-112.
Dijkstra71
Dijkstra, E. W. “Hierarchical Ordering of Sequential Processes.”
Acta Informatica 1 (1971), 115-138.
This paper deals largely with operating system design but also examines
the general issue of mutual exclusion, with examples. It is also where the
classic problem of the Dining Philosophers was first presented.
Feldman90
Feldman, M. Language and System Support for Concurrent
Programming. Curriculum Module SEI-CM-25, Software
Engineering Institute, Carnegie Mellon University, Pittsburgh, Pa.,
Apr. 1990.
Galton87
Temporal Logics and Their Applications. Anthony Galton, ed.
Academic Press, 1987.
This book was derived from a conference on Temporal Logic and Its
Applications, held in 1986. The first chapter gives a broad introduction to
the use of temporal logic in computer science and the second deals with
the use of temporal logic in the specification of concurrent systems.
Gehani84
Gehani, N. Ada: Concurrent Programming. Englewood Cliffs:
Prentice-Hall, 1984.
Table of Contents:
1. Concurrent Programming: a quick survey (28); 2. Tasking Facilities (52);
3. Task Types (18); 4. Exceptions and Tasking (10); 5. Device Drivers (12);
6. Real-Time Programming (20); 7. Some Issues in Concurrent
Programming (20); 8. More Examples (20); 9. Some Concluding Remarks
(6).
Gehani88
Concurrent Programming. N. Gehani and A. D. McGettrick, eds.
Addison-Wesley, 1988.
Table of Contents:
Organized into five sections, each of which is introduced briefly. There is no
index. The summary that follows shows the number of papers in each
section and the total page length involved.
1. Survey of Concurrent Programming (1:70); 2. Concurrent Programming
Languages (4:90); 3. Concurrent Programming Models (8:188); 4.
Assessment of Concurrent Programming Languages (9:216); 5. Concurrent
Programming Issues (2:21).
This book brings together a collection of papers that deal mostly with
language issues in concurrent programming.
Table of Contents:
1. Basics (32); 2. Advanced Facilities (46); 3. Run-time Environment (14); 4.
Large Examples (40); 5. Concurrent C++ (20); 6. Concurrent Programming
Models (14); 7. Concurrent Programming Issues (22); 8. Discrete Event
Simulation (22); Appendix: Comparison with Ada (and other topics).
Gelernter88
Gelernter, D. “Parallel Programming: Experiences with Applica-
tions, Languages and Systems.” Sigplan Notices 23, 9 (Sept. 1988).
ACM/Sigplan PPEALS.
Gomaa87
Gomaa, H. “Using the DARTS Software Design Method for Real-
Time Systems.” Proc. 12th Structured Methods Conf. Chicago:
Structured Techniques Association, Aug. 1987, 76-90.
This paper describes how DARTS can be used in conjunction with Real-
Time Structured Analysis [Ward85].
Gomaa89
Gomaa, H. Software Design Methods for Real-Time Systems.
Curriculum Module SEI-CM-22-1.0, Software Engineering Institute,
Carnegie Mellon University, Pittsburgh, Pa., Dec. 1989.
Abstract: This module describes the concepts and methods used in the
software design of real-time systems. It outlines the characteristics of real-
time systems, describes the role of software design in real-time system
development, surveys and compares some software design methods for
real-time systems, and outlines techniques for the verification and validation
of real-time designs. For each design method treated, its emphasis, concepts
on which it is based, steps used in its application, and an assessment of the
methods are provided.
Harel88
Harel, D. “On Visual Formalisms.” Comm. ACM 31, 5 (May 1988),
514-530.
Hoare74
Hoare, C. A. R. “Monitors: An Operating System Structuring
Concept.” Comm. ACM 17, 10 (1974), 549-557. Reprinted in
[Gehani88].
This paper introduces the monitor concept and demonstrates its use in a
series of examples.
Hoare78
Hoare, C. A. R. “Communicating Sequential Processes.” Comm.
ACM 21, 8 (Aug. 1978), 666-677. Reprinted in [Gehani88].
Hoare85
Hoare, C. A. R. Communicating Sequential Processes. Prentice-
Hall International, 1985.
Table of Contents:
1. Processes (42); 2. Concurrency (36); 3. Nondeterminism (32); 4.
Communication (38); 5. Sequential Processes (26); 6. Shared Resources
(26); 7. Discussion (28).
Holt83
Holt, R. C. Concurrent Euclid, The UNIX System and TUNIS.
Reading, Mass.: Addison-Wesley, 1983.
Table of Contents:
1. Concurrent Programming and Operating Systems (16); 2. Concurrency
Hughes88
Hughes, J. G. Database Technology: A Software Engineering
Approach. Prentice-Hall International, 1988.
Hwang84
Hwang, K., and Briggs, F. A. Computer Architecture and Parallel
Processing. McGraw-Hill, 1984.
Table of Contents:
1. Introduction to Parallel Processing (51); 2. Memory and Input-Output
Subsystems (93); 3. Principles of Pipelining and Vector Processing (88); 4.
Pipeline Computers and Vectorization Methods (92); 5. Structures and
Algorithms for Array Processors (68); 6. SIMD Computers and
Performance Enhancement (66); 7. Multiprocessor Architecture and
Programming (98); 8. Multiprocessor Control and Algorithms (86); 9.
Example Multiprocessor Systems (89); 10. Data Flow Computers and VLSI
Computations (81).
IEEE83
IEEE, IEEE Standard Glossary of Software Engineering
Terminology. ANSI/IEEE Std 729-1983, 1983.
INMOS88
occam 2 Reference Manual. Prentice-Hall International, 1988.
Joseph84
Joseph, M., Prasad, V. R., and Natarajan, N. A Multiprocessor
Operating System. Prentice-Hall International, 1984.
Kreutzer86
Kreutzer, W. System Simulation Programming Styles and
Languages. Addison-Wesley, 1986.
Kuhn81
Tutorial on Parallel Processing. R. H. Kuhn and D. A. Padua, eds.
IEEE Computer Society Press, 1981.
Lamport83
Lamport, L. “Specifying Concurrent Program Modules.” ACM
Trans. Prog. Lang. Syst. 5, 2 (Apr. 1983), 190-222.
Lamport89
Lamport, L. “A Simple Approach to Specifying Concurrent
Systems.” Comm. ACM 32, 1 (Jan. 1989), 32-45.
Abstract: Over the past few years, I have developed an approach to the
formal specification of concurrent systems that I now call the transition
axiom method. The basic formalism is described in [Lamport83], but the
formal details tend to obscure the important concepts. Here, I attempt to
explain these concepts without discussing the details of the underlying
formalism.
Lister84
Lister, A. M. Fundamentals of Operating Systems (3rd Edition).
Springer-Verlag, 1984.
McDowell89
McDowell, C. E., and Helmbold, D. P. “Debugging Concurrent
Programs.” ACM Computing Surveys 21, 4 (Dec. 1989), 593-622.
Milner89
Milner, R. Communication and Concurrency. Prentice-Hall Inter-
national, 1989.
Morell89
Morell, L. J. Unit Testing and Analysis. Curriculum Module SEI-
CM-9-1.2, Software Engineering Institute, Carnegie Mellon
University, Pittsburgh, Pa., Apr. 1989.
Parnas84
Parnas, D., Clements, P., and Weiss, D. “The Modular Structure of
Complex Systems.” Proc. 7th Intl. Conf. Software. Eng., Long Beach,
California. IEEE Computer Society, 1984, 408-416.
Perrott88
Perrott, R. H. Parallel Programming. Addison-Wesley, 1988.
Table of Contents:
HISTORY AND DEVELOPMENT 1. Hardware Technology Developments
(13); 2. Software Technology Developments (9). ASYNCHRONOUS
PARALLEL PROGRAMMING 3. Mutual Exclusion (12); 4. Process
Synchronization (11); 5. Message Passing Primitives (13); 6. Modula-2
(13); 7. Pascal Plus (15); 8. Ada (14); 9. occam: a distributed computing
language (16). SYNCHRONOUS PARALLEL PROGRAMMING Detection
of Parallelism Languages: 10. Cray-1 FORTRAN Translator (20); 11. CDC
Cyber Fortran (13); Expression of Machine Parallelism Languages: 12.
Illiac4 CDF FORTRAN (13); 13. Distributed Array Processor FORTRAN
(13). Expression of Problem Parallelism Languages: 14. ACTUS: a Pascal
based language (24). DATA FLOW PROGRAMMING 15. Data Flow
Programming (21).
Peterson81
Peterson, J. Petri Net Theory and the Modeling of Systems.
Englewood Cliffs, N. J.: Prentice-Hall, 1981.
Pnueli86
Pnueli, A. “Applications of Temporal Logic to the Specification and
Verification of Reactive Systems: A Survey of Recent Trends.”
Current Trends in Concurrency, J. W. de Bakker et al, ed. New York:
Springer-Verlag, 1986, 510-584.
This long paper summarizes the work by Pnueli and others relating to
the specification and verification of reactive systems using temporal
logic. The paper is organized in four parts. Temporal logic is introduced
in the second part, the first dealing with various abstract and concrete
computational models.
Polychronopoulos88
Polychronopoulos, C. D. Parallel Programming and Compilers.
Boston: Kluwer Academic Publishers, 1988.
Table of Contents:
1. Parallel Architectures and Compilers (14); 2. Program Restructuring for
Parallel Execution (67); 3. A Comprehensive Environment for Automatic
Raynal86
Raynal, M. Algorithms for Mutual Exclusion. Cambridge, Mass.:
The MIT Press, 1986.
Table of Contents:
1. The Nature of Control Problems in Parallel Processing (16); 2. The
Mutual Exclusion Problem in a Centralized Framework: Software
Solutions (22); 3. The Mutual Exclusion Problem in a Centralized
Framework: Hardware Solutions (10); 4. The Mutual Exclusion Problem
in a Distributed Framework: Solutions Based on State Variables (16); 5.
The Mutual Exclusion Problem in a Distributed Framework: Solutions
Based on Message Communication (22); 6. Two Further Control Problems
(12).
Reisig85
Reisig, W. Petri Nets: An Introduction. Springer-Verlag, 1985.
Rombach90
Rombach, H. D. Software Specifications: A Framework.
Curriculum Module SEI-CM-11-2.1, Software Engineering Institute,
Carnegie Mellon University, Pittsburgh, Pa., Jan. 1990.
Schiper89
Schiper, A. Concurrent Programming. Halsted Press; John Wiley &
Sons Inc., 1989.
Table of Contents:
1. Introduction (7); 2. Input/Output and Interrupts (9); 3. The Process
Concept (10); 4. Mutual Exclusion (18); 5. Cooperation Between Processes
(18); 6. Portal and Monitors (20); 7. Modula-2 and Kernels (23); 8. Ada
and Rendezvous (22); 9. An Example of Designing a Concurrent Program
(24).
Sutcliffe88
Sutcliffe, A. Jackson System Development. Prentice-Hall
International, 1988.
Swartout86
Swartout, W., and Balzer, R. “On the Inevitable Intertwining of
Specification and Implementation.” Software Specification
Techniques, Narain Gehani and Andrew D. McGettrick, eds. Addison-
Wesley, 1986, 41-45.
Tanenbaum85
Tanenbaum, A. S., and van Renesse, R. “Distributed Operating
Systems.” ACM Computing Surveys 17, 4 (Dec. 1985), 419-470.
Treleaven82
Treleaven, P. C., Brownbridge, D. R., and Hopkins, R. P. “Data-
Driven and Demand-Driven Computer Architectures.” ACM
Computing Surveys 14, 1 (Mar. 1982), 93-143.
Ullman82
Ullman, J. D. Principles of Database Systems, 2nd Edition. London:
Pitman, 1982.
Ward85
Ward, P. T., and Mellor, S. J. Structured Development for Real-Time
Systems. Yourdon Press, 1985. (three volumes).
Table of Contents:
Three volumes: 1. Introduction & Tools; 2. Essential Modelling Techniques;
3. Implementation Modelling Techniques.
Concurrency emerged as the common theme across all four papers in this
special issue. Comments on the first two papers are given separately in
[Bal89; Carriero89].
Whiddett87
Whiddett, D. Concurrent Programming for Software Engineers.
Chichester: Ellis Horwood, 1987.
Table of Contents:
THE BASICS 1. The Concept of a Process (31); 2. Process Coordination
(35). THE MODELS 3. Procedure Based Interaction: monitors (24); 4.
Message Based Interaction (26); 5. Operation Oriented Programs (25); 6.
Comparison of Methods (11). THE PRAGMATICS 7. Interfacing to
Peripheral Devices (26); 8. Programming Distributed Systems (24).
Wood89
Wood, D. P., and Wood, W. G. Comparative Evaluations of Four
Specification Methods for Real-Time Systems. Technical Report
CMU/SEI-89-TR-36, Software Engineering Institute, Carnegie
Mellon University, Pittsburgh, Pa., Dec. 1989.
Yourdon89
Yourdon, E. Modern Structured Analysis. Prentice-Hall, 1989.