
Software Crisis:-

Many industry observers (including this author) have characterized the problems associated with
software development as a "crisis." More than a few books (e.g., [GLA97], [FLO97], [YOU98a])
have recounted the impact of some of the more spectacular software failures that have occurred over
the past decade. Yet, the great successes achieved by the software industry have led many to question
whether the term software crisis is still appropriate. Robert Glass, the author of a number of books on
software failures, is representative of those who have had a change of heart. He states [GLA98]: “I
look at my failure stories and see exception reporting, spectacular failures in the midst of many
successes, a cup that is [now] nearly full.” It is true that software people succeed more often than
they fail. It is also true that the software crisis predicted 30 years ago never seemed to materialize.
What we really have may be something rather different. The word crisis is defined in Webster's
Dictionary as “a turning point in the course of anything; decisive or crucial time, stage or event.”
Yet, in terms of overall software quality and the speed with which computer-based systems and
products are developed, there has been no "turning point," no "decisive time," only slow,
evolutionary change, punctuated by explosive technological changes in disciplines associated with
software. The word crisis has another definition: "the turning point in the course of a disease,
when it becomes clear whether the patient will live or die." This definition may give us
a clue about the real nature of the problems that have plagued software development.
What we really have might be better characterized as a chronic affliction. The
word affliction is defined as "anything causing pain or distress." But the definition of
the adjective chronic is the key to our argument: "lasting a long time or recurring
often; continuing indefinitely." It is far more accurate to describe the problems we
have endured in the software business as a chronic affliction than a crisis.
Regardless of what we call it, the set of problems that are encountered in the development
of computer software is not limited to software that "doesn't function properly." Rather, the affliction
encompasses problems associated with how we develop software, how we support a growing volume
of existing software, and how we can expect to keep pace with a growing demand for more software.
We live with this affliction to this day—in fact, the industry prospers in spite of it.
And yet, things would be much better if we could find and broadly apply a cure.

1.2.1 Software Characteristics


To gain an understanding of software (and ultimately an understanding of software
engineering), it is important to examine the characteristics of software that make it
different from other things that human beings build. When hardware is built, the
human creative process (analysis, design, construction, testing) is ultimately translated
into a physical form. If we build a new computer, our initial sketches, formal
design drawings, and breadboarded prototype evolve into a physical product (chips,
circuit boards, power supplies, etc.).
Software is a logical rather than a physical system element. Therefore, software
has characteristics that are considerably different than those of hardware:
1. Software is developed or engineered; it is not manufactured in the classical
sense.
Although some similarities exist between software development and hardware manufacture,
the two activities are fundamentally different. In both activities, high quality is achieved through good
design, but the manufacturing phase for hardware can
introduce quality problems that are nonexistent (or easily corrected) for software.
Both activities are dependent on people, but the relationship between people applied
and work accomplished is entirely different (see Chapter 7). Both activities require
the construction of a "product" but the approaches are different.
Software costs are concentrated in engineering. This means that software projects
cannot be managed as if they were manufacturing projects.
2. Software doesn't "wear out."
Figure 1.1 depicts failure rate as a function of time for hardware. The relationship,
often called the "bathtub curve," indicates that hardware exhibits relatively high failure
rates early in its life (these failures are often attributable to design or manufacturing
defects); defects are corrected and the failure rate drops to a steady-state level
(ideally, quite low) for some period of time. As time passes, however, the failure rate
rises again as hardware components suffer from the cumulative effects of dust, vibration,
abuse, temperature extremes, and many other environmental maladies. Stated
simply, the hardware begins to wear out.
Software is not susceptible to the environmental maladies that cause hardware to
wear out. In theory, therefore, the failure rate curve for software should take the form of
the “idealized curve” shown in Figure 1.2. Undiscovered defects will cause high failure
rates early in the life of a program. However, these are corrected (ideally, without introducing
other errors) and the curve flattens as shown. The idealized curve is a gross oversimplification
of actual failure models (see Chapter 8 for more information) for software.
However, the implication is clear—software doesn't wear out. But it does deteriorate!
This seeming contradiction can best be explained by considering the “actual curve”
shown in Figure 1.2. During its life, software will undergo change (maintenance). As changes are
made, it is likely that some new defects will be introduced, causing the failure rate
curve to spike. Before the curve can return to the original steady-state failure rate,
another change is requested, causing the curve to spike again. Slowly, the minimum
failure rate level begins to rise; the software is deteriorating due to change.
3. Although the industry is moving toward component-based assembly, most
software continues to be custom built.
In other engineering disciplines, a catalog of reusable standard components lets the
engineer concentrate on the truly innovative elements of a design, that is, the parts of the
design that represent something new. In the hardware world, component
reuse is a natural part of the engineering process. In the software world, it is something
that has only begun to be achieved on a broad scale.
A software component should be designed and implemented so that it can be
reused in many different programs. In the 1960s, we built scientific subroutine libraries
that were reusable in a broad array of engineering and scientific applications. These
subroutine libraries reused well-defined algorithms in an effective manner but had a
limited domain of application. Today, we have extended our view of reuse to encompass
not only algorithms but also data structures. Modern reusable components encapsulate
both data and the processing applied to the data, enabling the software engineer
to create new applications from reusable parts. For example, today's graphical user
interfaces are built using reusable components that enable the creation of graphics
windows, pull-down menus, and a wide variety of interaction mechanisms. The data
structure and processing detail required to build the interface are contained within a
library of reusable components for interface construction.
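As a concrete illustration, consider the following Python sketch. It is not taken from any particular GUI library; the PulldownMenu class and its methods are invented here purely to show how a reusable component encapsulates both a data structure and the processing applied to it:

```python
# A minimal, illustrative reusable component: it encapsulates both the data
# (the list of menu entries) and the processing applied to that data
# (rendering and selection), so any application can reuse it as-is.

class PulldownMenu:
    def __init__(self, title):
        self.title = title
        self._entries = []          # encapsulated data structure

    def add_entry(self, label, action):
        """Register a menu entry and the callable invoked when it is chosen."""
        self._entries.append((label, action))

    def render(self):
        """Processing applied to the data: produce a displayable menu."""
        lines = [self.title]
        lines += [f"  {i}. {label}" for i, (label, _) in enumerate(self._entries, 1)]
        return "\n".join(lines)

    def select(self, index):
        """Invoke the action bound to the chosen entry."""
        label, action = self._entries[index - 1]
        return action()

# Reuse in a new application without touching the component's internals:
menu = PulldownMenu("File")
menu.add_entry("Open", lambda: "opening...")
menu.add_entry("Quit", lambda: "quitting...")
print(menu.render())
print(menu.select(1))
```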

2.4 THE LINEAR SEQUENTIAL MODEL


Sometimes called the classic life cycle or the waterfall model, the linear sequential model
suggests a systematic, sequential approach to software development that begins at
the system level and progresses through analysis, design, coding, testing, and support.
Figure 2.4 illustrates the linear sequential model for software engineering. Modeled
after a conventional engineering cycle, the linear sequential model encompasses
the following activities:
System/information engineering and modeling. Because software is always
part of a larger system (or business), work begins by establishing requirements for
all system elements and then allocating some subset of these requirements to software.
This system view is essential when software must interact with other elements
such as hardware, people, and databases. System engineering and analysis encompass
requirements gathering at the system level with a small amount of top-level design and analysis. Information
engineering encompasses requirements gathering
at the strategic business level and at the business area level.
Software requirements analysis. The requirements gathering process is intensified
and focused specifically on software. To understand the nature of the program(s)
to be built, the software engineer ("analyst") must understand the information domain
(described in Chapter 11) for the software, as well as required function, behavior, performance,
and interface. Requirements for both the system and the software are documented
and reviewed with the customer.
Design. Software design is actually a multistep process that focuses on four distinct
attributes of a program: data structure, software architecture, interface representations,
and procedural (algorithmic) detail. The design process translates requirements
into a representation of the software that can be assessed for quality before coding
begins. Like requirements, the design is documented and becomes part of the software
configuration.
Code generation. The design must be translated into a machine-readable form.
The code generation step performs this task. If design is performed in a detailed manner,
code generation can be accomplished mechanistically.
Testing. Once code has been generated, program testing begins. The testing process
focuses on the logical internals of the software, ensuring that all statements have
been tested, and on the functional externals; that is, conducting tests to uncover
errors and ensure that defined input will produce actual results that agree with required
results.
Support. Software will undoubtedly undergo change after it is delivered to the customer
(a possible exception is embedded software). Change will occur because errors
have been encountered, because the software must be adapted to accommodate
changes in its external environment (e.g., a change required because of a new operating
system or peripheral device), or because the customer requires functional or
performance enhancements. Software support/maintenance reapplies each of the
preceding phases to an existing program rather than a new one.

The linear sequential model is the oldest and the most widely used paradigm for
software engineering. However, criticism of the paradigm has caused even active
supporters to question its efficacy [HAN95]. Among the problems that are sometimes
encountered when the linear sequential model is applied are:
1. Real projects rarely follow the sequential flow that the model proposes.
Although the linear model can accommodate iteration, it does so indirectly.
As a result, changes can cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The
linear sequential model requires this and has difficulty accommodating the
natural uncertainty that exists at the beginning of many projects.
3. The customer must have patience. A working version of the program(s) will
not be available until late in the project time-span. A major blunder, if undetected
until the working program is reviewed, can be disastrous.
In an interesting analysis of actual projects, Bradac [BRA94] found that the linear
nature of the classic life cycle leads to “blocking states” in which some project team
members must wait for other members of the team to complete dependent tasks. In
fact, the time spent waiting can exceed the time spent on productive work! The blocking
state tends to be more prevalent at the beginning and end of a linear sequential
process.
Each of these problems is real. However, the classic life cycle paradigm has a definite
and important place in software engineering work. It provides a template into
which methods for analysis, design, coding, testing, and support can be placed. The
classic life cycle remains a widely used procedural model for software engineering.
While it does have weaknesses, it is significantly better than a haphazard approach
to software development.

THE PROTOTYPING MODEL


Often, a customer defines a set of general objectives for software but does not identify
detailed input, processing, or output requirements. In other cases, the developer
may be unsure of the efficiency of an algorithm, the adaptability of an operating system,
or the form that human/machine interaction should take. In these, and many
other situations, a prototyping paradigm may offer the best approach.
The prototyping paradigm (Figure 2.5) begins with requirements gathering. Developer
and customer meet and define the overall objectives for the software, identify
whatever requirements are known, and outline areas where further definition is
mandatory. A "quick design" then occurs. The quick design focuses on a representation
of those aspects of the software that will be visible to the customer/user (e.g.,
input approaches and output formats). The quick design leads to the construction of a prototype. The prototype
is evaluated by the customer/user and used to refine
requirements for the software to be developed. Iteration occurs as the prototype is
tuned to satisfy the needs of the customer, while at the same time enabling the developer
to better understand what needs to be done.
Ideally, the prototype serves as a mechanism for identifying software requirements.
If a working prototype is built, the developer attempts to use existing program fragments
or applies tools (e.g., report generators, window managers) that enable working
programs to be generated quickly.
But what do we do with the prototype when it has served the purpose just
described? Brooks [BRO75] provides an answer:
In most projects, the first system built is barely usable. It may be too slow, too big, awkward
in use or all three. There is no alternative but to start again, smarting but smarter, and build
a redesigned version in which these problems are solved . . . When a new system concept
or new technology is used, one has to build a system to throw away, for even the best planning
is not so omniscient as to get it right the first time. The management question, therefore,
is not whether to build a pilot system and throw it away. You will do that. The only
question is whether to plan in advance to build a throwaway, or to promise to deliver the
throwaway to customers . . .
The prototype can serve as "the first system." The one that Brooks recommends
we throw away. But this may be an idealized view. It is true that both customers and
developers like the prototyping paradigm. Users get a feel for the actual system and developers get to build
something immediately. Yet, prototyping can also be problematic
for the following reasons:
1. The customer sees what appears to be a working version of the software,
unaware that the prototype is held together “with chewing gum and baling
wire,” unaware that in the rush to get it working no one has considered overall
software quality or long-term maintainability. When informed that the
product must be rebuilt so that high levels of quality can be maintained, the
customer cries foul and demands that "a few fixes" be applied to make the
prototype a working product. Too often, software development management
relents.
2. The developer often makes implementation compromises in order to get a
prototype working quickly. An inappropriate operating system or programming
language may be used simply because it is available and known; an
inefficient algorithm may be implemented simply to demonstrate capability.
After a time, the developer may become familiar with these choices and forget
all the reasons why they were inappropriate. The less-than-ideal choice
has now become an integral part of the system.
Although problems can occur, prototyping can be an effective paradigm for software
engineering. The key is to define the rules of the game at the beginning; that is,
the customer and developer must both agree that the prototype is built to serve as a
mechanism for defining requirements. It is then discarded (at least in part) and the
actual software is engineered with an eye toward quality and maintainability.

EVOLUTIONARY SOFTWARE PROCESS MODELS


There is growing recognition that software, like all complex systems, evolves over a
period of time [GIL88]. Business and product requirements often change as development
proceeds, making a straight path to an end product unrealistic; tight market
deadlines make completion of a comprehensive software product impossible, but a
limited version must be introduced to meet competitive or business pressure; a set
of core product or system requirements is well understood, but the details of product
or system extensions have yet to be defined. In these and similar situations, software
engineers need a process model that has been explicitly designed to
accommodate a product that evolves over time.
The linear sequential model (Section 2.4) is designed for straight-line development.
In essence, this waterfall approach assumes that a complete system will be
delivered after the linear sequence is completed. The prototyping model (Section
2.5) is designed to assist the customer (or developer) in understanding requirements.
In general, it is not designed to deliver a production system. The evolutionary
nature of software is not considered in either of these classic software
engineering paradigms. Evolutionary models are iterative. They are characterized in a manner that enables
software engineers to develop increasingly more complete versions of the software.

2.7.2 The Spiral Model


The spiral model, originally proposed by Boehm [BOE88], is an evolutionary software
process model that couples the iterative nature of prototyping with the controlled and
systematic aspects of the linear sequential model. It provides the potential for rapid
development of incremental versions of the software. Using the spiral model, software
is developed in a series of incremental releases. During early iterations, the
incremental release might be a paper model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced.
A spiral model is divided into a number of framework activities, also called task
regions. Typically, there are between three and six task regions. Figure 2.8 depicts a
spiral model that contains six task regions:
• Customer communication—tasks required to establish effective communication
between developer and customer.
• Planning—tasks required to define resources, timelines, and other project-related
information.
• Risk analysis—tasks required to assess both technical and management
risks.
• Engineering—tasks required to build one or more representations of the
application.
• Construction and release—tasks required to construct, test, install, and
provide user support (e.g., documentation and training).

• Customer evaluation—tasks required to obtain customer feedback based
on evaluation of the software representations created during the engineering
stage and implemented during the installation stage.
Each of the regions is populated by a set of work tasks, called a task set, that are
adapted to the characteristics of the project to be undertaken. For small projects, the
number of work tasks and their formality is low. For larger, more critical projects,
each task region contains more work tasks that are defined to achieve a higher level
of formality. In all cases, the umbrella activities (e.g., software configuration management
and software quality assurance) noted in Section 2.2 are applied.
As this evolutionary process begins, the software engineering team moves around
the spiral in a clockwise direction, beginning at the center. The first circuit around
the spiral might result in the development of a product specification; subsequent
passes around the spiral might be used to develop a prototype and then progressively
more sophisticated versions of the software. Each pass through the planning region
results in adjustments to the project plan. Cost and schedule are adjusted based on
feedback derived from customer evaluation. In addition, the project manager adjusts
the planned number of iterations required to complete the software.
Unlike classical process models that end when software is delivered, the spiral
model can be adapted to apply throughout the life of the computer software. An alternative
view of the spiral model can be considered by examining the project entry point
axis, also shown in Figure 2.8. Each cube placed along the axis can be used to represent
the starting point for different types of projects. A “concept development project” starts at the core of the spiral
and will continue (multiple iterations occur
along the spiral path that bounds the central shaded region) until concept development
is complete. If the concept is to be developed into an actual product, the process
proceeds through the next cube (new product development project entry point) and
a “new development project” is initiated. The new product will evolve through a number
of iterations around the spiral, following the path that bounds the region that has
somewhat lighter shading than the core. In essence, the spiral, when characterized
in this way, remains operative until the software is retired. There are times when the
process is dormant, but whenever a change is initiated, the process starts at the appropriate
entry point (e.g., product enhancement).
The spiral model is a realistic approach to the development of large-scale systems
and software. Because software evolves as the process progresses, the developer and
customer better understand and react to risks at each evolutionary level. The spiral model
uses prototyping as a risk reduction mechanism but, more important, enables the developer
to apply the prototyping approach at any stage in the evolution of the product. It
maintains the systematic stepwise approach suggested by the classic life cycle but incorporates
it into an iterative framework that more realistically reflects the real world. The
spiral model demands a direct consideration of technical risks at all stages of the project
and, if properly applied, should reduce risks before they become problematic.
But like other paradigms, the spiral model is not a panacea. It may be difficult to
convince customers (particularly in contract situations) that the evolutionary approach
is controllable. It demands considerable risk assessment expertise and relies on this
expertise for success. If a major risk is not uncovered and managed, problems will
undoubtedly occur. Finally, the model has not been used as widely as the linear
sequential or prototyping paradigms. It will take a number of years before the efficacy of
this important paradigm can be determined with absolute certainty.

12.7 THE DATA DICTIONARY


The analysis model encompasses representations of data objects, function, and control.
In each representation data objects and/or control items play a role. Therefore,
it is necessary to provide an organized approach for representing the characteristics
of each data object and control item. This is accomplished with the data dictionary.
The data dictionary has been proposed as a quasi-formal grammar for describing
the content of objects defined during structured analysis. This important modeling
notation has been defined in the following manner [YOU89]:
The data dictionary is an organized listing of all data elements that are pertinent to the system,
with precise, rigorous definitions so that both user and system analyst will have a common
understanding of inputs, outputs, components of stores and [even] intermediate
calculations.
Today, the data dictionary is always implemented as part of a CASE "structured analysis
and design tool." Although the format of dictionaries varies from tool to tool, most
contain the following information:
• Name—the primary name of the data or control item, the data store or an
external entity.
• Alias—other names used for the first entry.
• Where-used/how-used—a listing of the processes that use the data or control
item and how it is used (e.g., input to the process, output from the process,
as a store, as an external entity).
• Content description—a notation for representing content.
• Supplementary information—other information about data types, preset values
(if known), restrictions or limitations, and so forth.
Once a data object or control item name and its aliases are entered into the data
dictionary, consistency in naming can be enforced. That is, if an analysis team member
decides to name a newly derived data item xyz, but xyz is already in the dictionary,
the CASE tool supporting the dictionary posts a warning to indicate duplicate
names. This improves the consistency of the analysis model and helps to reduce
errors.
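A minimal Python sketch can make both the entry format and the duplicate-name warning concrete. The DataDictionary class below is hypothetical (it models no specific CASE tool), the telephone-number entry is an invented example, and the field names mirror the list above:

```python
# Hypothetical sketch of a data dictionary with the duplicate-name check
# described above; not modeled on any specific CASE tool.

class DataDictionary:
    def __init__(self):
        self._entries = {}   # primary name -> entry fields
        self._aliases = {}   # alias -> primary name

    def add(self, name, alias=None, where_used=None, content=None, notes=None):
        # Warn when a newly derived item collides with an existing name or alias.
        if name in self._entries or name in self._aliases:
            raise ValueError(f"duplicate name: '{name}' is already in the dictionary")
        self._entries[name] = {
            "alias": alias,
            "where-used/how-used": where_used or [],
            "content description": content,
            "supplementary information": notes,
        }
        if alias is not None:
            self._aliases[alias] = name

dd = DataDictionary()
dd.add(
    "telephone-number",                            # Name
    alias="phone",                                 # Alias
    where_used=["dial-phone (input)"],             # Where-used/how-used
    content="[local-extension | outside-number]",  # Content description
    notes="string; 7 to 11 digits",                # Supplementary information
)

try:
    dd.add("phone")              # 'phone' already exists as an alias...
except ValueError as err:
    print("warning:", err)       # ...so a duplicate-name warning is posted
```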

ER Diagrams:
In software engineering, an entity-relationship model (ERM) is an abstract and conceptual representation of data.
Entity-relationship modeling is a database modeling method, used to produce a type of conceptual
schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion.
Diagrams created by this process are called entity-relationship diagrams, ER diagrams, or ERDs.
This discussion follows the techniques proposed in Peter Chen's 1976 paper.[1] However, variants of the idea
existed previously,[2] and have been devised subsequently.
Entities, relationships, and attributes

[Diagrams in the original illustrate: two related entities; an entity with an attribute; a relationship with an attribute; a primary key.]

An entity may be defined as a thing which is recognized as being capable of an independent existence
and which can be uniquely identified. An entity is an abstraction from the complexities of some domain.
When we speak of an entity we normally speak of some aspect of the real world which can be
distinguished from other aspects of the real world.[3]
An entity may be a physical object such as a house or a car, an event such as a house sale or a car
service, or a concept such as a customer transaction or order. Although the term entity is the one most
commonly used, following Chen we should really distinguish between an entity and an entity-type. An
entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are
usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most
people tend to use the term entity as a synonym for this term.
Entities can be thought of as nouns. Examples: a computer, an employee, a song, a mathematical
theorem.
A relationship captures how two or more entities are related to one another. Relationships can be thought
of as verbs, linking two or more nouns. Examples: an owns relationship between a company and a
computer, a supervises relationship between an employee and a department, a performs relationship
between an artist and a song, a proved relationship between a mathematician and a theorem.
The model's linguistic aspect described above is utilized in the declarative database query
language ERROL, which mimics natural language constructs.
Entities and relationships can both have attributes. Examples: an employee entity might have a Social
Security Number (SSN) attribute; the proved relationship may have a date attribute.
Every entity (unless it is a weak entity) must have a minimal set of uniquely identifying attributes, which is
called the entity's primary key.
Entity-relationship diagrams don't show single entities or single instances of relations. Rather, they show
entity sets and relationship sets. Example: a particular song is an entity. The collection of all songs in a
database is an entity set. The eaten relationship between a child and her lunch is a single relationship.
The set of all such child-lunch relationships in a database is a relationship set. In other words, a
relationship set corresponds to a relation in mathematics, while a relationship corresponds to a member
of the relation.
Certain cardinality constraints on relationship sets may be indicated as well.
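These distinctions can be sketched in a few lines of Python. The classes below are illustrative only; they reuse the employee/SSN and mathematician/theorem examples from the text to show entity-types, attributes, a primary key, a relationship with its own attribute, and a relationship set:

```python
from dataclasses import dataclass
from datetime import date

# Entity-types are categories; each instance created below is an entity
# (a member of that category). All names here are illustrative.

@dataclass(frozen=True)
class Employee:
    ssn: str        # primary key: a minimal set of uniquely identifying attributes
    name: str

@dataclass(frozen=True)
class Mathematician:
    name: str       # primary key for this entity-type (illustrative)

@dataclass(frozen=True)
class Theorem:
    title: str

# A relationship links two or more entities and may carry attributes of its own.
@dataclass(frozen=True)
class Proved:
    who: Mathematician
    what: Theorem
    on: date        # the relationship's own attribute

# An ER diagram depicts entity *sets* and relationship *sets*, not single
# instances; the set below corresponds to a relation in the mathematical sense,
# and each Proved value is one member of that relation.
proved_set = {
    Proved(Mathematician("K. Goedel"), Theorem("First incompleteness theorem"),
           date(1931, 1, 1)),
}
```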

SRS:
A Software Requirements Specification (SRS) - a requirements specification for a software system - is a
complete description of the behavior of a system to be developed. It includes a set of use cases that
describe all the interactions the users will have with the software. In addition to use cases, the SRS also
contains non-functional (or supplementary) requirements. Non-functional requirements are
requirements which impose constraints on the design or implementation (such as performance
engineering requirements, quality standards, or design constraints).

Function-Oriented Design:


A function-oriented design strategy relies on decomposing the system into a set of
interacting functions with a centralised system state shared by these functions
(Figure 15.1). Functions may also maintain local state information but only for the
duration of their execution.
Function-oriented design has been practised informally since programming
began. Programs were decomposed into subroutines which were functional in
nature. In the late 1960s and early 1970s several books were published which
described ‘top-down’ functional design. They specifically proposed this as a
‘structured’ design strategy (Myers, 1975; Wirth, 1976; Constantine and Yourdon,
1979). These led to the development of many design methods based on functional
decomposition.
Function-oriented design conceals the details of an algorithm in a function
but system state information is not hidden. This can cause problems because a
function can change the state in a way which other functions do not expect.
Changes to a function and the way in which it uses the system state may cause
unanticipated changes in the behaviour of other functions.
A functional approach to design is therefore most likely to be successful
when the amount of system state information is minimised and information sharing
is explicit. Systems whose responses depend on a single stimulus or input and
which are not affected by input histories are naturally function-oriented. Many
transaction-processing systems and business data-processing systems fall into this
class. In essence, they are concerned with record processing where the processing of
one record is not dependent on any previous processing.
An example of such a transaction processing system is the software which
controls automatic teller machines (ATMs) which are now installed outside many
banks. The service provided to a user is independent of previous services provided so
can be thought of as a single transaction. Figure 15.2 illustrates a simplified
functional design of such a system. Notice that this design follows the centralised
management control model introduced in Chapter 13.
In this design, the system is implemented as a continuous loop and actions
are triggered when a card is input. Functions such as Dispense_cash,
Get_account_number, Order_statement, Order_checkbook, etc. can be identified
which implement system actions. The system state maintained by the program is
minimal. The user services operate independently and do not interact with each
other. An object-oriented design would be similar to this and would probably not be
significantly more maintainable.
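A hedged Python sketch of that design follows. The service function names come from the text (Dispense_cash, Get_account_number, and so on, written here in Python naming style); the card-reader iterable and the card dictionary format are assumptions made only to keep the sketch self-contained and runnable:

```python
# Function-oriented sketch of the ATM controller described above: a continuous
# loop, actions triggered by card input, minimal shared system state, and
# independent service functions.

def get_account_number(card):
    return card["account"]

def dispense_cash(account, amount=100):
    print(f"dispensing {amount} from account {account}")

def order_statement(account):
    print(f"statement ordered for account {account}")

def order_checkbook(account):
    print(f"checkbook ordered for account {account}")

# Each user service operates independently; the only shared state is this
# static dispatch table, so the state maintained by the program is minimal.
SERVICES = {
    "cash": dispense_cash,
    "statement": order_statement,
    "checkbook": order_checkbook,
}

def atm_control_loop(card_reader):
    # A continuous loop: actions are triggered when a card is input, and each
    # transaction is independent of all previous ones (no input history).
    for card in card_reader:
        account = get_account_number(card)
        SERVICES[card["service"]](account)

# Simulated run over two independent transactions:
atm_control_loop(iter([
    {"account": "1234", "service": "cash"},
    {"account": "5678", "service": "statement"},
]))
```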

Coupling:
In computer science, coupling or dependency is the degree to which each program module relies on
each one of the other modules
Types of coupling


Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some types of
coupling, in order of highest to lowest coupling, are as follows:
Content coupling (high)
Content coupling is when one module modifies or relies on the internal workings of another
module (e.g., accessing local data of another module).
Therefore changing the way the second module produces data (location, type, timing) will lead to
changing the dependent module.
Common coupling
Common coupling is when two modules share the same global data (e.g., a global variable).
Changing the shared resource implies changing all the modules using it.
External coupling
External coupling occurs when two modules share an externally imposed data format,
communication protocol, or device interface. This is basically related to the communication to
external tools and devices.
Control coupling
Control coupling is one module controlling the flow of another, by passing it information on what
to do (e.g., passing a what-to-do flag).
Stamp coupling (Data-structured coupling)
Stamp coupling is when modules share a composite data structure and use only a part of it,
possibly a different part (e.g., passing a whole record to a function that only needs one field of it).
This may lead to changing the way a module reads a record because a field that the module
doesn't need has been modified.
Data coupling
Data coupling is when modules share data through, for example, parameters. Each datum is an
elementary piece, and these are the only data shared (e.g., passing an integer to a function that
computes a square root).
Message coupling (low)
This is the loosest type of coupling. It can be achieved by state decentralization (as in objects)
and component communication is done via parameters or message passing (see Message
passing).
No coupling
Modules do not communicate at all with one another.
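The difference between the ends of this scale is easiest to see in code. In the hedged Python sketch below, the first pair of functions is common-coupled through a shared global, while the second shows data coupling using the square-root example from the list above; all names are illustrative:

```python
import math

# Common coupling (high): both functions depend on the same global variable.
# Changing how current_reading is stored forces changes in every module using it.
current_reading = 0.0

def store_reading(value):
    global current_reading
    current_reading = value

def report_reading():
    return f"reading: {current_reading:.2f}"

# Data coupling (low): the modules share only an elementary datum via a
# parameter, as in the square-root example from the text.
def side_of_square(area):
    return math.sqrt(area)

store_reading(42.0)
print(report_reading())       # reading: 42.00
print(side_of_square(25.0))   # 5.0
```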
