L04 - The Many Dimensions of The Software Process
Software Process
By Sebastián Tyrrell
This simple definition shows us nothing new. After all, all software has
been developed using some method. "Try it and see" is a perfectly
valid process. However, as highly trained consultants we would be
more likely to refer to it as a heuristic approach.
We can also see that a process is nothing without the something that
gets developed - in our case the software itself - that the process
produces. Again, this is nothing new. Every process produces
something.
These seven process goals are very close relatives of the McCall
quality factors [8], which categorize and describe the attributes that
determine the quality of the software produced. Does it make
sense that the goals we set for our process are similar to the goals we
have for our software? Of course! A process is software too, albeit
software that is intended to be `run' on human beings rather than
machines!
It has already been hinted that all this is too much for a single project
to do. It is essential that the organization provide a set of guidelines to
the projects to allow them to develop their process quickly and easily
and with the minimum of overhead. These guidelines, a form of
meta-process, consist of detailed instructions describing which
activities must be performed and which documents should be produced
by each project. These instructions are generally known as the Quality
system.
Quality
Now we all know, more or less, about Quality systems. Somehow the
initial capital `Q' changes the meaning utterly. It is every engineer's
nightmare. On our first day in a new job we find our supervisor is
away or doesn't know what to do with us. So we are handed a
mountain of waste-paper and told to familiarize ourselves with `the
Quality system'.
Such Quality systems are often far removed from the goals I have set
out for a process. All too often they appear to be nothing more than an
endless list of documents to be produced in the knowledge that they
will never be read; written long after they might have had any use; in
order to satisfy the auditor, who in turn is not interested in the content
of the document but only its existence. This gives rise to the quality
dilemma, stated in the following theorem: it is possible for a Quality
system to adhere completely to any given quality standard and yet for
that Quality system to make it impossible to achieve a quality process.
(I will describe this from here on, in a fit of hubris, as Tyrrell's Quality
theorem).
Now for many of us, the BSI and Crosby definitions are counter-
intuitive. Life might be a great deal easier if we had conformance
systems - for intuitively the BSI definition could perhaps better be
applied to the word conformance! Crosby in particular explicitly rejects
the notion of quality as `degree of excellence' because of the difficulty
of measuring such a nebulous concept.
Yet Crosby's own definition has serious gaps. Wesselius and Ververs
[13] provide an excellent example. The example comes from the US's
ballistic missile warning systems. These regularly gave false
indications of incoming attacks, and would be triggered by various,
mainly natural, events. One was a flock of geese, and another a
moonrise. By Crosby's definition there was no quality problem with the
system. How come? Because the specification said nothing about
moonrises: nothing required the system to distinguish a rising moon
from an incoming missile. The system therefore remained completely
conformant to its specification, and hence its quality was unaffected!
Intuitively this is simply wrong. Few of us, especially given the nature
of the application, would agree that this system was flawless! There
has to be a subjective element to quality, even if it is reasonable to
maximize the objective element. More concretely in this case we must
identify that there is a quality problem with the requirements
statement itself. This requires that our quality model be able to reflect
the existence of such problems, for example, by taking measures of
perceived quality, such as the use of questionnaires to measure
customer satisfaction. Such methods can begin to capture the `yes,
it's what I asked for, but it still doesn't feel right' response that
indicates a possible requirements specification problem.
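The conformance gap in the moonrise example can be sketched in a few lines. The code below is purely hypothetical (it is not from the warning system or from Wesselius and Ververs): a specification that says only "alert on any large radar return" is silent about moonrises and geese, so an implementation that alarms on them still passes every test derived from that specification.

```python
def is_attack(radar_return):
    """Hypothetical spec: raise an alert for any return above the
    size threshold. The spec says nothing about what the return is."""
    return radar_return["size"] > 100

# Conformance tests derived directly from the specification: all pass,
# so by the conformance definition the system has no quality problem.
assert is_attack({"source": "missile", "size": 500})
assert not is_attack({"source": "sparrow", "size": 1})

# But events the specification never mentioned also trigger an alert.
# The defect lies in the requirements, not in the conformant code.
assert is_attack({"source": "moonrise", "size": 800})
assert is_attack({"source": "goose flock", "size": 150})
```

The point of the sketch is that every assertion holds: the implementation is fully conformant, yet the false alarms show the quality problem lives in the requirements statement itself.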
Acknowledgements:
I would like to thank Kim Moorman of ACM Crossroads for help during
the preparation of the final draft of this paper and David Covey and
Jurgen Opschroef of Nokia Networks and Bill Culleton of Silicon and
Software Systems Limited for comments on earlier drafts.