Software Engineering Notes
1. SOFTWARE ENGINEERING SPECIFICATIONS
1.1 SWE - DEFINITION
Software engineering is the application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software; that is, the application of engineering to
software. (IEEE Computer Society)
Software engineering is concerned with the theories, methods and tools which are needed to
develop high quality, complex software in a cost effective way on a predictable schedule.
Software engineering: The disciplined application of engineering, scientific, and mathematical principles,
methods, and tools to the economical production of quality software.
Software is abstract and intangible. It is not constrained by materials, governed by physical laws,
or limited by manufacturing processes. There are no physical limitations on the potential of software.
A software product consists of developed programs and all associated documentation and
configuration data needed to make the programs operate correctly.
Software process: The set of activities, methods, and practices that are used in the production
and evolution of software.
Software engineering process: The total set of software engineering activities needed to
transform a user's requirements into software.
5. Optimized - measurement data is used in a feedback mechanism to improve the
software life cycle model over the lifetime of the organization.
But software engineering is radically different from other engineering disciplines, in that:
The end product is abstract, not a concrete object like a bridge
Costs are almost all human; materials are an ever shrinking fraction
Easy to fix bugs, but hard to test and validate
Software never wears out, but the hardware/OS platforms it runs on do
Variations in application domain are open-ended, and may require extensive new
non-software, non-engineering knowledge for each project
During analysis review, comparison of application domain model with client's reality may
result in changes to each.
During testing, the system is validated against the solution domain model, which might
change due to the introduction of new technologies.
During project management, managers compare their model of the development process
(schedule & budget) against reality (products and resources).
Software life cycle processes
Software life cycle: The period of time that begins when a software product is conceived and
ends when the software is no longer available for use. This cycle typically includes:
1. Initial conception
2. Requirements analysis
3. Specification
4. Initial design
5. Verification and test of design
6. Redesign
7. Prototype manufacturing
8. Assembly and system-integration tests
9. Acceptance tests (validation of design)
10. Production (if several systems are required)
11. Field (operational) trial and debugging
12. Field maintenance
13. Design and installation of added features
14. Systems discard (death of system) or complete system redesign.
1.2 DATA GATHERING METHODS
1.2.1 INTERVIEWING
Interviews help the analyst to understand:
Work flows,
Factors that influence the operations of systems, and
The elements (documents, procedures, policies, etc.) that make up systems.
If interviewing is performed poorly:
the new system would probably not contain the necessary features to meet the needs of
the organization, and
the interviews can affect the attitudes of the users and have a negative effect on the
entire project effort.
Useful background documents for the analyst include:
organization reports
annual reports
long-range planning documents
statements of departmental goals
existing procedure manuals and
systems documentation
Analysts must understand common industry terms and be somewhat familiar with the business
problems of the industry. The following are errors commonly made by inexperienced analysts:
Sitting back in a chair with arms folded across the chest (This posture implies a lack of
openness to what is being said and may also indicate that the analyst is ill at ease.)
Looking at objects in the room or staring out the window instead of looking at the
interviewee. (Because this behavior suggests that the analyst would rather be somewhere
else doing other things, the interviewee will often cut the interview short.)
Taking excessive notes or visually reviewing notes. (An analyst who records rather than
listens may arouse interviewee concerns over what is being written.)
Sitting too far away or too close. (Sitting too far away often communicates that the
analyst is intimidated by the interviewee, while sitting too close may communicate an
inappropriate level of intimacy and make the interviewee uncomfortable.) Acceptance
cues are used to convey understanding, not agreement.
Advantages:
Disadvantages:
Joint Application Design (JAD) is a group interview conducted by an impartial leader. The
advantages of JAD are that the analysis phase of the life cycle is shortened and the
specification documents produced are better accepted by the users.
Requirements analysis should not start until there is a clear statement of scope and objectives of
the project.
Interviewing users to build a physical model of the present system, abstracting this to a
logical model of the present system, and then incorporating desirable changes so as to
create the logical model of the future system.
Organizing a workshop with group methods such as JAD to create the logical model of
the new system directly, without the intermediate steps of the physical and logical models
of the present system.
The logical model of the new system becomes the requirements specification. Typically, it
contains:
The final test of the system, the user acceptance test, should be conducted as a benchmark
against this document. Therefore, all those factors that will determine whether the users are
satisfied with the system should be documented and agreed upon. There may be a final step in
the requirements analysis: to have the users and top management sign off on the specification
document. There is a need to review the present system before designing a new one, because:
2. SOFTWARE DEVELOPMENT METHODS
The software process is a set of activities and associated results which produce a software
product. The software process is the set of tools, methods, and practices we use to produce a
software product.
Requirements elicitation - Client and developers define the purpose of the system, delivering a
description of the system in terms of actors and use cases.
Requirements analysis - Developers transform the use cases into an object model that completely
describes the system, delivering a model of the system that is correct, complete, consistent,
unambiguous, realistic, and verifiable.
System design - Developers define the design goals of the project and the decomposition of the
system into smaller subsystems that can be realized by individual teams, delivering a clear
description of strategies, subsystem decomposition, and a deployment diagram.
Object design - Developers define custom objects (interfaces, off-the-shelf components, model
restructuring to attain design goals) to bridge the gap between the analysis model and the
hardware/software platform defined during system design, delivering a detailed object model
annotated with constraints and precise descriptions for each element.
Implementation - Developers translate the object model into source code, delivering a complete
set of source code files.
Cost estimates and schedule commitments must be met with reasonable consistency, and the
resulting products should generally meet users' functional and quality expectations.
The objectives of software process management are to produce products according to plan
while simultaneously improving the organization's capability to produce better products.
The basic principles are those of statistical process control, which have been used successfully
in many fields.
A process is said to be stable or under statistical control if its future performance is predictable
within established statistical limits.
When a process is under statistical control, repeating the work in roughly the same way will
produce roughly the same result. The basic principle behind statistical control is measurement.
The numbers (measures) must properly represent the process being controlled, and they must
be sufficiently well defined and verified to provide a reliable basis for action. Also, the mere act
of measuring human processes changes them. Measurements are both expensive and
disruptive; overzealous measuring can degrade the process we are trying to improve.
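As a minimal illustration of this principle (a sketch, not from these notes), the code below derives control limits from past defect-density measurements and checks whether a new measurement falls within them; all the figures are invented.

import statistics

# Hypothetical defect densities (defects/KLOC) from past, comparable projects.
history = [4.1, 3.8, 4.5, 4.0, 3.6, 4.3, 3.9]
mean = statistics.mean(history)
sd = statistics.stdev(history)

# Conventional control limits: mean +/- 3 standard deviations.
lower, upper = mean - 3 * sd, mean + 3 * sd

new_measurement = 5.9
in_control = lower <= new_measurement <= upper
print(f"limits = ({lower:.2f}, {upper:.2f}), in control: {in_control}")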
A fully effective software process must consider the relationships of all the required tasks, the
tools and methods used, and the skill, training, and motivation of the people involved. To
improve their software capabilities, organizations must take six steps:
Needed Actions: Planning (size and cost estimates and schedules), performance tracking, change
control, commitment management, Quality Assurance.
(2) Repeatable - Characteristics: intuitive; cost and quality highly variable, reasonable control
of schedules, informal and ad hoc process methods and procedures.
Needed Actions: Develop process standards and definitions, assign process resources, establish
methods (requirements, design, inspection, and test).
Needed Actions: Establish process measurements and quantitative quality goals, plans,
measurements, and tracking
(5) Optimizing - Characteristics: Quantitative basis for continued capital investment in process
automation and improvement.
Needed Actions: Continued emphasis on process measurement and process methods for error
prevention.
2. Project management - this deals with the normal activities of planning, tracking, project
control, and subcontracting.
Planning includes the preparation of plans and the operation of the planning system.
The tracking and review systems ensure that appropriate activities are tracked against the plan
and that deviations are reported to management.
Project control provides for control and protection of the critical elements of the software project
and its process. Subcontracting concerns the means used to ensure that subcontracted resources
perform in accordance with established policies, procedures, and standards.
performance of the defined process and process monitoring and adjustment where improvements
are needed.
4. Technology - This topic deals with technology insertion and environments that covers the
means to identify and install needed technology, while environments include the tools and
facilities that support the management and execution of the defined process.
Process Management
The critical steps are:
1. Install a management review system. This involves senior management and ensures that
plans are produced, approved, and tracked in an orderly way.
2. Insist on a comprehensive development plan. This must include code size estimates,
resource estimates and a schedule.
3. Set up a Software Configuration Management (SCM) function. This is crucial to
maintaining control and must be in place and operational before completion of detailed
design. These functions should then be expanded to include requirements and design.
4. Ensure that a Software Quality Assurance (SQA) organization is established and
sufficiently staffed to review a reasonable sample of the work products. Until there is
evidence that the work is done according to plan, this is essentially a 100% review. With
successful experience the sampling percentage can be reduced.
5. Establish rate charts for tracking the plan. Typical milestones are: requirements
completed and approved, the operational concept reviewed and approved, high-level
design completed and reviewed, percent of modules with detailed design completed,
percent of modules through code and unit test. Similar rate charts can be established for
each phase of test.
A number of framework activities, each made up of tasks, milestones, work products,
and quality assurance points that the project team adapts.
Umbrella activities such as quality assurance, software configuration management
and measurement overlay the process model.
Project: a planned and controlled undertaking to achieve a goal or attain a
solution.
A project is a human activity that achieves a clear objective against a timescale.
Characteristics:
i) Specific objectives to be completed within certain specifications
ii) Has a defined time and task schedule
iii) Has a defined budget
iv) Requires resources such as people
v) There is no practice or rehearsal
vi) Change: a project commissions change
1. Involve management - Significant change requires new priorities, additional resources, and
consistent support. Senior managers will not provide such backing until they are convinced that
the improvement program makes sense.
2. Get technical support - This is best obtained through the technical opinion leaders (those
whose opinions are widely respected). When they perceive that a proposal addresses their key
concerns, they will generally convince the others. However, when the technical community is
directed to implement something they don't believe in, it is much more likely to fail.
3. Involve all management levels - While the senior managers provide the resources and the
technical professionals do the work, the middle managers make the daily decisions on what is
done. When they don't support the plan, their priorities will not be adjusted, and progress will be
painfully slow or nonexistent.
4. Establish an aggressive strategy and a conservative plan - While senior management will be
attracted by an aggressive strategy, the middle managers will insist on a plan that they know how
to implement. It is thus essential to be both aggressive and realistic. The strategy must be visible,
but the plan must provide frequent achievable steps toward the strategic goals.
5. Stay aware of the current situation - It is essential to stay in touch with current problems.
Issues change, and elegant solutions to last year's problems may no longer be pertinent. While
important changes take time, the plan must keep pace with current needs.
6. Keep progress visible - People easily become discouraged if they don't see frequent evidence
of progress. Advertise success, periodically reward the key contributors, and maintain
enthusiasm and excitement.
SOFTWARE DEVELOPMENT TOOLS
Any system development is represented by a model called the System Development Life Cycle
(SDLC), which contains 5 stages that flow from one to the next in order (hence the 'waterfall'
imagery). Variations and alternatives to the SDLC include the Waterfall model, Prototyping,
Iterative Development, the Spiral Model, etc.
System Engineering. Top level customer requirements are identified, functional and
system interfaces are defined and the relation of this software to overall business function is
established.
Analysis. Detailed requirements necessary to define the function and performance of the
software are defined. The information domain for the system is analyzed to identify data
flow characteristics, key data objects and overall data structure.
Design. Detailed requirements are translated into a series of system representations that
depict how the software will be constructed.
The design encompasses a description of program structure, data structure and detailed
procedural descriptions of the software.
Code. Design must be translated into a machine executable form.
The coding step accomplishes this translation through the use of conventional
programming languages (e.g., C, Ada, Pascal) or fourth generation languages.
Testing. Testing is a multi-step activity that serves to verify that each software component
properly performs its required function and validates that the system as a whole meets
overall customer requirements.
Maintenance. Maintenance is the re-application of each of the preceding activities for
existing software. The re-application may be required to correct an error in the original
software, to adapt the software to changes in its external environment (e.g., new hardware,
operating system), or to provide enhancement to function or performance requested by the
customer.
This is the most widely used approach to software engineering. It leads to systematic, rational
software development, but like any generic model, the life cycle paradigm can be problematic
for the following reasons:
1. The rigid sequential flow of the model is rarely encountered in real life. Iteration can
occur causing the sequence of steps to become muddled.
2. It is often difficult for the customer to provide a detailed specification of what is required
early in the process. Yet this model requires a definite specification as a necessary
building block for subsequent steps.
3. Much time can pass before any operational elements of the system are available for
customer evaluation. If a major error in implementation is made, it may not be uncovered
until much later.
Do these potential problems mean that the life cycle paradigm should be avoided? Absolutely
not! They do mean, however, that the application of this software engineering paradigm must be
carefully managed to ensure successful results.
PROTOTYPING
Prototyping moves the developer and customer toward a "quick" implementation. Prototyping
begins with requirements gathering. Meetings between developer and customer are conducted to
determine overall system objectives and functional and performance requirements. The developer
then applies a set of tools to develop a quick design and build a working model (the "prototype")
of some element(s) of the system. The customer or user "test drives" the prototype, evaluating its
function and recommending changes to better meet customer needs. Iteration occurs as this process
is repeated, and an acceptable model is derived. The developer then moves to "productize" the
prototype by applying many of the steps described for the classic life cycle. In object-oriented
programming, with a library of reusable objects (data structures and associated procedures), the
software engineer can rapidly create prototypes and production programs.
1. A working model is provided to the customer/user early in the process, enabling early
assessment and bolstering confidence,
2. The developer gains experience and insight by building the model, thereby resulting in a
more solid implementation of "the real thing"
3. The prototype serves to clarify otherwise vague requirements, reducing ambiguity and
improving communication between developer and user.
1. The user sees what appears to be a fully working system (in actuality, it is a partially
working model) and believes that the prototype (a model) can be easily transformed into
a production system. This is rarely the case. Yet many users have pressured developers
into releasing prototypes for production use that have been unreliable and, worse,
virtually unmaintainable.
2. The developer often makes technical compromises to build a "quick and dirty" model.
Sometimes these compromises are propagated into the production system, resulting in
implementation and maintenance problems.
3. Prototyping is applicable only to a limited class of problems. In general, a prototype is
valuable when heavy human-machine interaction occurs, when complex output is to be
produced or when new or untested algorithms are to be applied. It is far less beneficial for
large, batch-oriented processing or embedded process control applications.
ITERATIVE DEVELOPMENT
The problems with the Waterfall Model created a demand for a new method of developing
systems which could provide faster results, require less up-front information, and offer greater
flexibility. With Iterative Development, the project is divided into small parts. This allows the
development team to demonstrate results earlier on in the process and obtain valuable feedback
from system users.
Often, each iteration is actually a mini-Waterfall process, with the feedback from one phase providing
vital information for the design of the next phase. In a variation of this model, the software products
which are produced at the end of each step (or series of steps) can go into production immediately,
delivering the functions that will be included in the developed system. Prototyping comprises the
following steps:
Prototype Creation / Modification -The information from the design is rapidly rolled into
a prototype. This may mean the creation/modification of paper information, new coding,
or modifications to existing coding.
Assessment - The prototype is presented to the customer for review. Comments and
suggestions are collected from the customer.
Prototype Refinement - Information collected from the customer is digested and the
prototype is refined. The developer revises the prototype to make it more effective and
efficient.
System Implementation -In most cases, the system is rewritten once requirements are
understood. Sometimes, the Iterative process eventually produces a working system that
can be the cornerstone for the fully functional system.
THE SPIRAL MODEL
The term “spiral” is used to describe the process that is followed as the development of the system
takes place. With each iteration around the spiral (beginning at the center and working outward),
progressively more complete versions of the system are built.
Risk assessment is included as a step in the development process as a means of evaluating each
version of the system to determine whether or not development should continue. If the customer
decides that any identified risks are too great, the project may be halted.
For example, if a substantial increase in cost or project completion time is identified during one
phase of risk assessment, the customer or the developer may decide that it does not make sense to
continue with the project, since the increased cost or lengthened timeframe may make continuation
of the project impractical or unfeasible.
The Spiral Model is made up of the following steps:
Project Objectives - Similar to the system conception phase of the Waterfall Model
Objectives are determined, possible obstacles are identified and alternative approaches are
weighed
Risk Assessment - Possible alternatives are examined by the developer, and associated
risks/problems are identified. Resolutions of the risks are evaluated and weighed in the
consideration of project continuation. Sometimes prototyping is used to clarify needs.
Engineering & Production - Detailed requirements are determined and the software
piece is developed.
Planning and Management - The customer is given an opportunity to analyze the results
of the version created in the Engineering step and to offer feedback to the developer.
REQUIREMENTS ENGINEERING
Requirements: the features that the system must have or the constraints that it must satisfy to be
accepted by the client - a model of the system that aims to be correct, complete, consistent, and
verifiable.
Requirements engineering (in the context of systems engineering) is concerned with the
acquisition, analysis, specification, validation, and management of software requirements.
Software requirements express the needs and constraints that are placed upon a software product
that contribute to the satisfaction of some real world application; alternatively, the properties that
must be exhibited in order to solve some real world problem.
Requirements elicitation: to systematically extract and inventory the requirements of the system
from a combination of human stakeholders, the system's environment, feasibility studies, market
analyses, business plans, analyses of competing products and domain knowledge.
Stakeholders include Users, Customers, Market analysts, Regulators, System developers
Issues
CHARACTERISTICS OF REQUIREMENTS
Emergent - they gradually emerge from interactions between requirements engineers and the
client organization
Open - they are always subject to change because organization and their contexts continually
change
Local - requirements must be interpreted in the context of a particular organization at a particular
time
Contingent - because they are an evolving outcome of on-going processes and build on prior
interactions and documents.
Situated - because they can only be understood in relation to the particular, concrete situation in
which they actually occur.
Vague - because they are only elaborated to the degree that it is useful to do so; the rest is left
grounded in tacit knowledge.
Embodied - because they are tied to bodies in particular physical situations, so that the particular
way that bodies are embedded in a situation may be essential to some interpretations.
Retrospective hypothesis: Only post hoc explanations for situated events can attain relative
stability and independence from context. Thus, it is impossible to completely formalize
requirements because they cannot be fully separated from their social context.
REQUIREMENTS ELICITATION
Requirements elicitation: to systematically extract and inventory the requirements of the system
from a combination of human stakeholders, the system's environment, feasibility studies, market
analyses, business plans, analyses of competing products and domain knowledge. This
corresponds to data collection.
Activities
Identify stakeholders
Establish relationships between the development team and the customer
Identify actors, scenarios, and use cases; refine use cases, relationships among use cases,
participating objects, and nonfunctional requirements
Resource requirements
Goals - the high-level objectives of the system. Must assess the value (relative to priority) and
cost of goals.
Domain knowledge
System stakeholders
The operational environment
The organizational environment: business processes conditioned by the structure, culture, and
internal politics.
Elicitation techniques
Users may have difficulty describing their tasks, may leave important information unstated, or
may be unwilling or unable to cooperate. The techniques below correspond to the tools and
methods for data collection.
Interviews between requirements team and the client organization
structured / unstructured interviews, scenarios, prototypes, paper mockups, hastily built
software that exhibits the key functionality of the target product, 4GLs – SQL, interpreted
languages (Smalltalk, Prolog, Lisp, Java, Unix shell, Hypertext, ...)
TECHNICAL DOCUMENTATION REVIEW
Requirements analysis - the process of analyzing requirements to detect and resolve conflicts
between requirements and to discover the bounds of the system and how it must interact with its
environment.
During this process, system requirements are elaborated into software requirements.
Market analysis
Competitive system assessment
Reverse engineering
Simulations
Benchmarking processes and systems
Process parameters - constraints on the development of the system; the requirements must
be stated clearly, unambiguously, appropriately and quantitatively, avoiding vague and
unverifiable requirements that depend on subjective judgment for interpretation.
Emergent requirements - depend on how all the system components inter-operate and are
dependent on the system architecture.
Quality requirements - should be measurable and prioritized.
The tasks involved in the requirements engineering process are iterative and consist of the
following:
Establish the aims of the project - User needs, Domain information, Standards
Repeat
DELIVERABLES
Systems Requirements
Software Requirements
(A software requirements specification (SRS) should follow IEEE Std 1362-1998 and IEEE Std
830-1998.)
Purpose / content - detailed requirements derived from the system requirements that
may form the basis of a contract between developer and customer.
Readership/Language - readers have some knowledge of software engineering concepts
Structure – style that minimizes effort needed to read and locate information
4. SOFTWARE DESIGN
SOFTWARE CONSTRUCTION
Software construction is a fundamental act of software engineering involving construction of
working meaningful software through a combination of coding, validation, and testing (unit
testing) by a programmer.
Design - Description of the internal structure and organization of a system with global solutions
expressed as a set of smaller solutions.
Construction process – concerns finding a complete and executable solution to a problem
Tools - compilers, version control systems, debuggers, code generators, specialized editors, tools
for path and coverage analysis, test scaffolding and documentation tools
1. Scale or size:
o very small projects are "construction sized" and do not require an explicit
design phase
o very large projects may require an interactive relationship between design and
construction
2. As similar methods are used in design and construction, design is often as much
construction as it is design.
3. Design will always include some degree of guessing or approximations that turn out to be
wrong and will require corrective actions during construction.
Styles of construction
- Concern the use of linguistic construction methods, formal and informal, and aim mainly at
reduction in complexity
- Complexity is usually reduced by means of (see the sketch after this list):
Design patterns
Software templates
Functions, procedures, code blocks
Objects and data structures
Encapsulation and abstract data types
Component libraries and frameworks
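A small sketch (invented for these notes, not taken from them) of two mechanisms from the list above, encapsulation and an abstract data type: the stack hides its representation behind a fixed interface, so the implementation can change without breaking callers.

class Stack:
    """Abstract data type: callers see only push/pop/size, not the list inside."""

    def __init__(self) -> None:
        self._items = []          # encapsulated internal representation

    def push(self, item) -> None:
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def size(self) -> int:
        return len(self._items)

s = Stack()
s.push("a"); s.push("b")
print(s.pop(), s.size())          # b 1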
DESIGN CONCEPTS
Modularity - helps to isolate functional elements of the system. One module may be debugged,
improved, or extended with minimal personnel interaction or system discontinuity.
Specification - The key to production success of any modular construct is a rigid specification of
the interfaces; the specification, as a side benefit, aids in the maintenance task by supplying the
documentation necessary to train, understand, and provide maintenance.
Generality - is essential to satisfy the requirement for extensibility. From this viewpoint,
specification should encompass from the innermost primitive functions outward to the
generalized functions such as a general file management system.
Implementation design - is sufficiently describing each component to allow for its coding.
The programmer - provides creativity and insight into how to solve new, difficult problems,
plus the ability to express those solutions with sufficient precision to be meaningful to the
computer.
The Computer - provides astonishing reliability, retention, and speed of performance.
The implementation - Code that fully and correctly processes data for its entire problem space,
anticipates and handles all plausible (and some implausible) classes of errors, runs efficiently,
and is structured to be resilient and easy-to-change over time.
Programming languages - should be higher-level and domain specific, with features such as:
functions and procedures, functional and logic programming, concurrent and real-time
programming, program generators, mathematical libraries and spreadsheets, and OOP (visual
programming and creation of user interfaces)
A programmer should anticipate diversity and prepare by having: Complete and sufficient
method sets, OO methods, Table driven software, Configuration files and internationalization,
Naming and coding styles, Reuse and repositories, Self-describing software (plug and play),
Parameterization, Generics, Objects and Object classes, Error handling, Extensible frameworks,
Visual configuration specification, Separation of GUI design and functionality implementation
Structuring for validation – requires both modular design and structured programming, where
the programmer follows style guides for stepwise refinement that include: assertion-based checks,
state machine logic, redundant systems with self-diagnosis and fail-safe methods, and hot spot
analysis and performance tuning. Also use numerical analysis, with complete and sufficient design
of OO class methods and dynamic validation of visual requests.
The criteria that are proposed for the choice of a formal design language are:
ASPECTS OF OO PROGRAMMING
Object-oriented analysis is a method of analysis that examines requirements from the
perspective of the classes and objects found in the vocabulary of the problem domain
The key features of the OO approach and the languages that support it are as follows (a short
sketch follows this list):
It supports objects that are data abstractions with an interface of named operations
Objects have an associated type [class]
Types [classes] may inherit attributes from super-types [super-classes]
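A minimal sketch of these three features in code; the Account example is invented for illustration, not from the notes.

class Account:                        # a type [class]
    def __init__(self, balance: float) -> None:
        self._balance = balance       # data abstraction: state behind an interface

    def deposit(self, amount: float) -> None:
        self._balance += amount

    def balance(self) -> float:
        return self._balance

class SavingsAccount(Account):        # inherits attributes from its super-type
    def add_interest(self, rate: float) -> None:
        self.deposit(self.balance() * rate)

acct = SavingsAccount(100.0)
acct.add_interest(0.05)
print(acct.balance())                 # 105.0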
The properties of an Object Oriented Programming Language are:
i) Object-oriented analysis, design and programming are being presented as major advances
in software engineering.
ii) Hardware is of ever larger magnitude and available memory ever larger, but our ability
to make that hardware perform effectively, and to manage this on schedule and to budget,
is still very poor.
Reasons why we often fail to complete systems with large programs on schedule:
1. Inability to make realistic program design schedules and meet them. For the following
reasons:
o Underestimation of time to gather requirements and define system functions
o Underestimation of time to produce a workable (cost and time - wise) program design.
o Underestimation of time to test individual programs.
o Underestimation of time to integrate complete program into the system and complete
acceptance tests.
o Underestimation of time and effort needed to correct and retest program changes.
o Failure to provide time for restructuring program due to changes in requirements.
o Failure to keep documentation up-to-date.
2. Underestimation of system time required to perform complex functions.
3. Underestimation of program and data memory requirements, and the tendency to set an end
date for job completion and then to try to meet the schedule by bringing more manpower to
the job, splitting the job into program design blocks in advance of having defined the overall
system plan well enough to define the individual program blocks and their appropriate
interfaces.
5. SOFTWARE PROJECT MANAGEMENT
DEFINITION OF TERMS
Software engineering management - the application of management activities, including
planning, coordinating, resource allocation, monitoring, controlling and reporting, to ensure that
the development of software is systematically disciplined and measurable.
Rationale management - involves problem generation, solutions considered, the criteria used to
evaluate the alternatives and the decision making process
Project management - oversight activities that ensure the delivery of a high-quality system on
time and within budget, including planning, budgeting, hiring and organizing developers into
teams
System Testing - concerns finding differences between the system and its models by executing
the system with sample input data sets, and includes:
Clients often do not appreciate the complexity inherent in software engineering, particularly
the impact of changed client requirements
The software engineering process itself will generate the need for new or changed client
requirements
Software development is an iterative rather than linear process; thus there is a need to maintain
a balance between creativity and discipline
Management needs an underlying theory concerning software products, which are intangible
and cannot be easily tested
Also the degree of Software complexity and the rapid pace of change in the underlying
technology
There is a need to carry out management in the following areas:
Project Management – during the initiation and scope definition, Planning, Enactment, Review,
evaluation and Closure of the project
Software Engineering Measurement – issues concerning goals, selection of software and its
development, Collection of data, Software measurement models and Organizational comparison
Personnel management - hiring and retention, training and motivation, mentoring for career
development, enhancing communication channels and media, meeting procedures, and written,
oral or negotiation presentations
Process Management
Contains the following aspects
Initiation and scope definition – considered are Determination and negotiation of requirements,
Feasibility analysis (technical, operational, financial, social/political), Process for review and
revision of requirements
Planning - Process planning, Project planning, Determine deliverables, Effort, schedule and cost
estimation, Resource allocation, Risk management, Quality management, Plan management
PROJECT SCHEDULING
Estimation of the time and resources required to complete activities, and organizing them in a
coherent sequence.
Involves separating the work (project) into separate activities and judging the time required to
complete these activities, some of which are carried out in parallel
Schedules must: -
Properly co-ordinate the parallel activities
Avoid situations where the whole project is delayed waiting for a critical task to be finished
Schedules must include allowances for errors that can cause delays in completion, and must
therefore be flexible
They must also estimate resources needed to complete each task (human effort, hardware,
software, finance (budget) etc)
NB: the key to estimation is to estimate as if nothing will go wrong, then increase the estimate
to cover anticipated problems, and add a further contingency factor to cover unanticipated
problems.
Gantt Charts
A Gantt chart is a horizontal bar chart that illustrates project tasks against a calendar (a toy
rendering sketch follows this section).
The horizontal position of the bar shows the start and end of the activity, and the length of
the bar indicates its duration.
For the work in progress the actual dates are shown on the horizontal axis
A vertical line indicates a current or reporting date
To reduce complexity a Gantt chart for a large project can have a master chart displaying
the major activity groups (where each activity represents several task) and is followed by
individual Gantt charts that show the tasks assigned to team members.
The chart can be used to track and report progress as it presents a clear picture of project
status
It clearly shows overlapping tasks - tasks that can be performed at the same time
The bars can be shaded to clearly indicate percentage completion and project progress
Popular due to its simplicity – it is easy to read, learn, prepare and use
More effective than PERT/CPM charts when one is seeking to communicate schedules
They do not show activity dependencies. One cannot determine from the Gantt chart the
impact on the entire project caused by a single activity that falls behind schedule.
The length of the bar only indicates the time span for completing an activity not the number
of people assigned or the person days required
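As a toy illustration (task names and dates invented, not from the notes), the sketch below renders the essence of a Gantt chart in text: each bar is offset by the task's start week and its length shows the duration, which makes overlapping tasks visible at a glance.

tasks = [                      # (name, start week, duration in weeks)
    ("Requirements", 0, 3),
    ("Design",       2, 4),    # overlaps Requirements: parallel work
    ("Coding",       5, 6),
    ("Testing",      9, 4),
]

for name, start, duration in tasks:
    bar = " " * start + "#" * duration
    print(f"{name:<13}|{bar}")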
PERT/CPM
Program Evaluation and Review Technique (PERT), (Also Critical Path Method – CPM) is
a graphical network model that depicts a project’s tasks and the relationships between those
tasks. The project is shown as a network diagram with the activities shown as vectors and
events displayed as nodes.
Shows all individual activities and dependencies
It forms the basis for planning and provides management with the ability to plan for best
possible use of resources to achieve a given goal within time and cost limitations
It provides visibility and allows management to control unique programs as opposed to
repetitive situations
Helps management handle the uncertainties involved in programs by answering such
questions as how time delays in certain elements influence others as well as the project
completion. This provides management with a means for evaluating alternatives
It provides a basic structure for reporting information
Reveals interdependencies of activities
Facilitates "what if" exercises
It allows one to perform scheduling risk analysis
Allows a large amount of sophisticated data to be presented in a well organized diagram
from which both the contractor and the customer can make joint decisions
Allows one to evaluate the effect of changes in the program
More effective than Gantt charts when you want to study the relationships between tasks
Requires intensive labour and time
The complexity of the charts adds to implementation problems
Has more data requirements thus is expensive to maintain
Is utilized mainly in large and complex projects
Gantt Charts and PERT/CPM are not mutually exclusive techniques; project managers often
use both methods. Neither handles the scheduling of personnel or the allocation of resources
NETWORK ANALYSIS
Network analysis is a generic name for a family of related techniques developed to aid
management to plan and control projects. It provides planning and control information on time,
cost and resource aspects of a project. It is most suitable where the projects are complex, large or
restrictions exist.
The critical path method is applied by drawing a network, either an activity-on-arrow or an
activity-on-node network. In network analysis a project is broken down into its constituent
activities, which are presented in diagrammatic form. In CPM one has to analyze the project, draw
the network, estimate the time and cost, locate the critical path, schedule the project, monitor and
control the progress of the project, and revise the plan.
Example: draw a network and find the critical path for the following project (a computational
sketch follows the table):
Activity  Preceding Activity  Duration
A         -                   4
B         A                   2
C         B                   10
D         A                   2
E         D                   5
F         A                   2
G         F                   4
H         G                   3
J         C                   6
K         C, E                6
L         H                   3
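The sketch below computes the answer for the table above using a standard forward pass (earliest start/finish) and backward pass (latest start/finish); activities with zero float lie on the critical path. Only the activity data comes from the notes; the code itself is illustrative.

activities = {   # name: (list of preceding activities, duration)
    "A": ([], 4), "B": (["A"], 2), "C": (["B"], 10), "D": (["A"], 2),
    "E": (["D"], 5), "F": (["A"], 2), "G": (["F"], 4), "H": (["G"], 3),
    "J": (["C"], 6), "K": (["C", "E"], 6), "L": (["H"], 3),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
# (The dict is already listed in a valid precedence order.)
es, ef = {}, {}
for name, (preds, dur) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur
duration = max(ef.values())                      # project duration: 22

# Backward pass: latest finish (LF) and latest start (LS).
succs = {n: [m for m, (p, _) in activities.items() if n in p] for n in activities}
lf, ls = {}, {}
for name in reversed(list(activities)):
    lf[name] = min((ls[s] for s in succs[name]), default=duration)
    ls[name] = lf[name] - activities[name][1]

# Zero total float (LS == ES) marks the critical path.
critical = [n for n in activities if ls[n] == es[n]]
print(duration, critical)                        # 22 ['A', 'B', 'C', 'J', 'K']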
More effective planning – CPM forces management to think the project through, for it requires
careful, detailed planning and high discipline, which justifies its use
Better focusing on the problem areas – the technique enables the manager to pin-point likely
bottle-necks and problem areas before they can occur
Improve resource allocation – resources can be directed where most needed thus reducing costs
and speeding up completion of the project, e.g. overtime can be eliminated or confined to those
tasks where it will do most good
Strong alternative options – management can simulate the effect of alternative courses of action,
and gauge the effect of problems in carrying out particular tasks and making contingency plans
Management by exception – CPM identifies those actions whose timely completion is critical to
the overall timetable and enables the leeway on other actions to be calculated. This enables the
management to focus their attention on important areas of the project
Improve project monitoring – by comparing the actual performance of each task with the
expected the manager can immediately recognize when the problems are occurring, identify their
causes and take appropriate action in time to rescue the project.
Allows for better communication of project tasks and deadlines
A project is considered successful when it is completed:
i) Within the allocated time
ii) Within the budgeted cost
iii) At the proper performance or specification level
iv) With acceptance by the customer/user
v) With minimum or mutually agreed upon scope changes
vi) Without disturbing the main work flow of the organization
vii) Without changing the corporate culture
viii) Within the required quality and standards thus you can use the customer’s name as a
reference
Based on the work of Dr W Edwards Deming, who is reputed to have transformed Japanese
products from shoddy to first in the world. He developed 14 points for management to transform
organizations.
1. Create constancy of purpose toward improvement of product and service, with the aim to
become competitive and to stay in business and to provide jobs.
2. Adopt the new philosophy. We are in a new economic age. Western management must
awaken to the challenge, must learn their responsibilities, and take on leadership for
change.
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on
a mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price tag. Instead, minimize total
cost. Move toward a single supplier for any one item, with a long-term relationship of
loyalty and trust.
5. Improve constantly and forever the system of production and service, to improve quality
and productivity and thus constantly decrease costs.
6. Institute training on the job.
7. Institute leadership. The aim of leadership should be to help people and machines and
gadgets to do a better job. Leadership of management is in need of overhaul, as well as
leadership of production workers.
8. Drive out fear, so that everyone may work effectively for the company.
9. Break down barriers between departments. People in research, design, sales, and
production must work as a team to foresee problems in production and in use that may be
encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the work force that ask for zero defects
and new levels of productivity.
11. Eliminate work standards (quotas) on the factory floor. Substitute leadership. Eliminate
management by objective. Eliminate management by numbers and numerical goals.
Substitute leadership.
12. Remove barriers that rob the hourly worker of his right to pride of workmanship. The
responsibility of supervisors must be changed from stressing sheer numbers to quality.
Remove barriers that rob people in management and engineering of their right to pride of
workmanship. This means abolishment of the annual merit rating and management by
objective.
13. Encourage education and self-improvement for everyone.
14. Take action to accomplish the transformation.
Computer resources - These include estimating storage space, processors, terminals,
drivers, servers, output media, etc. Generally, sufficient allowance is given for data growth
involving work files and transaction documents
Project overheads - Overheads may include the many supporting activities that form an integral
part of the project e.g. assessing technical performance, commitment of personnel, project
control, system demonstration, project reviews etc
Also considered are Traveling, Training and Effort Costs (e.g. paying software engineers)
PROJECT ESTIMATION
System (software) cost and effort estimates can never be exact; too many variables (human,
technical, environmental, political) can affect system cost and the effort applied to development
The four major tasks undertaken by the project manager when preparing estimates for a given
project are:
Assess the project
Identify all the activities
Evaluate all the net resource activities
Cost the resources
Project estimation strives to achieve reliable cost and effort estimates through:
a) Delay estimation until late in the project (estimates done after the project)
b) Base estimates on similar projects that have already been completed
c) Use relatively simple decomposition techniques to generate project cost and effort
estimates
d) Use one or more empirical models for system cost and effort estimation
Decomposition techniques take a "divide and conquer" approach to project estimation. The
project is decomposed (divided) into major functions and related activities, and cost and effort
are estimated for each.
Empirical estimation models are based on experience (historical data) and take the form
d = f(vi)
where d = one of a number of estimated values (e.g. effort, cost, project duration, etc.)
vi = a selected independent parameter (e.g. estimated LOC or FP)
A project estimate is only as good as the estimate of the size of the work to be accomplished.
Size is a quantifiable outcome of the system/software project, commonly measured in
Lines of Code (LOC) or Function Points (FP).
QUANTITATIVE MANAGEMENT AND ASSURANCE
Definition – Quality management involves defining appropriate procedures and standards and
checking that all engineers (developers) follow them
(1) Algorithmic cost modeling - A model is developed using historical cost information which
relates some software metric (usually its size) to the project cost. An estimate is made of that
metric and the model predicts the effort required.
(2) Expert judgement - One or more experts on the software development techniques to be used
and on the application domain are consulted. They each estimate the project cost and the final
cost estimate is arrived at by consensus.
(3) Estimation by analogy - This technique is applicable when other projects in the same
application domain have been completed. The cost of a new project is estimated by analogy with
these completed projects.
(4) Parkinson's Law - Parkinson's Law states that work expands to fill the time available. In
software costing, this means that the cost is determined by available resources rather than by
objective assessment. If the software has to be delivered in 12 months and 5 people are available,
the effort required is estimated to be 60 person-months.
(5) Pricing to win - The software cost is estimated to be whatever the customer has available to
spend on the project. The estimated effort depends on the customer's budget and not on the
software functionality.
(6) Top-down estimation - A cost estimate is established by considering the overall
functionality of the product and how that functionality is provided by interacting sub-functions.
Cost estimates are made on the basis of the logical function rather than the components
implementing that function.
(7) Bottom-up estimation - The cost of each component is estimated. All these costs are added
to produce a final cost estimate.
HISTORICAL DATA
Using historical data for estimation is fairly self-explanatory. This method uses the track record
of previous projects to make estimates for the new project.
Main advantage: It is specific to that organization
Main disadvantages:
· Continuous process improvement is sometimes hard to factor in
· Some projects may be very different from previous ones
Costs are analyzed using mathematical formulae linking costs with metrics. The most commonly
used metric for cost estimation is the number of lines of source code in the finished system
(which of course is not known). Size estimation may involve estimation by analogy with other
projects, estimation by ranking the sizes of system components and using a known reference
component to estimate the component size or may simply be a question of engineering
judgement.
Code size estimates are uncertain because they depend on hardware and software choices, use of
a commercial database management system etc. An alternative to using code size as the
estimated product attribute is the use of `function- points', which are related to the functionality
of the software rather than to its size. Function points are computed by counting the following
software characteristics:
Each of these is then individually assessed for complexity and given a weighting value which
varies from 3 (for simple external inputs) to 15 (for complex internal files). The function point
count is computed by multiplying each raw count by the estimated weight and summing all
values, then multiplied by the project complexity factors which consider the overall complexity
of the project according to a range of factors such as the degree of distributed processing, the
amount of reuse, the performance, and so on.
Function point counts can be used in conjunction with lines of code estimation techniques. The
number of function points is used to estimate the final code size.
Based on historical data analysis, the average number of lines of code in a particular language
required to implement a function point can be estimated (AVC). The estimated code size for a
new application is computed as follows: Code size = AVC x Number of function points
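A sketch of the computation just described; the raw counts, weights, complexity factor, and AVC figure below are invented for illustration, not values from these notes.

raw = {                         # characteristic: (count, assumed weight)
    "external inputs":     (24, 4),
    "external outputs":    (16, 5),
    "user inquiries":      (22, 4),
    "internal files":      (4, 10),
    "external interfaces": (2, 7),
}
unadjusted_fp = sum(count * weight for count, weight in raw.values())   # 318

complexity = 1.10               # assumed overall project complexity factor
fp = unadjusted_fp * complexity # adjusted function points

avc = 128                       # assumed average LOC per FP for the language
code_size = avc * fp            # Code size = AVC x Number of function points
print(round(fp), round(code_size))                                     # 350 44774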
The advantage of this approach is that the number of function points can often be estimated from
the requirements specification, so an early code size prediction can be made. Levels of selected
software languages relative to Assembler language:
DECOMPOSITION TECHNIQUES
Decomposition techniques for estimation can be one of two types:
Problem-based - Use functional or object decomposition, estimate each object, and sum the
result. Estimates either use historical data or empirical techniques.
Process-based - Estimate effort and cost for each task of the process, and then sum the result (as
in the sketch below). Estimates use historical data
Main advantage: Easier to estimate smaller parts
Main disadvantage: More variables involved means more potential errors
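A process-based decomposition reduces, in code, to estimating each task and summing; the task list, person-day figures, and daily rate below are invented for illustration.

tasks = {                       # task: estimated effort in person-days
    "requirements": 12.0,
    "design":       18.0,
    "coding":       30.0,
    "testing":      22.0,
    "documentation": 8.0,
}
total_effort = sum(tasks.values())   # person-days for the whole process
daily_rate = 400                     # assumed cost per person-day
cost = total_effort * daily_rate
print(total_effort, cost)            # 90.0 36000.0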
EMPIRICAL MODELS
Estimation uses formulae of the form g = f(x),
where g is the value to be estimated (cost, effort or project duration) and x is a parameter such as
KLOC, FP or OP. Most formulae involving KLOC consistently show that there is an almost
linear relationship between KLOC and estimated total effort.
The SLIM model is based on empirical studies and relates the size (S), a technology factor (C),
the total life-cycle effort in person-years (K), and the elapsed development time in years (td); thus
S = C * K^(1/3) * td^(4/3)
The equation allows one to assess the effect of varying the delivery date on the total effort needed
to complete the project. Thus for a 10% decrease in elapsed time,
C * K^(1/3) * td^(4/3) = C * K'^(1/3) * (0.9 td)^(4/3), i.e. K'/K = (1/0.9)^4 ≈ 1.52,
a 52% increase in total life cycle effort.
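A quick numeric check of the trade-off above: holding S and C fixed, compress the schedule by 10% and solve for the new effort. The function below is a sketch of that algebra, not part of any SLIM tooling.

def effort_ratio(schedule_ratio: float) -> float:
    """K'/K when td is multiplied by schedule_ratio, with S and C held fixed."""
    # From S = C * K**(1/3) * td**(4/3): K**(1/3) * td**(4/3) is constant,
    # so K'/K = schedule_ratio ** -4.
    return schedule_ratio ** -4

print(round(effort_ratio(0.9), 2))   # 1.52: ~52% more total effort for 10% less time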
Advantages of SLIM
Uses linear programming to consider development constraints on both cost and effort
Has fewer parameters needed to generate an estimate (compared with COCOMO)
Disadvantages of SLIM
The model is often found to be insufficient
Not suitable for small projects
Estimates are extremely sensitive to the technology factor
COCOMO - the most widely used model for effort and cost estimation, considering a wide variety
of factors. Projects fall into three categories: organic, semidetached, and embedded, characterized
by their size. The basic model uses only source size. There is also an intermediate model which,
as well as size, uses 15 other cost drivers (a sketch of the basic model follows the list below).
Cost drivers for the COCOMO model:
Software reliability
Size of application database
Complexity
Analyst capability
Software engineering capability
Applications experience
Virtual machine experience
Programming language expertise
Performance requirements
Memory constraints
Volatility of virtual machine
Environment
Turnaround time
Use of software tools
Application of software engineering methods
Required development schedule
Values are assigned by the manager.
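The notes do not reproduce the basic-model equations, so the sketch below uses the commonly published textbook coefficients for effort E = a * (KLOC)^b; treat the coefficients and the sample project as illustrative assumptions.

BASIC_COCOMO = {        # category: (a, b); effort in person-months, size in KLOC
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def basic_effort(kloc: float, category: str) -> float:
    a, b = BASIC_COCOMO[category]
    return a * kloc ** b

# A 32 KLOC organic project: roughly 91 person-months.
print(round(basic_effort(32, "organic"), 1))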
The intermediate model is more accurate than the basic model.
7. SOFTWARE METRICS
From a survey of managers and technicians:
In an experiment, five programming teams were each given a different objective. When
productivity was evaluated, each team ranked first in its primary objective. This shows that
programmers respond to goals.
Software metrics are defined as the set of measures that are considered and incorporated
for a quality software product.
This set of software metrics includes:
Software Maintainability - This is the main programming cost in most installations, and is
affected by data structures, logical structure, documentation, diagnostic tools, and by personnel
attributes such as specialization, experience, training, intelligence, and motivation. Maintenance
includes the cost of rewriting, testing, debugging and integrating new features. Methods for
improving maintainability are:
inspections
automated audits of comments
test path analysis programs
use of pseudo code documentation
dual maintenance of source code
modularity
structured program logic flow
Maintainability is "the ease with which changes can be made to satisfy new requirements or to
correct deficiencies" [Balci 1997]. Well designed software should be flexible enough to
accommodate future changes that will be needed as new requirements come to light. Since
maintenance accounts for nearly 70% of the cost of the software life cycle [Schach 1999], the
importance of this quality characteristic cannot be overemphasized. Quite often the programmer
responsible for writing a section of code is not the one who must maintain it. For this reason, the
quality of the software documentation significantly affects the maintainability of the software
product.
Software Correctness is "the degree with which software adheres to its specified requirements"
[Balci 1997]. At the start of the software life cycle, the requirements for the software are
determined and formalized in the requirements specification document. Well designed software
should meet all the stated requirements. While it might seem obvious that software should be
correct, the reality is that this characteristic is one of the hardest to assess. Because of the
tremendous complexity of software products, it is impossible to perform exhaustive execution-
based testing to ensure that no errors will occur when the software is run. Also, it is important to
remember that some products of the software life cycle such as the design specification cannot be
"executed" for testing. Instead, these products must be tested with various other techniques such
as formal proofs, inspections, and walkthroughs.
Software Reusability is "the ease with which software can be reused in developing other software"
[Balci 1997]. By reusing existing software, developers can create more complex software in a
shorter amount of time. Reuse is already a common technique employed in other engineering
disciplines. For example, when a house is constructed, the trusses which support the roof are
typically purchased preassembled. Unless a special design is needed, the architect will not bother
to design a new truss for the house. Instead, he or she will simply reuse an existing design that has
proven itself to be reliable. In much the same way, software can be designed to accommodate reuse
in many situations. A simple example of software reuse could be the development of an efficient
sorting routine that can be incorporated in many future applications.
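For instance, a minimal sketch of such a reusable sorting routine (an insertion sort with a
caller-supplied key function; all names here are illustrative):

    def insertion_sort(items, key=lambda x: x):
        # Written once, reusable anywhere: the key parameter adapts the
        # routine to new applications without changing its code.
        result = list(items)            # leave the caller's data untouched
        for i in range(1, len(result)):
            current = result[i]
            j = i - 1
            while j >= 0 and key(result[j]) > key(current):
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = current
        return result

    print(insertion_sort([5, 2, 9, 1]))              # one application
    print(insertion_sort(["beta", "ab"], key=len))   # reused in another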
Software Documentation - is one of the items said to lead to high maintenance costs. It
is not just the program listing with comments. A program librarian should be responsible for the
system documentation, but programmers are responsible for the technical writing. Other aids may
be text editors and a Source Code Control System (SCCS) tool for producing records. Some
companies insist that programmers dictate any tests or changes onto a tape every day.
Storage, complexity, and processing time may or may not be considered at the conceptualization
stage. For example, storage for a large database might be considered then, whereas if one already
existed, the decision could be left until system specification.
Software Reliability - Reliability refers to issues relating to the design of a product that will
operate well for a substantial length of time; as a metric, it is the probability of operational
success of the software.
Probabilistic models can refer to deterministic events (e.g. a motor burns out) when it cannot be
predicted when they will occur, or to random events. The probability space (the space of all
possible occurrences) must first be defined; e.g. in a probability model for program errors it is all
possible paths in a program. Then the rules for selection are specified, e.g. for each path, the
combinations of initial conditions and input values. A software failure occurs when an execution
sequence containing an error is processed.
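A toy sketch of this idea (assuming a deliberately faulty function and a hypothetical operational
profile of random integer inputs) estimates the failure probability by sampling:

    import random

    def buggy_divide(a, b):
        # Illustrative fault: fails whenever the divisor happens to be zero.
        return a / b

    def estimate_failure_probability(trials=100_000):
        failures = 0
        for _ in range(trials):
            a, b = random.randint(0, 9), random.randint(0, 9)   # operational profile
            try:
                buggy_divide(a, b)
            except ZeroDivisionError:
                failures += 1          # an execution sequence containing the error
        return failures / trials

    print(estimate_failure_probability())   # roughly 0.1 for this profile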
Software Reliability Theory is the application of probability theory to the modeling of failures
and the prediction of success probability. Thus software reliability is the probability that the
program performs successfully, according to specifications, for a given time period. This requires
precise statements / specifications of what constitutes an error and of how time is measured.
Errors are identified from system failures, and may be: hardware, software, operator, or unresolved.
Time may be divided into:
operating time,
calendar time during operation,
calendar time during development,
man-hours of coding,
development, testing,
debugging,
computer test times.
Software is repairable if it can be debugged and the errors corrected. This may not be possible
without inconveniencing the user, e.g. air-traffic control system.
Software Reliability is "the frequency and criticality of software failure, where failure is an
unacceptable effect or behavior occurring under permissible operating conditions" [Balci 1997].
The frequency of software failure is measured by the average time between failures. The
criticality of software failure is measured by the average time required for repair. Ideally,
software engineers want their products to fail as little as possible (i.e., demonstrate high
correctness) and be as easy as possible to fix (i.e., demonstrate good maintainability). For some
real-time systems such as air traffic control or heart monitors, reliability becomes the most
important software quality characteristic. However, it would be difficult to imagine a highly
reliable system that did not also demonstrate high correctness and good maintainability.
Software availability - is the probability that the program is performing successfully, according
to specifications, at a given point in time. Availability is defined as:
1. the ratio of systems up at some instant to the size of the population studied (number of
systems);
2. the ratio of observed uptime to the sum of the uptime and downtime.
If the system is still in the design and development phase, then a third definition is used:
3. the ratio of the mean time to failure (uptime) to the sum of the mean time to failure and
the mean time to repair (downtime): A = MTTF / (MTTF + MTTR)
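A minimal sketch of these formulas (the MTTF and MTTR figures in hours are hypothetical; the
reliability function assumes the common constant-failure-rate model):

    import math

    def availability(mttf, mttr):
        # A = MTTF / (MTTF + MTTR)
        return mttf / (mttf + mttr)

    def reliability(t, mttf):
        # Constant-failure-rate assumption: R(t) = exp(-t / MTTF)
        return math.exp(-t / mttf)

    print(availability(450.0, 50.0))    # 0.9 -> the system is up 90% of the time
    print(reliability(100.0, 450.0))    # probability of running 100 h without failure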
Various hypotheses exist about program errors; they seem to be true, but no controlled tests have
been run to prove or disprove them:
1. Bugs per line is constant. There are fewer errors per line in a high-level language, and many
types of errors possible in machine code do not exist in a HOL.
2. Memory shortage encourages bugs, mainly due to programming "tricks" used to squeeze
code.
3. Heavy load causes errors to occur. Heavy loads are very difficult to document and test.
4. Tuning reduces the error-occurrence rate. This involves removing errors for a class of input
data; if new inputs are needed, new errors could occur, and the system (hardware and
software) must be retuned.
5. The normalized number of errors is constant. Normalization is the total number of errors
divided by the number of machine-language instructions.
6. The normalized error-removal rate is constant. (These two hypotheses apply over similar
programs.)
7. Bug characteristics remain unchanged as debugging proceeds. Those found in the first
few weeks are representative of the total bug population.
8. Independent debugging results in similar programs. When two independent debuggers
work on a large program, the evolution of the program is such that the difference between
their versions is negligible.
Many researchers have put forward models of reliability based on measures of the hardware, the
software, and the operator; and used them for prediction, comparative analysis, and development
control. Error, reliability and availability models provide a quantitative measure of the goodness
of the software. There are still many unanswered questions.
Software Portability is "the ease with which software can be used on computer configurations
other than its current one" [Balci 1997]. Porting software to other computer configurations is
important for several reasons. First, "good software products can have a life of 15 years or more,
whereas hardware is frequently changed at least every 4 or 5 years. Thus good software can be
implemented, over its lifetime, on three or more different hardware configurations" [Schach 1999].
Second, porting software to a new computer configuration may be less expensive than developing
analogous software from scratch. Third, the sales of "shrink-wrapped software" can be increased
because a greater market for the software is available.
Software Efficiency is "the degree with which software fulfills its purpose without waste of
resources" [Balci 1997]. Efficiency is really a multifaceted quality characteristic and must be
assessed with respect to a particular resource such as execution time or storage space. One measure
of efficiency is the speed of a program's execution. Another measure is the amount of storage space
the program requires for execution. Often these two measures are inversely related, that is,
increasing the execution efficiency causes a decrease in the space efficiency. This relationship is
known as the space-time tradeoff. When it is not possible to design a software product with
efficiency in every aspect, the most important resources of the software are given priority.
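A small illustration of the space-time tradeoff: memoising a function spends storage space on a
cache in order to cut execution time (standard-library cache; the function itself is illustrative):

    from functools import lru_cache

    @lru_cache(maxsize=None)        # the cache is the extra storage space
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(80))   # immediate with the cache; infeasibly slow without it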
The software metrics above, and the related aspects of quality assurance, can be summarized as
follows. Researchers define a hierarchical software characteristic tree in which an arrow indicates
logical implication. The lowest-level characteristics are combined into medium-level characteristics,
and the lowest-level characteristics are recommended as quantitative metrics. Each characteristic is
defined, then evaluated by its correlation with program quality, its potential benefits in terms of
insights and decision inputs for the developer and user, whether it is quantifiable, and the feasibility
of automating its evaluation. The list is more useful as a check for programmers than as a guide to
program construction.
SOFTWARE TESTING
Defect testing - Testing programs to establish the presence of system defects. The goal is to
discover defects in programs. A successful defect test is a test which causes a program to behave
in an anomalous way. Tests show the presence, not the absence, of defects.
Test data - Inputs which have been devised to test the system
Test cases - Inputs to test the system and the predicted outputs from these inputs if the
system operates according to its specification
Testing guidelines
White Box Testing - also referred to as structural testing; it is the derivation of test cases
according to program structure, where knowledge of the program is used to identify additional
test cases, especially for exercising conditions and array elements
Path testing - The objective of path testing is to ensure that the set of test cases is such that each
path through the program is executed at least once. The starting point for path testing is a
program flow graph that shows nodes representing program decisions and arcs representing the
flow of control
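A minimal sketch of path testing on a hypothetical function: its flow graph contains two
decisions, and the test cases are chosen so that every path through the code is executed at
least once:

    def classify(mark):
        if mark < 0 or mark > 100:      # decision 1
            return "invalid"
        if mark >= 50:                  # decision 2
            return "pass"
        return "fail"

    # (input, expected output) pairs -- one test case per program path
    test_cases = [(-5, "invalid"), (101, "invalid"), (75, "pass"), (30, "fail")]
    for data, expected in test_cases:
        assert classify(data) == expected, "path failed for input %s" % data
    print("every path executed at least once")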
Top-down testing - Starts with the high-level system and integrates from the top downwards,
replacing individual components by stubs where appropriate
Bottom-up testing - Integrates individual components at the various levels until the complete
system is created.
Interface testing - Takes place when modules or sub-systems are integrated to create larger
systems. Objectives are to detect faults due to interface errors or invalid assumptions about
interfaces. Particularly important for object-oriented development as objects are defined by their
interfaces
Interface types include parameter interfaces, shared-memory interfaces, procedural interfaces
and message-passing interfaces
Stress testing - Stress testing checks for unacceptable loss of service or data. It is particularly
relevant to distributed systems, which can exhibit severe degradation as a network becomes
overloaded. Stressing the system often causes defects to be revealed.
Object-oriented testing - The components to be tested are object classes that are instantiated as
objects; an extension of white-box testing.
Scenario-based testing - Identify scenarios from use-cases and supplement these with
interaction diagrams that show the objects involved in the scenario
SOFTWARE VERIFICATION AND VALIDATION
Verification and Validation are concerned with assuring that a software system meets a user's
needs
Validation: validation shows that the program meets the customer's needs; the software should
do what the user really requires. The designers are guided by the question of whether they are
building the right product
Verification: verification shows conformance with the specification; the software should conform
to its functional specification. The designers are guided by the question of whether they are
building the product right
Static verification (software inspections) is concerned with analysis of the static system
representation to discover problems within the software product, based on document and code
analysis
Dynamic verification (software testing) is concerned with exercising and observing product
behaviour: the system is executed with test data and its operational behaviour is observed
Program testing - is done to reveal the presence of errors NOT their absence. A successful test
is a test which discovers one or more errors. Testing is the only validation technique for
non-functional requirements
Verification and validation should establish confidence that the software is fit for the purpose it
is designed for. This does NOT mean completely free of defects; rather, it must be good enough
for its intended use, which determines the degree of confidence needed. This depends on the
system's purpose, user expectations and the marketing environment:
Software function - the level of confidence depends on how critical the software is to an
organisation
User expectations - Users may have low expectations of certain kinds of software
Marketing environment - getting a product to the market early may be more important than
finding all the defects in the program
SOFTWARE TESTING
Definition 1 - Testing involves actual execution of program code using representative test data
sets to exercise the program; the outputs are examined to detect any deviation from the expected
output
Definition 2 - Testing is classified as a dynamic verification and validation activity
Objectives of Testing
Acceptance testing is also known as alpha testing; it is the last stage of testing.
In this case the system is tested with real data (from client) and not simulated test data.
Acceptance testing:
Reveals errors and omissions in systems requirements definition.
Tests whether the system meets the users' needs and whether the system performance is acceptable.
Acceptance testing is carried out till users /clients agree it’s an acceptable implementation of the
system.
N/B 2:
The five steps of testing are based on incremental system integration, i.e.
(unit testing - module testing - sub-system testing - system testing - acceptance testing). But
object-oriented development is different, and the levels are less clear/distinct:
operations and data combine to form objects - the units
objects are integrated to form classes - the equivalent of modules
Therefore class testing is also known as cluster testing.
TEST PLANNING
Test planning is setting out standards for the testing process rather than describing product tests.
Test plans allow developers to get an overall picture of the system tests as well as ensure the
required hardware, software and resources are available to the testing team.
Components of a test plan:
Testing process
This is a description of the major phases of the testing process.
Requirements traceability
This is a plan to test all requirements individually (a sketch of one way to record this
follows the list below).
Testing schedule
This includes the overall testing schedule and resource allocation.
Test recording procedures
This is the systematic recording of test results.
Hardware and software requirements.
Here you set out the software tools required and hardware utilization.
Constraints
This involves anticipation of hardships /drawbacks affecting testing e.g. staff shortage
should be anticipated here.
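A minimal sketch of how such a requirements-traceability record might be kept (the requirement
and test-case identifiers are hypothetical):

    # Map each requirement to the test cases that cover it, so that
    # requirements with no planned test stand out immediately.
    traceability = {
        "REQ-01 user login":  ["TC-101", "TC-102"],
        "REQ-02 audit trail": ["TC-201"],
        "REQ-03 backup":      [],           # not yet covered by any test
    }

    untested = [req for req, tests in traceability.items() if not tests]
    print("Requirements with no test case:", untested)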
TESTING STRATEGIES
This is the general approach to the testing process.
There are different strategies depending on the type of system to be tested and the development
process used:
Top-down testing
This involves testing from most abstract component downwards.
Bottom-up testing
This involves testing from fundamental components upwards.
Thread testing
This is testing for systems with multiple processes where the processing of transactions
threads through these processes.
Stress testing
This relies on stressing the system by going beyond its specified limits, therefore
testing how well it can cope with overload situations.
Back to back testing
It is used to test different versions of a system and compare their outputs.
Top-down testing
Tests the high levels of a system before testing its detailed components. The program is represented
as a single abstract component with sub-components represented by stubs.
Stubs have the same interface as the component, but limited functionality.
After the top-level component (the system program) is tested, its sub-components (sub-systems) are
implemented and tested in the same way, and this continues down to the bottom components (units).
If top-down testing is used:
- unnoticed (structural) errors may be detected early
- validation is done early in the process.
Bottom-up testing
This is the opposite of top-down testing: modules at the lower levels in the hierarchy are tested
first, then working up the hierarchy to the final level.
The advantages of bottom-up testing are the disadvantages of top-down testing, and vice versa:
1. Architectural faults are unlikely to be discovered till much of the system has been tested.
2. It is appropriate for object oriented systems because individual objects can be tested using their
own test drivers, then integrated and collectively tested.
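A minimal sketch of a stub and a test driver (all names are illustrative): the stub has the real
component's interface but only canned behaviour, as used in top-down testing, while the driver
exercises a low-level component on its own, as used in bottom-up testing:

    def tax_rate_stub(country):
        # Stub: same interface as the real lookup, limited functionality.
        return 0.2

    def invoice_total(net, country, tax_rate=tax_rate_stub):
        # High-level component under top-down test, using the stub below it.
        return net * (1 + tax_rate(country))

    print(invoice_total(100.0, "KE"))   # 120.0, tested before the real lookup exists

    def driver_for_tax_rate(rate_fn):
        # Driver: throwaway code that tests a low-level component alone.
        assert 0.0 <= rate_fn("KE") <= 1.0
        print("tax-rate component passes its driver")

    driver_for_tax_rate(tax_rate_stub)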
SOFTWARE MAINTENANCE
Definition 1 - Maintenance is the process of changing a system after it has been delivered and is
in use.
Simple - correcting coding errors
Extensive - correcting design errors.
Enhancement - correcting specification errors or accommodating new requirements.
Definition 2 - Maintenance is the evolution i.e. process of changing a system to maintain its ability
to survive.
The maintenance stage of system development involves
a) correcting errors discovered after other stages of system development
b) improving implementation of the system units
c) enhancing system services as new requirements are perceived
Information is fed back to all previous development phases and errors and omissions in original
software requirements are discovered, program and design errors found and need for new software
functionality identified.
Corrective maintenance - This involves fixing errors discovered in the software (coding errors,
design errors, requirements errors). Once the software is implemented and is in full operation, it is
examined to see if it has met the objectives set out in the original specifications. Unforeseen
problems may need to be overcome, which may involve returning to earlier stages in the system
development life cycle to take corrective action
repairs and maintenance
safety precautions
data control
Train user to get help on the system.
Maintenance costs (fixing bugs) are usually higher than the original development cost due to:
I. The program being maintained may be old, and not consistent with modern software engineering
techniques. It may be unstructured and optimized for efficiency rather than
understandability.
II. Changes made may introduce new faults, which trigger further change requests. This is mainly
because the complexity of the system may make it difficult to assess the effects of a change.
III. Changes made tend to degrade system structure, making it harder to understand and make
further changes (program becomes less cohesive.)
IV. The program may lose its links to its associated documentation, making the documentation
unreliable and creating the need for new documentation.
Module independence - Use of design methods that allow easy change through concepts such as
functional independence or object classes (where one can be maintained independently)
Programming language and style - Use of a high-level language and adoption of a consistent style
throughout the code.
Program validation and testing - Comprehensive validation of system design and program
testing will reduce corrective maintenance.
Configuration management - Ensure that all system documentation is kept consistent throughout
the various releases of the system (documenting new editions.)
Understanding of current system and staff availability - Original development staff may not
always be available. Undocumented code can be difficult to understand (team management).
The maintenance process is triggered by:
(a) a set of change requests from users, management or customers;
(b) the cost and impact of the changes are assessed; if acceptable,
(c) a new release is planned involving the maintenance elements (adaptive, corrective, perfective..)
NB. Changes are implemented and validated, and new versions of the system released.
CONFIGURATION MANAGEMENT
Definition 2
The process which controls the changes made to a system, and manages the different versions of
the evolving software product. It involves development and application of procedures and
standards for managing an evolving system product. Procedures should be developed for building
systems and releasing them to customers.
Standards should be developed for recording and processing proposed system
changes and for identifying and storing different versions of the system.
Configuration managers (team) are responsible for controlling software changes. Controlled
systems are called baselines. They are the starting point for controlled evolution.
Configuration managers are responsible for keeping track of difference between software versions
and ensuring new versions are derived in a controlled way.
Are also responsible for ensuring that new versions are released to the correct customers at the
appropriate time
Main configuration management’s activities:
1. Configuration management planning (planning for product evolution)
2. Managing changes to the systems
3. Controlling versions and releases (of systems)
4. Building systems from other components
The configuration database is used to record all relevant information relating to configurations, to:
a) assist with assessing the impact of system changes
b) provide management information about configuration management.
The configuration database defines/describes:
the customers who have taken delivery of a particular version
the hardware and operating-system requirements to run a given version
the number of versions of the system made so far and when they were made, etc
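A minimal sketch of what one record in such a configuration database might hold (the fields
follow the list above; all values are hypothetical):

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class VersionRecord:
        version: str                    # version/release identifier
        created: date                   # when the version was made
        os_requirements: str            # hardware/OS needed to run it
        customers: list = field(default_factory=list)   # who took delivery

    config_db = [
        VersionRecord("2.1", date(2003, 5, 1), "Windows NT 4 or later",
                      ["AcmeBank"]),
        VersionRecord("2.2", date(2004, 1, 15), "Windows 2000 or later",
                      ["AcmeBank", "CityGov"]),
    ]

    # Which customers have taken delivery of version 2.2?
    print(next(r.customers for r in config_db if r.version == "2.2"))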
Configuration Management processes are standardised and involve applying pre-defined
procedures so as to manage large amounts of data
Configuration Management tools
Form editor - to support processing the change request forms
Workflow system - to define who does what and to automate information transfer
Change database - that manages change proposals and is linked to a version management (VM) system
Version and release identification - Systems assign identifiers automatically when a new
version is submitted to the system
Storage management - System stores the differences between versions rather than the full code
of every version (a sketch of this follows the list below)
Change history recording - Record reasons for version creation
Independent development - Only one version at a time may be checked out for change, or
parallel working on different versions is supported
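A minimal sketch of delta storage using the standard library's difflib (the two file versions
are hypothetical):

    import difflib

    v1 = ["total = net\n"]
    v2 = ["total = net * (1 + rate)\n"]

    # Only this difference is stored, not the whole text of version 2;
    # version 2 can be rebuilt from version 1 plus the delta.
    delta = list(difflib.unified_diff(v1, v2, "v1", "v2"))
    print("".join(delta))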
DISADVANTAGES OF CASE TOOLS
CASE products can be expensive
CASE technology is not yet fully evolved so its software is often large and inflexible
Products may not provide a fully integrated development environment
There is usually a long learning period before the tools can be used effectively, i.e. no
immediate benefits are realized
Analysts must have a mastery of the structured analysis and design techniques if they are to
exploit CASE tools
Time and cost estimates may have to be inflated to allow for an extended learning period of
CASE tools
Documentation should be
Clear and unambiguous
Structured and directive
Readable and presentable
Tool-assisted (CASE tools) in production (automation).
SYSTEM DOCUMENTATION
Items for documentation to be produced for a software product include:-
System Request - this is a written request that identifies deficiencies in the current system
and requests change
Feasibility Report – this indicates the economic, legal, technical and operational feasibility of
the proposed project
Preliminary Investigation Report – this is a report to the management clearly specifying the
identified problems within the system and what further action to be taken is also recommended
System Requirements Report - this specifies the entire end-user and management
requirements, all the alternative plans, their costs and the recommendations to the management
System Design Specification – it contains the designs for the inputs, outputs, program files and
procedures
User Manual – it guides the user in the implementation and installation of the information
system
Maintenance Report – a record of the maintenance tasks done
Software Code – this refers to the code written for the information system
Test Report – this should contain test details e.g. sample test data and results etc
Tutorials - a brief demonstration and exercise to introduce the user to the working of the
software product
SOFTWARE DOCUMENTATION
The typical items included in the software documentation are
Introduction – shows the organization’s principles, abstracts for other sections and notation
guide
Computer characteristics – a general description with particular attention to key attributes and
summarized features
Hardware interfaces – a concise description of information received or transmitted by the
computer
Software functions – shows what the software must do to meet requirements, in various
situations and in response to various events
Timing constraints – how often and how fast each function must be performed
Accuracy constraints - how close output values must be to the ideal/expected values for them to
be acceptable
Response to undesired events – what the software must do in events e.g. sensor goes down,
invalid data etc
Program sub-sets - what the program should do if it cannot do everything
Fundamental assumptions – the characteristics of the program that will stay the same, no
matter what changes are made
Changes – the type of changes that have been made or are expected
Sources – annotated list of documentation and personnel, indicating the types of questions each
can answer
Glossary - defines the acronyms and technical terms with which most documentation is fraught
This concerns identifying key issues or measures that should show where a program is deficient.
Managers must decide on the relative importance of:
Quality assurance
Quality management system
- relevant procedures and standards to be followed
- quality assurance assessments to be carried out
Software quality assurance techniques:
Correctness - ensures the system operates correctly, provides value to its user and performs
the required functions; defects must therefore be fixed/corrected
Maintainability - is the ease with which the system can be corrected if an error is encountered,
adapted if its environment changes, or enhanced if the user desires a change in requirements
Integrity - is the measure of the system ability to withstand attacks (accidental or intentional) to
its security in terms of data processing, program performance and documentation
Usability - is the measure of the user-friendliness of a system, measured in terms of the physical
and intellectual skills required to learn the system, the time required to become moderately
efficient in using it, the net increase in productivity when used by a moderately efficient user, and
the general user attitude towards the system.
QUALITY ASSURANCE
Since quality should be measurable, quality assurance mechanisms need to be put in place
Quality Assurance consists of the auditing and reporting functions of management
Quality Assurance must outline the standards to be adopted, i.e. either internationally recognized
standards or client-designed standards
Quality Assurance must lay down the working procedure to be adopted during project lifetime,
which includes: -
Safety aspects and Resource usage
Quality Assurance builds client confidence (increases acceptability) as well as the contractor's own
confidence in knowing that they are building the right system and that it will be highly acceptable
Testing and error correction assures system will perform as expected without defects or collapse
and also ensures accuracy and reliability.
System development is a profession and belongs to the engineering discipline that employs
scientific methods in solving problems and providing solutions to the society.
A profession is an employment (not merely mechanical) that requires some degree of learning; a
calling or habitual employment; a collective body of persons engaged in the same calling
The main professional task in system development is on management of the tasks, with an aim of
producing system that meets users’ needs, on time and within budget.
Therefore main concerns of the management are: - Planning, Progress monitoring and Quality
control
There are a number of tasks carried out in an engineering organization and are classified into their
function: -
Production - activities that directly contribute to creating products and services the organization
sells
Quality management - activities necessary to ensure that the quality of products / services is
maintained at the agreed level
Research and development - ways of creating / improving products and production process
Sales and Marketing - selling products / services and involves activities such as advertising,
transporting, distribution etc
INDIVIDUAL PROFESSIONAL RESPONSIBILITIES
Do not harm others – ethical behaviour concerned with both helping clients satisfy their needs
and not hurting them
Be competent – IT Professionals master the complex body of knowledge in their profession; a
challenging issue because IT is dynamic and rapidly evolving field. Wrong advice to the client
can be costly
Maintain independence and avoid conflicts of interest - in exercising their professional
duties, they should be free from the influence, guidance or control of other parties e.g. vendors,
thus avoiding corruption and fraud
Match clients’ expectations – it is unethical to misrepresent either your qualifications or ability
to perform a certain job
Maintain fiduciary responsibility - IT Professionals must hold in trust information provided to them
Safeguard client and source privacy – ensure privacy of all private and personal information and
do not ‘leak it’
Protect records – safeguard records they generate and keep on business transactions with their
clients
Safeguard intellectual property – they are trustees of information and software and hence must
recognize that these are intellectual property that must be safeguarded
Provide quality information – the creator of information / products must disclose information
about the quality and even the source of information in a report or product record
Avoid selection bias – IT Professionals routinely make selection decisions at various stages of
the information life cycle. They must avoid the bias of the prevailing point of view. Selection is
related to censorship
Be a steward of a client's assets, energy and attention - provide information at the right time,
right place and at the right cost
Manage gate-keeping and censorship and obtain informed consent
Obtain confidential information and keep client confidentiality
Abide by laws, contracts, and license agreements; exercise professional judgement