
TMP3413Lecture08 Product Implementation

The document discusses implementation standards for software engineering projects. It covers design completion criteria, including having external specifications and detailed designs for all subsystems and modules before implementation begins. It also discusses implementing components in parallel once their specifications are complete. The document then outlines various implementation standards, including naming conventions, coding standards, size standards for estimating work, defect standards, and ways to prevent defects through improved processes, communication, tools, and oversight.

Uploaded by

Nurfauza Jali

TMP3413 Software Engineering Lab
Lecture 08: Product Implementation
Topics
• Design Completion Criteria.
• Implementation Standards.
• The Implementation Strategy.
• The IMP Scripts.
1.0 Design Completion Criteria.
• Before implementation starts, check whether the high-level design has been completed.
• At the first level, subdivide the system into subsystems, components, or modules.
• Follow the DES1 or DESn process steps described in Lecture 7.
• You need the external specifications for each subsystem, component, or module, and you should also have the detailed design of the highest-level logic for the system.
1.0 Design Completion Criteria.
1.1 Level of Design
1.2 Parallel Implementation
1.1 Level of Design (cont…)
• If the subsystems are large, repeat the high-level design process, this time for each subsystem or component.
• You should end up with the external specifications for the subsystem components and the detailed design of the highest-level logic of each one.
• The larger the system, the more of these design iterations are needed.
• At this point, you will have the external specifications for the system's lowest-level atomic modules.
1.1 Level of Design (cont…)
• With truly large systems, the successive system levels might be:
  • System
  • Subsystem
  • Product
  • Component
  • Module
1.1 Level of Design (cont…)
• Continue this iterative design process until you have produced the external specifications for the truly basic system elements.
• These elements are small enough to implement directly, and their sizes should generally be about 150 or fewer LOC.
• Although these modules may contain even lower-level objects or routines, these subordinate routines:
  • are relatively small,
  • have already been developed,
  • are reusable parts, or
  • are available library functions.
1.1 Level of Design (cont…)
• A substantial effort may be required to reach this lowest-level specification, but until you do, you are not ready to start implementation and should continue repeating the DES1 or DESn script.
• The reason is that design is a highly irregular process, and true understanding often requires more thorough analysis, and even implementation, of some areas before you can start to design the others.
1.2 Parallel Implementation
• With a large development team, the developers could start implementing any of the components as soon as they have completed their external specifications.
• Although it is risky to start implementing any part of the system before all of the highest-level specifications are defined, you can minimize such problems by breaking larger systems into components.
• Then implement the components when their external specifications are complete and have been inspected.
2.0 Implementation Standards (cont…)
• A few minutes spent defining standards early in the project can save a lot of time later.
• First, agree as a team on the standards you need, and then ask one or two team members to develop drafts.
• Then the team can productively discuss the drafts and agree on the content of the specific standards they will use.
• The quality/process manager leads the team's standards activities.
• The implementation standards add to and extend the standards defined during the design phase.
2.0 Implementation Standards
• In the following slides, we will discuss these standards issues:
  • Standards review
  • Naming, interface, and message standards
  • Coding standards
  • Size standards
  • Defect standards
  • Defect prevention
• Although defect prevention is not generally considered a standards issue, it is closely related to the defect standards and so is discussed here.
2.1 Standards Review (cont…)
• Be pragmatic when reviewing proposed standards.
• If a proposed standard looks as if it will work, use it, and then improve it after you have some experience with it.
• You will produce better standards in less time if you start with reasonable drafts and then update them after you have tried using them.
• Review the name, interface, and message standards developed during the design phase to ensure that they are still appropriate and are being properly used.
• Also, check that the list of reusable routines is complete and that all the team members are using it.
2.1 Standards Review
• Next, review the name glossary to ensure that everyone is using the same name for the same item and that all named system elements are recorded in the glossary.
• Check the component and sub-element names, and review the shared variable, parameter, and file names for consistency.
• Also, check the standard interfaces and messages to ensure that all the internal and external system interfaces and messages have been defined and have been recorded in the glossary.
• Make them known to and used by all team members.
2.2 Coding Standard
• A common coding standard ensures that all the team's code looks much the same and also facilitates code sharing among team members.
• This consistency will make code inspections easier, quicker, and more effective.
• A well-constructed coding standard also defines the commenting practices.
• Good commenting practices speed code reviews and make programs easier to enhance in subsequent development cycles.
• Reuse can often save a great deal of design and implementation time.
2.3 Size Standards (cont…)
• For requirements, you can use counts of either text pages or numbered paragraphs.
• The simplest measures of high-level design are counts of template pages, text lines, or use cases.
• For detailed design, pseudocode text lines are probably adequate.
• For small programs, you can use LOC estimates. Then, when you have an actual LOC count, use actual LOC to measure the size of the detailed-design products.
2.3 Size Standards
• In addition to LOC, most projects produce several types of products. Examples are documents such as the SRS and SDS.
• In the TSPi, we suggest that you use page counts to measure document size.
• Following are some of the product elements that might need size measures:
  • Requirements (SRS)
  • High-level design (SDS)
  • Detailed design
  • Screens and reports
  • Databases
  • Messages
  • Test scripts and test materials
2.4 Measuring the Sizes of Other Product Types (cont…)
• Measuring the remaining items is harder.
• If you still want additional measures, consider two questions.
• First, does the time spent on this product type represent a significant part of the project's work? If it does not, there will be little or no project benefit from measuring it.
• Because the principal reason for size measures is to help estimate and track the work, there is no need for size-based estimates when the development effort on any product type is a small part of the total job.
2.4 Measuring the Sizes of Other Product Types (cont…)
• Assuming that a product type represents a significant amount of work, you need a size measure that correlates with the time required to develop the product.
• If you cannot find one, your only choice is to use a count of the product elements as a simple size proxy.
• With screens, for example, use a screen count and historical data on the average time to develop a screen.
2.4 Measuring the Sizes of Other Product Types
• Divide the screen data into five size categories: very small (VS), small (S), medium (M), large (L), and very large (VL).
• Then subdivide your historical data into these categories and produce average development times for each category.
• With even more data, you could subdivide the product data into types, such as data entry, menu selection, and so on.
• In planning, you then estimate how many screens, for example, are required, and you judge how many fall into each of these five categories.
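Given such historical category data, the plan estimate reduces to a weighted sum. A minimal sketch follows; the hours per category are invented illustration values, not TSPi data:

```python
# Hypothetical historical averages: development hours per screen,
# by size category (VS/S/M/L/VL). These numbers are illustrative.
AVG_HOURS = {"VS": 1.0, "S": 2.5, "M": 5.0, "L": 9.0, "VL": 16.0}

def estimate_screen_hours(counts):
    """counts maps a size category to the number of screens judged
    to fall in it, e.g. {"S": 4, "M": 2}."""
    return sum(AVG_HOURS[cat] * n for cat, n in counts.items())

# 1*1.0 + 4*2.5 + 2*5.0 + 1*9.0 hours
print(estimate_screen_hours({"VS": 1, "S": 4, "M": 2, "L": 1}))  # 30.0
```

With more data, the same table could be keyed by (category, screen type) pairs, as the slide suggests.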
2.5 Defect Standards (cont…)
• Before you create any new defect types, consider the following points.
• First, there is an almost infinite variety of ways to define defect types. Even if you produced a superb standard, it probably would not materially affect your project's performance.
• Second, the reason to categorize defects into types is to help analyze and improve the development process. The common way to do this is to first analyze the data to identify the key types.
• Third, defect-type standards are useful only if they are kept small.
2.5 Defect Standards
• Many people make the mistake of confusing defect causes with defect types.
• They want types for incomplete requirements, poor application knowledge, design misunderstandings, or language inexperience. These are NOT defect types; rather, they are defect causes.
• The defect types, however, are generally data (70), function (80), or system (90).
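As a small sketch, a team could tally its logged defects by these type codes. Only the three codes named here are included; any further codes and names would come from the team's own defect-type standard:

```python
# Tally logged defects by defect-type code. The code-to-name map
# covers only the three types named in the slides.
from collections import Counter

DEFECT_TYPES = {70: "data", 80: "function", 90: "system"}

def tally_by_type(defect_log):
    """defect_log is a list of (defect_id, type_code) pairs."""
    return Counter(DEFECT_TYPES.get(code, "other") for _, code in defect_log)

log = [(1, 80), (2, 80), (3, 70), (4, 90), (5, 80)]
print(tally_by_type(log))  # Counter({'function': 3, 'data': 1, 'system': 1})
```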
2.6 Defect Prevention (cont…)
• An understanding of defect causes can help in defect prevention. Unfortunately, categorizing defect causes is hard.
• To devise effective preventive actions, go back and look at specific defect reports to understand the problems and then figure out how to prevent them.
• To limit the number of cause types, they must be very general, but you cannot devise preventions for a general cause.
• If you could, you wouldn't need the defect data.
2.6 Defect Prevention (cont…)
• The four cause categories suggested in [Humphrey, 2000]:
  • Education: learn more about the language, environment, or application
  • Communication: fix the communication process
  • Transcription: use better tools
  • Oversight: refine your process and use better methods
2.6 Defect Prevention (cont…)
• Although there are many ways to use defect causal analysis in defect prevention, the following approach might be helpful:
  • Pick the defect types that seem to be causing the most trouble. These defects may waste the most test time, be hardest to diagnose and fix, or otherwise be most annoying.
  • Examine a number of defects of this type to identify particular defect causes and decide which ones to address.
  • When you see a defect you think you can prevent, make a process change to prevent it.
  • Assuming this action is effective, start looking for the next defect category and handle it the same way.
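The first step above, picking the most troublesome type, can be sketched directly from the defect log. The type names and fix times below are illustrative:

```python
# Rank defect types by total fix time and pick the worst one to
# attack first. Log entries are (type_name, fix_minutes) pairs.
from collections import defaultdict

def worst_defect_type(defect_log):
    totals = defaultdict(int)
    for dtype, minutes in defect_log:
        totals[dtype] += minutes
    return max(totals.items(), key=lambda kv: kv[1])

log = [("function", 45), ("data", 10), ("function", 30), ("system", 25)]
print(worst_defect_type(log))  # ('function', 75)
```

Total fix time is one reasonable trouble measure; counts or diagnosis difficulty scores would work the same way.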
2.6 Defect Prevention
• The key to defect prevention is to look for ways to change what you do in order to prevent the defect.
• Then incorporate this change in your process.
• Next, track your performance to see how this change works.
• If the defect types persist, figure out why the previous change didn't work and adjust the process again.
• When you know the cause of a defect, note it in the comment space on the LOGD form.
3.0 The Implementation Strategy
• The implementation strategy should generally conform to the design strategy; you should implement programs consistently with the way you designed them.
• To avoid implementation and test problems, consider the following three topics:
  • Reviews
  • Reuse
  • Testing
3.1 The Implementation Strategy: Review (cont…)
• In design, start with the big picture and work down into detail.
• With reviews, consider starting with the details and working up to the big picture.
• When reviewing a component, for example, you will encounter called functions or subordinate objects.
• The most efficient strategy is to review from the bottom up.
3.1 The Implementation Strategy: Review (cont…)
• First, review all the lowest-level objects that have no subordinate parts.
• When you are sure that these atomic objects perform according to their external specifications, move to the next higher level.
• Then, when you encounter these objects in the next higher-level reviews, you can rely on them and need not review them again.
• To follow this bottom-up strategy, also implement the lowest-level objects first and then move progressively up to higher levels.
3.1 The Implementation Strategy: Review
• By finding these specification problems early, you can fix them before they are widely implemented.
• This practice can save a substantial amount of testing and rework time.
• Because the lowest-level atomic objects are easiest to reuse, a bottom-up implementation strategy also facilitates reuse.
3.2 The Implementation Strategy: Reuse (cont…)
• By following some simple implementation practices, you can make programs much easier to reuse.
• For example, use standard comment headings for every source program.
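As one hedged illustration, a team could even automate the check that every source file starts with the agreed comment heading. The header fields and the check below are invented for illustration, not a TSPi standard:

```python
# Check that a source file begins with the team's standard comment
# header. The required field names are illustrative assumptions.
REQUIRED_FIELDS = ("Module:", "Author:", "Date:", "Description:")

def has_standard_header(source_text):
    """Return True if the first ten lines contain every required field."""
    head = "\n".join(source_text.splitlines()[:10])
    return all(field in head for field in REQUIRED_FIELDS)

example = """# Module:      screen_report.py
# Author:      (engineer name)
# Date:        2024-03-01
# Description: Formats the weekly status report screens.
def report(): ...
"""
print(has_standard_header(example))  # True
```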
4.0 The IMP Scripts
• The IMP1 and IMPn scripts are shown in Tables 8.1 and 8.2:
  • IMP1 - initial product implementation, and
  • IMPn - implementing the subsequent enhancement cycles.
• To meet the IMP1 entry criteria, you need:
  • a completed development strategy and plan,
  • completed, reviewed, and updated SRS and SDS specifications,
  • a defined and documented coding standard, and
  • copies of the routines functional-specification list, the name glossary, and all the other standards the team has adopted.
4.1 Implementation Planning (cont…)
• The first step is to review the work, making sure all the tasks are assigned to the various team members.
• Because some engineers are better implementers than others, however, different assignments could make sense.
• One approach is to have some engineers concentrate on design and others specialize in implementation.
• The key is to consider the interests and abilities of each of the engineers in making these assignments.
• If necessary, the team leader can help the team make these assignments.
4.1 Implementation Planning
• For those tasks that are expected to take only a few hours, a simple time estimate is usually adequate.
• For larger jobs, such as implementing a program object or module, make a simple plan using the SUMP and SUMQ forms.
• For substantial non-programming tasks, make a plan using form SUMTASK.
• After you have made the plans, update your TASK and SCHEDULE forms to reflect the new level of planning detail and produce a new earned-value plan.
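A minimal earned-value sketch, under the usual TSPi-style rule that a task earns its planned value only when it is fully complete (partial credit is not given); the task names and hours are invented:

```python
# Earned value = share of total planned hours held by completed tasks.
tasks = {  # task: (planned_hours, done?)
    "detailed design A": (6.0, True),
    "code A":            (4.0, True),
    "unit test A":       (2.0, False),
    "detailed design B": (8.0, False),
}

total_planned = sum(h for h, _ in tasks.values())
earned = sum(h for h, done in tasks.values() if done)
earned_value = 100.0 * earned / total_planned
print(earned_value)  # 10 of 20 planned hours complete -> 50.0
```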
4.2 Detailed Design and Design Review (cont…)
• The first step is to develop the detailed design for the program modules you plan to implement.
• It is usually most efficient to carry one program all the way from detailed design through unit test before starting the detailed design of the next program.
• Be guided by the specific case, and follow the design strategy that makes the most sense.
4.2 Detailed Design and Design Review
• The next step is to conduct a personal design review of each module or object.
• In this review, scan the design with a checklist.
• Check the loops and all complex logic with trace tables or a state-machine analysis.
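The trace-table check above can be illustrated with a small sketch: record each iteration's variable values so a reviewer can verify loop entry, stepping, and termination. The summing loop itself is an invented example:

```python
# Build a trace table for a simple loop: one row per iteration,
# recording (iteration, i, total) so a reviewer can check the logic.
def trace_sum(n):
    """Sum 1..n, returning the trace-table rows."""
    rows, total, i, step = [], 0, 1, 0
    while i <= n:       # loop entry and termination condition under review
        total += i
        step += 1
        rows.append((step, i, total))
        i += 1          # stepping under review
    return rows

for row in trace_sum(3):
    print(row)
# (1, 1, 1)
# (2, 2, 3)
# (3, 3, 6)
```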
4.3 Test Development
• After fixing the problems found in the review, develop any special unit-test code and facilities.
• Because test development usually finds more design problems than either the design inspection or unit testing, it is important to do this test development before the detailed-design inspection.
• The test-development work should follow the test plan and include checks of all:
  • the logic decisions,
  • logic paths,
  • loop stepping,
  • and termination conditions.
4.4 Detailed-Design Inspection
• Following test development, have the quality/process manager lead the team through a detailed-design inspection.
• Although simple inspections need not involve the quality/process manager, enter the inspection data on an INS form and the major defects on LOGD forms.
• Make sure that at least one engineer completes a trace table, execution table, or state-machine analysis for every loop or state-machine structure in the program.
4.5 Coding and Code Review (cont…)
• Look at the defect history and judge how many defects are likely to have been injected during coding.
• One approach is to note the number of defects per hour of coding.
• Another is to determine the number of defects per KLOC.
• Then set a target to find this number of defects in the code review.
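A sketch of the defects-per-KLOC approach follows. The injection rate and target yield are illustrative assumptions, not published TSPi figures:

```python
# Set a code-review defect target from a historical injection rate.
def review_defect_target(new_loc, injected_per_kloc, target_yield=0.7):
    """Estimate the defects likely injected in new code and how many
    the review should aim to find (yield = fraction found in review)."""
    expected = new_loc / 1000.0 * injected_per_kloc
    return expected, expected * target_yield

expected, target = review_defect_target(new_loc=500, injected_per_kloc=40)
print(expected, target)  # ~20 defects expected; aim to find ~14 in review
```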
4.5 Coding and Code Review
• To guide the code review, it is also a good idea to set a time target.
• A suggested minimum is at least 50% (and preferably 75%) of the time that you spent coding the program.
• Instead of looking at the code the same way every time, look for different things each time.
• Check name consistency one time; then initializations, punctuation, equals (=) versus equal-equal (==); and then pointers, calling sequences, and includes.
4.6 Code Inspection (cont…)
• After compilation, compare your design, design-review, code, and code-review times to the team quality plan. Check defect levels and defect rates.
• If your data indicate that you are having problems, or if you are not sure what the data mean, review your results with the quality/process manager.
4.6 Code Inspection (cont…)
• Use the quality criteria to tell whether program quality is reasonably good. Refer to Table 5.3 (TSPi CYCLE n DEVELOPMENT PLAN: SCRIPT PLANn).
  • The time spent in design should be greater than the coding time.
  • The time spent in design review should be greater than 50% of the design time.
  • The time spent in code review should be greater than 50% of the coding time (preferably greater than 75%).
  • You found at least twice as many defects in the code review as you did in compiling.
  • You found more than 3 defects per review hour.
  • Your review rate was less than 200 LOC per hour.
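The six criteria above can be checked mechanically. A minimal sketch, applied to one component's illustrative time and defect data:

```python
# Evaluate the listed quality criteria against one component's data.
def quality_check(d):
    return {
        "design > coding time":         d["design_hrs"] > d["code_hrs"],
        "design review > 50% design":   d["dr_hrs"] > 0.5 * d["design_hrs"],
        "code review > 50% coding":     d["cr_hrs"] > 0.5 * d["code_hrs"],
        "review defects >= 2x compile": d["cr_defects"] >= 2 * d["compile_defects"],
        "> 3 defects/review hour":      d["cr_defects"] / d["cr_hrs"] > 3,
        "< 200 LOC/review hour":        d["loc"] / d["cr_hrs"] < 200,
    }

data = {"design_hrs": 6, "code_hrs": 5, "dr_hrs": 3.5, "cr_hrs": 3,
        "cr_defects": 12, "compile_defects": 5, "loc": 450}
for criterion, ok in quality_check(data).items():
    print(criterion, ok)
```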
4.6 Code Inspection
• After the quality check, use the quality assessment results to guide the code inspection.
  • Good quality (over 70% consistently): have only one other engineer review the code.
  • Poor quality (below 70%): have two or more reviewers.
• If the program appears to have quality problems, have one or more engineers do another review before you fix the known defects, until you run out of reviewers.
4.7 Unit Test
• After the code inspection, the next step is to run the unit tests.
• Use the test materials that have been developed, and follow the test plan prepared during the detailed design.
4.8 Component Quality Review (cont…)
• Review the component data to determine whether component quality is good enough for inclusion in the baseline system.
• The criteria are the ones that the team established in the quality plan.
• As shown in the sample SUMQ form in Table 8.3, these data cover defect levels, defect ratios, process times, and time ratios.
• If a component has quality problems in any of these areas, the team should discuss the situation and decide what to do.
4.8 Component Quality Review
• The objective of this component quality review is to make sure that every logical program segment is analyzed at least once with an execution table or trace table.
• For every program segment that has state behavior, ensure that it is checked at least once with a state-machine analysis.
• When several analyses are done on the same routine, use different engineers.
4.9 Component Release
• After the completed component has passed the quality review, give it to the support manager to enter into the system baseline.
4.10 Exit Criteria
• To complete the implementation phase, the following must be in hand:
  • Completed, reviewed, and inspected components
  • The components entered into the configuration-management system
  • Completed INS and LOGD forms for the design inspections, code inspections, and reinspections
  • Updated SUMS and SUMQ forms for the system and all its component parts
  • Unit test plans and support materials
  • Size, time, and defect data
  • An updated project notebook
Summary
• The principal steps in the implementation process are:
  • implementation planning,
  • detailed design,
  • detailed-design inspection,
  • coding,
  • code inspection,
  • unit testing,
  • component quality review, and
  • component release.
• The implementation strategy should be consistent with the design strategy:
  • Implement in the same order as in design.
  • Consider testing, reuse, reviews, and reinspections.
