Chapter Exercise Questions

The document discusses various topics related to software design including what a software design is, software architecture, separation of concerns, coupling and portability, refactoring, design models, architectural styles and patterns, component design, quality attributes, and the software quality dilemma. Key aspects covered include that a software design is a blueprint for a system's structure and behavior, separation of concerns divides a system into modules, high coupling impacts portability, refactoring improves design iteratively, and there is a tradeoff between quality and cost/schedule.

Uploaded by

u1904060

Chapter 9

9.2. If a software design is not a program (and it isn’t), then what is it?
Ans:
A software design is a plan or blueprint that describes the structure, behavior, and
functionality of a software system. It is not the actual program code, but rather a
representation of the software system that serves as a guide for the development team.

9.4. Describe software architecture in your own words.


Ans:
Software architecture is the overall structure of a software system: its major
components, their externally visible properties, and the relationships and interactions
among them. Defining the architecture involves making the key design decisions about
how the system is partitioned and how its parts communicate. The architecture provides
a high-level view of the system and serves as a blueprint for the rest of the software
development process.

9.5. Describe separation of concerns in your own words. Is there a case when a “divide
and conquer” strategy may not be appropriate? How might such a case affect the
argument for modularity?
Ans:
Separation of concerns is the principle of dividing a software system into distinct
components or modules, each of which is responsible for a specific set of tasks. This
allows developers to focus on specific areas of the system without having to
understand the entire system's complexity. While divide and conquer is generally an
effective strategy, there may be cases where it is not appropriate. For example, if the
system's components are highly interdependent, dividing them into separate modules
may not be feasible. In such cases, the argument for modularity may need to be
reevaluated.

9.7. How are the concepts of coupling and software portability related? Provide
examples to support your discussion.
Ans:
Coupling refers to the degree of interdependence between software components.
The more tightly coupled components are, the more difficult it is to change one
component without affecting others. Software portability, on the other hand, refers to
the ease with which a software system can be transferred from one environment to
another. Highly coupled systems are generally less portable, as changes made to one
component may break the entire system when transferred to a new environment.
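To make the coupling/portability relationship concrete, here is a minimal sketch (the function names and paths are hypothetical, chosen only for illustration). The first function is coupled to one operating environment; the second isolates the platform dependency behind the standard library, so it ports unchanged.

```python
import os

# Tightly coupled to one environment: the Windows drive letter and path
# separator are hard-coded, so this breaks when the system moves to Linux.
def log_path_tight(user: str) -> str:
    return "C:\\logs\\" + user + ".txt"

# Loosely coupled: the platform dependency is isolated behind os.path,
# so the same code runs unmodified on Windows, Linux, or macOS.
def log_path_portable(base: str, user: str) -> str:
    return os.path.join(base, user + ".txt")
```

Porting the tightly coupled version means editing the function itself; porting the loosely coupled version means only supplying a different `base` directory.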
9.9. Does refactoring mean that you modify the entire design iteratively? If not, what
does it mean?
Ans:
Refactoring involves making changes to a software system's design or code to
improve its quality, maintainability, or performance. It does not necessarily mean
modifying the entire design iteratively, but rather making incremental changes that
improve the system's overall design.
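A small before/after sketch of such an incremental change (a hypothetical order-total example): the external behavior is identical, but the duplicated rule is extracted into a named helper, which is the essence of refactoring.

```python
# Before: the discount rule is tangled into one function.
def total_before(prices):
    total = 0
    for p in prices:
        if p > 100:
            total += p * 0.9     # 10% bulk discount, inlined
        else:
            total += p
    return round(total, 2)

# After: the rule is extracted into a named, reusable, testable helper.
# Behavior is unchanged; only the design improved.
def discounted(price):
    """Apply the 10% bulk discount to items over 100."""
    return price * 0.9 if price > 100 else price

def total_after(prices):
    return round(sum(discounted(p) for p in prices), 2)
```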

9.10. Briefly describe each of the four elements of the design model.
Ans:
The four elements of the design model are data/class design, architectural design,
interface design, and component-level design. Data/class design transforms the analysis
classes into the classes and data structures that the implementation will use.
Architectural design defines the major structural elements of the software and the
relationships among them. Interface design describes how the software communicates
with the systems that interoperate with it and with the people who use it.
Component-level design transforms the structural elements of the architecture into
procedural descriptions of the software components.

Chapter 10

10.4. The terms architectural style, architectural pattern, and framework (not discussed
in this book) are often encountered in discussions of software architecture. Do some
research, and describe how each of these terms differs from its counterparts.
Ans:
In summary, an architectural style represents a broad set of design decisions, an
architectural pattern is a specific solution to a recurring design problem, and a
framework is a set of pre-written code that provides a standard way of building and
deploying applications.
Chapter 11

11.2. Why are control components necessary in traditional software and generally not
required in object-oriented software?
Ans:
Control components are necessary in traditional software because they are often
implemented using procedural programming languages that lack the encapsulation and
abstraction features of object-oriented programming. Control components provide a
structured way to handle program flow and manage data processing. In contrast,
object-oriented software relies on object interactions and polymorphism to handle
program flow and data processing, making control components generally unnecessary.

11.5. Select three components that you have developed recently, and assess the types
of cohesion that each exhibits. If you had to define the primary benefit of high cohesion,
what would it be?
Ans:
The types of cohesion are functional cohesion, sequential cohesion, communicational
cohesion, procedural cohesion, temporal cohesion, logical cohesion, and coincidental
cohesion. The three components I have recently developed exhibit functional cohesion,
procedural cohesion, and communicational cohesion. The primary benefit of high
cohesion is that it leads to more maintainable and modular code, making it easier to
understand, debug, and extend.
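The contrast between high and low cohesion can be sketched in a few lines (hypothetical functions, chosen only to illustrate the two extremes):

```python
# Functional cohesion: every statement contributes to a single task,
# so the function is easy to name, test, and reuse.
def mean(values: list) -> float:
    return sum(values) / len(values)

# Coincidental cohesion: unrelated tasks bundled into one unit.  The
# module is hard to name and must change for unrelated reasons, which
# is exactly why high cohesion makes code more maintainable.
def misc_utils(values, text):
    avg = sum(values) / len(values)   # statistics...
    shout = text.upper()              # ...and string formatting
    return avg, shout
```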

11.6. Select three components that you have developed recently, and assess the types
of coupling that each exhibits. If you had to define the primary benefit of low coupling,
what would it be?
Ans:
The three components I have recently developed exhibit no direct coupling, message
coupling, and data coupling. The primary benefit of low coupling is that it reduces the
impact of changes made to one component on other components, leading to greater
flexibility and maintainability of the software system.

11.8. What is a WebApp component?


Ans:
A WebApp component is a software component that provides functionality for a
web application, such as handling HTTP requests and responses, managing sessions,
and interfacing with databases.
11.10. Why is “chunking” important during the component-level design review process?
Ans:
Chunking is important during the component-level design review process because it
helps break down a software system into smaller, manageable parts, making it easier to
analyze and evaluate each component's design and functionality. This leads to better
overall system design and ensures that each component works cohesively and with low
coupling.

Chapter 12

12.9. Provide a few examples that illustrate why response time variability can be an
issue.
Ans:
Response time variability can lead to unpredictable performance, affect user
experience, and cause problems in real-time applications. For example, in online
gaming, a high response time variability can cause lag and affect gameplay. In financial
systems, slow response times can cause transactions to time out, leading to financial
losses.

Chapter 14

14.2. What is the difference between a pattern and an anti-pattern?


Ans:
A pattern is a proven and repeatable solution to a commonly occurring problem,
while an anti-pattern is a commonly used but ineffective solution to a problem that
results in bad design or poor system performance.

14.3. How do architectural patterns differ from component patterns?


Ans:
Architectural patterns define the overall structure and organization of a system, while
component patterns focus on the design and implementation of individual components
or modules within a system.
14.4. What is a framework, and how does it differ from a pattern? What is an idiom, and
how does it differ from a pattern?
Ans:
A framework is a set of pre-existing classes and libraries that provide a structure and
functionality to build applications. In contrast, a pattern provides a reusable solution to
a specific design problem. An idiom is a way of using programming language features
to express a solution to a problem elegantly and concisely, whereas a pattern is a
general solution that may not be specific to any programming language.
Chapter 15

15.1 Describe how you would assess the quality of a university before applying to it.
What factors would be important? Which would be critical?

Ans:
When assessing the quality of a university, there are several factors to consider. One
critical factor is the reputation of the university, which can be assessed by looking at its
rankings and accreditation. Another important factor is the quality of its faculty and the
range of academic programs and resources available to students. Other factors that
may be important include the campus environment, student services, and
extracurricular activities.
15.4. Describe the software quality dilemma in your own words.

Ans:
The software quality dilemma refers to the trade-off between the cost of developing
high-quality software and the need to deliver software quickly and within budget. This
dilemma can be exacerbated by the complexity of modern software systems and the
need to integrate with other systems.

15.5. What is “good enough” software? Name a specific company and specific products
that you believe were developed using the good enough philosophy.

Ans:
"Good enough" software refers to software that meets the minimum requirements
for its intended use but may not be perfect or highly optimized. A company that has
adopted the "good enough" philosophy is Google with its search engine.

15.6. Considering each of the four aspects of the cost of quality, which do you think is
the most expensive and why?
Ans:
The cost of quality has four aspects: prevention costs, appraisal costs, internal
failure costs, and external failure costs. External failure costs are generally the most
expensive because they arise from defects found after the product has shipped: they
include field repair, customer support, warranty work, liability, and damage to
reputation. The cost of fixing a defect grows the later it is discovered, so money spent
up front on prevention and appraisal typically avoids much larger external failure
costs downstream.

Chapter 16

16.2. Why can’t we just wait until testing to find and correct all software errors?
Ans:
Waiting until testing to find and correct all software errors can be very costly and
time-consuming. This approach can result in a significant number of defects being
discovered late in the development cycle. It is much more efficient and effective to
identify and correct errors early in the development process.

16.7. Can you think of a few instances in which a desk check might create problems
rather than provide benefits?
Ans:
Desk checking is a manual review technique that involves reviewing code or
documentation by reading through it without executing the code or testing the software.
However, desk checks can create problems if the reviewer is not familiar with the
programming language or development tools being used. In such cases, the reviewer
may miss errors or make incorrect assumptions about the code's behavior, leading to
inaccurate or incomplete reviews.

16.8. A formal technical review is effective only if everyone has prepared in advance.
How do you recognize a review participant who has not prepared? What do you do if
you’re the review leader?
Ans:

A review participant who has not prepared can be identified by their lack of engagement
or participation during the review. They may not have read the documentation or
reviewed the code beforehand. As the review leader, it is important to address this
situation by encouraging the participant to prepare beforehand and providing guidance
and support if necessary. If the participant continues to be unprepared, the leader may
need to consider rescheduling the review or reassigning the participant to a different
role.
Assume that 10 errors have been introduced in the requirements model and that each error will be amplified by a
factor of 2:1 into design, an additional 20 design errors are introduced and then amplified 1.5:1 into code, where an
additional 30 errors are introduced. Assume further that unit testing will find 30 percent of all errors, integration testing
will find 30 percent of the remaining errors, and validation tests will find 50 percent of the remaining errors. No reviews
are conducted. How many errors will be released to the field?

Answer:

Number of errors introduced in requirements analysis = 10

Each requirements error is amplified by a factor of 2:1 into design, so the number of errors carried into design = 10 × 2 = 20.

Additional errors introduced during design = 20

Total number of errors at the end of design = 20 + 20 = 40

Each design error is amplified by a factor of 1.5:1 into code, so the number of errors carried into code = 40 × 1.5 = 60.

Additional errors introduced in code = 30

Total number of errors at the beginning of testing = 60 + 30 = 90

Unit testing finds 30 percent of these errors: 90 × 0.30 = 27 found, leaving 90 − 27 = 63 errors.

Integration testing finds 30 percent of the remaining errors: 63 × 0.30 ≈ 19 found, leaving 63 − 19 = 44 errors.

Validation testing finds 50 percent of the remaining errors: 44 × 0.50 = 22 found, leaving 22 errors.

Thus 22 errors are released to the field.

Problem:
Reconsider the situation described in Problem 20.3, but now assume that requirements, design, and code reviews
are conducted and are 60 percent effective in uncovering all errors at that step. How many errors will be released to
the field?

Answer:

In the requirements model, 10 errors are introduced; reviews uncover 60 percent of them (6 errors), leaving 4. These
4 errors are amplified by a factor of 2:1 into design, so 4 × 2 = 8 errors are carried forward.

In design, the 8 carried errors join 20 newly introduced errors for a total of 28. Reviews uncover 60 percent of these
(about 17 errors), leaving 11. These are amplified by a factor of 1.5:1 into code, so 11 × 1.5 ≈ 17 errors are carried
forward.

In code, the 17 carried errors join 30 newly introduced errors for a total of 47. Reviews uncover 60 percent of these
(about 28 errors), leaving 19.

In testing, unit testing finds 30 percent (about 6 errors), leaving 13; integration testing finds 30 percent of those
(about 4 errors), leaving 9; validation testing finds 50 percent of those (about 4 errors), leaving 5.

Thus about 5 errors are released to the field.

Problem:

Reconsider the situation described in Problems 20.3 and 20.4. If each of the errors released to the field costs $4,800
to find and correct and each error found in review costs $240 to find and correct, how much money is saved by
conducting reviews?

Answer:

Without reviews, 22 errors are released to the field, so the total cost to find and correct them is 22 × $4,800 =
$105,600.

With reviews, the errors uncovered in reviews number 6 + 17 + 28 = 51, at a cost of 51 × $240 = $12,240. The 5
errors released to the field cost 5 × $4,800 = $24,000, giving a total cost of $12,240 + $24,000 = $36,240.

Therefore, by conducting reviews, the amount of money saved is $105,600 − $36,240 = $69,360.
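As a cross-check, the review/no-review calculations above can be scripted. This is a sketch of the defect amplification model as used in the worked answers; the function name and structure are my own, and fractional error counts are kept throughout and rounded only at the end, so intermediate values may differ slightly from the step-by-step roundings above.

```python
# Defect amplification model: errors surviving a phase are amplified into
# the next phase, new errors are added, a review (if any) removes a fixed
# fraction, and each testing step removes a fixed fraction of the rest.

def propagate(review_eff):
    """Return (errors reaching the field, errors found in reviews)."""
    errors, found_in_reviews = 0.0, 0.0
    # (new errors introduced in phase, amplification factor INTO the phase)
    for new, amplify in [(10, 1.0), (20, 2.0), (30, 1.5)]:  # req, design, code
        errors = errors * amplify + new
        found_in_reviews += errors * review_eff
        errors *= 1.0 - review_eff            # review at end of phase
    for found_frac in (0.30, 0.30, 0.50):     # unit, integration, validation
        errors *= 1.0 - found_frac
    return errors, found_in_reviews

no_rev, _ = propagate(0.0)          # ~22 errors released without reviews
with_rev, in_rev = propagate(0.60)  # ~5 released, ~51 caught in reviews

saved = round(no_rev) * 4800 - (round(with_rev) * 4800 + round(in_rev) * 240)
# 22*4800 - (5*4800 + 51*240) = 69360
```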
Variant (the list below continues from a total of 90 errors at the start of testing; here integration testing finds only 20
percent of the remaining errors, and each field error costs Tk. 55,000 to find and correct):

1. Unit testing finds 30% of the errors:
● Number of errors found = 0.3 * 90 = 27
● Number of errors remaining = 90 - 27 = 63
2. Integration testing finds 20% of remaining errors:
● Number of errors found = 0.2 * 63 = 12.6 (rounded to 13)
● Number of errors remaining = 63 - 13 = 50
3. Validation testing finds 50% of remaining errors:
● Number of errors found = 0.5 * 50 = 25
● Number of errors remaining = 50 - 25 = 25
4. Errors released to end users:
● Number of errors found during testing = 27 + 13 + 25 = 65
● Number of errors released to end users = 90 - 65 = 25
5. Cost of errors released to end users:
● Cost to find and correct each field error = Tk. 55,000
● Total cost of errors released to end users = 25 * Tk. 55,000 = Tk. 1,375,000

Note: The cost of errors found during reviews is unknown here and therefore cannot be included in the calculation.
Conducting reviews, however, catches errors earlier in the process, reducing the number of errors that reach testing
and, ultimately, the end users.

Chapter 19
19.1. Using your own words, describe the difference between verification and validation.
Do both make use of test-case design methods and testing strategies?
Ans:
Verification is the process of checking whether the software product or system
meets the specified requirements and design specifications. It involves reviews,
walkthroughs, and inspections to ensure that the software is built according to the given
design. On the other hand, validation is the process of checking whether the software
meets the actual needs and requirements of the end-users. It involves testing and user
acceptance to ensure that the software solves the intended problems.

Both verification and validation make use of test-case design methods and testing
strategies. Test-case design methods are used to identify test cases that can effectively
check if the software meets the specified requirements and if it works as intended.
Testing strategies help in identifying the most efficient and effective way of testing the
software to ensure its quality. Therefore, both verification and validation rely on these
methods and strategies to ensure the quality of the software.

19.3. Why is a highly coupled module difficult to unit test?


Ans:
A highly coupled module is difficult to unit test because it depends on other
modules, making it hard to isolate its behavior for testing. Changes to those other
modules can alter the behavior of the coupled module, making it difficult to identify
the root cause of defects or issues.
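A short sketch of the problem and the usual remedy (hypothetical names): the first function constructs its collaborator internally and reaches out to a real SMTP server, so it cannot be tested in isolation; the second accepts the collaborator as a parameter, so a unit test can substitute a fake.

```python
import smtplib

# Hard to unit test: the dependency on a live SMTP server is hard-wired,
# so any test of this function also tests the network and the server.
def notify_tight(address, message):
    server = smtplib.SMTP("localhost")
    server.sendmail("app@example.com", [address], message)
    server.quit()

# Testable: the collaborator is injected, decoupling the module from the
# real server and letting a test observe its behavior in isolation.
def notify_loose(address, message, mailer):
    mailer.sendmail("app@example.com", [address], message)

class FakeMailer:
    """Test double that records calls instead of sending mail."""
    def __init__(self):
        self.sent = []
    def sendmail(self, sender, recipients, body):
        self.sent.append((sender, recipients, body))
```

A unit test passes a `FakeMailer` to `notify_loose` and asserts on `fake.sent`, with no network involved.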

19.4. Is unit testing possible or even desirable in all circumstances? Provide examples
to justify your answer.
Ans:
Unit testing is not always possible or desirable in all circumstances. One example
where unit testing is not possible is when working with legacy code that lacks proper
design and is tightly coupled. It may be difficult to isolate a particular component or
module for unit testing without affecting other parts of the system. Additionally, in some
cases, the effort and cost involved in creating unit tests may not justify the benefits,
such as in small applications or in situations where the code is simple and
straightforward.

19.6. Select a software component that you have designed and implemented recently.
Design a set of test cases that will ensure that all statements have been executed using
basis path testing.
Ans:
Suppose we have a software component that calculates the area of a circle given
the radius as input. We can design a set of test cases to ensure that all statements have
been executed using basis path testing.

1. Test case 1: Input r = 0. Expected output: A message indicating that the input is
invalid and cannot calculate the area of the circle.
2. Test case 2: Input r = 1. Expected output: The area of the circle with radius 1,
which should be approximately 3.14.
3. Test case 3: Input r = -1. Expected output: A message indicating that the input is
invalid and cannot calculate the area of the circle.
4. Test case 4: Input r = 2. Expected output: The area of the circle with radius 2,
which should be approximately 12.56.
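A minimal version of the component described above (my own sketch of it): there is a single decision, so basis path testing needs only the two paths that the four test cases exercise.

```python
import math

def circle_area(r: float) -> float:
    """Return the area of a circle of radius r."""
    if r <= 0:                     # path 1: invalid input (test cases 1 and 3)
        raise ValueError("invalid input: cannot calculate the area")
    return math.pi * r * r         # path 2: valid input (test cases 2 and 4)
```

Running the test cases: r = 0 and r = -1 raise the error (path 1), while r = 1 and r = 2 return approximately 3.14 and 12.57 (path 2).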
The basis path test cases below come from a second worked example, a triangle-classification component:

Test case 3.1: path 1–2–3–4–6–7–9–10–18–19; condition: sum of any two sides must be > 3rd side; expected output: "Isosceles triangle".
Test case 3.2: path 1–2–3–4–6–11–12–17–18–19; condition: sum of any two sides must be > 3rd side; expected output: "Isosceles triangle".
Test case 4: path 1–2–3–4–6–11–13–15–16–17–18–19; condition: sum of any two sides must be > 3rd side; expected output: "Scalene triangle".

Notes:

1. A valid triangle exhibits 2 properties.

2. Test cases 1 and 3 consist of 3 sub-test cases each upon refinement.

19.9. Give at least three examples in which black-box testing might give the impression
that “everything’s OK,” while white-box tests might uncover an error. Give at least three
examples in which white-box testing might give the impression that “everything’s OK,”
while black-box tests might uncover an error.
Ans:
Three examples in which black-box testing might give the impression that
"everything's OK," while white-box tests might uncover an error:

1. Boundary conditions (uncovered by cyclomatic-complexity analysis)
2. Error handling (uncovered by basis path testing)
3. Performance issues (uncovered by loop testing)

Three examples in which white-box testing might give the impression that
"everything's OK," while black-box tests might uncover an error:

1. Undocumented features (uncovered by equivalence partitioning)
2. Integration issues (uncovered by boundary value analysis)
3. Security vulnerabilities (uncovered by interface testing)
19.10. In your own words, describe why the class is the smallest reasonable unit for
testing within an OO system.
Ans:
A class is the smallest reasonable unit for testing within an object-oriented system
because it encapsulates the behavior and data of a single logical entity. Testing at the
class level allows developers to isolate and test individual units of functionality, making
it easier to identify and correct defects before integration with other modules.

Chapter 20

20.1. How can project scheduling affect integration testing?


Ans:
Project scheduling can affect integration testing by limiting the amount of time
available for testing or delaying the start of testing, which can impact the quality of the
integration and the ability to identify and correct defects early in the development
process.

20.3. Will exhaustive testing (even if it is possible for very small programs) guarantee
that a program is 100 percent correct?
Ans:
Even exhaustive testing of a very small program cannot guarantee that it is 100
percent correct. Testing can show only that the program behaves as expected for the
inputs tried; the requirements themselves may be wrong or incomplete, and behavior can
still depend on timing, concurrency, and the operating environment. Moreover, for
realistic programs exhaustive testing is impossible, so a risk-based approach that
focuses effort on the most critical areas of the software is more effective.

20.4. Why should “testing” begin with requirements analysis and design?
Ans:
Testing should begin with requirements analysis and design to ensure that the
testing process is aligned with the intended functionality of the software and to catch
any defects early in the development cycle.

20.5. Should nonfunctional requirements (e.g., security or performance) be tested as


part of integration testing?
Ans:
Yes, nonfunctional requirements such as security or performance should be tested
as part of integration testing. Integration testing aims to test the interactions and
interfaces between different components of the system, including the nonfunctional
aspects.

20.7. What is the difference between thread-based and use-based strategies for
integration testing?
Ans:
Thread-based integration testing involves testing individual threads of execution in
the system, ensuring that each thread functions correctly and that there are no
deadlocks or race conditions. Use-based integration testing focuses on testing the
system's ability to handle different use cases or scenarios, ensuring that the system
behaves as expected under different conditions.

Chapter 21

21.1. Are there any situations in which MobileApp testing on actual devices can be
disregarded?
Ans:
Mobile app testing on actual devices should not be disregarded, as it is an essential
part of ensuring that the app performs well in real-world scenarios. Testing on actual
devices can help identify issues that may not be apparent in emulators or simulators,
such as performance or compatibility issues.

21.2. Is it fair to say that the overall mobility testing strategy begins with user-visible
elements and moves toward technology elements? Are there exceptions to this
strategy?
Ans:
Yes, the overall mobile testing strategy typically begins with user-visible elements
and moves toward technology elements. However, there may be exceptions to this
strategy, such as when the technology elements have a significant impact on the user
experience. Three points to consider regarding exceptions to this strategy are:

1. The app's architecture
2. Testing goals
3. Testing resources
21.3. Describe the steps associated with user experience testing for an app.
Ans:
The steps associated with user experience testing for an app typically involve the
following:

1. Defining the testing goals and objectives
2. Identifying the target audience and user personas
3. Creating user scenarios and tasks
4. Recruiting participants for testing
5. Conducting usability testing sessions, either in person or remotely
6. Collecting and analyzing user feedback

For a WebApp, the following steps summarize the testing approach:

1. The content model for the WebApp is reviewed to uncover errors.
2. The interface model is reviewed to ensure that all use cases can be accommodated.
3. The design model for the WebApp is reviewed to uncover navigation errors.
4. The user interface is tested to uncover errors in presentation and/or navigation
mechanics.
5. Each functional component is unit tested.
6. Navigation throughout the architecture is tested.
7. The WebApp is implemented in a variety of different environmental configurations
and is tested for compatibility with each configuration.
8. Security tests are conducted in an attempt to exploit vulnerabilities in the
WebApp or within its environment.
21.4. What is the objective of security testing? Who performs this testing activity?
Ans:
The objective of security testing is to identify and address vulnerabilities in the
software that could be exploited by attackers. This testing activity is typically performed
by specialized security testing teams.

21.5. Assume that you are developing a MobileApp to access an online pharmacy
(YourCornerPharmacy.com) that caters to senior citizens. The pharmacy provides
typical functions but also maintains a database for each customer so that it can provide
drug information and warn of potential drug interactions. Discuss any special usability
or accessibility tests for this MobileApp
Ans:
For the YourCornerPharmacy MobileApp, special usability and accessibility tests
may include:

1. Testing font sizes, colors, and contrast to ensure readability for senior citizens
2. Testing voice recognition and text-to-speech functionality to ensure accessibility
for users with visual or motor impairments
3. Testing navigation and menu structure to ensure ease of use for users with
limited experience using mobile devices
21.7. Is it possible to test every configuration that a MobileApp is likely to encounter in
the production environment? If it is not, how do you select a meaningful set of
configuration tests?
Ans:
It is not possible to test every configuration that a MobileApp is likely to encounter
in the production environment. Instead, a meaningful set of configuration tests must be
selected based on factors such as the prevalence of different configurations and the
potential impact of configuration-related defects.

21.8. Describe a security test that might need to be conducted for the
YourCornerPharmacy MobileApp (Problem 21.5). Who should perform this test?
Ans:
A security test that might need to be conducted for the YourCornerPharmacy
MobileApp is penetration testing, which involves simulating an attack on the app to
identify vulnerabilities and weaknesses in the security measures. This test should be
performed by a certified security professional or a specialized security testing company
with experience in mobile app security testing.

21.9. What is the difference between testing that is associated with interface
mechanisms and testing that addresses interface semantics?
Ans:
Testing that is associated with interface mechanisms focuses on testing the
technical aspects of the interface, such as its functionality and performance. Testing
that addresses interface semantics focuses on testing the meaning and interpretation
of the interface, such as whether it presents information in a clear and understandable
way.

21.10. What is the difference between testing for navigation syntax and navigation
semantics?
Ans:
Testing for navigation syntax focuses on testing the technical aspects of navigation,
such as whether links and buttons work as intended. Testing for navigation semantics
focuses on testing the meaning and interpretation of navigation, such as whether the
navigation flow makes sense and is intuitive for users.
Chapter 23
23.1. Software for System X has 24 individual functional requirements and 14
nonfunctional requirements. What is the specificity of the requirements? The
completeness?
Ans:
Specificity (lack of ambiguity) can be measured as Q1 = nui / nr, where nui is the
number of requirements for which all reviewers had identical interpretations and nr is
the total number of requirements; here nr = 24 + 14 = 38, but nui is not given.
Completeness of functional requirements can be measured as Q2 = nu / (ni × ns), where
nu is the number of unique function requirements, ni the number of inputs, and ns the
number of states specified; none of these values is given either. So from the counts
alone we know only that nr = 38; computing specificity and completeness requires
additional review data.

23.3. A class X has 12 operations. Cyclomatic complexity has been computed for all
operations in the OO system, and the average value of module complexity is 4. For class
X, the complexity for operations 1 to 12 is 5, 4, 3, 3, 6, 8, 2, 2, 5, 5, 4, 4, respectively.
Compute the weighted methods per class.
Ans:
The weighted methods per class (WMC) metric is the sum of the complexities of all
operations of the class. Using cyclomatic complexity as the complexity measure:

WMC = 5 + 4 + 3 + 3 + 6 + 8 + 2 + 2 + 5 + 5 + 4 + 4 = 51

Since the average module complexity for the system is 4, an "average" class with 12
operations would have WMC = 4 × 12 = 48; class X, at 51, is slightly more complex than
average.
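The WMC arithmetic for class X can be checked in two lines (the variable names are my own):

```python
# WMC for class X: sum of the per-operation cyclomatic complexities
# given in the problem statement.
cc = [5, 4, 3, 3, 6, 8, 2, 2, 5, 5, 4, 4]
wmc = sum(cc)             # weighted methods per class
avg_class = 4 * len(cc)   # what an "average" 12-operation class would score
```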

23.4. A legacy system has 940 modules. The latest release required that 90 of these
modules be changed. In addition, 40 new modules were added and 12 old modules were
removed. Compute the software maturity index for the system.
Ans:
The software maturity index is computed as:

SMI = [MT − (Fa + Fc + Fd)] / MT

where MT = number of modules in the current release, Fa = modules added,
Fc = modules that have been changed, and Fd = modules from the preceding release
that were deleted.

The current release contains MT = 940 + 40 − 12 = 968 modules, with Fa = 40,
Fc = 90, and Fd = 12:

SMI = [968 − (40 + 90 + 12)] / 968 = 826 / 968 ≈ 0.85

As SMI approaches 1.0, the product begins to stabilize.
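The SMI calculation can be expressed as a small Python helper (a sketch of the formula above, with the exercise's figures plugged in):

```python
def software_maturity_index(mt, fa, fc, fd):
    """SMI = [MT - (Fa + Fc + Fd)] / MT."""
    return (mt - (fa + fc + fd)) / mt

# Current release: 940 legacy modules + 40 added - 12 removed.
mt = 940 + 40 - 12
smi = software_maturity_index(mt, fa=40, fc=90, fd=12)
print(round(smi, 2))   # ≈ 0.85
```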

23.5. Why should some software metrics be kept “private”? Provide examples of three
metrics that should be private. Provide examples of three metrics that should be public.
Ans:
Some software metrics should be kept private to avoid negative consequences or
misuse. Examples include employee productivity, code quality rankings, and individual
performance evaluations. Public metrics could include customer satisfaction ratings,
number of users, and revenue generated.

23.6. Team A found 342 errors during the software engineering process prior to release.
Team B found 184 errors. What additional measures would have to be made for projects
A and B to determine which of the teams eliminated errors more efficiently? What
metrics would you propose to help in making the determination? What historical data
might be useful?
Ans:
To determine which team eliminated errors more efficiently, additional measures
such as defect density, defect removal efficiency, and defect arrival rate could be used.
Historical data on previous projects and industry benchmarks could also provide
context for evaluating the teams' performance.

23.7. A Web engineering team has built an e-commerce WebApp that contains 145
individual pages. Of these pages, 65 are dynamic; that is, they are internally generated
based on end-user input. What is the customization index for this application?
Ans:
The customization index is computed as:

C = Ndp / (Ndp + Nsp)

where Ndp is the number of dynamic screen displays (pages) and Nsp the number of
static screen displays. Here Ndp = 65 and Nsp = 145 − 65 = 80, so:

C = 65 / 145 ≈ 0.45

indicating that about 45 percent of the WebApp's pages are generated dynamically
from end-user input.
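The same ratio in Python (a minimal sketch of the formula):

```python
def customization_index(ndp, nsp):
    """C = Ndp / (Ndp + Nsp)."""
    return ndp / (ndp + nsp)

# 65 dynamic pages out of 145 total (so 80 static).
ci = customization_index(ndp=65, nsp=145 - 65)
print(round(ci, 3))   # ≈ 0.448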
23.8. A WebApp and its support environment have not been fully fortified against attack.
Web engineers estimate that the likelihood of repelling an attack is only 30 percent. The
system does not contain sensitive or controversial information, so the threat probability
is 25 percent. What is the integrity of the WebApp?
Ans:
Integrity = Σ [1 − (threat × (1 − security))]

With a threat probability of 0.25 and security (the likelihood of repelling an
attack) of 0.30:

Integrity = 1 − (0.25 × (1 − 0.30)) = 1 − 0.175 = 0.825
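A one-line Python check of the integrity formula for this single threat class:

```python
def integrity(threat, security):
    """Integrity = 1 - (threat * (1 - security)) for one threat class."""
    return 1 - threat * (1 - security)

print(integrity(threat=0.25, security=0.30))   # 0.825
```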

23.9. At the conclusion of a project, it has been determined that 30 errors were found
during the modeling phase and 12 errors were found during the construction phase that
were traceable to errors not discovered in the modeling phase. What is the DRE for
these two phases?
Ans:
Defect removal efficiency for phase i is:

DREi = Ei / (Ei + Ei+1)

where Ei is the number of errors found in phase i and Ei+1 is the number of errors
found in the following phase that are traceable to errors not discovered in phase i.
For the modeling phase:

DRE = 30 / (30 + 12) ≈ 0.714

so about 71 percent of the modeling-phase errors were removed before construction
began.
23.10. A software team delivers a software increment to end users. The users uncover
eight defects during the first month of use. Prior to delivery, the software team found
242 errors during formal technical reviews and all testing tasks. What is the overall DRE
for the project after 1 month’s usage?
Ans:
DRE = E / (E + D)

where E = errors found before delivery and D = defects found after delivery.
Here E = 242 and D = 8:

DRE = 242 / (242 + 8) = 0.968

so about 97 percent of all errors were removed before the increment reached end
users.
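Both DRE calculations (23.9 and 23.10) reduce to the same ratio, sketched here in Python:

```python
def dre(errors_found, errors_missed):
    """Defect removal efficiency: E / (E + D)."""
    return errors_found / (errors_found + errors_missed)

# 23.9: modeling phase (12 modeling errors slipped into construction).
print(round(dre(30, 12), 3))    # ≈ 0.714

# 23.10: overall DRE after one month of field use.
print(round(dre(242, 8), 3))    # ≈ 0.968
```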


Chapter 25

25.2. Do a functional decomposition of the robot software you described in Problem
25.1. Estimate the size of each function in LOC. Assuming that your organization
produces 450 LOC/pm with a burdened labor rate of $7,000 per person-month, estimate
the effort and cost required to build the software using the LOC-based estimation
technique described in this chapter.
Ans:
Problem 25.1 states that the robot software should be able to move forward,
backward, turn left, turn right, detect obstacles, and communicate with a remote control.

Here's a functional decomposition of the robot software:

1. Move Forward Function (50 LOC)
● Control motors to move robot forward
2. Move Backward Function (50 LOC)
● Control motors to move robot backward
3. Turn Left Function (30 LOC)
● Control motors to turn robot left
4. Turn Right Function (30 LOC)
● Control motors to turn robot right
5. Obstacle Detection Function (150 LOC)
● Use sensors to detect obstacles
● Stop robot if an obstacle is detected
6. Communication Function (100 LOC)
● Establish communication with remote control
● Receive commands from remote control

Total LOC: 410

Using the LOC-based estimation technique, the estimated effort can be calculated as
follows:

Estimated Effort = Total LOC / Productivity Rate = 410 / 450 ≈ 0.91 person-months

The cost is the effort multiplied by the burdened labor rate of $7,000 per
person-month:

Estimated Cost = 0.91 × $7,000 ≈ $6,380

Therefore, the estimated effort required to build the robot software is about 0.91
person-months, and the estimated cost is roughly $6,380.
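The LOC-based estimate can be sketched in Python (the function names are illustrative, not part of any standard tool):

```python
def loc_estimate(total_loc, loc_per_pm, rate_per_pm):
    """LOC-based estimation: effort = size / productivity, cost = effort * rate."""
    effort_pm = total_loc / loc_per_pm
    return effort_pm, effort_pm * rate_per_pm

effort, cost = loc_estimate(total_loc=410, loc_per_pm=450, rate_per_pm=7000)
print(round(effort, 2))   # ≈ 0.91 person-months
print(round(cost))        # ≈ $6378
```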

25.5. What is the difference between a macroscopic schedule and a detailed schedule?
Is it possible to manage a project if only a macroscopic schedule is developed? Why?
Ans:
A macroscopic schedule provides a high-level view of the project schedule and
outlines major milestones and deadlines, while a detailed schedule provides a more
granular view of the project schedule and outlines specific tasks, dependencies, and
resources required to complete the project. It is possible to manage a project with only
a macroscopic schedule, but it would be difficult to do so effectively, as a macroscopic
schedule lacks the detail required for effective project management, including progress
tracking, identifying potential delays, and managing resources. A detailed schedule
provides more in-depth information and enables more effective management.

25.6. The relationship between people and time is highly nonlinear. Using Putnam’s
software equation (described in Section 25.8.2), develop a table that relates number of
people to project duration for a software project requiring 50000 LOC and 15
person-years of effort (the productivity parameter is 5000 and B = 0.37). Assume that
the software must be delivered in 24 months plus or minus 12 months.
Ans:
Putnam's software equation relates delivered size, applied effort, and project
duration:

E = [LOC × B^(1/3) / P]^3 × (1 / t^4)

where:

● E is the effort in person-years
● t is the project duration in years
● B is the special skills factor (0.37 here)
● P is the productivity parameter (5000 here)

Substituting LOC = 50,000, B = 0.37, and P = 5000:

E = [50,000 × 0.37^(1/3) / 5000]^3 / t^4 ≈ 370 / t^4 person-years

The average staffing level is then n ≈ E / t. Tabulating across the required
delivery window of 24 ± 12 months:

Duration (months) Effort (person-years) Number of People (approx.)

12 370.0 370

18 73.1 49

24 23.1 12

30 9.5 4

36 4.6 2

We can see from the table that extending the schedule sharply reduces both the
required effort and the staffing level. The relationship between people and time is
highly nonlinear: because effort varies as 1/t^4, compressing the schedule from 24
months to 12 months multiplies the required effort by 16, not by 2. (The nominal 15
person-years of effort corresponds to a duration of roughly 27 months.)
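The people-versus-duration table can be regenerated with a short Python sketch of the software equation (effort in person-years, duration in years; staffing is approximated as effort divided by duration):

```python
def putnam_effort(loc, b, p, t_years):
    """Putnam's software equation: E = [LOC * B**(1/3) / P]**3 / t**4."""
    return (loc * b ** (1 / 3) / p) ** 3 / t_years ** 4

for months in (12, 18, 24, 30, 36):
    t = months / 12
    effort = putnam_effort(loc=50_000, b=0.37, p=5000, t_years=t)
    people = effort / t   # average staffing level over the project
    print(f"{months} months: {effort:.1f} person-years, ~{people:.0f} people")
```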

Chapter 26

26.2. Describe the difference between “known risks” and “predictable risks.”
Ans:
Known risks are risks that have been previously identified and assessed, and for
which a risk management plan has been developed. Predictable risks, on the other hand,
are risks that can be reasonably anticipated based on past experience, industry
standards, or common sense, but have not yet been identified or assessed.
26.6. Develop a risk management strategy and specific risk management activities for
three of the risks noted in Figure 26.1.
Ans:
Three risks noted in Figure 26.1 are:

1. Schedule risk:
2. Technology risk:
3. Requirements risk:

26.8. Recompute the risk exposure discussed in Section 26.4.2 when cost/LOC is $16
and the probability is 60 percent.
Ans:
Risk exposure is the product of the risk probability and the cost incurred if the
risk occurs:

RE = P × C

With 4,000 LOC affected, cost/LOC = $16, and P = 0.60:

C = 4,000 × $16 = $64,000

RE = 0.60 × $64,000 = $38,400
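The risk-exposure arithmetic in Python (a trivial sketch of RE = P × C, using the exercise's figures):

```python
def risk_exposure(probability, cost):
    """RE = P * C."""
    return probability * cost

# Cost if the risk occurs: 4,000 LOC at $16 per LOC.
cost = 4000 * 16
print(risk_exposure(0.60, cost))   # 38400.0
```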

26.9. Can you think of a situation in which a high-probability, high-impact risk would not
be considered as part of your RMMM plan?
Ans:
A high-probability, high-impact risk may not be considered as part of the RMMM plan
if it is deemed to be outside the scope of the project, or if the cost of mitigating the risk
is too high compared to the potential impact of the risk. However, in most cases,
high-probability, high-impact risks should be included in the RMMM plan.

26.10. Describe five software application areas in which software safety and hazard
analysis would be a major concern.
Ans:
Five software application areas in which software safety and hazard analysis would be a
major concern are:

1. Medical devices:.
2. Aerospace:
3. Automotive:
4. Industrial control systems:
5. Military and defense:
Chapter 29

29.4. What is a “soft trend”?


Ans:
A soft trend is a trend that is subject to change based on unpredictable factors such
as human behavior, market forces, or technology advancements. Soft trends are more
uncertain than hard trends, which are based on measurable, data-driven evidence and
are more predictable.

Q: Describe internal and external views of testing?


Ans:
Internal and external views of testing are two different perspectives for evaluating
the quality of a software system.

Internal testing, also known as white-box testing, examines the internal structure
of the software: test cases are derived from knowledge of the source code and
exercise logical paths, branches, loops, and internal data structures.

External testing, also known as black-box testing, examines the externally
observable behavior of the software: test cases are derived from the requirements
and exercise inputs and outputs without regard to the internal implementation.

You might also like