Chapter Exercise Questions
Chapter 9
9.2. If a software design is not a program (and it isn’t), then what is it?
Ans:
A software design is a plan or blueprint that describes the structure, behavior, and
functionality of a software system. It is not the actual program code, but rather a
representation of the software system that serves as a guide for the development team.
9.5. Describe separation of concerns in your own words. Is there a case when a “divide
and conquer” strategy may not be appropriate? How might such a case affect the
argument for modularity?
Ans:
Separation of concerns is the principle of dividing a software system into distinct
components or modules, each of which is responsible for a specific set of tasks. This
allows developers to focus on specific areas of the system without having to
understand the entire system's complexity. While divide and conquer is generally an
effective strategy, there may be cases where it is not appropriate. For example, if the
system's components are highly interdependent, dividing them into separate modules
may not be feasible. In such cases, the argument for modularity may need to be
reevaluated.
9.7. How are the concepts of coupling and software portability related? Provide
examples to support your discussion.
Ans:
Coupling refers to the degree of interdependence between software components.
The more tightly coupled components are, the more difficult it is to change one
component without affecting others. Software portability, on the other hand, refers to
the ease with which a software system can be transferred from one environment to
another. Highly coupled systems are generally less portable, as changes made to one
component may break the entire system when transferred to a new environment.
9.9. Does refactoring mean that you modify the entire design iteratively? If not, what
does it mean?
Ans:
Refactoring involves making changes to a software system's design or code to
improve its quality, maintainability, or performance. It does not necessarily mean
modifying the entire design iteratively, but rather making incremental changes that
improve the system's overall design.
9.10. Briefly describe each of the four elements of the design model.
Ans:
The four elements of the design model are the data/class design, the architectural
design, the interface design, and the component-level design. The data/class design
transforms analysis classes into the classes and data structures needed to implement
the software. The architectural design defines the major structural elements of the
software and the relationships among them. The interface design describes how the
software communicates with systems that interoperate with it and with the people who
use it. The component-level design transforms the structural elements of the
architecture into a procedural description of the software components.
Chapter 10
10.4. The terms architectural style, architectural pattern, and framework (not discussed
in this book) are often encountered in discussions of software architecture. Do some
research, and describe how each of these terms differs from its counterparts.
Ans:
In summary, an architectural style represents a broad set of design decisions, an
architectural pattern is a specific solution to a recurring design problem, and a
framework is a set of pre-written code that provides a standard way of building and
deploying applications.
Chapter 11
11.2. Why are control components necessary in traditional software and generally not
required in object-oriented software?
Ans:
Control components are necessary in traditional software because they are often
implemented using procedural programming languages that lack the encapsulation and
abstraction features of object-oriented programming. Control components provide a
structured way to handle program flow and manage data processing. In contrast,
object-oriented software relies on object interactions and polymorphism to handle
program flow and data processing, making control components generally unnecessary.
11.5. Select three components that you have developed recently, and assess the types
of cohesion that each exhibits. If you had to define the primary benefit of high cohesion,
what would it be?
Ans:
The types of cohesion are functional cohesion, sequential cohesion, communicational
cohesion, procedural cohesion, temporal cohesion, logical cohesion, and coincidental
cohesion. The three components I have recently developed exhibit functional cohesion,
procedural cohesion, and communicational cohesion. The primary benefit of high
cohesion is that it leads to more maintainable and modular code, making it easier to
understand, debug, and extend.
11.6. Select three components that you have developed recently, and assess the types
of coupling that each exhibits. If you had to define the primary benefit of low coupling,
what would it be?
Ans:
The three components I have recently developed exhibit data coupling, stamp
coupling, and control coupling. The primary benefit of low coupling is that it reduces the
impact of changes made to one component on other components, leading to greater
flexibility and maintainability of the software system.
Chapter 12
12.9. Provide a few examples that illustrate why response time variability can be an
issue.
Ans:
Response time variability can lead to unpredictable performance, affect user
experience, and cause problems in real-time applications. For example, in online
gaming, a high response time variability can cause lag and affect gameplay. In financial
systems, slow response times can cause transactions to time out, leading to financial
losses.
Chapter 15
15.1 Describe how you would assess the quality of a university before applying to it.
What factors would be important? Which would be critical?
Ans:
When assessing the quality of a university, there are several factors to consider. One
critical factor is the reputation of the university, which can be assessed by looking at its
rankings and accreditation. Another important factor is the quality of its faculty and the
range of academic programs and resources available to students. Other factors that
may be important include the campus environment, student services, and
extracurricular activities.
15.4. Describe the software quality dilemma in your own words.
Ans:
The software quality dilemma refers to the trade-off between the cost of developing
high-quality software and the need to deliver software quickly and within budget. This
dilemma can be exacerbated by the complexity of modern software systems and the
need to integrate with other systems.
15.5. What is “good enough” software? Name a specific company and specific products
that you believe were developed using the good enough philosophy.
Ans:
"Good enough" software refers to software that meets the minimum requirements
for its intended use but may not be perfect or highly optimized. A company that has
adopted the "good enough" philosophy is Google with its search engine.
15.6. Considering each of the four aspects of the cost of quality, which do you think is
the most expensive and why?
Ans:
The cost of quality has four aspects: prevention costs, appraisal costs, internal
failure costs, and external failure costs. External failure costs are generally the most
expensive aspect of the cost of quality, because defects found after the product
reaches the customer involve complaint resolution, product return and repair, help-line
support, warranty work, and potential liability and reputation damage. Investing in
prevention and appraisal up front is far cheaper than paying these downstream failure
costs.
Chapter 16
16.2. Why can’t we just wait until testing to find and correct all software errors?
Ans:
Waiting until testing to find and correct all software errors can be very costly and
time-consuming. This approach can result in a significant number of defects being
discovered late in the development cycle. It is much more efficient and effective to
identify and correct errors early in the development process.
16.7. Can you think of a few instances in which a desk check might create problems
rather than provide benefits?
Ans:
Desk checking is a manual review technique that involves reviewing code or
documentation by reading through it without executing the code or testing the software.
However, desk checks can create problems if the reviewer is not familiar with the
programming language or development tools being used. In such cases, the reviewer
may miss errors or make incorrect assumptions about the code's behavior, leading to
inaccurate or incomplete reviews.
16.8. A formal technical review is effective only if everyone has prepared in advance.
How do you recognize a review participant who has not prepared? What do you do if
you’re the review leader?
Ans:
A review participant who has not prepared can be identified by their lack of engagement
or participation during the review. They may not have read the documentation or
reviewed the code beforehand. As the review leader, it is important to address this
situation by encouraging the participant to prepare beforehand and providing guidance
and support if necessary. If the participant continues to be unprepared, the leader may
need to consider rescheduling the review or reassigning the participant to a different
role.
Problem:
Assume that 10 errors have been introduced in the requirements model, that each error will be amplified by a
factor of 2:1 into design where an additional 20 design errors are introduced, and that these are then amplified
1.5:1 into code where an additional 30 errors are introduced. Assume further that unit testing will find 30 percent of
all errors, integration testing will find 30 percent of the remaining errors, and validation tests will find 50 percent of the
remaining errors. No reviews are conducted. How many errors will be released to the field?
Answer:
Each of the 10 requirements errors is amplified by a factor of 2:1 into design, giving 20 errors, and 20 new design
errors are introduced, for a total of 40 errors entering the coding phase.
These 40 errors are amplified by a factor of 1.5:1 into code, giving 60 errors, and 30 new coding errors are introduced.
Hence the total number of errors at the beginning of the testing phase is 90.
Unit testing finds 30 percent (27 errors), leaving 63; integration testing finds 30 percent of these (about 19), leaving 44;
validation testing finds 50 percent of the remainder, so approximately 22 errors are released to the field.
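As a rough illustration, here is a small Python sketch of this arithmetic (the function name is an assumption; amplification is treated as simple multiplication by the stated factor, as above):

# Defect amplification without reviews (illustrative sketch, not from the text).
def errors_without_reviews():
    req = 10                        # errors introduced in the requirements model
    design = req * 2 + 20           # amplified 2:1 into design, plus 20 new errors -> 40
    code = design * 1.5 + 30        # amplified 1.5:1 into code, plus 30 new errors -> 90

    remaining = code
    for find_rate in (0.30, 0.30, 0.50):    # unit, integration, validation testing
        remaining -= remaining * find_rate

    return code, round(remaining)   # errors at the start of testing, errors released

print(errors_without_reviews())     # -> (90.0, 22)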
Problem:
Reconsider the situation described in Problem 20.3, but now assume that requirements, design, and code reviews
are conducted and are 60 percent effective in uncovering all errors at that step. How many errors will be released to
the field?
Answer:
10 errors are introduced in the requirements model. Reviews are 60 percent effective, so 6 of these errors are
uncovered, leaving 4 that pass into design. Amplified by a factor of 2:1, these become 8 errors, and 20 new design
errors are introduced, giving 28 errors in the design phase. Design reviews uncover 60 percent of these (about 17),
leaving 11.
In coding, these 11 errors are amplified by a factor of 1.5:1 (about 17 errors), and 30 new errors are introduced,
for a total of roughly 47 errors. Code reviews uncover 60 percent of these (about 28), leaving 19 errors at the start of
testing.
Unit testing finds 30 percent of these errors (about 6), leaving 13. Integration testing finds 30 percent of the remaining
errors (about 4), leaving 9. Validation testing finds 50 percent of the remaining errors, so approximately 5 errors are
released to the field.
Problem:
Reconsider the situation described in Problems 20.3 and 20.4. If each of the errors released to the field costs $4,800
to find and correct and each error found in review costs $240 to find and correct, how much money is saved by
conducting reviews?
Answer:
Without reviews, 22 errors are released to the field. The total cost to find and correct these errors is 22 * 4800 =
$105,600.
With reviews, the total number of errors uncovered in reviews is 6 + 17 + 28 = 51. The cost to find and correct these
errors is 51 * 240 = $12,240. The 5 errors released to the field cost 5 * 4800 = $24,000, so the total cost to find and
correct all errors is 12,240 + 24,000 = $36,240.
Therefore, by conducting reviews, the amount of money saved is 105,600 - 36,240 = $69,360.
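A small Python sketch of the savings arithmetic, using the error counts derived in the two answers above:

# Cost comparison with and without reviews (counts taken from the answers above).
COST_FIELD = 4800     # cost to find and correct one error in the field
COST_REVIEW = 240     # cost to find and correct one error in a review

released_without_reviews = 22
released_with_reviews = 5
found_in_reviews = 6 + 17 + 28      # requirements, design, and code reviews

cost_without = released_without_reviews * COST_FIELD                           # 105600
cost_with = found_in_reviews * COST_REVIEW + released_with_reviews * COST_FIELD
savings = cost_without - cost_with

print(cost_without, cost_with, savings)   # 105600 36240 69360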
1. Total number of errors entering the testing phase = 90
2. Unit testing finds 30% of all errors:
● Number of errors found = 0.3 * 90 = 27
● Number of errors remaining = 90 - 27 = 63
3. Integration testing finds 20% of remaining errors:
● Number of errors found = 0.2 * 63 = 12.6 (rounded to 13)
● Number of errors remaining = 63 - 13 = 50
4. Validation testing finds 50% of remaining errors:
● Number of errors found = 0.5 * 50 = 25
● Number of errors remaining = 50 - 25 = 25
5. Total number of errors released to end users:
● Number of errors found during testing = 27 + 13 + 25 = 65
● Number of errors not found during testing = 90 - 65 = 25
● Total number of errors released to end users = 90 - 65 = 25
6. Cost of errors released to end users:
● Cost to find and correct each field error = Tk. 55000/-
● Total cost of errors released to end users = 25 * Tk. 55000/- = Tk. 1375000/-
Note: The cost of errors found during reviews is not given and therefore cannot be
included in the calculation. However, it is important to note that conducting reviews can
help catch errors earlier in the process, potentially reducing the number of errors that
are released to end users and the cost of correcting them.
Chapter 19
19.1. Using your own words, describe the difference between verification and validation.
Do both make use of test-case design methods and testing strategies?
Ans:
Verification is the process of checking whether the software product or system
meets the specified requirements and design specifications. It involves reviews,
walkthroughs, and inspections to ensure that the software is built according to the given
design. On the other hand, validation is the process of checking whether the software
meets the actual needs and requirements of the end-users. It involves testing and user
acceptance to ensure that the software solves the intended problems.
Both verification and validation make use of test-case design methods and testing
strategies. Test-case design methods are used to identify test cases that can effectively
check if the software meets the specified requirements and if it works as intended.
Testing strategies help in identifying the most efficient and effective way of testing the
software to ensure its quality. Therefore, both verification and validation rely on these
methods and strategies to ensure the quality of the software.
19.4. Is unit testing possible or even desirable in all circumstances? Provide examples
to justify your answer.
Ans:
Unit testing is not always possible or desirable in all circumstances. One example
where unit testing is not possible is when working with legacy code that lacks proper
design and is tightly coupled. It may be difficult to isolate a particular component or
module for unit testing without affecting other parts of the system. Additionally, in some
cases, the effort and cost involved in creating unit tests may not justify the benefits,
such as in small applications or in situations where the code is simple and
straightforward.
19.6. Select a software component that you have designed and implemented recently.
Design a set of test cases that will ensure that all statements have been executed using
basis path testing.
Ans:
Suppose we have a software component that calculates the area of a circle given
the radius as input. We can design a set of test cases to ensure that all statements have
been executed using basis path testing.
1. Test case 1: Input r = 0. Expected output: A message indicating that the input is
invalid and cannot calculate the area of the circle.
2. Test case 2: Input r = 1. Expected output: The area of the circle with radius 1,
which should be approximately 3.14.
3. Test case 3: Input r = -1. Expected output: A message indicating that the input is
invalid and cannot calculate the area of the circle.
4. Test case 4: Input r = 2. Expected output: The area of the circle with radius 2,
which should be approximately 12.56.
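A minimal Python sketch of such a component together with the four test cases above (the function name and error message are illustrative assumptions, not taken from the text):

import math

def circle_area(r):
    """Return the area of a circle, or raise ValueError for a non-positive radius."""
    if r <= 0:                              # invalid-input path
        raise ValueError("invalid radius")
    return math.pi * r ** 2                 # valid-input path

# Basis path test cases from the answer above.
for r in (0, 1, -1, 2):
    try:
        print(r, round(circle_area(r), 2))  # 1 -> 3.14, 2 -> 12.57
    except ValueError as e:
        print(r, e)                         # 0 and -1 -> "invalid radius"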
Basis path test cases for a triangle-classification example (in every case the sum of any two sides must be greater
than the third side):
3.1 Path 1–2–3–4–6–7–9–10–18–19. Expected output: "Isosceles triangle"
3.2 Path 1–2–3–4–6–11–12–17–18–19. Expected output: "Isosceles triangle"
4. Path 1–2–3–4–6–11–13–15–16–17–18–19. Expected output: "Scalene triangle"
Note: Test cases 1 and 3 consist of 3 sub-test cases each upon refinement.
19.9. Give at least three examples in which black-box testing might give the impression
that “everything’s OK,” while white-box tests might uncover an error. Give at least three
examples in which white-box testing might give the impression that “everything’s OK,”
while black-box tests might uncover an error.
Ans:
Three examples in which black-box testing might give the impression that
"everything's OK," while white-box tests might uncover an error are:
1. Boundary conditions: an off-by-one or incorrect relational operator on an internal path that the black-box test
data never exercises; basis path testing driven by cyclomatic complexity forces that path to execute.
2. Error handling: an exception or error-handling branch that ordinary inputs never reach and that contains a
defect; white-box tests deliberately exercise such branches.
3. Loops: a loop whose initialization, termination, or nesting is wrong only for boundary iteration counts; loop
testing targets exactly those cases.
Three examples in which white-box testing might give the impression that "everything's
OK," while black-box tests might uncover an error are:
1. A requirement that was never implemented at all; complete coverage of the existing code says nothing about
missing functionality.
2. Code that executes correctly path by path but computes results that do not match the specification, for
example a misinterpreted business rule.
3. Failures that appear only at the system boundary, such as mishandled unexpected user input, interface data, or
configurations that tests of the internal logic never supply.
Chapter 20
20.3. Will exhaustive testing (even if it is possible for very small programs) guarantee
that a program is 100 percent correct?
Ans:
Exhaustive testing is not a guarantee that a program is 100% correct because it is
impossible to test every possible input or scenario. Exhaustive testing can be
time-consuming, expensive, and impractical, and it is more effective to use a risk-based
approach to testing that focuses on the most critical areas of the software.
20.4. Why should “testing” begin with requirements analysis and design?
Ans:
Testing should begin with requirements analysis and design to ensure that the
testing process is aligned with the intended functionality of the software and to catch
any defects early in the development cycle.
20.7. What is the difference between thread-based and use-based strategies for
integration testing?
Ans:
For object-oriented systems, thread-based integration testing integrates the set of
classes required to respond to one input or event for the system; each thread is
integrated and tested individually, and regression testing is applied to ensure that no
side effects occur. Use-based integration testing begins by testing the independent
classes (those that use very few, if any, server classes) and then tests, layer by layer,
the dependent classes that use them, until the entire system has been constructed.
Chapter 21
21.1. Are there any situations in which MobileApp testing on actual devices can be
disregarded?
Ans:
Mobile app testing on actual devices should not be disregarded, as it is an essential
part of ensuring that the app performs well in real-world scenarios. Testing on actual
devices can help identify issues that may not be apparent in emulators or simulators,
such as performance or compatibility issues.
21.2. Is it fair to say that the overall mobility testing strategy begins with user-visible
elements and moves toward technology elements? Are there exceptions to this
strategy?
Ans:
Yes, the overall mobile testing strategy typically begins with user-visible elements
and moves toward technology elements. However, there may be exceptions to this
strategy, depending on the specific requirements of the app and the testing goals; for
example, when the technology elements have a significant impact on the user
experience, they may need to be examined earlier.
21.5. Assume that you are developing a MobileApp to access an online pharmacy
(YourCornerPharmacy.com) that caters to senior citizens. The pharmacy provides
typical functions but also maintains a database for each customer so that it can provide
drug information and warn of potential drug interactions. Discuss any special usability
or accessibility tests for this MobileApp
Ans:
For the YourCornerPharmacy MobileApp, special usability and accessibility tests
may include:
1. Testing font sizes, colors, and contrast to ensure readability for senior citizens
2. Testing voice recognition and text-to-speech functionality to ensure accessibility
for users with visual or motor impairments
3. Testing navigation and menu structure to ensure ease of use for users with
limited experience using mobile devices
21.7. Is it possible to test every configuration that a MobileApp is likely to encounter in
the production environment? If it is not, how do you select a meaningful set of
configuration tests?
Ans:
It is not possible to test every configuration that a MobileApp is likely to encounter
in the production environment. Instead, a meaningful set of configuration tests must be
selected based on factors such as the prevalence of different configurations and the
potential impact of configuration-related defects.
21.8. Describe a security test that might need to be conducted for the
YourCornerPharmacy MobileApp (Problem 21.5). Who should perform this test?
Ans:
A security test that might need to be conducted for the YourCornerPharmacy
MobileApp is penetration testing, which involves simulating an attack on the app to
identify vulnerabilities and weaknesses in the security measures. This test should be
performed by a certified security professional or a specialized security testing company
with experience in mobile app security testing.
21.9. What is the difference between testing that is associated with interface
mechanisms and testing that addresses interface semantics?
Ans:
Testing that is associated with interface mechanisms focuses on testing the
technical aspects of the interface, such as its functionality and performance. Testing
that addresses interface semantics focuses on testing the meaning and interpretation
of the interface, such as whether it presents information in a clear and understandable
way.
21.10. What is the difference between testing for navigation syntax and navigation
semantics?
Ans:
Testing for navigation syntax focuses on testing the technical aspects of navigation,
such as whether links and buttons work as intended. Testing for navigation semantics
focuses on testing the meaning and interpretation of navigation, such as whether the
navigation flow makes sense and is intuitive for users.
Chapter 23
23.1. Software for System X has 24 individual functional requirements and 14
nonfunctional requirements. What is the specificity of the requirements? The
completeness?
Ans:
The total number of requirements is nr = nf + nnf = 24 + 14 = 38. Specificity is
measured as Q1 = nui / nr, where nui is the number of requirements for which all
reviewers had identical interpretations, and completeness is measured as
Q2 = nu / (ni × ns), where nu is the number of unique functional requirements, ni the
number of inputs defined by the specification, and ns the number of states specified.
Because nui, ni, and ns are not given, only nr = 38 can be determined; additional
information is needed to compute the specificity and the completeness.
23.3. A class X has 12 operations. Cyclomatic complexity has been computed for all
operations in the OO system, and the average value of module complexity is 4. For class
X, the complexity for operations 1 to 12 is 5, 4, 3, 3, 6, 8, 2, 2, 5, 5, 4, 4, respectively.
Compute the weighted methods per class.
Ans:
Weighted methods per class (WMC) is the sum of the normalized complexities of all
methods in the class. Using the cyclomatic complexity of each operation as its weight:
WMC = 5 + 4 + 3 + 3 + 6 + 8 + 2 + 2 + 5 + 5 + 4 + 4 = 51
The average complexity per operation for class X is therefore 51 / 12 ≈ 4.25, slightly
above the system-wide average module complexity of 4.
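A short Python sketch of the calculation:

# Weighted methods per class: sum of the operations' cyclomatic complexities.
complexities = [5, 4, 3, 3, 6, 8, 2, 2, 5, 5, 4, 4]

wmc = sum(complexities)
average = wmc / len(complexities)

print(wmc, round(average, 2))   # 51 4.25 (the system-wide average is 4)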
23.4. A legacy system has 940 modules. The latest release required that 90 of these
modules be changed. In addition, 40 new modules were added and 12 old modules were
removed. Compute the software maturity index for the system.
Ans:
The software maturity index is computed as SMI = [MT - (Fa + Fc + Fd)] / MT, where
MT = number of modules in the current release, Fa = number of modules added,
Fc = number of modules changed, and Fd = number of modules deleted from the
preceding release.
Here MT = 940 + 40 - 12 = 968, Fa = 40, Fc = 90, and Fd = 12, so:
SMI = [968 - (40 + 90 + 12)] / 968 = 826 / 968 ≈ 0.85
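A short Python sketch (the function name is an assumption):

def software_maturity_index(mt, added, changed, deleted):
    """SMI = [MT - (Fa + Fc + Fd)] / MT."""
    return (mt - (added + changed + deleted)) / mt

# 940 legacy modules, 40 added and 12 removed -> 968 modules in the current release.
print(round(software_maturity_index(968, 40, 90, 12), 2))   # 0.85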
23.5. Why should some software metrics be kept “private”? Provide examples of three
metrics that should be private. Provide examples of three metrics that should be public.
Ans:
Some software metrics should be kept private to avoid negative consequences or
misuse. Examples include employee productivity, code quality rankings, and individual
performance evaluations. Public metrics could include customer satisfaction ratings,
number of users, and revenue generated.
23.6. Team A found 342 errors during the software engineering process prior to release.
Team B found 184 errors. What additional measures would have to be made for projects
A and B to determine which of the teams eliminated errors more efficiently? What
metrics would you propose to help in making the determination? What historical data
might be useful?
Ans:
To determine which team eliminated errors more efficiently, additional measures
such as defect density, defect removal efficiency, and defect arrival rate could be used.
Historical data on previous projects and industry benchmarks could also provide
context for evaluating the teams' performance.
23.7. A Web engineering team has built an e-commerce WebApp that contains 145
individual pages. Of these pages, 65 are dynamic; that is, they are internally generated
based on end-user input. What is the customization index for this application?
Ans:
The customization index is C = Ndp / (Ndp + Nsp), where Ndp is the number of
dynamic pages (dynamically generated screen displays) and Nsp is the number of
static pages. For this e-commerce WebApp, Ndp = 65 and Nsp = 145 - 65 = 80, so
C = 65 / 145 ≈ 0.45, or about 44.8 percent.
23.8. A WebApp and its support environment have not been fully fortified against attack.
Web engineers estimate that the likelihood of repelling an attack is only 30 percent. The
system does not contain sensitive or controversial information, so the threat probability
is 25 percent. What is the integrity of the WebApp?
Ans:
Integrity = 1 - [threat × (1 - security)], where threat is the probability of an attack and
security is the probability that an attack will be repelled. With threat = 0.25 and
security = 0.30:
Integrity = 1 - [0.25 × (1 - 0.30)] = 1 - 0.175 = 0.825, or 82.5 percent.
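A short Python sketch of the same arithmetic (the function name is an assumption):

def integrity(threat, security):
    """Integrity = 1 - [threat * (1 - security)]."""
    return 1 - threat * (1 - security)

print(round(integrity(0.25, 0.30), 3))   # 0.825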
23.9. At the conclusion of a project, it has been determined that 30 errors were found
during the modeling phase and 12 errors were found during the construction phase that
were traceable to errors not discovered in the modeling phase. What is the DRE for
these two phases?
Ans:
Defect removal efficiency is DRE = Ei / (Ei + Ei+1), where Ei is the number of errors
found during phase i and Ei+1 is the number of errors found in the following phase that
are traceable to errors not discovered in phase i. For the modeling phase, Ei = 30 and
Ei+1 = 12, so:
DRE = 30 / (30 + 12) ≈ 0.71
That is, about 71 percent of the modeling errors were removed before construction.
23.10. A software team delivers a software increment to end users. The users uncover
eight defects during the first month of use. Prior to delivery, the software team found
242 errors during formal technical reviews and all testing tasks. What is the overall DRE
for the project after 1 month’s usage?
Ans:
The overall DRE is E / (E + D), where E is the number of errors found before delivery
and D is the number of defects found after delivery. Here E = 242 and D = 8, so:
DRE = 242 / (242 + 8) = 0.968
That is, about 96.8 percent of the project's errors were found and removed before the
increment was delivered to end users.
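A short Python sketch covering both DRE calculations (the function name is an assumption):

def dre(errors_found, errors_missed):
    """Defect removal efficiency: E / (E + D)."""
    return errors_found / (errors_found + errors_missed)

print(round(dre(30, 12), 2))    # 0.71 -> modeling-phase DRE (Problem 23.9)
print(round(dre(242, 8), 3))    # 0.968 -> overall DRE after one month (Problem 23.10)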
Chapter 25
Using the LOC-based estimation technique, the estimated effort required to build the
robot software is approximately 0.91 person-months, and the estimated cost is $6,170.
25.5. What is the difference between a macroscopic schedule and a detailed schedule?
Is it possible to manage a project if only a macroscopic schedule is developed? Why?
Ans:
A macroscopic schedule provides a high-level view of the project schedule and
outlines major milestones and deadlines, while a detailed schedule provides a more
granular view of the project schedule and outlines specific tasks, dependencies, and
resources required to complete the project. It is possible to manage a project with only
a macroscopic schedule, but it would be difficult to do so effectively, as a macroscopic
schedule lacks the detail required for effective project management, including progress
tracking, identifying potential delays, and managing resources. A detailed schedule
provides more in-depth information and enables more effective management.
25.6. The relationship between people and time is highly nonlinear. Using Putnam’s
software equation (described in Section 25.8.2), develop a table that relates number of
people to project duration for a software project requiring 50000 LOC and 15
person-years of effort (the productivity parameter is 5000 and B = 0.37). Assume that
the software must be delivered in 24 months plus or minus 12 months.
Ans:
Putnam's software equation relates delivered size, effort, and project duration:
E = [LOC × B^0.333 / P]^3 × (1 / t^4)
where E is the effort (in person-months or person-years), t is the project duration (in
months or years), B is the special-skills factor, and P is the productivity parameter.
With LOC = 50,000, P = 5000, and B = 0.37, the equation can be solved for duration at
different staffing levels, giving the following table:
People    Duration (months)
1         72.5
2         51.3
3         41.6
4         35.1
5         30.6
6         27.2
7         24.5
8         22.3
9         20.5
10        19.0
We can see from the table that as the number of people increases, the duration of the
project decreases. However, the relationship between people and time is highly
nonlinear, as doubling the number of people does not halve the duration of the project.
Chapter 26
26.2. Describe the difference between “known risks” and “predictable risks.”
Ans:
Known risks are risks that have been previously identified and assessed, and for
which a risk management plan has been developed. Predictable risks, on the other hand,
are risks that can be reasonably anticipated based on past experience, industry
standards, or common sense, but have not yet been identified or assessed.
26.6. Develop a risk management strategy and specific risk management activities for
three of the risks noted in Figure 26.1.
Ans:
Three risks noted in Figure 26.1 are:
1. Schedule risk: develop a detailed schedule with clear milestones, track progress against those milestones
frequently, maintain contingency buffers, and re-plan as soon as slippage is detected.
2. Technology risk: prototype or run proof-of-concept experiments with unfamiliar technology early, identify fallback
technologies, and train the team before the technology sits on the critical path.
3. Requirements risk: involve stakeholders continuously, validate requirements through prototypes and reviews, and
place requirements under change control so that volatility can be monitored and managed.
26.8. Recompute the risk exposure discussed in Section 26.4.2 when cost/LOC is $16
and the probability is 60 percent.
Ans:
Risk exposure is computed as RE = P × C, where P is the probability that the risk will
occur and C is the cost to the project should it occur. Using the scenario of Section
26.4.2, in which 18 reusable components averaging 100 LOC each would have to be
developed from scratch, the revised cost is C = 18 × 100 × $16 = $28,800, and with
P = 0.60 the risk exposure becomes RE = 0.60 × $28,800 = $17,280.
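A short Python sketch of the recomputation, assuming the component counts recalled from the book's example (18 components of roughly 100 LOC each):

# Risk exposure: RE = P * C
components_to_build = 18        # components that cannot be reused (assumed from Section 26.4.2)
loc_per_component = 100         # average component size (assumed from Section 26.4.2)
cost_per_loc = 16               # revised cost per LOC
probability = 0.60              # revised risk probability

cost = components_to_build * loc_per_component * cost_per_loc    # 28800
risk_exposure = probability * cost

print(cost, risk_exposure)      # 28800 17280.0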
26.9. Can you think of a situation in which a high-probability, high-impact risk would not
be considered as part of your RMMM plan?
Ans:
A high-probability, high-impact risk may not be considered as part of the RMMM plan
if it is deemed to be outside the scope of the project, or if the cost of mitigating the risk
is too high compared to the potential impact of the risk. However, in most cases,
high-probability, high-impact risks should be included in the RMMM plan.
26.10. Describe five software application areas in which software safety and hazard
analysis would be a major concern.
Ans:
Five software application areas in which software safety and hazard analysis would be a
major concern are:
1. Medical devices: software that controls infusion pumps, pacemakers, or radiation-therapy machines, where a
failure can directly injure or kill a patient.
2. Aerospace: flight-control, navigation, and air-traffic-control software, where failures can lead to loss of aircraft
and life.
3. Automotive: braking, steering, airbag, and driver-assistance software, where a malfunction can cause an
accident.
4. Industrial control systems: software controlling power plants, chemical processes, and manufacturing
equipment, where failures can cause explosions, toxic releases, or large-scale outages.
5. Military and defense: weapons control and command-and-control systems, where failures can have
catastrophic consequences.
Chapter 29