UNIT 4 Answers
Short Questions: 2M
1. What is debugging?
Sol: Debugging is the process of finding and fixing errors or bugs in the
source code of any software.
Functional Testing:
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Alpha Testing
Beta Testing
Non-Functional Testing:
Performance Testing
o Load Testing
o Stress Testing
Security Testing
Usability Testing
Compatibility Testing
Ad-hoc Testing
Documentation Testing
Sanity Testing
Black-box testing
White-box testing
Sol: Code review is a systematic process where one or more developers review the
code written by another developer.
Sol: Integration testing is the process of testing the interface between two
software units or modules. It focuses on determining the correctness of the
interface.
Sol: Smoke testing is a quick way to check if the core functions of a software
application are working as expected. It's also known as build verification testing or
confidence testing.
Sol: White box testing techniques analyze the internal structures of the software: the
data structures used, the internal design, the code structure, and the working of the
software, rather than just the functionality as in black box testing.
It is also called glass box testing, clear box testing, or structural testing. White
box testing is also known as transparent testing or open box testing.
8. What are the Program analysis tools?
Program analysis tools in software engineering are used to examine and evaluate
programs to ensure they are efficient, correct, maintainable, and free of bugs.
These tools help developers understand code behavior, detect errors, optimize
performance, and ensure code quality. They can be categorized into different
types based on the type of analysis they perform.
Performance Profilers
Memory Analyzers
Unit testing is the process of testing the smallest parts of your code, like
individual functions or methods, to make sure they work correctly. It’s a key
part of software development that improves code quality by testing each unit
in isolation.
42. Differentiate between black box testing and white box testing?
Sol:
Black Box Testing | White Box Testing
The tester cannot access the source code of the software. | The tester is aware of the source code and internal workings of the software.
Black box testing is conducted at the software interface; it requires no concern with the software's internal logical structure. | White box testing is conducted by ensuring that all the internal operations take place according to the specifications.
It tests the functions of the software. | It tests the structure of the software.
It can be initiated on the basis of the requirement specification document alone. | It can only start after a detailed design document is available.
It is also called closed testing. | It is also called clear box testing.
It is performed using various trial and error methods. | It can better test data domains and internal or inner boundaries.
2. Integration testing
This type of testing makes sure that various units or components interact with one
another and function as a cohesive unit. This is testing the integration of different
classes and modules in OOAD. Problems with interfaces and how components
interact with one another are found with the aid of integration testing.
3. System testing
System testing assesses the system as a whole to make sure it functions as intended
and satisfies the requirements. This entails testing the security, performance, and
other non-functional components of the system. In OOAD, system testing guarantees
that the system satisfies the intended goals and is prepared for implementation.
4. Performance testing
Performance testing ensures that a software system performs well under various
conditions. Load testing checks how the system behaves under expected user loads.
Stress testing pushes the system beyond its limits to find its breaking point.
Scalability testing assesses how well the system can handle future growth. Resource
utilization testing measures how efficiently the system uses resources.
5. Security testing
Security testing identifies and fixes security vulnerabilities. Vulnerability assessment
prioritizes security issues. Penetration testing simulates cyberattacks to find
weaknesses. Authentication and authorization testing ensures secure user access.
Data security testing protects sensitive data. Security configuration testing identifies
and fixes misconfigurations.
44. Explain with examples Basis Path Testing?
Sol:
Basis Path Testing is a white-box testing technique used to test the control flow of a
program. It is designed to ensure that the program’s logic is correct by testing all
possible independent paths through the code. It focuses on defining a set of test cases
that cover all the independent paths and help identify errors in the program’s logic.
The main goal of Basis Path Testing is to ensure that all decision points, loops, and
branches in the code are tested at least once.
1. Control Flow Graph (CFG): The first step in basis path testing is to construct a
Control Flow Graph (CFG) for the program. In this graph:
o Each node represents a block of code or a decision point.
o Each edge represents a transition between blocks based on the
program's flow.
2. Cyclomatic Complexity: Cyclomatic complexity is a measure used to
determine the number of independent paths in a program. It can be calculated
using the formula:
V(G) = E - N + 2P
where:
E is the number of edges in the flow graph,
N is the number of nodes, and
P is the number of connected components (P = 1 for a single program or function).
Example
Let's look at an example of a simple code snippet and apply basis path testing.
Code Snippet:
int example(int a, int b) {
    if (a > b) {
        if (a > 0) {
            return a;
        } else {
            return b;
        }
    } else {
        if (b > 0) {
            return b;
        } else {
            return a;
        }
    }
}
The code has 3 decision nodes (a > b, a > 0, and b > 0), so the cyclomatic complexity is
V(G) = 3 + 1 = 4, which means there are 4 independent paths.
Step 3: Identify Independent Paths
1. Path 1: (Start -> a > b (No) -> b > 0 (Yes) -> return b -> End)
2. Path 2: (Start -> a > b (No) -> b > 0 (No) -> return a -> End)
3. Path 3: (Start -> a > b (Yes) -> a > 0 (Yes) -> return a -> End)
4. Path 4: (Start -> a > b (Yes) -> a > 0 (No) -> return b -> End)
Step 4: Design Test Cases
1. Test Case 1: a = 1, b = 2
o Expected result: b (because a > b is false and b > 0 is true)
o This follows Path 1.
2. Test Case 2: a = -2, b = -1
o Expected result: a (because a > b is false and b > 0 is false)
o This follows Path 2.
3. Test Case 3: a = 3, b = 2
o Expected result: a (because a > b is true and a > 0 is true)
o This follows Path 3.
4. Test Case 4: a = -1, b = -2
o Expected result: b (because a > b is true and a > 0 is false)
o This follows Path 4.
45. Define unit testing. Explain about unit testing considerations and
procedures.
Sol:
Unit Testing is a software testing technique in which individual units or
components of a software application are tested in isolation. These units are the
smallest testable pieces of code, typically functions or methods, and testing them
ensures they perform as expected.
Unit testing helps in identifying bugs early in the development cycle, enhancing code
quality, and reducing the cost of fixing issues later. It is an essential part of Test-
Driven Development (TDD), promoting reliable code.
(Or)
Unit testing in software engineering involves testing individual, small units of code
(like functions or methods) in isolation to ensure they function correctly, promoting
early bug detection and improving code quality.
Planning:
Identify the units to be tested.
Determine the inputs, outputs, and expected behavior of each unit.
Plan the test cases to cover different scenarios.
Running Tests:
Execute the test cases using a unit testing framework.
Analyze the results to identify any failures.
White box testing techniques analyze the internal structures of the software: the
data structures used, the internal design, the code structure, and the working of the
software, rather than just the functionality as in black box testing.
It is also called glass box testing, clear box testing, or structural testing. White box
testing is also known as transparent testing or open box testing.
Types Of White Box Testing
White box testing can be done for different purposes at different places. There are
three main types of white box testing, as follows:
Unit Testing: Unit testing checks whether each part or function of the application
works correctly and whether the application meets its design requirements during
development.
Integration Testing: Integration testing examines how different parts of the
application work together. It is performed after unit testing to make sure components
work well both alone and together.
Regression Testing: Regression testing verifies that changes or updates don't break
existing functionality of the code. It checks that the application still passes all
existing tests after updates.
Four stages are followed to create test cases using this technique −
Control Flow Graph – A control flow graph (or simply, flow graph) is a directed
graph which represents the control structure of a program or module. A control flow
graph (V, E) has V number of nodes/vertices and E number of edges in it. A control
graph can also have :
Junction Node – a node with more than one arrow entering it.
Decision Node – a node with more than one arrow leaving it.
Region – area bounded by edges and nodes (area outside the graph is also
counted as a region.).
Notations are defined for the common constructs encountered while constructing a
flow graph: sequential statements, if-then-else, do-while, while-do, and switch-case.
1. Formula based on edges and nodes:
V(G) = E - N + 2P
For example, for a flow graph with E = 4 edges, N = 4 nodes, and P = 1:
Cyclomatic complexity V(G)
= 4 - 4 + 2*1
= 2
2. Formula based on Decision Nodes :
V(G) = d + P
where d is the number of decision nodes and P is the number of connected
components. For example, consider the first graph given above,
where d = 1 and P = 1
So,
Cyclomatic Complexity V(G)
=1+1
=2
3. Formula based on Regions:
V(G) = number of regions in the graph
For example, the first graph given above has 2 regions, so V(G) = 2.
Note –
1. For one function [e.g. main( ) or factorial( )], only one flow graph is
constructed. If a program has multiple functions, then a separate flow
graph is constructed for each of them. Also, in the cyclomatic complexity
formula, the value of P is set according to the total number of graphs present.
2. If a decision node has exactly two arrows leaving it, then it is counted as one
decision node. However, if more than 2 arrows leave a decision node,
it is computed using this formula:
d = k - 1
Here, k is the number of arrows leaving the decision node. For example, a
switch with four outgoing arrows contributes d = 4 - 1 = 3 decision nodes.
Independent Paths: An independent path in the control flow graph is one which
introduces at least one new edge that has not been traversed before the path is
defined. The cyclomatic complexity gives the number of independent paths present in
a flow graph, and it serves as an upper bound for the number of tests that should be
executed in order to make sure that all the statements in the program have been
executed at least once. For the first graph given above, the number of independent
paths is 2, because the number of independent paths equals the cyclomatic
complexity. So the independent paths in the first graph are:
Path 1:
A -> B
Path 2:
C -> D
47. Differentiate software testing strategies and Methods. Discuss any two
methods of software testing.
Sol:
Software testing is the process of evaluating a software application to identify if it
meets specified requirements and to identify any defects. The following are
common testing strategies:
1. Black box testing– Tests the functionality of the software without looking at
the internal code structure.
2. White box testing – Tests the internal code structure and logic of the
software.
6. System testing– Tests the complete software system to ensure it meets the
specified requirements.
10. Security testing– Tests the software to identify vulnerabilities and ensure it
meets security requirements.
Two common methods of software testing are unit testing, which focuses on individual
components or modules, and integration testing, which verifies the interactions
between different modules or components.
Unit testing:
Unit testing helps in identifying bugs early in the development cycle, enhancing code
quality, and reducing the cost of fixing issues later. It is an essential part of Test-
Driven Development (TDD), promoting reliable code.
To ensure that each unit performs its intended function correctly and independently.
Unit tests typically involve writing test cases that provide specific inputs to a unit and
then verifying that the output matches the expected outcome.
Unit tests are often automated, meaning they are executed automatically as part of the
software development process.
Integration testing:
Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once
all the modules have been unit-tested, integration testing is performed.
The goal of integration testing is to identify any problems or bugs that arise when
different components are combined and interact with each other. Integration testing is
typically performed after unit testing and before system testing. It helps to identify and
resolve integration issues early in the development cycle, reducing the risk of more
severe and costly problems later on.
Integration testing can be done by picking modules one by one, following a
proper sequence, so that no integration scenario is missed.
Exposing the defects that arise at the time of interaction between the integrated
units is the major focus of integration testing.
48. What is unit testing? Explain in detail with traditional test strategies.[7M]
[July -2023 Set -1] [Understanding].
Unit Testing:
Key Features:
Example: Testing a function that adds two numbers, ensuring that the function
returns the correct result.
2. Integration Testing
Key Features:
Involves testing multiple modules or components that interact with each other.
Can be performed incrementally (incremental integration testing) or all at once
(big bang integration testing).
Often performed after unit testing but before system testing.
Focuses on issues like data flow, control flow, and error handling between
integrated components.
Example: Testing the interaction between a database module and a user interface to
ensure data is properly retrieved and displayed.
3. System Testing
Definition: System testing evaluates the complete system as a whole to ensure that
the software meets the specified requirements. It is often the first level of testing done
after integration testing and involves testing the system in an environment that
simulates real-world usage.
Key Features:
4. Acceptance Testing
Definition: Acceptance testing determines whether the software meets the user or
business requirements. This is the final stage of testing before the software is delivered
to the customer or deployed to production.
Key Features:
Focuses on validating the software against the business requirements and user
needs.
Can be done in the form of alpha and beta testing.
Often involves real users (or stakeholders) to verify the software’s behavior.
Acceptance testing can be done manually or automatically.
5. Regression Testing
Definition: Regression testing ensures that changes made to the software (like bug
fixes, enhancements, or new features) do not introduce new defects or cause existing
functionality to break.
Key Features:
Involves re-executing previous test cases after changes are made to the
software.
Essential in agile and iterative development cycles where software is frequently
updated.
Can be manual or automated (automated regression testing is highly
recommended for efficiency).
Ensures that no unintended side effects occur due to changes in the software.
Example: After adding a new feature to a website, regression tests are run to verify
that previously working functionality, such as logging in or navigating, still works as
expected.
Definition: Alpha and beta testing are types of acceptance testing done before the
software is released to the public.
Alpha Testing: Conducted by internal developers or QA teams to identify bugs
and issues in the software before it is released to real users.
Beta Testing: Conducted by a limited number of external users (actual end-
users) to gather feedback and identify potential issues in real-world
environments.
Key Features:
7. Performance Testing
Definition: Performance testing evaluates how well the software performs under
various conditions, focusing on its responsiveness, stability, scalability, and resource
usage.
Key Features:
8. Security Testing
Key Features:
9. Usability Testing
Definition: Usability testing evaluates how user-friendly and intuitive the software is.
It ensures that the end users can easily navigate and use the system without
difficulty.
Key Features:
Example: Testing a mobile app to ensure that the interface is intuitive, users
can easily perform key tasks, and there are no confusing or unnecessary steps.
Unit-test procedures:-
The design of unit tests can occur before coding begins or after source code has
been generated. Because a component is not a stand-alone program, driver and/or
stub software must often be developed for each unit test.
A driver is nothing more than a "main program" that accepts test case data, passes
such data to the component to be tested, and prints relevant results. Stubs
serve to replace modules that are subordinate to (invoked by) the component to be
tested.
Drivers and stubs represent testing “overhead.” That is, both are software that
must be written (formal design is not commonly applied) but that is not delivered
with the final software product.
17.3.2 Integration Testing:
Data can be lost across an interface; one component can have an inadvertent,
adverse effect on another; sub functions, when combined, may not produce the
desired major function. The objective of Integration testing is to take unit-tested
components and build a program structure that has been dictated by design. The
program is constructed and tested in small increments, where errors are easier to
isolate and correct. A number of different incremental integration strategies are available.
a) Top-down integration: The integration process is performed in the following steps:
1. The main control module is used as a test driver and stubs are substituted for
all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.
The top-down integration strategy verifies major control or decision points early in
the test process. Stubs replace low-level modules at the beginning of top-down
testing. Therefore, no significant data can flow upward in the program structure.
As a tester, you are left with three choices:
(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module,
or
(3) Integrate the software from the bottom of the hierarchy upward.
b) Bottom-up integration:
Begins construction and testing with components at the lowest levels in the
program structure. Because components are integrated from the bottom up, the
functionality provided by components subordinate to a given level is always
available and the need for stubs is eliminated.
Regression testing is the re-execution of some subset of tests that have already
been conducted to ensure that changes have not propagated unintended side
effects.
The regression test suite contains three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by
the change.
• Tests that focus on the software components that have been changed.
c) Smoke testing: Smoke testing is an integration testing approach that is commonly
used when product software is developed. It encompasses the following activities:
1. Software components that have been translated into code are integrated into a
build. A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover “showstopper”
errors that have the highest likelihood of throwing the software project behind
schedule.
3. The build is integrated with other builds, and the entire product is smoke tested
daily. The integration approach may be top down or bottom up.
Strategic options:- The major disadvantage of the top-down approach is the need
for stubs and the attendant testing difficulties that can be associated with them.
The major disadvantage of bottom-up integration is that “the program as an entity
does not exist until the last module is added”.
Selection of an integration strategy depends upon software characteristics and,
sometimes, project schedule. In general, a combined approach or sandwich testing
may be the best compromise.
The following criteria and corresponding tests are applied for all test phases:
50. What are the different goals and Metrics used for Statistical SQA? Discuss.
Sol:
1. Reliability Assessment
o Measure the probability of software performing its intended function
without failure
o Quantify system dependability and consistency
o Identify potential points of failure and their likelihood
2. Performance Evaluation
o Analyze software efficiency and resource utilization
o Measure response times, throughput, and computational complexity
o Assess scalability and system performance under various conditions
3. Defect Detection and Prevention
o Statistically estimate the number and severity of potential defects
o Predict error rates and identify high-risk components
o Develop proactive quality improvement strategies
4. Process Optimization
o Use statistical techniques to streamline development processes
o Identify bottlenecks and inefficiencies
o Continuously improve software development lifecycle
1. Defect Density
o Calculation: Number of defects per unit of software size
o Measured in defects per thousand lines of code (KLOC)
o Provides normalized comparison across different software projects
2. Failure Rate
o Measures frequency of software failures over a specific time period
o Expressed as failures per operational hour or per execution cycle
o Helps assess software reliability and stability
3. Mean Time Between Failures (MTBF)
o Average time interval between system failures
o Indicates overall system reliability
o Calculated by dividing total operational time by number of failures
4. Mean Time To Repair (MTTR)
o Average time required to diagnose and fix a software failure
o Measures maintenance efficiency
o Helps in resource allocation and support strategy planning
Complexity Metrics
1. Cyclomatic Complexity
o Quantifies program complexity by measuring linearly independent paths
o Helps predict testing effort and potential defect locations
o Lower values indicate more maintainable code
2. Halstead Complexity Measures
o Calculates software metrics based on operators and operands
o Provides insights into program complexity and potential reliability
o Includes volume, difficulty, effort, and time estimates
1. Code Coverage
o Percentage of code executed during testing
o Types include:
Statement coverage
Branch coverage
Path coverage
o Higher coverage indicates more comprehensive testing
2. Mutation Score
o Evaluates test suite effectiveness by introducing artificial defects
o Measures ability of tests to detect intentional code modifications
o Higher scores suggest more robust testing strategies
Software Reliability
Software reliability is the probability that a software system will perform its required
functions under specified conditions for a designated period without experiencing
failures. It is a critical quality attribute that measures the system's ability to maintain
performance and prevent unexpected behavior.
1. Error Avoidance: During the development of the software, every possibility
of introducing an error should be avoided as much as possible. Therefore, for
the development of highly reliable software we need the following:
Experienced developers: To achieve high reliability and avoid errors as
much as possible, we need experienced developers.
Software engineering tools: Highly reliable software requires the best
software engineering tools.
CASE tools: The CASE tools used should be suitable and adaptable.
2. Error Detection: Even with the best possible methods of error avoidance, it
is still possible that some errors remain in the software. Hence, in order to
get highly reliable software, every error should be detected. Error detection
is done in the form of testing, which includes various testing processes, and
certain steps are followed to detect the errors in the software. The most
common testing used here for error detection is reliability testing.
3. Error Removal: When errors are detected in the software, they need to be
fixed. In order to remove the errors from the software, the testing processes
are repeated again and again: one error is removed, and the tester checks
whether the error is completely removed or not. This technique is therefore
a repetition of the reliability testing process.
3. Comprehensive Testing
4. Defect Management
Testing Tools
Methodological Approaches
54. Discuss the elements of software quality assurance and software reliability.
Sol:
Elements of Software Quality Assurance (SQA)
1. Standards: The IEEE, ISO, and other standards organizations have produced
a broad array of software engineering standards and related documents. The job
of SQA is to ensure that standards that have been adopted are followed and that
all work products conform to them.
3. Testing: Software testing is a quality control function that has one primary
goal: to find errors. The job of SQA is to ensure that testing is properly planned
and efficiently conducted so that it achieves that primary goal.
4. Error/defect collection and analysis : SQA collects and analyzes error and
defect data to better understand how errors are introduced and what software
engineering activities are best suited to eliminating them.
55. Explain the activities of software quality assurance group to assist the software
team in achieving high quality?
Sol:
56. How does quality assurance differ from software testing? Explain with a suitable
example.
Sol:
Software Testing
QA Perspective
1. Technical Risks
2. Project Management Risks
3. Operational Risks
4. Human-Related Risks
5. Quality Assurance Risks
a) Architectural Risks
c) Performance Risks
a) Scheduling Risks
b) Budget Risks
3. Operational Risks
a) Security Risks
b) Compliance Risks
c) Integration Risks
4. Human-Related Risks
b) Communication Risks
a) Testing Risks
b) Documentation Risks
58. Write a short note on software quality assurance tasks, goals, Metrics and
statistical SQA?
Sol:
Introduction
SQA Tasks
1. Process Verification
2. Product Testing
4. Configuration Management
SQA Goals
Technical Goals
Business Goals
Quality Metrics
Defect-Related Metrics
Performance Metrics
Probability-Based Techniques