SE UNIT 4
Testing Objectives:
The main objectives of testing are:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Levels of Testing:
Integration Testing:
Integration is the process of combining individual components into a working whole. Testing the interactions between modules, and their interactions with other systems externally, is called integration testing.
TOP-DOWN INTEGRATION:
Top-down integration testing is an incremental approach to construction of
program structure. Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module.
BOTTOM-UP INTEGRATION:
Bottom-up integration testing begins construction and testing with atomic
modules. Because components are integrated from the bottom up, processing
required for components subordinate to a given level is always available and the
need for stubs is eliminated.
REGRESSION TESTING:
Each time a new module is added as part of integration testing, the software
changes. These changes may cause problems. In the context of an integration test
strategy, regression testing is the re-execution of some subset of tests that have
already been conducted. Regression testing is the activity that helps to ensure that
changes do not introduce unintended behavior or additional errors. Regression
testing may be conducted manually, by re-executing a subset of all test cases.
SMOKE TESTING:
Smoke testing is an integration testing approach that is commonly used when
“shrink wrapped” software products are being developed, allowing the software
team to assess its project on a frequent basis. The smoke testing approach
encompasses the following activities:
1. Software components that have been translated into code are integrated into a “build.” A build includes all data files, libraries, reusable modules, and engineered components.
2. A series of tests is designed to expose errors that will keep the build from
properly performing its function.
3. The build is integrated with other builds and the entire product is smoke tested
daily. The integration approach may be top down or bottom up.
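To make this concrete, here is a minimal sketch of what a daily smoke test might look like in Python. The build_system module and its load_config and start_service functions are hypothetical names standing in for the actual build under test.

# smoke_test.py - a minimal daily smoke-test sketch.
import sys

def main():
    try:
        # Exercise the critical path of the build; if any step fails,
        # the build is rejected before deeper testing begins.
        import build_system                            # hypothetical build package
        config = build_system.load_config()            # data files load correctly
        service = build_system.start_service(config)   # core module starts
        assert service.is_running()                    # basic health check
    except Exception as exc:
        print(f"SMOKE TEST FAILED: {exc}")
        sys.exit(1)
    print("Smoke test passed: build accepted for further testing.")

if __name__ == "__main__":
    main()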
Top-Down Integration Steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
The process continues from step 2 until the entire program structure is built.
Top-down integration can create logistical problems. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. The tester is then left with three choices:
(1) delay many tests until stubs are replaced with actual modules,
(2) develop stubs that perform limited functions that simulate the actual module, or
(3) integrate the software from the bottom of the hierarchy upward.
Bottom-Up Integration Steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
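To make the stub and driver ideas concrete, here is a minimal Python sketch; the formatting component and its expected output are invented purely for illustration.

# Top-down: a stub stands in for a subordinate module that is not yet
# integrated, returning canned results so the superior module can be tested.
def format_report_stub(data):
    return "REPORT: " + str(data)       # fixed, simulated behavior

# Bottom-up: a driver feeds test inputs to an integrated low-level cluster
# and checks its outputs.
def driver_for_formatter(format_report):
    for data in [{"zone": 1}, {"zone": 2}]:
        result = format_report(data)
        assert result.startswith("REPORT:"), f"unexpected output: {result}"
    print("cluster passed driver tests")

# The same driver can run against the stub now and the real module later.
driver_for_formatter(format_report_stub)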
Regression Testing :
Regression testing is the re-execution of some subset of tests that have already
been conducted to ensure that changes have not propagated unintended side
effects.
Regression testing helps to ensure that changes (due to testing or for other reasons)
do not introduce unintended behavior or additional errors.
It is impractical and inefficient to re-execute every test for every program function once a change has occurred; instead, a representative subset of the existing tests is selected and re-run.
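One common way to select and re-run such a subset is to tag the relevant tests. The sketch below uses pytest markers; the tests themselves are trivial stand-ins.

# test_regression_subset.py - tagging a regression subset with pytest markers.
import pytest

@pytest.mark.regression
def test_behavior_likely_affected_by_change():
    # Stands in for a test exercising functions the change is likely to affect.
    assert sorted([3, 1, 2]) == [1, 2, 3]

def test_full_suite_only():
    # Untagged: runs in the full pass, skipped in the quick regression pass.
    assert True

# After each new module is integrated, re-run only the tagged subset with:
#   pytest -m regression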
Validation Testing
The process of evaluating software during the development process or at the end
of the development process to determine whether it satisfies specified business
requirements.
Validation testing ensures that the product actually meets the client's needs. It can also be defined as demonstrating that the product fulfills its intended use when deployed in an appropriate environment.
It answers the question: are we building the right product?
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
System Testing
System testing is performed by a testing team that is independent of the development team, so that the quality of the system can be judged impartially. It includes both functional and non-functional testing.
System testing is a black-box testing technique.
System Testing is performed after the integration testing and before the
acceptance testing.
Debugging Tools:
A debugging tool is a computer program that is used to test and debug other programs. Many public-domain debuggers, such as gdb and dbx, are available; they offer console-based command-line interfaces. Examples of automated debugging tools include code-based tracers, profilers, interpreters, etc.
Some of the widely used debuggers are:
Radare2
WinDbg
Valgrind
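As a console-based illustration, Python's standard pdb debugger can be entered with the built-in breakpoint() call; the average function below is just an example program to step through.

# average.py - entering a console-based debugger (Python's pdb).
def average(values):
    total = 0
    for v in values:
        total += v
    breakpoint()              # drops into the pdb prompt at this point
    return total / len(values)

if __name__ == "__main__":
    # Typical pdb commands: p total (print a variable), n (next line),
    # bt (backtrace), c (continue execution).
    print(average([2, 4, 6]))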
Software Quality:
Example: Consider a functionally correct software product; that is, it performs all tasks as specified in the SRS document but has an almost unusable user interface. Even though it may be functionally correct, we cannot consider it to be a quality product.
Usability: A software product has better usability if various categories of users can
easily invoke the functions of the product.
This approach to software quality is best exemplified by fixed quality models, such
as ISO/IEC 25010:2011. This standard describes a hierarchy of eight quality
characteristics, each composed of sub-characteristics:
1. Functional suitability
2. Reliability
3. Operability
4. Performance efficiency
5. Security
6. Compatibility
7. Maintainability
8. Transferability
The standard also defines a set of quality-in-use characteristics:
1. Effectiveness
2. Efficiency
3. Satisfaction
4. Safety
5. Usability
Metrics for the Analysis Model
Technical work in software engineering begins with the creation of the analysis
model. It is at this stage that requirements are derived and that a foundation for
design is established. Therefore, technical metrics that provide insight into the
quality of the analysis model are desirable.
Although relatively few analysis and specification metrics have appeared in the
literature, it is possible to adapt metrics derived for project application for use in
this context. These metrics examine the analysis model with the intent of
predicting the “size” of the resultant system. It is likely that size and design
complexity will be directly correlated.
Function-Based Metrics
The function point metric can be used effectively as a means for predicting the size
of a system that will be derived from the analysis model. To illustrate the use of the
FP metric in this context, we consider a simple analysis model representation,
illustrated in the figure. Referring to the figure, a data flow diagram for a function
within the Safe Home software is represented. The function manages user
interaction, accepting a user password to activate or deactivate the system, and
allows inquiries on the status of security zones and various security sensors. The
function displays a series of prompting messages and sends appropriate control
signals to various components of the security system.
The data flow diagram is evaluated to determine the key measures required for
computation of the function point metric :
• number of user inputs
• number of user outputs
• number of user inquiries
• number of files
• number of external interfaces
The count total shown in Figure 19.4 must be adjusted using the equation:
FP = count total × [0.65 + 0.01 × Σ(Fi)]
where count total is the sum of all FP entries obtained from the first figure and Fi (i = 1 to 14) are "complexity adjustment values." For the purposes of this example, we assume that Σ(Fi) is 46 (a moderately complex product). Therefore,
FP = count total × [0.65 + 0.01 × 46] = count total × 1.11
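A small sketch of this computation in Python follows; the counts are illustrative placeholders rather than values from the figure, and the weights are the standard "average" FP weights.

# function_points.py - sketch of the FP computation (illustrative counts).
counts_and_weights = {
    "user inputs":         (3, 4),    # (count, weight); counts are placeholders
    "user outputs":        (2, 5),
    "user inquiries":      (2, 4),
    "files":               (1, 10),
    "external interfaces": (4, 7),
}

count_total = sum(c * w for c, w in counts_and_weights.values())
sum_fi = 46                            # complexity adjustment values, as above
fp = count_total * (0.65 + 0.01 * sum_fi)
print(f"count total = {count_total}, FP = {fp:.1f}")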
Based on the projected FP value derived from the analysis model, the project team
can estimate the overall implemented size of the Safe Home user interaction
function. Assume that past data indicates that one FP translates into 60 lines of
code (an object-oriented language is to be used) and that 12 FPs are produced for
each person-month of effort. These historical data provide the project manager
with important planning information that is based on the analysis model rather than
preliminary estimates. Assume further that past projects have found an average of
three errors per function point during analysis and design reviews and four errors
per function point during unit and integration testing. These data can help software
engineers assess the completeness of their review and testing activities.
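For instance, assuming a projected value of FP = 56 (a made-up figure for illustration), those historical ratios translate into estimates as follows.

# fp_estimates.py - projecting size, effort, and expected errors from FP.
fp = 56                      # assumed projected FP value (illustrative)
loc = fp * 60                # 60 LOC per FP (object-oriented language)
effort_pm = fp / 12          # 12 FP per person-month
review_errors = fp * 3       # ~3 errors per FP in analysis/design reviews
test_errors = fp * 4         # ~4 errors per FP in unit and integration testing
print(f"size = {loc} LOC, effort = {effort_pm:.1f} person-months")
print(f"expected errors: {review_errors} in reviews, {test_errors} in testing")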
The Bang Metric
Like the function point metric, the bang metric can be used to develop an
indication of the size of the software to be implemented as a consequence of the
analysis model. Developed by DeMarco, the bang metric is “an implementation
independent indication of system size.” To compute the bang metric, the software
engineer must first evaluate a set of primitives—elements of the analysis model
that are not further subdivided at the analysis level. Primitives are determined by
evaluating the analysis model and developing counts for the following forms:
Data elements (DE). The number of attributes of a data object; data elements are not composite data and appear within the data dictionary.
States (ST). The number of user observable states in the state transition diagram.
Transitions (TR). The number of state transitions in the state transition diagram.
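A trivial tallying sketch for these primitive counts is shown below; the model data is invented for illustration, and the bang computation proper would then weight these counts.

# bang_primitives.py - tallying analysis-model primitives (invented data).
data_objects = {"user": ["id", "password"], "sensor": ["id", "type", "zone"]}
states = ["idle", "armed", "alarm"]
transitions = [("idle", "armed"), ("armed", "alarm"), ("alarm", "idle")]

DE = sum(len(attrs) for attrs in data_objects.values())   # data elements
ST = len(states)                                          # user-observable states
TR = len(transitions)                                     # state transitions
print(f"DE={DE}, ST={ST}, TR={TR}")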
Complexity Metrics:
1. Structural Complexity:
S(k) = fout(k)^2
where fout(k) is the fan-out of module k (fan-out means the number of modules immediately subordinate to, i.e., directly invoked by, module k).
2. Data Complexity:
D(k) = tot_var(k) / [fout(k) + 1]
where tot_var(k) is the total number of input and output variables going to and coming out of module k.
3. System Complexity:
C(k) = S(k) + D(k)
The system complexity of a module is the sum of its structural and data complexity; as it grows, the effort needed to integrate and test the module also tends to grow.
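A small Python sketch of these three measures, using a made-up module with fan-out 3 and 8 input/output variables:

# complexity_metrics.py - structural, data, and system complexity.
def structural_complexity(fan_out):
    return fan_out ** 2                      # S(k) = fout(k)^2

def data_complexity(tot_var, fan_out):
    return tot_var / (fan_out + 1)           # D(k) = tot_var / (fout(k) + 1)

def system_complexity(fan_out, tot_var):
    return structural_complexity(fan_out) + data_complexity(tot_var, fan_out)

print(system_complexity(fan_out=3, tot_var=8))   # 9 + 2.0 = 11.0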
Source code metrics
When measuring source code quality, start with the number of lines of code, which helps ensure that you have an appropriate amount of code and that it is no more complex than it needs to be. Another thing to track is how compliant each line of code is with the programming language's standard usage rules. Equally important is the percentage of comments within the code, which indicates how much maintenance the program will require: the fewer the comments, the more problems arise when you decide to change or upgrade the code. Other things to include in your measurements are code duplication and unit test coverage, which tell you how smoothly your product will run.
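As one example, comment density and raw line counts can be measured with a few lines of Python; here the script simply analyzes its own source file.

# code_stats.py - non-blank line count and comment density of a source file.
def comment_density(path):
    total = comments = 0
    with open(path, encoding="utf-8") as src:
        for line in src:
            stripped = line.strip()
            if not stripped:
                continue                     # skip blank lines
            total += 1
            if stripped.startswith("#"):
                comments += 1
    return total, (comments / total if total else 0.0)

lines, density = comment_density(__file__)   # measure this very file
print(f"{lines} non-blank lines, {density:.0%} comments")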
Development metrics
These metrics measure the custom software development process itself. Gather
development metrics to look for ways to make your operations more efficient and
reduce the incidence of software errors.
Measuring the number of defects within the code and the time to fix them tells you a lot about the development process itself. Start by tallying up the number of defects that appear in the code and note the time it takes to fix them. If any defect has to be fixed multiple times, there might be a misunderstanding of requirements or a skills gap, which is important to address as soon as possible.
Testing metrics
These metrics help you evaluate how functional your product is. There are two major testing metrics. One of them is “test coverage,” which collects data about which parts of the software program are executed when the test suite runs. The second is a test of the testing itself: it is called “defect removal efficiency,” and it checks your success rate for spotting and removing defects.
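Defect removal efficiency is commonly computed as DRE = E / (E + D), where E is the number of errors found before delivery and D the number of defects found after delivery. A one-function sketch with illustrative counts:

# dre.py - defect removal efficiency from pre- and post-release counts.
def dre(errors_before_release, defects_after_release):
    return errors_before_release / (errors_before_release + defects_after_release)

print(f"DRE = {dre(90, 10):.0%}")   # 90 found in-house, 10 escaped -> 90%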
The more you measure, the more you know about your software product, and the more likely you are to improve it. Automating the measurement process is the best way to measure software quality; it is not the easiest or the cheapest approach, but it will save you a great deal of cost down the line.
Metrics for Software Maintenance
When development of a software product is complete and it is released to the
market, it enters the maintenance phase of its life cycle. During this phase the
defect arrivals by time interval and customer problem calls (which may or may not
be defects) by time interval are the de facto metrics. However, the number of
defect or problem arrivals is largely determined by the development process before
the maintenance phase. Not much can be done to alter the quality of the product
during this phase. Therefore, these two de facto metrics, although important, do not
reflect the quality of software maintenance. What can be done during the
maintenance phase is to fix the defects as soon as possible and with excellent fix
quality. Such actions, although still not able to improve the defect rate of the
product, can improve customer satisfaction to a large extent. The following metrics
are therefore very important:
Fix Quality
Fix quality or the number of defective fixes is another important quality metric for
the maintenance phase. From the customer's perspective, it is bad enough to
encounter functional defects when running a business on the software. It is even
worse if the fixes turn out to be defective. A fix is defective if it did not fix the
reported problem, or if it fixed the original problem but injected a new defect. For
mission-critical software, defective fixes are detrimental to customer satisfaction.
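Fix quality is often tracked as the percentage of defective fixes over a maintenance period; a minimal sketch with illustrative counts:

# fix_quality.py - percent defective fixes over a maintenance period.
def percent_defective_fixes(defective, total_fixes):
    return defective / total_fixes if total_fixes else 0.0

# Example: 4 of 200 fixes shipped this quarter were themselves defective.
print(f"defective fixes: {percent_defective_fixes(4, 200):.1%}")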