Se 4
Software quality assurance (SQA) is a planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.
It is a set of activities designed to evaluate the process by which the products are developed or manufactured.
SQA Encompasses
SQA Activities
Prepares an SQA plan for a project:
The plan is developed during project planning and is reviewed by all stakeholders. The plan governs quality assurance activities performed by the software engineering team and the SQA group. It identifies evaluations to be performed, audits and reviews to be performed, standards that apply to the project, techniques for error reporting and tracking, documents to be produced by the SQA team, and the amount of feedback provided to the software project team.
Participates in the development of the project's software process description:
The software team selects a process for the work to be performed. The SQA group reviews the process description for compliance with organizational policy, internal software standards, externally imposed standards (e.g., ISO 9001), and other parts of the software project plan.
Audits designated software work products to verify compliance with those defined as
a part of the software process:
The SQA group reviews selected work products; identifies, documents and tracks deviations; verifies that corrections have been made; and periodically reports the results of its work to the project manager.
Ensures that deviations in software work and work products are documented and
handled according to a documented procedure:
SOFTWARE REVIEW
Software Review is a systematic inspection of software by one or more individuals who work together to find and resolve errors and defects in the software during the early stages of the Software Development Life Cycle (SDLC). Software review is an essential part of the SDLC that helps software engineers validate the quality, functionality and other vital features and components of the software. It is a complete process that includes examining the software product to make sure that it meets the requirements stated by the client.
Usually performed manually, software review is used to verify various documents like requirements, system designs, code, test plans and test cases.
Peer review is the process of assessing the technical content and quality of the product and
it is usually conducted by the author of the work product along with some other developers.
Peer review is performed in order to examine or resolve the defects in the software, whose
quality is also checked by other members of the team.
Peer Review has the following types:
(ii) Pair Programming: It is a kind of code review where two developers develop code together on the same platform.
(iii) Walkthrough:
Members of the development team are guided by the author and other interested parties, and the participants ask questions and make comments about defects.
(iv) Technical Review: A team of highly qualified individuals examines the software product for its client's use and identifies technical defects from specifications and standards.
(v) Inspection: It is the most formal type of peer review, in which trained inspectors examine the work product against defined standards; inspections are discussed in detail in the SOFTWARE INSPECTIONS section below.
Software Audit Review is a type of external review in which one or more critics, who are not part of the development team, organize an independent inspection of the software product and its processes to assess their compliance with stated specifications and standards. This review is usually done by managerial-level people.
SOFTWARE INSPECTIONS
The term software inspection was coined by IBM in the early 1970s, when it was noticed that testing alone was not sufficient to attain high-quality software for large applications.
Inspection is used to find defects in the code and remove them efficiently. This prevents defects from propagating and improves the effectiveness of subsequent testing. Software inspection has proven to be one of the most effective methods for removing defects and improving software quality.
There are several factors that contribute to high-quality software:
1) Formal Inspections
This factor refers to formal oversight that follows defined protocols: participants are trained, material is distributed in advance for inspection, and both moderators and recorders are present to analyze defect statistics.
2) Active Quality Assurance
This factor refers to an active software quality assurance group, which joins the software development teams to support them in the development of high-quality software.
3) Formal Testing
Formal testing means that the test process is carried out under well-defined conditions:
● A test plan is created for the application.
● Specifications are complete enough that test cases can be written without significant gaps.
● Test library control tools are used.
● Test coverage analysis tools are used.
The inspection process was developed in the mid-1970s and has since been extended and revised. The process must have an entry criterion that determines whether it is ready to begin; this prevents incomplete products from entering the inspection process. Entry criteria can include simple checks such as "the document has been spell-checked".
The stages of the software inspection process are:
1) Planning: The inspection is planned by the moderator.
2) Overview Meeting: The background of the work product is described by the author.
3) Preparation: Each inspector examines the work product to identify possible defects.
4) Inspection Meeting: During this meeting, the reader reads through the work product part by part and the inspectors point out the defects of each part.
5) Rework: After the inspection meeting, the author changes the work product according to the action plans agreed in the meeting.
6) Follow Up: The changes done by the author are checked to make sure that
everything is correct.
Disadvantages of software inspection:
● It is a time-consuming process.
● Software inspection requires discipline.
Verification and Validation is the process of investigating whether a software system satisfies specifications and standards and fulfills its required purpose. Barry Boehm described verification and validation as follows:
Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
Verification:
Verification is the process of checking that the software achieves its goal without any bugs. It is the process of ensuring whether the product that is developed is right or not, i.e., it verifies whether the developed product fulfills the requirements that we have. Verification is static testing. Methods of verification include:
● Inspections
● Reviews
● Walkthroughs
● Desk-checking
Validation:
Validation is the process of checking whether the software product is up to the mark, or in other words, whether the product meets the high-level requirements. It is the process of checking the validity of the product, i.e., it checks that what we are developing is the right product. It is validation of the actual product against the expected product. Validation is dynamic testing.
In short: verification is static testing and asks whether we are building the product right, whereas validation is dynamic testing and asks whether we are building the right product.
Cleanroom Testing was pioneered by IBM. This kind of testing relies heavily on walkthroughs, inspections, and formal verification. The programmers are not allowed to test any of their code by executing it, apart from doing a little syntax checking using a compiler. The software development philosophy is based on avoiding software defects by using a rigorous inspection process. The objective of this approach is zero-defect software.
The name 'cleanroom' was derived from the analogy with semiconductor fabrication units. In these units (clean rooms), defects are avoided by manufacturing in an ultra-clean atmosphere. In this kind of development, inspections that check the consistency of the components with their specifications have replaced unit testing.
This technique reportedly produces documentation and code that are more reliable and maintainable than other development methods that rely heavily on code execution-based testing. The key characteristics of the cleanroom process are:
1) Formal specification:
The software to be developed is formally specified. A state-transition model that shows system responses to stimuli is used to express the specification (a minimal sketch follows after this list).
2) Incremental development:
The software is partitioned into increments that are developed and validated on an individual basis using the cleanroom process. These increments are specified, with customer input, at an early stage in the process.
3) Structured programming:
Only a limited range of control and data abstraction constructs is used. The program development process is a process of stepwise refinement of the specification.
4) Static verification:
The developed software is statically verified using rigorous software inspections. There is no unit or module testing process for code components.
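As a minimal sketch of such a state-transition specification (the door-controller states, stimuli and responses below are hypothetical, not from the source), the model can be written as a table mapping each (state, stimulus) pair to a (next state, response) pair:

# Hypothetical example: a door controller specified as a state-transition table.
# Each (state, stimulus) pair maps to (next_state, response).
TRANSITIONS = {
    ("closed", "open_cmd"):  ("open",   "motor_forward"),
    ("open",   "close_cmd"): ("closed", "motor_reverse"),
    ("open",   "open_cmd"):  ("open",   "no_op"),
    ("closed", "close_cmd"): ("closed", "no_op"),
}

def respond(state, stimulus):
    # Look up the specified next state and response for a stimulus.
    return TRANSITIONS[(state, stimulus)]

# The specification can be checked statically, without executing the system,
# e.g. that every (state, stimulus) combination is defined.
states = {"open", "closed"}
stimuli = {"open_cmd", "close_cmd"}
assert all((s, e) in TRANSITIONS for s in states for e in stimuli)

Such a table can be reviewed and statically verified during inspections, which matches the cleanroom emphasis on verification rather than execution-based testing.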
TESTING STRATEGIES
Unit testing
Unit testing begins at the centre of the testing spiral and concentrates on each unit of the software as implemented in source code.
Integration testing
Integration testing focuses on the design and construction of the software architecture as the units are combined.
Validation testing
In validation testing, all requirements, including functional, behavioral and performance requirements, are validated against the constructed software.
System testing
System testing confirms that all system elements and overall performance are tested entirely.
From a procedural point of view, testing includes the following steps.
1) Unit testing
2) Integration testing
3) High-order tests
4) Validation testing
DIFFERENT TYPES AND LEVELS OF TESTING
Functional Testing
Functional testing relates to testing how a product functions; performing a functional test entails checking and testing each functionality of the software to ensure you are getting the expected results.
1) Unit testing
Unit testing is a level of software testing where individual units or components of the software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output.
Benefits- Reliable, Cost-effective, Easy to maintain code, Faster, Debugging is easy.
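As an illustrative sketch in Python (the Calculator class and its add method are hypothetical, not taken from the source), a unit test exercises one small unit in isolation and checks its output against the expected result:

import unittest

# Hypothetical unit under test: the smallest testable part of the software.
class Calculator:
    def add(self, a, b):
        return a + b

class TestCalculator(unittest.TestCase):
    def test_add_returns_sum(self):
        # A few inputs, a single expected output, tested in isolation.
        self.assertEqual(Calculator().add(2, 3), 5)

    def test_add_handles_negatives(self):
        self.assertEqual(Calculator().add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()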
2)Integration testing
It is a level of software testing where individual units are combined and tested as a group.
The purpose of this level of testing is to expose faults in the integration between integrated
units. Test drivers and test stubs are used to assist in integration testing.
Stubs are used during top-down integration testing in order to simulate the behavior of the lower-level modules that are not yet integrated.
Drivers are used in the bottom-up integration testing approach; they simulate the behavior of the upper-level modules that are not yet integrated.
Approaches:
● Big Bang- no incremental testing takes place prior to all system’s components being
combined to form the system.
● Top-down- integration starts from the top-level modules, with stubs simulating the behavior of the lower-level modules that are not yet integrated.
● Bottom-up- integration starts from the lowest-level modules, with drivers simulating the behavior of the upper-level modules that are not yet integrated.
● Hybrid- is a combination of both top-down and bottom-up integration testing.
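A minimal sketch of stubs and drivers in Python (the PaymentGatewayStub and OrderService names are hypothetical): the stub replaces a lower-level module that is not yet integrated, while the driver stands in for a missing upper-level caller:

# Stub: simulates a lower-level module (a real payment gateway) that is not
# yet integrated, returning a canned response (top-down integration).
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

# Module under integration test.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

# Driver: minimal code that calls OrderService because its real caller
# (for example the user interface) does not exist yet (bottom-up view).
if __name__ == "__main__":
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(100) is True
    print("integration check passed")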
4)Interface testing — is defined as a software testing type that verifies whether the
communication between two different software systems is done correctly.
Non-functional Testing
Non-functional testing deals with testing the non-functional aspects of the software, such as usability, performance, security, reliability and more. This testing is performed after the functional testing has been done. Non-functional testing doesn't focus on whether the software works or not; it focuses on how well it works.
2)Installation testing — Most software systems have installation procedures that are needed
before they can be used for their main purpose. Testing these procedures to achieve an
installed software system that may be used is known as installation testing. These
procedures may involve full or partial upgrades and install/uninstall processes.
4)Reliability testing —is a field of software testing that relates to testing a software’s ability to
function, given environmental conditions, for a particular amount of time. This testing helps
to discover many problems in software design and functionality.
5)Security testing — is a process intended to reveal flaws in the security mechanisms of an
information system that protect data and maintain functionality as intended.
Focusing areas:
Network security
System security
Client-Server security
LEVEL OF TESTING
1)Unit Testing :
In this type of testing, errors are detected by individually testing each component or unit of the software to ensure that it is fit for use by the developers. A unit is the smallest testable part of the software.
2) Integration Testing :
In this testing, two or more unit-tested modules are integrated, i.e., the interacting components are combined, and it is then verified whether these integrated modules work as per expectation or not; interface errors are also detected.
3) System Testing :
In system testing, the complete, integrated software is tested, i.e., all the system elements forming the system are tested as a whole to meet the requirements of the system.
4) Acceptance Testing :
It is a kind of testing conducted to ensure that the requirements of the users are fulfilled prior to delivery and that the software works correctly in the user's working environment.
These tests can be conducted at various stages of software development; each level of testing corresponds to a particular phase of software development.
While performing the software testing, following Testing principles must be applied by every
software engineer:
● All tests should be traceable to customer requirements.
● Tests should be planned long before testing begins.
● The Pareto principle can be applied to software testing- 80% of all errors identified
during testing will likely be traceable to 20% of all program modules.
● Testing should begin “in the small” and progress toward testing “in the large”.
● Exhaustive testing which simply means to test all the possible combinations of data is
not possible.
● To be most effective, testing should be conducted by an independent third party.
There are different methods that can be used for software testing.
Black-Box Testing
Black box testing is a type of software testing in which the functionality of the software is not
known. The testing is done without the internal knowledge of the products.
1. Syntax-driven testing –
This type of testing is applied to systems that can be syntactically represented by some language, for example compilers, or a language that can be represented by a context-free grammar. In this, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning –
It is often seen that many types of inputs work similarly, so instead of testing all of them separately we can group them together and test only one input of each group. The idea is to partition the input domain of the system into a number of equivalence classes such that each member of a class works in a similar way, i.e., if a test case in one class results in some error, other members of the class would also result in the same error.
● Identification of equivalence classes – Partition the input domain into at least two sets: valid values and invalid values. For example, if the valid range is 0 to 100, then select one valid input like 49 and one invalid input like 104.
● Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid inputs mask each other.
To calculate the square root of a number, the equivalence classes will be: (i) valid inputs, i.e. numbers greater than or equal to zero, and (ii) invalid inputs, i.e. negative numbers.
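A sketch of test cases for this square-root example (the safe_sqrt function below is hypothetical): one representative value is tested from each equivalence class rather than every possible input:

import math
import unittest

def safe_sqrt(x):
    # Hypothetical unit under test: rejects the invalid class (negative numbers).
    if x < 0:
        raise ValueError("input must be >= 0")
    return math.sqrt(x)

class TestSqrtEquivalenceClasses(unittest.TestCase):
    def test_valid_class_representative(self):
        # One representative of the valid class (x >= 0).
        self.assertAlmostEqual(safe_sqrt(49), 7.0)

    def test_invalid_class_representative(self):
        # One representative of the invalid class (x < 0).
        with self.assertRaises(ValueError):
            safe_sqrt(-4)

if __name__ == "__main__":
    unittest.main()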
3. Boundary value analysis –
Boundaries are very good places for errors to occur. Hence, if test cases are designed for boundary values of the input domain, the efficiency of testing improves and the probability of finding errors also increases. For example, if the valid range is 10 to 100, then test 10 and 100 in addition to other valid and invalid inputs.
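A minimal sketch for this 10-to-100 range (the in_valid_range function is hypothetical): boundary value analysis tests values at and just beyond both boundaries:

def in_valid_range(x):
    # Hypothetical unit under test: accepts values in the range 10..100.
    return 10 <= x <= 100

# Boundary value analysis: values at the boundaries and just outside them.
boundary_cases = {9: False, 10: True, 11: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert in_valid_range(value) == expected, value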
4. Cause-effect graphing –
This technique establishes a relationship between logical inputs, called causes, and the corresponding actions, called effects. The causes and effects are represented using Boolean graphs. The following steps are followed:
(i) Identify the causes (inputs) and effects (outputs) of the system.
(ii) Construct a cause-effect graph linking the causes to the effects.
(iii) Convert the graph into a decision table.
(iv) Derive a test case from each column of the decision table.
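As a sketch (the two causes and one effect below are hypothetical), the decision table derived from a cause-effect graph maps each combination of causes to its expected effect, and each row becomes a test case:

# Hypothetical effect: a discount is granted only when the customer is a
# member AND the order total exceeds 100 (two causes, one effect).
def grant_discount(is_member, total_over_100):
    return is_member and total_over_100

# Decision table: each row is one combination of causes and the expected
# effect, and is executed as one test case.
decision_table = [
    # (is_member, total_over_100, expected_effect)
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]
for is_member, over_100, expected in decision_table:
    assert grant_discount(is_member, over_100) == expected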
6. Compatibility testing –
The test case results not only depend on the product but also on the infrastructure used for delivering its functionality. When infrastructure parameters are changed, the product is still expected to work properly. Some parameters that generally affect the compatibility of software are:
Advantages
● Efficient when used on large systems.
● Since the tester and developer are independent of each other, testing is balanced and unprejudiced.
● Tester can be non-technical.
● There is no need for the tester to have detailed functional knowledge of system.
● Tests will be done from an end user's point of view, because the end user should
accept the system. (This testing technique is sometimes also called Acceptance
testing.)
● Testing helps to identify vagueness and contradictions in functional specifications.
● Test cases can be designed as soon as the functional specifications are complete.
Disadvantages
● Test cases are challenging to design without having clear functional specifications.
● It is difficult to identify tricky inputs if the test cases are not developed based on
specifications.
● It is difficult to identify all possible inputs in limited testing time. As a result, writing
test cases may be slow and difficult.
● There are chances of having unidentified paths during the testing process.
● There is a high probability of repeating tests already performed by the programmer.
White-Box Testing
White box testing techniques analyze the internal structures: the data structures used, internal design, code structure and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing or structural testing.
● Processing: Performing risk analysis for guiding through the entire process.
● Proper test planning: Designing test cases so as to cover entire code. Execute
rinse-repeat until error-free software is reached. Also, the results are communicated.
Testing techniques:
1)Statement coverage:
In this technique, the aim is to traverse all statements at least once; hence, each line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps in pointing out faulty code.
2)Branch Coverage:
In this technique, test cases are designed so that each branch from all decision points is traversed at least once. In a flowchart, all edges must be traversed at least once.
3)Condition Coverage:
In this technique, all the possible combinations of the possible outcomes of conditions are
tested at least once.
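A sketch on a single hypothetical function (classify is not from the source) showing how the criteria differ: statement coverage needs every line executed, branch coverage needs both outcomes of the decision, and condition coverage needs every combination of the two atomic conditions:

def classify(x, y):
    # Hypothetical unit under test with one compound decision.
    if x > 0 and y > 0:
        return "both positive"
    return "not both positive"

# Statement and branch coverage: two cases are enough, e.g. (1, 1) and (-1, 1),
# since they execute every line and take both outcomes of the decision.
# Condition coverage (all combinations of x > 0 and y > 0): four cases.
condition_cases = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
for x, y in condition_cases:
    expected = "both positive" if (x > 0 and y > 0) else "not both positive"
    assert classify(x, y) == expected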
5) Basis Path Testing:
In this technique, control flow graphs are made from the code or flowchart, and then the cyclomatic complexity is calculated. The cyclomatic complexity defines the number of independent paths, so that the minimal number of test cases can be designed, one for each independent path.
Steps:
(i) Make the corresponding control flow graph.
(ii) Calculate the cyclomatic complexity.
(iii) Find the independent paths.
(iv) Design test cases corresponding to each independent path.
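A worked sketch (the grade function is hypothetical): for a control flow graph with E edges and N nodes, the cyclomatic complexity is V(G) = E - N + 2, or equivalently the number of decision points plus one; it gives the number of independent paths and therefore the minimum number of test cases:

def grade(score):
    # Hypothetical function with two decision points, so V(G) = 2 + 1 = 3,
    # giving three independent paths and hence three test cases.
    if score >= 90:
        return "A"
    if score >= 50:
        return "pass"
    return "fail"

# One test case per independent path.
for score, expected in [(95, "A"), (70, "pass"), (30, "fail")]:
    assert grade(score) == expected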
6)Loop Testing:
Loops are widely used and these are fundamental to many algorithms hence, their testing is
very important. Errors often occur at the beginnings and ends of loops.
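A minimal sketch (the sum_first function is hypothetical): simple-loop testing exercises the loop with zero iterations, one iteration, a typical count and the maximum count, since errors cluster at loop boundaries:

def sum_first(values, n):
    # Hypothetical loop under test: sums the first n values.
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]
# Loop testing: 0 iterations, 1 iteration, a typical count, the maximum count.
assert sum_first(data, 0) == 0
assert sum_first(data, 1) == 1
assert sum_first(data, 3) == 6
assert sum_first(data, 5) == 15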
Advantages
● White box testing is very thorough as the entire code and structures are tested.
● It results in the optimization of code removing error and helps in removing extra lines
of code.
● It can start at an earlier stage as it doesn’t require any interface as in case of black
box testing.
● Easy to automate.
Disadvantages
● White box testing requires programming and implementation knowledge.
● It is more time-consuming; it takes a long time to design test cases due to lengthy code.
● It is more exhaustive than black box testing.
Other characteristics of white-box testing:
● In white-box testing, the internal structure of the software is known to the tester.
● It is also known as structural testing, clear box testing, code-based testing, and transparent testing.
● It is well suited and recommended for algorithm testing.
● It is done at the lower levels of testing, that is, unit testing and integration testing.
● It is easy to automate white box testing.
● It is mainly performed by developers.
● The basis of this testing is the code, which is responsible for the internal working.
● Its main objective is to check the code quality.
● In white box testing, there is a possibility of early detection of defects.
● It can test the data domain and data boundaries in a better way.
● The types of white box testing are path testing, loop testing, and condition testing.
● In white-box testing, there is detection of hidden errors, and it also helps to optimize the code.
When object-oriented software is considered, the concept of the unit changes. Encapsulation
drives the definition of classes and objects. This means that each class and each instance
of a class packages attributes (data) and the operations that manipulate these data. An
encapsulated class is usually the focus of unit testing. However, operations (methods) within
the class are the smallest testable units. Because a class can contain a number of different
operations, and a particular operation may exist as part of a number of different classes, the
tactics applied to unit testing must change.
You can no longer test a single operation in isolation (the conventional view of unit testing)
but rather as part of a class. To illustrate, consider a class hierarchy in which an operation X
is defined for the superclass and is inherited by a number of subclasses. Each subclass
uses operation X, but it is applied within the context of the private attributes and operations
that have been defined for the subclass. Because the context in which operation X is used
varies in subtle ways, it is necessary to test operation X in the context of each of the
subclasses. This means that testing operation X in a stand-alone fashion (the conventional
unit-testing approach) is usually ineffective in the object-oriented context.
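A sketch of this point in Python (Account, SavingsAccount and the withdraw operation are hypothetical stand-ins for the superclass, a subclass and operation X): the inherited operation must be re-tested in the context of each subclass because the subclass context changes its behavior:

import unittest

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):          # "operation X" defined in the superclass
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

class SavingsAccount(Account):
    MIN_BALANCE = 100                    # subclass context that changes behavior

    def withdraw(self, amount):
        if self.balance - amount < self.MIN_BALANCE:
            raise ValueError("would drop below minimum balance")
        return super().withdraw(amount)

class TestWithdrawInEachContext(unittest.TestCase):
    def test_withdraw_in_superclass_context(self):
        self.assertEqual(Account(200).withdraw(150), 50)

    def test_withdraw_in_subclass_context(self):
        # The same operation behaves differently in the subclass context.
        with self.assertRaises(ValueError):
            SavingsAccount(200).withdraw(150)

if __name__ == "__main__":
    unittest.main()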
Class testing for OO software is the equivalent of unit testing for conventional software. Unlike unit testing of conventional software, which tends to focus on the algorithmic detail of a module and the data that flow across the module interface, class testing for OO software is driven by the operations encapsulated by the class and the state behavior of the class.
The use of drivers and stubs also changes when integration testing of OO systems is conducted. Drivers can be used to test operations at the lowest level and for the testing of whole groups of classes. A driver can also be used to replace the user interface so that tests of system functionality can be conducted prior to implementation of the interface. Stubs can be used in situations in which collaboration between classes is required but one or more of the collaborating classes has not yet been fully implemented.
The strategy for WebApp testing adopts the basic principles for all software testing and applies a strategy and tactics that are used for object-oriented systems. The following steps summarize the approach:
2. The interface model is reviewed to ensure that all use cases can be
accommodated.
10. The WebApp is tested by a controlled and monitored population of end users. The
results of their interaction with the system are evaluated for errors.
Because many WebApps evolve continuously, the testing process is an ongoing activity, conducted by support staff who use regression tests derived from the tests developed when the WebApp was first engineered.
Software Maintenance
Software Maintenance is the process of modifying a software product after it has been
delivered to the customer. The main purpose of software maintenance is to modify and
update software applications after delivery to correct faults and to improve performance.
● Correct faults.
● Improve the design.
● Implement enhancements.
● Interface with other systems.
● Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
● Migrate legacy software.
● Retire software.
Challenges in software maintenance:
● The typical useful life of any software is considered to be about ten to fifteen years; since software maintenance is open-ended and may continue for decades, it becomes very expensive.
● Older software, which was intended to work on slow machines with less memory and storage capacity, cannot hold its own against newer, enhanced software running on modern hardware.
● Changes are often left undocumented, which may cause more conflicts in the future.
● As technology advances, it becomes costly to maintain old software.
● Changes made to the software can easily harm its original structure, making any subsequent changes difficult.
Categories of Software Maintenance –
1)Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs
observed while the system is in use, or to enhance the performance of the system.
2)Adaptive maintenance:
This includes modifications and updations when the customers need the product to run on
new platforms, on new operating systems, or when they need the product to interface with
new hardware and software.
3)Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to
change different types of functionalities of the system according to the customer demands.
4)Preventive maintenance:
This type of maintenance includes modifications and updations to prevent future problems with the software. It aims to address problems which are not significant at this moment but may cause serious issues in the future.
Software Evolution
Software Evolution is a term which refers to the process of developing software initially, then
timely updating it for various reasons, i.e., to add new features or to remove obsolete
functionalities etc. The evolution process includes fundamental activities of change analysis,
release planning, system implementation and releasing a system to customers.
The cost and impact of these changes are assessed to see how much of the system is affected by the change and how much it might cost to implement the change. If the proposed changes are accepted, a new release of the software system is planned. During release planning, all the proposed changes (fault repair, adaptation, and new functionality) are considered. A decision is then made on which changes to implement in the next version of the system. The process of change implementation is an iteration of the development process where the revisions to the system are designed, implemented and tested.
b) Environment change:
As the working environment changes, the things (tools) that enable us to work in that environment also change proportionally. The same happens in the software world: as the working environment changes, organizations need to reintroduce old software with updated features and functionality to adapt to the new environment.
c) Errors and bugs:
As the age of the deployed software within an organization increases, its precision and reliability decrease, and its ability to bear an increasingly complex workload also continually degrades. So, in that case, it becomes necessary to avoid using obsolete and aged software. All such obsolete software needs to undergo the evolution process in order to become robust enough for the workload complexity of the current environment.
d) Security risks:
Using outdated software within an organization may put you on the verge of various software-based cyberattacks and could illegally expose confidential data associated with the software in use. So, it becomes necessary to avoid such security breaches through regular assessment of the security patches/modules used within the software. If the software isn't robust enough to withstand current cyberattacks, it must be changed (updated).
e) For having new functionality and features:
In order to increase performance, enable faster data processing and improve other functionalities, an organization needs to continuously evolve the software throughout its life cycle so that the stakeholders and clients of the product can work efficiently.
Lehman's laws of software evolution include the following:
Increasing Complexity: As an evolving program changes, its structure becomes more complex unless effective efforts are made to avoid this phenomenon.
Conservation of Organisational Stability: Over the lifetime of a program, the rate of development of that program is approximately constant and independent of the resources devoted to system development.
Conservation of Familiarity: During the active lifetime of the program, the changes made in successive releases are almost constant.