Chapter 3: Levels of Testing and Special Tests
Levels of Testing
Unit Testing
Unit Testing is a level of software testing where individual units/components of the software are tested. The purpose is to validate that each unit of the software performs as designed.
Unit Testing is the first level of testing and is performed
prior to Integration Testing.
A unit is the smallest testable part of software. It usually
has one or a few inputs and usually a single output.
It is executed by the Developer.
Unit Testing is performed using the White Box Testing method.
Example: verifying that a function, method, loop, or statement in a program is working fine.
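As a minimal sketch (the function and test names here are illustrative, not from the text), a unit test for a single small function might look like this in Python's unittest framework:

import unittest

# Hypothetical unit under test: a single, small function.
def calculate_total(prices):
    return sum(prices)

class TestCalculateTotal(unittest.TestCase):
    def test_sums_prices(self):
        self.assertEqual(calculate_total([10, 20, 30]), 60)

    def test_empty_list_gives_zero(self):
        self.assertEqual(calculate_total([]), 0)

if __name__ == "__main__":
    unittest.main()

Each test exercises the unit in isolation, which is what distinguishes unit testing from integration testing.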
Drivers
Drivers are used in the bottom-up integration testing approach.
A driver simulates the behavior of an upper-level module that has not been integrated yet.
Driver modules act as temporary replacements for the missing calling module and behave like the actual product.
Drivers are also used to interact with external systems and are usually more complex than stubs.
Driver: Calls the Module to be tested.
Now suppose you have modules B and C ready, but module A, which calls functions from modules B and C, is not ready. The developer will write a dummy piece of code in place of module A that calls modules B and C and passes test values to them. This dummy piece of code is known as a driver.
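A minimal sketch of such a driver is shown below; the module and function names are hypothetical stand-ins, with B and C represented as plain functions that are assumed to be ready:

# Assume modules B and C are ready; they are represented here as simple functions.
def module_b_calculate_total(values):
    return sum(values)

def module_c_format_report(total):
    return "Total: " + str(total)

# The driver plays the role of the missing module A: it calls B and C
# with test inputs and checks their outputs.
def driver_for_module_a():
    total = module_b_calculate_total([10, 20, 30])
    assert total == 60
    report = module_c_format_report(total)
    assert report == "Total: 60"

if __name__ == "__main__":
    driver_for_module_a()
    print("Driver checks on modules B and C passed")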
Stubs
Stubs are used in the top-down integration testing approach.
A stub simulates the behavior of a lower-level module that has not been integrated yet.
Stubs act as temporary replacements for the missing called module and provide the same output as the actual product.
Stubs are also used when the module under test needs to interact with an external system.
Stub: Is called by the Module under Test.
Assume you have 3 modules: Module A, Module B, and Module C. Module A is ready and we need to test it, but Module A calls functions from Modules B and C, which are not ready. The developer will write dummy modules that simulate B and C and return values to Module A. This dummy code is known as a stub.
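A minimal sketch of module A being tested with stubs in place of B and C (all names are hypothetical) could look like this:

# Module A is ready and under test; it depends on B and C for price and tax data.
def module_a_compute_invoice(get_price, get_tax_rate):
    price = get_price("item-1")
    return price * (1 + get_tax_rate())

# Stubs: temporary replacements for the not-yet-ready modules B and C,
# returning canned values so module A can be exercised.
def stub_get_price(item_id):
    return 100.0

def stub_get_tax_rate():
    return 0.2

if __name__ == "__main__":
    result = module_a_compute_invoice(stub_get_price, stub_get_tax_rate)
    assert abs(result - 120.0) < 1e-9
    print("Module A works correctly with stubbed B and C")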
Stub vs. Driver
Type: Both stubs and drivers are dummy code.
Disadvantages (top-down approach with stubs):
• Needs many stubs.
• Modules at the lower level are tested inadequately.
Unit Testing vs. Integration Testing
• Unit testing is a type of testing that checks whether a small piece of code is doing what it is supposed to do. Integration testing is a type of testing that checks whether different modules are working together.
• Unit testing checks a single component of an application. The behavior of the integrated modules is considered in integration testing.
• The scope of unit testing is narrow: it covers the unit or small piece of code under test. The scope of integration testing is wide: it covers the whole application under test.
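For contrast, a minimal sketch of an integration check (both functions are hypothetical stand-ins for two separate modules) exercises the pieces working together rather than one unit in isolation:

# Hypothetical "module B": looks up a discount percentage.
def get_discount_percent(customer_type):
    return {"regular": 0, "premium": 10}[customer_type]

# Hypothetical "module A": depends on module B for its result.
def compute_price(base_price, customer_type):
    return base_price - base_price * get_discount_percent(customer_type) // 100

def test_price_and_discount_modules_work_together():
    # Integration check: both modules are exercised together.
    assert compute_price(100, "premium") == 90
    assert compute_price(100, "regular") == 100

if __name__ == "__main__":
    test_price_and_discount_modules_work_together()
    print("Integration check passed")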
System Testing
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.
It is performed after integration testing is completed.
It is mainly a black box type of testing. This testing evaluates the working of the system from the user's point of view, with the help of the specification document. It does not require any internal knowledge of the system, such as the design or structure of the code.
It covers both functional and non-functional areas of the application/product.
System testing is performed in the context of a System Requirement Specification (SRS) and/or a Functional Requirement Specification (FRS). It is the final test to verify that the product to be delivered meets the specifications mentioned in the requirement document. It should investigate both functional and non-functional requirements.
It mainly focuses on the following:
External interfaces
complex functionalities
Security
Recovery
Performance
Operators' and users' smooth interaction with the system
Documentation
Usability
Load / Stress
Recovery Testing
Recovery testing is a non-functional testing technique performed to determine how quickly the system can recover after it has gone through a system crash or hardware failure.
Recovery testing is the forced failure of the software to verify whether recovery is successful.
For example: When an application is receiving data from a
network, unplug the connecting cable. After some time, plug the
cable back in and analyze the application’s ability to continue
receiving data from the point at which the network connection was
broken.
Example: Restart the system while a browser has a definite
number of sessions and check whether the browser is able to
recover all of them or not.
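As a hedged sketch of the resume-from-where-it-stopped idea (the downloader class and its checkpoint mechanism are assumptions for illustration), a recovery check could look like this:

# Minimal sketch: a downloader that saves its progress so it can resume
# after a simulated crash (all names are hypothetical).
class ResumableDownload:
    def __init__(self, total_bytes):
        self.total_bytes = total_bytes
        self.received = 0  # in a real system this checkpoint would be persisted

    def receive(self, chunk_size):
        self.received = min(self.total_bytes, self.received + chunk_size)

def test_download_recovers_after_failure():
    download = ResumableDownload(total_bytes=100)
    download.receive(40)
    checkpoint = download.received          # state saved before the "crash"

    # Simulate crash and restart: a new object restored from the checkpoint.
    restarted = ResumableDownload(total_bytes=100)
    restarted.received = checkpoint
    restarted.receive(60)

    assert restarted.received == 100        # resumed from byte 40, not from 0

if __name__ == "__main__":
    test_download_recovers_after_failure()
    print("Recovery check passed")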
Security Testing
Security testing is a testing technique to determine whether an information system protects data and maintains functionality as intended.
It also aims at verifying 6 basic principles as listed
below:
Confidentiality
Integrity
Authentication
Authorization
Availability
Non-repudiation
Confidentiality
Ensuring that information is accessible only to those who are authorized to access it, and is protected from disclosure to unintended parties.
Authentication
This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claim it to be, or assuring that a computer program is a trusted one.
Authorization
The process of determining that a requester is allowed to receive
a service or perform an operation.
Access control is an example of authorization.
Availability
Assuring information and communications services will be ready
for use when expected.
Information must be kept available to authorized persons when
they need it.
Non-repudiation (acknowledgment)
In reference to digital security, non-repudiation means to ensure
that a transferred message has been sent and received by the
parties claiming to have sent and received the message. Non-
repudiation is a way to guarantee that the sender of a message
cannot later deny having sent the message and that the
recipient cannot deny having received the message.
Examples:
• A Student Management System is insecure if the 'Admission' branch can edit the data of the 'Exam' branch.
• An ERP system is not secure if a DEO (data entry operator) can generate 'Reports'.
• An online shopping mall has no security if customers' credit card details are not encrypted.
• Custom software possesses inadequate security if an SQL query retrieves the actual passwords of its users.
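As a hedged illustration of the last example (the table and column names are hypothetical), a simple security check could assert that a stored password is a hash rather than the plain text:

import hashlib
import sqlite3

def test_passwords_are_not_stored_in_plain_text():
    # Hypothetical in-memory user table for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
    plain_password = "s3cret!"
    # A real system would also salt and use a slow hash; SHA-256 is just a sketch.
    stored = hashlib.sha256(plain_password.encode()).hexdigest()
    conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", stored))

    row = conn.execute(
        "SELECT password_hash FROM users WHERE username = ?", ("alice",)
    ).fetchone()
    # The query must never return the actual (plain-text) password.
    assert row[0] != plain_password
    assert len(row[0]) == 64  # looks like a SHA-256 digest, not the raw password

if __name__ == "__main__":
    test_passwords_are_not_stored_in_plain_text()
    print("Password storage check passed")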
Performance Testing
Performance Testing is a type of testing performed to ensure software applications will perform well under their expected workload.
A software application's performance attributes, such as response time, reliability, resource usage, and scalability, all matter.
The goal of Performance Testing is not to find bugs but to eliminate performance bottlenecks.
The focus of Performance Testing is checking a software program's:
Speed - determines whether the application responds quickly.
Scalability - determines the maximum user load the software application can handle.
Stability - determines whether the application is stable under varying loads.
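A minimal sketch of the speed aspect (the operation being timed and the 200 ms threshold are assumptions for illustration):

import time

def process_order(order_id):
    # Hypothetical stand-in for the operation whose speed is being measured.
    time.sleep(0.05)
    return {"order_id": order_id, "status": "processed"}

def test_response_time_is_under_threshold():
    start = time.perf_counter()
    process_order(42)
    elapsed = time.perf_counter() - start
    # Illustrative acceptance criterion: respond within 200 ms.
    assert elapsed < 0.2, "Too slow: " + str(round(elapsed, 3)) + " s"

if __name__ == "__main__":
    test_response_time_is_under_threshold()
    print("Speed check passed")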
Performance Testing Process
1) Identify your testing environment –
Do a proper requirement study and analyze the test goals and objectives. Also determine the testing scope along with the test initiation checklist. Identify the logical and physical production architecture for performance testing, and identify the software, hardware, and network configurations required to kick off the performance testing. Compare the test and production environments while identifying the testing environment. Resolve any environment-related concerns, and analyze whether additional tools are required for performance testing. This step also helps to identify the probable challenges a tester may face during performance testing.
2) Identify the performance acceptance criteria –
Identify the desired performance characteristics of the
application like Response time, Throughput and Resource
utilization.
3)Plan & design performance tests –
Planning and designing performance tests involves identifying
key usage scenarios, determining appropriate variability
across users, identifying and generating test data, and
specifying the metrics to be collected. Ultimately, these items
will provide the foundation for workloads and workload
profiles. The output of this stage is prerequisites for Test
execution are ready, all required resources, tools & test data
are ready.
4) Configure the test environment –
Prepare the conceptual strategy, available tools, and designed tests, along with the testing environment, before execution. The output of this stage is a configured load-generation environment and resource-monitoring tools.
5) Implement test design –
Create your performance tests according to the test plan and design.
6) Execute the tests –
Execute the tests, then collect and analyze the data.
Investigate problems such as bottlenecks (memory, disk, processor, process, cache, network, etc.) and resource usage (memory, CPU, network, etc.).
Generate performance analysis reports containing all performance attributes of the application.
Based on the analysis, prepare a recommendation report.
Repeat the above tests for the new build received from the client after the bugs are fixed and the recommendations are implemented.
7) Analyze results, report, and retest –
Consolidate, analyze, and share the test results.
Based on the test report, re-prioritize the tests and re-execute them. When every test result for a scenario is within the specified metric limits and all results are within the threshold limits, testing of that scenario on that particular configuration is complete.
Alpha Testing vs. Beta Testing

Test Goals
• Alpha: Performed at the developer's site in a testing environment; hence the activities can be controlled. Beta: Performed in the real environment; hence the activities cannot be controlled.
• Alpha: Only functionality and usability are tested; reliability and security testing are not usually performed in depth. Beta: Functionality, usability, reliability, and security testing are all given equal importance.
• Alpha: White box and/or black box testing techniques are involved. Beta: Only black box testing techniques are involved.
• Alpha: The build released for Alpha Testing is called the Alpha Release. Beta: The build released for Beta Testing is called the Beta Release.
• Alpha: System Testing is performed before Alpha Testing. Beta: Alpha Testing is performed before Beta Testing.
• Alpha: Issues/bugs are logged into the identified tool directly and are fixed by the developer at high priority. Beta: Issues/bugs are collected from real users in the form of suggestions/feedback and are considered as improvements for future releases.
• Alpha: Helps to identify different views of product usage, as different business streams are involved. Beta: Helps to understand the possible success rate of the product based on real users' feedback/suggestions.
When
• Alpha: Usually after the System Testing phase, when the product is 70% - 90% complete. Beta: Usually after Alpha Testing, when the product is 90% - 95% complete.
• Alpha: Features are almost frozen and there is no scope for major enhancements. Beta: Features are frozen and no enhancements are accepted.
• Alpha: The build should be stable for technical users. Beta: The build should be stable for real users.
Test Duration
• Alpha: Many test cycles are conducted. Beta: Only 1 or 2 test cycles are conducted.
• Alpha: Each test cycle lasts 1 - 2 weeks. Beta: Each test cycle lasts 4 - 6 weeks.
• Alpha: The duration also depends on the number of issues found and the number of new features added. Beta: Test cycles may increase based on real users' feedback/suggestions.
Stakeholders
• Alpha: Engineers (in-house developers), the Quality Assurance team, and the Product Management team. Beta: Product Management, Quality Management, and User Experience teams.
Participants
• Alpha: Technical experts and specialized testers with good domain knowledge (new testers, or those who were already part of the System Testing phase), and subject matter experts. Beta: End users for whom the product is designed.
• Alpha: Customers and/or end users can participate in Alpha Testing in some cases. Beta: Customers also usually participate in Beta Testing.
Expectations
• Alpha: An acceptable number of bugs that were missed in earlier testing activities. Beta: A mostly complete product with very few bugs and crashes.
• Alpha: Incomplete features and documentation. Beta: Almost complete features and documentation.
Entry Criteria
Alpha:
• Alpha tests designed and reviewed for business requirements
• Traceability matrix should be achieved between alpha tests and requirements
• Testing team with knowledge about the domain and product
• Environment setup and build ready for execution
• Tool setup should be ready for bug logging and test management
• System Testing should be signed off (ideally)
Beta:
• Beta tests (what to test and test procedures) documented for product usage
• No need for a traceability matrix
• End users and customer team-up identified
• End-user environment set up
• Tool setup should be ready to capture feedback/suggestions
• Alpha Testing should be signed off
Exit Criteria
Alpha:
• All the alpha tests should be executed and all the cycles should be completed
• Critical/major issues should be fixed and retested
• Effective review of feedback provided by participants should be completed
• Alpha Test summary report prepared
• Alpha Testing should be signed off
Beta:
• All the cycles should be completed
• Critical/major issues should be fixed and retested
• Effective review of feedback provided by participants should be completed
• Beta Test summary report prepared
• Beta Testing should be signed off
Pros
Alpha:
• Helps to uncover bugs that were not found during previous testing activities
• Better view of product usage and reliability
• Possible risks during and after the product launch can be analyzed
• Helps the team prepare for future customer support
• Helps to build customer faith in the product
• Reduces maintenance cost, as bugs are identified and fixed before the Beta/Production launch
• Easy test management
Beta:
• Product testing is not controllable and users may test any available feature in any way, so corner areas are well tested
• Helps to uncover bugs that were not found during previous testing activities (including alpha)
• Better view of product usage, reliability, and security
• The real user's perspective and opinion on the product can be analyzed
• Feedback/suggestions from real users help in improving the product in the future
• Helps to increase customer satisfaction with the product
Cons
Alpha:
• Not all the functionality of the product is expected to be tested
• Only business requirements are in scope
Beta:
• The defined scope may or may not be followed by participants
• Documentation is extensive and time-consuming: it is needed for the bug-logging tool (if required), the tool used to collect feedback/suggestions, and the test procedures (installation/uninstallation, user guides)
• Not all participants guarantee quality testing
• Not all feedback is effective, and the time taken to review feedback is high
• Test management is much more difficult
Smoke Testing
Smoke testing is surface-level testing performed to certify that the build provided by development to QA is ready to be accepted for further testing.
In smoke testing we check only the major functionality of the software.
Smoke testing is also known as a BAT (Build Acceptance Test) because it establishes the acceptance criteria for QA to accept or reject a build for further testing. So, apart from smoke testing, it is also very important for software people to know what a build is.
A build is a version of the software, typically one that is still in the testing stage.
Smoke testing is performed by developers before releasing the build to the testing team; after the build is released, it is performed by testers to decide whether or not to accept the build for further testing.
If the build clears the smoke test, it is accepted by QA for further testing; however, if the build fails the smoke test, it is rejected and QA reverts to the previously accepted build.
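A minimal sketch of such a build acceptance check (the URL is a hypothetical entry point of the build under test):

import urllib.request

def smoke_test_homepage_is_reachable(base_url):
    # Smoke check: the application starts and its entry point responds.
    with urllib.request.urlopen(base_url, timeout=5) as response:
        assert response.status == 200

if __name__ == "__main__":
    smoke_test_homepage_is_reachable("http://localhost:8080/")
    print("Smoke test passed: build accepted for further testing")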
Smoke Testing Example
Suppose a bug is reported by the testing team to the development team to fix. When the development team fixes the bug and passes the build back to the testing team, the testing team checks the other modules of the application, that is, it checks that the bug fix does not affect the functionality of the other modules. Keep in mind that the testing team checks only the main functionality of the modules and does not go deep into the details, because of the short time available; this is sanity testing.
Smoke Testing vs. Sanity Testing
• Smoke Testing is performed to ascertain that the critical functionalities of the program are working fine. Sanity Testing is done to check that the new functionality works and the reported bugs have been fixed.
• The objective of smoke testing is to verify the "stability" of the system in order to proceed with more rigorous testing. The objective of sanity testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
• Smoke testing is like a general health check-up. Sanity testing is like a specialized health check-up.
Regression Testing
Regression Testing is defined as a type of software testing performed to confirm that a recent program or code change has not adversely affected existing features.
Regression Testing is nothing but a full or partial selection of already executed test cases which are re-executed to ensure existing functionalities work fine.
This testing is done to make sure that new code changes do not have side effects on the existing functionalities. It ensures that the old code still works once the new code changes are done.
Regression Testing is required when there is a:
Change in requirements and code is modified according to the requirement
New feature is added to the software
Defect fixing
Performance issue fix
Regression Testing Techniques:
Retest All
This is one of the methods for Regression Testing in which all
the tests in the existing test bucket or suite should be re-
executed. This is very expensive as it requires huge time and
resources.
Regression Test Selection
Instead of re-executing the entire test suite, it is better to select a part of the test suite to be run.
Test cases selected can be categorized as 1) Reusable Test
Cases 2) Obsolete Test Cases.
Re-usable Test cases can be used in succeeding regression
cycles.
Obsolete Test Cases can't be used in succeeding cycles.
Prioritization of Test Cases
Prioritize the test cases depending on business impact, critical
& frequently used functionalities. Selection of test cases
based on priority will greatly reduce the regression test suite.
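A minimal sketch of priority-based selection (the test-case records and the priority scale are assumptions for illustration):

# Each test case carries a priority; 1 = highest business impact.
test_cases = [
    {"id": "TC-01", "name": "login", "priority": 1},
    {"id": "TC-02", "name": "checkout", "priority": 1},
    {"id": "TC-03", "name": "profile picture upload", "priority": 3},
    {"id": "TC-04", "name": "search", "priority": 2},
]

def select_regression_suite(cases, max_priority):
    # Keep only the cases at or above the chosen priority cut-off,
    # then run the most critical ones first.
    selected = [c for c in cases if c["priority"] <= max_priority]
    return sorted(selected, key=lambda c: c["priority"])

if __name__ == "__main__":
    for case in select_regression_suite(test_cases, max_priority=2):
        print("Run " + case["id"] + ": " + case["name"])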
Selecting test cases for regression testing
It has been found from industry data that a good number of the defects reported by customers were due to last-minute bug fixes creating side effects; hence, selecting test cases for regression testing is an art and is not that easy.
The following are among the most important tools used for both functional and regression testing:
Selenium: an open-source tool used for automating web applications. Selenium can be used for browser-based regression testing.
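A hedged sketch of a browser-based regression check with Selenium (the URL, element IDs, and expected page title are assumptions; a matching WebDriver such as chromedriver must be installed):

from selenium import webdriver
from selenium.webdriver.common.by import By

def regression_check_login_still_works():
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8080/login")
        driver.find_element(By.ID, "username").send_keys("test_user")
        driver.find_element(By.ID, "password").send_keys("test_password")
        driver.find_element(By.ID, "login-button").click()
        # Re-executed after every code change to confirm login still works.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    regression_check_login_still_works()
    print("Regression check passed")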