Testing lifecycle
Types of testing
Testing Artifacts
Test plan
Test cases
Test scenario
Test data
Defect lifecycle management
Role of BA in testing
Entry/exit criteria
Issue log
UAT
Role of BA in UAT
Smoke, regression, and sanity testing in detail
Manual testing vs. automation testing
TDD approach
STLC Stage: Test Cycle Closure
- Entry Criteria: testing has been completed; test results are available; defect logs are available
- Activity: evaluate cycle completion criteria based on time, test coverage, cost, critical business objectives and quality; prepare test metrics based on these parameters; document learnings; prepare the test closure report; qualitative/quantitative reporting of the work to the customer; test result analysis
- Exit Criteria: test closure report signed off by the client
- Deliverables: test closure report; test metrics
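To make the metrics activity concrete, here is a minimal sketch in Python of how closure metrics such as execution rate, pass rate and defect density might be computed; every count in the example call is a hypothetical placeholder, not data from a real test cycle.

```python
# Minimal sketch of cycle-closure test metrics; all counts in the example
# call below are hypothetical placeholders.
def closure_metrics(total_cases, executed, passed, defects, size_kloc):
    """Compute a few common test metrics for the closure report."""
    return {
        "execution_rate_%": round(100.0 * executed / total_cases, 1),
        "pass_rate_%": round(100.0 * passed / executed, 1),
        "defect_density_per_kloc": round(defects / size_kloc, 2),
    }

print(closure_metrics(total_cases=200, executed=190, passed=171,
                      defects=38, size_kloc=12.5))
```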
UNIT TESTING
Two types:
1. Manual
2. Automated:
a. A developer writes a small section of code in the application just to test a
function; this test code is removed when the application is deployed.
b. A developer can also isolate the function to test it more rigorously. Isolating
the code helps reveal unnecessary dependencies between the code being
tested and other units or data spaces.
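A minimal sketch of an automated unit test using Python's standard unittest module; apply_discount is a hypothetical function standing in for the unit under test.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # The unit is tested in isolation: no other modules are involved.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```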
INTEGRATION TESTING
Definition: a type of software testing where software modules are logically integrated and
tested as a group. Its purpose is to expose faults in the interaction between the
integrated units. It is performed after unit testing.
It focuses mainly on the interfaces and the flow of data/information between the modules.
Priority is given to the integration links rather than the unit functions, which have already
been tested.
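A brief sketch of the idea, assuming two hypothetical modules (a repository and a service): the units are combined, and the test targets the interface and data flow between them rather than their internal logic.

```python
import unittest

# Two hypothetical modules: a repository and a service that depends on it.
class InMemoryUserRepository:
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

class GreetingService:
    def __init__(self, repository):
        self.repository = repository

    def greet(self, user_id):
        name = self.repository.find(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

class GreetingIntegrationTest(unittest.TestCase):
    def test_service_and_repository_together(self):
        # Each unit is assumed to be unit-tested already; here we exercise
        # the link between them: data saved by one must flow to the other.
        repo = InMemoryUserRepository()
        repo.save(1, "Asha")
        self.assertEqual(GreetingService(repo).greet(1), "Hello, Asha!")

if __name__ == "__main__":
    unittest.main()
```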
SYSTEM TESTING
System testing is the testing of a complete and fully integrated software product, in which
the software is interfaced with other software/hardware systems. It is a series of different
tests whose sole purpose is to exercise the full computer-based system.
Two Categories of Software Testing
- Black Box Testing: the tester has no information about the internal workings
of the software. It is a high level of testing that focuses on the behavior of the
software and involves testing from an external or end-user perspective. It can be applied
to every level of testing: unit, integration, system or acceptance.
- White Box Testing: checks the internal functioning (internal workings or code) of
the system. It is considered low-level testing.
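A small illustration of the difference, using a hypothetical classify_triangle function: the black-box assertions are derived from the specification alone, while the white-box assertions are chosen by reading the code so that every branch is exercised.

```python
# Hypothetical function under test.
def classify_triangle(a, b, c):
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box: cases chosen from the specification (inputs -> expected outputs),
# with no knowledge of how classify_triangle is written.
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 1, 10) == "not a triangle"

# White-box: cases chosen by reading the code so that every branch above
# is exercised at least once.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
```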
SMOKE TESTING
Software testing performed after a software build to ascertain that the critical functionalities
of the program are working fine. It is executed before any detailed functional or regression
tests are run on the software build.
The purpose is to reject a badly broken application so that the QA team does not waste
time installing and testing the software application.
The objective is not to perform exhaustive testing, but to verify that the critical functionalities
of the system are working fine.
E.g. verify that the application launches successfully, check that the GUI is responsive, etc.
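A minimal smoke-test sketch, assuming the build exposes a health endpoint at a hypothetical URL; it only decides whether the build is stable enough to hand over for detailed testing.

```python
# Minimal smoke-test sketch: verify only that the build is alive enough to
# be worth testing further. The URL below is a hypothetical health endpoint.
import sys
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # assumed endpoint

def smoke_check():
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    if not smoke_check():
        # Reject the badly broken build before any detailed testing begins.
        sys.exit("Smoke test failed: build rejected.")
    print("Smoke test passed: build accepted for detailed testing.")
```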
SANITY TESTING
Software testing performed after receiving a software build with minor changes in code or
functionality, to ascertain that the bugs have been fixed and no further issues have been
introduced by those changes. If the sanity test fails, the build is rejected to save the time
and cost involved in more rigorous testing.
The objective is not to verify the new functionality thoroughly but to determine that the
developer has applied some rationality (sanity) while producing the software.
REGRESSION TESTING
Type of software testing to confirm that a recent program or code change has not adversely
affected existing features.
It is a full or partial selection of already executed test cases which are re-executed to ensure
that existing functionalities still work fine. New code changes should not have side effects on
existing functionalities; the old code must still work once the new code changes are made.
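One common way to select and re-execute such test cases is to tag them, e.g. with pytest markers, as in this sketch; the login and export_report functions are hypothetical stand-ins for application code.

```python
import pytest

# Hypothetical application functions standing in for real, already-tested code.
def login(user, password):
    return bool(user and password)

def export_report(fmt):
    return "ok" if fmt in ("pdf", "csv") else "unsupported"

@pytest.mark.regression
def test_existing_login_still_works():
    # Already-executed test case, re-run to confirm that old behavior
    # survives the new code changes.
    assert login("asha", "secret") is True

def test_brand_new_feature():
    # New-feature test; not selected when only the regression subset runs.
    assert export_report("pdf") == "ok"
```

Running `pytest -m regression` re-executes only the marked subset (a partial regression run), while a plain `pytest` run executes everything (a full run); the `regression` marker would normally be registered in pytest.ini.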
How to do UAT?
- Analysis of requirements
- UAT plan creation
- Identify test scenarios
- Creation of UAT test cases (a sample test case record follows this list)
- Test data preparation
- Test run
- Confirm business objectives
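A sketch of what one UAT test case record might capture, tying a business requirement to a scenario, steps, test data and an expected result; all field names and values are hypothetical examples, not a prescribed format.

```python
# Sketch of one UAT test case record; every value is a hypothetical example.
uat_test_case = {
    "id": "UAT-007",
    "business_requirement": "Customer can download a monthly statement",
    "scenario": "Logged-in customer downloads last month's statement",
    "steps": [
        "Log in as a customer",
        "Open Accounts > Statements",
        "Select last month and click Download",
    ],
    "test_data": {"user": "uat_customer_01", "month": "2024-05"},
    "expected_result": "PDF statement for the selected month is downloaded",
    "actual_result": None,     # filled in during the test run
    "status": "Not executed",  # Pass/Fail once the run confirms the objective
}
print(uat_test_case["id"], "-", uat_test_case["scenario"])
```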
PERFORMANCE TESTING
Type of software testing to ensure software applications will perform well under their
expected load. Attributes such as response time, reliability, resource usage and
scalability matter. The goal of performance testing is to identify and eliminate
performance bottlenecks.
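A minimal sketch of measuring response time under repeated load, with a hypothetical handle_request standing in for the operation under test; a real performance test would use a dedicated tool, but the idea is the same.

```python
# Minimal sketch of a response-time measurement under repeated load.
import statistics
import time

def handle_request():
    # Hypothetical stand-in for the operation under test (simulated work).
    return sum(i * i for i in range(10_000))

def measure(requests=1_000):
    timings = []
    for _ in range(requests):
        start = time.perf_counter()
        handle_request()
        timings.append((time.perf_counter() - start) * 1000)  # milliseconds
    timings.sort()
    return {
        "median_ms": statistics.median(timings),
        "p95_ms": timings[int(0.95 * len(timings))],
        "max_ms": timings[-1],
    }

print(measure())
```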
MANUAL TESTING
Testers manually execute test cases without using any automation tools. It is the most
primitive of all testing types and helps find bugs in the software system.
Any new application must be manually tested before its testing can be automated.
Manual testing requires more effort but is necessary to check automation feasibility.
Manual testing does not require knowledge of any testing tool.
TEST SCENARIO
Any functionality that can be tested. It is also called a Test Condition or Test Possibility. As
a tester, you may put yourself in the end user's shoes and figure out the real-world scenarios
and use cases of the AUT (application under test).
TEST PLAN
Detailed document that outlines the test strategy, testing objectives, resources
(manpower, software, hardware) required for testing, test schedule, test estimation and
test deliverables.
It serves as a blueprint to conduct software testing activities as a defined process which is
monitored and controlled by the test manager.
The test plan:
- Helps determine the effort needed to validate the quality of the application
- Helps the test team understand the details of testing
- Guides thinking; it is a rule book
TEST DATA
Test data is the input given to a software program; it represents data that affects or is
affected by the execution of the specific module.
Some data may be used for positive testing (verify that a given set of input to a given
function produces an expected result)
Other data may be used for negative testing (test the ability of the program to handle
unusual, extreme, exceptional or unexpected input).
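A small sketch of positive and negative test data driving the same hypothetical module (an age validator), written with pytest's parametrize.

```python
# Sketch of positive and negative test data for a hypothetical age validator.
import pytest

def parse_age(value):
    """Hypothetical module under test: parse and validate an age field."""
    age = int(value)  # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Positive test data: a given set of inputs produces the expected result.
@pytest.mark.parametrize("raw, expected", [("0", 0), ("42", 42), ("130", 130)])
def test_valid_ages(raw, expected):
    assert parse_age(raw) == expected

# Negative test data: unusual, extreme or unexpected input must be handled.
@pytest.mark.parametrize("raw", ["-1", "131", "abc", ""])
def test_invalid_ages_are_rejected(raw):
    with pytest.raises(ValueError):
        parse_age(raw)
```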
DEFECT
The variation between the expected and actual test results is referred to as a software defect.
Defects are also referred to as issues, problems, bugs or incidents.
FAILURE
Software fails to perform its required function.
BUG
A bug or defect occurs when the expected and actual behavior do not match.
ISSUE
Defined as a unit of work to accomplish an improvement in a system.
It could be a bug, change request, task, missing documentation etc.
Issue Management: process to make others aware of the problem and then resolve it as fast as
possible.
The tool used to record project issues is the issue log. There is always a priority level assigned to an issue.
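A sketch of what a single issue-log entry might record; the fields and values are hypothetical examples, not a mandated format.

```python
# Sketch of one issue-log entry; fields and values are hypothetical examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class IssueLogEntry:
    issue_id: str
    description: str
    issue_type: str   # bug, change request, task, missing documentation, ...
    priority: str     # every issue carries a priority level
    raised_by: str
    raised_on: date
    assigned_to: str
    status: str = "Open"  # Open -> In Progress -> Resolved -> Closed
    resolution: str = ""

issue_log = [
    IssueLogEntry("ISS-101", "Statement download link broken", "bug",
                  "High", "UAT team", date(2024, 6, 3), "Dev team"),
]
print(issue_log[0].issue_id, issue_log[0].priority, issue_log[0].status)
```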
REQUIREMENTS TRACEABILITY MATRIX (RTM)
It captures all requirements proposed by the client or software development team and their
traceability in a single document delivered at the conclusion of the lifecycle.
It maps and traces user requirements to test cases, ensuring that no functionality is missed
while doing software testing.
Requirements are split into test scenarios and then into test cases.
A typical RTM contains: Req ID, Req Description, TC ID, Test Description, Test Designer,
UAT Test Req?, Test Execution (Test Env., UAT Env., Prod Env.), Defects, Defect ID,
Defect Status, Req Coverage Status.
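A sketch of one RTM row built from the fields listed above; all values are hypothetical examples.

```python
# Sketch of one RTM row using the fields listed above; values are hypothetical.
rtm_row = {
    "req_id": "REQ-12",
    "req_description": "Customer can reset password via email",
    "tc_id": ["TC-12.1", "TC-12.2"],
    "test_description": "Reset with valid email; reset with unknown email",
    "test_designer": "QA-1",
    "uat_test_req": True,
    "test_execution": {"test_env": "Pass", "uat_env": "Pass",
                       "prod_env": "Not run"},
    "defects": 1,
    "defect_id": ["DEF-88"],
    "defect_status": "Closed",
    "req_coverage_status": "Covered",
}
print(rtm_row["req_id"], "covered:",
      rtm_row["req_coverage_status"] == "Covered")
```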
Advantages:
1. It confirms 100% test coverage of the requirements.
2. It highlights any missing requirements or document inconsistencies.
3. It shows the overall defect or execution status with a focus on business requirements.
4. It helps in analyzing or estimating the impact of changes on the QA team's work with
respect to revisiting or reworking test cases.