Testing Methodology
There are several levels of testing that can be performed during data warehouse testing. Examples include
constraint testing, source-to-target counts, and source-to-target data validation. The objective is to validate
unique constraints, primary keys, foreign keys, indexes, and relationships. Some ETL processes can be
developed to validate constraints during the loading of the warehouse.
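As an illustration, a minimal sketch of such a count-and-constraint check in Python might look like the following, using the standard library's sqlite3 module as a stand-in for any DB-API warehouse connection; the table and column names are hypothetical placeholders, not taken from the text.

import sqlite3  # stand-in for any DB-API 2.0 warehouse driver


def row_count(conn, table):
    """Return the number of rows in a table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]


def duplicate_keys(conn, table, key_column):
    """Return key values that violate a uniqueness constraint."""
    sql = (f"SELECT {key_column}, COUNT(*) FROM {table} "
           f"GROUP BY {key_column} HAVING COUNT(*) > 1")
    return conn.execute(sql).fetchall()


def validate_load(conn, source_table, target_table, key_column):
    """Source-to-target count check plus a primary-key check."""
    src, tgt = row_count(conn, source_table), row_count(conn, target_table)
    assert src == tgt, f"count mismatch: source={src}, target={tgt}"
    dupes = duplicate_keys(conn, target_table, key_column)
    assert not dupes, f"duplicate keys in {target_table}: {dupes}"

A call such as validate_load(conn, "stg_orders", "dw_orders", "order_id") would then fail the load if rows were dropped or a key constraint was violated.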
The steps to be followed from the start of software testing to the end are as follows:
1- Before dynamic testing, there is static testing. Static testing includes a review of the documents required for the
software development. This includes the following activities:
(a) All documents related to customer requirements and business rules that are required for the design and
development processes are reviewed. The review process includes a comprehensive and thorough study of the
documents. If any discrepancy is found, it is noted and raised.
(b) After this, the QA and development teams meet and discuss the discrepancies found. The agenda mainly
includes what is missing in the document, QA queries to be answered by the development/project team, and
clarification required for any confusion.
2- After the application is developed, the QA team starts dynamic testing. If during development the requirements
have changed on customer demand or for any other reason, this should be documented and a copy of the
revised document given to the QA team.
3- The development and testing environment should be made clear to QA by the development team. This includes
the following activities:
(a) The server to hit for testing.
(b) Installation of the latest build on the test server.
(c) The modules/screens to test.
(d) The test duration, as decided mutually by the test manager and project manager based on the scope
of work and team strength.
(e) A demo of the software on the test server by the development team for the QC members.
4- Test cases and test scenarios are then prepared, followed by test execution by QC.
5- A comprehensive bug report is prepared by the testers, and a review/verification by the QC/QA/testing head
takes place. Before this report is handed over to the development team, there is a thorough review of the bug list
by the test manager, and if clarification is required on a submitted bug, the testing head discusses it with the
assigned tester.
6- Discussion/simulation of bugs by QC with the development team, if the development team requires it; the time
required for fixing the bugs should be made clear by the development team at this stage.
7- Feedback from the development team on the reported bugs, with the stipulated time frame required to fix all bugs.
8- Any changes made to the software to fix these bugs should be made clear to the QA team by the
development team.
9- The testing team then retests or verifies the bugs fixed by the development team.
10- The retesting bug report is submitted to the test manager, and steps 5 to 9 are repeated until the
product has reached a stage where it can be released to the customer.
11- The criteria for ending testing should be defined by management or the test manager, e.g. when all major bugs
are reported and fixed. Major bugs are those that affect the client's business.
V-Model
Requirements Analysis
In this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). Usually, the users are
interviewed and a document called the user requirements document is generated. The user requirements document will typically
describe the system's functional, physical, interface, performance, data, and security requirements, etc., as expected by the user.
The user acceptance tests (UAT) are designed in this phase.
System Design
System engineers analyze and understand the business of the proposed system by studying the user requirements document.
They figure out possibilities and techniques by which the user requirements can be implemented. Other technical
documentation, such as entity diagrams and a data dictionary, is also produced in this phase. The documents for system testing are
prepared in this phase.
Architecture Design
This phase can also be called high-level design (HLD). It consists of the list of modules, the brief functionality of each
module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The
integration test design is carried out in this phase.
Module Design
This phase can also be called low-level design (LLD). The designed system is broken into smaller units or modules, and
each of them is explained so that the programmer can start coding directly. The LLD covers all interface details with
complete API references, all dependency issues, error message listings, and the complete inputs and outputs for each
module. The unit test design is developed in this stage.
Coding Phase
At the bottom of the V, the module designs are converted into code by the developers, following the coding standards
adopted for the project.
Validation Phases
Unit Testing
This is the first stage of the dynamic testing process. It involves analysis of the written code with the intention of
eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is
usually white box. It is done using the unit test design prepared during the module design phase, and may be carried
out by software testers, software developers, or both.
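As a sketch of what such a white-box unit test might look like in Python, the following uses the standard unittest module; the to_iso_date transformation under test is a hypothetical example invented for illustration, not a routine described in the text.

import unittest


def to_iso_date(raw):
    """Hypothetical transformation under test: convert a
    DD/MM/YYYY source string to ISO 8601 (YYYY-MM-DD)."""
    day, month, year = raw.split("/")
    return f"{year}-{month.zfill(2)}-{day.zfill(2)}"


class ToIsoDateTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(to_iso_date("31/12/2023"), "2023-12-31")

    def test_single_digit_fields_are_padded(self):
        self.assertEqual(to_iso_date("1/2/2023"), "2023-02-01")

    def test_malformed_input_raises(self):
        # White-box knowledge: the split/unpack fails on bad input.
        with self.assertRaises(ValueError):
            to_iso_date("2023-12-31")


if __name__ == "__main__":
    unittest.main()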
Integration Testing
In integration testing, the separate modules are tested together to expose faults in the interfaces and in the interaction
between integrated components. Testing is usually black box, as the code is not directly checked for errors. It is done using the
integration test design prepared during the architecture design phase. Integration testing is generally conducted by software
testers.
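As a minimal sketch of the idea, the following wires together two hypothetical modules and checks the data that crosses their interface; the extract and load functions are invented for illustration and stand in for real ETL components.

import unittest


def extract(rows):
    """Hypothetical extract module: turn raw (id, amount) tuples
    into the record format the load module expects."""
    return [{"id": r[0], "amount": r[1]} for r in rows]


def load(records):
    """Hypothetical load module: index records by id for the target."""
    return {rec["id"]: rec["amount"] for rec in records}


class ExtractLoadIntegrationTest(unittest.TestCase):
    def test_records_survive_the_interface(self):
        # Black-box check of the handoff: every source row must
        # appear exactly once in the target, with values intact.
        source = [(1, 10.0), (2, 20.5)]
        self.assertEqual(load(extract(source)), {1: 10.0, 2: 20.5})


if __name__ == "__main__":
    unittest.main()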
System Testing
System testing compares the system specifications against the actual system. The system test design is derived from the
system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all
the modules are integrated, several errors may arise; testing done at this stage is called system testing.
Acceptance Testing
Acceptance testing checks the system against the requirements of the user. It uses black box testing with real data, real
people, and real documents to ensure the ease of use and functionality of the system. Users who understand the business
functions run the tests as given in the acceptance test plans, including installation and online help. Hard copies of the
user documentation are also reviewed for usability and accuracy. The testers formally document the results of each test
and provide error reports and correction requests to the developers.
The software requirements should also be read and understood by the testing team; without the testing team's
participation, no serious estimation can be made. The experts in each requirement should say how long testing it
would take, and the categories listed below help the experts estimate the effort for testing the requirements.
Testing Estimation:
Steps:
1. Gauging the size of the application using function points
2. Assessing the ETL complexity factors
3. Performing regression analysis
4. Implementing the effort estimation model
5. Applying corrections using statistics recorded from the history of similar projects
Function Points:
1. Data Functions
a. Internal Logical Files (any logic applied to the data before it hits the staging tables / target
tables)
b. External Interface Files (staging tables)
2. Transactional Functions
a. External Inputs
b. External Outputs
c. External Inquiries
ETL Complexities:
1. Number of sources
2. Amount of data processed
3. Whether performance is a criterion
4. Number and type of transformations and business rules
5. Number of target columns
6. Whether target tables are partitioned
7. Nature of the target table
8. Number of sessions and worklets
9. Test environment setup and comparison data availability
10. Recon files provided to further validate source file and target table data
Regression Analysis:
The actual effort is calculated here: it is the sum of the function points plus the ETL complexities.
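A worked illustration of that sum in Python might look like the following; every count, weight, and productivity figure here is invented for illustration, since the text prescribes none, and in practice they would be calibrated from the history of similar projects (step 5 above).

# Hypothetical effort estimate: function-point total plus an
# ETL-complexity adjustment, scaled to hours. All numbers below
# are invented; calibrate them against your own project history.
function_points = {
    "internal_logical_files":   (4, 7),   # (count, weight)
    "external_interface_files": (3, 5),
    "external_inputs":          (10, 4),
    "external_outputs":         (6, 5),
    "external_inquiries":       (5, 4),
}
fp_total = sum(count * weight for count, weight in function_points.values())

# One point per applicable complexity factor from the list above
# (e.g. many sources, partitioned targets, performance criteria).
etl_complexity_score = 6
hours_per_point = 2.5  # assumed testing productivity

estimated_hours = (fp_total + etl_complexity_score) * hours_per_point
print(f"FP total: {fp_total}, estimated effort: {estimated_hours:.1f} hours")

With these made-up figures the script prints an FP total of 133 and an estimate of 347.5 hours; the point is the shape of the calculation, not the numbers.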