Testing Methodology

In an SDLC environment, STLC (Software Testing Life Cycle) is followed.

There are several levels of testing that can be performed during data warehouse testing. Some examples are
constraint testing, source-to-target counts, and source-to-target data validation. The objective is to validate
unique constraints, primary keys, foreign keys, indexes, and relationships. Some ETL processes can be
developed to validate these constraints during the loading of the warehouse.
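
These checks lend themselves to automation. Below is a minimal Python sketch of a source-to-target count check and a uniqueness-constraint check; the connection objects, table names, and key column are hypothetical, and sqlite3 stands in for whichever DB-API driver the warehouse actually uses.

```python
import sqlite3  # stand-in; any DB-API 2.0 driver (psycopg2, cx_Oracle, ...) would do

def row_count(conn, table):
    """Count the rows currently in a table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def duplicate_keys(conn, table, key):
    """Return key values that violate the expected uniqueness constraint."""
    sql = f"SELECT {key}, COUNT(*) FROM {table} GROUP BY {key} HAVING COUNT(*) > 1"
    return conn.execute(sql).fetchall()

def validate_load(src_conn, tgt_conn, src_table, tgt_table, key):
    # Source-to-target count: every extracted row should have been loaded.
    src_n, tgt_n = row_count(src_conn, src_table), row_count(tgt_conn, tgt_table)
    assert src_n == tgt_n, f"count mismatch: source={src_n}, target={tgt_n}"
    # Constraint check: the business key must remain unique after the load.
    dupes = duplicate_keys(tgt_conn, tgt_table, key)
    assert not dupes, f"duplicate keys in {tgt_table}: {dupes}"
```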

The steps to be followed from the start of testing the software to the end of testing are as follows:
1- Before dynamic testing, there is static testing. Static testing includes a review of the documents required for the
software development. This includes the following activities:

(a) All the documents related to customer requirements and business rules that are required for the design and
development processes are reviewed. The review process includes a comprehensive and thorough study of the
documents. If any discrepancy is found, it is noted and raised.

(b) After this, the QA and development teams meet and discuss the discrepancies found. The agenda mainly
includes what is missing in the document, QA queries to be answered by the Development/Project Team, and/or
clarification required for any confusion.

2- After the application is developed, the QA team starts dynamic testing. If during development the requirements
have changed on customer demand or for any other reason, this should be documented and a copy of the
revised document given to the QA team.

3- The development and testing environment should be made clear to QA by the development team. This includes the
following activities:
(a) Server to hit for testing.
(b) Installation of the latest build on the test server.
(c) Modules/screens to test.
(d) Test duration, decided mutually by the test manager and project manager based on the scope
of work and team strength.
(e) Demo of the software on the test server by the development team to the QC members.

4- After this, test cases and test scenarios are prepared, followed by test execution by QC.

5- A comprehensive bug report is prepared by the testers and reviewed/verified by the QC/QA/Testing Head.
Before this report is handed over to the development team, there is a thorough review of the bug list by the test
manager, and if any clarification is required on a submitted bug, the Testing Head discusses it with the
assigned tester.

6- Release of the bug report by the QC team to the development team.

7- Discussion/simulation of bugs by QC with the development team, if the development team requires it; the time
required for fixing the bugs should be made clear by the development team at this stage.

8- Feedback from the development team on the reported bugs, with the stipulated time frame required to fix all bugs.

9- Any changes made in the software to fix these bugs should be made clear to the QA team by the
development team.

10- The testing team then retests and verifies the bugs fixed by the development team.

11- The retesting bug report is submitted to the test manager, and then steps 5 to 10 are repeated until the
product has reached a stage where it can be released to the customer.

12- The criteria for ending the testing should be defined by management or the test manager, e.g., when all major bugs are
reported and fixed. Major bugs are bugs that affect the business of the client.

V – Model

There are two phases in V-model development. They are:


1. Verification
2. Validation

Requirements analysis
In this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). Usually, the users are
interviewed and a document called the user requirements document is generated. The user requirements document typically
describes the system's functional, physical, interface, performance, data, and security requirements, etc., as expected by the user.
The user acceptance tests (UAT) are designed in this phase.

System Design
System engineers analyze and understand the business of the proposed system by studying the user requirements document.
They figure out the possibilities and techniques by which the user requirements can be implemented. Other technical
documentation, like entity diagrams and the data dictionary, will also be produced in this phase. The documents for system
testing are prepared in this phase.

Architecture Design
This phase is also called high-level design (HLD). It consists of the list of modules, the brief functionality of each
module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The
integration test design is carried out in this phase.
Module Design
This phase is also called low-level design (LLD). The designed system is broken up into smaller units or modules, and
each of them is explained so that the programmer can start coding directly. It covers all interface details with complete API
references, all dependency issues, error message listings, and the complete inputs and outputs for each module. The unit test
design is developed in this stage.

Coding Phase
The module design is implemented as code in this phase, following the adopted coding standards. This phase forms the
base of the V, after which the validation phases begin.

Validation Phases

Unit Testing
This is the first stage of the dynamic testing process. It involves analysis of the written code with the intention of eliminating
errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box.
It is done using the unit test design prepared during the module design phase. This may be carried out by software testers,
software developers, or both.
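
For illustration, a unit test driven by such a unit test design might look like the following Python unittest sketch; apply_discount is a hypothetical unit, not something from the text above.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Unit under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # White-box knowledge of the guard clause drives this case.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```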

Integration Testing
In integration testing, the separate modules are tested together to expose faults in the interfaces and in the interaction between
integrated components. Testing is usually black box, as the code is not directly checked for errors. It is done using the
integration test design prepared during the architecture design phase. Integration testing is generally conducted by software
testers.
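
A hedged sketch of such an interface-level check between two hypothetical modules (an extractor and a loader) could look like this; only the interaction across the module boundary is exercised:

```python
def extract(raw_rows):
    """Module A (hypothetical): normalise raw input rows."""
    return [{"id": r[0], "name": r[1].strip().upper()} for r in raw_rows]

def load(records, target):
    """Module B (hypothetical): append records, rejecting missing ids."""
    for rec in records:
        if rec["id"] is None:
            raise ValueError("record without id reached the loader")
        target.append(rec)
    return len(target)

def test_extract_feeds_load():
    # The integration check exercises only the contract between the modules:
    # the extractor's output must be acceptable input for the loader.
    target = []
    assert load(extract([(1, " alice "), (2, "bob")]), target) == 2
    assert target[0]["name"] == "ALICE"
```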

System Testing

System testing compares the system specifications against the actual system. The system test design is derived from the system
design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are
integrated, several errors may arise. Testing done at this stage is called the system test.

User Acceptance Testing

Acceptance testing checks the system against the requirements of the user. It uses black box testing with real data, real
people, and real documents to ensure the ease of use and functionality of the system. Users who understand the business functions run
the tests as given in the acceptance test plans, including installation and online help. Hard copies of the user documentation are also
reviewed for usability and accuracy. The testers formally document the results of each test and provide error reports and
correction requests to the developers.

Rule 1: Estimation shall always be based on the software requirements


All estimation should be based on what will be tested, i.e., the software requirements.
In many cases, the software requirements are established by the development team alone, with little or no
participation from the testing team. Only after the specifications have been established and the project costs and
duration have been estimated does the development team ask how long it would take to test the solution.

Instead of this:
The software requirements shall be read and understood by the testing team, too. Without the testing team's
participation, no serious estimation can be considered.

Rule 2: Estimation shall be based on expert judgment


Before estimating, the testing team classifies the requirements into the following categories:
- Critical: the development team has little knowledge of how to implement it;
- High: the development team has good knowledge of how to implement it, but it is not an easy task;
- Normal: the development team has good knowledge of how to implement it.

The experts in each requirement should say how long it would take to test it. The categories help
the experts estimate the effort required to test the requirements.
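
As a sketch of how the categories might feed the numbers, the multipliers below are hypothetical weights applied to each expert's base estimate:

```python
# Hypothetical multipliers: riskier categories receive proportionally more effort.
CATEGORY_FACTOR = {"Critical": 2.0, "High": 1.5, "Normal": 1.0}

def testing_effort(requirements):
    """requirements: iterable of (name, category, expert_base_hours)."""
    return sum(base * CATEGORY_FACTOR[cat] for _, cat, base in requirements)

print(testing_effort([
    ("login", "Normal", 8),      # well understood
    ("billing", "Critical", 8),  # little implementation knowledge -> 16 h
]))  # -> 24.0
```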

Rule 3: Estimation shall be based on previous projects


All estimation should be based on previous projects. If a new project has requirements similar to those of a previous one,
the estimation is based on that project.

Rule 4: Estimation shall be recorded


All decisions should be recorded. This is very important because, if the requirements change for any reason, the records
help the testing team estimate again: the team does not need to go back through all the steps and make the
same decisions again. Sometimes this is also an opportunity to adjust an estimate made earlier.

Rule 5: Estimation shall be supported by tools


Tools (e.g., a spreadsheet containing metrics) that help reach the estimation quickly should be used. In this case,
the spreadsheet calculates the costs and duration for each testing phase automatically.
Also, a document containing sections such as a cost table, risks, and free notes should be created. This document
should be sent to the customer; it also shows the different testing options, which can help the customer decide
which kind of test he needs.
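
The spreadsheet logic itself is simple enough to sketch in code; the phase names, hours, and hourly rate below are hypothetical inputs:

```python
HOURLY_RATE = 50.0  # hypothetical cost per testing hour

def phase_summary(phase_hours):
    """phase_hours: dict of testing phase -> estimated hours.
    Returns total duration, total cost, and a per-phase cost table."""
    cost_table = {phase: h * HOURLY_RATE for phase, h in phase_hours.items()}
    return sum(phase_hours.values()), sum(cost_table.values()), cost_table

hours, cost, table = phase_summary(
    {"unit": 40, "integration": 60, "system": 80, "acceptance": 20}
)
print(f"duration: {hours} h, cost: {cost:.2f}")  # duration: 200 h, cost: 10000.00
```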

Rule 6: Estimation shall always be verified


Finally, all estimation should be verified. Another spreadsheet can be created for recording the estimations. Each
estimation is compared to the previous ones recorded in the spreadsheet to see whether they follow a similar trend. If the
estimation deviates from the recorded ones, a re-estimation should be made.
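
A minimal version of this verification, assuming an illustrative 20% tolerance, could compare each new estimate against the recorded history:

```python
def needs_reestimation(new_estimate, recorded, tolerance=0.20):
    """Flag an estimate deviating from the historical mean by more than
    the tolerance (an assumed 20% here)."""
    mean = sum(recorded) / len(recorded)
    return abs(new_estimate - mean) / mean > tolerance

print(needs_reestimation(300, [180, 200, 220]))  # True -> re-estimate
```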

Difference between Verification and Validation:


Verification takes place before validation, not vice versa. Verification evaluates documents, plans, code,
requirements, and specifications. Validation, on the other hand, evaluates the product itself. The inputs of
verification are checklists, issue lists, walkthroughs, inspection meetings, and reviews. The input of
validation, on the other hand, is the actual testing of an actual product. The output of verification is a nearly
perfect set of documents, plans, specifications, and requirements. The output of validation, on the
other hand, is a nearly perfect actual product.

Responsibilities of a Test Manager / Lead


 Understand the testing effort by analyzing the requirements of the project.
 Estimate and obtain management support for the time, resources, and budget required to perform the
testing.
 Organize the testing kick-off meeting.
 Define the test strategy.
 Build a testing team of professionals with the appropriate skills, attitudes, and motivation.
 Identify training requirements (technical and soft skills) and forward them to the Project Manager.
 Develop the test plan for the tasks, dependencies, and participants required to mitigate the risks to
system quality, and obtain stakeholder support for this plan.
 Arrange the hardware and software requirements for the test setup.
 Assign tasks to all testing team members and ensure that all of them have sufficient work in the project.
 Ensure the content and structure of all testing documents/artifacts is documented and maintained.
 Document, implement, monitor, and enforce all processes for testing as per the standards defined by the
organization.
 Check/review the test case documents.
 Keep track of new requirements/changes in the requirements of the project.
 Escalate issues about project requirements (software, hardware, resources) to the Project Manager /
Sr. Test Manager.
 Organize status meetings and send status reports (daily, weekly, etc.) to the client.
 Attend regular client calls and discuss the weekly status with the client.
 Communicate with the client (if required).
 Act as the single point of contact between developers and testers.
 Track and prepare reports on testing activities such as testing results, test case coverage, required
resources, defects discovered and their status, performance baselines, etc.
 Review the various reports prepared by test engineers.
 Ensure the timely delivery of the different testing milestones.
 Prepare/update the metrics dashboard at the end of a phase or at the completion of the project.

Testing Estimation:

Steps:
1. Gauging the size of the application using function points
2. Assessing the ETL complexity factors
3. Performing regression analysis
4. Implementing the effort estimation model
5. Applying corrections using statistics recorded from the history of similar projects

Function Points:
1. Data Functions
a. Internal Logical Files (any logic applied to the data before it reaches the staging/target tables)
b. External Interface Files (staging tables)
2. Transactional Functions
a. External Inputs
b. External Outputs
c. External Inquiries
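
As a hedged sketch of step 1, an unadjusted function point total can be tallied from these five element types. The weights below are the average-complexity weights used in IFPUG-style counting; the counts themselves are hypothetical:

```python
# Average-complexity weights (IFPUG-style; low/high complexity use other values).
WEIGHTS = {"ILF": 10, "EIF": 7, "EI": 4, "EO": 5, "EQ": 4}

def unadjusted_fp(counts):
    """counts: dict of element type -> number of occurrences."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical ETL project: 3 internal logical files, 5 staging tables (EIF),
# 10 external inputs, 6 external outputs, 4 external inquiries.
print(unadjusted_fp({"ILF": 3, "EIF": 5, "EI": 10, "EO": 6, "EQ": 4}))  # -> 151
```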

ETL Complexities:
1. Number of sources
2. Amount of data processed
3. Whether performance is a criterion
4. Number and type of transformations and business rules
5. Number of target columns
6. Whether target tables are partitioned
7. Nature of the target table
8. Number of sessions and worklets
9. Test environment setup and comparison data availability
10. Recon files provided to further validate source file and target table data

We calculate a total actual effort from these factors.

Regression Analysis:
The actual effort is calculated here: it is the sum of the function points plus the ETL complexities, calibrated
against the records of similar past projects.
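
An illustrative sketch of steps 3 to 5 follows: a least-squares fit of recorded actual effort against function points and a composite ETL complexity score, then a prediction for a new project. The history table and the resulting coefficients are entirely hypothetical.

```python
import numpy as np

# Hypothetical history: (function points, ETL complexity score, actual effort hours)
history = np.array([
    [120, 30, 400],
    [200, 45, 640],
    [80,  20, 280],
    [150, 60, 560],
])

# Fit effort ~ a*FP + b*complexity + c by least squares (the regression step).
X = np.column_stack([history[:, 0], history[:, 1], np.ones(len(history))])
coeffs, *_ = np.linalg.lstsq(X, history[:, 2], rcond=None)

def estimate_effort(function_points, complexity_score):
    """Predict testing effort for a new project from the fitted model."""
    return coeffs @ np.array([function_points, complexity_score, 1.0])

print(round(estimate_effort(170, 50)))  # estimate for a hypothetical new project
```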
