Software Quality Assurance Training Material: Manual Testing
MANUAL TESTING
Manual testing is a form of black box testing and does not require knowledge of a programming
language, whereas white box testing does.
Manual testing is done through the client/front-end application to make sure everything works
as expected based on the requirements documents.
In most cases, a product will not be released without testing the application manually.
Testing the application manually before starting automation helps in developing proper
automation scripts.
Drawbacks: Manual testing is time consuming and sometimes requires heavy resources (for
example, when testing must be repeated on multiple operating systems and browsers).
Verification: Verification is the process of ensuring that the software, as built, matches the original
design. In other words, it checks whether the software is made according to the criteria and
specifications described in the requirements document: did you build the product right, as per the
design? It is a low-level check, generally done in walkthrough meetings.
Validation: Validation is the process of checking whether the product design fits the client's needs:
did you build the right thing? It checks whether the product is designed properly.
There are four major milestones in the development cycle: the Planning Phase, Design Phase,
Development Phase, and Stabilization Phase. The Planning Phase culminates in the completion of
the Planning Docs Milestone (Requirements plus Functional Spec). The Design Phase culminates in
the completion of the Design Spec and Test Plan / Test Spec. The Development Phase culminates
in the Code Complete Milestone. The Stabilization Phase culminates in the Release Milestone.
During the first two phases, testing plays a supporting role, providing ideas and limited testing of the
planning and design documents. Throughout the final two stages, testing plays a key role in the
project.
TEST STAGES
Milestone 1 - Planning Phase
Milestone 2 - Design Phase
Milestone 2a - Usability Testing
Milestone 3 - Developing Phase
Milestone 3a - Unit Testing (Multiple)
Milestone 3b - Acceptance into Internal Release Testing
Milestone 3c - Internal Release Testing
Milestone 3d - Acceptance into Alpha Testing
Milestone 3e - Alpha Testing
Milestone 4 - Stabilization Phase
Milestone 4a - Acceptance into Beta Testing
Milestone 4b - Beta Testing
Milestone 4c - Release to Manufacturing (RTM)
Milestone 4d - Post Release
TEST LEVELS
Build Tests
Level 1 - Build Acceptance Tests
Level 2 - Smoke Tests
Level 2a - Bug Regression Testing
Milestone Tests
Level 3 - Critical Path Tests
Release Tests
Level 4 - Standard Tests
Level 5 - Suggested Tests
BUG REGRESSION
BUG TRIAGE
SUSPENSION CRITERIA AND RESUMPTION REQUIREMENTS
TEST COMPLETENESS
Standard Conditions:
Bug Reporting & Triage Conditions
The test strategy is a formal description of how a software product will be tested.
It includes an introduction, test objectives, test process, test methodology, test scope, release
criteria for testing (exit criteria), test lab configuration, resources and schedule for test activities,
acceptance criteria, test environment, test tools, test priorities, test planning, executing a test pass,
and the types of tests to be performed.
A test strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy and reviews the plan with the project team.
A test plan may include test cases, conditions, test environment, a list of related tasks, pass/fail
criteria and risk assessment. Inputs for this process:
• A description of the required hardware and software components, including test tools. This
information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test and schedule
constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change requests, and technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
The general testing process is the creation of a test strategy (which sometimes includes the creation
of test cases), the creation of a test plan/design (which usually includes test cases and test
procedures), and the execution of tests.
(i) Black Box Testing: This testing is done to find the circumstances in which the program does not
behave according to the specification.
Black box testing refers to test activities using specification-based testing methods and criteria to
discover program errors based on program requirements and product specifications.
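As an illustration, here is a minimal Python sketch of specification-based (black box) checks. The
age rule and the is_valid_age function are hypothetical, invented only to show boundary cases
derived from a spec rather than from the code:

    # Hypothetical spec: valid ages are 18 to 60, inclusive.
    def is_valid_age(age):
        # Stand-in for the real function under test; in black box testing
        # we would not look at this body, only at the specification.
        return 18 <= age <= 60

    # Boundary-value cases derived from the specification alone.
    for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
        assert is_valid_age(age) == expected, f"age {age}: expected {expected}"
    print("All black box boundary checks passed")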
(ii) White Box Testing: The basic idea is to test a program based on its internal structure. We will
discuss this in more detail later.
For more information, refer to WhiteBoxTesting.ppt
(iii) Gray Box Testing: A tester who combines black box and white box techniques is called a gray
box tester.
1. ISO – International Organization for Standardization: It provides guidelines for the selection and
use of standards for quality management and quality assurance.
2. SEI-CMM – Software Engineering Institute Capability Maturity Model: The CMM for software
provides software organizations with guidance on how to gain control over the software process
for developing and maintaining software.
(i) System Study – System study is nothing but understanding the application by reviewing
application/product related documents such as the SRS, use cases, functional specifications,
screenshots, etc.
(ii) Build: A build contains an executable (e.g., setup.exe for Windows, setup.tar for Unix). The
executable contains a set of programs (e.g., Registration.asp, Login.asp). Builds are produced by
the Build Release Engineering team. Some companies call it a Drop instead of a Build.
QA: From installation through system test, testing is performed on the QA server during the initial
testing stage. The QA server will be in the QA lab.
Staging: Once the system test passes against the QA server, we do a quick smoke test or limited
system testing against the staging server. A staging server is almost equivalent to the production
server; that is, the staging and production configurations/environments are almost the same.
The staging server will be in the QA lab.
Production: Once testing passes on the staging server, all files are moved to the production server
so that the application becomes available to the public.
The production server will be at the client/customer site.
(iv) Release Candidate: The build tested just before the final build (Gold Master).
(v) Use Case: A use case describes a business need in plain English and is developed by the
management team (business analysts, etc.). A use case is a technique for capturing functional
requirements.
(viii) Test Plan – A strategic document containing complete information about the testing process.
Refer to the sample TestPlan.
This document says what the QA team is going to test for the application/project. It is developed
by the QA team (QA manager, QA team members, etc.) based on requirements documents, use
cases, etc.
(ix) Test Case – A verification point or checkpoint to accomplish the task of testing an
application. Refer to the sample TestCase. A minimal sample layout is sketched below.
Test cases are developed based on the requirements documents and the test plan.
Test cases are developed by the QA lead and QA team members.
Test cases are developed from the customer's point of view.
Test cases cover positive, negative, and boundary conditions, as well as functional, user
interface, and performance aspects.
Test cases are executed against each and every build.
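For illustration, here is a minimal sketch of how a single test case might be laid out. The field
names and the TC_LOGIN_001 example are hypothetical, not a mandated template:

    Test Case ID   : TC_LOGIN_001
    Title          : Login with valid credentials
    Precondition   : A registered user account exists
    Steps          : 1. Open the login page
                     2. Enter a valid user name and password
                     3. Click Login
    Expected Result: User is taken to the home page
    Actual Result  : (filled in during execution)
    Status         : Pass / Fail / Deferred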
(x) Test Execution – Execute all the test cases against the build. Builds are released to QA weekly
(mostly every Monday).
(xi) Results – Number of test cases executed, number passed, number failed, and number
deferred.
(xii) Defect Tracking: If a test case fails, the bugs are filed in a bug tracking system (TD, DDTS,
TrackGear). There we can see how many new bugs are filed, how many are assigned, how many
are opened, fixed, posted, rejected, duplicated, closed, etc.
(xiii) Reports: The testing results of the application, and the bugs found, are presented graphically
or in tabular form.
(xv) QA Status Meetings – These happen once a week (e.g., Thursday), among the QA team only;
Dev and other teams do not join this meeting.
During this meeting, the QA manager may give details about upcoming projects and schedules.
The QA lead/testers give an update on the tasks they are currently working on (e.g., test plan
development 80% complete, test cases 50% developed, automation 75% complete).
The QA team can ask the QA manager about things such as vacation, a memory upgrade, or an
additional computer.
(xvii) Maintenance
(i) Installation Testing: Installation testing ensures that the software can be installed under different
conditions: a new installation, an upgrade, and a custom installation. Once it is installed properly,
we verify that the software operates correctly.
(ii) Smoke Testing/Sanity: The main features of the application are tested; if this testing passes, we
go on to further types of tests. If it fails, we send the build back to the developers.
NOTE: Only once the smoke/sanity test passes does QA accept the build and do bug verification.
(iii) Regression Testing: It is performed to verify that bugs fixed in an earlier version stay fixed and
that fixing them has not introduced new errors. This test is especially useful when the application
changes version. Regression testing is done whenever there is a change in code, including bug
fixes and adding, deleting, or modifying features.
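A fixed bug is often guarded by a small check that is re-run against every new build. Below is a
minimal Python sketch of such a regression check; the bug number and the cart_total function are
hypothetical:

    # Regression check for hypothetical bug #1234: totaling an empty cart crashed.
    def cart_total(prices):
        # Fixed behavior: an empty cart totals 0 instead of raising an error.
        return sum(prices)

    # Re-run on every build to make sure the fix has not regressed.
    assert cart_total([]) == 0
    assert cart_total([10, 20]) == 30
    print("Regression checks for bug #1234 passed")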
(iv) New Features: Make sure all the new features delivered in that particular build are working as
per the requirements.
(v) Functional Testing: Functional testing is a form of software testing that attempts to determine
whether each function of the system works as specified. The goal of this testing is to verify proper
data acceptance, processing, and retrieval.
(vi) Integration Testing: It focuses on testing groups of related units/modules working together and
ensures that they work together correctly.
(vii) User Interface Testing: Checks the images, graphics, links, spelling, alignment, tables,
frames, etc. (Tool: XENU)
(ix) Configuration Testing: It attempts to verify that all of the system’s functionality works under all
hardware and software configurations.
(x) Usability Testing: Usability Testing is a process that measures how well a web site or software
application allows its users to navigate, find valuable information quickly, and complete business
transactions efficiently.
(xi) Backend Testing: Testing the backend database (SQL Server, Oracle, etc.) to make sure data
is inserted into the right database and tables and is populated in the correct format.
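As an illustration, here is a minimal backend check in Python using the built-in sqlite3 module as a
stand-in for SQL Server/Oracle; the users table and its columns are hypothetical:

    import sqlite3

    # Stand-in database; a real backend test would connect to SQL Server/Oracle.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, signup_date TEXT)")

    # Action normally performed through the front end, e.g. a registration form.
    db.execute("INSERT INTO users VALUES ('alice', '2024-03-31')")

    # Backend check: the row landed in the right table, in the right format.
    name, signup = db.execute("SELECT name, signup_date FROM users").fetchone()
    assert name == "alice" and signup == "2024-03-31"
    print("Backend data verified")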
(xii) Performance Testing: Performance testing is testing that is performed to determine how fast
some aspect of a system performs under a particular workload. It can serve different purposes. It
can demonstrate that the system meets performance criteria. It can compare two systems to find
which performs better.
(xiii) Load Testing: This test subjects the program to multiple users working on the single program
entity at the same time.
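A minimal Python sketch of simulating several concurrent users with threads; the simulated
response time and the user count are assumptions for illustration only:

    import threading, time

    def user_session(user_id, results):
        # Hypothetical request; a real load test would hit the application itself.
        start = time.time()
        time.sleep(0.1)                      # simulated server response time
        results[user_id] = time.time() - start

    results = {}
    threads = [threading.Thread(target=user_session, args=(i, results))
               for i in range(10)]           # 10 simulated concurrent users
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"Slowest of {len(results)} users: {max(results.values()):.2f}s")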
(xiv) Volume Testing: This type of testing feeds large volumes of data into the system to determine
whether the system can correctly process such amounts.
(xv) Stress Testing: This test tries to break the system; we check how the system handles peak
usage periods.
(xvi) System Testing: System testing is the first time that the entire system can be tested as a whole
against the Feature Requirement Specification(s) (FRS) or the System Requirement
Specification (SRS); these are the rules that describe the functionality that the vendor (the entity
developing the software) and the customer have agreed upon. This is done before acceptance
testing.
(xviii) Upgrade Testing: Upgrade testing verifies adding/removing features on an existing
application (e.g., UI changes, new features, new technology).
Steps for upgrade testing:
1. Install the customer build, e.g., Gold Master or Gold Master plus a service pack (e.g., SP1)
2. Insert test data
3. Run the version 2.0 installer on top of version 1.0
4. Make sure there are no errors; if there are, file a bug. The installer should also give two
options: one for upgrade and one for a new installation
5. Once the upgrade installation has completed successfully, check the following:
New features are added
Old features are still working
You are able to add new data
The old user account can update new data, and a new user can update old data
(xix) Globalization Testing (G11N): This basically involves translating web pages into various
languages, thereby allowing companies to reach a "global public". Internationalization (I18N)
and Localization (L10N) are subsets of globalization.
A product is called globalized when I18N and L10N are successfully completed.
Globalization testing starts once the English version is released; it should begin within
60-90 days of that release.
(xx) Internationalization (I18N): Internationalization is the process of developing a software product
whose core design does not make assumptions based on a locale. It potentially handles all
targeted linguistic and cultural variations (such as text orientation, date/time format, currency,
accented and double-byte characters, sorting, etc.) within a single code base.
When we test I18N, we have to make sure dates, times, and currency follow the local
conventions.
We have to get a localized operating system (e.g., Chinese) and install the English version
of the build on top of it.
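As an illustration, Python's standard locale module can check locale-dependent date and currency
formatting. The locale names below are common Linux-style names and may not be installed on
every machine:

    import locale
    from datetime import date

    # Locale names vary by OS; these are assumed Linux-style names.
    for loc in ("en_US.UTF-8", "fr_FR.UTF-8", "de_DE.UTF-8"):
        try:
            locale.setlocale(locale.LC_ALL, loc)
        except locale.Error:
            continue  # locale not installed on this machine
        # Date and currency should follow the locale, not the code base.
        print(loc, date(2024, 3, 31).strftime("%x"),
              locale.currency(1234.56, grouping=True))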
(xxi) Localization Testing (L10N): This means taking an internationalized product and customizing
it for a specific market. It includes translating the software strings, rearranging the UI components
to preserve the original look and feel after translation, and customizing the formats (such as
date/time, paper size, etc.), the defaults, and even the logic, depending on the targeted market.
Such customization is possible only if the application is properly internationalized; otherwise, the
L10N team faces a challenge whose significance depends on the application and the language
version. Once the application is properly tested in one language (e.g., the default, English) and is
stable, we go for localization testing, meaning the same application is tested in different
languages (e.g., French, Russian, Chinese).
When L10N is tested, we have to make sure all the content (buttons/images/text, etc.) is
converted into the local language.
We have to get a localized operating system (e.g., Chinese) and install the Chinese build
on top of it.
The link below will translate English text into other languages, so that you can test your application
in a language other than English.
http://babelfish.altavista.com/tr
(xxii) Documentation Testing: This verifies the user documentation against the tested system.
Make sure the Help, Installation Guide, Tutorial, User's Guide, Admin Guide, and other related
documents are properly prepared.
Test cases should be executed on all builds and updated based on the test results; then finally
check the following items:
1. Total test cases executed 2. Total test cases passed 3. Total test cases failed 4. Total test cases deferred
QA starts testing the application after some modules have been developed, and stops testing
after the final build has been tested.
Unit Test Exit Criteria:
(i) All requirements documents should be baselined
(ii) Coding for the phase should be completed
Integration Entrance Criteria:
(i) All requirement and design documents should be baselined
(ii) The Test Plan must be completed
(iii) The first delivery of the software must be completed
Integration Exit Criteria:
(i) All test cases for the integration testing have been successfully executed:
- 100% of P-1 bugs fixed
- 100% of P-2 bugs fixed
- 80% of P-3 bugs fixed
Note: The above % will vary based on company/organization
System Test Entrance Criteria:
(i) Integration test exit criteria have been successfully met
(ii) All install documentation is completed
(iii) All software has been successfully built
Release Notes contain information about a particular build/patch and give overall information
to test engineers/customers. This document contains the following:
Build/Patch number and version
New feature or fix details – (for QA only)
Resolved issues – (for QA only)
Known issues
Supported configurations etc…
Refer to the sample Release Notes
Patch: (i) Find the customer environment (e.g., Win 2003, IE, Build 55) (ii) Find a similar environment in the
QA lab (iii) Install the customer build on a QA machine (iv) Reproduce the customer problem (v) Apply the
patch by following the instructions in the patch release notes (vi) Re-test the customer problem (this time
you should not see the problem).
The V-Model:
http://en.wikipedia.org/wiki/V-Model_(software_development)
The V-model is a software development model that can be considered an extension of the waterfall
model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase,
to form the typical V shape. The V-Model demonstrates the relationships between each phase of the
development life cycle and its associated phase of testing. The development process proceeds from the
upper left point of the V toward the right, ending at the upper right point. In the left-hand, downward-sloping
branch of the V, development personnel define business requirements, application design parameters and
design processes. At the base point of the V, the code is written. In the right-hand, upward-sloping branch of
the V, testing and debugging are done. Unit testing is carried out first, followed by bottom-up integration
testing. The extreme upper right point of the V represents product release and ongoing support.
3-Tier Architecture - We create a 3-tier architecture by inserting another program at the server level. We call this the
server application. Now the client application no longer directly queries the database; it queries the server application,
which in turn queries the data server.
*. Client application: the application on the client computer consumes the data and presents it in a readable format to the
student.
*. Server application: sits between the client and the database; it receives the client's requests and queries the data
server on the client's behalf.
*. Data server: the database serves up data based on SQL queries submitted by the application.
Client->Server(IIS/Apache)->Database(Oracle/SQL server) – WEB BASED
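A minimal Python sketch of the three tiers, with an in-memory SQLite database standing in for
Oracle/SQL Server; the students table and the function names are illustrative:

    import sqlite3

    # Data server tier: answers SQL queries (SQLite stands in here).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE students (name TEXT, grade TEXT)")
    db.execute("INSERT INTO students VALUES ('Alice', 'A')")

    # Server application tier: the only layer that talks SQL to the data server.
    def server_application(student_name):
        row = db.execute("SELECT name, grade FROM students WHERE name = ?",
                         (student_name,)).fetchone()
        return {"name": row[0], "grade": row[1]} if row else None

    # Client application tier: consumes the server application, never the DB.
    def client_application(student_name):
        record = server_application(student_name)
        print(f"{record['name']}: grade {record['grade']}" if record else "Not found")

    client_application("Alice")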
Application Vs Product:
Application: software developed for a specific customer (www.safeway.com)
Product: software developed for multiple customers (MS Office, Payroll Mgmt, Inventory Mgmt)
Test Environment:
Hotmail (server) -> Client Machine 1, Client Machine 2, … , Client Machine N
Normalization: Normalization reduces redundancy in the database, which improves database performance.
There are different normal forms: 1st, 2nd, 3rd, 4th, BCNF (Boyce-Codd), etc.
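As an illustration, here is a minimal Python/sqlite3 sketch (with hypothetical tables) of removing
redundancy: customer details are stored once in a customers table that order rows reference,
instead of being repeated on every order:

    import sqlite3

    db = sqlite3.connect(":memory:")
    # An unnormalized design would repeat name/city on every order row.
    # Normalized (roughly 3NF): customer details stored once, referenced by id.
    db.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
        CREATE TABLE orders (
            id INTEGER PRIMARY KEY,
            customer_id INTEGER REFERENCES customers(id),
            item TEXT
        );
    """)
    db.execute("INSERT INTO customers VALUES (1, 'Alice', 'Fremont')")
    db.execute("INSERT INTO orders VALUES (1, 1, 'Keyboard')")
    db.execute("INSERT INTO orders VALUES (2, 1, 'Mouse')")  # no repeated city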