Manual Testing Book

This document discusses software testing concepts across multiple chapters. Chapter 1 introduces testing, covering what it is, who is involved, and how to approach a testing project. Chapter 2 discusses when testing should occur in the software development life cycle and provides details on different development models. Chapter 3 covers how to test, including classification, levels, processes, techniques, and types of testing.


Testing Concepts

Table of Contents

Chapter 1 - Introduction to Testing
  1.1 What is testing?
  1.2 Who is involved?
    1.2.1 Resources and Responsibilities
  1.3 How to Approach a Testing Project?
Chapter 2 - When testing should occur
  2.1 What is "Software Development Life Cycle" (SDLC)?
    2.1.1 Software Development Life Cycle Phases
    2.1.2 Common problems in the software development process
    2.1.3 When testing should occur
    2.1.4 Types of Development Projects
    2.1.5 Software Testing Process
  2.2 Different SDLC models
    2.2.1 Waterfall Model
    2.2.2 V-Model
Chapter 3 - How to test?
  3.1 Classification of Testing
  3.2 Levels of Testing
    1. Unit Testing
    2. Integration Testing
    3. System Testing
    4. User Acceptance Testing
  3.3 Test Process Model or Methodology
    3.3.1 Test Strategy
    3.3.2 Test Plan
    3.3.3 Test Case Designing
  3.4 Testing Techniques
  3.5 Static Testing or Verification
  3.6 Test Execution and Fault Reports
  3.7 Types of Testing
  3.8 Defect Tracking
  3.9 Test Reports
  3.10 Traceability Matrix
  3.11 Testing: in Short (Black Box Testing)
Glossary
Chapter 1 - Introduction to Testing

“Every program does something right, it just may not be the thing we want it to do”

Testing is not a newly added activity in software development; rather, it was ignored for quite some time as an expensive and not-so-important activity.

As the industry grew and software's inherent complexity increased, with more and more functionality added every now and then, delivering a correct product on time and within cost began to trouble development companies.

To overcome this problem, a process was needed to ensure that the delivered product is of high quality and works correctly, and to reduce the costs unnecessarily spent on correcting problems during operational use and the inconvenience caused to the customer. That process is nothing but testing.
1.1 What is testing?

Testing is a process of

“Analyzing and evaluating a system with the intention of identifying a defect”

Or

“Exercising or evaluating a system component by manual or automated means to verify that it


satisfies specified requirement”

The goals of testing are to:

1. Uncover the errors or defects present in the application prior to its deployment at
the customer's site.
2. Identify and describe the actual behavior of the application under test.
3. Assess the quality of the software to determine whether it meets customer
expectations.

Testing is not done to show that the software works correctly; it is done to identify defects in
the given software. Testing identifies defects; it does not correct them. Testing helps in
assessing the correctness, completeness, reliability and quality of the software.
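To make the "identify, don't correct" point concrete, here is a minimal sketch in Python (the function and values are hypothetical): the test's responsibility ends once it exposes the defect.

```python
# Hypothetical buggy discount routine; the test exposes the defect but does not fix it.
def apply_discount(price, percent):
    # Defect: subtracts the raw percent value instead of percent/100 * price.
    return price - percent

def run_test():
    expected = 180.0                      # 10% off 200.00 should be 180.00
    actual = apply_discount(200.0, 10)
    if actual != expected:
        # Testing's job: identify and describe the actual behaviour.
        return f"Defect found: expected {expected}, got {actual}"
    return "PASS"

print(run_test())
```

The correction itself (dividing the percent by 100) belongs to the developer, not the tester.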

1.2 Who is involved?


1.2.1 Resources and Responsibilities
Suggested Staffing Requirements for the QA Functions:
1. Business Analyst
2. Project Manager
3. Team Lead
4. Software Tester

The responsibilities of these positions are as given below:

1. Business Analyst
a. Meeting Client and Gathering Requirements from the client
b. System Documentation
c. Involved in preparation of Test Plan
d. Test conditions development

2. Project Manager
a. Leading the Project
b. Conducting meetings with the client
c. Oversees the Life Cycle and Staff
d. Managing the complete project

3. Team Lead
a. Involved in meetings with the client
b. Preparation of Test Plan
c. Assigning the work to the Team members

4. Software Tester
a. Writing the Test Cases
b. Executing the Test Cases
c. Evaluating test execution
d. Raising the Defects
e. Closing the Defects
f. Sending weekly status reports to the Team Lead

In order to become a good tester, one should have the following characteristics:

 A good test engineer has a 'test to break' attitude, an ability to take the point of view of
the customer, a strong desire for quality, and an attention to detail.

 Tact and diplomacy are useful in maintaining a cooperative relationship with
developers, and an ability to communicate with both technical (developers) and non-
technical (customers, management) people is useful.

 Previous software development experience can be helpful as it provides a deeper
understanding of the software development process, gives the tester an appreciation
for the developers' point of view, and reduces the learning curve in automated test tool
programming.

 Judgement skills are needed to assess high-risk areas of an application on which to focus
testing efforts when time is limited.

1.3 How to Approach a Testing Project?


1. Meeting the management to get the overall picture about the project or background of
the project and discuss the test plan.
2. Meeting Developers after management discussion to get a better understanding of the
application, business rules and documentation of the application.
3. Developing a Test Plan based on Testing Methodology. This is important because this is
the base of the Testing and is prepared by Test Manager/Team Lead typically.
4. Writing Test Cases and testing the application without missing any functionality.
5. Actual testing done based on Test Cases.
6. Reporting any bugs found during testing to the Developers.
7. Retesting the application after the developers fix the bugs/errors already reported by
tester.
8. Sign off and write a completion report.
Chapter 2 - When testing should occur

 "The earlier you identify a defect, the lesser the correction time and cost"

Testing is sometimes incorrectly thought of as an "after the fact" activity, performed after
programming or coding is done for a product. Instead, it should be conducted at every stage or
phase of development to effectively and efficiently test the correctness and consistency of the
product at every step.

In order to know when testing should occur in a software development process, it is mandatory
to have knowledge of the various software process models.

This chapter provides detailed information about the stages or phases a software development
process passes through, called the "Software Development Life Cycle", and when testing
should occur.
2.1 What is “Software Development Life Cycle” (SDLC)?
Every living creature in this world has a well-defined life cycle. Certain activities happen during
specific timeframes, and the order of these activities follows a clear-cut path.

For example, if we take the typical life cycle of a butterfly: first the egg is formed, then it
becomes a larva, then a pupa, and finally a full-fledged butterfly. In the same way, if we take
the life of a human being: first the baby is formed in the uterus, it develops limbs and grows, it
is born, it crawls, walks, runs, and at old age it dies. These things happen to every typical
human being.

Software also has a life, and it follows a typical life cycle depending upon its nature. The
software undergoes many phases before it is developed, undergoes many modifications after
it becomes productive, and one day it becomes obsolete too.

From the inception of the idea to develop a product until that product goes out of use, the
software development process passes iteratively through a set of phases called the "Software
Life Cycle".

The life cycle begins when an application is first conceived and ends when it is no longer in use.
The following sections describe the various stages in the Software Development Life Cycle.

2.1.1 Software Development Life Cycle Phases

1. Requirements Phase
2. Analysis Phase
3. Design Phase
4. Development or Coding Phase
5. Testing Phase
6. Maintenance Phase

1. Requirements Phase:
In a new software development process, the customer organization's management receives
proposals from different vendors for the software that needs to be developed. After receiving
the proposal, the vendor's Project Manager prepares the PIN (Project Initiation Note)
document with an overall plan of the required resources.

If the PIN document is reasonable, the Business Analyst or Functional Lead gathers
project requirements from the customer. The gathered requirements are recorded in the
BRS (Business Requirement Specification) / CRS (Customer Requirement Specification)
document.
2. Analysis Phase:
After gathering the requirements, the Business Analyst and Project Manager analyze
those requirements to prepare the SRS (Software Requirements Specification).

After completion of the BRS and SRS documents, the Business Analyst reviews them
again for completeness and correctness. In this review they verify the following
factors in the BRS and SRS:

 Are the requirements correct?


 Are the requirements complete?
 Are the requirements achievable?
 Are the requirements reasonable?
 Are the requirements testable?

3. Design Phase:
After completion of the BRS and SRS documents and their reviews, the BA, PM and the senior
programmers prepare the project plan with a feasible schedule, hardware requirements,
software requirements etc.

After finalization of the detailed project plan, the senior programmers prepare the HLD
(High Level Design) and LLDs (Low Level Designs).

The HLD indicates the overall architecture or blueprint of the complete software. This
design is also known as the External Design or Architectural Design.

After completion of the HLD, the senior programmers prepare the Low Level
Designs. Each LLD indicates the in-depth logic of a module, function or unit.

In a project design, the HLD is at the system level and the LLDs are at the module or unit level.
Programmers use the LLDs for coding and the HLD for integrating the coded programs.

After completion of the HLD and LLD preparation, the senior programmers conduct reviews
on those documents. In this review, they verify the following factors in the HLD and LLDs:

 Are the designs clear?


 Are the designs correct?
 Are the designs complete?

4. Development or Coding Phase:

After completion of the software design and reviews, the junior programmers write programs
to develop the application or software. In this stage the junior programmers follow the LLDs
to develop the application or software.

5. Testing Phase:
After receiving the software build from programmers, the testing team starts their task
of software testing to validate customer requirements and customer expectations.

6. Maintenance Phase:
After the software is released and training is provided to the customer, the
project manager establishes a "Change Control Board" (CCB) team with some
representatives. If any changes are required, the CCB representatives receive
software change requests from the customer.
2.1.2 Common problems in the software development process:
 Poor requirements - If requirements are unclear, incomplete, too general, or not
testable, there will be problems.
 Unrealistic schedule - If too much work is crammed into too little time, problems are
inevitable.
 Inadequate testing - No one will know whether or not the program is fully tested until
customers complain or systems crash.
 Featuritis - Requests to pile on new features after development is underway; extremely
common.
 Miscommunication - If developers don't know what's needed or customers have
erroneous expectations, problems are guaranteed.

2.1.3 When testing should occur


Testing should occur throughout the different phases of a project.
Requirements Phase
• Determine the test strategy.
• Determine adequacy of requirements.
• Generate functional test conditions.

Design Phase
• Determine consistency of design with requirements.
• Determine adequacy of design.
• Generate structural and functional test conditions.

Development or Coding Phase


• Determine consistency with design.
• Determine adequacy of implementation.
• Generate structural and functional test conditions for programs/units.

Testing Phase
• Determine adequacy of the test plan.
• Test application system.

Installation Phase
• Place tested system into production.

Maintenance Phase
• Modify and retest.
2.1.4 Types of Development Projects
Type: Traditional System Development
Characteristics:
• Uses a system development methodology.
• User knows requirements.
• Development determines structure.
Test Tactic:
• Test at end of each task/step/phase.
• Verify that specs match need.
• Test function and structure.

Type: Iterative Development / Prototyping / CASE
Characteristics:
• Requirements unknown.
• Structure pre-defined.
Test Tactic:
• Verify that CASE tools are used properly.
• Test functionality.

Type: System Maintenance
Characteristics:
• Modify structure.
Test Tactic:
• Test structure.
• Works best with release methods.
• Requires regression testing.

Type: Purchased / Contracted Software
Characteristics:
• Structure unknown.
• May contain defects.
• Functionality defined in user documentation.
• Documentation may vary from software.
Test Tactic:
• Verify that functionality matches need.
• Test functionality.
• Test fit into environment.

2.1.5 Software Testing Process


Below is a very basic software testing process, used by many companies.

1. Understand the Requirements

2. Test Planning: During this phase Test Strategy is defined and Test Bed created. The Plan
should identify:

 The modules to be tested.
 Which SDLC model we are going to follow.
 Which testing technique we are going to follow.
 The different types of automation tools that need to be used.
 The total number of human resources required.
 The different types of testing that need to be done.

3. Test Environment Setup: A separate testing server is prepared where the application
will be tested. This is an independent testing environment.
4. Designing Test Scenarios: The Team lead will identify and write the Test Scenarios. The
Project Manager is going to review the Test Scenarios written by the Team Lead.

5. Writing the Test Cases: Based on the Test Scenarios, the QA engineers or Testers write
the Test Cases. Once a Tester finishes writing the Test Cases, he sends them to the
Team Lead and the concerned people in the Development Team for review. This is
called the "Pre-Review". They review the Test Cases and provide their comments, and
the Tester incorporates the comments given by the Team Lead and the Development
Team.

After incorporating the comments, the Tester again sends the modified Test Cases to
the Team Lead and the Development Team, who review them once more. This is called
the "Post-Review".

Only after the "Post-Review" is done are the Test Cases finalized for testing.

6. Test Cases Execution: Now the Testers will execute the Test Cases and report any errors
found to the Development Team.

7. Defect Tracking: Raised defects are tracked using tools such as Bugzilla, Quality
Center, TOM (Test Object Management Tool) etc.

8. Test Reports: As soon as testing is completed, the Team Lead or Manager generates
metrics and makes final reports for the whole testing effort.

2.2 Different SDLC models


Various frameworks for developing software, called "SDLC models", have been developed.
Some of them are listed below:
 Waterfall Model
 V-Model
 Agile Methodology
 Fountain Model
 Incremental Process Model
 The Incremental Model
 The RAD Model
 Evolutionary Process Models

 Prototyping
 Spiral Model
2.2.1 Water Fall Model

The waterfall model follows the same phases as the SDLC described above. It is the first SDLC
model ever proposed; hence it is also called the "Classic Life Cycle Model". It is a step-by-step
model: only after completion of one phase is the next phase begun.

The characteristics of the model are:

 A classic SDLC model, with a linear and sequential method that has goals for each
development phase.
 The waterfall model simplifies task scheduling, because there are no iterative or
overlapping steps.
 One drawback of the waterfall model is that it does not allow for much revision.

Advantages:
 Allow processes to be managed.
 Make processes more systematic.

Disadvantages:

 Dependence on the completion criteria of the requirement and design phases.
 No going back to the previous phase if an error is found.
 Assumes all stages fall neatly from one to another.
 Doesn't always correspond to reality.

2.2.2 V-Model
The V-model was originally developed from the waterfall software process model. The four
main development phases – requirements, analysis, design and coding – each have a
corresponding verification and validation testing phase. In this methodology, development
and testing take place in parallel with the same information in hand. The typical "V" shows
the development phases on the left-hand side and the testing phases on the right-hand side.
The development team follows a "do" procedure to achieve the goals of the company and the
testing team follows a "check" procedure to verify them.

The implementation of modules is tested by unit testing, the system design is tested by
integration testing, the analysis is tested by system testing, and acceptance testing verifies the
requirements. The V-model gets its name from the timing of the phases.
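The pairing just described can be summarised as a small lookup table (an illustrative sketch; the phase names follow this chapter's terminology):

```python
# V-model pairing: each left-hand development phase is verified by a
# right-hand testing phase, per the description above.
v_model_pairs = {
    "Requirements": "Acceptance Testing",
    "Analysis":     "System Testing",
    "Design":       "Integration Testing",
    "Coding":       "Unit Testing",
}

for dev_phase, test_phase in v_model_pairs.items():
    print(f"{dev_phase} <-> {test_phase}")
```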
Advantages of the V Model:
 Simple and easy to use.
 Each phase has specific deliverables.
 Higher chance of success over the waterfall model due to the development of the test
plans early on during the life cycle.

 Works well for small projects where requirements are easily understood.
Chapter 3 - How to test?

 ”Work smarter not harder”

Testing is an important activity that consumes nearly 40% of a project's time and effort. A
critical success criterion for a software project is on-time delivery. As time constraints are
imposed, no activity in a software development process can take its own time for completion;
rather, each should be completed in an optimum time. With the appropriate use of tools and
reusable components, significant time can be saved in the development life cycle.

In the same way, following a proper, systematic process and doing 'smart' testing saves much
time and effort while still maintaining quality. This idea leads to the testing techniques and
methodologies or strategies used in a testing process.

This chapter gives an insight into the various testing techniques, types, methodologies and
strategies used in a testing process.
3.1 Classification of Testing
There are several approaches that can be adopted in testing an application.
The classification of testing approaches (reconstructed from the original figure):

Testing
  Static
    Reviews etc.: Inspection, Walkthroughs, Desk-checking
    Static Analysis: Control Flow, Data Flow, Symbolic Execution
  Dynamic
    Behavioural
      Functional: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect
      Graphing, Random, State Transition
      Non-functional: Usability, Performance
    Structural: Statement, Branch/Decision, Branch Condition, Branch Condition
    Combination, LCSAJ, Definition-Use, Arcs

The approach to testing can be any of the following types:

1. Static or Verification (non-execution)
   Examination of documentation, source code listings, etc.
2. Dynamic or Validation (execution)
   a. Behavioural Testing
      i. Functional (Black Box) – based on the behaviour/functionality of the software.
      ii. Non-functional – performance and usability testing.
   b. Structural (White Box) – based on the structure of the software.
Verification

The process of evaluating a system or component to determine whether the products of the
given development phase satisfy the conditions imposed at the start of that phase [BS 7925-1].
"Verification" checks whether we are building the system right.

Validation

Determination of the correctness of the products of software development with respect to the
user needs and requirements [BS 7925-1]. "Validation" checks whether we are building the
right system.

Functional testing

Ensures that the requirements are properly satisfied by the application system. The functions
are those tasks that the system is designed to accomplish.

Structural testing

Ensures sufficient testing of the implementation of a function.
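The functional/structural distinction can be sketched with a small example (the leap-year function is hypothetical): functional cases come from the requirement alone, while structural cases are chosen by reading the code so that every branch outcome is exercised.

```python
# Hypothetical function under test.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Functional (black box): derived purely from the stated requirement,
# "century years are leap only when divisible by 400".
assert is_leap(1996) is True
assert is_leap(1900) is False
assert is_leap(2000) is True

# Structural (white box): derived from the implementation, one case per
# outcome of each sub-condition in the boolean expression.
assert is_leap(2023) is False   # year % 4 != 0
assert is_leap(2024) is True    # year % 100 != 0 short-circuits the rest
assert is_leap(2100) is False   # divisible by 100 but not by 400
print("all checks passed")
```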

3.2 Levels of Testing


Any work or task is very difficult to analyze and accomplish as a whole, and testing a product
as a whole meets the same problem. Instead, it is easier to break the task into manageable
components and test those components. To do this, the testing task is divided into several
levels; at each level a particular aspect or component of the product is tested.

The levels of testing are:


1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing

1. Unit Testing
Testing a piece of code is called Unit Testing. This is also known as Program Testing or Module
Testing. Unit Testing is done by the developers.
In Unit Testing the developers validate the programs using the techniques below:
a. Basis Path Coverage
b. Control Structure Coverage
c. Program Technique Coverage
The unit-level or program-level testing techniques are called white box testing techniques,
also known as clear box or glass box testing techniques.
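As an illustration of control structure coverage (the function and cases are hypothetical), a developer picks unit test cases so that every decision outcome in the code is executed at least once:

```python
# Hypothetical unit under test with two decisions (three outcomes overall).
def classify_triangle(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One case per branch outcome gives full branch coverage of this unit.
cases = [
    ((3, 3, 3), "equilateral"),   # first decision true
    ((3, 3, 4), "isosceles"),     # first false, second true
    ((3, 4, 5), "scalene"),       # both decisions false
]
for args, expected in cases:
    assert classify_triangle(*args) == expected
print("branch coverage cases passed")
```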

2. Integration Testing
After the related programs are written and unit tested, the corresponding programs are
interconnected to form a software build. Testing these interconnected modules is called
Integration Testing. Integration Testing is done by the developers. Usually, the following
methods of integration testing are followed:

1. Top-down Integration approach.

2. Bottom-up Integration approach.

1. Top-down Integration approach:

In this approach the developers interconnect the main program with some of the sub-programs.
In place of the sub-programs still under construction, the programmers use "stubs".
A stub is a temporary program; stubs are also called "called programs".
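A stub can be sketched as follows (the function names and values are hypothetical): the main program is complete, the real tax module is not, so a canned stand-in plays the role of the called program.

```python
# Hypothetical stub: a temporary "called program" returning a canned value,
# so the completed caller can be integration-tested early.
def tax_stub(amount):
    return 5.0  # canned result; the real tax module is still under construction

# Completed main ("calling") program; tax_fn will later be the real module.
def compute_invoice_total(amount, tax_fn):
    return amount + tax_fn(amount)

# Top-down integration test exercises the main flow through the stub.
print(compute_invoice_total(100.0, tax_stub))  # prints 105.0
```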
2. Bottom-up Integration approach:
In this approach the programmers interconnect the completed sub-programs. In place of
the calling programs still under construction, the developers use temporary programs
called "drivers". Drivers are also known as "calling programs".
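Symmetrically, a driver can be sketched like this (names hypothetical): the low-level module is finished, its real caller is not, so a temporary calling program exercises it.

```python
# Completed lower-level module awaiting its real caller.
def calculate_tax(amount):
    return round(amount * 0.08, 2)

# Hypothetical driver: a temporary "calling program" standing in for the
# unfinished main module during bottom-up integration.
def driver():
    for amount, expected in [(100.0, 8.0), (12.5, 1.0)]:
        assert calculate_tax(amount) == expected
    return "driver checks passed"

print(driver())
```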

3. System Testing
After receiving a software build from the developers, the testing team concentrates on system
testing to validate customer requirements and customer expectations.
The following tests can be categorized under System Testing:
1. Recovery Testing
2. Security Testing
3. Compatibility Testing
4. Configuration Testing
5. Data Volume Testing
6. Load Testing
7. Stress Testing
8. Performance Testing
9. Installation Testing
10. Encryption/Decryption Testing
11. Conformance Testing
12. Usability Testing
13. End-to-End Testing
14. Regression Testing
15. Re-Testing
16. Smoke/Sanity Testing
17. Alpha Testing
18. Beta Testing
19. Functional Testing

4. User Acceptance Testing


User Acceptance testing occurs just before the software is released to the customer. The end-
users along with the developers perform the User Acceptance Testing with a certain set of test
cases and typical scenarios.

3.3 Test Process Model or Methodology


Software testing methodology is a three-step process of:
 Creating a test strategy
 Creating a test plan/design
 Executing tests

3.3.1 Test Strategy


Test Strategy is an activity or process used to formally describe "what has to be tested" and
"how testing should be conducted or approached" for each level of testing.

A test strategy identifies the best possible use of the available resources and time to achieve
the required testing coverage or identified testing goals. It decides on which parts and aspects
of the system the emphasis should fall.
Test strategy determination is based on a number of factors, a few of which are listed below:
 Product Technology
 Component selection
 Product criticality
 Product complexity

By reviewing the relevant characteristics of a product, key test requirements are derived. Then
the most appropriate test method can be implemented to achieve the real goal: complete test
coverage.

A good test strategy should be:


• Specific
• Practical
• Justified
The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

3.3.2 Test Plan


A Test Plan is a document that describes the Objective, Scope, Approach, Resources and
Schedules of intended test activities. It defines the test items, the features to be tested, the
testing tasks, who will do each task and any risk requiring contingency planning.

The main intention of preparing the Test Plan is that everyone concerned with the project is
in sync with regard to the scope, responsibilities, deadlines and deliverables for the project.

Purpose of Preparing a Test Plan:

A Test Plan is a useful way to think through the efforts needed to validate the acceptability of a
software product.

The completed document will help people outside the test group understand the ‘Why’ and
‘How’ of the product validation. It should be thorough enough to be useful but not so thorough
that no one outside the test group will read it.

Contents of a Test Plan


1. Purpose 10. Risks & Mitigation Plans
2. Scope 11. Tools to be used
3. Test Approach 12. Deliverables
4. Entry Criteria & Exit Criteria 13. Type of methodology used
5. Resources 14. Annexure
6. Tasks / Responsibilities 15. Sign-Off
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements

Contents (In Detail)

Purpose
This section should contain the purpose of preparing the test plan.

Scope
This section should talk about the areas of the application which are to be tested by the QA
team and specify those areas which are definitely out of scope (screens, database, mainframe
processes etc).

Test Approach
This would contain details on how the testing is to be performed and whether any specific
strategy is to be followed (including configuration management).

Entry Criteria
This section explains the various steps to be performed before the start of a test, i.e. the
prerequisites. For example: timely environment set-up, starting the web server / app server,
successful implementation of the latest build etc.

Resources
This section should list out the people who would be involved in the project and their
designation etc.

Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the
various members in the project.

Exit criteria
Contains tasks like bringing down the system / server, restoring system to pre-test
environment, database refresh etc.

Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the
course of the project.

Hardware / Software Requirements


This section would contain the details of PC’s / servers required (with the configuration) to
install the application or perform the testing; specific software that needs to be installed on the
systems to get the application running or to connect to the database; connectivity related
issues etc.

Risks & Mitigation Plans


This section should list out all the possible risks that can arise during the testing and the
mitigation plans that the QA team plans to implement in case a risk actually turns into a
reality.

Tools to be used
This would list out the testing tools or utilities (if any) that are to be used in the project (e.g.)
Quick Test Professional, Load Runner, Test Complete, Test Partner, Quality Center (or) Test
Director.

Deliverables
This section contains the various deliverables that are due to the client at various points of time,
i.e. daily, weekly, at the start of the project, at the end of the project etc. These could include
Test Plans, Test Procedures, Test Matrices, Status Reports, Test Scripts etc. Templates for all of
these could also be attached.

References
 Procedures
 Templates (Client Specific or otherwise)
 Standards / Guidelines (e.g.) QView
 Project related documents (RSD, ADD, FSD etc)
Annexure
This could contain embedded documents or links to documents which have been / will be used
in the course of testing (e.g.) templates used for reports, test cases etc. Referenced documents
can also be attached here.

Sign-Off

This should contain the mutual agreement between the client and the QA team with both leads
/ managers signing off their agreement on the Test Plan.

3.3.3 Test Case designing


Before testing any software, it is necessary to identify:
 Each aspect that has to be tested in the application.
 What set of actions has to be performed to verify those aspects.
 The outcome expected as a result of the actions performed on the application.
This information has to be documented before testing any application, and the document
that contains these details is called the “Test Case Document”.

Test Case:

The definition of “Test Case” differs from company to company, tester to tester and even
project to project.

A Test Case consists of a set of inputs, execution steps and expected results developed for a
particular objective, such as to exercise a particular program path or to verify compliance with a
specific requirement.

Test Case Design

Each test case in the document contains the following details.


 Test Case ID: A unique number given to the test case in order to identify it.
 Test Description: The description of the test case you are going to test.
 Revision History: Each test case has to have its revision history in order to know when
and by whom it was created or modified.
 Function to be Tested: The name of the function to be tested.
 Environment: Tells in which environment you are testing.
 Test Setup: Anything you need to set up outside of your application, for example
printers, network and so on.
 Test Execution: A detailed description of every step of execution.
 Expected Results: The description of what you expect the function to do.
 Actual Results: Pass / Failed.

Any test case should adhere to the following principles:
1. Accurate: Tests what the description says it will test.
2. Economical: Has only the steps needed for its purpose.
3. Repeatable: Tests should be consistent, no matter who executes them or when.
4. Appropriate: Should be apt for the situation.
5. Traceable: The functionality the test case covers should be easily found.
Test Case Example:

PROJECT: COES
MODULE: Order Entry
FORM REF: Authentication (Sec/Page: 5.1.1 / 9)
FUNCTIONAL SPECIFICATION: User Authentication (REF NO: 5.1.1.1)
Document References: COES SRS Ver1.2
TEST OBJECTIVE: To check whether the entered User name and Password are valid or invalid
PREPARED BY: Ashok
TEST CASE NO: oe_auth_1
TEST DATA: User Name = COES and Password = COES
TEST DATE:                Time Taken:

Step No | Steps                                               | Data                           | Expected Results                                                         | Actual Results
1       | Enter User Name and press LOGIN button              | User Name = COES               | Should display warning message box "Please Enter User name and Password" |
2       | Enter Password and press LOGIN button               | Password = COES                | Should display warning message box "Please Enter User name and Password" |
3       | Enter User Name and Password and press LOGIN button | User = COES and Password = XYZ | Should display warning message box "Please Enter User name and Password" |
4       | Enter User Name and Password and press LOGIN button | User = XYX and Password = COES | Should display warning message box "Please Enter User name and Password" |
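A test case document like the one above can also be kept as a structured record, which makes it easy to track execution status programmatically. A minimal sketch (the field names here are chosen for illustration, not taken from any particular tool):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str            # unique Test Case ID
    description: str             # test description
    function_tested: str         # function to be tested
    steps: list                  # (step, data, expected result) tuples
    actual_result: str = ""      # filled in at execution time
    status: str = "Not Run"      # Pass / Failed after execution

tc = TestCase(
    test_case_id="oe_auth_1",
    description="Check whether the entered User name and Password are valid",
    function_tested="User Authentication",
    steps=[("Enter User Name and press LOGIN", "User Name = COES",
            'Warning "Please Enter User name and Password"')],
)
print(tc.test_case_id, tc.status)  # oe_auth_1 Not Run
```

In practice such records usually live in a test-management tool such as Quality Center rather than in code, but the structure is the same.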

3.4 Testing Techniques


Writing test cases for all the possible test conditions results in “exhaustive testing”, and this kind
of testing is impractical. So the tester has to intelligently select a subset of all the possible
tests which has a high probability of finding defects. There are various testing techniques to
assist a tester in deciding which test cases to select and which to avoid.

A Test Technique is:


 a procedure for selecting or designing tests
 based on a structural or functional model of the software
 successful at finding faults
 'best' practice
 a way of deriving good test cases
 a way of objectively measuring a test effort
The aspects and items to be tested can be identified by referring to the Software Requirement
Specification, Functional Specification, Design Documents etc. Still, certain aspects of an
application require a large number of test cases to verify their correctness and proper
functioning. To reduce the effort and time involved, proper techniques have to be identified and
applied. By using techniques, a tester can derive a small set of test cases that cover a feature
completely.

Advantages of using Test Techniques:

 Different people: similar probability of finding faults
o gain some independence of thought
 Effective testing: find more faults
o focus attention on specific types of fault
o know you're testing the right thing
 Efficient testing: find faults with less effort
o avoid duplication
o systematic techniques are measurable

3.4.1 Black-Box testing technique


This technique is used for testing based solely on analysis of the requirements (specification,
user documentation). This is also known as functional testing.
Black Box Testing

• It is testing without knowledge of the internal workings of the item being tested.
• The tester would only know the "legal" inputs and what the expected outputs should be,
but not how the program actually arrives at those outputs.
• It is because of this that black box testing can be considered testing with respect to the
specifications; no other knowledge of the program is necessary.
• The tester and the programmer can be independent of one another, avoiding
programmer bias toward his own work.
• For this testing, test groups are often used.

Black box testing Methods

 Equivalence Partitioning
 Boundary Value Analysis
 Orthogonal Array Testing
 Specialized Testing

Equivalence Partitioning:

Equivalence partitioning is a method for deriving test cases. In this method, classes of input
conditions called equivalence classes are identified, such that each member of a class causes
the same kind of processing and output to occur.
The tester identifies various equivalence classes for partitioning. A class is a set of input
conditions that is likely to be handled the same way by the system. If the system were to
handle one case in a class erroneously, it would handle all cases in that class erroneously.

Equivalence class guidelines:

Input Condition            | Equivalence Classes
Specifies a range          | One valid and two invalid equivalence classes are defined.
Requires a specific value  | One valid and two invalid equivalence classes are defined.
Boolean                    | One valid and one invalid equivalence class is defined.
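The "specifies a range" guideline above can be illustrated in code. Assuming a hypothetical age field that accepts values from 18 to 60, there is one valid class and two invalid classes, and one representative value per class is enough:

```python
# Equivalence partitioning for a hypothetical field that accepts ages 18-60:
# "specifies a range" gives one valid and two invalid equivalence classes.

def partition_age(value, low=18, high=60):
    """Classify an input value into its equivalence class."""
    if value < low:
        return "invalid (below range)"
    if value > high:
        return "invalid (above range)"
    return "valid"

# One representative value per class is sufficient, since every member
# of a class is assumed to be handled the same way by the system.
representatives = {"invalid (below range)": 5,
                   "valid": 30,
                   "invalid (above range)": 75}

for expected, value in representatives.items():
    assert partition_age(value) == expected
```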

Boundary Value Analysis:

A black-box technique that focuses on the boundaries of the input domain rather than on its center.

BVA guidelines:

Input Condition                                                          | BVA Guideline
Specifies a range bounded by values a and b                              | Test cases should include a and b, and values just above and just below a and b.
Internal program data structures have boundaries (e.g. size limitations) | Be certain to test the boundaries.
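The range guideline can be mechanized: for an integer range bounded by a and b, the boundary inputs are a and b themselves plus the values just above and just below each. A small sketch:

```python
def boundary_values(a, b):
    """Boundary test inputs for an integer range [a, b]: the bounds a and b,
    plus the values just below and just above each bound."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For a field accepting 1-100, six inputs cover the boundaries.
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```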

Orthogonal Array Testing:

• Black-box technique that enables the design of a reasonably small set of test cases that
provides maximum test coverage.
• Focus is on categories of faulty logic likely to be present in the software component
(Without examining the code)

Priorities for assessing tests using an orthogonal array

• Detect and isolate all single mode faults


• Detect all double mode faults
• Multimode faults
Specialized Testing:

• Graphical user interfaces


• Client/server architectures
• Documentation and help facilities
• Real-time systems
– Task testing (test each time dependent task independently)
– Behavioral testing (simulate system response to external events)
– Inter-task testing (check communications errors among tasks)
– System testing (check interaction of integrated system software and hardware)

Advantages of Black Box Testing:

• More effective on larger units of code than glass box testing.


• Tester needs no knowledge of implementation, including specific programming
languages.
• Tester and programmer are independent of each other.
• Tests are done from a user's point of view.
• Will help to expose any ambiguities or inconsistencies in the specifications.
• Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing:

• Only a small number of possible inputs can actually be tested, to test every possible
input stream would take nearly forever.
• Without clear and concise specifications, test cases are hard to design.
• There may be unnecessary repetition of test inputs if the tester is not informed of test
cases the programmer has already tried.
• May leave many program paths untested.
• Cannot be directed toward specific segments of code which may be very complex (and
therefore more error prone).
• Most testing related research has been directed toward glass box testing

3.4.2 White-Box testing technique


This technique is used for testing and analysis of the internal logic of the software (design,
code etc). This is also known as structural testing.
Some of the White Box testing techniques are:
 Statement Testing
 Branch / Decision Testing
 Data Flow Testing
 Branch Condition Testing
 Branch Condition Combination Testing
 Modified Condition Decision Testing
 LCSAJ Testing
Benefits of White box testing:

• White box testing approaches examine the program structure and derive test data
from the program logic.
• Structural testing is sometimes referred to as clear-box testing, since black boxes are
considered opaque and do not really permit visibility into the code.

Black box versus white box

Black box testing is appropriate at all levels of testing but dominates the higher levels
(System and Acceptance testing). White box testing is used predominantly at the lower levels
(Component and Integration testing) to complement black box testing.

Levels of testing, from highest to lowest: Acceptance, System, Integration, Component.

3.5 Static Testing or Verification


The process of evaluating a system or component to determine whether the products of a
given development phase satisfy the conditions imposed at the start of that phase [BS 7925-1].
“Verification” checks whether we are building the system right.
Reviews are of 3 types:
 Informal Reviews
 Walkthroughs
 Inspections

Informal Reviews:

Any one-to-one meeting that happens between two persons is called an “Informal
Review”. It is also called a “Peer Review”.
Peer reviews are not preplanned, the details of the discussion are not documented and the
outcome of the review is not reported, which is why the review is called “informal”.
Walkthroughs:

In a walkthrough the designer or programmer leads the members of the development team
and other interested parties through the software and the participants ask questions and make
comments about possible errors, violations of development standards and other problems.
Walkthroughs are still not formally planned, so they are also called “Semi-formal Reviews”.

Purpose of Walkthrough:

 Evaluate software product, check conformance to standards and specifications


 Educating/Training the Participants.
 Find anomalies
 Improve the software product.
 Exchange of techniques and ideas.

Participants:

1. Walkthrough Leader
2. Recorder
3. Author
4. Team Members.

Inspections:

A visual examination of a software product to detect and identify software anomalies, including
errors and deviations from standards and specifications

Purpose of Inspection:

 Verifies that the software satisfies its specifications.


 Verifies that the software satisfies specified quality attributes.
 Verifies that the software conforms to applicable regulations, standards, guidelines,
plans and procedures.
 Identifies deviations from standards and specifications.
The following software products are subjected to inspection:

 Software Requirement Specification.


 Software Design Specification.
 Source Code.
 Software Test Documentation.
 Software User Documentation.
 Maintenance Manual.
 Release Notes.
Participants:

1. Inspection Leader
2. Recorder
3. Reader
4. Author
5. Inspector

Rules & Guidelines:

Following are the rules or guidelines to be followed in conducting an inspection.


 3-6 participants.
 All can act as inspectors.
 The author shall not act as Inspection Leader, Reader or Recorder.
 Roles may be shared among the team members.
 Participants may act in more than one role.

Responsibilities:

Each participant is given some responsibilities which he is required to carry during the
inspection.
 Moderator: planning and preparation.
 Recorder: documentation work.
 Reader: leads the team through the software.
 Author: checks that the software meets the entry criteria for inspection.
 Inspector: identifies and describes the anomalies in the software.

Inspection process:

 Planning: confirms that the material to be inspected meets the entry criteria; arranges
the availability of appropriate participants; schedules a meeting place and time.
 Overview meeting: educates the group of participants in what is to be inspected;
assigns inspection roles to participants.
 Preparation: participants separately learn the material and find potential defects.
 Examination: identified defects are agreed on by the group and classified.
 Rework: the author corrects all defects.
 Follow up: the moderator or the entire team verifies that all fixes are effective and that
no additional defects have been introduced.
[Diagram: the inspection process flow (Planning, Overview meeting, Preparation, Examination,
Rework, Follow up), annotated with the Moderator, Author, Reader, Recorder and Inspector
roles involved at each stage.]

Why to do inspections:

 Cost of detecting and fixing defects is less during the earlier stages.
 Testing alone cannot eliminate all defects.
 It gives management an insight into the development process.
 Quality can be maintained from the initial stages.

3.6 Test Execution and Fault Reports


The test cases are prepared, collected and stored in a central location, from where all the
team members can read and share the test case details. The test cases are reviewed by team
members and/or the lead tester and are approved by the lead tester. The next action item is to
execute the test cases. The program unit(s) to be tested using the test cases must be ready;
otherwise we cannot test.

In unit testing, the testing starts as and when builds/programs are completely coded. When
the related units are coded and unit tested, the integration test starts. Only when all units are
unit tested and integration tested does the system test start.

1. Test Case Distribution

Before the test cases are executed, the Test Lead will allocate the test cases to different
individual testers, depending upon the test groups and availability of the testers. Also, the
Test Lead will fix the target dates for executing the test cases, for individual testers. This
comes under the work distribution and scheduling part of the Test Lead.

2. Test Environment Set-UP

Before any one can start testing, the test environment must be ready. It is always advised to
use a separate test environment, which is different from the development environment. It
may be a different machine altogether or a different set of drivers/directories from which
the test cases are executed. The entire set of program files, database objects etc are to be
copied (or installed) in the test environment, before the test execution begins.

By setting up a separate environment for the testing, the integrity of the software
components is ensured and, at the same time, the programs and software are prevented
from getting overwritten when developers fix bugs and recompile.

Some of the important things to remember in the test bed setup:

 There must be no development tools installed in a test bed.


 Ensure the right OS and service pack / patch installed.
 Ensure the disks have enough space for the application.
 Carry out a virus check if needed.
 Ensure the integrity of the web server.
 Ensure the integrity of the database server.

3. Test Data Preparation

The test input section of the test cases, define what are the actual values that are to be fed
to the application programs / screens. This data can be identified either at the time of
writing the test cases itself or just before executing the test cases. Data that are very much
static can be identified while writing the test case itself (for example, name field or amount
field in a banking deposit screen). Data which are dynamic and configurable need more
analysis before preparation. For example, if a shop provides various percentage discounts to
the articles being sold, depending upon the season, the data is not static and it changes for
every season. To test such functionalities, multiple sets of data values are to be prepared.
Again, the values may not be fixed, but they may be configurable.

So, just before executing the test cases, these kinds of data are to be prepared. Preparation
of test data depends upon the functionality that is being tested.
4. Actual Test Execution

Once the data is ready, the tester’s job is to go through the test pre-requisites and make sure
that they are satisfied. For example, if the test case is to withdraw some money from an account,
the user must have access rights to perform that operation and the account should have
enough money to be withdrawn. The tester will have to ensure these pre-requisites exist
before starting the test case. This may include going into a separate screen and feeding data,
or going to the database and populating it manually, etc.

3.7 Types of Testing


To test different aspects of an application several types of testing have been defined.

INSTALLATION TESTING: Testing whether the software is able to install successfully or not is
called Installation Testing. Below are some tips for performing Installation Testing:

 Check whether the installer checks for the dependent patches / software while
installing the application / product.
 Check whether the installer gives a default installation path.
 Installation should start automatically when the CD is inserted.
 The installer should give remove / repair options.
 When you perform uninstallation, check that all the registry keys, files, DLLs, shortcuts
and ActiveX components are removed from the system after uninstalling the software.

SMOKE/SANITY TESTING: Smoke/Sanity testing is done to check whether the main
functionalities of the application are working correctly and whether the s/w build can be
accepted for further testing.
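A smoke suite is typically a short, fixed list of checks run against every new build before accepting it for further testing. A minimal sketch, where the individual checks are hypothetical placeholders for real main-functionality checks:

```python
# A build is accepted for further testing only if every smoke check passes.
# The checks below are hypothetical placeholders for real main-functionality
# checks (login works, the core workflow completes, and so on).

def check_login():
    return True

def check_core_workflow():
    return True

def check_logout():
    return True

SMOKE_CHECKS = [check_login, check_core_workflow, check_logout]

def accept_build():
    return all(check() for check in SMOKE_CHECKS)

print("Build accepted" if accept_build() else "Build rejected")  # Build accepted
```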

COMPATIBILITY TESTING: Testing to ensure compatibility of an application or Web site with


different browsers, Operating Systems, and hardware platforms. Compatibility testing can be
performed manually or can be driven by an automated functional or regression test suite.

USABILITY TESTING: Usability testing is testing for ‘user friendliness’. Clearly this is subjective
and depends on the targeted end-user or customer. User interviews, surveys, video recording
of user sessions and other techniques can be used. Test engineers are needed, because
programmers and developers are usually not appropriate as usability testers.

CONFORMANCE TESTING: Verifying implementation conformance to industry standards.


Producing tests for the behavior of an implementation to be sure it provides the portability,
interoperability, and/or compatibility a standard defines.
FUNCTIONAL TESTING: Testing the application against business requirements. Functional
testing is done using the functional specifications provided by the client or by using the design
specifications like use cases provided by the design team.

Functional Testing covers:


 Sanity Testing
 System Testing
 Regression Testing
 User Acceptance Testing

Black-box type testing geared to the functional requirements of an application; this type of
testing is normally done by the testers.

NON-FUNCTIONAL TESTING: Testing the application against the client’s performance requirements.
Non-functional testing is done based on the requirements and the test scenarios defined by the
client.

Non-Functional Testing covers:-

 Load and Performance Testing


 Stress and Volume Testing
 Compatibility Testing
 Security Testing
 Installation Testing

END-TO-END TESTING: Similar to system testing, this involves testing of a complete application
environment in a situation that mimics real-world use, such as interacting with a database, using
network communications, or interacting with other hardware, applications or systems.

REGRESSION TESTING: In Regression Testing we check whether all the bugs raised in the
previous build are fixed, and whether fixing those bugs has introduced any new bugs.

Regression may be conducted manually, by re-executing a subset of all test cases, or by using
automated capture/playback tools.

The Regression test suit contains three different classes of test cases:

• A representative sample of tests that will exercise all software functions.


• Additional tests that focus on software functions that are likely to be affected by the
change.
• Tests that focus on the software components that have been changed
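If each test case is tagged with the software functions it exercises, the second class above (tests likely to be affected by a change) can be selected mechanically. A sketch with hypothetical test names and tags:

```python
# Selecting the regression subset mechanically: each test case is tagged
# with the software functions it exercises (tags and names are hypothetical).
all_tests = {
    "test_login":    {"auth"},
    "test_deposit":  {"account", "ledger"},
    "test_withdraw": {"account"},
    "test_report":   {"reporting"},
}

changed_components = {"account"}  # components touched by the latest change

# Tests that focus on functions likely to be affected by the change:
affected = sorted(name for name, tags in all_tests.items()
                  if tags & changed_components)
print(affected)  # ['test_deposit', 'test_withdraw']
```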

RECOVERY TESTING: Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed. If recovery is automatic, then
re-initialization, checkpointing mechanisms, data recovery and restart should be evaluated for
correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.

SECURITY TESTING: Security testing attempts to verify whether the programs, data and
documents are safe from unauthorized access.

LOAD TESTING: Testing an application under heavy loads to measure the response time when
the application is subjected to an increase in load.

STRESS TESTING: Also described as functional testing of the system while under unusually heavy
loads or heavy repetition of certain actions or inputs, to find the breakpoint at which the
application fails.
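Load and stress testing both revolve around measuring response time as the load grows; the breakpoint is the first load level at which the response time becomes unacceptable. A simplified sketch (the operation, threshold and load levels are illustrative):

```python
import time

def respond(n):
    """Hypothetical operation whose response time grows with the load n."""
    return sum(i * i for i in range(n))

def find_breakpoint(threshold_s=0.2, start=10_000, factor=10, max_load=10**7):
    """Increase the load by a constant factor until the response time
    exceeds the acceptable threshold; the first failing load level
    approximates the breakpoint."""
    load = start
    while load <= max_load:
        started = time.perf_counter()
        respond(load)
        if time.perf_counter() - started > threshold_s:
            return load
        load *= factor
    return None  # no breakpoint found within the tested range

breakpoint_load = find_breakpoint()
print("breakpoint at load:", breakpoint_load)
```

Real load tests drive many concurrent users with a tool such as Load Runner rather than a single in-process loop, but the measurement idea is the same.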

DATA VOLUME TESTING: This is also known as “Storage Testing”. During this testing the testers
validate the maximum capacity of the s/w build to store user-related data.

CONFIGURATION TESTING: This is also known as “Portability Testing”. During this testing the
testing team validates whether the s/w build works on the different platforms expected by
the customer. Platform means the operating system, browser, compilers and other system
software.
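The customer-expected platforms form a matrix that can be enumerated as a cross-product of the platform dimensions. A sketch with hypothetical values:

```python
from itertools import product

# Hypothetical platform dimensions for a configuration test matrix.
operating_systems = ["Windows", "Linux", "macOS"]
browsers = ["Chrome", "Firefox"]

# Full cross-product: every OS paired with every browser.
matrix = list(product(operating_systems, browsers))
for os_name, browser in matrix:
    print(f"run build on {os_name} / {browser}")

print(len(matrix))  # 6 configurations to cover
```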

ENCRYPTION / DECRYPTION TESTING: When the s/w build is running on a network, the client
process encrypts the original data and the server process decrypts it to recover the original
data, so that other parties are prevented from accessing the data. This testing verifies that the
encryption and decryption work correctly.

RE - TESTING: Testing the same module with different sets of data or with the different inputs is
called retesting.

Ad Hoc TESTING: Testing the application without having any test cases is called Ad Hoc Testing.
In Ad Hoc Testing the testers will try to ‘break’ the system by randomly trying the System’s
functionality.

Incremental Integration Testing: Continuous testing of an application as new functionality is
added.

DOCUMENTATION TESTING: Documentation testing is checking the user manuals, help screens
and setup instructions etc to verify they are correct and appropriate.
General Principles

1. Getting the Big Picture: Skim the table of contents to get a feel for the content of the
manual.
2. Logical Order: The order in which topics are discussed or in which steps are covered is very
important. You may find steps or topics that should be discussed before another topic.
3. Audience: Keep in mind the audience of this manual.
o Is the information appropriate for the person who will be using this manual?
o Is this person a system administrator, a developer, an end-user?
o Is the information too technical or not technical enough?
o What other types of information does this person need?
4. Formal Hand-off: When you have completed reviewing a manual, schedule a formal 10- to
15-minute meeting to sit down with the writer and review your comments and concerns.

Testing End-User Manuals

NOTE: These types of manuals are very procedure-oriented and include installation guides,
administration tool guides, and end-user guides.
1. Overview and General Information: Please check this information for technical accuracy as
well—even if it seems marketing-oriented.
2. Checking Screens:
o Are screens up to date?
o Are all necessary screens included?
o Are screens correctly identified and labeled?
o Are screen elements and menu options correctly identified?
3. References to other sections or manuals:
o Do references take the reader to the correct page and section?
o If the reference is to another manual, is it the appropriate manual?
4. Technical Issues:
Are there paragraphs or sections that must be rewritten because of missing or inaccurate
technical information? If so, or if critical technical information is missing, alert the writer
immediately; do not wait until your reviews are turned in.
If meetings with a developer or another information source are required in order to get
information, see if the writer can attend with you; it will save you both some time.
For the first test draft (the Beta Draft), note bugs and other concerns on the manuscript. For
the final test draft, note bugs on the manuscript and in a formal bug-tracking system, if your
department uses one.

Testing Developer Manuals

NOTE: Developer manuals provide information on APIs, programming tools, programming


templates, and so forth.
1. Checklist: The checklist items for testing end-user manuals also apply to these manuals.
2. Overview and General Information: Performing a technical review of the overview and
general information sections is very critical in this type of document, as it is more likely to
contain technical information than end-user manuals.
3. Functionality Accuracy:
o Does the manual accurately describe how the application flows?
o Does the manual accurately describe how method calls function?
o Does the manual accurately describe the ramifications of using different methods,
and the reasons a programmer would choose one method over another?

ALPHA TESTING: Alpha testing is conducted at the developer’s site, in a controlled
environment, by the end-user of the software.

BETA TESTING: The Beta testing is conducted at one or more customer sites by the end-user of
the software. The beta test is a live application of the software in an environment that cannot
be controlled by the developer. Beta testing is the last stage of testing, and normally can
involve sending the product to beta test sites outside the company for real-world exposure or
offering the product for a free trial download over the Internet. Beta testing is often preceded
by a round of testing called alpha testing.

Objectives of Beta Testing:


• Ironing out bugs.
• Understanding the product from the customer’s point of view (what is important).
• Building a strong community for your product that can be used for testimonials.

3.8 Defect Tracking


Defect:
“Any deviation or variation from the product specification that leads to customer dissatisfaction
is termed a defect”
Defects are classified into three categories

1. Missing Functionality
2. Extra Functionality
3. Wrongly Understood Functionality

A functionality that is specified in the product specification but not implemented in the
software product is called as “Missing Functionality”.

A Functionality that is not specified in the specification but found in the software is called as
“Extra Functionality”.

A Functionality that is wrongly implemented in the software is called as “Wrongly understood


functionality”.

3.8.1 Bug Life Cycle


From the time a bug is identified in the system to the state where it is completely removed
from the application, the bug revolves through a set of phases called as Bug life cycle.

After a bug is identified it is reported to the concerned people through a document called as
“Bug Report”. Bug Report is a document that contains the details regarding the variations or
deviation identified in the application.

Contents of Bug Report:

Bug Id: A unique ID given to the bug.
Manager: The manager who should go through this bug.
Developer: The developer responsible for the source file.
Build: The build name.
Date of creation: The date when the bug was logged.
Created By: The name of the tester.
Verified By: The name of the person who is going to verify the bug.
Keyword: The keyword to use when searching for the bug in future.
Project Id: The project ID.
Priority: Assigned by the manager; graded from low through medium to high, it shows the
importance of resolving the bug.
Module: The name of the module.
Severity: Assigned by the manager; it shows how the bug affects the behaviour of the product.
Synopsis: A one-line description of the bug.
Description: The steps to reproduce the bug.
Evaluation: The root cause of the bug.
Fixing: What effort or code change is needed to solve the problem.
Hardware & Software: The environment details.
Closed because: The reason why the bug was closed (it may be a duplicate bug).
Duplicate of: The existing bug this one duplicates.
Deferred for: The bug won’t be fixed in this version; these are minor errors like font size,
help contents etc.
Attachment: Attachments used to communicate the bug.
Mailing List: Also used to communicate the bug.


3.8.2 Bug Tracking Process or Bug Life Cycle

Bug Lifecycle

	When the bug is posted for the first time, its status will be "New", meaning the bug is not yet approved.
	After the tester has posted the bug, the tester's Team Lead approves that the bug is genuine and changes the status to "Open".
	Once the Team Lead has changed the status to "Open", he assigns the bug to the corresponding developer's team. The state of the bug now changes to "Assigned".
	If the developer feels it is not a genuine bug, he rejects it. The state of the bug then changes to "Rejected".
	If the bug is moved to the "Deferred" state, it is expected to be fixed in a later release. There can be several reasons for moving a bug to this state: the priority of the bug might be low, there may be a lack of time before the release, or the bug might not have a significant effect on the application.
	Once the bug is resolved or fixed by the developer, he changes the status to "Fixed".
	Once the bug is fixed by the developer, it comes back to the testing department. The tester now tests the fixed bug and, after testing it, changes the status to "Verified".
	Once the tester has verified the fixed bug, if the same bug does not occur again, he changes the status to "Closed". If the tester encounters the same bug again, he changes the status to "Reopen".
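The lifecycle above is essentially a small state machine; a tracker should reject any status change that is not among the allowed transitions. The sketch below is illustrative Python, not the API of any particular bug-tracking tool (the state names restate the lifecycle described above):

```python
# Allowed bug-status transitions, following the lifecycle described above.
TRANSITIONS = {
    "New":      {"Open"},                           # Team Lead approves the bug
    "Open":     {"Assigned"},                       # Team Lead assigns it to a developer
    "Assigned": {"Rejected", "Deferred", "Fixed"},  # developer's possible outcomes
    "Fixed":    {"Verified"},                       # tester re-tests the fix
    "Verified": {"Closed", "Reopen"},               # bug gone, or it came back
    "Reopen":   {"Assigned"},                       # back to the developer's team
}

def change_status(current, new):
    """Return the new status, or raise if the transition is not allowed."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move a bug from {current!r} to {new!r}")
    return new

# Walk one bug through a full happy-path cycle:
status = "New"
for step in ("Open", "Assigned", "Fixed", "Verified", "Closed"):
    status = change_status(status, step)
print(status)  # Closed
```

A tester trying to jump a "New" bug straight to "Closed" would get a `ValueError`, which is exactly the kind of process enforcement real trackers provide.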

3.8.3 Defect Severity Classification:


“Severity is the impact of the defect on the application.” Identifying severity helps in focusing on which defects are more important and require immediate attention.
This section defines a defect Severity Scale framework for determining defect criticality and the
associated defect Priority Levels to be assigned to errors found in software.
Severity can be classified as follows:
	Critical - There is a functionality block. The application is not able to proceed any further. The Development Team must be informed immediately, and they need to take corrective action immediately.
	Major - The application is not working as desired. There are variations in the functionality. The Development Team must be informed the same day, and they need to take corrective action within 0-24 hours.
	Minor - There is no failure reported due to the defect, but it certainly needs to be rectified. The Development Team must be informed within 24 hours, and they need to take corrective action within 24-48 hours.
	Cosmetic - Defects in the User Interface or Navigation. The Development Team must be informed within 48 hours, and they need to take corrective action within 48-96 hours.
 Suggestion - Feature which can be added for betterment.

Defect Priority:

The priority level describes the time for resolution of the defect. The priority level would be
classified as follows:
	Immediate : Resolve the defect with immediate effect.
	At the Earliest : Resolve the defect at the earliest; it holds second-level priority.
	Normal : Resolve the defect in the normal course.
	Later : Can be resolved in later stages.
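As a rough sketch, the severity scale and its response windows can be captured in a lookup table. The hour values below simply restate the windows listed above; the field names are illustrative, not from any standard:

```python
# Severity levels and their response windows (in hours), restating the
# classification above. inform_within: how soon the Development Team must
# be informed; fix_within: the (min, max) window for corrective action.
SEVERITY = {
    "Critical": {"inform_within": 0,  "fix_within": (0, 0)},    # immediately
    "Major":    {"inform_within": 24, "fix_within": (0, 24)},   # "that day"
    "Minor":    {"inform_within": 24, "fix_within": (24, 48)},
    "Cosmetic": {"inform_within": 48, "fix_within": (48, 96)},
}

def fix_deadline_hours(severity):
    """Upper bound, in hours, for corrective action on a defect."""
    return SEVERITY[severity]["fix_within"][1]

print(fix_deadline_hours("Major"))     # 24
print(fix_deadline_hours("Cosmetic"))  # 96
```

Keeping the windows in data rather than prose makes it trivial for a tracker to flag defects that have overrun their deadline.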
3.9 Test Reports
Individual testers will send their status on executing the test cases to the Team Lead, on a daily or weekly basis. This will include the test cases the tester has executed during that period, the total number of test cases passed, the number of test cases failed, and the total number of test cases pending. When the Team Lead gets this status from the individual testers, he will consolidate all these details and derive various kinds of reports. These will be used for tracking the testing activities and for further planning.
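The consolidation step itself is simple arithmetic over the per-tester counts; a minimal sketch (the field names are illustrative, not taken from any report template):

```python
# Each tester reports passed / failed / pending counts for the period.
reports = [
    {"tester": "A", "passed": 40, "failed": 5, "pending": 10},
    {"tester": "B", "passed": 32, "failed": 3, "pending": 7},
]

def consolidate(reports):
    """Team Lead's roll-up of individual tester status reports."""
    totals = {"passed": 0, "failed": 0, "pending": 0}
    for report in reports:
        for key in totals:
            totals[key] += report[key]
    # Executed = everything that actually ran, whatever the outcome.
    totals["executed"] = totals["passed"] + totals["failed"]
    return totals

print(consolidate(reports))
# {'passed': 72, 'failed': 8, 'pending': 17, 'executed': 80}
```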

Below is the Test Report template.

After the Test Execution phase, the following documents would be signed off.

1. Project Closure Document.


2. Reliability Analysis Report.
3. Stability Analysis Report.
4. Performance Analysis Report.
5. Project Metrics.

3.10 Traceability Matrix


The document which shows the relationship between the “Test Requirements” and the “Test
Cases” is called the Traceability Matrix.

It is a very important document in the Testing phase because by looking at it we can clearly
understand whether we have covered all the Test Cases for each particular requirement.
Fig: - Traceability Matrix Template
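In its simplest form a traceability matrix is a mapping from each requirement to the test cases that cover it, and an empty mapping immediately exposes an uncovered requirement. A small illustrative sketch (the requirement and test-case IDs are made up):

```python
# Requirement ID -> test cases covering it (all IDs are hypothetical).
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],               # no test case traces here yet!
}

def uncovered(matrix):
    """Requirements with no test case tracing back to them."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered(traceability))  # ['REQ-003']
```

Running such a check before test execution begins is exactly the review the matrix is meant to support: every requirement must have at least one test case against it.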

3.11 Testing: in-Short (Black Box Testing)


1) The product “Requirements” must be very clearly understood by the testers. This may happen
through Product Overview Sessions, going through the requirements documents, etc.

2) Test Planning is the next step. This is done in form of a “Test Plan”. In this, the following
items are addressed:-

a) The Scope of testing is discussed. Which modules are in the scope and which are out
of scope.
b) The risks involved in the test activity. Any risk that may affect the schedule, cost and
other resources must be clearly documented along with what could be the
mitigation plan against each risk.
c) The types of testing. Different types to be conducted are Installation Testing (all
installer packages), Functionality Testing (of course this is must), Security Testing
(application role based and NT authentication based), Volume Testing (For huge
number of database records), Stress Testing (For a large number of concurrent
users), Recovery Testing, Documentation Testing (Help Documents, Troubleshooting
manuals etc), Compatibility Testing (IE VS Netscape, different backend databases)
etc.
Note: All the above mentioned testing(s) are applicable to all projects.

d) Escalation Criteria - If things go wrong, who are all to be informed within what time
frame?
e) Resources: What hardware, software, and human resources are needed to execute
the project.
f) Schedule of the testing activity with important milestones.

This Test Plan document is prepared by the Team Lead and is reviewed by the Project
Manager and the Development Lead also.

3) The Testers will write all “Testing Scenarios” with a brief description - basically, a document
listing all possible test scenarios in a nutshell. This is reviewed by the Team Lead.

4) The Testers write detailed “Test Cases”. These elaborate the Test Precondition, Test
Input, Test Steps and Expected Results. The Testers send the Test Cases to the Team
Lead and the Developer’s Lead for review. This is called the Pre-Review. They will review
the Test Cases and give their comments on whether anything needs to be added or
modified.

After incorporating the comments given by the Team Lead and the Developer’s Lead, the Tester
sends the Test Cases again for review. This is called the “Post Review”.

After the Post Review, the Test Cases are ready for execution.

5) When the Development Team gives the build to the testers, testers will execute the Test
Cases one by one. When they find problems in the application, the bugs are logged through
Excel Sheets (or) through any Bug Tracking System like Bugzilla, Quality Center, TOM (Test
Object Management) tools.

The Development Team fixes the bugs and in the next build, the testing team does
Regression Testing. This cycle continues until bugs go to zero or to a minimal acceptable
level.

6) Regression test automation becomes an activity when a large number of tests needs to be
repeated. We may use typical automation tools like Quick Test Professional, Test
Partner or Test Complete. The Testers will write Test Scripts (recording and inserting
verification points), and this automated test library will be in place for test execution in the
subsequent test cycles.
Glossary

Acceptance Testing: Formal testing conducted to enable a user, customer, or other authorized
entity to determine whether to accept a system or component. [IEEE].

Ad Hoc Testing: Testing carried out using no recognized test case design technique.

Alpha Testing: Simulated or actual operational testing at an in-house site not otherwise
involved with the software developers.

Behavior: The combination of input values and preconditions and the required response for a
function of a system. The full specification of a function would normally comprise one or more
behaviors.

Beta Testing: Operational testing at a site not otherwise involved with the software developers.

Big-bang Testing: Integration testing where no incremental testing takes place prior to all the
system's components being combined to form the system.

Black Box Testing: See "Functional Testing".

Bottom-up Testing: An approach to integration testing where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The process is
repeated until the component at the top of the hierarchy is tested.

Boundary Value: An input value or output value which is on the boundary between equivalence
classes, or an incremental distance either side of the boundary.

Boundary Value Analysis: A test case design technique for a component in which test cases are
designed which include representatives of boundary values.

Boundary Value Coverage: The percentage of boundary values of the component's equivalence
classes which have been exercised by a test case suite.
Boundary Value Testing: A test case selection technique in which test data is chosen to lie
among "boundaries" or extremes of input domain (or output range) classes, data structures,
procedure parameters, etc. Boundary value test cases often include the minimum and
maximum in-range values and the out-of-range values just beyond these values.
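For instance, for an input field accepting an integer range, the typical boundary value set can be generated mechanically; a sketch under that assumption:

```python
def boundary_values(minimum, maximum):
    """Typical boundary-value test inputs for an integer range:
    the extremes, the values just inside them, and the out-of-range
    values just beyond them."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# A hypothetical age field accepting values 18..60:
print(boundary_values(18, 60))  # [17, 18, 19, 59, 60, 61]
```

Here 17 and 61 are the out-of-range values just beyond the boundaries, where off-by-one faults most often hide.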

Branch: A conditional transfer of control from any statement to any other statement in a
component, or an unconditional transfer of control from any statement to any other statement
in the component except the next statement, or when a component has more than one entry
point, a transfer of control to an entry point of the component.

Branch Condition: See decision condition.

Branch Condition Combination Coverage: The percentage of combinations of all branch
condition outcomes in every decision that have been exercised by a test case suite.

Branch Condition Combination Testing: A test case design technique in which test cases are
designed to execute combinations of branch condition outcomes.

Branch Condition Coverage: The percentage of branch condition outcomes in every decision
that have been exercised by a test case suite.

Branch Condition Testing: A test case design technique in which test cases are designed to
execute branch condition outcomes.

Branch Coverage: The percentage of branches that have been exercised by a test case suite.

Branch Testing: A test case design technique for a component in which test cases are designed
to execute branch outcomes.

Bug: See fault.

Bug Seeding: See error seeding.

Capture/Playback Tool: A test tool that records test input as it is sent to the software under
test. The input cases stored can then be used to reproduce the test at a later time.

Capture/Replay Tool: See capture/playback tool.

Client/Server: A computer network configuration in which the "client" is a desktop computing
device or program "served" by another networked computing device. Computers are integrated
over the network by an application, which provides a single system image. The client can
request information or applications from the server and the server provides the information or
application.

Code Coverage: An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed and
therefore may require additional attention.

Code-based Testing: Designing tests based on objectives derived from the implementation
(e.g., tests that execute specific control flow paths or use specific data items).

Component: A minimal software item for which a separate specification is available.

Component Testing: The testing of individual software components

Condition: An expression containing no Boolean operators. For example, the expression "IF A"
is a condition as it is a Boolean expression without Boolean operators which evaluates to either
True or False.

Control Flow: An abstract representation of all possible sequences of events in a program's
execution.

Control Flow Graph: The diagrammatic representation of the possible alternative control flow
paths through a component.

Control Flow Path: See path.

Correctness: The degree to which software conforms to its specification.

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test case suite.

Data Flow Coverage: Test coverage measure based on variable usage within the code.
Examples are data definition-use coverage, data definition P-use coverage, data definition C-use
coverage, etc.

Data Flow Testing: Testing in which test cases are designed based on variable usage within the
code.

Data Use: An executable statement where the value of a variable is accessed.


Dead Code: Executable machine code (not source code, although the machine code may have
resulted directly from the source code) which cannot, nor is intended to, be used in any
operational configuration of the target computer environment and is not traceable to a system
or software requirement. Dead code may be generated by compilers or linkers.

Debugging: The act of correcting errors during the development process

Decision: An expression comprising conditions and zero or more Boolean operators that is used
in a control construct (e.g. if...then...else; case statement) that determines the flow of
execution of the software program. A decision without a Boolean operator reduces to a
condition. For example, the expression "IF (A>B) or (B<C) THEN" is a decision, as is "FOR A>5
LOOP".

Decision Coverage: Every point of entry and exit within the software is invoked at least once,
and every decision in the software has taken all possible outcomes at least once. Source code
decision coverage, by definition, includes source level statement coverage, while instruction
decision coverage includes machine code decision coverage.

Decision/Condition Coverage: Every point of entry and exit within the software is invoked at
least once, every condition in a decision in the software has taken all possible outcomes at least
once, and every decision has taken all possible outcomes at least once.

Decision Outcome: The result of a decision (which therefore determines the control flow
alternative taken).

Design-based Testing: Designing tests based on objectives derived from the architectural or
detail design of the software (e.g., tests that execute specific invocation paths or probe the
worst case behavior of algorithms).

Desk Checking: The testing of software by the manual simulation of its execution.

Domain: The set from which values are selected.

Domain Testing: See equivalence partition testing.

Equivalence Class: An input domain ("class") for which each input yields the same
("equivalent") execution path regardless of which input from the class is chosen.

Equivalence Partition: See equivalence class.


Equivalence Partition Coverage: The percentage of equivalence classes generated for the
component, which have been exercised by a test case suite.

Equivalence Partition Testing: A test case design technique for a component in which test cases
are designed to execute representatives from equivalence classes.
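To illustrate, an input field accepting 1..100 partitions into three equivalence classes (below range, in range, above range), and one representative per class is enough to exercise each path; a hypothetical sketch:

```python
def classify(value, low=1, high=100):
    """Equivalence class of an input for a field accepting low..high
    (example range; the class names are illustrative)."""
    if value < low:
        return "invalid-low"
    if value > high:
        return "invalid-high"
    return "valid"

# One representative drawn from each equivalence class:
representatives = [-5, 50, 150]
print([classify(v) for v in representatives])
# ['invalid-low', 'valid', 'invalid-high']
```

Any other value from the same class (say, 2 instead of 50) would exercise the same execution path, which is why a single representative per class suffices.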

Error: A human action that produces an incorrect result. [IEEE]

Error Guessing: A test case design technique where the experience of the tester is used to
postulate what faults might occur, and to design tests specifically to expose them.

Error Seeding: The process of intentionally adding known faults to those already in a computer
program for the purpose of monitoring the rate of detection and removal, and estimating the
number of faults remaining in the program. [IEEE]

Executable Statement: A statement which, when compiled, is translated into object code,
which will be executed procedurally when the program is running and may perform an action
on program data.

Exhaustive Testing: A test case design technique in which the test case suite comprises all
combinations of input values and preconditions for component variables.

Expected Outcome: See predicted outcome.

Failure: Deviation of the software from its expected delivery or service. [Fenton]

Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.
[DO178B]

Feasible Path: A path for which there exists a set of input values and execution conditions
which causes it to be executed.

Functional Specification: The document that describes in detail the characteristics of the
product with regard to its intended capability. [BS 4778, Part2]

Functional Test Case Design: Test case selection that is based on an analysis of the specification
of the component without reference to its internal workings.
Functional Testing: Verification of an item by applying test data derived from specified
functional requirements without consideration of the underlying product architecture or
composition.

Incremental Testing: Integration testing where system components are integrated into the
system one at a time until the entire system is integrated.

Independence: Separation of responsibilities which ensures the accomplishment of objective
evaluation. After [DO178B].

Infeasible Path: A path which cannot be exercised by any set of possible input values.

Input: A variable (whether stored within a component or outside it) that is read by the
component.

Input Domain: The set of all possible inputs.

Input Value: An instance of an input.

Inspection: A group review quality improvement process for written material. It consists of two
aspects; product (document itself) improvement and process improvement (of both document
production and inspection) after [Graham]

Installability Testing: Testing concerned with the installation procedures for the system.

Instruction Coverage: Every machine code instruction in the software has been executed at
least once. Executing a machine instruction means that the instruction was processed.

Integration: The process of combining components into larger assemblies.

Integration Testing: Testing performed to expose faults in the interfaces and in the interaction
between integrated components.

Interface Testing: Integration testing where the interfaces between system components are
tested.

Output Domain: The set of all possible outputs.

Output Value: An instance of an output


Path: A sequence of executable statements of a component, from an entry point to an exit
point.

Path Coverage: The percentage of paths in a component exercised by a test case suite.

Path Sensitizing: Choosing a set of input values to force the execution of a component to take a
given path.

Path Testing: A test case design technique in which test cases are designed to execute paths of
a component.

Peer Review: A formal review of an item by a group of peers of the item's developer.

Performance Testing: Testing conducted to evaluate the compliance of a system or component
with specified performance requirements. [IEEE]

Portability Testing: Testing aimed at demonstrating the software can be ported to specified
hardware or software platforms.

Precondition: Environmental and state conditions which must be fulfilled before the
component can be executed with a particular input value.

Progressive Testing: Testing of new features after regression testing of previous features.
[Beizer]

Regression Testing: Re-execution of tests which have previously been executed correctly, in
order to verify a subsequent revision of that same product.

Requirements-based Testing: Designing tests based on objectives derived from requirements
for the software component (e.g., tests that exercise specific functions or probe the non-
functional constraints such as performance or security). See functional test case design.

Review: A process or meeting during which a work product, or set of work products, is
presented to project personnel, managers, users or other interested parties for comment or
approval. [IEEE]

State Transition: A transition between two allowable states of a system or component.

State Transition Testing: A test case design technique in which test cases are designed to
execute state transitions.
Statement: An entity in a programming language which is typically the smallest indivisible unit
of execution.

Statement Coverage: Every statement in the software has been executed at least once.
Executing a statement means that the statement was encountered and evaluated during
testing.

Statement Testing: A test case design technique for a component in which test cases are
designed to execute statements.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Testing: Testing of an object without execution on a computer.

Statistical Testing: A test case design technique in which a model is used of the statistical
distribution of the input to construct representative test cases.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of
its specified requirements. [IEEE]

Structural Coverage: Coverage measures based on the internal structure of the component.

Structural Coverage Deviations: An acceptable rationale associated with an unexecuted
element which is not intended to be executed in any deliverable configuration, and for which
analysis shows that the element cannot be inadvertently executed.

Structural Test Case Design: Test case selection that is based on an analysis of the internal
structure of the component.

Structural Testing: See structural test case design.

Structured Basis Testing: A test case design technique in which test cases are derived from the
code logic to achieve 100% branch coverage.

Structured Walkthrough: See walkthrough.


Stub: Special code segments, or a subset of the final intended code which will simulate the
interface of that code to other entities. Used to prototype, simulate, or test in advance of
component completion.

System Testing: The process of testing an integrated system to verify that it meets specified
requirements. [Hetzel]

Test Automation: The use of software to control the execution of tests, the comparison of
actual outcomes to predicted outcomes, the setting up of test preconditions, and other test
control and test reporting functions.

Test Case: A set of inputs, execution preconditions, and expected outcomes developed for a
particular objective, such as to exercise a particular program path or to verify compliance with a
specific requirement. After [IEEE, do178b]

Test Case Design Technique: A method used to derive or select test cases.

Test Case Suite: A collection of one or more test cases for the software under test.

Test Comparator: A test tool that compares the actual outputs produced by the software under
test with the expected outputs for that test case.

Test Completion Criterion: A criterion for determining when planned testing is complete,
defined in terms of a test measurement technique.

Test Coverage: See coverage.

Test Driver: A program which sets up an environment and calls a module for test.

Test Environment: A description of the hardware and software environment in which the tests
will be run, and any other software with which the software under test interacts when under
test including stubs and test drivers.

Test Execution: The processing of a test case suite by the software under test, producing an
outcome.

Test Execution Technique: The method used to perform the actual test execution, e.g. manual,
capture/playback tool, etc.
Test Generator: A program that generates test cases in accordance to a specified strategy or
heuristic after [Beizer].

Test Harness: See Test Driver.

Test Measurement Technique: A method used to measure test coverage items.

Test Outcome: See outcome.

Test Plan: A record of the test planning process detailing the degree of tester independence,
the test environment, the test case design techniques and test measurement techniques to be
used, and the rationale for their choice.

Test Procedure: A document providing detailed instructions for the execution of one or more
test cases.

Test Records: For each test, an unambiguous record of the identities and versions of the
component under test, the test specification, and actual outcome.

Test Script: Commonly used to refer to the automated test procedure used with a test harness.

Testing: The process of exercising software to verify that it satisfies specified requirements and
to detect errors. After [DO178B].

Thread Testing: A variation of top-down testing where the progressive integration of
components follows the implementation of subsets of the requirements, as opposed to the
integration of components by successively lower levels.

Top-down Testing: An approach to integration testing where the component at the top of the
component hierarchy is tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components. The process is repeated until
the lowest level components have been tested.

Unit Testing: See component testing.

Usability Testing: Testing the ease with which users can learn and use a product.

Validation: The determination of correctness of an item based upon requirements, and the
sanctity of those requirements.

Verification: The demonstration of consistency, completeness, and correctness of an item.


Walk-Through: A manual analysis technique in which the item's developer describes the item's
structure and logic to a group of peers.

White Box Testing: Verification of an item by applying test data derived from analysis of the
item's underlying product architecture and composition.

What is VBScript?

 VBScript (short form of Visual Basic Script Edition), is a subset of Visual Basic Programming
Language.
 VBScript is a Microsoft proprietary language that does not work outside of Microsoft
programs.
 VBScript is a scripting language (light weight programming language) that can be developed
in any text editor.
 The component incorporation capabilities of VBScript introduce some special considerations
and trade-offs in page design.
 One of the powerful benefits of VBScript is its capability to ensure the validity of the data
the user enters.

Note: After you have written your VBScript code, you need Internet Explorer to run it.
Firefox, Opera, Netscape, etc. will not be able to run VBScript. It is not possible to read
and write files or databases in the normal fashion in VBScript.

VBScript Data Types

 A variant is a special type of variable that can store a wide variety of data types.
 Variants are not restricted to one type of data (such as integers, for example).
 The variant is used most often to store numbers and strings, but it can store a variety of
other types of data. These data types are often called subtypes because the variant can
represent them all internally.

The following table shows subtypes that the variant uses to represent the data that can be
stored in a variable:

Subtype — Description

Empty — The empty subtype is used for variables that have been created but not yet assigned
any data. Numeric variables are assigned 0 and string variables are assigned "" in this
uninitialized condition.

Null — The null subtype refers to variables that have been set to contain no data. Unlike the
empty subtype, the programmer must specifically set a variable to null.

Boolean — Contains either True or False.

Byte — The byte data type can store an integer value between 0 and 255. It is used to preserve
binary data and store simple data that doesn't need to exceed this range.

Integer — The integer data type is a number that cannot contain a decimal point. Integers can
range from -32,768 to +32,767.

Currency — Ranges from -922,337,203,685,477.5808 to 922,337,203,685,477.5807.

Long — Variables of the long data type are also integers, but they have a much higher range:
-2,147,483,648 to 2,147,483,647, to be exact.

Single — The single subtype represents floating-point, or decimal, values, which means that
numbers represented by this subtype include decimal points. The range jumps to a whopping
-3.4E38 to -1.4E-45 for negative numbers and 1.4E-45 to 3.4E38 for positive numbers (these
numbers are expressed in scientific notation).

Double — Double is another floating-point data type; the range for the double is -1.8E308 to
-4.9E-324 for negative numbers and 4.9E-324 to 1.8E308 for positive numbers.

Date — Contains a number that represents a date between January 1, 100 and December 31,
9999.

String — The string data type is used to store alphanumeric data - that is, numbers, letters,
and symbols.

Object — The object data type is a subtype used to reference entities such as control or
browser objects within a VBScript application or another application.

Error — The error subtype is used for error handling and debugging purposes.

Variables
 A variable is a virtual container in the computer's memory that's used to hold information.
 A computer program can store information in a variable and then access that information
later by referring to the variable's name.

Naming Restrictions

1) Must begin with a letter.


2) Cannot contain an embedded period.
3) Must not exceed 255 characters.
4) Must be unique in the scope in which it is declared.

Note: In VBScript, all variables are of type Variant that can store different types of data.

Declaring Variables

The variables can be declared with the Dim, Public or the Private statement. Like this:

Dim MyNumber

Dim MyArray(9)

Public statement variables are available to all procedures in all scripts

Public MyNumber

Public MyArray(9)

Public MyNumber, MyVar, YourNumber

Private statement variables are available to only the script in which they are declared.

Private MyNumber

Private Myarray(9)
Private MyNumber, MyVar, YourNumber

Note: Use Option Explicit to avoid incorrectly typing the name of an existing variable in code
where the scope of the variable is not clear. If used, the Option Explicit statement must appear
in a script before any other statement.

Working with Constants

A constant is a variable within a program that never changes in value. Using the const
statement, string or numeric constants with meaningful names can be created. For example:

Const MyString = "This is my string."

Const MyAge = 49

Working with Operators

When several operations occur in an expression, each part is evaluated and resolved in a
predetermined order called operator precedence. Parentheses can be used to override the
order of precedence and force some parts of an expression to be evaluated before other parts.
Operations within parentheses are always performed before those outside. Within
parentheses, however, normal operator precedence is maintained.

When expressions contain operators from more than one category, arithmetic operators are evaluated
first, comparison operators are evaluated next, and logical operators are evaluated last. Comparison
operators all have equal precedence; that is, they are evaluated in the left-to-right order in which they
appear. Arithmetic and logical operators are evaluated in the following order of precedence:

Arithmetic                           Comparison                      Logical

Negation (-)                         Equality (=)                    Not
Exponentiation (^)                   Inequality (<>)                 And
Multiplication and division (*, /)   Less than (<)                   Or
Integer division (\)                 Greater than (>)                Xor
Modulus arithmetic (Mod)             Less than or equal to (<=)      Eqv
Addition and subtraction (+, -)      Greater than or equal to (>=)   Imp
String concatenation (&)             Is

Scope and Lifetime of Variables

A procedure is a block of code that accomplishes a specific goal. When a variable is created, it
can be used within a specific procedure or share it among all the procedures in the script. The
availability a variable has within its environment is referred to as the scope of a variable. When
a variable is declared inside a procedure, it has local scope and can be referenced only while
that procedure is executed. Local-scope variables are often called procedure-level variables
because they exist only in procedures. If a variable is declared outside a procedure, it
automatically has script-level scope and is available to all the procedures in your script.

The lifetime of a variable depends upon how long it exists. The lifetime of a script-level variable
extends from the time it is declared until the time the script is finished running.

A procedure level variable can only be accessed within that procedure. When the procedure
exits, the variable is destroyed.

Note: In order to inquire about the subtype of a variable, use the VarType function. This
function takes one argument: the variable name. The function then returns an integer value
that corresponds to the type of data storage VBScript is using for that variable.

Array Variables
 An array is a type of variable that ties together a series of data items and places them in
a single variable.
 An Array variable uses parentheses () following the variable name.

Creating Arrays

An array created with the Dim keyword exists as long as the procedure does and is destroyed
once the procedure ends. Two types of arrays can be created using VBScript, namely fixed
arrays and dynamic arrays. Fixed arrays have a specific number of elements, whereas
dynamic arrays can vary in the number of elements depending on how many are stored in the
array.

Fixed-Length Arrays

Fixed-length arrays can be created using the following syntax:


Dim Array_Name(count - 1) 'An array of count elements, indexed 0 through count - 1

Dim names(2) 'An array of three elements, indexed 0 through 2

The data can be assigned to the elements of an array as follows:

Dim names(2)

names(0) = "Tom"

names(1) = "James"

names(2) = "Harry"

The data can be retrieved from an array using an index into the particular array element. Like
this:

Friend = names (0)

An array can have up to 60 dimensions. Multiple dimensions are declared by separating the
numbers in the parentheses with commas. In the following example, MyTable is a two-
dimensional array with row indices 0 through 4 and column indices 0 through 6 (that is, 5 rows
and 7 columns):

Dim MyTable(4, 6)

In a two-dimensional array, the first subscript always refers to the rows and the second
subscript to the columns.
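
Elements of a multi-dimensional array are addressed with one subscript per dimension. For example (the values are illustrative):

Dim MyTable(4, 6)

MyTable(0, 0) = "Top left"

MyTable(4, 6) = "Bottom right"

msgbox MyTable(4, 6) 'Displays "Bottom right"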

Dynamic Arrays

A dynamic array is an array whose size can change at run time. The array is initially declared
within a procedure using either the Dim statement or the ReDim statement. For example:

Dim MyArray ()

ReDim AnotherArray ()

A subsequent ReDim statement resizes the array; the Preserve keyword preserves the
contents of the array as the resizing takes place.

ReDim MyArray(10)

……….
ReDim Preserve MyArray (20)

Note: The size or number of dimensions is not placed inside the parentheses when a dynamic
array is declared. There is no limit to the number of times a dynamic array can be resized.

Procedures

In VBScript there are two kinds of procedures: the Sub procedure and the Function procedure.

A Sub Procedure:

 Is a series of VBScript statements (enclosed by the Sub and End Sub statements) that
perform actions but don’t return a value.
 Can take arguments that are passed to it by a calling procedure.
 Without arguments, must include an empty set of parentheses ().

Sub mysub ()

statement 1

statement 2

statement 3

..

statement n

End Sub
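
A concrete example (the procedure name and message are illustrative):

Sub Greet(name)

msgbox "Hello, " & name

End Sub

Greet "Tom" 'Displays "Hello, Tom"

Call Greet("James") 'Equivalent call using the Call keyword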

A Function Procedure
• Is a series of statements, enclosed by the Function and End
Function statements.
• Returns a value by assigning value to its name in one or more statements of the
procedure.
• Can take arguments passed by the calling procedures.
• Without arguments must include an empty set of parentheses ().

Function myfunction ()

statement 1
statement 2

statement 3

..

statement n

myfunction = value

End Function

Note:

The return type of Function is always a Variant.
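
A concrete example (the function name is illustrative):

Function Add(a, b)

Add = a + b 'The return value is assigned to the function name

End Function

Dim total

total = Add(2, 3) 'total contains 5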

If Statement

 Conditionally executes a group of statements, depending on the value of an expression.

If condition Then statements [Else elsestatements]

‘Or, the block form syntax:

If condition Then

[statements]

[ElseIf condition-n Then

[elseifstatements]]

[Else
[elsestatements]]

End If

The following examples illustrate the use of If … Then statements:

a) If I=10 Then msgbox "Hello"

b) If I=10 Then

msgbox "Hello"

Else

msgbox "Goodbye"

End If

c) If avg>=75 Then

msgbox "Distinction"

ElseIf avg>=60 Then

msgbox "First Class"

ElseIf avg>=50 Then

msgbox "Second Class"

ElseIf avg>=35 Then

msgbox "Pass"

Else

msgbox "Fail"

End If

Select Case Statement

 Executes one of several groups of statements, depending on the value of an expression.

Select case testexpression


[case expressionlist-n

[statements-n]] . . .

[case Else

[elsestatements-n]]

End Select

The following example illustrates the use of the Select Case statement.

Dim payment

payment = "Cash"

Select Case payment

case "Cash"

msgbox "You are going to pay cash"

case "Visa"

msgbox "You are going to pay with Visa"

case Else

msgbox "Unknown method of payment"

End Select

For . . . Next Statement

 Repeats a group of statements a specified number of times.

For counter = start To end [Step step]

[statements]

[Exit For]
[statements]

Next

Exit For can only be used within a For Each … Next or For … Next control structure to provide
an alternate way to exit. Any number of Exit For statements may be placed anywhere in the
loop.
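
For example, the loop below uses Exit For to stop as soon as the counter reaches 5 (the values are illustrative):

Dim i

For i = 1 To 10

If i = 5 Then Exit For

Next

msgbox i 'Displays 5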

In the example below, the counter variable (i) is increased by one each time the loop repeats.

For I =1 to 10

msgbox i

Next

For Each...Next statement

 Repeats a block of statements for each element in an array or collection.

For each element in array

[statements]

[Exit For]

[statements]

Next [ element ]

The following example illustrates the use of For Each … Next statement:

Dim cars(2)

cars(0)="Volvo"

cars(1)="Saab"

cars(2)="BMW"

For Each x in cars

msgbox x
Next

Do … Loop statement

Repeats a block of statements while a condition is True or until a condition becomes True.

Do [{while | until} condition]

[statements]

[Exit Do]

[statements]

Loop

Or use this syntax

Do

[statements]

[Exit Do]

[statements]

Loop [{while | until} condition]

The following examples illustrate the use of Do … Loop statement:

a) Dim i

i=0

Do while i<=10

msgbox i

i=i+1

Loop

b) Dim i
i=0

Do

msgbox i

i=i+1

Loop while i<=10

c) Dim i

i=20

Do until i=10

i=i-1

Loop

d) Dim i

i=0

Do

i=i+1

Loop Until i=10

Exit Do can only be used within a Do … Loop control structure to provide an alternate way
to exit a Do … Loop. Any number of Exit Do statements may be placed anywhere in the loop.
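
For example, the loop below uses Exit Do to leave an otherwise endless loop (the values are illustrative):

Dim i

i = 0

Do While True

i = i + 1

If i = 5 Then Exit Do

Loop

msgbox i 'Displays 5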

While … Wend statement

 Executes a series of statements as long as a given condition is True.

While condition

[statements]

Wend
The following example illustrates the use of the While … Wend statement:

Dim counter

counter = 0

while counter < 10

counter = counter +1

msgbox counter

Wend

Note: The Do … Loop statement provides a more structured and flexible way to perform
looping.

Date Function

 Returns the current system date.

Date
The following example uses the Date function to return the current system date:

Dim myDate

myDate = Date ‘myDate contains the current system date.

Time Function

 Returns a Variant of subtype Date indicating the current system time.

Time
The following example uses the Time function to return the current system time:

Dim myTime

myTime = Time 'myTime contains the current system time

DateDiff Function
 Returns the number of intervals between two dates.
DateDiff(interval, date1, date2[, firstdayofweek[, firstweekofyear]])

Arguments         Description

interval          Required. String expression that is the interval used to
                  calculate the difference between date1 and date2. It can
                  take the following values: yyyy (Year), q (Quarter),
                  m (Month), y (Day of year), d (Day), w (Weekday),
                  ww (Week of year), h (Hour), n (Minute), s (Second).

date1, date2      Required. Date expressions; the two dates used in the
                  calculation.

firstdayofweek    Optional. Constant that specifies the first day of the
                  week.

firstweekofyear   Optional. Constant that specifies the first week of the
                  year.
The following example uses the DateDiff function to display the number of months between a
given date and today:

msgbox DateDiff("m", Date, "12/31/2002") 'Displays the number of months from today until 12/31/2002

DateAdd Function

 Returns a date to which specified time interval has been added.

DateAdd(interval,number,date)

Arguments   Description

interval    Required. String expression that is the interval to be added.
            It can take the following values: yyyy (Year), q (Quarter),
            m (Month), y (Day of year), d (Day), w (Weekday),
            ww (Week of year), h (Hour), n (Minute), s (Second).

number      Required. Numeric expression that is the number of intervals
            to be added.

date        Required. Variant or literal representing the date to which
            the interval is added.

The following example uses the DateAdd function to add a month to January 31, 2000:

msgbox DateAdd("m", 1, "31-Jan-00") 'Output is 2/29/00

Conversion Functions

Asc Function

 Returns the ANSI character code corresponding to the first letter in a string.

Asc(string)

In the following example, Asc returns the ANSI character code of the first letter of each string:

Dim I

I = Asc("A") 'Returns 65

I = Asc("Apple") 'Returns 65

CInt Function

 Returns an expression that has been converted to a Variant of subtype Integer.

CInt (expression)

The following example uses the CInt function to convert a value to an Integer:

Dim I, k

I = 9.888

k=CInt (i) ‘k contains 10

CDate Function

 Returns an expression that has been converted to a Variant of subtype Date.

CDate (date)

The following example uses the CDate function to convert a string to a date.

Dim MyDate, MyShortDate

MyDate = "October 19, 1962" 'Define a date

MyShortDate = CDate(MyDate) 'MyShortDate contains #10/19/1962#

Tip: Use the IsDate function to determine if date can be converted to a date or time.

Note: In general, hard coding dates and times as strings (as shown in this example) is not
recommended. Use date and time literals (such as #10/19/1962#, #4:45:23 PM#) instead.

CStr Function

 Returns an expression that has been converted to a Variant of subtype String.

CStr(expression)

The following example uses the CStr function to convert a numeric value to a String :

Dim I, j

I = 10

j = CStr(I) 'j contains "10"

Hex Function

 Returns a string representing the hexadecimal value of a number.

Hex(number)

The following example uses the Hex function to return the hexadecimal value of a number:

Dim MyHex

MyHex = Hex(10) 'Returns "A"

MyHex = Hex(459) 'Returns "1CB"

Format Functions

FormatCurrency Function

 Returns an expression formatted as a currency value using the currency symbol defined in
the system control panel.

FormatCurrency (Expression [, NumDigitsAfterDecimal[, IncludeLeadingDigit[,


UseParForNegativeNumbers[, GroupDigits]]]])

Arguments                  Description

Expression                 Required. Expression to be formatted.

NumDigitsAfterDecimal      Optional. Numeric value indicating how many places
                           to the right of the decimal are displayed. Default
                           value is -1.

IncludeLeadingDigit        Optional. Tristate constant that indicates whether
                           or not a leading zero is displayed for fractional
                           values.

UseParForNegativeNumbers   Optional. Tristate constant that indicates whether
                           or not to place negative values within parentheses.

GroupDigits                Optional. Tristate constant that indicates whether
                           or not numbers are grouped using the group
                           delimiter specified in the computer's regional
                           settings.

The following example uses the FormatCurrency function to format the expression as a
currency and assign it to MyCurrency:

Dim MyCurrency

MyCurrency = FormatCurrency(1000) 'MyCurrency contains $1,000.00

FormatNumber Function

 Returns an expression formatted as a number.

FormatNumber (Expression [, NumDigitsAfterDecimal[, IncludeLeadingDigit[,

UseParForNegativeNumbers[, GroupDigits]]]])
Arguments                  Description

Expression                 Required. Expression to be formatted.

NumDigitsAfterDecimal      Optional. Numeric value indicating how many places
                           to the right of the decimal are displayed. Default
                           value is -1.

IncludeLeadingDigit        Optional. Tristate constant that indicates whether
                           or not a leading zero is displayed for fractional
                           values.

UseParForNegativeNumbers   Optional. Tristate constant that indicates whether
                           or not to place negative values within parentheses.

GroupDigits                Optional. Tristate constant that indicates whether
                           or not numbers are grouped using the group
                           delimiter specified in the computer's regional
                           settings.

The following example uses the FormatNumber function to format a number to have four
decimal places:

Dim MyAngle,MySecant,MyNumber

MyAngle = 1.3 ‘Define angle in radians

MySecant = 1/Cos(MyAngle) ‘Calculate secant

MyNumber =FormatNumber(MySecant,4) ‘Format secant to four decimal places

FormatPercent Function

 Returns an expression formatted as a percentage (multiplied by 100) with a trailing %
character.
FormatPercent (Expression [, NumDigitsAfterDecimal[, IncludeLeadingDigit[,

UseParForNegativeNumbers[, GroupDigits]]]])

Arguments                  Description

Expression                 Required. Expression to be formatted.

NumDigitsAfterDecimal      Optional. Numeric value indicating how many places
                           to the right of the decimal are displayed. Default
                           value is -1.

IncludeLeadingDigit        Optional. Tristate constant that indicates whether
                           or not a leading zero is displayed for fractional
                           values.

UseParForNegativeNumbers   Optional. Tristate constant that indicates whether
                           or not to place negative values within parentheses.

GroupDigits                Optional. Tristate constant that indicates whether
                           or not numbers are grouped using the group
                           delimiter specified in the computer's regional
                           settings.

The following example uses the FormatPercent function to format an expression as a percent:

Dim MyPercent

MyPercent = FormatPercent(2/32) ‘MyPercent contains 6.25%

FormatDateTime Function

 Returns an expression formatted as a date or time.

FormatDateTime(Date[, NamedFormat])

Arguments     Description

Date          Required. Date expression to be formatted.

NamedFormat   Optional. Numeric value that indicates the date/time format
              used.

The following example uses the FormatDateTime function to format the expression as a long
date:

msgbox "The current date is " & FormatDateTime(Date, 1)

Math Functions

Abs Function

 Returns the absolute value of a number.

Abs(number)

The following example uses the Abs function to compute the absolute value of a number:

Dim x

x=Abs(-10) ‘ x contains 10

Cos Function

 Returns the cosine of an angle.

Cos(number)

The following example uses the Cos function to return the cosine of an angle :

Dim MyAngle,MySecant

MyAngle =1.3 ‘Define angle in radians.

MySecant=1/Cos(MyAngle) ’Calculate Secant

Int Function

 Returns the Integer portion of a number.

Int(number)

The following example illustrates how the Int function returns the Integer portion of a number:

Dim MyNumber

MyNumber = Int(99.8) ‘ MyNumber contains 99


Sqr Function

 Returns the square root of a number.

Sqr(number)

The following example uses the Sqr function to calculate the square root of a number:

Dim I

I = Sqr(4) ‘I contains 2

Array Function

 Returns a Variant containing an array.

Array(arglist)

In the following example, the first statement creates a variable named x. The second statement
assigns an array to variable x. The last statement assigns the value contained in the second
array element to another variable.

Dim x, y

x = Array(10, 20, 30)

y = x(1) 'y contains 20

Split Function

 Returns a zero-based, one-dimensional array containing a specified number of substrings.

Split (expression [, delimiter [, count [, compare]]])

Arguments    Description

expression   Required. String expression containing substrings and
             delimiters.

delimiter    Optional. String character used to identify substring limits.
             If omitted, the space character (" ") is assumed to be the
             delimiter.

count        Optional. Number of substrings to be returned; -1 indicates
             that all substrings are returned.

compare      Optional. Numeric value indicating the kind of comparison to
             use when evaluating substrings.

The following example uses the Split function to return an array from a string.

Dim MyDate, MyArray

MyDate = "7/10/06"

MyArray = Split(MyDate, "/")

'MyArray(0) contains "7"

'MyArray(1) contains "10"

'MyArray(2) contains "06"

Filter Function

 Returns a zero-based array containing a subset of a string array based on specified filter
criteria.

Filter (InputStrings, Value [, Include [, Compare]])

Arguments      Description

InputStrings   Required. One-dimensional array of strings to be searched.

Value          Required. String to search for.

Include        Optional. Boolean value indicating whether to return
               substrings that include or exclude Value. The default is
               True.

Compare        Optional. Numeric value indicating the kind of string
               comparison to use.

The following example uses the Filter function to return the array containing the search criteria
"Mon":

Dim MyIndex

Dim MyArray(3)

MyArray(0) = "Sunday"

MyArray(1) = "Monday"

MyArray(2) = "Tuesday"

MyIndex = Filter(MyArray, "Mon") 'MyIndex(0) contains "Monday"

String Function

 Returns a repeating character string of the length specified.

String (number ,character)

The following example uses the String function to return repeating character strings of the
length specified:

Dim MyString

MyString = String(5, "@") 'MyString contains "@@@@@"

InStr Function

 Returns the position of the first occurrence of one String within another.

InStr([start, ]string1,string2[, compare])

Arguments   Description

start       Optional. Numeric expression that sets the starting position
            for each search. If omitted, the search begins at the first
            character position.

string1     Required. String expression being searched.

string2     Required. String expression searched for.

compare     Optional. Numeric value indicating the kind of comparison to
            use when evaluating substrings.

The following example uses InStr function to search a string:

Dim SearchText, MyPos

SearchText = "This is a beautiful day!"

MyPos = InStr(SearchText, "his") 'MyPos contains 2

Trim Function

 Removes all leading and trailing white-space characters from the string.

Trim (String)

The following example uses the Trim function to remove the spaces on both sides of a string:

Dim txt

txt = "   This is a beautiful day!   "

msgbox Trim(txt) 'Output: "This is a beautiful day!"

StrReverse Function

 Returns a string in which the character order of a specified string is reversed.

StrReverse(String)

The following example uses the StrReverse function to return a string in reverse order:

Dim MyStr

MyStr = "hello"

msgbox StrReverse(MyStr) 'Output: "olleh"

Other Functions

RGB Function

Returns a whole number representing an RGB color value.


RGB (red, green, blue)

Arguments   Description

red         Required. Number in the range 0-255 representing the red
            component of the color.

green       Required. Number in the range 0-255 representing the green
            component of the color.

blue        Required. Number in the range 0-255 representing the blue
            component of the color.

msgbox RGB(255, 0, 0) 'Output: 255

IsNull Function

 Returns a Boolean value that indicates whether a specified expression contains no valid
data (Null).

IsNull(expression)

The following example uses the IsNull function to determine whether a variable contains a Null:

Dim MyVar, MyCheck

MyCheck = IsNull(MyVar) 'Returns False (an uninitialized variable is Empty, not Null)

IsEmpty Function

 Returns a Boolean value indicating whether a variable has been initialized.

IsEmpty (expression)

The following example uses the IsEmpty function to determine whether a variable has been
initialized:

Dim MyVar, MyCheck

MyCheck = IsEmpty(MyVar) ' Returns True.

MsgBox Function
 Displays a message in a dialog box, waits for the user to click a button, and returns a value
indicating which button the user clicked.

MsgBox(prompt[, buttons][,title][,helpfile, context])

Arguments   Description

prompt      Required. String expression displayed as the message in the
            dialog box. The maximum length of prompt is approximately 1024
            characters, depending on the width of the characters used.

buttons     Optional. Numeric expression that is the sum of values
            specifying the number and type of buttons to display, the
            icon style to use, the identity of the default button, and
            the modality of the message box.

title       Optional. String expression displayed in the title bar of the
            dialog box.

helpfile    Optional. String expression that identifies the Help file to
            use to provide context-sensitive Help for the dialog box.

context     Optional. Numeric expression that is the Help context number
            assigned by the Help author to the appropriate Help topic.

The following example uses the MsgBox function to display a message box and return a value
describing which button was clicked:
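
A sketch of such an example (the prompt and title text are illustrative):

Dim MyChoice

MyChoice = MsgBox("Do you want to continue?", vbYesNo, "Confirm") 'MyChoice contains 6 (vbYes) or 7 (vbNo)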

InputBox Function

 Displays a prompt in a dialog box, waits for the user to input text or click a button, and
returns the contents of the text box.
InputBox(prompt[,title][,default][,xpos][,ypos][,helpfile,context])

Arguments   Description

prompt      Required. String expression displayed as the message in the
            dialog box. The maximum length of prompt is approximately 1024
            characters, depending on the width of the characters used.

title       Optional. String expression displayed in the title bar of the
            dialog box.

default     Optional. String expression displayed in the text box as the
            default response if no other input is provided.

xpos        Optional. Numeric expression that specifies, in twips, the
            horizontal distance of the left edge of the dialog box from
            the left edge of the screen.

ypos        Optional. Numeric expression that specifies, in twips, the
            vertical distance of the upper edge of the dialog box from
            the top of the screen.

helpfile    Optional. String expression that identifies the Help file to
            use to provide context-sensitive Help for the dialog box.

context     Optional. Numeric expression that identifies the Help context
            number assigned by the Help author to the appropriate Help
            topic.

The following example uses the InputBox function to display an input box and assign the string
to the variable Input:

Dim Input

Input = InputBox("Enter your name")

msgbox ("You entered: " & Input)
