Manual Testing Book
Table of Contents
“Every program does something right, it just may not be the thing we want it to do”
Testing is not a newly added activity in software development; rather, it was ignored for quite
some time as an expensive and not-so-important activity.
As the industry grew from small to big, software's inherent complexity gradually increased,
with more and more functionality added every now and then, and delivering a correct product on
time and within cost began to trouble the development companies.
To overcome this problem, a process is needed to make sure that you are delivering a product
of high quality that works correctly, and to reduce the costs unnecessarily spent on correcting
problems during operational use and the inconvenience caused to the customer. That process is
nothing but testing.
1.1 What is testing?
Testing is a process to:
1. Uncover the errors or defects present in the application prior to its deployment at the
customer's place.
2. Identify and describe the actual behavior of the application under test.
3. Assess the quality of the software to determine whether it meets customer
expectations.
Testing is not done to show that the software works correctly; it is done to identify defects in
the given software. Testing is meant to identify defects, not to correct them. Testing helps in
assessing the correctness, completeness, reliability and quality of software.
1. Business Analyst
a. Meeting Client and Gathering Requirements from the client
b. System Documentation
c. Involved in preparation of Test Plan
d. Test conditions development
2. Project Manager
a. Leading the Project
b. Conducting meetings with the client
c. Oversees the Life Cycle and Staff
d. Managing the complete project
3. Team Lead
a. Involved in meetings with the client
b. Preparation of Test Plan
c. Assigning the work to the Team members
4. Software Tester
a. Writing the Test Cases
b. Executing the Test Cases
c. Evaluating test execution
d. Raising the Defects
e. Closing the Defects
f. Sending weekly status reports to the Team Lead
In order to become a good tester, one should have the following characteristics:
A good test engineer has a 'test to break' attitude, an ability to take the point of view of
the customer, a strong desire for quality, and an attention to detail.
Judgement skills are needed to assess high-risk areas of an application on which to focus
testing efforts when time is limited.
“The earlier you identify a defect, the lesser the correction time and cost”
Testing is sometimes incorrectly thought of as an “after the fact” activity, performed after
programming or coding is done for a product. Instead, it should be conducted at every stage or
phase of development to effectively and efficiently test the correctness and consistency of the
product at every step.
In order to know when testing should occur in a software development process, it is mandatory
to have knowledge of the various software process models.
This chapter provides detailed information about the stages or phases through which a
software development process passes, called the “Software Development Life Cycle”, and
when testing should occur.
2.1 What is “Software Development Life Cycle” (SDLC)?
Every living creature in this world has a well-defined life cycle. Certain activities happen during
a specific timeframe, and the order of these activities follows a clear-cut path.
For example, if we take the typical life cycle of a butterfly: first the egg is formed, then it becomes a
larva, then a pupa, and then it becomes a full-fledged butterfly. In the same way, if we take the
life of a human being: first the baby is formed in the uterus, it develops limbs and grows, it
is born, it crawls, walks, runs etc., and then at old age it dies. These things happen in every
typical human being.
Software also has a life and follows a typical life cycle depending on its nature. The
software passes through many phases before it is developed, undergoes many
modifications after it becomes productive, and one day becomes obsolete too.
From the inception of an idea for a product until that product goes out of use, the
software development process passes iteratively through a set of phases called the “Software
Life Cycle”.
The life cycle begins when an application is first conceived and ends when it is no longer in use.
The following sections describe the various stages in the Software Development Life Cycle.
1. Requirements Phase
2. Analysis Phase
3. Design Phase
4. Development or Coding Phase
5. Testing Phase
6. Maintenance Phase
1. Requirements Phase:
In a new software development process, the organization's management receives
proposals for the software to be developed from different vendors. After receiving
the proposal, the vendor's Project Manager prepares the PIN (Project Initiation Note)
document with an overall plan of the required resources.
If the PIN document is reasonable, the Business Analyst or Functional Lead gathers the
project requirements from the customer. The gathered requirements are recorded in the
BRS (Business Requirement Specification)/CRS (Customer Requirements Specification)
document.
2. Analysis Phase:
After gathering the requirements, the Business Analyst and Project Manager conduct
analysis of those requirements to prepare the SRS (Software Requirements Specification).
After completion of the BRS and SRS documents, the Business Analyst reviews those
documents again for completeness and correctness. In this review, the factors listed
below are verified in the BRS and SRS.
3. Design Phase:
After completion of the BRS and SRS documents and their reviews, the BA, PM and the senior
programmers prepare the project plan with a feasible schedule, hardware (H/W)
requirements, software (S/W) requirements etc.
After finalization of the detailed project plan, the senior programmers prepare the HLDs (High-
Level Designs) and LLDs (Low-Level Designs).
The HLD indicates the overall architecture or blueprint of the complete software. This
design is also known as External Design (or) Architectural Design.
After completing the HLD, the senior programmers prepare the Low-Level
Designs. Each LLD captures the in-depth logic of one module, function or unit.
In a project design, the HLD is at system level and the LLDs are at module or unit level. Programmers
use the LLDs for coding and the HLD for integrating the coded programs.
After completing the HLD and LLDs, the senior programmers conduct reviews
on those documents, verifying the factors listed below.
5. Testing Phase:
After receiving the software build from programmers, the testing team starts their task
of software testing to validate customer requirements and customer expectations.
6. Maintenance Phase:
After the software is released and training on it is provided to the customer, the
project manager establishes a “Change Control Board” (CCB) team with some
representatives. If any changes are required, the CCB representatives receive the software
change request from the customer.
2.1.2 Common problems in the software development process:
Poor requirements - If requirements are unclear, incomplete, too general, and not
testable, there will be problems.
Unrealistic schedule - If too much work is crammed in too little time, problems are
inevitable.
Inadequate testing - No one will know whether or not the program is fully tested until
the customer complains or the system crashes.
Featuritis - Requests to pile on new features after development is underway; extremely
common.
Miscommunication - If developers don't know what's needed or customers have
erroneous expectations, problems are guaranteed.
Design Phase
• Determine consistency of design with requirements.
• Determine adequacy of design.
• Generate structural and functional test conditions.
Testing Phase
• Determine adequacy of the test plan.
• Test application system.
Installation Phase
• Place tested system into production.
Maintenance Phase
• Modify and retest.
2.1.4 Types of Development Projects
Type Characteristics Test Tactic
2. Test Planning: During this phase the Test Strategy is defined and the Test Bed is
created. The plan should identify:
3. Test Environment Setup: A separate testing server is prepared on which the application
will be tested. It is an independent testing environment.
4. Designing Test Scenarios: The Team Lead identifies and writes the Test Scenarios. The
Project Manager reviews the Test Scenarios written by the Team Lead.
5. Writing the Test Cases: Based on the Test Scenarios, the QAs or Testers write the
Test Cases. Once a Tester finishes writing the Test Cases, he sends them to the Team
Lead and the concerned people in the Development Team for review. This is called the
“Pre-Review”. They review the Test Cases and provide their comments, and the Tester
incorporates the comments given by the Team Lead and the Development Team.
After incorporating those comments, the Tester sends the modified Test Cases to the
Team Lead and the Development Team again, and they review the Test Cases once
more. This is called the “Post-Review”. Only after the Post-Review are the Test Cases
finalized for testing.
6. Test Case Execution: Now the Testers execute the Test Cases and report any errors
found to the Development Team.
7. Defect Tracking: Raised defects are tracked using tools like Bugzilla, Quality
Center, TOM (Test Object Management Tool) etc.
8. Test Reports: As soon as testing is completed, the Team Lead or Manager generates metrics
and makes final reports for the whole testing effort.
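As a tiny illustration of such metrics, the sketch below computes a pass rate from raw execution counts. The counts and the chosen metrics are an assumed example, not a prescribed report format:

```python
# Hypothetical end-of-cycle counts from a test execution report.
executed, passed, failed, blocked = 120, 102, 14, 4

# Pass rate: share of executed test cases that passed.
pass_rate = passed / executed * 100
print(f"pass rate: {pass_rate:.1f}%")

# Share of executed test cases that uncovered a defect.
print(f"failure rate: {failed / executed:.2%}")
```

A real report would typically break such figures down per module and per build.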
Prototyping
Spiral Model
2.2.1 Water Fall Model
The Waterfall model follows the SDLC phases directly. This model is the first SDLC model ever
proposed; hence it is also called the “Classic Life Cycle Model”. It is a step-by-step model:
only after one phase is completed is the next phase begun.
A classic SDLC model, with a linear and sequential method that has goals for each
development phase.
The waterfall model simplifies task scheduling, because there are no iterative or
overlapping steps.
One drawback of the waterfall model is that it does not allow for much revision.
Advantages:
Allows processes to be managed.
Makes processes more systematic.
Disadvantages:
2.2.2 V-Model
The V-model was originally developed from the waterfall software process model. The four
main process phases – requirements, analysis, design and coding – have a corresponding
verification and validation testing phase. In this methodology, development and testing take
place at the same time with the same kind of information in their hands. A typical "V" shows
the development phases on the left-hand side and the testing phases on the right-hand side.
The development team follows a "do-procedure" to achieve the goals of the company, and the
testing team follows a "check-procedure" to verify them.
Works well for small projects where requirements are easily understood.
Chapter 3 - How to test?
Testing, being an important activity, consumes nearly 40% of the project time and effort. The
critical success criterion for a software project is the time of delivery. As time constraints are
imposed, no activity in a software development process can take its own time for completion;
rather, it should be completed in an optimum time. With the appropriate use of tools and
reusable components, significant time can be saved in the development life cycle.
In the same way, following a proper process in a systematic way and doing ‘smart’ testing
saves much time and effort while still maintaining quality. This idea led to the
innovation of testing techniques and methodologies or strategies in the testing process.
This chapter gives an insight into the various testing techniques, types, methodologies and
strategies in a testing process.
3.1 Classification of Testing
There are several approaches, which can be adopted in testing an application.
The following figure shows the classification of testing approaches.
Testing
• Static
  o Reviews: Inspection, Walkthroughs, Desk-checking, etc.
  o Static Analysis: Control Flow, Data Flow, Symbolic Execution, etc.
• Dynamic
  o Structural: Statement, Branch/Decision, Branch Condition, Branch Condition
    Combination, Definition-Use, LCSAJ, Arcs, etc.
  o Behavioural
    - Functional: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect
      Graphing, Random, State Transition, etc.
    - Non-functional: Usability, Performance, etc.
Verification
The process of evaluating a system or component to determine whether the products of the
given development phase satisfy the conditions imposed at the start of that phase [BS 7925-1].
“Verification” checks whether we are building the system right.
Validation
Determination of the correctness of the products of software development with respect to the
user needs and requirements [BS 7925-1]. “Validation” checks whether we are building the
right system.
Functional testing
Ensures that the requirements are properly satisfied by the application system. The functions
are those tasks that the system is designed to accomplish.
Structural testing
1. Unit Testing
Testing a piece of code is called Unit Testing. This is also known as Program Testing or Module
Testing. Unit Testing is done by developers.
In Unit Testing, the developers validate their programs using the techniques below:
a. Basis Path Coverage
b. Control Structure Coverage
c. Program Technique Coverage
The unit-level or program-level testing techniques are called white-box testing
techniques, also known as clear-box or glass-box testing techniques.
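As a sketch of what branch-level coverage means in practice, consider a small function and unit tests that exercise both outcomes of its single decision. The function, threshold and test names below are invented for illustration, not taken from the book:

```python
# Hypothetical unit under test: one decision, hence two branches.
def classify_amount(amount):
    """Return 'high' for amounts over 1000, otherwise 'low'."""
    if amount > 1000:   # decision point with a true and a false branch
        return "high"
    return "low"

# One test per branch gives full branch coverage of this unit.
def test_high_branch():
    assert classify_amount(1500) == "high"

def test_low_branch():
    assert classify_amount(500) == "low"

if __name__ == "__main__":
    test_high_branch()
    test_low_branch()
    print("both branches exercised")
```

Statement coverage alone could be satisfied here with fewer tests; the point of branch coverage is that every outcome of every decision is executed at least once.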
2. Integration Testing
After the related programs are written and unit tested, the corresponding
programs are interconnected to form a software build. Testing these interconnected modules is
called Integration Testing. Integration Testing is done by the developers. Usually, the
following methods of integration testing are followed:
3. System Testing
After receiving a software build from the developers, the testing team concentrates on system
testing to validate customer requirements and customer expectations.
The following tests can be categorized under System testing:
1. Recovery Testing
2. Security Testing
3. Compatibility Testing
4. Configuration Testing
5. Data Volume Testing
6. Load Testing
7. Stress Testing
8. Performance Testing
9. Installation Testing
10. Encryption/Decryption Testing
11. Conformance Testing
12. Usability Testing
13. End-to-End Testing
14. Regression Testing
15. Re-Testing
16. Smoke/Sanity Testing
17. Alpha Testing
18. Beta Testing
19. Functional Testing
Test strategy identifies a best possible use of the available resources and time to achieve the
required testing coverage or identified testing goals. It decides on which parts and aspects of
the system the emphasis should fall.
Test Strategy determination is based on a number of factors, a few of which are listed below.
Product Technology
Component selection
Product criticality
Product complexity
By reviewing relevant characteristics of a product, key test requirements are derived. Then the
most appropriate test method can be implemented to achieve a real goal: complete test
coverage.
The main intention of preparing the Test Plan is that everyone concerned with the project is
in sync with regard to the scope, responsibilities, deadlines and deliverables for the project.
A Test Plan is a useful way to think through the efforts needed to validate the acceptability of a
software product.
The completed document will help people outside the test group understand the ‘Why’ and
‘How’ of the product validation. It should be thorough enough to be useful but not so thorough
that no one outside the test group will read it.
Purpose
This section should contain the purpose of preparing the test plan.
Scope
This section should talk about the areas of the application which are to be tested by the QA
team and specify those areas which are definitely out of scope (screens, database, mainframe
processes etc).
Test Approach
This would contain details on how the testing is to be performed and whether any specific
strategy is to be followed (including configuration management).
Entry Criteria
This section explains the various steps to be performed before the start of a test (i.e.) pre-
requisites. For example: Timely environment set up, starting the web server / app server,
successful implementation of the latest build etc.
Resources
This section should list out the people who would be involved in the project and their
designation etc.
Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the
various members in the project.
Exit criteria
Contains tasks like bringing down the system / server, restoring system to pre-test
environment, database refresh etc.
Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the
course of the project.
Tools to be used
This would list out the testing tools or utilities (if any) that are to be used in the project (e.g.)
Quick Test Professional, Load Runner, Test Complete, Test Partner, Quality Center (or) Test
Director.
Deliverables
This section contains the various deliverables that are due to the client at various points of time
(i.e.) daily, weekly, start of the project, end of the project etc. These could include Test Plans,
Test Procedure, Test Matrices, Status Reports and Test Scripts etc. Templates for all these could
also be attached.
References
Procedures
Templates (Client Specific or otherwise)
Standards / Guidelines (e.g.) QView
Project related documents (RSD, ADD, FSD etc)
Annexure
This could contain embedded documents or links to documents which have been / will be used
in the course of testing (e.g.) templates used for reports, test cases etc. Referenced documents
can also be attached here.
Sign-Off
This should contain the mutual agreement between the client and the QA team with both leads
/ managers signing off their agreement on the Test Plan.
Test Case:
The definition of “Test Case” differs from company to company, tester to tester and even
project to project.
A Test Case consists of set of inputs, execution steps and expected results developed for a
particular objective such as to exercise a particular program path or to verify compliance with a
specific requirement.
PROJECT: COES
MODULE: Order Entry
FORM REF: Authentication
FUNCTIONAL SPECIFICATION: User Authentication
Document References: COES SRS Ver 1.2
Sec/Page REF NO.: 5.1.1 / 9
FUNCTIONAL SPECIFICATION REF NO.: 5.1.1.1
TEST OBJECTIVE: To check whether the entered User name and Password are valid or invalid
PREPARED BY: Ashok

Step No | Steps | Data | Expected Results | Actual Results
1 | Enter User Name and press LOGIN Button | User Name = COES | Should display a warning message box "Please Enter User name and Password" |
2 | Enter Password and press LOGIN Button | Password = COES | Should display a warning message box "Please Enter User name and Password" |
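The same authentication test case could equally be expressed as an automated check. The sketch below mirrors its two steps against a hypothetical `login` function; the function, its behaviour and the message text are assumptions standing in for the real application:

```python
# Hypothetical stand-in for the application under test: the login
# screen requires both a user name and a password.
WARNING = "Please Enter User name and Password"

def login(username=None, password=None):
    if not username or not password:
        return WARNING          # incomplete input -> warning message box
    return "Welcome"            # assumed success response

# Step 1: enter only the user name and press LOGIN.
assert login(username="COES") == WARNING

# Step 2: enter only the password and press LOGIN.
assert login(password="COES") == WARNING

print("both steps produced the expected warning")
```

The manual table and the automated form carry the same information: steps, data, and expected results.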
• It is testing without knowledge of the internal workings of the item being tested.
• The tester would only know the "legal" inputs and what the expected outputs should be,
but not how the program actually arrives at those outputs.
• It is because of this that black-box testing can be considered testing with respect to the
specifications; no other knowledge of the program is necessary.
• The tester and the programmer can be independent of one another, avoiding
programmer bias toward his own work.
• For this testing, test groups are often used.
Equivalence Partitioning
Boundary Value Analysis
Orthogonal Array Testing
Specialized Testing
Equivalence Partitioning:
Equivalence partitioning is a method for deriving test cases. In this method, classes of input
conditions called equivalence classes are identified such that each member of the class causes
the same kind of processing and output to occur.
In this method, the tester identifies various equivalence classes for partitioning. A class is a set
of input conditions that is likely to be handled the same way by the system. If the system
were to handle one case in the class erroneously, it would handle all cases in the class erroneously.
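For instance, suppose a field accepts integers from 1 to 100 (a made-up requirement for illustration). Equivalence partitioning yields three classes, and because every member of a class is assumed to behave alike, one representative value per class is enough:

```python
# Hypothetical requirement: the field accepts integers 1..100.
LOW, HIGH = 1, 100

def is_valid(value):
    return LOW <= value <= HIGH

# Equivalence classes: below the range, inside it, above it.
# One representative per class keeps the test set small.
representatives = {
    "below range (invalid)": 0,
    "in range (valid)":      50,
    "above range (invalid)": 150,
}

for name, value in representatives.items():
    expected = "valid" if name.startswith("in range") else "invalid"
    actual = "valid" if is_valid(value) else "invalid"
    assert actual == expected, name

print("one test per equivalence class passed")
```

Boundary value analysis, covered next, would add the edge values 0, 1, 100 and 101 to the same setup.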
Boundary Value Analysis:
A black-box technique that focuses on the boundaries of the input domain rather than its center.
BVA guidelines:
• Black-box technique that enables the design of a reasonably small set of test cases that
provides maximum test coverage.
• Focus is on categories of faulty logic likely to be present in the software component
(Without examining the code)
• Only a small number of possible inputs can actually be tested; to test every possible
input stream would take nearly forever.
• Without clear and concise specifications, test cases are hard to design.
• There may be unnecessary repetition of test inputs if the tester is not informed of test
cases the programmer has already tried.
• May leave many program paths untested.
• Cannot be directed toward specific segments of code which may be very complex (and
therefore more error prone).
• Most testing-related research has been directed toward glass-box testing.
• Software testing approaches that examine the program structure and derive test data
from the program logic.
• Structural testing is sometimes referred to as clear-box testing, since black boxes are
considered opaque and do not really permit visibility into the code.
Informal Reviews:
Any one-to-one meeting that happens between any two persons is called an “Informal
Review”. It is also called a “Peer Review”.
Peer reviews are not preplanned, details of the discussion are not documented and the outcome
of the review is not reported. That is why it is called an “informal” review.
Walkthroughs:
In a walkthrough the designer or programmer leads the members of the development team
and other interested parties through the software and the participants ask questions and make
comments about possible errors, violations of development standards and other problems.
Walkthroughs are still not fully formal, so they are also called “Semi-formal Reviews”.
Purpose of Walkthrough:
Participants:
1. Walkthrough Leader
2. Recorder
3. Author
4. Team Members.
Inspections:
A visual examination of a software product to detect and identify software anomalies, including
errors and deviations from standards and specifications.
Purpose of Inspection:
Participants:
1. Inspection Leader
2. Recorder
3. Reader
4. Author
5. Inspector
Responsibilities:
Each participant is given some responsibilities which he is required to carry out during the
inspection.
Moderator (Inspection Leader) - planning and preparation.
Recorder - documentation work.
Reader - leads the team through the software.
Author - checks that the software meets the entry criteria for inspection.
Inspector - identifies and describes the anomalies in the software.
Inspection process:
The inspection process moves through the stages Planning → Overview meeting →
Preparation → Examination → Follow-up, with the Moderator, Author, Reader, Recorder and
Inspector each taking part at the appropriate stages.
Why to do inspections:
Cost of detecting and fixing defects is less during the earlier stages.
Testing alone cannot eliminate all defects.
It gives management an insight into the development process.
Quality can be maintained from the initial stages.
In unit testing, the testing starts as and when builds/programs are completely coded. When
the related units are coded and unit tested, integration testing starts. Only when all units are
unit tested and integration tested does system testing start.
Before the test cases are executed, the Test Lead will allocate the test cases to different
individual testers, depending upon the test groups and availability of the testers. Also, the
Test Lead will fix the target dates for executing the test cases, for individual testers. This
comes under the work distribution and scheduling part of the Test Lead.
Before anyone can start testing, the test environment must be ready. It is always advised to
use a separate test environment, which is different from the development environment. It
may be a different machine altogether or a different set of drivers/directories from which
the test cases are executed. The entire set of program files, database objects etc are to be
copied (or installed) in the test environment, before the test execution begins.
By setting up the testing in this way, the integrity of the software components is ensured, and
at the same time the programs and software are prevented from being overwritten when
developers fix bugs and recompile.
The test input section of the test cases defines the actual values that are to be fed
to the application programs / screens. This data can be identified either at the time of
writing the test cases itself or just before executing them. Data that is largely
static can be identified while writing the test case itself (for example, the name field or amount
field in a banking deposit screen). Data that is dynamic and configurable needs more
analysis before preparation. For example, if a shop provides various percentage discounts to
the articles being sold, depending upon the season, the data is not static and it changes for
every season. To test such functionalities, multiple sets of data values are to be prepared.
Again, the values may not be fixed, but they may be configurable.
So, just before executing the test cases, these kinds of data are to be prepared. Preparation
of test data depends upon the functionality that is being tested.
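The seasonal-discount example can be made concrete: configurable data is built just before execution rather than hard-coded into each test case. The season names and percentages below are invented purely for illustration:

```python
# Hypothetical configurable test data: discount percentages vary by
# season, so the data set is generated just before the test run.
season_discounts = {"summer": 10, "winter": 25, "clearance": 50}

def build_test_data(price, discounts):
    """Return (season, expected_discounted_price) pairs for each season."""
    return [(season, price * (100 - pct) / 100)
            for season, pct in discounts.items()]

# For a 200.00 article this yields one expected result per season.
for season, expected in build_test_data(200.0, season_discounts):
    print(season, expected)
```

When the configuration changes (a new season, a new percentage), only the dictionary changes; the test cases themselves stay untouched.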
4. Actual Test Execution
Once the data is ready, the tester’s job is to go through the test pre-requisites and make sure
that they are met. For example, if the test case is to withdraw some money from an account,
the user must have access rights to perform that operation and the account should have
enough money to be withdrawn. The tester will have to make these pre-requisites exist
before starting the test case. This may include going into a separate screen and feeding
data, or going to the database and populating it manually, etc.
INSTALLATION TESTING: Testing whether the software installs successfully or not is
called Installation Testing. Below are some tips for performing Installation Testing:
Check whether, while installing the application / product, the installer checks for the
dependent patches / software.
Check whether the installer gives a default installation path.
Installation should start automatically when the CD is inserted.
The installer should give remove / repair options.
When you perform uninstallation, check that all the registry keys, files, DLLs, shortcuts and
ActiveX components are removed from the system after uninstalling the software.
USABILITY TESTING: Usability testing is testing for ‘user friendliness’. Clearly this is subjective
and depends on the targeted end-user or customer. User interviews, surveys, video recording
of user sessions and other techniques can be used. Test engineers are needed, because
programmers and developers are usually not appropriate as usability testers.
FUNCTIONAL TESTING: Black-box type testing geared to the functional requirements of an
application; this type of testing is normally done by the testers.
NON-FUNCTIONAL TESTING: Testing the application against the client’s performance requirements.
Non-functional testing is done based on the requirements and the test scenarios defined by the
client.
REGRESSION TESTING: In Regression Testing we check whether all the bugs raised in the
previous build have been fixed, and whether the fixes for those bugs have caused any new
bugs.
The Regression test suite contains three different classes of test cases:
RECOVERY TESTING: Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed. If recovery is automatic, then
re-initialization, checkpointing mechanisms, data recovery and restart should be evaluated for
correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.
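For the manual-recovery case, MTTR is simply the mean of the observed repair times. The numbers and the acceptable limit below are invented for illustration:

```python
# Hypothetical repair times (in minutes) observed over several failures.
repair_times = [12, 30, 18, 25, 15]

# MTTR is the arithmetic mean of the repair times.
mttr = sum(repair_times) / len(repair_times)
print(f"MTTR = {mttr:.1f} minutes")

acceptable_limit = 30  # assumed acceptable limit for this system
print("within acceptable limits" if mttr <= acceptable_limit
      else "recovery is too slow")
```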
SECURITY TESTING: Security testing attempts to verify whether the programs, data and
documents are safe from unauthorized access.
LOAD TESTING: Testing an application under heavy loads to measure the response time as the
application is subjected to increasing load.
STRESS TESTING: Also described as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs to find out the breakpoint at which the
application fails.
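A minimal illustration of the load/stress idea: measure response time as the load level increases until an acceptable limit is exceeded. The operation, the load levels and the threshold below are all invented; real load tests use dedicated tools such as LoadRunner, mentioned earlier:

```python
import time

# Hypothetical operation under test: its cost grows with the load level.
def operation(load):
    total = 0
    for i in range(load * 10_000):
        total += i
    return total

THRESHOLD = 0.5  # assumed acceptable response time, in seconds

for load in (1, 10, 100, 1000):
    start = time.perf_counter()
    operation(load)
    elapsed = time.perf_counter() - start
    print(f"load={load:5d} response={elapsed:.3f}s")
    if elapsed > THRESHOLD:
        # Stress testing looks for exactly this breakpoint.
        print("breakpoint reached: response time exceeds the limit")
        break
```

Load testing stays within expected loads and records response times; stress testing deliberately pushes past them to find the breakpoint.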
DATA VOLUME TESTING: This is also known as “Storage Testing”. During this testing the testers
validate the maximum capacity of the software build to store user-related data.
CONFIGURATION TESTING: This is also known as “Portability Testing”. During this testing the
testing team validates whether the software build works on the different platforms the
customer expects. Platforms means the operating system, browser, compilers and
other system software.
ENCRYPTION / DECRYPTION TESTING: When the software build runs on a network, the client
process encrypts the original data and the server process decrypts it to recover the original
data, so that other parties cannot access your data. This testing verifies that the encryption
and decryption work correctly.
RE-TESTING: Testing the same module with different sets of data or with different inputs is
called re-testing.
Ad Hoc TESTING: Testing the application without having any test cases is called Ad Hoc Testing.
In Ad Hoc Testing the testers will try to ‘break’ the system by randomly trying the System’s
functionality.
DOCUMENTATION TESTING: Documentation testing is checking the user manuals, help screens
and setup instructions etc to verify they are correct and appropriate.
General Principles
1. Getting the Big Picture: Skim the table of contents to get a feel for the content of the
manual.
2. Logical Order: The order in which topics are discussed or in which steps are covered is very
important. You may find steps or topics that should be discussed before another topic.
3. Audience: Keep in mind the audience of this manual.
o Is the information appropriate for the person who will be using this manual?
o Is this person a system administrator, a developer, an end-user?
o Is the information too technical or not technical enough?
o What other types of information does this person need?
4. Formal Hand-off: When you have completed reviewing a manual, schedule a formal 10- to
15-minute meeting to sit down with the writer and review your comments and concerns with them.
NOTE: These types of manuals are very procedure-oriented and include installation guides,
administration tool guides, and end-user guides.
1. Overview and General Information: Please check this information for technical accuracy as
well—even if it seems marketing-oriented.
2. Checking Screens:
o Are screens up to date?
o Are all necessary screens included?
o Are screens correctly identified and labeled?
o Are screen elements and menu options correctly identified?
3. References to other sections or manuals:
o Do references take the reader to the correct page and section?
o If the reference is to another manual, is it the appropriate manual?
4. Technical Issues:
Are there paragraphs or sections that must be rewritten because of missing or inaccurate
technical information? If so, or if critical technical information is missing, alert the writer
immediately; do not wait until your reviews are turned in.
If meetings with a developer or other information source are required in order to get the
information, see if the writer can attend them with you. It will save you both some time. For the
first test draft, the Beta Draft, note bugs and other concerns on the manuscript.
For the final test draft, note bugs on the manuscript and in a formal bug-tracking system, if your
dept. uses one.
ALPHA TESTING: Alpha testing is conducted at the developer's site, in a controlled
environment, by the end user of the software.
BETA TESTING: The Beta testing is conducted at one or more customer sites by the end-user of
the software. The beta test is a live application of the software in an environment that cannot
be controlled by the developer. Beta testing is the last stage of testing, and normally can
involve sending the product to beta test sites outside the company for real-world exposure or
offering the product for a free trial download over the Internet. Beta testing is often preceded
by a round of testing called alpha testing.
1. Missing Functionality
2. Extra Functionality
3. Wrongly Understood Functionality
A functionality that is specified in the product specification but not implemented in the
software product is called "Missing Functionality".
A functionality that is not specified in the specification but is found in the software is called
"Extra Functionality".
A functionality that is implemented differently from what the specification describes is called
"Wrongly Understood Functionality".
After a bug is identified, it is reported to the concerned people through a document called a
"Bug Report". A bug report is a document that contains the details regarding the variations or
deviations identified in the application.
Severity: Assigned by the manager; shows how severely the bug affects the behavior of the
product.
Closed because: The reason why the bug was closed; it may, for example, be a duplicate bug.
Deferred: The bug will not be fixed in this version; these are minor errors such as font size,
help contents, etc.
Bug Lifecycle
When the bug is posted for the first time, its status will be "New", meaning the bug is
not yet approved.
After a tester has posted the bug, the tester's Team Lead confirms that the bug is
genuine and changes the status to "Open".
Once the Team Lead has changed the status to Open, he assigns the bug to the
corresponding developers' team. The status of the bug is now changed to "Assigned".
If the developer feels that it is not a genuine bug, he rejects it. The status of the bug is
now changed to "Rejected".
If the bug is moved to the "Deferred" state, it is expected to be fixed in later releases. A
bug may be moved to this state for several reasons: the priority of the bug might be low,
there may be a lack of time before the release, or the bug might not have a significant
effect on the application.
Once the bug is resolved or fixed by the developer, he changes the status to "Fixed".
Once the bug is fixed, it comes back to the testing department. The tester now retests
the fixed bug and, after testing it, changes the status to "Verified".
Once the tester has verified the fix, if the same bug does not occur again, he changes the
status to "Closed". If the tester encounters the same bug again, he changes the status to
"Reopen".
Defect Priority:
The priority level describes how quickly the defect should be resolved. The priority levels are
classified as follows:
Immediate: Resolve the defect with immediate effect.
At the Earliest: Resolve the defect at the earliest opportunity, at the second level of
priority.
Normal: Resolve the defect in the normal course of work.
Later: Could be resolved at a later stage.
3.9 Test Reports
Individual testers send their status on executing the test cases to the Team Lead on a daily or
weekly basis. This includes which test cases the tester executed during that period, the total
number of test cases passed, the number of test cases failed, and the number of test cases
pending. When the Team Lead gets this status from the individual testers, he consolidates all
these details into various kinds of reports. These are used for tracking the testing activities
and for planning further work.
After the Test Execution phase, the following documents would be signed off.
It is a very important document in the testing phase, because by looking at it we can clearly
understand whether we have covered all the test cases for that particular requirement.
Fig: - Traceability Matrix Template
2) Test Planning is the next step. This is done in form of a “Test Plan”. In this, the following
items are addressed:-
a) The Scope of testing is discussed. Which modules are in the scope and which are out
of scope.
b) The risks involved in the test activity. Any risk that may affect the schedule, cost and
other resources must be clearly documented along with what could be the
mitigation plan against each risk.
c) The types of testing. Different types to be conducted are Installation Testing (all
installer packages), Functionality Testing (of course this is must), Security Testing
(application role based and NT authentication based), Volume Testing (For huge
number of database records), Stress Testing (For a large number of concurrent
users), Recovery Testing, Documentation Testing (Help Documents, Troubleshooting
manuals etc), Compatibility Testing (IE VS Netscape, different backend databases)
etc.
Note: All the above mentioned testing(s) are applicable to all projects.
d) Escalation Criteria - If things go wrong, who are all to be informed within what time
frame?
e) Resources: The hardware, software, and human resources needed to execute the
project.
f) Schedule of the testing activity with important milestones.
This Test Plan document is prepared by the Team Lead and is reviewed by the Project
Manager and the Development Lead also.
3) The Testers will write all "Testing Scenarios" with a brief description; basically, a document
listing all possible test scenarios in a nutshell. This is reviewed by the Team Lead.
4) The Testers write detailed "Test Cases". These elaborate the test precondition, test input,
test steps, and expected results. The Testers send the test cases to the Team Lead and the
Developers' Lead for review. This is called the "pre-review". They review the test cases
and give their comments on whether anything needs to be added or modified.
After incorporating the comments given by the Team Lead and the Developers' Lead, the
Tester sends the test cases for review again. This is called the "post review".
After Post review now the Test Cases are ready for the Execution.
5) When the Development Team gives the build to the testers, testers will execute the Test
Cases one by one. When they find problems in the application, the bugs are logged through
Excel Sheets (or) through any Bug Tracking System like Bugzilla, Quality Center, TOM (Test
Object Management) tools.
The Development Team fixes the bugs and in the next build, the testing team does
Regression Testing. This cycle continues until bugs go to zero or to a minimal acceptable
level.
6) Regression test automation is taken up when a large number of tests needs to be
repeated. We may use typical automation tools such as Quick Test Professional, Test
Partner, or Test Complete. The Testers write test scripts (recording and inserting
verification points), and this automated test library will be in place for test execution in the
subsequent test cycles.
Glossary
Acceptance Testing: Formal testing conducted to enable a user, customer, or other authorized
entity to determine whether to accept a system or component. [IEEE].
Ad Hoc Testing: Testing carried out using no recognized test case design technique.
Alpha Testing: Simulated or actual operational testing at an in-house site not otherwise
involved with the software developers.
Behavior: The combination of input values and preconditions and the required response for a
function of a system. The full specification of a function would normally comprise one or more
behaviors.
Beta Testing: Operational testing at a site not otherwise involved with the software developers.
Big-bang Testing: Integration testing where no incremental testing takes place prior to all the
system's components being combined to form the system.
Bottom-up Testing: An approach to integration testing where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The process is
repeated until the component at the top of the hierarchy is tested.
Boundary Value: An input value or output value which is on the boundary between equivalence
classes, or an incremental distance either side of the boundary.
Boundary Value Analysis: A test case design technique for a component in which test cases are
designed which include representatives of boundary values.
Boundary Value Coverage: The percentage of boundary values of the component's equivalence
classes which have been exercised by a test case suite.
Boundary Value Testing: A test case selection technique in which test data is chosen to lie
along the "boundaries" or extremes of input domain (or output range) classes, data structures,
procedure parameters, etc. Boundary value test cases often include the minimum and
maximum in-range values and the out-of-range values just beyond these values.
Branch: A conditional transfer of control from any statement to any other statement in a
component, or an unconditional transfer of control from any statement to any other statement
in the component except the next statement, or when a component has more than one entry
point, a transfer of control to an entry point of the component.
Branch Condition Combination Testing: A test case design technique in which test cases are
designed to execute combinations of branch condition outcomes.
Branch Condition Coverage: The percentage of branch condition outcomes in every decision
that have been exercised by a test case suite.
Branch Condition Testing: A test case design technique in which test cases are designed to
execute branch condition outcomes.
Branch Coverage: The percentage of branches that have been exercised by a test case suite.
Branch Testing: A test case design technique for a component in which test cases are designed
to execute branch outcomes.
Capture/Playback Tool: A test tool that records test input as it is sent to the software under
test. The input cases stored can then be used to reproduce the test at a later time.
Code Coverage: An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed and
therefore may require additional attention.
Code-based Testing: Designing tests based on objectives derived from the implementation
(e.g., tests that execute specific control flow paths or use specific data items).
Condition: An expression containing no Boolean operators. For example, the expression "IF A"
is a condition as it is a Boolean expression without Boolean operators which evaluates to either
True or False.
Control Flow Graph: The diagrammatic representation of the possible alternative control flow
paths through a component.
Coverage: The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test case suite.
Data Flow Coverage: Test coverage measure based on variable usage within the code.
Examples are data definition-use coverage, data definition P-use coverage, data definition C-use
coverage, etc.
Data Flow Testing: Testing in which test cases are designed based on variable usage within the
code.
Decision: An expression comprising conditions and zero or more Boolean operators that is used
in a control construct (e.g. if...then...else; case statement) that determines the flow of
execution of the software program. A decision without a Boolean operator reduces to a
condition. For example, the expression "IF (A>B) or (B<C) THEN" is a decision, as is "FOR A>5
LOOP".
Decision Coverage: Every point of entry and exit within the software is invoked at least once,
and every decision in the software has taken all possible outcomes at least once. Source code
decision coverage, by definition, includes source level statement coverage, while instruction
decision coverage includes machine code decision coverage.
Decision/Condition Coverage: Every point of entry and exit within the software is invoked at
least once, every condition in a decision in the software has taken all possible outcomes at least
once, and every decision has taken all possible outcomes at least once.
Decision Outcome: The result of a decision (which therefore determines the control flow
alternative taken).
Design-based Testing: Designing tests based on objectives derived from the architectural or
detail design of the software (e.g., tests that execute specific invocation paths or probe the
worst case behavior of algorithms).
Desk Checking: The testing of software by the manual simulation of its execution.
Equivalence Class: An input domain ("class") for which each input yields the same
("equivalent") execution path regardless of which input from the class is chosen.
Equivalence Partition Testing: A test case design technique for a component in which test cases
are designed to execute representatives from equivalence classes.
Error Guessing: A test case design technique where the experience of the tester is used to
postulate what faults might occur, and to design tests specifically to expose them.
Error Seeding: The process of intentionally adding known faults to those already in a computer
program for the purpose of monitoring the rate of detection and removal, and estimating the
number of faults remaining in the program. [IEEE]
Executable Statement: A statement which, when compiled, is translated into object code,
which will be executed procedurally when the program is running and may perform an action
on program data.
Exhaustive Testing: A test case design technique in which the test case suite comprises all
combinations of input values and preconditions for component variables.
Failure: Deviation of the software from its expected delivery or service. [Fenton]
Feasible Path: A path for which there exists a set of input values and execution conditions
which causes it to be executed.
Functional Specification: The document that describes in detail the characteristics of the
product with regard to its intended capability. [BS 4778, Part2]
Functional Test Case Design: Test case selection that is based on an analysis of the specification
of the component without reference to its internal workings.
Functional Testing: Verification of an item by applying test data derived from specified
functional requirements without consideration of the underlying product architecture or
composition.
Incremental Testing: Integration testing where system components are integrated into the
system one at a time until the entire system is integrated.
Infeasible Path: A path which cannot be exercised by any set of possible input values.
Input: A variable (whether stored within a component or outside it) that is read by the
component.
Inspection: A group review quality improvement process for written material. It consists of two
aspects; product (document itself) improvement and process improvement (of both document
production and inspection) after [Graham]
Installability Testing: Testing concerned with the installation procedures for the system.
Instruction Coverage: Every machine code instruction in the software has been executed at
least once. Executing a machine instruction means that the instruction was processed.
Integration Testing: Testing performed to expose faults in the interfaces and in the interaction
between integrated components.
Interface Testing: Integration testing where the interfaces between system components are
tested.
Path Coverage: The percentage of paths in a component exercised by a test case suite.
Path Sensitizing: Choosing a set of input values to force the execution of a component to take a
given path.
Path Testing: A test case design technique in which test cases are designed to execute paths of
a component.
Peer Review: A formal review of an item by a group of peers of the item's developer.
Portability Testing: Testing aimed at demonstrating the software can be ported to specified
hardware or software platforms.
Precondition: Environmental and state conditions which must be fulfilled before the
component can be executed with a particular input value.
Progressive Testing: Testing of new features after regression testing of previous features.
[Beizer]
Regression Testing: Re-execution of tests which have previously been executed correctly, in
order to verify a subsequent revision of that same product.
Review: A process or meeting during which a work product, or set of work products, is
presented to project personnel, managers, users or other interested parties for comment or
approval. [IEEE]
State Transition Testing: A test case design technique in which test cases are designed to
execute state transitions.
Statement: An entity in a programming language which is typically the smallest indivisible unit
of execution.
Statement Coverage: Every statement in the software has been executed at least once.
Executing a statement means that the statement was encountered and evaluated during
testing.
Statement Testing: A test case design technique for a component in which test cases are
designed to execute statements.
Static Analysis: Analysis of a program carried out without executing the program.
Statistical Testing: A test case design technique in which a model is used of the statistical
distribution of the input to construct representative test cases.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of
its specified requirements. [IEEE]
Structural Coverage: Coverage measures based on the internal structure of the component.
Structural Test Case Design: Test case selection that is based on an analysis of the internal
structure of the component.
Structured Basis Testing: A test case design technique in which test cases are derived from the
code logic to achieve 100% branch coverage.
System Testing: The process of testing an integrated system to verify that it meets specified
requirements. [Hetzel]
Test Automation: The use of software to control the execution of tests, the comparison of
actual outcomes to predicted outcomes, the setting up of test preconditions, and other test
control and test reporting functions.
Test Case: A set of inputs, execution preconditions, and expected outcomes developed for a
particular objective, such as to exercise a particular program path or to verify compliance with a
specific requirement. After [IEEE, do178b]
Test Case Design Technique: A method used to derive or select test cases.
Test Case Suite: A collection of one or more test cases for the software under test.
Test Comparator: A test tool that compares the actual outputs produced by the software under
test with the expected outputs for that test case.
Test Completion Criterion: A criterion for determining when planned testing is complete,
defined in terms of a test measurement technique.
Test Driver: A program which sets up an environment and calls a module for test.
Test Environment: A description of the hardware and software environment in which the tests
will be run, and any other software with which the software under test interacts when under
test including stubs and test drivers.
Test Execution: The processing of a test case suite by the software under test, producing an
outcome.
Test Execution Technique: The method used to perform the actual test execution, e.g. manual,
capture/playback tool, etc.
Test Generator: A program that generates test cases in accordance with a specified strategy or
heuristic. After [Beizer].
Test Plan: A record of the test planning process detailing the degree of tester independence,
the test environment, the test case design techniques and test measurement techniques to be
used, and the rationale for their choice.
Test Procedure: A document providing detailed instructions for the execution of one or more
test cases.
Test Records: For each test, an unambiguous record of the identities and versions of the
component under test, the test specification, and actual outcome.
Test Script: Commonly used to refer to the automated test procedure used with a test harness.
Testing: The process of exercising software to verify that it satisfies specified requirements and
to detect errors after [do178b]
Top-down Testing: An approach to integration testing where the component at the top of the
component hierarchy is tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components. The process is repeated until
the lowest level components have been tested.
Usability Testing: Testing the ease with which users can learn and use a product.
Validation: The determination of correctness of an item based upon requirements, and the
sanctity of those requirements.
White Box Testing: Verification of an item by applying test data derived from analysis of the
item's underlying product architecture and composition.
What is VBScript?
VBScript (short for Microsoft Visual Basic Scripting Edition) is a subset of the Visual Basic
programming language.
VBScript is a Microsoft proprietary language that does not work outside of Microsoft
programs.
VBScript is a scripting language (a lightweight programming language) that can be written
in any text editor.
The component incorporation capabilities of VBScript introduce some special considerations
and trade-offs in page design.
One of the powerful benefits of VBScript is its capability to ensure the validity of the data
the user enters.
Note: After you have written your VBScript code, you need Internet Explorer to run it;
Firefox, Opera, Netscape, etc., will not run VBScript. It is not possible to read and write files
or databases in the normal fashion from VBScript in the browser.
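As a first taste of the language, here is the classic one-line script; it simply displays a dialog box (MsgBox is a built-in VBScript function):

```vbscript
' Display a dialog box with a greeting
MsgBox "Hello, World!"
```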
A variant is a special type of variable that can store a wide variety of data types.
Variants are not restricted to one type of data (such as integers, for example).
The variant is used most often to store numbers and strings, but it can store a variety of
other types of data. These data types are often called subtypes because the variant can
represent them all internally.
The following table shows subtypes that the variant uses to represent the data that can be
stored in a variable:
Subtype Description
Empty: The empty subtype is used for variables that have been created but not yet assigned
any data. Numeric variables are assigned 0 and string variables are assigned "" in this
uninitialized condition.
Byte: The byte data type can store an integer value between 0 and 255. It is used to preserve
binary data and store simple data that doesn't need to exceed this range.
Long: Variables of the long data type are also integers, but they have a much higher range,
-2,147,483,648 to 2,147,483,647 to be exact.
Variables
A variable is a virtual container in the computer's memory that's used to hold information.
A computer program can store information in a variable and then access that information
later by referring to the variable's name.
Naming Restrictions
Variable names follow the standard rules for naming anything in VBScript. A variable name:
Must begin with an alphabetic character.
Cannot contain an embedded period.
Must not exceed 255 characters.
Must be unique in the scope in which it is declared.
Note: In VBScript, all variables are of type Variant that can store different types of data.
Declaring Variables
The variables can be declared with the Dim, Public or the Private statement. Like this:
Dim MyNumber
Dim MyArray(9)
Public MyNumber
Public MyArray(9)
Private statement variables are available to only the script in which they are declared.
Private MyNumber
Private Myarray(9)
Private MyNumber, MyVar, YourNumber
Note: Use Option Explicit to avoid incorrectly typing the name of an existing variable in code
where the scope of the variable is not clear. If used, the Option Explicit statement must appear
in a script before any other statement.
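A minimal sketch of how Option Explicit catches a mistyped variable name:

```vbscript
Option Explicit   ' Must appear before any other statement

Dim MyNumber
MyNumber = 10
' MyNumbr = 20    ' Uncommenting this line raises the run-time error
                  ' "Variable is undefined: 'MyNumbr'"
```

Without Option Explicit, the mistyped assignment would silently create a second variable and the bug could go unnoticed.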
A constant is a variable within a program that never changes in value. Using the const
statement, string or numeric constants with meaningful names can be created. For example:
Const MyAge = 49
When several operations occur in an expression, each part is evaluated and resolved in a
predetermined order called operator precedence. Parentheses can be used to override the
order of precedence and force some parts of an expression to be evaluated before other parts.
Operations within parentheses are always performed before those outside. Within
parentheses, however, normal operator precedence is maintained.
When expressions contain operators from more than one category, arithmetic operators are evaluated
first, comparison operators are evaluated next, and logical operators are evaluated last. Comparison
operators all have equal precedence; that is, they are evaluated in the left-to-right order in which they
appear. Arithmetic and logical operators are evaluated in the following order of precedence:
Arithmetic: exponentiation (^), unary negation (-), multiplication and division (*, /), integer
division (\), modulus (Mod), addition and subtraction (+, -), string concatenation (&).
Logical: Not, And, Or, Xor, Eqv, Imp.
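For example, multiplication is evaluated before addition, and parentheses override that order:

```vbscript
Dim a, b
a = 2 + 3 * 4    ' Multiplication first: a contains 14
b = (2 + 3) * 4  ' Parentheses first: b contains 20
```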
A procedure is a block of code that accomplishes a specific goal. When a variable is created, it
can be used within a specific procedure or share it among all the procedures in the script. The
availability a variable has within its environment is referred to as the scope of a variable. When
a variable is declared inside a procedure, it has local scope and can be referenced only while
that procedure is executed. Local-scope variables are often called procedure-level variables
because they exist only in procedures. If a variable is declared outside a procedure, it
automatically has script-level scope and is available to all the procedures in your script.
The lifetime of a variable depends upon how long it exists. The lifetime of a script-level variable
extends from the time it is declared until the time the script is finished running.
A procedure level variable can only be accessed within that procedure. When the procedure
exits, the variable is destroyed.
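The sketch below illustrates both scopes; the procedure name ShowScope is just an illustrative example:

```vbscript
Dim ScriptVar              ' Script-level scope: visible to every procedure
ScriptVar = "I am visible everywhere"

Sub ShowScope()
    Dim LocalVar           ' Procedure-level scope: destroyed when the Sub exits
    LocalVar = "I am visible only inside ShowScope"
    msgbox ScriptVar       ' Script-level variables can be read here
    msgbox LocalVar
End Sub

ShowScope
' Referencing LocalVar here yields Empty (or an error under Option Explicit),
' because it ceased to exist when ShowScope returned.
```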
Note: In order to inquire about the subtype of a variable, use the VarType function. This
function takes one argument: the variable name. The function then returns an integer value
that corresponds to the type of data storage VBScript is using for that variable.
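For example (2 and 8 are the standard VarType return codes for the Integer and String subtypes):

```vbscript
Dim x
x = 10
msgbox VarType(x)   ' Displays 2 (vbInteger)
x = "ten"
msgbox VarType(x)   ' Displays 8 (vbString)
```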
Array Variables
An array is a type of variable that ties together a series of data items and places them in
a single variable.
An Array variable uses parentheses () following the variable name.
Creating Arrays
An array created with the Dim keyword exists as long as the procedure does and is destroyed
once the procedure ends. Two types of arrays can be created using VBScript namely fixed
arrays and dynamic arrays. Fixed arrays have a specific number of elements in them, whereas
dynamic arrays can vary in the number of elements depending on how many are stored in the
array
Fixed-Length Arrays
Dim names(2)
This declares an array with three elements, indexed 0 to 2 (VBScript arrays are zero-based).
The data can be retrieved from an array using an index into the particular array element. Like
this:
names(0) = "Tom"
msgbox names(0)
An array can have up to 60 dimensions. Multiple dimensions are declared by separating the
numbers in the parentheses with commas. In the following example, MyTable is a two-
dimensional array consisting of 4 rows and 6 columns:
Dim MyTable(3, 5)
In a two-dimensional array, the first number is always the number of rows; the second number
is the number of columns.
Dynamic Arrays
A dynamic array is an array whose size can change at run time. The array is initially declared
within a procedure using either the Dim statement or the ReDim statement. For example:
Dim MyArray ()
ReDim AnotherArray ()
A subsequent ReDim statement resizes the array, but uses the Preserve keyword to preserve the
contents of the array as the resizing takes place.
ReDim MyArray(10)
……….
ReDim Preserve MyArray (20)
Note: The size or number of dimensions is not placed inside the parentheses for a dynamic
array. There is no limit for resizing a dynamic array.
Procedures
In VBScript there are two kinds of procedures: the Sub procedure and the Function procedure.
A Sub Procedure:
Is a series of VBScript statements (enclosed by the Sub and End Sub statements) that
perform actions but don’t return a value.
Can take arguments that are passed to it by a calling procedure.
Without arguments, must include an empty set of parentheses ().
Sub mysub ()
statement 1
statement 2
statement 3
..
statement n
End Sub
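A Sub that takes an argument might look like the illustrative Greet procedure below; it can be invoked with or without the Call keyword:

```vbscript
Sub Greet(name)
    msgbox "Hello, " & name
End Sub

Greet "Alice"        ' Without Call: no parentheses around the argument
Call Greet("Bob")    ' With Call: parentheses are required
```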
A Function Procedure
• Is a series of statements, enclosed by the Function and End
Function statements.
• Returns a value by assigning value to its name in one or more statements of the
procedure.
• Can take arguments passed by the calling procedures.
• Without arguments must include an empty set of parentheses ().
Function myfunction ()
statement 1
statement 2
statement 3
..
statement n
myfunction = value
End Function
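For example, the illustrative Square function below returns its result by assigning it to the function name, and the caller captures that value with an ordinary assignment:

```vbscript
Function Square(n)
    Square = n * n   ' The return value is assigned to the function name
End Function

Dim result
result = Square(5)   ' result contains 25
msgbox result
```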
If Statement
If condition Then
[statements]
[ElseIf condition-n Then
[elseifstatements]]
[Else
[elsestatements]]
End If
If a > b Then
msgbox "Hello"
Else
msgbox "Goodbye"
End If
c) If avg > 75 Then
msgbox "Distinction"
ElseIf avg >= 50 Then
msgbox "Pass"
Else
msgbox "Fail"
End If
Select Case statement
Executes one of several groups of statements, depending on the value of an expression.
Select Case testexpression
[Case expressionlist-n
[statements-n]] . . .
[Case Else
[elsestatements-n]]
End Select
The following example illustrates the use of the Select Case statement:
Dim payment
payment = "Cash"
Select Case payment
Case "Cash"
msgbox "You are going to pay cash"
Case "Visa"
msgbox "You are going to pay with Visa"
Case Else
msgbox "Unknown payment method"
End Select
For … Next statement
Repeats a group of statements a specified number of times.
For counter = start To end [Step step]
[statements]
[Exit For]
[statements]
Next
Exit For can only be used within a For Each … Next or For … Next control structure to provide
an alternate way to exit. Any number of Exit For statements may be placed anywhere in the
loop.
In the example below, the counter variable (i) is increased by one each time the loop repeats:
For i = 1 To 10
msgbox i
Next
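Exit For, described above, leaves the loop early; for example, this loop stops at the first number whose square exceeds 50:

```vbscript
Dim i
For i = 1 To 100
    If i * i > 50 Then
        Exit For     ' Leave the loop as soon as the condition is met
    End If
Next
msgbox i             ' Displays 8 (8 * 8 = 64 is the first square above 50)
```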
For Each … Next statement
Repeats a group of statements for each element in an array or collection.
For Each element In group
[statements]
[Exit For]
[statements]
Next [element]
The following example illustrates the use of the For Each … Next statement:
Dim cars(2)
cars(0) = "Volvo"
cars(1) = "Saab"
cars(2) = "BMW"
For Each x In cars
msgbox x
Next
Do … Loop statement
Repeats a block of statements while a condition is True or until a condition becomes True.
Do [While | Until condition]
[statements]
[Exit Do]
[statements]
Loop
Or:
Do
[statements]
[Exit Do]
[statements]
Loop [While | Until condition]
a) Dim i
i=0
Do while i<=10
msgbox i
i=i+1
Loop
b) Dim i
i = 0
Do
msgbox i
i = i + 1
Loop While i <= 10
c) Dim i
i = 20
Do Until i = 10
i = i - 1
Loop
d) Dim i
i = 0
Do
i = i + 1
Loop Until i = 10
The Exit Do can only be used within a Do … Loop control structure to provide an alternate way
to exit a Do … Loop . Any number of Exit Do statements may be placed anywhere in the loop.
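For example:

```vbscript
Dim i
i = 0
Do While i < 100
    i = i + 1
    If i = 10 Then
        Exit Do      ' Leave the loop as soon as i reaches 10
    End If
Loop
msgbox i             ' Displays 10
```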
While … Wend statement
Executes a series of statements as long as a given condition is True.
While condition
[statements]
Wend
The following example illustrates the use of the While … Wend statement:
Dim counter
counter = 0
While counter < 5
counter = counter + 1
msgbox counter
Wend
Note: The Do … Loop statement provides a more structured and flexible way to perform
looping than While … Wend.
Date Function
Returns the current system date.
Date
The following example uses the Date function to return the current system date:
Dim myDate
myDate = Date
Time Function
Returns the current system time.
Time
The following example uses the Time function to return the current system time:
Dim myTime
myTime = Time
DateDiff Function
Returns the number of intervals between two dates.
DateDiff(interval,date1,date2[,firstdayofweek[,firstweekofyear]])
The interval argument can take the following settings:
Interval Description
yyyy Year
q Quarter
m Month
y Day of year
d Day
w Weekday
ww Week of year
h Hour
n Minute
s Second
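For example, the "d" (day) interval counts the days between two dates:

```vbscript
Dim days
days = DateDiff("d", "1-Jan-2000", "31-Jan-2000")
msgbox days    ' Displays 30
```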
DateAdd Function
Returns a date to which a specified time interval has been added.
DateAdd(interval,number,date)
The interval argument takes the same settings as for DateDiff (yyyy, q, m, y, d, w, ww, h, n, s).
The following example uses the DateAdd function to add a month to January 31, 2000:
Dim NewDate
NewDate = DateAdd("m", 1, "31-Jan-2000") ' Returns 29-Feb-2000, because 2000 is a leap year
Conversion Functions
Asc Function
Returns the ANSI character code corresponding to the first letter in a string.
Asc(string)
In the following example, Asc returns the ANSI character code of the first letter of each string:
Dim I
I = Asc("A") ' Returns 65
I = Asc("Apple") ' Returns 65
CInt Function
Returns an expression that has been converted to a Variant of subtype Integer.
CInt(expression)
The following example uses the CInt function to convert a value to an Integer:
Dim I, k
I = 9.888
k = CInt(I) ' k contains 10
CDate Function
Returns an expression that has been converted to a Variant of subtype Date.
CDate(date)
The following example uses the CDate function to convert a string to a date:
Dim MyDate
MyDate = CDate("October 19, 1962")
Tip: Use the IsDate function to determine if date can be converted to a date or time.
Note: In general, hard coding dates and times as strings (as shown in this example) is not
recommended. Use date and time literals (such as #10/19/1962#, #4:45:23 PM#) instead.
CStr Function
Converts an expression to a String.
CStr(expression)
The following example uses the CStr function to convert a numeric value to a String:
Dim I, j
I = 10
j = CStr(I) ' j contains the string "10"
Hex Function
Returns a string representing the hexadecimal value of a number.
Hex(number)
The following example uses the Hex function to return the hexadecimal value of a number:
Dim MyHex
MyHex = Hex(255) ' MyHex contains "FF"
Format Functions
FormatCurrency Function
Returns an expression formatted as a currency value using the currency symbol defined in
the system control panel.
FormatCurrency(Expression[, NumDigitsAfterDecimal[, IncludeLeadingDigit[, UseParensForNegativeNumbers[, GroupDigits]]]])
Arguments Description
Expression Required. Expression to be formatted.
NumDigitsAfterDecimal Optional. Number of digits displayed after the decimal point. The default uses the computer's regional settings.
IncludeLeadingDigit Optional. Tristate constant indicating whether a leading zero is displayed for fractional values.
UseParensForNegativeNumbers Optional. Tristate constant indicating whether negative values are placed within parentheses.
GroupDigits Optional. Tristate constant indicating whether numbers are grouped using the group delimiter from the regional settings.
The following example uses the FormatCurrency function to format the expression as a
currency and assign it to MyCurrency:
Dim MyCurrency
MyCurrency = FormatCurrency(1000) ' Returns $1,000.00 on a system using the $ currency symbol
FormatNumber Function
Returns an expression formatted as a number.
FormatNumber(Expression[, NumDigitsAfterDecimal[, IncludeLeadingDigit[, UseParensForNegativeNumbers[, GroupDigits]]]])
The arguments are the same as for the FormatCurrency function.
The following example uses the FormatNumber function to format a number to have four
decimal places:
Dim MyAngle, MySecant, MyNumber
MyAngle = 1.3 ' Define angle in radians
MySecant = 1 / Cos(MyAngle) ' Calculate secant
MyNumber = FormatNumber(MySecant, 4) ' Format MySecant to four decimal places
FormatPercent Function
Returns an expression formatted as a percentage (multiplied by 100) with a trailing % character.
FormatPercent(Expression[, NumDigitsAfterDecimal[, IncludeLeadingDigit[, UseParensForNegativeNumbers[, GroupDigits]]]])
The arguments are the same as for the FormatCurrency function.
The following example uses the FormatPercent function to format an expression as a percent:
Dim MyPercent
MyPercent = FormatPercent(2/32) ' MyPercent contains 6.25%
FormatDateTime Function
Returns an expression formatted as a date or time.
FormatDateTime(Date[, NamedFormat])
Arguments Description
Date Required. Date expression to be formatted.
NamedFormat Optional. Numeric value indicating the date/time format used (0 = general date, 1 = long date, 2 = short date, 3 = long time, 4 = short time).
The following example uses the FormatDateTime function to format the expression as a long
date:
Dim MyDateTime
MyDateTime = FormatDateTime(Date, 1) ' Formats the current date as a long date
Math Functions
Abs Function
Returns the absolute value of a number.
Abs(number)
The following example uses the Abs function to compute the absolute value of a number:
Dim x
x = Abs(-10) ' x contains 10
Cos Function
Returns the cosine of an angle.
Cos(number)
The following example uses the Cos function to return the cosine of an angle:
Dim MyAngle, MySecant
MyAngle = 1.3 ' Define angle in radians
MySecant = 1 / Cos(MyAngle) ' Calculate secant
Int Function
Returns the integer portion of a number.
Int(number)
The following example illustrates how the Int function returns the integer portion of a number:
Dim MyNumber
MyNumber = Int(99.8) ' MyNumber contains 99
Sqr Function
Returns the square root of a number.
Sqr(number)
The following example uses the Sqr function to calculate the square root of a number:
Dim I
I = Sqr(4) ' I contains 2
Array Function
Returns a Variant containing an array made up of the values in the argument list.
Array(arglist)
In the following example, the first statement creates a variable named x. The second statement
assigns an array to variable x. The last statement assigns the value contained in the second
array element to another variable.
Dim x
x = Array(10, 20, 30)
y = x(1) ' y contains 20
Split Function
Returns a zero-based, one-dimensional array containing a specified number of substrings.
Split(expression[, delimiter[, count[, compare]]])
Arguments Description
expression Required. String expression containing substrings and delimiters.
delimiter Optional. Character used to identify substring limits. The default is the space character.
The following example uses the Split function to return an array from a string:
Dim MyDate, MyArray
MyDate = "7/10/06"
MyArray = Split(MyDate, "/") ' MyArray contains the elements "7", "10" and "06"
Filter Function
Returns a zero-based array containing a subset of a string array based on specified filter
criteria.
Filter(InputStrings, Value[, Include[, Compare]])
Arguments Description
InputStrings Required. One-dimensional array of strings to be searched.
Value Required. String to search for.
The following example uses the Filter function to return the array containing the search criteria
"Mon":
Dim MyIndex
Dim MyArray(3)
MyArray(0) = "Sunday"
MyArray(1) = "Monday"
MyArray(2) = "Tuesday"
MyIndex = Filter(MyArray, "Mon") ' MyIndex(0) contains "Monday"
String Function
Returns a repeating character string of the length specified.
String(number, character)
The following example uses the String function to return repeating character strings of the
length specified:
Dim MyString
MyString = String(5, "*") ' MyString contains "*****"
InStr Function
Returns the position of the first occurrence of one string within another.
InStr([start, ]string1, string2[, compare])
Arguments Description
start Optional. Numeric expression that sets the starting position of the search.
string1 Required. String expression being searched.
string2 Required. String expression being searched for.
compare Optional. Numeric value indicating the kind of comparison (0 = binary, 1 = textual).
The following example uses the InStr function to return the position of the first occurrence of one string within another:
Dim SearchText, MyPos
SearchText = "The quick brown fox"
MyPos = InStr(SearchText, "quick") ' MyPos contains 5
Trim Function
Removes all leading and trailing white-space characters from a string.
Trim(String)
The following example uses the Trim function to remove spaces on both sides of a string:
Dim txt
txt = Trim("   hello   ") ' txt contains "hello"
StrReverse Function
Returns a string in which the character order of a specified string is reversed.
StrReverse(String)
The following example uses the StrReverse function to return a string in reverse order:
Dim MyStr
MyStr = "hello"
MyStr = StrReverse(MyStr) ' MyStr contains "olleh"
Other Functions
RGB Function
Returns a whole number representing an RGB color value.
RGB(red, green, blue)
Arguments Description
red Required. Number in the range 0-255 representing the red component of the color.
green Required. Number in the range 0-255 representing the green component of the color.
blue Required. Number in the range 0-255 representing the blue component of the color.
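As a brief sketch of how the RGB components combine (red occupies the low-order byte of the result):

```vbscript
Dim MyColor
MyColor = RGB(255, 0, 0) ' pure red; MyColor contains 255
MyColor = RGB(0, 0, 255) ' pure blue; MyColor contains 16711680
```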
IsNull Function
Returns a Boolean value that indicates whether a specified expression contains no valid
data (Null).
IsNull(expression)
The following example uses the IsNull function to determine whether a variable contains a Null:
Dim MyVar, MyCheck
MyCheck = IsNull(MyVar) ' Returns False
MyVar = Null
MyCheck = IsNull(MyVar) ' Returns True
IsEmpty Function
Returns a Boolean value indicating whether a variable has been initialized.
IsEmpty(expression)
The following example uses the IsEmpty function to determine whether a variable has been
initialized:
Dim MyVar, MyCheck
MyCheck = IsEmpty(MyVar) ' Returns True
MyVar = 5
MyCheck = IsEmpty(MyVar) ' Returns False
MsgBox Function
Displays a message in a dialog box, waits for the user to click a button, and returns a value
indicating which button the user clicked.
MsgBox(prompt[, buttons][, title][, helpfile, context])
Arguments Description
prompt Required. String expression displayed as the message in the dialog box.
buttons Optional. Numeric value specifying the number and type of buttons to display. The default is 0 (OK button only).
title Optional. String expression displayed in the title bar of the dialog box.
The following example uses the MsgBox function to display a message box and return a value
describing which button was clicked:
Dim MyVar
MyVar = MsgBox("Hello World!", 65, "MsgBox Example") ' 65 = OK and Cancel buttons with an Information icon; returns 1 (OK) or 2 (Cancel)
InputBox Function
Displays a prompt in a dialog box, waits for the user to input text or click a button, and
returns the contents of the text box.
InputBox(prompt[,title][,default][,xpos][,ypos][,helpfile,context])
Arguments Description
prompt Required. String expression displayed as the
message in the dialog box. The maximum length
of prompt is approximately 1024 characters,
depending on the width of characters used.
The following example uses the InputBox function to display an input box and assign the string
to the variable Input:
Dim Input
Input = InputBox("Enter your name")
MsgBox ("You entered: " & Input)