Software Testing Interview Questions
S. Koirala & S. Sheikh
Includes CD with Complete Demo™ and a FREE Software Estimation Book!
COMPUTER SCIENCE SERIES
LICENSE, DISCLAIMER OF LIABILITY, AND LIMITED WARRANTY
The CD-ROM that accompanies this book may only be used on a single PC. This
license does not permit its use on the Internet or on a network (of any kind). By
purchasing or using this book/CD-ROM package (the “Work”), you agree that this
license grants permission to use the products contained herein, but does not give
you the right of ownership to any of the textual content in the book or ownership
to any of the information or products contained on the CD-ROM. Use of third
party software contained herein is limited to and subject to licensing terms for the
respective products, and permission must be obtained from the publisher or the owner
of the software in order to reproduce or network any portion of the textual material or
software (in any media) that is contained in the Work.
Infinity Science Press LLC (“ISP” or “the Publisher”) and anyone involved in the
creation, writing or production of the accompanying algorithms, code, or computer
programs (“the software”) or any of the third party software contained on the CD-
ROM or any of the textual material in the book, cannot and do not warrant the
performance or results that might be obtained by using the software or contents of the
book. The authors, developers, and the publisher have used their best efforts to insure
the accuracy and functionality of the textual material and programs contained in this
package; we, however, make no warranty of any kind, express or implied, regarding
the performance of these contents or programs. The Work is sold “as is” without
warranty (except for defective materials used in manufacturing the disc or due to
faulty workmanship).
The authors, developers, and the publisher of any third party software, and anyone
involved in the composition, production, and manufacturing of this work will not be
liable for damages of any kind arising out of the use of (or the inability to use) the
algorithms, source code, computer programs, or textual material contained in this
publication. This includes, but is not limited to, loss of revenue or profit, or other
incidental, physical, or consequential damages arising out of the use of this Work.
The sole remedy in the event of a claim of any kind is expressly limited to
replacement of the book and/or the CD-ROM, and only at the discretion of the
Publisher.
The use of “implied warranty” and certain “exclusions” varies from state to state, and
might not apply to the purchaser of this product.
Software Testing
Interview Questions
By
S. Koirala
&
S. Sheikh
This publication, portions of it, or any accompanying software may not be reproduced in any way,
stored in a retrieval system of any type, or transmitted by any means or media, electronic or mechanical,
including, but not limited to, photocopy, recording, Internet postings or scanning, without prior
permission in writing from the publisher.
Infinity Science Press LLC
11 Leavitt Street
Hingham, MA 02043
Tel. 877-266-5796 (toll free)
Fax 781-740-1677
[email protected]
www.infinitysciencepress.com
Our titles are available for adoption, license or bulk purchase by institutions, corporations, etc. For
additional information, please contact the Customer Service Dept. at 877-266-5796 (toll free).
Requests for replacement of a defective CD-ROM must be accompanied by the original disc, your
mailing address, telephone number, date of purchase and purchase price. Please state the nature of the
problem, and send the information to Infinity Science Press, 11 Leavitt Street, Hingham,
MA 02043.
The sole obligation of Infinity Science Press to the purchaser is to replace the disc, based on defective
materials or faulty workmanship, but not based on the operation or functionality of the product.
What’s on the CD
The CD contains all that you need for software testing:
It’s very important during the interview to be clear about what position you
are targeting. Depending on the position, the interviewer will ask you specific
questions. For example, if you are applying for a project manager testing
position you will be asked around 20% technical questions and 80%
management questions.
Note: In small software houses and mid-scale software companies there are times
when they expect a program manager to be very technical. But in big software
houses the situation is very different; interviews are conducted according to
position.
Note: There are many small and medium software companies which do not
follow this hierarchy and have their own ad hoc way of defining positions in the
company.
Let’s look at both the project and quality team hierarchies. The following are the
number of years of experience according to position for the projects team:
The quality hierarchy for various reasons comes under the project manager
of the project. Let’s start from the bottom of the hierarchy:
Even before the interviewer meets you he will meet your resume. The
interviewer’s review of your resume is 20% of the interview happening without
you even knowing it. With this in mind the following checklist should be
considered:
For instance, if you are looking for a senior position specify it explicitly:
‘looking for a senior position.’ Make any kind of certification, such as CSTE,
visible in this section.
- Once you have briefly specified your goals and what you have
done, it’s time to specify what type of technology you have
worked with. For instance, BVA, automated QA, processes
(Six Sigma, CMMI), TPA analysis, etc.
- After that you can give a run-through of your experience
company-wise, that is, what company you have worked with,
year/month joined and year/month left. This will give the
interviewer an overview of what type of companies you
have associated yourself with. Now it’s time to mention all the
Salary Negotiation
- SQL server
- SQL server results
- Software testing
- Software testing results
The guidelines sheet defines the guidelines for the ratings. For every
question you can give a rating from 0 to 5. Ratings are based on the following
guidelines:
- 0 - You have no knowledge of the question.
- 1 - You know only the definition.
- 2 - You know the concept but don’t have in-depth knowledge of
the subject.
- 3 - You know the concept and have partial knowledge of the
concept.
- 4 - You know the concept and have in-depth knowledge of the
subject.
- 5 - You are an expert in this area.
The remaining eight sections are questions and results. For instance, we
have a software testing section and a software testing results section. The
software testing section will take in the rating inputs for every question and
software testing results will show the output. The same holds true for the
.NET, JAVA, and SQL server sections.
The figure shows how you have performed in every category and your
overall rating.
Common Questions Asked During Interviews
Note: While reading you will come across “Note” sections which highlight
special points.
Contents
What’s on the CD
About the Book
Organizational Hierarchy
Resume Preparation Guidelines
Salary Negotiation
Interview Rating Sheet
Common Questions Asked During Interviews
How to Read This Book
Chapter 4 CMMI
(B) What is CMMI and what’s the advantage of implementing it in an organization?
Chapter 1 Testing Basics
So now to answer our question, where does testing fit in… you guessed
it, the check part of the cycle. Developers and other stakeholders of the
project do the “planning and building,” while testers do the check part of
the cycle.
When a defect reaches the end customer it is called a failure, while if the defect
is detected internally and resolved it’s called a defect.
A risk is a condition that can result in a loss. Risk can only be controlled in
different scenarios but not eliminated completely. A defect normally converts
to a risk. For instance, let’s say you are developing an accounting application
and you have done the wrong tax calculation. There is a huge possibility that
this will lead to the risk of the company running at a loss. But if this defect
is controlled then we can either remove this risk completely or minimize it.
The following diagram shows how a defect gets converted to a risk and with
proper testing how it can be controlled.
from a business angle. Running those critical test plans will assure that the
testing is properly done. The following graph explains the impact of under
testing and over testing. If you under test a system the number of defects will
increase, but if you over test a system your cost of testing will increase. Even
if your defects come down your cost of testing has gone up.
Note: This question will be normally asked to see whether you can independently
set up testing departments. Many companies still think testing is secondary.
That’s where a good testing manager should show the importance of testing.
Bringing in the attitude of testing in companies which never had a formal testing
department is a huge challenge because it’s not about bringing in a new process
but about changing the mentality.
The following are the important steps used to define a testing policy in
general. But it can change according to your organization. Let’s discuss in
detail the steps of implementing a testing policy in an organization.
Definition: The first step any organization needs to do is define one unique
definition for testing within the organization so that everyone is of the same
mindset.
How to achieve: How are we going to achieve our objective? Is there going
to be a testing committee? Will there be compulsory test plans which need to
be executed, etc.?
Evaluate: After testing is implemented in a project how do we evaluate it? Are
we going to derive metrics of defects per phase, per programmer, etc.? Finally, it’s
important to let everyone know how testing has added value to the project.
Standards: Finally, what are the standards we want to achieve by testing? For
instance, we can say that more than 20 defects per KLOC will be considered
below standard and code review should be done for it.
The previous methodology is described from a general point of view. Note that
you should cover the steps in their broader aspects.
Note: This question will normally be asked to judge whether you have a
traditional or modern testing attitude.
Testing after code and build is a traditional approach and many companies
have improved on this philosophy. Testing should occur in conjunction with
each phase as shown in the following figure.
In the requirement phase we can verify if the requirements are met
according to the customer needs. During design we can check whether
the design document covers all the requirements. In this stage we can also
generate rough functional data. We can also review the design document
from the architecture and the correctness perspectives. In the build and
execution phase we can execute unit test cases and generate structural and
functional data. And finally comes the testing phase, done in the traditional
way, i.e., running the system test cases and seeing if the system works according
to the requirements. During installation we need to see if the system is
compatible with the environment on which it is installed. Finally, during the
maintenance phase when any fixes are made we can retest the fixes and
perform regression testing.
Note: This question is asked to see if you really know practically which phase
is the most defect prone.
The design phase is more error prone than the execution phase. One
of the most frequent defects which occurs during design is that the product
does not cover the complete requirements of the customer. Second, wrong
or bad architecture and technical decisions make the next phase, execution,
more prone to defects. Because the design phase drives the execution phase
it’s the most critical phase to test. The testing of the design phase can be done
by good reviews. On average, 60% of defects occur during design and 40%
during the execution phase.
The product has to be used by the user. He is the most important person as
he has more interest than anyone else in the project. From the user we need
the following data:
A latent defect is an existing defect that has not yet caused a failure because
the exact set of conditions were never met.
A masked defect is an existing defect that hasn’t yet caused a failure
just because another defect has prevented that part of the code from being
executed.
The following flow chart explains latent defects practically. The
application has the ability to print an invoice either on a laser printer or on a
dot matrix printer. In order to achieve this the application first searches for
the laser printer. If it finds a laser printer it uses it and prints the invoice.
If it does not find a laser printer, the application searches for a dot matrix
printer. If the application finds a dot matrix printer (DMP) it prints using
the DMP; otherwise an error is given.
Now suppose that, for whatever reason, this application never had to search
for the dot matrix printer (for instance, a laser printer was always found first).
So the application never got tested for the DMP; the exact conditions were
never met for the DMP path. This is called a latent defect.
Now the same application has two defects: one defect is in the DMP
search and the other defect is in the DMP print. But because the search of
the DMP fails the print DMP defect is never detected. So the print DMP
defect is a masked defect.
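The printing flow above can be sketched in code. This is a hypothetical illustration, not from the book; the comments mark where the latent and masked defects described above would sit.

```python
def print_invoice(has_laser, has_dmp):
    """Sketch of the invoice-printing flow described above."""
    if has_laser:
        return "printed on laser printer"
    # Latent defect: if no environment ever lacks a laser printer, the code
    # below never executes, so any defect in it never causes a failure.
    if has_dmp:
        # Masked defect: a bug here stays hidden if the DMP search above
        # is itself broken and never reaches this branch.
        return "printed on DMP"
    return "error: no printer found"
```

Running the function with different printer availabilities exercises the different branches; a branch that is never reached in production is exactly where latent and masked defects hide.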
If a defect is known at the initial stage then it should be removed during that
stage/phase itself rather than at some later stage. It’s a recorded fact that
if a defect is delayed for later phases it proves more costly. The following
figure shows how a defect is costly as the phases move forward. A defect
if identified and removed during the requirement and design phase is the
most cost effective, while a defect removed during maintenance is 20 times
costlier than during the requirement and design phases. For instance, if a
defect is identified during requirement and design we only need to change the
documentation, but if identified during the maintenance phase we not only
need to fix the defect, but also change our test plans, do regression testing, and
change all documentation. This is why a defect should be identified/removed
in earlier phases and the testing department should be involved right from
the requirement phase and not after the execution phase.
The following figure shows all the steps required for a workbench.
In real scenarios projects are not made of one workbench but of many
connected workbenches. A workbench gives you a way to perform any kind
of task with proper testing. You can visualize every software phase as a
workbench with execute and check steps. The most important point to note is
that when we visualize any task as a workbench, by default we have the check
part in the task. The following figure shows how every software phase can be
visualized as a workbench. Let’s discuss the workbench concept in detail:
Alpha and beta testing has different meanings to different people. Alpha testing
is the acceptance testing done at the development site. Some organizations
at the top, we have mapped the test cases with the requirement. With this
we can ensure that all requirements are covered by our test cases. As shown
we can have one or more test cases covering the requirements. This is also
called requirement coverage.
Note: Many professionals still think testing is executing test cases on the
application. But testing should be performed at all levels. In the requirement
phase we can use the review and traceability matrix to check the validity of our
project. In the design phase we can use the design review to check the correctness
of the design and so on.
The difference between pilot and beta testing is that pilot testing is nothing
but actually using the product (limited to some users), while in beta testing we
do not input real data; instead the product is installed at the end customer’s site
to validate whether it can be used in production.
Note: Here the interviewer is expecting a proper approach to rating risk to the
application modules so that while testing you pay more attention to those risky
modules, thus minimizing risk in projects.
Features:
- Add a user
- Check user preferences
- Login user
- Add new invoice
- Print invoice

Concerns:
- Maintainability
- Security
- Performance
The table shows features and concerns. Features are functionalities which
the end user will use, while concerns are global attributes of the project. For
instance, security has to be applied to all the features listed.
- Using the priority rating table we have defined the priority for
the listed features. Depending on priority you can start
testing those features first.
- Once the priority is set you can then review it with your team
members to validate it.
The following figure shows the summary of the above steps. So list your
concerns, rate the probabilities of failures, provide an impact rating, calculate
risk/priority, and then review, review, and review.
Entry and exit criteria are a must for the success of any project. If you do
not know where to start and where to finish then your goals are not clear. By
defining exit and entry criteria you define your boundaries. For instance, you
can define an entry criterion that the customer should provide the requirement
document or acceptance plan. If this entry criterion is not met then you will
not start the project. On the other end, you can also define exit criteria for
your project. For instance, one of the common exit criteria in projects is that
the customer has successfully executed the acceptance test plan.
Note: In projects the acceptance test plan can be prepared by numerous inputs.
It is not necessary that the above list be the only criteria. If you think you have
something extra to add, go ahead.
The following diagram shows the most common inputs used to prepare
acceptance test plans.
Regression testing is used to find regression defects. Regression defects are
defects that occur when functionality which was once working normally has
stopped working. This is probably because of changes made in the program or
the environment. To uncover such defects regression testing is conducted.
The following figure shows the difference between regression and
confirmation testing. If we fix a defect in an existing application we use
confirmation testing to test if the defect is removed. It’s very possible because
of this defect or changes to the application that other sections of the application
are affected. So to ensure that no other section is affected we can use regression
testing to confirm this.
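The distinction can be illustrated with a small, hypothetical example: the confirmation test re-checks the exact defect that was fixed, while the regression tests re-check the neighbouring behaviour the fix might have disturbed.

```python
def apply_discount(price, percent):
    """Fixed function: imagine an earlier defect applied the discount twice."""
    return round(price * (1 - percent / 100), 2)

def test_confirmation_fixed_defect():
    # Confirmation testing: the exact failing scenario, re-run after the fix.
    assert apply_discount(100.0, 10) == 90.0

def test_regression_zero_discount():
    # Regression testing: behaviour that worked before and must still work.
    assert apply_discount(100.0, 0) == 100.0

def test_regression_full_discount():
    assert apply_discount(100.0, 100) == 0.0
```

In practice the regression tests would be the existing suite around the changed area, re-run after every fix.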
Note: We will be covering coverage tools in more detail in later chapters, but
for now let’s discuss the fundamentals of how a code coverage tool works.
While doing testing on the actual product, the code coverage testing tool
is run simultaneously. While the testing is going on, the code coverage tool
monitors the executed statements of the source code. When the final testing
is completed we get a complete report of the pending statements and also
get the coverage percentage.
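A toy version of this mechanism can be written with Python's tracing hook; real coverage tools do the same bookkeeping far more efficiently. This sketch is our own illustration, not the book's.

```python
import sys

def run_with_coverage(func, *args):
    """Run func while recording which of its source lines execute,
    a toy version of what a code coverage tool does."""
    executed = set()
    def tracer(frame, event, arg):
        # Record line events belonging to the function under test only.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):
    if n >= 0:
        return "non-negative"
    return "negative"

covered = run_with_coverage(classify, 5)              # only one branch runs
covered_all = covered | run_with_coverage(classify, -5)  # both branches run
```

Comparing the two sets shows the "pending statements" the text mentions: the lines executed by one test but not the other.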
i.e., Phase 1, Phase 2, etc. You can baseline your software product after every
phase. In this way you will now be able to track the difference between
Phase 1 and Phase 2. Changes can be in various sections. For instance, the
requirement document (because some requirements changed), technical
(due to changes in the architecture), source code (source code changes),
test plan changes, and so on.
For example, consider the following figure which shows how an accounting
application underwent changes and was then baselined with each version.
When the accounting application was released it was released as ver 1.0 and
baselined. After some time new features were added and version 2.0 was
generated. This was again a logical end so we again baselined the application.
So now in case we want to trace back and see the changes from ver 2.0 to
ver 1.0 we can do so easily. After some time the accounting application went
through some defect removal, ver 3.0 was generated, and it was again
baselined, and so on.
The following figure depicts the various scenarios.
Figure 36 Baseline
Note: This answer varies from project to project and company to company. You
can tailor this answer according to your experience. This book will try to answer
the question from the authors’ viewpoint.
There are a minimum of four test plan documents needed in any software
project. But depending on the project and the team members’ agreement some
of the test plan documents can be omitted.
Central/Project test plan: The central test plan is one of the most important
communication channels for all project participants. This document can have
essentials such as resource utilization, testing strategies, estimation, risk,
priorities, and more.
Acceptance test plan: The acceptance test plan is mostly based on user
requirements and is used to verify whether the requirements are satisfied
according to customer needs. Acceptance test cases are like a green light for
the application and help to determine whether or not the application should
go into production.
System test plan: A system test plan is where all the main testing happens. This
testing, in addition to functionality testing, also includes load, performance, and
reliability tests.
Unit testing: Unit testing is done more on a developer level. In unit testing
we check the individual module in isolation. For instance, the developer can
check his sorting function in isolation, rather than checking in an integrated
fashion.
The following figure shows the interaction among all the project
test plans.
The following figure shows pictorially how test documents span across the
software development lifecycle. The following discusses the specific testing
The following are three important steps for doing analysis and design for
testing:
Test objectives: These are broad categories of things which need to be tested
in the application. For instance, in the following figure we have four broad
categories of test areas: polices, error checking, features, and speed.
Inventory: Inventory is a list of things to be tested for an objective. For
instance, the following figure shows that we have identified inventory such as
add new policy, which is tested for the policies objective. Change/add
address and delete customer are tested for the features objective.
Tracking matrix: Once we have identified our inventories we need to map
the inventory to test cases. Mapping of inventory to the test cases is called
calibration.
Figure 40 Calibration
The inventory tracking matrix gives us a quick global view of what is pending
and hence also helps us measure coverage of the application. The following
figure shows that the “delete a customer” inventory is not covered by any test
case, thus alerting us to what is not covered.
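In code form, a tracking matrix is just a mapping from inventory items to the test cases that cover them. The item names follow the figure's example, but the test-case IDs are made up.

```python
# Inventory item -> test cases covering it (IDs are hypothetical).
tracking_matrix = {
    "add new policy":     ["TC1", "TC2"],
    "change/add address": ["TC3"],
    "delete a customer":  [],  # no coverage yet
}

# Anything mapped to an empty list is pending, i.e., not covered.
uncovered = [item for item, tcs in tracking_matrix.items() if not tcs]
coverage_pct = 100 * (len(tracking_matrix) - len(uncovered)) / len(tracking_matrix)
```

The empty list against "delete a customer" is exactly the alert the figure gives: an inventory item with no test case mapped to it.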
Note: During the interview try to explain all of the above three steps because
that’s how testing is planned and designed in big companies. Inventory forms
the main backbone of software testing.
Normally black box test cases are written first and white box test cases later. In
order to write black box test cases we need the requirement document and the
design or project plan. All these documents are easily available at the start
of the project. White box test cases cannot be started in the initial phase of the
project because they need more architectural clarity which is not available at the
start of the project. So normally white box test cases are written after black box
test cases are written. Black box test cases do not require system understanding
but white box testing needs more structural understanding. And structural
understanding is clearer in the later part of the project, i.e., while executing or
designing. For black box testing you need only analyze from the functional
perspective which is easily available from a simple requirement document.
When we install the application at the end client it is very possible that on
the same PC other applications also exist. It is also very possible that those
applications share common DLLs, resources etc., with your application. There
is a huge chance in such situations that your changes can affect the cohabiting
software. So the best practice is after you install your application or after any
changes, tell other application owners to run a test cycle on their application.
Normally, the impact ratings for defects are classified into three types:
- Minor: Very low impact; does not affect operations on a large scale.
- Major: Affects operations on a very large scale.
- Critical: Brings the system to a halt and stops the show.
The IEEE Std. 829-1998 defines a test log as a chronological record of relevant
details about the execution of test cases. It’s a detailed view of activity and
events given in chronological manner. The following figure shows a test log
and is followed by a sample test log.
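A minimal test log can be modelled as a chronological list of entries. The fields below are a reasonable reading of IEEE 829's intent, not the standard's exact schema.

```python
from datetime import datetime, timezone

test_log = []

def log_event(test_case_id, description, outcome=None):
    """Append one chronological entry to the test log."""
    test_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test_case": test_case_id,
        "description": description,
        "outcome": outcome,
    })

# Events are appended in execution order, giving the chronological record.
log_event("TC-01", "execution started")
log_event("TC-01", "execution finished", outcome="pass")
```

Because entries are appended as events happen, reading the list top to bottom replays the test session in order.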
end customer are completed the project is finished. In the figure we have
the entry criteria as an estimation document and the exit criteria as a signed
document by the end client saying the software is delivered.
The following figure shows the typical flow in the SDLC which has six
main models. Developers can select a model for their project.
- Waterfall model
- Big bang model
- Phased model
- Iterative model
- Spiral model
- Incremental model
Waterfall Model
Let’s have a look at the Waterfall Model which is basically divided into two
subtypes: Big Bang waterfall model and the Phased waterfall model.
As the name suggests waterfall means flow of water which always goes
in one direction so when we say Waterfall model we expect that every phase/
stage is frozen.
In the Waterfall Big Bang model, it is assumed that all stages are frozen
which means it’s a perfect world. But in actual projects such processes are
impractical.
Iterative Model
The Iterative model was introduced because of problems occurring in the
Waterfall model.
Now let’s take a look at the Iterative model which also has two
subtypes:
Incremental Model
In this model work is divided into chunks like the Phased Waterfall model, but
the difference is that in the Incremental model one team can work on one or
many chunks, unlike in the Phased Waterfall model.
Spiral Model
This model uses a series of prototypes which refine our understanding of what
we are actually going to deliver. Plans are changed as required by each
refinement of the prototype. So every time the prototype is refined the whole
process cycle is repeated.
Evolutionary Model
In the Incremental and Spiral models the main problem is that for any change
made in between the SDLC we need to iterate a whole new cycle. For instance,
during the final (delivery) stage, if the customer demands a change we have to
iterate the whole cycle again, which means we need to update all the previous
stages (requirements, technical documents, source code, and test plans).
In the Evolutionary model, we divide software into small units which
can be delivered earlier to the customer’s end. In later stages we evolve the
software with new customer needs.
V-model
This type of model was developed by testers to emphasize the importance
of early testing. In this model testers are involved from the requirement
stage itself. The following diagram (V-model cycle diagram) shows how for
every stage some testing activity is done to ensure that the project is moving
forward as planned.
Unit Testing
Starting from the bottom the first test level is “Unit Testing.” It involves
checking that each feature specified in the “Component Design” has been
implemented in the component.
In theory, an independent tester should do this, but in practice the
developer usually does it, as they are the only people who understand how a
component works. The problem with a component is that it performs only a
small part of the functionality of a system, and it relies on cooperating with
other parts of the system, which may not have been built yet. To overcome this,
the developer either builds, or uses, special software to trick the component
into believing it is working in a fully functional system.
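The “special software” mentioned above is usually a stub or mock. A hypothetical sketch using Python's unittest.mock: the checkout component is tested in isolation even though its payment-gateway collaborator does not exist yet. The function names and values are invented for illustration.

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Component under test: charges the gateway and returns a receipt."""
    if cart_total <= 0:
        raise ValueError("cart total must be positive")
    txn_id = gateway.charge(cart_total)  # collaborator may not be built yet
    return {"amount": cart_total, "transaction": txn_id}

# The mock tricks checkout() into believing it runs in a full system.
fake_gateway = Mock()
fake_gateway.charge.return_value = "TXN-123"

receipt = checkout(49.99, fake_gateway)
```

The mock also records how it was called, so the unit test can verify the component's interaction with its (absent) collaborator, not just its return value.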
Integration Testing
As the components are constructed and tested they are linked together to
make sure they work with each other. It is a fact that two components that
have passed all their tests, when connected to each other, produce one new
component full of faults. These tests can be done by specialists, or by the
developers.
Integration testing is not focused on what the components are doing but on
how they communicate with each other, as specified in the “System Design.”
The “System Design” defines relationships between components.
The tests are organized to check all the interfaces, until all the components
have been built and interfaced to each other producing the whole system.
System Testing
Once the entire system has been built then it has to be tested against the
“System Specification” to see if it delivers the features required. It is still
developer focused, although specialist developers known as systems testers
are normally employed to do it.
In essence, system testing is not about checking the individual parts of
the design, but about checking the system as a whole. In fact, it is one giant
component.
System testing can involve a number of special types of tests used to see if
all the functional and non-functional requirements have been met. In addition,
there are many others, the need for which is dictated by how the system is
supposed to perform.
In the previous section we looked through all the models. But in actual
projects, hardly any one complete model can fulfill the entire project’s
requirements. In real projects, tailored models prove to be the best, because
they share features from the Waterfall, Iterative, Evolutionary models, etc., and
can fit into real-life projects. Tailored models are the most productive and
beneficial for many organizations. If it’s a pure testing project, then the V
model is the best.
When it comes to testing everyone in the world can be involved right from
the developer to the project manager to the customer. But below are different
types of team groups which can be present in a project.
Isolated test team: This is a special team of testers which do only testing.
The testing team is not related to any project. It’s like having a pool of testers
in an organization, which are picked up on demand by the project and after
completion again get pushed back to the pool. This approach is costly but the
most helpful because we have a different angle of thinking from a different
group, which is isolated from development.
Inside test team: In this approach we have a separate team which belongs to
the project. The project allocates a separate budget for testing and this testing
team works on this project only. The good side is that you have a dedicated
team, and because they are involved in the project they have strong knowledge
of it. The bad part is that you need to budget for them, which increases the
project cost.
QA/QC team: In this approach the quality team is involved in testing. The
good part is that the QA team is involved and a good quality of testing can be
expected. The bad part is that the QA and QC teams of any organization are
also involved with many other activities, which can hamper the testing quality
of the project.
Any value above 2000 or below 20 is invalid. In the following scenario
the tester has made four test cases:
Test cases 3 and 4 give the same outputs, so they lie in the same partition.
In short, we are doing redundant testing. Because both TC3 and TC4 fall in one
equivalence partition, we can prepare one test case testing one value
between the boundaries, thus eliminating redundant testing in the project.
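To make the idea concrete, here is a minimal Python sketch. The boundaries 20 and 2000 follow the range described above, but the validator itself is an assumed stand-in for the real system under test:

```python
# Sketch of equivalence partitioning and boundary values for the 20-2000
# range discussed above. The validator is an assumed stand-in for the real
# system under test.

def is_valid(value):
    """Accept values in the single valid partition [20, 2000]."""
    return 20 <= value <= 2000

# One representative value per partition replaces redundant cases like
# TC3 and TC4, which exercise the same partition:
representatives = {
    "invalid: below 20": 19,
    "valid: between 20 and 2000": 1000,
    "invalid: above 2000": 2001,
}

# Boundary values get their own cases, since defects cluster at the edges:
boundary_cases = [19, 20, 21, 1999, 2000, 2001]

for label, value in representatives.items():
    print(label, "->", is_valid(value))
```

Three representative cases plus the six boundary checks give the same coverage as dozens of values picked at random from the range.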
Now that we are clear about states and transitions, how do they help us in
testing? By using states and transitions we can identify test cases, so we can
identify test cases using either states or transitions. But if we use only one
entity, i.e., either states or transitions alone, it is very possible that we will
miss some scenarios. In order to get the maximum benefit we should use a
combination of states and transitions. The following figure shows that if we
use only states or only transitions in isolation it's possible that we will have
partial testing, but the combination of states and transitions can give us better
test coverage for an application.
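As a sketch of combining the two, the toy login flow below is an assumption; the point is that the transition table yields one test case per transition, and (state, event) pairs missing from the table become negative test cases:

```python
# Deriving test cases from a state-transition model. The login-flow states
# and events below are assumed purely for illustration.

transitions = {
    ("logged_out", "login_ok"):    "logged_in",
    ("logged_out", "login_fail"):  "warning_shown",
    ("warning_shown", "login_ok"): "logged_in",
    ("logged_in", "logout"):       "logged_out",
}

def next_state(state, event):
    """Return the next state, or None for an invalid transition."""
    return transitions.get((state, event))

# Transition coverage: one test case per arrow in the diagram.
test_cases = [(s, e, t) for (s, e), t in transitions.items()]
for start, event, expected in test_cases:
    assert next_state(start, event) == expected

# State coverage alone would miss, e.g., this invalid-event scenario:
assert next_state("logged_in", "login_ok") is None
print(len(test_cases), "transition test cases generated")
```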
(B) Can you explain monkey testing?
A negative test is when you put in an invalid input and receive an error.
A positive test is when you put in a valid input and expect some action to
be completed in accordance with the specification.
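A hedged sketch in Python: `parse_age` is a hypothetical function invented here to show one positive and one negative test against a simple specification:

```python
# One positive and one negative test for a hypothetical age validator.

def parse_age(text):
    """Specification: accept integers 0..130, reject everything else."""
    value = int(text)                  # non-numeric input raises ValueError
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

# Positive test: valid input, expect the specified action to complete.
assert parse_age("42") == 42

# Negative test: invalid input, expect an error rather than silent success.
try:
    parse_age("abc")
    raise AssertionError("negative test failed: no error was raised")
except ValueError:
    pass
print("positive and negative tests passed")
```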
Exploratory testing is also called ad hoc testing, but in reality it's not completely
ad hoc. Ad hoc testing is an unplanned, unstructured, maybe even impulsive
journey through the system with the intent of finding bugs. Exploratory
testing is simultaneous learning, test design, and test execution. In other
words, exploratory testing is any testing done to the extent that the tester
proactively controls the design of the tests as those tests are performed and
uses the information gained to design new and better tests.
Now let’s try to apply an orthogonal array in actual testing field. Let’s say
we have a scenario in which we need to test a mobile handset with different
plan types, terms, and sizes. Below are the different situations:
n Handset (Nokia, 3G and Orange).
n Plan type (4 x 400, 4 x 300, and 2 x 270).
Orthogonal arrays are very useful because most defects are pair-wise defects,
and with orthogonal arrays we can reduce redundancy to a huge extent.
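The pair-wise idea can be checked in code. Below, the three factors and their values are assumptions (the text only names handsets and plan types), and the nine rows are the standard L9 orthogonal array; the helper verifies that every pair of values from every pair of factors appears at least once:

```python
# Verifying that an L9 orthogonal array covers all value pairs with 9 runs
# instead of the 27 exhaustive combinations. Factor values are assumed.
from itertools import combinations, product

factors = [
    ["Nokia", "3G", "Orange"],          # handset
    ["4 x 400", "4 x 300", "2 x 270"],  # plan type
    ["12 mo", "18 mo", "24 mo"],        # term (assumed third factor)
]

# First three columns of the standard L9(3^4) orthogonal array, 0-indexed.
l9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]
suite = [tuple(factors[i][lvl] for i, lvl in enumerate(row)) for row in l9]

def covers_all_pairs(cases, factors):
    """True if every value pair from every factor pair appears in cases."""
    for i, j in combinations(range(len(factors)), 2):
        seen = {(case[i], case[j]) for case in cases}
        if len(seen) != len(factors[i]) * len(factors[j]):
            return False
    return True

full_size = len(list(product(*factors)))
print(full_size, len(suite), covers_all_pairs(suite, factors))  # 27 9 True
```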
As the name suggests, decision tables list all possible inputs and all possible
outputs. A general form of decision table is shown in the following figure.
Condition 1 through Condition N indicate various input conditions. Action 1
through Action N are actions that should be taken depending on various
input combinations. Each rule defines a unique combination of conditions
that results in the actions associated with that rule.
decision table. But just imagine that you have a huge number of possible inputs
and outputs. For such a scenario decision tables give you a better view.
The following is the decision table for the scenarios described above. In
the top part we have put the conditions and below are the actions which occur
as a result of the conditions. Read from the right, move to the left, and
then to the action. For instance, Married → Yes → then discount. The same
goes for the student condition. Using the decision table we can ensure, to a
good extent, that we do not skip any validation in a project.
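The married/student discount table above can be sketched directly as code; the rules follow the text, while the discount labels are assumptions:

```python
# The decision table as data: each rule is a unique combination of the
# two conditions mapped to an action. Discount labels are assumed.

RULES = {
    # (married, student): action
    (True,  True):  "both discounts",
    (True,  False): "married discount",
    (False, True):  "student discount",
    (False, False): "no discount",
}

def decide(married, student):
    """Look up the action for a combination of conditions."""
    return RULES[(married, student)]

# Because the table enumerates every combination, no validation is skipped.
assert len(RULES) == 2 ** 2
print(decide(True, False))   # married discount
```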
the process and the second branch shows a sample risk mitigation process
for an organization. For instance, the risk mitigation process defines what
steps any department should follow to mitigate a risk. The process is as
follows:
Below are some of the cost elements involved in the implementation process:
Figure 68 Model
Many companies reinvent the wheel rather than following time tested
models in the industry.
Figure 71 Tailoring
Figure 73 CMMI
There are two models in CMMI. The first is “staged,” in which the maturity
level organizes the process areas. The second is “continuous,” in which the
capability level organizes the process areas.
The following figure shows how process areas are grouped in both
models.
Let’s try to understand both the models in more detail. Before we move
ahead let’s discuss the basic structure of the CMMI process, goal, and practices.
A process area, as said previously, is a group of practices or activities performed
to achieve a specific objective. Every process area has specific as well as generic
goals that should be satisfied to achieve that objective. To achieve those goals we
need to follow certain practices. So again to achieve those specific goals we have
specific practices and to achieve generic goals we have generic practices.
In one of our previous questions we talked about implementation and
institutionalization. Implementation can be related to specific practices, while
institutionalization can be related to generic practices. This is the common
basic structure in both models: Process area → Specific/Generic goals →
Specific/Generic practices.
Now let’s try to understand model structures with both types of
representations. In a staged representation, we revolve around the maturity
level as shown in the following figure. All processes have to be at one maturity
level, while in a continuous representation we try to achieve capability levels in
those practices. The following diagram shows how continuous representation
revolves around capability. Continuous representation is used when the
organization wants to mature and perform in one specific process area only.
Let's say, for instance, that a non-IT organization wants to improve only its
supplier agreement process. That organization will then concentrate only on
SAM and try to achieve a good capability level in that process area.
The continuous model is the same as the staged model; only the arrangement
is a bit different. The continuous representation/model concentrates on the
actions or tasks to be completed within a process area. It focuses on maturing
the organization's ability to perform, control, and improve the performance
in that specific process area.
may not be stable and probably does not meet objectives such as quality, cost,
and schedule, but still the task can be done.
All 25 process areas in CMMI are classified into four major sections.
Process management
These process areas contain all project tasks related to defining, planning,
executing, implementing, monitoring, controlling, measuring, and improving
processes.
Project Management
Project management process areas cover the project management activities
related to planning, monitoring, and controlling the project.
Engineering
Engineering process areas were written using general engineering terminology
so that any technical discipline involved in the product development process
(e.g., software engineering or mechanical engineering) can use them for
process improvement.
Support
Support process areas address processes that are used in the context of
performing other processes. In general, the support process areas address
processes that are targeted toward the project and may address processes
that apply more generally to the organization. For example, process and
product quality assurance can be used with all the process areas to provide
an objective evaluation of the processes and work products described in all
the process areas.
The following diagram shows the classification and representation of the
process areas.
The following table defines all the abbreviations of the process areas.
issues that affect day to day routines. It has seven process areas as shown in
the figure below.
So, in short, the difference between Level 1 and Level 2 is the difference
between an immature and a mature organization.
Level 2 to Level 3
Now that in Level 2 good practices are observed at the project level, it is time
to move these good practices to the organization level so that everyone can
benefit from them. So the biggest difference between Level 2 and Level 3
is that the good practices from the projects are bubbled up to the organization
level. The organization's approach to doing business is documented. To reach
Maturity Level 3, Maturity Level 2 must first be achieved, with the 14 process
areas as shown in the given figure.
Level 3 to Level 4
Maturity Level 4 is all about numbers and statistics. All aspects of the project
are managed by numbers. All decisions are made by numbers. Product quality
and process are measured by numbers. So in Level 3 we say this is of good
quality; in Level 4 we say this is of good quality because the defect ratio is less
than 1%. There are two process areas in Level 4 as shown below. In order
to move to Level 4, you should have achieved all the PAs of Level 3 and also
the two process areas below.
Level 4 to Level 5
Level 5 is all about improvement as compared to Level 4. Level 5 concentrates
on improving the quality of the organization's processes by identifying variation,
looking at the root causes of the conditions, and incorporating improvements
to the process. Below are the two process areas in Level 5, as shown in the
figure. In order to reach Level 5, all Level 4 PAs should be satisfied. So the basic
difference between Level 4 and Level 5 is that in Level 4 we have already achieved
a good level of quality, and in Level 5 we are trying to improve that quality.
There are three different sources from which an appraiser can verify whether
an organization followed the process or not.
Instruments: An instrument is a survey or questionnaire provided to the
organization, project, or individuals before starting the assessment so that
beforehand the appraiser knows some basic details of the project.
Interview: An interview is a formal meeting with one or more members
of the organization in which they are asked questions and the appraiser
makes judgments based on those interviews. During the interview the
member represents some process area or role which he performs. For instance,
the appraiser may interview a tester or programmer, asking him indirectly what
metrics he has submitted to his project manager. From this the appraiser gets a
fair idea of the CMMI implementation in that organization.
Documents: A document is a written work or product which serves as evidence
that a process is followed. It can be hard copy, Word document, email, or any
type of written official proof.
The following figure is the pictorial view of the sources used to verify how
compliant the organization is with CMMI.
Figure 88 SCAMPI
First Strategy
Use Class B to initiate a process improvement plan. After that apply Class C
to check readiness for Class B or Class A. The following diagram shows this
strategy.
Second Strategy
Class C appraisal is used on a subset of an organization. From this we get an
aggregation of weakness across the organization. From this we can prepare a
process improvement plan. We can then apply a Class B appraisal to see if we
are ready for Class A appraisal. The following diagram shows the strategy.
Third Strategy
Class A is used to initiate an organization level process. The process
improvement plan is based on an identified weakness. Class B appraisal should
be performed after six months to see the readiness for the second Class A
appraisal rating. The following diagram shows this strategy.
Once the PII documents are filed we can rate whether the organization
is compliant or not. Below are the steps to be followed during the
SCAMPI:
n Gather documentation.
n Conduct interviews.
n Discover and document strengths and weaknesses.
n Communicate/present findings.
Note: This question will be asked to judge whether you have actually
implemented CMMI in a proper fashion in your organization. To answer this
question, we will be using SAM as the process area. But you can answer with
whatever process area you have implemented in your organization.
For the SAM process area there are two specific goals, SG1 and SG2, whose
practices need to be implemented to satisfy the process area. SAM helps us to
define our agreement with the supplier when procuring products for the company.
Let's see, in the next step, how we have mapped our existing process to the
SAM practices defined in CMMI.
send the complete delivery of all products. The product is accepted by the
organization by issuing the supplier a proper invoice. The invoice document
signifies that the product has been officially accepted by the organization. When
the product is installed in the organization, either someone from the supplier
side comes for a demo or a help brochure is shipped with the product.
The above explanation is from the perspective of how the organization
manages its transactions with the supplier. Now let's try to map how the above
process fits into the CMMI model. In the above diagram the circled descriptions
are process areas of CMMI.
(B) What are all the process areas, goals, and practices?
Note: No one is going to ask such a question, but they would like to know at
least the purpose of each KPA. Second, they would like to know what you did to
attain compliance in these process areas. For instance, if you say that you did an
organizational process, they would like to know how you did it. You can justify
it by saying that you made standard documents for coding standards which were
then followed at the organization level for reference. Normally everyone follows
a process; they just do not realize it. So try to map the KPAs to the processes that
you followed.
Each process area is defined by a set of goals and practices. There are two
categories of goals and practices: generic and specific. Generic goals and
practices are a part of every process area. Specific goals and practices are
specific to a given process area. A process area is satisfied when company
processes cover all of the generic and specific goals and practices for that
process area.
Generic goals and practices are a part of every process area. They include
the following:
GG 1 Achieve Specific Goals
GP 1.1 Perform Base Practices
GG 2 Institutionalize a Managed Process
GP 2.1 Establish an Organizational Policy
GP 2.2 Plan the Process
GP 2.3 Provide Resources
GP 2.4 Assign Responsibility
GP 2.5 Train People
GP 2.6 Manage Configurations
GP 2.7 Identify and Involve Relevant Stakeholders
GP 2.8 Monitor and Control the Process
GP 2.9 Objectively Evaluate Adherence
Process Areas
The CMMI contains 25 key process areas indicating the aspects of product
development that are to be covered by company processes.
Purpose
The purpose of Causal Analysis and Resolution (CAR) is to identify causes of
defects and other problems and take action to prevent them from occurring
in the future.
Purpose
The purpose of Configuration Management (CM) is to establish and maintain
the integrity of work products using configuration identification, configuration
control, configuration status accounting, and configuration audits.
Purpose
The purpose of Decision Analysis and Resolution (DAR) is to analyze
possible decisions using a formal evaluation process that evaluates identified
alternatives against established criteria.
Purpose
The purpose of Integrated Project Management (IPM) is to establish and
manage the project and the involvement of the relevant stakeholders according
to an integrated and defined process that is tailored from the organization’s
set of standard processes.
Purpose
The purpose of Integrated Supplier Management (ISM) is to proactively identify
sources of products that may be used to satisfy the project’s requirements and
to manage selected suppliers while maintaining a cooperative project-supplier
relationship.
Purpose
The purpose of Integrated Teaming (IT) is to form and sustain an integrated
team for the development of work products.
Purpose
The purpose of Measurement and Analysis (MA) is to develop and sustain
a measurement capability that is used to support management information
needs.
Purpose
The purpose of the Organizational Process Definition (OPD) is to establish
and maintain a usable set of organizational process assets.
Purpose
The purpose of Organizational Process Focus (OPF) is to plan and implement
organizational process improvement based on a thorough understanding of
the current strengths and weaknesses of the organization's processes and
process assets.
Purpose
The purpose of Organizational Process Performance (OPP) is to establish and
maintain a quantitative understanding of the performance of the organization’s
set of standard processes in support of quality and process-performance
objectives, and to provide the process performance data, baselines, and models
to quantitatively manage the organization’s projects.
Purpose
The purpose of Organizational Training (OT) is to develop the skills and
knowledge of people so that they can perform their roles effectively and
efficiently.
Purpose
The purpose of Product Integration (PI) is to assemble the product from the
product components, ensure that the product, as integrated, functions properly,
and deliver the product.
Purpose
The purpose of Project Monitoring and Control (PMC) is to provide an
understanding of the project’s progress so that appropriate corrective actions can
be taken when the project’s performance deviates significantly from the plan.
Purpose
The purpose of Process and Product Quality Assurance (PPQA) is to provide
staff and management with objective insight into processes and associated
work products.
Purpose
The purpose of the Quantitative Project Management (QPM) process area is
to quantitatively manage the project’s defined process to achieve the project’s
established quality and process-performance objectives.
Purpose
The purpose of Requirements Development (RD) is to produce and analyze
customer, product, and product-component requirements.
Purpose
The purpose of Requirements Management (REQM) is to manage the
requirements of the project’s products and product components and to
identify inconsistencies between those requirements and the project’s plans
and work products.
Purpose
The purpose of Risk Management (RSKM) is to identify potential problems
before they occur so that risk-handling activities can be planned and invoked
as needed across the life of the product or project to mitigate adverse impacts
on achieving objectives.
Purpose
The purpose of the Supplier Agreement Management (SAM) is to manage
the acquisition of products from suppliers for which there exists a formal
agreement.
Purpose
The purpose of the Technical Solution (TS) is to design, develop, and
implement solutions to requirements. Solutions, designs, and implementations
encompass products, product components, and product-related life-cycle
processes either alone or in appropriate combinations.
Purpose
The purpose of Validation (VAL) is to demonstrate that a product or
product component fulfills its intended use when placed in its intended
environment.
Verification (VER)
An engineering process area at Maturity Level 3.
Purpose
The purpose of Verification (VER) is to ensure that a selected work product
meets its specified requirements.
The main focus of Six Sigma is to reduce defects and variations in the processes.
DMAIC and DMADV are the models used in most Six Sigma initiatives.
DMADV is the model for designing processes while DMAIC is used for
improving the process.
The DMADV model includes the following five steps:
Six Sigma is not only about techniques, tools, and statistics, but also about
people. In Six Sigma there are five key players:
n Executive leaders
n Champions
n Master black belts
n Black belts
n Green belts
Variation is the basis of Six Sigma. It defines how much change is happening
in the output of a process. So if a process is improved, this should reduce
variation. In Six Sigma we identify variations in the process, control them, and
reduce or eliminate defects. Now let's discuss how we can measure variation.
There are four basic ways of measuring variation: Mean, Median, Mode, and
Range. Let's discuss each of these measures in more depth for better analysis.
Mean: In the mean measurement the variations are measured and compared
using averaging techniques. For instance, you can see from the following figure,
which shows two weekly measures, how many computers were manufactured.
We have tracked two weeks; one we have named Week 1 and the other Week
2. To calculate variation using the mean, we calculate the mean of Week 1 and
Week 2. You can see from the calculations in the following figure that we have
5.083 for Week 1 and 2.85 for Week 2. So we have a variation of 2.23.
Range: Range is nothing but the spread of values for a particular data
range. In short, it is the difference between the highest and lowest values in
a particular data range. For instance, you can see that for the recorded computer
data of Week 2 we have found the range by subtracting the lowest value
from the highest.
Mode: Mode is nothing but the most frequently occurring value in a data
range. For instance, in our computer manufacturing data, 4 is the most
frequently occurring value in Week 1 and 3 is the most frequently occurring
value in Week 2. So the variation is 1 between these data ranges.
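The book's figure with the daily counts is not reproduced here, so the two weekly series below are assumed data; the three measures themselves follow the text:

```python
# Measuring variation with mean, range, and mode. The daily computer counts
# are assumed, since the book's figure is not reproduced here.
from statistics import mean, mode

week1 = [4, 6, 4, 7, 5, 4, 6]
week2 = [3, 2, 3, 4, 3, 2, 3]

mean_variation = mean(week1) - mean(week2)   # difference of the averages
range_week2 = max(week2) - min(week2)        # highest minus lowest value
mode_variation = mode(week1) - mode(week2)   # 4 vs. 3, a variation of 1

print(round(mean_variation, 2), range_week2, mode_variation)
```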
The first step is to calculate the mean. This can be calculated by adding up
all the observed values and dividing by the number of observed values.
The second step is to subtract the mean from each observation, square
the results, and then sum them. Because we square them we will not get
negative values. In the final step we divide this sum by the number of
observations and take the square root, which gives the standard deviation.
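The steps above translate directly to Python; the sample data is an assumption:

```python
# Standard deviation following the steps above (population form, i.e.,
# dividing by the number of observations). The data values are assumed.
import math

values = [4, 6, 4, 7, 5, 4, 6]

# Step 1: mean = sum of observations / number of observations.
m = sum(values) / len(values)

# Step 2: subtract the mean from each observation, square, and sum.
# Squaring guarantees no negative terms.
squared_sum = sum((v - m) ** 2 for v in values)

# Step 3: divide by the number of observations and take the square root.
std_dev = math.sqrt(squared_sum / len(values))
print(round(std_dev, 3))
```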
There are situations where we need to analyze what caused a failure or
problem in a project. The fish bone or Ishikawa diagram is one important
concept which can help you find the root cause of the problem. The fish bone
diagram was conceptualized by Kaoru Ishikawa, so in honor of its inventor it
is also named the Ishikawa diagram. Inputs for a fish bone diagram come
from discussion and brainstorming with the people involved in the project. The
following figure shows the structure of the Ishikawa diagram.
The main bone is the problem which we need to address to learn what
caused the failure. For instance, the following fish bone was constructed to find
what caused the project failure. To find the cause we have taken four main
bones as inputs: Finance, Process, People, and Tools. For instance, on the
People front: there were many resignations → this was caused by a lack of
job satisfaction → this was caused by the project being a maintenance
project. In the same way, causes are analyzed on the Tools front: no tools
were used in the project → because no resource had enough knowledge of
them → this happened because of a lack of planning. On the Process front:
the process was ad hoc → this was because of tight deadlines → this was
caused because marketing people over-promised and did not negotiate
properly with the end customer.
Now once the diagram is drawn the end bones of the fish bone signify
the main cause of project failure. From the following diagram here’s a list of
causes:
The following are three simple tables which show the number of defects
SDLC phase-wise, module-wise and developer-wise.
This is one of the most effective measures. The number of defects found in
production is recorded. The only issue with this measure is that the software can
have latent and masked defects, which can give us a wrong value regarding
software quality.
with defects and then see if the defects are found or not. So, for instance, if we
have injected 100 defects, we try to get three values: first, how many seeded
defects were discovered; second, how many were not discovered; and third, how
many new (unseeded) defects were discovered. By using defect seeding we can
predict the number of defects remaining in the system.
Let’s discuss the concept of defect seeding by doing some detailed
calculations and also try to understand how we can predict the number of
defects remaining in a system. The following is the calculation used:
1. First, calculate the seed ratio using the following formula, i.e., the number
of seeded bugs found divided by the total number of seeded bugs.
2. After that, calculate the total number of defects by using the formula
(number of defects found divided by the seed ratio).
3. Finally, we can get the estimated defects remaining by using the formula
(total number of defects from Step 2 minus the number of defects actually
found).
The following figure shows a sample with the step-by-step calculation.
You can see that first we calculate the seed ratio, then the total number of
defects, and finally, we get the estimated defects.
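The three-step calculation can be sketched as follows; the counts (100 seeded, 80 rediscovered, 40 genuine defects found) are assumptions standing in for the book's figure:

```python
# Defect-seeding arithmetic following the three steps above. The counts
# are assumed in place of the book's worked figure.

seeded_total = 100    # defects deliberately injected into the system
seeded_found = 80     # seeded defects rediscovered by testing
real_found = 40       # genuine (unseeded) defects discovered

# Step 1: seed ratio = seeded defects found / total seeded defects.
seed_ratio = seeded_found / seeded_total            # 0.8

# Step 2: estimated total defects = real defects found / seed ratio.
estimated_total = real_found / seed_ratio           # 50.0

# Step 3: estimated defects remaining = estimated total - real defects found.
remaining = estimated_total - real_found            # 10.0
print(seed_ratio, estimated_total, remaining)
```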
But the success of DRE depends on several factors. The following are
some of them:
particular phase. For instance, in the requirement phase 100 defects were
present, but 20 defects were removed in the requirement phase due to a
review. So if 20 defects are removed then 80 defects get carried forward to the
next phase (design), and so on.
First, let’s calculate simple DRE of the above diagram. DRE will be the
total bugs found in testing divided by the total bugs found in testing plus the
total bugs found by the user, that is, during acceptance testing. So the following
diagram gives the DRE for the those values.
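The simple DRE formula can be sketched as a one-liner; the sample counts are assumptions:

```python
# Defect removal efficiency: bugs found in testing divided by bugs found
# in testing plus bugs found by the user. Sample counts are assumed.

def dre(found_in_testing, found_by_user):
    return found_in_testing / (found_in_testing + found_by_user)

# e.g., 90 bugs caught before release and 10 found in acceptance testing:
print(dre(90, 10))   # 0.9
```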
Defect age is also called phase age, or phage. One of the most important things
to remember in testing is that the later we find a defect, the more it costs to fix.
Defect age and defect spoilage metrics work on the same fundamental,
i.e., how late you found the defect. So the first thing we need to define is
the scale of defect age according to phases. For instance, the following
table defines the scale according to phases. So, for instance, requirement
defects, if found in the design phase, have a scale of 1, and the same defect,
if propagated until the production phase, goes up to a scale of 4.
Once the scale is decided we can find the defect spoilage. Defect
spoilage is the number of defects from previous phases multiplied by the scale.
For instance, in the following figure we have found 8 defects in the design phase,
of which 4 defects propagated from the requirement phase. So we
multiply the 4 defects by the scale defined in the previous table, and we get
the value 4. In the same fashion we calculate for all the phases. The following
is the spoilage formula: it is the ratio of the sum of defects passed from
previous phases, each multiplied by the scale of the phase in which it was
discovered, divided by the total number of defects. For instance, the first row
shows that the total number of defects is 27 and the sum of passed-on defects
multiplied by their factors is 8 (4 × 1 = 4 and 2 × 2 = 4). In this way we
calculate for all phases and finally the total.
The optimal value is 1. A lower value of spoilage indicates a more effective
defect discovery process.
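The spoilage arithmetic from the example reads as follows in code; any breakdown beyond the numbers quoted in the text (4 defects aged one phase, 2 aged two phases, 27 total) is assumed:

```python
# Defect spoilage for one row of the example: defects passed on from
# earlier phases, weighted by the defect-age scale of the phase in which
# they were discovered, divided by the total number of defects.

# (count of passed-on defects, age scale at the discovering phase)
passed_on = [(4, 1), (2, 2)]   # 4 defects aged 1 phase, 2 defects aged 2
total_defects = 27

weighted = sum(count * scale for count, scale in passed_on)   # 4 + 4 = 8
spoilage = weighted / total_defects
print(weighted, round(spoilage, 2))   # 8 0.3
```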
Automation is the integration of testing tools into the test environment in such
a manner that test execution, logging, and comparison of results are done
with little human intervention. A testing tool is a software application which
helps automate the testing process. But a testing tool is not the complete
answer for automation. One of the huge mistakes made in test automation is
automating the wrong things during development. Many testers learn the hard
way that not everything can be automated. The best candidates for automation
are repetitive tasks. So some companies first start with manual testing and then
see which tests are the most repetitive, and only those are then automated.
As a rule of thumb do not try to automate:
All repetitive tasks which are frequently used should be automated. For
instance, regression tests are prime candidates for automation because they’re
typically executed many times. Smoke, load, and performance tests are other
examples of repetitive tasks that are suitable for automation. White box testing
can also be automated using various unit testing tools. Code coverage can also
be a good candidate for automation. The following figure shows, in general,
the type of tests which can be automated.
Note: For this book we are using AutomatedQA's TestComplete as the tool for
test automation, so we will answer this question from the point of view of the
AutomatedQA tool. You can install the AutomatedQA tool and practice for
yourself to see how it really works.
Note: To answer this question in detail we have used the FileSearch application.
You can experiment with any other application installed on your system, such as
a messenger or office application.
Let’s go step by step to learn to use the AutomatedQA tool to automate our
testing process. First, start the tool by clicking all programs ‡ AutomatedQA
‡ TestComplete 5. Once the tool is started you will get a screen as shown
here. We first need to create a new project by using the New Project menu
as shown in the following figure.
After clicking on the new project we will be prompted for what kind of
testing we are looking at, i.e., load testing, general testing, etc. For now, we
will select only the General-Purpose Test project. At this moment you can also
specify the project name, location, and the language for scripting (select
VBScript for now).
Once the project name and path are given you will be prompted
with a screen as shown here. These are the project items which need
to be included in your project depending on the testing type. Because
we are currently doing a Windows application test we need to select the
project items as shown in the figure. Please note that the Events item must
be selected.
Once you have clicked Finish you will get the TestComplete Project
Explorer as shown here. The TestComplete Project Explorer is divided into
three main parts: Events, Scripts, and TestedApps. Script is where all the
programming logic resides. In TestedApps we add the applications that we
want to test. So let's first add the application to TestedApps.
You will then be prompted with a screen as shown here. Browse to your
application EXE file and add it to the TestedApps folder.
Once the recording toolbar appears, right click on the added application
and run your test. In this scenario you can see the Windows FileSearch
application running. Here we have recorded a complete test in which we
gave the folder name and keyword, and then checked whether we got
proper results. The application you are testing may be different, so
your steps may vary.
Once the test is complete you can stop the recording using the button on
the recording toolbar. Once you stop, the recording tool will generate a script of
all the actions performed, as shown in the figure. You can view the programming
script as shown here.
Once the script is recorded you can run it by right clicking and selecting
Run. When you run it, the tool will play back all the test steps which you
recorded.
If everything goes right you can see the test log as shown here which
signifies that your script has run successfully.
So once the tool captures the request and response, we just need to
multiply the request and response by the number of virtual users. Virtual users
are logical users which simulate actual physical users by sending the same
requests and responses. Doing load testing with 10,000 physical users on an
application is practically impossible, but by using the load testing tool you
only need to create 10,000 virtual users.
Note: As said previously we will be using the AutomatedQA tool for automation
in this book. So let’s try to answer this question from the same perspective. You
can get the tool from the CD provided with the book.
After that select HTTP Load Testing from the project types.
Once you click “OK” you will get the different project items which you need
for the project. For load testing select only three, i.e., Events, HTTP Load
Testing, and Script, as shown here.
This project has the following items: Stations, Tasks, Tests, and Scripts.
Stations define how many users the load test will be run with. Tasks hold
the captured requests and responses. Tests and Scripts hold the script which
is generated when we record the automated test.
You need to specify the number of virtual users, tasks, and the browser
type such as Internet Explorer, Opera, etc.
Figure 146 Assign the number of virtual users and the browser
As said previously, the basic idea in load testing is to record the requests
and responses. That can be done by using the recording toolbar and clicking
the icon shown.
Once you click on the icon you need to enter the task name for it.
In order to record the requests and responses, the tool changes the proxy
settings of the browser. As the screen here shows, just click Yes and let the
next screen change the proxy settings.
Once the setting is changed you can start your browser and make some
requests. Once that is done, click the stop button to stop the recording.
The tool generates a script for the recorded task. You can see the script
and the generated code in the following figure. To view the code, double
click the test script (here we have named it Test2).
Right click on the task and run it and you will see a summary report as
shown in the figure.
OR
[This question is left to the user. Please install the tool and try for yourself.]
Chapter 8 Testing Estimation
Note: Below we have listed the most used estimation methodologies in testing.
As this is an interview question book we limit ourselves to TPA which is the most
preferred estimation methodology for black box testing.
TPA is a technique used to estimate test efforts for black box testing. Inputs
for TPA are the counts derived from function points (function points will be
discussed in more detail in the next sections).
Note: In the following section we will look into how to estimate function
points.
Note: It’s rare that someone will ask you to give the full definition of function
points. They will rather ask about specific sections such as GSC, ILF, etc. The
main interest of the interviewer will be how you use the function point value in
TPA analysis. Function point analysis is mainly done by the development team
so from a testing perspective you only need to get the function point value and
then use TPA to get the black box testing estimates.
Note: This document contains material which has been extracted from the
IFPUG Counting Practices Manual. It is reproduced in this document with the
permission of IFPUG.
Testing Estimation 149
Note: The best way to understand any complicated system is to break it down
into smaller subsystems and try to understand those smaller subsystems first.
In function point analysis you break a complicated system into smaller pieces,
estimate those smaller pieces, and then total up all the subsystem estimates to
come up with a final estimate.
Application Boundary
The first step in FPA is to define the boundary. There are two major types
of boundaries:
OR
(I) Can you explain FTR, ILF, EIF, EI, EO, EQ, and GSC?
- These files are logically related data from the user's point of view.
- EIFs reside in the external application boundary.
- EIFs are used only for reference purposes and are not maintained by internal applications.
- EIFs are maintained by external applications.
Please note the whole database is one supplier ILF, as all of it belongs to one
logical section. The RET quantifies the relationship complexity of ILFs and EIFs.
- EIs may maintain the ILF of the application, but it's not a compulsory rule.
- Example: A calculator application does not maintain any data, but the calculator screen is still counted as an EI.
- Most of the time user screens will be EIs, but again it's not a hard and fast rule. Example: An import batch process running from the command line has no screen, but should still be counted as an EI because it passes data from the external application boundary to the internal application boundary.
Note: There are no hard and fast rules that only simple reports are EQs. Simple
view functionality can also be counted as an EQ.
The major difference between an EO and an EQ is that an EO contains
derived or calculated data; both send data across the application boundary.
Data Communications
How many communication facilities are there to aid in the transfer or exchange
of information with the application or system?
Rating Description
0 Application uses pure batch processing or a stand-alone PC.
1 Application uses batch processing but has remote data
entry or remote printing.
2 Application uses batch processing but has remote data
entry and remote printing.
3 Application includes online data collection or TP
(Teleprocessing) front-end to a batch process or query
system.
4 Application is more than a front-end, but supports only one
type of TP communications protocol.
5 Application is more than a front-end, and supports more
than one type of TP communications protocol.
Distributed Data Processing
How are distributed data and processing functions handled?
Rating Description
0 Application does not aid the transfer of data or processing functions
between components of the system.
1 Application prepares data for end-user processing on another
component of the system such as PC spreadsheets or PC DBMS.
2 Data is prepared for transfer, then is transferred and processed on
another component of the system (not for end-user processing).
3 Distributed processing and data transfer are online and in one direction
only.
4 Distributed processing and data transfer are online and in both
directions.
5 Processing functions are dynamically performed on the most appropriate
component of the system.
Performance
Did the user require response time or throughput?
Rating Description
0 No special performance requirements were stated by the user.
1 Performance and design requirements were stated and reviewed but no
special actions were required.
2 Response time or throughput is critical during peak hours. No special design
for CPU utilization was required. Processing deadline is for the next business
day.
3 Response time or throughput is critical during all business hours. No special
design for CPU utilization was required. Processing deadline requirements
with interfacing systems are constraining.
4 In addition, stated user performance requirements are stringent enough to
require performance analysis tasks in the design phase.
5 In addition, performance analysis tools were used in the design,
development, and/or implementation phases to meet the stated user
performance requirements.
Table 7 Performance
Heavily Used Configuration
How heavily used is the current hardware platform where the application
will be executed?
Rating Description
0 No explicit or implicit operational restrictions are included.
1 Operational restrictions do exist, but are less restrictive than a typical
application. No special effort is needed to meet the restrictions.
2 Some security or timing considerations are included.
3 Specific processor requirement for a specific piece of the application is
included.
4 Stated operation restrictions require special constraints on the application
in the central processor or a dedicated processor.
5 In addition, there are special constraints on the application in the
distributed components of the system.
Transaction Rate
How frequently are transactions executed; daily, weekly, monthly, etc.?
Rating Description
0 No peak transaction period is anticipated.
1 Peak transaction period (e.g., monthly, quarterly, seasonally, annually) is
anticipated.
2 Weekly peak transaction period is anticipated.
3 Daily peak transaction period is anticipated.
4 High transaction rate(s) stated by the user in the application requirements
or service-level agreements are high enough to require performance analysis
tasks in the design phase.
5 High transaction rate(s) stated by the user in the application requirements
or service-level agreements are high enough to require performance analysis
tasks and, in addition, require the use of performance analysis tools in the
design, development, and/or installation phases.
Online Data Entry
What percentage of the information is entered online?
Rating Description
0 All transactions are processed in batch mode.
1 1% to 7% of transactions are interactive data entry.
2 8% to 15% of transactions are interactive data entry.
3 16% to 23% of transactions are interactive data entry.
4 24% to 30% of transactions are interactive data entry.
5 More than 30% of transactions are interactive data entry.
End-user Efficiency
Was the application designed for end-user efficiency? There are seven end-
user efficiency factors which govern how this point is rated.
160 Software Testing Interview Questions
Rating Description
0 None of the above.
1 One to three of the above.
2 Four to five of the above.
3 Six or more of the above, but there are no specific user requirements related
to efficiency.
4 Six or more of the above and stated requirements for end-user efficiency are
strong enough to require design tasks for human factors to be included (for
example, minimize keystrokes, maximize defaults, use of templates).
5 Six or more of the above and stated requirements for end-user efficiency are
strong enough to require use of special tools and processes to demonstrate
that the objectives have been achieved.
Online Update
How many ILFs are updated by online transactions?
Rating Description
0 None of the above.
1 Online update of one to three control files is included. Volume of updating
is slow and recovery is easy.
2 Online update of four or more control files is included. Volume of updating
is low and recovery is easy.
3 Online update of major internal logical files is included.
4 In addition, protection against data loss is essential and has been specially
designed and programmed into the system.
5 In addition, high volumes bring cost considerations into the recovery
process. Highly automated recovery procedures with minimum operator
intervention are included.
Complex Processing
Does the application have extensive logical or mathematical processing?
Rating Description
0 None of the above.
1 Any one of the above.
2 Any two of the above.
3 Any three of the above.
4 Any four of the above.
5 All five of the above.
Reusability
Was the application developed to meet one or many users' needs?
Rating Description
0 No reusable code.
1 Reusable code is used within the application.
2 Less than 10% of the application considers more than one user’s needs.
3 Ten percent or more of the application considers more than one user’s
needs.
4 The application was specifically packaged and/or documented to ease
re-use, and the application is customized by the user at a source-code level.
5 The application was specifically packaged and/or documented to ease
re-use, and the application is customized for use by means of user
parameter maintenance.
Table 16 Reusability
Installation Ease
How difficult is conversion and installation?
Testing Estimation 163
Rating Description
0 No special considerations were stated by the user, and no special setup is
required for installation.
1 No special considerations were stated by the user but special setup is
required for installation.
2 Conversion and installation requirements were stated by the user and
conversion and installation guides were provided and tested. The impact of
conversion on the project is not considered to be important.
3 Conversion and installation requirements were stated by the user, and
conversion and installation guides were provided and tested. The impact of
conversion on the project is considered to be important.
4 In addition to 2 above, automated conversion and installation tools were
provided and tested.
5 In addition to 3 above, automated conversion and installation tools were
provided and tested.
Operational Ease
How effective and/or automated are start-up, back-up, and recovery
procedures?
Rating Description
0 No special operational considerations other than the normal back-up
procedures were stated by the user.
1–4 One, some, or all of the following items apply to the application. Select
all that apply. Each item has a point value of one, except where noted
otherwise.
Effective start-up, back-up, and recovery processes were provided, but
operator intervention is required.
Effective start-up, back-up, and recovery processes were provided, and no
operator intervention is required (count as two items).
The application minimizes the need for tape mounts.
The application minimizes the need for paper handling.
Rating Description
5 The application is designed for unattended operation. Unattended
operation means no operator intervention is required to operate the system
other than to start up or shut down the application.
Automatic error recovery is a feature of the application.
Multiple Sites
Was the application specifically designed, developed, and supported to be
installed at multiple sites for multiple organizations?
Rating Description
0 User requirements do not consider the needs of more than one user/
installation site.
1 Needs of multiple sites were considered in the design, and the application
is designed to operate only under identical hardware and software
environments.
2 Needs of multiple sites were considered in the design, and the application
is designed to operate only under similar hardware and/or software
environments.
3 Needs of multiple sites were considered in the design, and the application
is designed to operate under different hardware and/or software
environments.
4 Documentation and support plans are provided and tested to support the
application at multiple sites and the application is as described by 1 or 2.
5 Documentation and support plans are provided and tested to support the
application at multiple sites and the application is as described by 3.
Facilitate Change
Was the application specifically designed, developed, and supported to
facilitate change?
Rating Description
0 None of the above.
1 Any one of the above.
2 Any two of the above.
3 Any three of the above.
4 Any four of the above.
5 All five of the above.
All of the above GSCs are rated from 0 to 5. Then the VAF is calculated
from the equation below:
VAF = 0.65 + (sum of all GSC factors)/100
Note: GSC has not been widely accepted in the software industry. Many software
companies use unadjusted function points rather than adjusted. ISO has also
removed the GSC section from its books and only kept unadjusted function
points as the base for measurement.
The following are the look-up tables which will be referred to during
counting.
EI Rating Table
Data Elements
This table says that in any EI (External Input), if your DET (Data
Element) count and FTR (File Type Referenced) count fall within these limits,
then this is the FP (Function Point) value. For example, if your DET count
exceeds 15 and the FTR count is greater than 2, then the function point count
is 6. The following tables show the same kind of limits. These tables should
be in front of us when we are doing function point counting. The best way is
to put these values in Excel with a formula so that you only have to enter the
quantities in the appropriate sections to get the final value.
EO Rating Table
Data Elements
EQ Rating Table
Data Elements
ILF Rating Table
Data Elements
RET          1 to 19    20 to 50    51 or more
1            7          7           10
2 to 5       7          10          15
6 or more    10         15          15
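The Excel suggestion above can equally be sketched in code. The matrices below follow the standard IFPUG complexity weights (EI: 3/4/6; ILF: 7/10/15), which match the figures quoted in the text (an EI with DET > 15 and FTR > 2 counts as 6 FPs; the ILF table tops out at 15), but the exact band boundaries are assumptions to verify against the book's own tables:

```python
# Hedged sketch: rating-table look-ups for EI and ILF counts.

def ei_function_points(det: int, ftr: int) -> int:
    """FP value of one External Input from its DET and FTR counts."""
    col = 0 if det <= 4 else 1 if det <= 15 else 2   # DET bands: 1-4, 5-15, >15
    row = 0 if ftr <= 1 else 1 if ftr == 2 else 2    # FTR bands: 0-1, 2, >2
    matrix = [[3, 3, 4],   # FTR 0-1
              [3, 4, 6],   # FTR 2
              [4, 6, 6]]   # FTR > 2
    return matrix[row][col]

def ilf_function_points(det: int, ret: int) -> int:
    """FP value of one Internal Logical File, per the table above."""
    col = 0 if det <= 19 else 1 if det <= 50 else 2  # DET bands: 1-19, 20-50, 51+
    row = 0 if ret == 1 else 1 if ret <= 5 else 2    # RET bands: 1, 2-5, 6+
    matrix = [[7, 7, 10],    # 1 RET
              [7, 10, 15],   # 2-5 RETs
              [10, 15, 15]]  # 6 or more RETs
    return matrix[row][col]

print(ei_function_points(16, 3))   # the text's example: 6 FPs
print(ilf_function_points(25, 3))  # 10 FPs
```

Keeping the look-ups as functions mirrors the Excel idea: enter the quantities and read off the final value.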
2. Then put rating values 0 to 5 for all 14 GSCs. Add up all 14 GSC ratings
to get the total VAF. The formula is VAF = 0.65 + (sum of all GSC
factors)/100.
3. Finally, calculate the adjusted function points. Formula: Total
function points = VAF * unadjusted function points.
4. Estimate how many function points you will cover per day. This is
also called the “performance factor.”
5. On the basis of the performance factor, you can calculate man days.
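Steps 2 through 5 can be sketched end to end. The GSC ratings, unadjusted FP count, and performance factor below are illustrative numbers, not figures from the book:

```python
# Illustrative walk-through of steps 2-5 (all inputs are made up).
gsc_ratings = [2, 1, 3, 0, 1, 2, 1, 0, 3, 1, 2, 1, 0, 1]  # 14 GSCs, each 0-5
assert len(gsc_ratings) == 14

vaf = 0.65 + sum(gsc_ratings) / 100          # step 2: VAF = 0.65 + (sum/100)
unadjusted_fp = 100                          # from the ILF/EIF/EO/EQ/EI counts
adjusted_fp = vaf * unadjusted_fp            # step 3: adjusted function points
performance_factor = 3                       # step 4: FPs covered per day
man_days = adjusted_fp / performance_factor  # step 5
print(round(man_days, 1))                    # total man days
```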
Let’s try to implement these details in a sample customer project.
ILF Customer
Description
The credit card information referenced is an EIF. Note this file is only
referenced for the credit card check. 1 1
There's only one textbox, the credit card number, and hence one DET
is put in the side column, and the RET is 0. Looking at the rating table,
the total FP is 5.
So according to the above rating table, total function points = 5.
Description
There are a total of 9 DETs: all the add and update buttons, even the
credit check button, the address list box, the active check box, and all the
text boxes. There are 3 FTRs: one is the address, the second is the credit
card information, and the third is the customer itself. 9 3
So according to the above rating table, total function points = 6.
EI Update Customer
The credit card check process can be complex as the credit card API's
complexity is still not known. Credit card information crosses from the credit
card system to the customer system.
So now let’s add the total FPs from the previous tables:
Table 33 GSC
This factor affects the whole FP count, so be very careful with it. Now,
calculating the adjusted FPs = VAF * total unadjusted FPs, we know that
the complete FP count for the customer GUI is 27 FPs. Calculating the
efficiency factor, we say that we will complete 3 FPs per day, that is, 9
working days. So the whole customer GUI takes 9 working days (note: do
not count Saturday and Sunday as working days).
The above sample shows the distribution of effort across the various phases.
But note that function points, or any other estimation methodology, only give
you the total execution estimate, so in the above distribution we have given
coding 100%. As previously said, it is up to the project manager to change this
according to the scenario. From the above function point estimation the
estimate is 7 days. Let's try to divide it across all phases.
The table shows the division of project man days across the phases. Now
let's put down the final quotation. But first, a small comment about test
cases.
The total number of test cases = (function points) raised to the power of 1.2.
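For the 27 FPs counted above for the customer GUI, this rule of thumb gives roughly 52 test cases:

```python
function_points = 27                 # adjusted FPs for the customer GUI
test_cases = function_points ** 1.2  # test cases = FP ^ 1.2
print(round(test_cases))             # about 52 test cases
```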
Final Quotation
One programmer will work on the project for $1,000 a month. So his 10.6 days
of salary comes to around $318.00. The following quotation format is a simple
FP GSC
78 0
84 5
90 10
96 15
102 20
108 25
114 30
120 35
126 40
132 45
138 50
144 55
150 60
156 65
162 70
The following are the observations from the table and plot:
Readers must be wondering: why 0.65? There are 14 GSC factors, each
rated from zero to five, so the maximum GSC total is 70 and the maximum
VAF = 0.65 + (70/100) = 1.35. For the VAF to have no effect, i.e., for FP to
equal UAFP, the VAF should be one. The VAF is one when the GSC total is
35, i.e., half of 70: the GSC contribution is then 0.35, and the constant 0.65
completes the factor of one.
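The FP column of the table above is consistent with an unadjusted count of 120 FPs (120 * 0.65 = 78 at GSC 0, and 120 * 1.35 = 162 at GSC 70). That base value is an inference, not stated in the text; assuming it, the whole table can be regenerated:

```python
# Regenerate the FP vs. GSC table, assuming 120 unadjusted FPs (inferred).
uafp = 120
for gsc in range(0, 71, 5):
    vaf = 0.65 + gsc / 100          # VAF = 0.65 + (GSC total)/100
    print(f"{round(uafp * vaf):>3} {gsc}")
```

Note that the FP column crosses 120, i.e. the unadjusted count, exactly at GSC 35, which is the "VAF equals one" point discussed above.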
But the following is the main problem related to the GSCs: the GSCs
are applied across all the FPs even when some GSCs do not apply to the
whole function point count. Here's an example to demonstrate the problem.
Let's take the 11th GSC factor, “installation ease.” The project is 100
UAFP, and no installation considerations were previously given by the
client, so the 11th factor is zero.
So VAF = 0.65 + (23/100) = 0.88, so the FPs = 100 * 0.88 = 88. The
difference is only 5 FPs, which is in no way a proper effort estimate. You
cannot build an auto-update feature for a software version in 5 function points.
Just think about downloading the new version, deleting the old version,
updating the databases, structure changes, etc. That's the reason GSCs are not
accepted in the software industry. It is best to baseline your UAFPs.
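The arithmetic behind this example can be checked directly. Working backward from the quoted 0.88 VAF, the other 13 GSCs must sum to 18 (an inference from 23 - 5); rating installation ease at 5 instead of 0 then moves the estimate by just 5 FPs:

```python
uafp = 100        # unadjusted FPs, as in the example
other_gscs = 18   # sum of the other 13 GSC ratings (inferred from 23 - 5)

fp_without = uafp * (0.65 + other_gscs / 100)       # installation ease = 0
fp_with = uafp * (0.65 + (other_gscs + 5) / 100)    # installation ease = 5
print(round(fp_without), round(fp_with))            # 83 vs. 88 FPs
print(round(fp_with - fp_without, 1))               # only 5.0 FPs apart
```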
ADD: This is where new function points are added. This value is obtained by
counting all new EPs (elementary processes) given in a change request.
CHGA: Function points which are affected due to the CR. This value is
obtained by counting all the DETs, FTRs, ILFs, EIs, EOs, and EQs which are
affected. Do not count elements that are not affected.
VAFA: This is the VAF which results from the CR. In the example given
previously, a desktop application was changed to a web application, so the
GSC factors are affected.
DELFP: When a CR removes some functionality, this value is counted. It's
rare that the customer removes functionality, but if they ever do the estimator
has to take note of it by counting the deleted elementary processes.
VAFB: Again, removal affects the value adjustment factor.
Once we are through with calculating enhanced function points, it is
time to count the total function points of the application. The formula is as
follows:
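The formula itself is not reproduced in this excerpt. The standard IFPUG enhancement formula built from the terms above is, as an assumption to be checked against the book: EFP = [(ADD + CHGA) * VAFA] + (DELFP * VAFB). In code:

```python
# Hedged sketch of the standard IFPUG enhancement-project formula
# (an assumption -- the book's own formula is not shown in this excerpt).

def enhancement_fp(add: float, chga: float, vafa: float,
                   delfp: float, vafb: float) -> float:
    """EFP = [(ADD + CHGA) * VAFA] + (DELFP * VAFB)."""
    return (add + chga) * vafa + delfp * vafb

# Illustrative numbers only: 10 added FPs, 5 changed, 2 deleted.
print(enhancement_fp(add=10, chga=5, vafa=1.05, delfp=2, vafb=0.95))
```

Note that added and changed points are scaled by the new VAF (VAFA), while deleted points are scaled by the VAF before the change request (VAFB).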
There are three main elements which determine estimates for black box
testing: size, test strategy, and productivity. Using all three elements we can
determine the estimate for black box testing for a given project. Let’s take a
look at these elements.
Size: The most important aspect of estimating is definitely the size of the
project. The size of a project is mainly defined by the number of function
points. But function points fail to capture, or pay the least attention to, the
following factors:
Test strategy: Every project has certain requirements, and the importance of
those requirements also affects the testing estimates. A requirement's
importance is seen from two perspectives: user importance and user usage.
Depending on these two characteristics a requirement rating can be
generated and a strategy chalked out accordingly, which also means that the
estimates vary accordingly.
Productivity: This is one more important aspect to be considered while
estimating black box testing. Productivity depends on many aspects. For
instance, if your project has new testers your estimates shoot up because you
will need to train the new testers in terms of project and domain knowledge.
Productivity has two important aspects: environmental factors and
productivity figures. Environmental factors define how much the environment
affects a project estimate; they include aspects such as tools, test
environments, availability of testware, etc. The productivity figures depend
on knowledge, how many senior people are on the team, etc.
The following diagram shows the different elements that constitute TPA
analysis as discussed.
The following are the requirement points gathered from the end customer:
(1) The account code entered in the voucher entry screen should be a valid
account code from the defined chart of accounts given by the customer.
(2) The user should be able to add, delete, and modify the account code
from the chart of the account master (this is what the second screen
defines).
(3) The user will not be able to delete a chart of accounts code if
transactions have already been entered for it in vouchers.
(4) The chart of account code master will consist of account codes and
descriptions of the account code.
(5) The account code cannot be longer than 10 characters.
(6) The voucher data entry screen consists of the debit account code, credit
account code, date of transaction, and amount.
(7) Once the user enters voucher data he should be able to print it in the
future at any time.
(8) The debit and credit account are compulsory.
(9) The amount value should not be negative.
(10) After pressing the submit button the value should be seen in the grid.
(11) The amount is compulsory and should be more than zero.
(12) The debit and credit account should be equal in value.
(13) Only numeric and non-negative values are allowed in the amount
field.
(14) Two types of entries are allowed: sales and commissions.
(15) Date, amount, and voucher number are compulsory.
(16) The voucher number should be in chronological order and the system
should auto increment the voucher number with every voucher
added.
(17) No entries are allowed for previous months.
(18) Users should be able to access data from separate geographical locations.
For instance, if one user is working in India and another in China,
both users should be able to access each other's data from their
respective locations.
Now that we have all the requirements, let's try to estimate how we can
use TPA to get the actual man days needed to complete the project. The
following figure shows our road map and how we will achieve this using TPA.
In all, there are ten steps needed to achieve our goal.
Note: You will not be able to understand this section if you have not read the
function points explanation given previously.
EI Calculation
The following are the EI entries for the accounting application. Currently,
we have two screens: one is the master screen and one is the voucher
transaction screen. In the description we have also described which DETs
we have considered. For the add voucher screen we have 7 DETs (note
the buttons are also counted as DETs) and for the account code master we
have 4 DETs.
EIF
There are no EIFs in the system because we do not communicate with any
external application.
EO
EOs are nothing but complex reports. In our system we have three complex
reports: trial balance, profit and loss, and balance sheet. By default we have
assumed 20 fields which makes it a complex report (when we do estimations
sometimes assumptions are okay).
EQ
EQs are nothing but simple output sent from the inside of the application to
the external world. For instance, a simple report is a typical type of EQ. In our
current accounting application we have one simple form that is the print voucher.
We have assumed 20 DETs so that we can move ahead with the calculation.
GSC Calculation
As said in the FPA tutorial given previously, the GSC factors capture
aspects of the project which the FP counting does not accommodate. For the
accounting application we have kept all the GSC factors at 1 except data
communications and performance. We have rated data communications at 2
because one of the requirement points is that application data must be
accessible from multiple centers, which increases the data communication
complexity, and performance is rated up because the end customer requires
performance to be moderately good. The following figure shows the GSC entries.
Total Calculation
Now that we have filled in all the details we need to calculate the total man
days. The following figure explains how the calculations are done. The first five
rows, i.e., ILF, EIF, EO, EQ, and EI, are nothing but the totals of the individual
entries. The total unadjusted function points = ILF + EIF + EO + EQ + EI.
We get the total adjusted function points, which are nothing but the total
unadjusted function points multiplied by the VAF derived from the GSC
factors. Depending on the organization's baseline, we define how many FPs
can be completed by a programmer in one day; for this accounting application
we have 1.2 FPs per day. From the FPs per day we get the total man days.
Once we have the total man days we distribute these values across the phases.
So far we have only found the total execution time, so we assign the total man
days to the execution phase. From the execution-phase man days we distribute
20 percent to the requirements phase, 20 percent to technical design, and
5 percent to testing.
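The distribution just described can be sketched as follows. The adjusted FP total of 31.2 is inferred by working backward from the 37.7 man-day total quoted in the text (31.2 / 1.2 = 26 execution man days), so treat it as an illustration:

```python
adjusted_fp = 31.2      # inferred, not stated directly in this excerpt
fps_per_day = 1.2       # the organization's baseline from the text
execution = adjusted_fp / fps_per_day   # total execution man days

requirements = 0.20 * execution         # 20% of execution
technical_design = 0.20 * execution     # 20% of execution
testing = 0.05 * execution              # 5% of execution

total = execution + requirements + technical_design + testing
print(round(total, 1))                  # total project man days
```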
The testing estimates derived from function points are actually the estimates
for white box testing. So in the following figure the man days are actually the
estimates for white box testing of the project. It does not take into account
black box testing estimation.
Total acceptance test cases = total adjusted function points multiplied by 1.2:
The total estimate for this project is 37.7 man days.
Now that we have completed the function point analysis for this project let’s
move on to the second step needed to calculate black box testing using TPA.
But we have still not seen how Df is calculated. Df is calculated using
four inputs: user importance, usage intensity, interfacing, and complexity.
The following figure shows the different inputs in a pictorial manner. All four
factors are rated as low, normal, or high and are assigned to each function
factor derived from the function points. Let's take a look at these factors.
User importance (Ue): How important is this function factor to the user
compared to other function factors? The following figure shows how they are
rated. Voucher data, print voucher, and add voucher are rated with high user
importance. Without these the user cannot work at all. Reports have been
rated low because they do not really stop the user from working. The chart
of accounts master is rated low because the master data is something which
is added at one time and can also be added from the back end.
Usage intensity (Uy): This factor tells how many users use the application
and how often. The following figure shows how we have assigned the values
to each function factor. Add voucher, Print Voucher, and voucher data are the
most used function factors. So they are rated high. All other function factors
are rated as low.
Interfacing (I): This factor defines how much impact this function
factor has on other parts of the system. But how do we now find the impact?
In TPA, the concept of LDS is used to determine the interfacing rating.
LDS stands for Logical Data Source. In our project we have two logical data
sources: one is voucher data and the other is account code data (i.e., chart
of accounts data). The following are the important points to be noted which
determine the interfacing:
The following is the table which defines the complexity level according
to the number of LDSs and functions impacting on LDS.
Now, depending on the two points defined above, let's try to find the
interfacing value for our accounting project. As said previously, we have two
functions which modify an LDS in our project: the add voucher function,
which affects the voucher data, and the add account code function, which
affects the chart of accounts code (i.e., the accounts code master). The add
voucher function primarily affects the voucher data LDS, but other functions
such as the reports and print voucher also use this LDS. So in total there are
five functions and one LDS. Looking at the number of LDSs and the number
of functions, the impact complexity factor is Low.
The other function which modifies data is the Add account code function.
The LDS affected is the chart of accounts code, and the function which
modifies it is the Add account code function itself. Other functions indirectly
use this LDS, too: Reports, which need to access the account codes; Print
voucher, which uses the account code to print the account description; and
Add voucher, which uses the chart of accounts code LDS to verify that the
account code is correct. So, looking at the look-up table, the impact
complexity factor is Average.
The other function factors do not modify any data, so we give them a Low
rating. The following are the interfacing complexity factors assigned.
Complexity (C): This factor defines how complex the algorithm for a
particular function factor is. Add voucher is the most complex function in
the project and can have more than 11 conditions, so we have given it the
highest complexity rating. Reports are mildly complex and are rated as
average. The assigned values are shown in the figure.
Uniformity (U): This factor defines how reusable the test work is. For
instance, if a test case written for one function can be applied to another
function, the testing estimate shrinks accordingly. For this project we have
taken a uniformity factor of 1. If, for example, the customer had also required
updating account codes, the test cases for the Add voucher and Update
voucher functions could have overlapped, and the uniformity factor would
reflect that.
Once we have all five factors, we apply the following formula to calculate
Df for each function factor:

Df = [(Ue + Uy + I + C) / 16] * U
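The Df calculation above can be sketched as a small helper. The rating values in the example call are hypothetical illustrations only; they are not the figures from the book's rating tables.

```python
def dependency_factor(ue, uy, i, c, u=1.0):
    """Df for one function factor, per the TPA formula in the text.

    ue = user-importance rating, uy = usage-intensity rating,
    i  = interfacing rating,     c  = complexity rating,
    u  = uniformity factor (1.0 when no test cases are reusable).
    """
    return ((ue + uy + i + c) / 16) * u

# Hypothetical ratings for a heavily weighted function (not from the figure):
df = dependency_factor(ue=12, uy=8, i=4, c=12, u=1.0)  # (36 / 16) * 1.0 = 2.25
```

Because the four ratings are divided by 16, a function rated at the nominal value on all four factors comes out with Df = 1, and the uniformity factor then scales that up or down.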
Step 3: Calculate Qd
The third step is to calculate Qd. Qd, i.e., the dynamic quality characteristics,
has two parts: explicit characteristics (Qde) and implicit characteristics (Qdi).
Qde has five important characteristics: Functionality, Security, Suitability,
Performance, and Portability. The following diagram shows how we assign
those ratings. Qdi defines the implicit characteristic part of Qd. Implicit
characteristics are not standard and vary from project to project. For this
accounting application we have identified four: user friendliness, efficiency,
performance, and maintainability, and we assign each of them a value of
0.02. We can see from the following figure that user friendliness has been
assigned a 0.02 value. In the Qde part we have given functionality normal
importance and rated performance as relatively unimportant, but we still
need to account for them. Once we have Qde and Qdi, then Qd = Qde + Qdi.
For this sample you can see that the total value of Qd is 1.17 (which is
obtained from 1.15 + 0.02).
Qd is calculated by multiplying each rating by its value. The following
table shows the ratings and the actual values. The 1.15 in the figure is
obtained from a formula of the following form:

((5 * 0.75) + (3 * 0.05) + (4 * 0.10) + (3 * 0.10) + (3 * 0.10)) / 4
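The same arithmetic can be sketched in Python, using the weights and ratings exactly as reproduced in the formula above plus the 0.02 implicit-characteristic value discussed earlier:

```python
# Explicit (Qde) weights and ratings, in the order: functionality, security,
# suitability, performance, portability (as reproduced in the text's formula).
weights = [0.75, 0.05, 0.10, 0.10, 0.10]
ratings = [5, 3, 4, 3, 3]

qde = sum(r * w for r, w in zip(ratings, weights)) / 4
qdi = 0.02  # one implicit characteristic (user friendliness) applied
qd = qde + qdi
```

Note that the ratings as printed actually evaluate to Qde = 1.225 rather than the 1.15 reported in the figure, so the ratings used in the figure presumably differ slightly from those reproduced here.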
For this project we have good resources with great ability, so we have
entered a value of 1.50, which indicates high productivity.
Finally, we distribute this number across the phases. The total black
box testing estimate for this project is 101.73 man hours, or approximately
13 man days.
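The closing conversion from man hours to man days is simple arithmetic; the eight-hour working day below is an assumption, since the book does not state the day length it uses.

```python
total_hours = 101.73      # total black box testing estimate from TPA
hours_per_day = 8         # assumed length of one working day
man_days = total_hours / hours_per_day
print(round(man_days, 1))  # 12.7, i.e., approximately 13 man days
```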
Index
A
Acceptance document, 25
Acceptance plan, 25–26
Acceptance test input criteria, 26
Acceptance test plan, 32–34
Acceptance testing, 39, 46
ADD, 180–181
Ad hoc testing, 54
Alpha testing, 16–17
Application boundary, 149–150
Appraisal methods, 81–83
AutomatedQA, 128–145
Automation testing, 127–145
Automation tools, 128–129

B
Baselines, 30–31
Beta testing, 16–17, 20–21
Big-bang waterfall model, 39, 41–42
   Requirement stage, 41
   Design stage, 41
   Build stage, 41
   Test stage, 41
   Deliver stage, 42
Black belts, 107–109
Black box test cases, 36–37
Black box testing, 2–3, 147, 181–189
   Creating an estimate for, 183–189
Boundary value, 49–51
Boundary value analysis, 49–51

C
Calculating variations, 109–112
Calibration, 34–35
   Test objectives, 34
   Inventory, 34–35
   Tracking matrix, 35–36
Capability levels, 73–75
   Capability level 0, 73
   Capability level 1, 73–74
   Capability level 2, 74
   Capability level 3, 74
   Capability level 4, 74
   Capability level 5, 75
Capability maturity model integration (CMMI), 64–68, 70, 75–76, 77–80, 85–87
Casual analysis resolution (CAR), 89
Categories of defects, 4
Central/project test plan, 32–34
Champions, 107–109
CHGA, 180–181
CMMI, 64–68, 70, 75–80, 85–87
   Process areas of, 64
   Systems engineering, 67
   Software engineering, 67
   Integrated product and process development (IPPD), 67
   Software acquisition, 68
   Process management, 75
   Project management, 76
G
General system characteristics (GSC), 156, 166, 168, 176–180, 188–189
   Calculation of, 188–189
Gradual implementation, 18
Gray box testing, 2–3
Green belts, 107–109

I
ILF rating table, 167
Impact and probability rating, 23
Impact ratings, 38
Implementation, 68–69
Incremental model, 39, 43

J
Junior engineer, ix
Junior tester, ix

K
KPA, 88

L
Latent defects, 11
Launch strategies, 19
Load testing, 136–144
Logical data source (LDS), 193–195
M
Maintenance phase workbench, 16
Manual testing, 127–128
Masked defects, 11
Master black belts, 107–109
Maturity levels, 63, 70–74
Mean measurement, 110
Measurement and analysis (MA), 92
Measuring defects, 118–119
Measuring test effectiveness, 124–125
Median measurement, 110
Metrics, 117
Mode measurement, 112
Modern way of testing, 8–9
Monkey testing, 53–54

N
Negative testing, 54

O
Online data entry, 159
Online updates, 161
Operational ease, 163–164
Optimizing process, 75
Organizational environment for integration (OEI), 93
Organizational hierarchy, vii–viii
Organizational innovation and deployment (OID), 93–94
Organizational process focus (OPF), 94–95
Organizational process performance (OPP), 95
Organizational training (OT), 95–96
Orthogonal arrays, 56–57
Outsource, 47

P
Pair-wise defect, 56
Parallel implementation, 19
Path coverage, 29
PDCA cycle, 1–2
Performance, 46
Phased implementation, 18
Phased waterfall model, 39, 42–43
Phase-wise distribution, 174–175
Pilot testing, 20–21
Planning and control tools, 204
Positive testing, 54
Practice implementation indicators (PII), 84–85
Priority rating, 23
Priority set, 24
Probability of failure, 22
Process and product quality assurance (PPQA), 98
Process area abbreviations, 77
Process areas, 73–75, 88–103
   Casual analysis resolution (CAR), 89
   Decision analysis resolution (DAR), 90
   Integrated project management (IPM), 90–91
   Integrated supplier management (ISM), 91–92
   Integrated teaming (IT), 92
   Measurement and analysis (MA), 92
   Organizational environment for integration (OEI), 93
   Organizational innovation and deployment (OID), 93–94
   Organizational process focus (OPF), 94–95
   Organizational process performance (OPP), 95
SG2, 85
SGI, 85
Six sigma, 105–112
   Key players, 107–109
   Variations in, 109–112
Software acquisition, 68
Software development lifecycle, 30–31, 33–34, 39–43
Software engineer, ix
Software engineering, 67
Software process, 61–62
Software testing teams, 47–48
   Isolated test team, 47
   Outsource, 47
   Inside test team, 47
   Developers as testers, 47
   QA/QC, 47
Spiral model, 39, 43
Spoilage formula, 126
SQA, x
Staged models, 70–73
Staging, 75
Standard CMMI appraisal method for process improvement, 81–82
Standard deviation formula, 113–115
Standard deviations, 112–113
State transition diagrams, 52–53
Statement coverage, 29
Supplier agreement management (SAM), 71, 85–86, 101
Support process area, 76
System development lifecycle, 174
System test plan, 32–34
System testing, 39, 45–46, 123
Systems engineering, 67

T
Table-driven testing, 145–146
Tailoring, 65, 74
TC1, 49–51
TC2, 49–51
TC3, 49–51
TC4, 49–51
Team leader tester, ix–x
Technical solution (TS), 101–102
Test basis, 202
Test complete project explorer, 131–136, 138–144
Test documents across phases, 33–34
Test environment, 202
Test log, 38–39
Test objectives, 34
Test phases, 26–27
Test plan documents, 32–33
   Central/project test plan, 32–34
   Acceptance test plan, 32–34
   System test plan, 32–34
   Integration testing, 32–34
   Unit testing, 32–34
Test strategy, 182
Test tools, 201
Testers, vii
Testing analysis and design, 24, 34
Testing cost curve, 6
Testing phase workbench, 16
Testing policy, 6–7
Testware, 201
Tool costs, 62
TPA, 147–148, 181–186, 193
   Analysis, 147–148
   Parameters, 183
TPf calculation, 199