Software Testing Interview Questions and Answers
1. Can you explain the PDCA cycle and where testing fits in?
1. Plan: Define the goal and the plan for achieving that goal.
2. Do/Execute: Execute according to the plan and strategy decided during the plan stage.
3. Check: Check/Test to ensure that we are moving according to plan and are
getting the desired results.
4. Act: During the check cycle, if any issues are there, then we take appropriate
action accordingly and revise our plan again.
So developers and other stakeholders of the project do the "planning and building,"
while testers do the check part of the cycle. Therefore, software testing is done in the check
part of the PDCA cycle.
2. What is the difference between white box, black box, and gray box testing?
Black box testing is a testing strategy based solely on requirements and specifications.
Black box testing requires no knowledge of internal paths, structures, or implementation
of the software being tested.
White box testing is a testing strategy based on internal paths, code structures, and
implementation of the software being tested. White box testing generally requires
detailed programming skills.
There is one more type of testing called gray box testing. In this we look into the "box"
being tested just long enough to understand how it has been implemented. Then we
close up the box and use our knowledge to choose more effective black box tests.
The above figure shows how both types of testers view an accounting application during
testing. Black box testers view the basic accounting application. While during white box
testing the tester knows the internal structure of the application. In most scenarios white
box testing is done by developers as they know the internals of the application. In black
box testing we check the overall functionality of the application while in white box testing
we do code reviews, view the architecture, remove bad code practices, and do
component level testing.
Usability testing is a testing methodology where the end customer is asked to use the
software to see if the product is easy to use, to see the customer's perception and task
time. The best way to finalize the customer point of view for usability is by using
prototype or mock-up software during the initial stages. By giving the customer the
prototype before the development start-up we confirm that we are not missing anything
from the user point of view.
The following are the important steps used to define a testing policy in general. But it
can change according to your organization. Let's discuss in detail the steps of
implementing a testing policy in an organization.
Definition: The first step any organization needs to do is define one unique
definition for testing within the organization so that everyone is of the same
mindset.
How to achieve: How are we going to achieve our objective? Is there going to
be a testing committee, will there be compulsory test plans which need to be
executed, etc.?
Evaluate: After testing is implemented in a project how do we evaluate it? Are
we going to derive metrics of defects per phase, per programmer, etc. Finally, it's
important to let everyone know how testing has added value to the project.
Standards: Finally, what are the standards we want to achieve by testing? For
instance, we can say that more than 20 defects per KLOC will be considered
below standard and code review should be done for it.
In any project the acceptance document is normally prepared using the following inputs.
This can vary from company to company and from project to project.
The following diagram shows the most common inputs used to prepare acceptance test
plans.
When changes are made in an ad hoc and uncontrolled manner, chaotic situations can
arise and more defects are injected. So whenever changes are made they should be made in a
controlled fashion and with proper versioning. At any moment we should be able
to revert to the old version. The main intention of configuration management is to
track our changes if we have issues with the current system. Configuration management
is done using baselines.
While doing testing on the actual product, the code coverage testing tool is run
simultaneously. While the testing is going on, the code coverage tool monitors the
executed statements of the source code. When the final testing is completed we get a
complete report of the pending statements and also get the coverage percentage.
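A code coverage tool of the kind described can be sketched as a simple tracker that records executed statements and reports pending ones. This is an illustrative toy, not any real tool's API; the statement identifiers here are just line numbers.

```python
class CoverageTracker:
    """Toy statement-coverage tracker: record which statements executed,
    then report pending (unexecuted) statements and a coverage percentage."""

    def __init__(self, all_statements):
        self.all_statements = set(all_statements)
        self.executed = set()

    def record(self, statement_id):
        self.executed.add(statement_id)

    def pending(self):
        return sorted(self.all_statements - self.executed)

    def percentage(self):
        covered = len(self.executed & self.all_statements)
        return 100.0 * covered / len(self.all_statements)

tracker = CoverageTracker(all_statements=[1, 2, 3, 4, 5])
for line in (1, 2, 3, 5):      # statements hit while the tests ran
    tracker.record(line)
print(tracker.pending())       # [4]
print(tracker.percentage())    # 80.0
```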
9. Which is the best testing model?
In real projects, tailored models are proven to be the best, because they share features
from The Waterfall, Iterative, Evolutionary models, etc., and can fit into real life time
projects. Tailored models are most productive and beneficial for many organizations. If
it's a pure testing project, then the V model is the best.
When a defect reaches the end customer it is called a failure; if the defect is detected
internally and resolved, it is called a defect.
11. Should testing be done only after the build and execution phases are
complete?
In traditional testing methodology testing is always done after the build and execution
phases.
But that's a wrong way of thinking because the earlier we catch a defect, the more cost
effective it is. For instance, fixing a defect in maintenance is ten times more costly than
fixing it during execution.
In the requirement phase we can verify if the requirements are met according to the
customer needs. During design we can check whether the design document covers all
the requirements. In this stage we can also generate rough functional data. We can also
review the design document from the architecture and the correctness perspectives. In
the build and execution phase we can execute unit test cases and generate structural
and functional data. And finally comes the testing phase done in the traditional way, i.e.,
run the system test cases and see if the system works according to the requirements.
During installation we need to see if the system is compatible with the target environment. Finally,
during the maintenance phase when any fixes are made we can retest the fixes and
follow the regression testing.
Therefore, Testing should occur in conjunction with each phase of the software
development.
12. Are there more defects in the design phase or in the coding phase?
The design phase is more error prone than the execution phase. One of the most
frequent defects which occur during design is that the product does not cover the
complete requirements of the customer. Second is wrong or bad architecture and
technical decisions make the next phase, execution, more prone to defects. Because the
design phase drives the execution phase it's the most critical phase to test. The testing
of the design phase can be done by good review. On average, 60% of defects occur
during design and 40% during the execution phase.
When it comes to testing everyone in the world can be involved right from the developer
to the project manager to the customer. But below are different types of team groups
which can be present in a project.
Minor: Very low impact; does not affect operations on a large scale.
Major: Affects operations on a very large scale.
Critical: Brings the system to a halt and stops the show.
No, an increase in testing does not always mean improvement of the product, company,
or project. In real test scenarios only 20% of test plans are critical from a business
angle. Running those critical test plans will assure that the testing is properly done. The
following graph explains the impact of under testing and over testing. If you under test a
system the number of defects will increase, but if you over test a system your cost of
testing will increase. Even if your defects come down your cost of testing has gone up.
16. What's the relationship between environment reality and test phases?
Environment reality becomes more important as test phases start moving ahead. For
instance, during unit testing you need the environment to be partly real, but at the
acceptance phase you should have a 100% real environment, or we can say it should be
the actual real environment. The following graph shows how with every phase the
environment reality should also increase and finally during acceptance it should be 100%
real.
18. How do test documents in a project span across the software development
lifecycle?
The following figure shows pictorially how test documents span across the software
development lifecycle. The following discusses the specific testing documents in the
lifecycle:
Central/Project test plan: This is the main test plan which outlines the
complete test strategy of the software project. This document should be prepared
before the start of the project and is used until the end of the software
development lifecycle.
Acceptance test plan: This test plan is normally prepared with the end
customer. This document commences during the requirement phase and is
completed at final delivery.
System test plan: This test plan starts during the design phase and proceeds
until the end of the project.
Integration and unit test plan: Both of these test plans start during the
execution phase and continue until the final delivery.
19. Which test cases are written first: white boxes or black boxes?
Normally black box test cases are written first and white box test cases later. In
order to write black box test cases we need the requirement document and the
design or project plan. All these documents are easily available at the initial start
of the project. White box test cases cannot be started in the initial phase of the
project because they need more architecture clarity which is not available at the
start of the project. So normally white box test cases are written after black box
test cases are written.
Black box test cases do not require system understanding but white box testing
needs more structural understanding. And structural understanding is clearer in
the later part of the project, i.e., while executing or designing. For black box testing
you need to only analyze from the functional perspective which is easily available
from a simple requirement document.
Acceptance testing - Testing to ensure that the system meets the needs of the
organization and the end user or customer (i.e., validates that the right system
was built).
21. What is a test log?
The IEEE Std. 829-1998 defines a test log as a chronological record of relevant
details about the execution of test cases. It's a detailed view of activity and
events given in chronological manner.
The following figure shows a test log and is followed by a sample test log.
If the tester gets involved right from the requirement phase then requirement
traceability is one of the important reports that can detail what kind of test
coverage the test cases have.
23. What does entry and exit criteria mean in a project?
Entry and exit criteria are a must for the success of any project. If you do not
know where to start and where to finish then your goals are not clear. By defining
exit and entry criteria you define your boundaries.
For instance, you can define entry criteria that the customer should provide the
requirement document or acceptance plan. If this entry criteria is not met then
you will not start the project. On the other end, you can also define exit criteria
for your project. For instance, one of the common exit criteria in projects is that
the customer has successfully executed the acceptance test plan.
A latent defect is an existing defect that has not yet caused a failure because the sets of
conditions were never met.
A masked defect is an existing defect that hasn't yet caused a failure just because
another defect has prevented that part of the code from being executed.
It includes tracing the accuracy of the devices used in production, development, and
testing. Devices used must be maintained and calibrated to ensure that they are in
good working order.
Alpha and beta testing have different meanings to different people. Alpha testing is the
acceptance testing done at the development site. Some organizations have a different
visualization of alpha testing. They consider alpha testing as testing which is conducted
on early, unstable versions of software. On the contrary beta testing is acceptance
testing conducted at the customer end.
In short, the difference between beta testing and alpha testing is the location where the
tests are done.
28. How does testing affect risk?
A risk is a condition that can result in a loss. Risk can be controlled in different
scenarios but not eliminated completely. A defect normally converts to a risk.
29. What is coverage and what are the different types of coverage techniques?
Coverage is a measurement used in software testing to describe the degree to which the
source code is tested. There are three basic types of coverage techniques as shown in
the following figure:
Statement coverage: This coverage ensures that each line of source code has
been executed and tested.
Decision coverage: This coverage ensures that every decision (true/false) in the
source code has been executed and tested.
Path coverage: In this coverage we ensure that every possible route through a
given part of code is executed and tested.
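The difference between statement and decision coverage shows up even on a one-decision function. In this hypothetical sketch, a single call executes every statement, but decision coverage additionally requires exercising the False branch:

```python
def discount(total: float, is_member: bool) -> float:
    # One decision (is_member) -> two branches to cover for decision coverage.
    if is_member:
        total *= 0.9          # this statement runs only when is_member is True
    return round(total, 2)

# Statement coverage: this single call executes every line once.
assert discount(100.0, True) == 90.0

# Decision coverage additionally needs the False outcome of the decision:
assert discount(200.0, False) == 200.0
print("both branches covered")
```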
30. A defect which could have been removed during the initial stage is removed
in a later stage. How does this affect cost?
If a defect is known at the initial stage then it should be removed during that
stage/phase itself rather than at some later stage. It's a recorded fact that if a defect is
delayed for later phases it proves more costly. The following figure shows how a defect is
costly as the phases move forward. A defect if identified and removed during the
requirement and design phase is the most cost effective, while a defect removed during
maintenance is 20 times costlier than during the requirement and design phases.
For instance, if a defect is identified during requirement and design we only need to
change the documentation, but if identified during the maintenance phase we not only
need to fix the defect, but also change our test plans, do regression testing, and change
all documentation. This is why a defect should be identified/removed in earlier phases
and the testing department should be involved right from the requirement phase and not
after the execution phase.
31. What kind of input do we need from the end user to begin proper testing?
The product has to be used by the user. He is the most important person as he has more
interest than anyone else in the project.
Input: Every task needs some defined input and entrance criteria. So for every
workbench we need defined inputs. Input forms the first steps of the workbench.
Execute: This is the main task of the workbench which will transform the input
into the expected output.
Check: Check steps assure that the output after execution meets the desired
result.
Production output: If the check is right the production output forms the exit
criteria of the workbench.
Rework: During the check step if the output is not as desired then we need to
again start from the execute step.
33. Can you explain the concept of defect cascading?
Defect cascading is a defect which is caused by another defect. One defect triggers the
other defect. For instance, in the accounting application shown here there is a defect
which leads to negative taxation. So the negative taxation defect affects the ledger
which in turn affects four other modules.
The difference between pilot and beta testing is that pilot testing is actually using the
product with real data (limited to some users), while in beta testing we do not input real
data; the product is installed at the end customer's site to validate whether it can be used
in production.
36. What are the different strategies for rollout to end users?
System testing checks that the system that was specified has been
delivered. Acceptance testing checks that the system will deliver what was
requested. The customer should always do Acceptance testing and not the
developer.
The customer knows what is required from the system to achieve value in the
business and is the only person qualified to make that judgement. This testing is
more about ensuring that the software is delivered as defined by the customer.
It's like getting a green light from the customer that the software meets
expectations and is ready to be used.
38. Can you explain regression testing and confirmation testing?
Regression testing is used for regression defects. Regression defects are defects
that occur when functionality which was once working normally stops
working. This is probably because of changes made in the program or the
environment. To uncover such defects, regression testing is conducted.
The following figure shows the difference between regression and confirmation
testing.
If we fix a defect in an existing application we use confirmation testing to test if
the defect is removed. It's very possible that, because of this defect or the changes made
to the application, other sections of the application are affected. So to ensure that
no other section is affected we use regression testing.
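Confirmation and regression testing can be illustrated with a small sketch. The `apply_tax` function and its past defect are hypothetical; the point is that the confirmation test re-runs the exact input that exposed the defect, while the regression tests re-run previously passing cases to catch side effects of the fix:

```python
# Suppose a defect was fixed in apply_tax (it previously returned a wrong
# value for small amounts). Confirmation testing re-runs the failing case;
# regression testing re-runs the existing suite.
def apply_tax(amount: float, rate: float = 0.1) -> float:
    return round(amount * rate, 2)

def test_confirmation_defect_fixed():
    # The exact input that exposed the original (hypothetical) defect.
    assert apply_tax(5.0) == 0.5

def test_regression_existing_behaviour():
    # Previously passing cases must still pass after the fix.
    assert apply_tax(100.0) == 10.0
    assert apply_tax(0.0) == 0.0

test_confirmation_defect_fixed()
test_regression_existing_behaviour()
print("all tests passed")
```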
Testing Techniques –The Software Process Testing Interview Questions and Answers
A software process is a series of steps used to solve a problem. The following figure
shows a pictorial view of how an organization has defined a way to solve risk problems.
In the diagram we have shown two branches: one is the process and the second branch
shows a sample risk mitigation process for an organization. For instance, the risk
mitigation process defines what step any department should follow to mitigate a risk.
The process is as follows:
Identify the risk of the project by discussion, proper requirement gathering, and
forecasting.
Once you have identified the risk prioritize which risk has the most impact and
should be tackled on a priority basis.
Analyze how the risk can be solved by proper impact analysis and planning.
Finally, using the above analysis, we mitigate the risk.
Salary: This forms the major component of implementing any process: the salary
of the employees. Normally while implementing a process in a company the
organization can either recruit full-time people or share resources part-time for
implementing the process.
Consultant: If the process is new it can also involve consultants which are again
an added cost.
Training Costs: Employees of the company may also have to undergo training in
order to implement the new process.
Tools: In order to implement the process an organization will also need to buy
tools which again need to be budgeted for.
4. What is a model?
A model is nothing but best practices followed in an industry to solve issues and
problems. Models are not made in a day but are finalized and realized by years of
experience and continuous improvements.
Many companies reinvent the wheel rather than following time tested models in the
industry.
A process area is the area of improvement defined by CMMI. Every maturity level
consists of process areas. A process area is a group of practices or activities performed
collectively to achieve a specific objective. For instance, you can see from the following
figure we have process areas such as project planning, configuration management, and
requirement gathering.
As the name suggests, tailoring is nothing but changing an action to achieve an objective
according to conditions. Whenever tailoring is done there should be adequate reasons for
it. Remember when a process is defined in an organization it should be followed
properly. So even if tailoring is applied the process is not bypassed or omitted.
Six Sigma - Software Testing Interview Questions and Answers
2. Can you explain the different methodology for the execution and the design
process stages in Six Sigma?
The main focus of Six Sigma is to reduce defects and variations in the processes. DMAIC
and DMADV are the models used in most Six Sigma initiatives.
DMADV is the model for designing processes while DMAIC is used for improving the
process.
Define: Determine the project goals and the requirements of customers (external
and internal).
Measure: Assess customer needs and specifications.
Analyze: Examine process options to meet customer requirements.
Design: Develop the process to meet the customer requirements.
Verify: Check the design to ensure that it's meeting customer requirements.
The DMAIC model includes the following five steps:
Define the projects, goals, and deliverables to customers (internal and external).
Describe and quantify both the defects and the expected improvements.
Measure the current performance of the process. Validate data to make sure it is
credible and set the baselines.
Analyze and determine the root cause(s) of the defects. Narrow the causal factors
to the vital few.
Improve the process to eliminate defects. Optimize the vital few and their
interrelationships.
Control the performance of the process. Lock down the gains.
There are situations where we need to analyze what caused the failure or problem in a
project. The fish bone or Ishikawa diagram is one important concept which can help you
find the root cause of the problem. Fish bone was conceptualized by Ishikawa, so in
honor of its inventor, this concept was named the Ishikawa diagram. Inputs to conduct a
fish bone diagram come from discussion and brainstorming with people involved in the
project. The following figure shows the structure of the Ishikawa diagram.
The main bone is the problem which we need to address to know what caused the
failure. For instance, the following fish bone is constructed to find what caused the
project failure. To know this cause we have taken four main bones as inputs: Finance,
Process, People, and Tools.
Variation is the basis of Six Sigma. It defines how many changes are happening in the
output of a process. So if a process is improved then this should reduce variations. In
Six Sigma we identify variations in the process, control them, and reduce or eliminate
defects.
There are four basic ways of measuring variations: Mean, Median, Mode, and Range.
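The four measures can be computed directly. This is a minimal sketch using hypothetical per-build defect counts:

```python
from statistics import median, mode

def variation_summary(samples):
    """The four basic ways of measuring variation named above."""
    return {
        "mean": sum(samples) / len(samples),
        "median": median(samples),
        "mode": mode(samples),
        "range": max(samples) - min(samples),
    }

# Hypothetical defect counts observed across five builds:
print(variation_summary([4, 7, 7, 10, 12]))
# {'mean': 8.0, 'median': 7, 'mode': 7, 'range': 8}
```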
5. Can you explain standard deviation?
Automation is the integration of testing tools into the test environment in such a manner
that the test execution, logging, and comparison of results are done with little human
intervention. A testing tool is a software application which helps automate the testing
process. But the testing tool is not the complete answer for automation. One of the huge
mistakes done in testing automation is automating the wrong things during
development. Many testers learn the hard way that everything cannot be automated.
The best components to automate are repetitive tasks. So some companies first start
with manual testing and then see which tests are the most repetitive ones and only
those are then automated.
Websites have software called a web server installed on the server. The user sends a
request to the web server and receives a response. So, for instance, when you type
www.google.com the web server senses it and sends you the home page as a response.
This happens each time you click on a link, do a submit, etc. So if we want to do load
testing you need to just multiply these requests and responses "N" times. This is what
an automation tool does. It first captures the request and response and then just
multiplies it by "N" times and sends it to the web server, which results in load
simulation.
So once the tool captures the request and response, we just need to multiply the request
and response with the virtual user. Virtual users are logical users which actually simulate
the actual physical user by sending in the same request and response. If you want to do
load testing with 1000 users on an application, arranging that many physical users is
practically impossible. But by using the load testing tool you only need to create 1000
virtual users.
3. Can you explain data-driven testing?
Normally an application has to be tested with multiple sets of data. For instance, a
simple login screen, depending on the user type, will give different rights. For example,
if the user is an admin he will have full rights, a normal user will have limited rights,
and a support user will have only read-only rights. In this scenario the testing steps are the
same but with different user ids and passwords. In data-driven testing, inputs to the
system are read from data files such as Excel, CSV (comma separated values), ODBC,
etc. So the values are read from these sources and then test steps are executed by
automated testing.
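A data-driven test can be sketched as follows. The login-rights function stands in for the system under test, and an in-memory CSV replaces the external data file so the sketch stays self-contained; all names are hypothetical:

```python
import csv
import io

# Hypothetical system under test: returns the rights for a user type.
def login_rights(user_type: str) -> str:
    return {"admin": "full", "user": "limited", "support": "read-only"}[user_type]

# Test data would normally live in an external CSV/Excel/ODBC source;
# an in-memory CSV is used here so the example runs on its own.
test_data = io.StringIO(
    "user_type,expected_rights\n"
    "admin,full\n"
    "user,limited\n"
    "support,read-only\n"
)

for row in csv.DictReader(test_data):
    # Same test steps, different data rows.
    assert login_rights(row["user_type"]) == row["expected_rights"]
print("data-driven checks passed")
```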
In some projects there are scenarios where we need to do boundary value testing. For
instance, let's say for a bank application you can withdraw a maximum of 25000 and a
minimum of 100. So in boundary value testing we test at the exact boundaries rather
than hitting in the middle. That means we test at and just beyond the minimum and the
maximum. This covers all scenarios. The following figure shows the boundary value testing for the
bank application which we just described. TC1 and TC2 are sufficient to test all
conditions for the bank. TC3 and TC4 are just duplicate/redundant test cases which
really do not add any value to the testing. So by applying proper boundary value
fundamentals we can avoid duplicate test cases, which do not add value to the testing.
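For the bank example, boundary value analysis reduces to testing at and just beyond the two limits. A minimal sketch, assuming a simple withdrawal rule of minimum 100 and maximum 25000:

```python
def can_withdraw(amount: int) -> bool:
    # Bank rule from the example: minimum 100, maximum 25000.
    return 100 <= amount <= 25000

# Boundary value test cases: at and just beyond each boundary.
boundary_cases = {
    99: False,      # just below the minimum
    100: True,      # at the minimum
    25000: True,    # at the maximum
    25001: False,   # just above the maximum
}
for amount, expected in boundary_cases.items():
    assert can_withdraw(amount) == expected
print("boundary checks passed")
```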
In equivalence partitioning we identify inputs which are treated by the system in the
same way and produce the same results. You can see from the following figure that
TC1 and TC2 give the same result (Result1), and TC3 and TC4 both give the same
result (Result2). In short, we have redundant test cases. By applying equivalence
partitioning we minimize the redundant test cases.
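Equivalence partitioning for the same hypothetical withdrawal rule can be sketched with one representative value per partition; any other value from the same partition would be redundant:

```python
def can_withdraw(amount: int) -> bool:
    # Same hypothetical bank rule: minimum 100, maximum 25000.
    return 100 <= amount <= 25000

# Three equivalence partitions for the withdrawal amount; every value in a
# partition is treated the same way, so one representative per partition
# is enough and any further cases from that partition are redundant.
partitions = {
    "below minimum": (50, False),
    "valid range":   (12000, True),
    "above maximum": (30000, False),
}
for name, (representative, expected) in partitions.items():
    assert can_withdraw(representative) == expected
print("one representative per partition is sufficient")
```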
This kind of random (monkey) testing is really of no use and is normally performed by newcomers. Its best
use is to see if the system will hold up under adverse effects.
As the name specifies semi-random testing is nothing but controlling random testing and
removing redundant test cases. So what we do is perform random test cases and
equivalence partitioning to those test cases, which in turn removes redundant test cases,
thus giving us semi-random test cases.
A negative test is when you put in an invalid input and receive errors.
A positive test is when you put in a valid input and expect some action to be completed
in accordance with the specification.
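A minimal sketch of a positive and a negative test, using a hypothetical `parse_age` function:

```python
def parse_age(text: str) -> int:
    age = int(text)          # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Positive test: valid input completes in accordance with the specification.
assert parse_age("42") == 42

# Negative test: invalid input must produce an error, not a wrong value.
try:
    parse_age("forty-two")
except ValueError:
    print("negative test passed: invalid input was rejected")
```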
Exploratory testing is also called adhoc testing, but in reality it's not completely adhoc.
Ad hoc testing is an unplanned, unstructured, may be even an impulsive journey through
the system with the intent of finding bugs. Exploratory testing is simultaneous learning,
test design, and test execution. In other words, exploratory testing is any testing done
to the extent that the tester proactively controls the design of the tests as those tests
are performed and uses information gained while testing to design better tests.
Exploratory testers are not merely keying in random data, but rather testing areas that
their experience (or imagination) tells them are important and then going where those
tests take them.
As the name suggests they are tables that list all possible inputs and all possible
outputs. A general form of decision table is shown in the following figure.
Condition 1 through Condition N indicates various input conditions. Action 1 through
Action N are actions that should be taken depending on various input combinations.
Each rule defines unique combinations of conditions that result in actions associated with
that rule.
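A decision table translates naturally into a data structure: each rule maps a unique combination of input conditions to an action. The login conditions and actions below are hypothetical:

```python
# A decision table as data: each rule is a unique combination of input
# conditions mapped to the action that should be taken.
decision_table = {
    # (valid_user, valid_password): action
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show unknown-user error",
    (False, False): "show unknown-user error",
}

def login_action(valid_user: bool, valid_password: bool) -> str:
    return decision_table[(valid_user, valid_password)]

print(login_action(True, True))    # grant access
print(login_action(True, False))   # show password error
```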
Software acquisition: Many times an organization has to acquire products from other
organizations. Acquisition is itself a big step for any organization and if not handled in a
proper manner means a disaster is sure to happen.
Both of these concepts are important while implementing a process in any organization.
Any new process implemented has to go through these two phases.
There are five maturity levels in a staged representation as shown in the following
figure.
Maturity Level 3 (Defined): To reach this level the organization should have already
achieved level 2. In the previous level the good practices and process were only done at
the project level. But in this level all these good practices and processes are brought to
the organization level. There are set and standard practices defined at the organization
level which every project should follow. Maturity Level 3 moves ahead with defining a
strong, meaningful, organizational approach to developing products. An important
distinction between Maturity Levels 2 and 3 is that at Level 3, processes are described in
more detail and more rigorously than at Level 2 and are at an organization level.
Maturity Level 5 (Optimized): The organization has achieved goals of maturity levels
2, 3, and 4. In this level, processes are continually improved based on an understanding
of common causes of variation within the processes. This is like the final level; everyone
on the team is a productive member, defects are minimized, and products are delivered
on time and within the budget boundary.
The following figure shows, in detail, all the maturity levels in a pictorial fashion.
The second is "continuous" in which the capability level organizes the process area.
SCAMPI stands for Standard CMMI Appraisal Method for Process Improvement. SCAMPI
is an assessment process used to get CMMI certified for an organization.
There are three classes of CMMI appraisal methods: Class A, Class B, and Class C. Class
A is the most aggressive, while Class B is less aggressive, and Class C is the least
aggressive.
Class A: This is the only method that can provide a rating and get you a CMMI
certificate. It requires all three sources of data instruments, interviews, and documents.
Class B: This class requires only two sources of data (interviews and either documents
or instruments). But please note you do not get rated with Class B appraisals. Class B is
just a warm-up to see if an organization is ready for Class A. With less verification the
appraisal takes less time. In this class data sufficiency and draft presentations are
optional.
Class C: This class requires only one source of data (interviews, instruments, or
documents). Team consensus, validation, observation, data sufficiency, and draft
presentation are optional.
Normally, organizations use a mix of the classes to achieve process improvement. The
following are some of the strategies which an organization uses:
First Strategy: Use Class B to initiate a process improvement plan, after that apply
Class C to check readiness for Class B or Class A. The following diagram shows this
strategy.
Third Strategy: Class A is used to initiate an organization level process. The process
improvement plan is based on an identified weakness. Class B appraisal should be
performed after six months to see the readiness for the second Class A appraisal rating.
The following diagram shows this strategy.
There are three different sources from which an appraiser can verify that an organization
followed the process or not.
So when your organization should only concentrate on specific process areas you will
likely go for the continuous model. But if you want your organization to have a specific
plan and to achieve not only the specific process but also any interlinked process within
that process area you should go for the staged model.
The continuous model is the same as the staged model only that the arrangement is a
bit different. The continuous representation/model concentrates on the action or task to
be completed within a process area. It focuses on maturing the organization's ability to
perform, control, and improve the performance in that specific process area.
Capability Level 0 Incomplete: This level means that one or more generic or specific
practices of capability level 1 are not performed.
Capability Level 3: Defined: The defined process is a managed process that is tailored
from the organization's standard process. Tailoring is done according to justification and
documentation guidelines. For instance, your organization may have a standard that an
invoice must be obtained from every supplier. But if a supplier is not able to supply an
invoice, then he should sign an agreement in place of the invoice. So here the invoice
standard is not followed, but the deviation is under control.
Measures are quantitatively defined units, for instance, hours, kilometers, etc. Metrics
are composed of more than one measure. For instance, we can have metrics such as
km/hr, m/s, etc.
The number of defects is one of the measures used to gauge test effectiveness. One
shortcoming of a raw defect count is that not all bugs are equal, so it becomes
necessary to weight bugs according to their criticality level. If we use the number
of defects as the metric, the following are the issues:
The number of bugs that originally existed significantly impacts the number of
bugs discovered, which in turn gives a wrong measure of the software quality.
All defects are not equal, so defects should be weighted by criticality level to
get the right software quality measure.
DRE is also useful for measuring the effectiveness of a particular test phase such as
acceptance, unit, or system testing. The following figure shows defect numbers at various
software cycle levels. The 1 indicates that defects are input at that phase and 2 indicates
how many defects were removed from that particular phase. For instance, in the requirement
phase 100 defects were present, but 20 defects were removed from the requirement
phase due to a review. So if 20 defects are removed then 80 defects get carried to
the next phase (design) and so on.
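The carry-forward arithmetic described above can be sketched as a small helper. Only the requirement-phase figures (100 present, 20 removed) come from the example; the later-phase numbers below are illustrative assumptions.

```python
def defect_flow(phases):
    """Track defects through phases.

    phases: list of (phase_name, injected, removed) tuples.
    Returns a list of (phase_name, present, carried) where `present` is
    defects carried in plus those newly injected in the phase, and
    `carried` is what slips through to the next phase.
    """
    results = []
    carried = 0
    for name, injected, removed in phases:
        present = carried + injected
        carried = present - removed
        results.append((name, present, carried))
    return results

# Requirement-phase numbers from the text; the design-phase numbers are made up.
flow = defect_flow([("Requirement", 100, 20), ("Design", 40, 30)])
```

Applying it to the example: the requirement phase has 100 defects present, removes 20, and carries 80 forward, exactly as the text describes.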
DRE (Defect Removal Efficiency) is a powerful metric used to measure test effectiveness.
From this metric we come to know how many bugs we found out of the set of bugs which
we could have found. The following is the formula for calculating DRE. We need two
inputs for calculating this metric: the number of bugs found during development and the
number of defects detected by the end user.
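The DRE formula can be written out as a short function; expressing it as a percentage is one common convention, and this is a sketch rather than the document's exact figure.

```python
def dre(found_in_development, found_by_end_user):
    """Defect Removal Efficiency, as a percentage.

    found_in_development: bugs caught before release (during testing).
    found_by_end_user: bugs that escaped and were reported after release.
    """
    total = found_in_development + found_by_end_user
    return 100.0 * found_in_development / total

# e.g. 90 bugs caught in testing, 10 reported by end users -> DRE = 90%
efficiency = dre(90, 10)
```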
But the success of DRE depends on several factors. The following are some of them:
Defect age is also called phase age or phage. One of the most important things to
remember in testing is that the later we find a defect, the more it costs to fix. Defect
age and defect spoilage metrics work on the same principle, i.e., how late you
found the defect. So the first thing we need to define is the scale of the defect
age according to phases. For instance, the following table defines the scale according to
phases. So, for instance, requirement defects, if found in the design phase, have a scale
of 1, and the same defect, if propagated until the production phase, goes up to a scale of
4.
Once the scale is decided, we can find the defect spoilage. Defect spoilage is the number
of defects from previous phases multiplied by the scale. For instance, in the following
figure we have found 8 defects in the design phase, of which 4 defects propagated from
the requirement phase. So we multiply those 4 defects by the scale defined in the previous
table, which gives the value 4. In the same fashion we calculate for all the phases. The
following is the spoilage formula.
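The calculation above can be sketched as follows. Since the figure with the formula is not reproduced here, this sketch assumes the common form of spoilage: the age-weighted defect sum divided by the total number of defects found, with the scale values taken as assumptions standing in for the phase table.

```python
def spoilage(aged_defects, total_defects):
    """Defect spoilage = sum(defect count * age scale) / total defects found.

    aged_defects: list of (count, scale) pairs, where `scale` is the
    defect-age value from the phase table (assumed here).
    """
    return sum(count * scale for count, scale in aged_defects) / total_defects

# Example from the text: 8 defects found in design, 4 of them propagated
# from the requirement phase with scale 1, giving a weighted value of 4.
value = spoilage([(4, 1)], 8)
```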
Defect seeding is a technique that was developed to estimate the number of defects
resident in a piece of software. It's an offline technique, and the seeding should not be
known to everyone on the team. The process is the following: we inject the application
with known defects and then see whether they are found or not. So, for instance, if we
have injected 100 defects we try to get three values: how many seeded defects were
discovered, how many were not discovered, and how many new (unseeded) defects were
discovered. By using defect seeding we can predict the number of defects remaining in
the system.
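The prediction step can be sketched with the standard seeding ratio: the fraction of seeded defects found is assumed to approximate the fraction of real defects found. The specific numbers below are illustrative assumptions.

```python
def estimate_remaining_defects(seeded, seeded_found, real_found):
    """Predict how many real defects are still in the system.

    Estimated total real defects = seeded * real_found / seeded_found;
    subtracting those already found gives the remaining estimate.
    """
    estimated_total = seeded * real_found / seeded_found
    return estimated_total - real_found

# 100 defects seeded, 60 of them found, 30 real (unseeded) defects found:
# estimated total real defects = 100 * 30 / 60 = 50, so 20 should remain.
remaining = estimate_remaining_defects(100, 60, 30)
```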
Test effectiveness is the measure of the bug-finding ability of our tests. In short, it
measures how good the tests were. Effectiveness is the ratio of the number of bugs
found during testing to the total bugs found. Total bugs are the sum of new defects
found by the user plus the bugs found in testing. The following figure explains the
calculation in a pictorial format.
Testing Estimation - Software Testing Interview Questions and Answers
TPA (Test Point Analysis) is a technique used to estimate test efforts for black box
testing. Inputs for TPA are the counts derived from function points.
The testing estimates derived from function points are actually the estimates for white
box testing. So in the following figure the man days are actually the estimates for white
box testing of the project. It does not take into account black box testing estimation.
5. Can you explain the various elements of function points FTR, ILF, EIF, EI, EO,
EQ, and GSC?
The first step in FPA is to define the boundary. There are two types of major boundaries:
The external application boundary can be identified using the following litmus tests:
1. Does it have, or will it have, any other interface to maintain its data that was not
developed by you?
2. Does your program have to go through a third-party API or layer? For example, in
order for your application to interact with the tax department application, your code
has to interact with the tax department API.
3. The best litmus test is to ask yourself if you have full access to the system. If you
have full rights to make changes, then it is an internal application boundary;
otherwise it is an external application boundary.
There are three main elements which determine estimates for black box testing: size,
test strategy, and productivity. Using all three elements we can determine the estimate
for black box testing for a given project. Let's take a look at these elements.
1. Size: The most important aspect of estimating is definitely the size of the project.
The size of a project is mainly defined by the number of function points. But a
function point count fails to capture, or pays the least attention to, the following
factors:
1. Complexity: Complexity defines how many conditions exist in the function
points identified during a project. More conditions mean more test cases,
which means higher testing estimates.
2. Interfacing: How much does one function affect the other part of the
system? If a function is modified then accordingly the other systems have
to be tested as one function always impacts another.
3. Uniformity: How reusable is the application? It is important to consider
how many similarly structured functions exist in the system and the extent
to which the system allows testing with slight modifications.
2. Test strategy: Every project has certain requirements. The importance of these
requirements also affects testing estimates. A requirement's importance is judged
from two perspectives: one is user importance and the other is user usage.
Depending on these two characteristics a requirement rating can be generated
and a strategy chalked out accordingly, which also means that estimates vary
accordingly.
3. Productivity: This is one more important aspect to be considered while
estimating black box testing. Productivity depends on many aspects.
1. First count ILF, EIF, EI, EO, EQ, RET, DET, and FTR and use the rating tables. After
you have counted all the elements you will get the unadjusted function points.
2. Assign rating values from 0 to 5 to all 14 GSCs. Add up the ratings of all 14 GSCs
to get the total used in the VAF. Formula: VAF = 0.65 + (sum of all GSC factors/100).
3. Finally, calculate the adjusted function points. Formula: Total function points =
VAF * unadjusted function points.
4. Estimate how many function points you will complete per day. This is also called
the "performance factor". On the basis of the performance factor, you can calculate
man/days.
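The four steps above can be sketched end to end. The UFP count, GSC ratings, and performance factor below are illustrative assumptions, not values from the text.

```python
def adjusted_function_points(ufp, gsc_ratings):
    """Steps 2-3: VAF = 0.65 + (sum of 14 GSC ratings / 100); AFP = VAF * UFP."""
    if len(gsc_ratings) != 14 or not all(0 <= g <= 5 for g in gsc_ratings):
        raise ValueError("expected 14 GSC ratings, each from 0 to 5")
    vaf = 0.65 + sum(gsc_ratings) / 100
    return vaf * ufp

def man_days(afp, performance_factor):
    """Step 4: man/days = adjusted function points / FP completed per day."""
    return afp / performance_factor

# 100 unadjusted FP with every GSC rated 3: VAF = 0.65 + 42/100 = 1.07,
# so roughly 107 adjusted FP; at an assumed 5 FP/day, about 21.4 man/days.
afp = adjusted_function_points(100, [3] * 14)
effort = man_days(afp, 5)
```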
Function points are a unit measure for software much like an hour is to measuring time,
miles are to measuring distance or Celsius is to measuring temperature. Function Points
are an ordinal measure much like other measures such as kilometers, Fahrenheit, hours,
so on and so forth.
This approach computes the total function points (FP) value for the project, by totaling
the number of external user inputs, inquiries, outputs, and master files, and then
applying the following weights: inputs (4), outputs (5), inquiries (4), and master files
(10). Each FP contributor can be adjusted within a range of +/-35% for a specific project
complexity.
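The weighted total described above can be sketched as follows; the contributor counts in the example are illustrative assumptions, while the weights come from the text.

```python
# Weights from the simplified FP count described above.
FP_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10}

def simple_fp(counts):
    """Total FP = sum of (count * weight) over the four contributor types."""
    return sum(counts[kind] * weight for kind, weight in FP_WEIGHTS.items())

# e.g. 10 inputs, 5 outputs, 4 inquiries, 2 master files:
# 10*4 + 5*5 + 4*4 + 2*10 = 40 + 25 + 16 + 20 = 101
total = simple_fp({"inputs": 10, "outputs": 5, "inquiries": 4, "master_files": 2})
```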
Thanks
Good Luck