STQA Paper Solution (Dec 16)
QUESTION 1:
2. This type of testing validates that the product cannot be crashed or destroyed in all possible random situations and conditions.

2. Why testing is required:
● Testing in each phase of the development cycle ensures that the "bugs" (defects) are eliminated at the earliest.
● Testing ensures that no "bugs" creep through into the final product.
● Testing ensures the reliability of the software.
● Above all, testing ensures that the user expectations are met.
4. Benefits:
● It ensures that high-priority tasks will be tested for sure.
● Improved customer satisfaction.
● The count of UAT defects would be reduced, which also results in productive testing.
● Improved exhaustive testing can be performed with less manual effort.

4. Benefits:
● An effective testing strategy and process helps to minimize or eliminate these defects.
● The extent to which it eliminates these post-production defects (design defects, coding defects, etc.) is a good measure of the effectiveness of the testing strategy and process.
Verification:
2. It does not involve executing the code.
5. It can catch errors that validation cannot catch. It is a low-level exercise.

Validation:
2. It always involves executing the code.
5. It can catch errors that verification cannot catch. It is a high-level exercise.
1. Top-down testing: In this approach testing is conducted from the main module to the sub-modules. If a sub-module is not developed, a temporary program called a STUB is used to simulate the sub-module.

Disadvantages:
● Stub modules must be produced.
● Stub modules are often more complicated than they first appear to be.
● Before the I/O functions are added, representation of test cases in stubs can be difficult.
● Test conditions may be impossible, or very difficult, to create.
● Observation of test output is more difficult.
● It allows one to think that design and testing can be overlapped.
● It induces one to defer completion of the testing of certain modules.

2. Bottom-up testing: In this approach testing is conducted from the sub-modules to the main module. If the main module is not developed, a temporary program called a DRIVER is used to simulate the main module.

Disadvantages:
● Driver modules must be produced.
● The program as an entity does not exist until the last module is added.
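The stub and driver ideas above can be sketched in a few lines. The module names below are hypothetical examples, not from the question paper: a STUB stands in for an unfinished sub-module during top-down testing, and a DRIVER stands in for an unfinished main module during bottom-up testing.

```python
# --- Top-down: main module is real, the sub-module is stubbed ----------
def tax_stub(amount):
    return 0  # STUB: simulates the unfinished tax sub-module

def billing_main(amount, tax_fn):
    # Main module under test; it calls the (stubbed) sub-module.
    return amount + tax_fn(amount)

assert billing_main(100, tax_stub) == 100

# --- Bottom-up: sub-module is real, the main module is simulated -------
def tax_submodule(amount):
    return amount * 18 // 100  # real sub-module under test

def driver():
    # DRIVER: simulates the main module just enough to exercise the sub-module.
    return tax_submodule(100)

assert driver() == 18
print("integration sketches passed")
```
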
QUESTION 2:
1. Unit regression – Unit regression testing, executed during the unit testing phase, tests the modified code as a single unit, in isolation from its dependencies.
2. Partial regression – Partial regression is performed after impact analysis. In this testing process, the newly added code is made to interact, as a unit, with other parts of the older existing code, in order to determine that even with the code modification the system functions as desired.
3. Complete regression – Complete regression testing is often carried out when the code changes
for modification or the software updates seep way back into the roots.
● It is also carried out in case there are multiple changes to the existing code. It gives a
comprehensive view of the system as a whole and weeds out any unforeseen problems.
● A sort of a “final” regression testing is implemented to certify that the build (new lines of
code) has not been altered for a period of time. This final version is then deployed to
end-users.
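The three regression scopes above amount to selecting progressively larger subsets of the suite. A minimal sketch of that selection follows; the tags and test names are illustrative assumptions.

```python
# Each test declares which regression scopes it belongs to.
TESTS = {
    "test_changed_unit":    {"unit", "partial", "complete"},
    "test_neighbor_module": {"partial", "complete"},
    "test_far_module":      {"complete"},
}

def select(scope):
    """Pick the tests to run for a given regression scope."""
    return sorted(name for name, scopes in TESTS.items() if scope in scopes)

print(select("unit"))      # → ['test_changed_unit']
print(select("partial"))   # → ['test_changed_unit', 'test_neighbor_module']
print(select("complete"))  # the whole suite
```
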
The following steps are involved in the Software Testing Life Cycle (STLC). Each step
has its own entry criteria and deliverables.
● Requirement Analysis
● Test Planning
● Test Case Development
● Environment Setup
● Test Execution
● Test Cycle Closure
Requirement Analysis:
Requirement Analysis is the very first step in the Software Testing Life Cycle (STLC). In this
step the Quality Assurance (QA) team understands the requirements in terms of what will be
tested, and figures out the testable requirements. If any requirement is conflicting, missing,
or not understood, the QA team follows up with the various stakeholders, such as the Business
Analyst, System Architect, Client, or Technical Manager/Lead, to gain detailed knowledge of
the requirement.
QA is involved from this very first step of the STLC, which helps to prevent introducing
defects into the software under test. The requirements can be either functional or
non-functional, such as performance or security requirements. Automation feasibility for
the project can also be assessed in this stage (if applicable).
Activities: Prepare the list of questions or queries and get them resolved by the Business
Analyst, System Architect, Client, or Technical Manager/Lead. Make out the list of all the
types of tests to be performed, such as Functional, Security, and Performance.
Deliverables: List of questions with all answers resolved from the business, i.e. testable
requirements; automation feasibility report (if applicable).
Test Planning: Test Planning is the most important phase of the Software Testing Life Cycle,
where the overall testing strategy is defined. This phase is also called the Test Strategy
phase. In this phase, typically the Test Manager (or Test Lead, depending on the company) is
involved in determining the effort and cost estimates for the entire project. This phase is
kicked off once the requirement gathering phase is completed, and based on the requirement
analysis, preparation of the Test Plan starts. The result of the Test Planning phase is the
Test Plan or Test Strategy document and the testing effort estimation document. Once the test
planning phase is completed, the QA team can start with the test case development activity.
The test case development activity is started once the test planning activity is
finished. This is the phase of the STLC where the testing team writes down the detailed
test cases. Along with the test cases, the testing team also prepares any test data
required for testing. Once the test cases are ready, they are reviewed by peer members
or the QA lead.
Deliverables: Test cases, test data, and the test case review report. Pre-requisite test
data preparation for executing the test cases.
Setting up the test environment is a vital part of the STLC. Basically, the test
environment determines the conditions under which the software is tested. This is an
independent activity and can be started in parallel with test case development. The
test team is usually not involved in setting up the testing environment; depending on
the company, the developers or the customer create it. Meanwhile, the testing team
should prepare smoke test cases to check the readiness of the test environment setup.
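A smoke test for environment readiness can be as simple as probing that the services under test are reachable. The sketch below is a minimal example; the host names and ports (application on 8080, database on 5432) are hypothetical assumptions, not part of the paper.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Smoke check: can we even reach the service under test?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical readiness checks for a test environment.
smoke_checks = {
    "app reachable": lambda: port_open("localhost", 8080),
    "db reachable":  lambda: port_open("localhost", 5432),
}

for name, check in smoke_checks.items():
    print(name, "OK" if check() else "NOT READY")
```
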
Test Execution:
Once the Test Case Development and Test Environment Setup activities are completed,
the test execution phase can be kicked off. In this phase the testing team starts
executing test cases based on the test plan and the test cases prepared in the prior
step.
Once a test case passes, it is marked as Passed. If a test case fails, the
corresponding defect is reported to the development team via the bug tracking system,
and the bug is linked to the corresponding test case for further analysis. Ideally,
every failed test case should be associated with at least one bug. Using this linking,
we can get each failed test case together with the bug associated with it. Once the
bug is fixed by the development team, the same test case can be executed again based
on the test plan.
If any test cases are blocked due to a defect, such test cases are marked as Blocked,
so a report can be produced showing how many test cases passed, failed, were blocked,
or were not run. Once the defects are fixed, the same Failed or Blocked test cases are
executed again to retest the functionality.
Entry criteria: Test Plan or Test Strategy document. Activity: Execute the test cases
based on the test planning. Deliverable: Test case execution report.
Test Cycle Closure:
Call a testing team meeting and evaluate the cycle completion criteria based on test
coverage, quality, cost, time, critical business objectives, and the software itself.
Discuss what went well and which areas need improvement, and take the lessons from the
current STLC as input to upcoming test cycles; this helps to remove bottlenecks in the
STLC process. Test cases and bug reports are analyzed to find the defect distribution
by type and severity. Once the test cycle is complete, the test closure report and
test metrics are prepared.
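The closure-phase analysis of defect distribution by type and severity is a simple tally over the bug report. The sketch below uses hypothetical defect data to illustrate it.

```python
from collections import Counter

# Hypothetical bug-report records from a completed test cycle.
bugs = [
    {"type": "design", "severity": "high"},
    {"type": "coding", "severity": "low"},
    {"type": "coding", "severity": "high"},
]

# Defect distribution by type and by severity.
by_type = Counter(b["type"] for b in bugs)
by_severity = Counter(b["severity"] for b in bugs)
print(dict(by_type))      # → {'design': 1, 'coding': 2}
print(dict(by_severity))  # → {'high': 2, 'low': 1}
```
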
Mutation testing
• Mutation testing is a technique that focuses on measuring the adequacy of test data
(or test cases).
• The original intention behind mutation testing was to expose and locate weaknesses in
test cases.
• Thus, mutation testing is a way to measure the quality of test cases, and the actual
testing of program units is an added benefit.
• Mutation testing is not a testing strategy like control flow or data flow testing. It should
be used to supplement traditional unit testing techniques.
• The mutations introduced to the source code are designed to imitate common
programming errors.
• A good unit test suite will typically detect the program mutations and fail on them.
• Mutation testing is used on many different platforms, including Java, C++, C#, and
Ruby.
• Killed mutant: A mutant is said to be killed when the execution of a test case causes
it to fail and the mutant is considered to be dead.
• Equivalent mutants: Some mutants are equivalent to the given program, that is, such
mutants always produce the same output as the original program.
• Killable or stubborn mutants: The result of executing a mutant may be different from
the expected result, but a test suite does not detect the failure because it does not have
the right test case. In this scenario, the mutant is called killable or stubborn, that is, the
existing set of test cases is insufficient to kill it.
• Mutation Score:
A mutation score for a set of test cases is the percentage of non-equivalent mutants
killed by the test suite. The test suite is said to be mutation adequate if its mutation
score is 100%.

Mutation Score = 100 * D / (N - E)

where D is the number of dead (killed) mutants, N is the total number of mutants, and E
is the number of equivalent mutants.
int a, b, c;
if (a > b && a > c)
    printf("%d is largest", a);
else if (b > c)
    printf("%d is largest", b);
else
    printf("%d is largest", c);
The following table identifies the killed and killable mutants for the above
program:
N: Total no of mutants
Mutation Score
= 100 * D/(N-E)
=100 *5/(5-0)
=100
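The score computation above can be reproduced in miniature. The sketch below mutates a "largest of three" function (mirroring the program above) and computes the mutation score; the three mutants and the test cases are illustrative assumptions, not the paper's actual table of five mutants.

```python
def largest(a, b, c):
    """Original program: largest of three numbers."""
    if a > b and a > c:
        return a
    elif b > c:
        return b
    return c

# Mutants imitate common programming errors (operator replacement).
mutants = {
    "M1: a > c  ->  a < c": lambda a, b, c: a if (a > b and a < c) else (b if b > c else c),
    "M2: and    ->  or":    lambda a, b, c: a if (a > b or a > c) else (b if b > c else c),
    "M3: b > c  ->  b < c": lambda a, b, c: a if (a > b and a > c) else (b if b < c else c),
}

tests = [(3, 1, 2), (1, 3, 2), (2, 1, 3)]  # the test suite being measured

def mutation_score(tests, mutants, equivalent=0):
    # A mutant is killed when some test makes it disagree with the original.
    dead = sum(
        1 for mutant in mutants.values()
        if any(mutant(*t) != largest(*t) for t in tests)
    )
    # Score = 100 * D / (N - E): D dead, N total, E equivalent mutants.
    return 100.0 * dead / (len(mutants) - equivalent)

print(mutation_score(tests, mutants))  # → 100.0 (all three mutants killed)
```
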
ANS :
● Software Testing Metrics are useful for evaluating the health, quality, and progress of a
software testing effort.
● Without metrics, it would be almost impossible to quantify, explain, or demonstrate software
quality.
● Metrics also provide a quick insight into the status of software testing efforts, hence resulting
in better control through smart decision making.
● Traditional software testing metrics were defect-based and were used to measure the
team's effectiveness.
● They usually revolved around the number of defects that leaked to production, named
Defect Leakage, i.e. the defects missed during a release, which reflects the team's
ability and product knowledge.
● Another team metric was the percentage of valid versus invalid defects. These
metrics can also be captured at an individual level, but generally they are measured
at a team level.
Software Testing Metrics had always been an integral part of software testing projects, but the nature
and type of metrics collected and shared have changed over time.
● Helps to analyze metrics in every phase of testing to improve defect removal efficiency
● Enforces a better relationship between testing coverage, risks, and the complexity of the systems
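The two defect-based metrics described above can be written down directly. The exact formulas vary by organization; the forms below are widely used and should be read as illustrative assumptions.

```python
def defect_leakage(defects_found_in_uat, defects_found_in_testing):
    """Percentage of defects that 'leaked' past the testing phase into UAT."""
    return 100.0 * defects_found_in_uat / defects_found_in_testing

def valid_defect_ratio(valid, invalid):
    """Percentage of reported defects that turned out to be valid."""
    return 100.0 * valid / (valid + invalid)

print(defect_leakage(5, 100))      # → 5.0
print(valid_defect_ratio(90, 10))  # → 90.0
```
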
Question 5
1.Correctness:
● If a software system satisfies all the functional requirements, the system is said
to be correct.
2.Reliability:
● Customers may still consider an incorrect system to be reliable if the failure rate
is very small and it does not adversely affect their mission objectives. A system with
a very low failure rate is considered to be reliable.
3.Efficiency:
● Efficiency concerns how well the system uses resources, such as processing time and
memory, while providing its functionalities.
4.Integrity:
● In other words, integrity refers to the extent to which access to software or data
by unauthorized persons can be controlled.
5.Usability:
● Usability refers to the ease with which users can learn to operate the system.
6.Maintainability:
● Maintainability refers to how easily and inexpensively the maintenance tasks can
be performed.
7.Testability:
● Testability refers to the effort required to verify that the system performs its
intended functions.
8.Flexibility:
● In order to measure the flexibility of a system, one has to find an answer to the
question: how easily can the system be modified or extended?
9.Portability:
● Portability refers to the effort required to transfer the software from one
environment to another, as needed by the business.
10.Reusability:
● Reusability saves the cost and time to develop and test the component being
reused.
11.Interoperability:
● If the output of one system is used as input to another system, it is likely that
the two systems run on different platforms; interoperability refers to the effort
required to couple one system with another system.
1.Access Audit : Ease with which the software and data can be checked for compliance
with standards.
2.Access Control : Provisions for control and protection of the software and data.
3.Accuracy : Precision of computations and output.
4.Completeness : Degree to which full implementation of the required functionalities has
been achieved.
5.Communicativeness : Ease with which the inputs and outputs can be assimilated.
6.Error Tolerance : Degree to which continuity of operation is ensured under adverse
conditions.
7.Expandability : Degree to which storage requirements or software functions can be
expanded.
8.Hardware Independence : Degree to which the software is decoupled from the
underlying hardware.
Programming remains the biggest & most critical component of test case automation.
Hence designing & coding of test cases is extremely important, for their execution and
maintenance to be effective. Seven fundamental attributes/guidelines of good
automated test cases are discussed below :
1) Simplicity:
The test case should have a single objective. Multi-objective test cases are difficult to
understand and design. There should not be more than 10-15 test steps per test case,
excluding the setup and clean up steps. Multi-purpose test cases are likely to break or
give misleading results. If the execution of a complex test leads to a system failure, it is
difficult to isolate the cause of the failure.
2)Modularity:
Each test case should have a setup and cleanup phase before and after the execution
test steps, respectively. The setup phase ensures that the initial conditions are met
before the start of the test steps. Similarly, the cleanup phase puts the system back in
the initial state, that is, the state prior to setup. Each test step should be small and
precise. One input stimulus should be provided to the system at a time and the
response verified (if applicable) with an interim verdict. The test steps are building
blocks from reusable libraries that are put together to form multi-step test cases.
3) Robustness:
A test case verdict (pass or fail) should be assigned in a way that is unambiguous and
understandable. Robust test cases can ignore trivial failures such as
one pixel mismatch in a graphical display. Care should be taken so that false test
results are minimized. The test cases must have built-in mechanisms to detect and
recover from errors. For example, a test case need not wait indefinitely if the software
under test has crashed. Rather, it can wait for a while and terminate an indefinite wait
by using a timer mechanism.
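The timer mechanism described above is usually implemented as a bounded polling loop. The sketch below is a minimal example; `system_is_up` is a hypothetical probe function standing in for a real health check of the software under test.

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False  # terminate the wait instead of hanging the test run

def system_is_up():
    return True  # placeholder probe; replace with a real health check

verdict = "pass" if wait_for(system_is_up, timeout=2.0) else "fail"
print(verdict)  # → pass
```
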
4) Reusability:
The test steps are built to be configurable, that is, variables should not be hard coded.
They can take values from a single configurable file. Attention should be given while
coding test steps to ensure that a single global variable is used, instead of multiple,
decentralized, hard-coded variables. Test steps are made as independent of test
environments as possible. The automated test cases are categorized into different
groups so that subsets of test steps and test cases can be extracted to be reused for
other platforms and/or configurations. Finally, in GUI automation hard-coded screen
locations must be avoided.
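The "single configurable file" advice above can be sketched as follows: every test step reads its values from one config instead of hard-coding them. The file name, keys, and URL below are hypothetical examples.

```python
import json
import os
import tempfile

# Write a hypothetical config file (in a real suite this file already exists).
config_text = '{"base_url": "http://localhost:8080", "timeout": 5}'
path = os.path.join(tempfile.gettempdir(), "test_config.json")
with open(path, "w") as f:
    f.write(config_text)

# One centralized source of test settings, loaded once.
with open(path) as f:
    CONFIG = json.load(f)

def open_login_page():
    # A test step uses CONFIG instead of a hard-coded URL or timeout.
    return f"GET {CONFIG['base_url']}/login (timeout={CONFIG['timeout']}s)"

print(open_login_page())  # → GET http://localhost:8080/login (timeout=5s)
```
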
5) Maintainability:
Any changes to the software under test will have an impact on the automated test cases
and may require necessary changes to be done to the affected test cases. Therefore, it
is required to conduct an assessment of the test cases that need to be modified before
an approval of the project to change the system. The test suite should be organized and
categorized in such a way that the affected test cases are easily identified. If a particular
test case is data driven, it is recommended that the input test data be stored separately
from the test case and accessed by the test procedure as needed. The test cases must
comply with coding standard formats. Finally, all the test cases should be controlled with
a version control system.
6) Documented:
The test cases and the test steps must be well documented. Each test case gets a
unique identifier, and the test purpose is clear and understandable. Creator name, date
of creation, and the last time it was modified must be documented. There should be
traceability to the features and requirements being checked by the test case. The
situation under which the test case cannot be used is clearly described. The
environment requirements are clearly stated with the source of input test data (if
applicable). Finally, the result, that is, pass or fail, evaluation criteria are clearly
described.
7) Independence:
Each test case is designed as a cohesive entity, and test cases should be largely
independent of each other. Each test case consists of test steps, which are naturally
linked together. The predecessor and successor of a test step within a test case should
be clearly defined.
Following three important rules are to be kept in mind while automating the test
cases:
Rule-1: Independent of data value: The possible corruption of data associated with
one test case should have no impact on other test cases.
Rule-2: Independent of failure: The failure of one test case should not cause a ripple
of failures among a large number of subsequent test cases.
Rule-3: Independent of final state: The state in which the environment is left by a test
case should have no impact on test cases to be executed later.
QUESTION 6
1.)A test suite is a container that has a set of tests which help testers in executing and
reporting the test execution status. It can take any of three states, namely Active, In
Progress, and Completed.
2.)A Test case can be added to multiple test suites and test plans. After creating a test
plan, test suites are created which in turn can have any number of tests.
3.)Test suites are created based on the cycle or based on the scope. It can contain any
type of tests, viz - functional or Non-Functional.
• Detail test groups and subgroups are outlined in the test suite structure section based
on the test categories identified in the test approach section.
• Test objectives are created for each test group and subgroup based on the system
requirements and functional specification.
• If some existing test cases, automated or manual, need to be run as regression tests,
those test cases must be included in the test suite.
6.)If each test case represents a piece of a scenario, such as the elements that simulate
completing a transaction, use a test suite. For instance, a test suite might contain four
test cases, each with a separate test script: logging in to the application, adding new
products to the cart, submitting the order, and logging out.
7.)Test suites can identify gaps in a testing effort where the successful completion of
one test case must occur before you begin the next test case. For instance, you cannot
add new products to a shopping cart before you successfully log in to the application.
When you run a test suite in sequential mode, you can choose to stop the suite
execution if a single test case does not pass. Stopping the execution is useful if running
a test case in a test suite depends on the success of previous test cases.
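The sequential stop-on-failure behavior described above can be sketched as a small suite runner. The test names and the failing step are hypothetical examples.

```python
def run_suite(tests, stop_on_failure=True):
    """Run (name, callable) pairs in order; optionally stop at the first failure."""
    results = {}
    for name, test in tests:
        try:
            test()
            results[name] = "passed"
        except AssertionError:
            results[name] = "failed"
            if stop_on_failure:
                break  # later tests depend on this one succeeding
    return results

def add_to_cart():
    raise AssertionError("cart is empty: login produced no session")

suite = [
    ("login", lambda: None),        # passes
    ("add_to_cart", add_to_cart),   # fails
    ("checkout", lambda: None),     # never runs in stop-on-failure mode
]
result = run_suite(suite)
print(result)  # → {'login': 'passed', 'add_to_cart': 'failed'}
```
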
8.) Test suites often need to adapt to the software they are intended to test. The core
software changes and grows, and as such, its test suite also needs to change and grow.
However, test suites can often grow so large as to be unmaintainable.
9.) Maintaining the suite becomes too expensive. Once we have a suite in place, we
have to maintain it. As the size of the suite grows, the amount of maintenance of
existing test grows. It grows in proportion to the number of tests in the suite and the rate
of change of the underlying software being tested. As the amount of maintenance
grows, so does its expense.
10.) Techniques have been developed to assist in the maintenance of these test suites,
specifically by allowing for test-suite reduction (while preserving coverage adequacy)
and test-suite prioritization.
ANS:
Traditional testing methods are not directly applicable to OO programs, as OO programs
involve concepts such as encapsulation, inheritance, and polymorphism. These concepts
lead to issues which are yet to be fully resolved. Some of these issues are listed below.
1.)Basic unit of unit testing.
● In OO programs, the natural basic unit of testing is the class rather than an
individual function. When individually validated classes are used to create more
complex classes, the composed class needs to be tested as a unit again.
2.)Implication of Encapsulation.
● Encapsulation hides the internal state of an object, making that state difficult to
observe and verify during testing. In addition, the state of the object at the time of
invocation of a method affects its behavior. Hence, testing depends not only on the
object but on the state of the object as well.
3.)Implication of Inheritance.
● Test cases designed for a base class are not always applicable to a derived class
(especially when the derived class is used in a different context). Thus, most inherited
methods need to be retested in the derived class's environment.
4.)Implication of Genericity.
● A generic class may behave differently for different instantiations. But a UniqueTable
class, for example, would need to evaluate the equivalence of its elements, and this
depends on the type the class is instantiated with; hence each instantiation may need
to be tested separately.
5.)Implications of Polymorphism.
● Each possible binding of a polymorphic component requires a separate set of test
cases.
● Many server classes may need to be integrated before a client class can be
tested.
C) Acceptance testing
Acceptance Testing is a level of software testing where a system is
tested for acceptability. The purpose of this test is to evaluate the system's
compliance with the business requirements and assess whether it is acceptable for
delivery. Testing does not normally follow a strict procedure and is not scripted
but is rather ad hoc.
Definition by ISTQB
Acceptance testing: formal testing with respect to user needs, requirements, and
business processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity to
determine whether or not to accept the system.
Analogy
During the process of manufacturing a ballpoint pen, the cap, the body, the
tail and clip, the ink cartridge and the ballpoint are produced separately and
unit tested separately. When two or more units are ready, they are
assembled and integration testing is performed. When the complete pen is
integrated, system testing is performed. Once system testing is complete,
acceptance testing is performed to confirm that the ballpoint pen is ready
to be made available to the end users.
Tasks
● Acceptance Test Plan
○ Prepare
○ Review
○ Rework
○ Baseline
● Acceptance Test Cases / Scripts
○ Prepare
○ Review
○ Rework
○ Baseline
● Acceptance Test
○ Perform
When is it performed?
Acceptance Testing is performed after System Testing and before making the system
available for actual use.
Who performs it?
Internal acceptance testing is performed by members of the organization that
developed the software but who are not directly involved in the project (development
or testing), such as members of product management, sales, or customer support.
External acceptance testing is performed by people who are not employees of
the organization that developed the software. They are the ones
who determine whether the system is acceptable.
● Organizations have realized the need to test this data to ensure data completeness
and data integrity. They have also realized the fact that comprehensive
testing of data at every point throughout the ETL process is important and
inevitable, as more of this data is being collected and used for strategic
decisions. Defects are far cheaper to fix early in the process than in
the final production phase. Now, let us see some of the issues that are faced in
ETL testing.
ETL Testing
● An ETL tool extracts the data from all these heterogeneous data sources, transforms
it (applying calculations, joining fields, removing incorrect data fields, etc.), and
loads it into a Data Warehouse.
● ETL testing verifies the movement of data from a source system to the actual data
warehouse, and applies to other data integration projects as well.
● A DW system contains historical data, so the data volume is too large and extremely
complex to test in the target system.
● ETL testers are normally not provided with access to see job schedules in
the ETL tool. They hardly have access to BI Reporting tools to see the final
layout of the reports and the data inside the reports.
● ETL testing involves various complex SQL concepts for data validation in
the target system.
● Sometimes the testers are not provided with the source-to-target mapping
information.
● An unstable testing environment delays the development and testing
process.
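One of the most common ETL checks implied above is reconciling what was extracted against what was loaded after transformation. The sketch below is a toy example; the source records, the transformation rule, and the rejection policy are hypothetical assumptions.

```python
# Hypothetical extracted source rows; one has an incorrect data field.
source = [
    {"id": 1, "amount": "10"},
    {"id": 2, "amount": "oops"},
    {"id": 3, "amount": "7"},
]

def transform(row):
    """Transform step: cast fields, rejecting rows with incorrect data."""
    try:
        return {"id": row["id"], "amount": int(row["amount"])}
    except ValueError:
        return None  # rejected record

# Load step: only valid transformed rows reach the target.
target = [t for t in map(transform, source) if t is not None]

# Record-count reconciliation: loaded + rejected must equal extracted.
rejected = len(source) - len(target)
assert len(target) + rejected == len(source)
print(len(target), rejected)  # → 2 1
```
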