
STQA PAPER SOLUTION (DEC 16)

QUESTION 1:

A) Differentiate between Exhaustive testing and Effective testing

Exhaustive testing:

1. Exhaustive testing is a test approach in which all possible permutations and combinations of test inputs are used for testing, so as to ensure that everything is working as expected.
2. This type of testing validates that the product cannot be crashed or destroyed in all possible random situations and conditions.
3. This testing mainly depends on application risk and timeframe, based on which the tester performs the testing.
4. Benefits:
● It ensures that high-priority tasks will be tested for sure.
● Improved customer satisfaction.
● The count of UAT defects would be reduced, which would also result in productive testing.
● Improved exhaustive testing can be performed with less manual effort.

Effective testing:

1. Everything can and should be tested:
● Test if all defined requirements are met
● Test the performance of the application
● Test each component
● Test the components integrated with each other
● Test the application end to end
● Test the application in various environments
● Test all the application paths
2. This type of testing validates:
● Testing in each phase of the development cycle to ensure that "bugs" (defects) are eliminated at the earliest
● Testing to ensure no "bugs" creep through into the final product
● Testing to ensure the reliability of the software
● Above all, testing to ensure that user expectations are met
3. This testing mainly depends on each level of testing providing adequate test coverage:
● Unit testing should ensure each and every line of code is tested.
● Integration testing should ensure the components can be integrated and all the interfaces of each component are working correctly.
4. Benefits:
● An effective testing strategy and process helps to minimize or eliminate these defects.
● The extent to which it eliminates post-production defects (design defects, coding defects, etc.) is a good measure of the effectiveness of the testing strategy and process.

B) Differentiate between Verification and Validation

Verification:

1. Verification is a static practice of verifying documents, design, code and program.
2. It does not involve executing the code.
3. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking, etc.
4. Verification is to check whether the software conforms to specifications.
5. It can catch errors that validation cannot catch. It is a low-level exercise.
6. Its target is the requirements specification, application and software architecture, high-level and complete design, and database design, etc.
7. Verification is done by the QA team to ensure that the software is as per the specifications in the SRS document.
8. It generally comes first; it is done before validation.

Validation:

1. Validation is a dynamic mechanism of validating and testing the actual product.
2. It always involves executing the code.
3. Validation uses methods like black box (functional) testing, gray box testing, and white box (structural) testing, etc.
4. Validation is to check whether the software meets the customer expectations and requirements.
5. It can catch errors that verification cannot catch. It is a high-level exercise.
6. Its target is the actual product: a unit, a module, a set of integrated modules, and the effective final product.
7. Validation is carried out with the involvement of the testing team.
8. It generally follows after verification.

C) Differentiate between black box and white box testing

1. Definition: Black box testing is a software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester. White box testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
2. Levels applicable to: Black box testing is mainly applicable to higher levels of testing (Acceptance Testing, System Testing). White box testing is mainly applicable to lower levels of testing (Unit Testing, Integration Testing).
3. Responsibility: Black box testing is generally done by independent software testers; white box testing is generally done by software developers.
4. Programming knowledge: Not required for black box testing; required for white box testing.
5. Implementation knowledge: Not required for black box testing; required for white box testing.
6. Basis for test cases: Requirement specifications for black box testing; detailed design for white box testing.
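To make the contrast concrete, here is a minimal Java/JUnit 5 sketch (not part of the original answer; the Calculator class is invented for illustration). The black box test is derived purely from the requirement specification, while the white box test is written after reading the code, to force a particular branch:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Calculator {
    // The implementation is visible to the white box tester only.
    static int max(int a, int b, int c) {
        if (a > b && a > c) return a;
        else if (b > c) return b;
        else return c;
    }
}

class CalculatorTest {
    // Black box: derived from the specification "return the greatest
    // of three numbers"; no knowledge of how max() is implemented.
    @Test
    void blackBoxGreatestOfThree() {
        assertEquals(9, Calculator.max(7, 9, 2));
    }

    // White box: written after reading the code, specifically to
    // exercise the 'else if (b > c)' branch.
    @Test
    void whiteBoxElseIfBranch() {
        assertEquals(5, Calculator.max(1, 5, 3));
    }
}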

D) Differentiate between top-down and bottom-up testing.

Top-down testing:

1. In this approach testing is conducted from the main module to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate the sub-module.
2. Advantages:
● Advantageous if major flaws occur toward the top of the program.
● Once the I/O functions are added, representation of test cases is easier.
● An early skeletal program allows demonstrations and boosts morale.
3. Disadvantages:
● Stub modules must be produced.
● Stub modules are often more complicated than they first appear to be.
● Before the I/O functions are added, representation of test cases in stubs can be difficult.
● Test conditions may be impossible, or very difficult, to create.
● Observation of test output is more difficult.
● Allows one to think that design and testing can be overlapped.
● Induces one to defer completion of the testing of certain modules.

Bottom-up testing:

1. In this approach testing is conducted from the sub-modules to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.
2. Advantages:
● Advantageous if major flaws occur toward the bottom of the program.
● Test conditions are easier to create.
● Observation of test results is easier.
3. Disadvantages:
● Driver modules must be produced.
● The program as an entity does not exist until the last module is added.
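To make the STUB/DRIVER distinction concrete, here is a minimal Java sketch (illustrative only; TaxService and BillingDriver are invented names, not from the original answer):

// Top-down: the real TaxService sub-module is not developed yet, so the
// modules above it are tested against a STUB that returns canned values.
interface TaxService {
    double taxFor(double amount);
}

class TaxServiceStub implements TaxService {
    public double taxFor(double amount) {
        return 10.0; // fixed, predictable value instead of real tax logic
    }
}

// Bottom-up: the real main module is not developed yet, so a DRIVER
// (a throwaway main method) exercises the finished sub-module directly.
class BillingDriver {
    public static void main(String[] args) {
        TaxService service = new TaxServiceStub(); // swap in the real one later
        System.out.println("Tax on 100.0 = " + service.taxFor(100.0));
    }
}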

QUESTION 2:

A) Explain the regression testing types.

Ans:
● Regression testing is a black box testing technique performed by executing units of code repeatedly to ensure that ongoing code modifications do not impact the system's functionality. Alterations to the application can occur in various forms, be it new functionality, bug fixes, integrations, functionality enhancements, interfaces, patches, etc.
● Many software development engineers would insist that as long as essential functions are tested and are able to deliver results as per requirements, it would suffice.
● While this may be practical, regression testing can prove to be a real blessing at a later stage, because rather than just guaranteeing the functionality being tested for, it ensures that there are no other nasty surprises.

Types of regression testing:

1. Unit regression – Unit regression testing, executed during the unit testing phase, tests the code as a single unit.
● It has a narrow and focused approach, where complex interactions and dependencies outside the unit of code in question are temporarily blocked.

2. Partial regression – Partial regression is performed after impact analysis. In this testing process, the newly added unit of code is made to interact with other parts of the older existing code, in order to determine that, even with the code modification, the system functions as desired.

3. Complete regression – Complete regression testing is often carried out when the code changes or software updates reach deep into the roots of the application.
● It is also carried out when there are multiple changes to the existing code. It gives a comprehensive view of the system as a whole and weeds out any unforeseen problems.
● A sort of "final" regression test is implemented to certify that the build (new lines of code) has not been altered for a period of time. This final version is then deployed to the end-users.
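As a small illustration (not from the original answer), a unit regression suite in JUnit 5 can pin down existing behavior so that later modifications that break it are caught immediately; DiscountCalculator and its rates are invented for the example:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountCalculator {
    double priceFor(String tier, double base) {
        if ("PREMIUM".equals(tier)) return base * 0.90;
        return base * 0.95;
    }
}

// Re-run after every change to DiscountCalculator: if a modification
// alters previously working behavior, one of these assertions fails.
class DiscountRegressionTest {
    @Test
    void standardCustomerStillGetsFivePercentOff() {
        assertEquals(95.0, new DiscountCalculator().priceFor("STANDARD", 100.0), 0.001);
    }

    @Test
    void premiumCustomerStillGetsTenPercentOff() {
        assertEquals(90.0, new DiscountCalculator().priceFor("PREMIUM", 100.0), 0.001);
    }
}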

B) Explain the software testing life cycle.

The following steps are involved in the Software Testing Life Cycle (STLC). Each step has its own entry criteria and deliverables:

● Requirement Analysis
● Test Planning
● Test Case Development
● Environment Setup
● Test Execution
● Test Cycle Closure

Requirement Analysis:

Requirement analysis is the very first step in the Software Testing Life Cycle (STLC). In this step the Quality Assurance (QA) team understands the requirements in terms of what will be tested and figures out the testable requirements. If any requirement is conflicting, missing, or not understood, the QA team follows up with the various stakeholders, such as the Business Analyst, System Architect, Client, or Technical Manager/Lead, to gain detailed knowledge of the requirement.

QA is involved from this very first step of the STLC, which helps to prevent defects from being introduced into the software under test. The requirements can be either functional or non-functional (like performance or security). Automation feasibility for the project can also be assessed in this stage (if applicable).

Entry Criteria: The following documents should be available: requirements specification and application architecture. Along with the above documents, acceptance criteria should be well defined.

Activities: Prepare the list of questions or queries and get them resolved with the Business Analyst, System Architect, Client, Technical Manager/Lead, etc. Make out the list of all types of tests to be performed, like functional, security, and performance.

Deliverables: List of questions with all answers resolved with the business (i.e. testable requirements); automation feasibility report (if applicable).

Test Planning:

Test planning is the most important phase of the software testing life cycle, where the entire testing strategy is defined. This phase is also called the Test Strategy phase. In this phase, typically the Test Manager (or Test Lead, depending on the company) is involved to determine the effort and cost estimates for the entire project. This phase is kicked off once the requirement gathering phase is completed, and based on the requirement analysis the team starts preparing the test plan. The result of the test planning phase is the Test Plan (or Test Strategy) and Testing Effort Estimation documents. Once the test planning phase is completed, the QA team can start with the test case development activity.

Entry Criteria:
● Requirements documents (updated version for unclear or missing requirements).
● Automation feasibility report.

Activities:
● Define the objective and scope of the project.
● List down the testing types involved in the STLC.
● Test effort estimation and resource planning.
● Selection of a testing tool, if required.
● Define the testing process overview.
● Define the test environment required for the entire project.
● Prepare the test schedules.

Deliverables:
● Test Plan or Test Strategy document.
● Testing Effort Estimation document.
Test Case Development:

The test case development activity starts once the test planning activity is finished. This is the phase of the STLC where the testing team writes down the detailed test cases. Along with the test cases, the testing team also prepares any test data required for testing. Once the test cases are ready, they are reviewed by peer members or the QA lead.

Also, the Requirement Traceability Matrix (RTM) is prepared. The Requirement Traceability Matrix is an industry-accepted format for tracking requirements, where each test case is mapped to a requirement. Using this RTM we can track backward and forward traceability.
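A minimal RTM might look like the following (the entries are illustrative only):

Requirement ID | Requirement description | Test case ID(s) | Status
REQ-001        | User can log in         | TC-01, TC-02    | Passed
REQ-002        | User can add a product  | TC-03           | Failed

Each row lets the team trace forward (requirement to tests) and backward (test to requirement).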

Entry Criteria:
● Requirements documents (updated version for unclear or missing requirements).
● Automation feasibility report.

Activities:
● Preparation of test cases.
● Preparation of test automation scripts (if required).
● Pre-requisite test data preparation for executing the test cases.

Deliverables:
● Test cases.
● Test data.
● Test automation scripts (if required).

Test Environment Setup:

Setting up the test environment is a vital part of the STLC. Basically, the test environment decides the conditions under which the software is tested. This is an independent activity and can be started in parallel with test case development. The test team is usually not involved in setting up the testing environment; depending on the company, the developer or the customer creates it. Meanwhile, the testing team should prepare the smoke test cases to check the readiness of the test environment setup.

Entry Criteria:
● Test plan is available.
● Smoke test cases are available.
● Test data is available.

Activities:
● Analyze the requirements and prepare the list of software and hardware required to set up the test environment.
● Set up the test environment.
● Once the test environment is set up, execute the smoke test cases to check the readiness of the test environment.

Deliverables:
● Test environment ready with test data.
● Result of the smoke test cases.

Test Execution:

Once test case development and the test environment setup are completed, the test execution phase can be kicked off. In this phase the testing team starts executing the test cases based on the test plan and the test cases prepared in the prior steps.

Once a test case passes, it is marked as Passed. If a test case fails, the corresponding defect is reported to the developer team via the bug tracking system, and the bug is linked to the corresponding test case for further analysis. Ideally, every failed test case should be associated with at least one bug. Using this linking we can get each failed test case along with the bug associated with it. Once the bug is fixed by the development team, the same test case can be executed again based on the test plan.

If any test cases are blocked due to a defect, such test cases can be marked as Blocked, so we can get a report of how many test cases passed, failed, were blocked, or were not run. Once the defects are fixed, the same Failed or Blocked test cases can be executed again to retest the functionality.

Entry Criteria:
● Test Plan or Test Strategy document.
● Test cases.
● Test data.

Activities:
● Execute the test cases based on the test plan.
● Mark the status of test cases as Passed, Failed, Blocked, Not Run, etc.
● Assign a bug ID for all Failed and Blocked test cases.
● Do retesting once the defects are fixed.
● Track the defects to closure.

Deliverables:
● Test case execution report.
● Defect report.
Test Cycle Closure:

Call a testing team meeting and evaluate the cycle completion criteria based on test coverage, quality, cost, time, critical business objectives, and software. Discuss what went well and which areas need to improve, and take the lessons from the current STLC as input to upcoming test cycles, which will help to remove bottlenecks in the STLC process. Test cases and bug reports are analyzed to find out the defect distribution by type and severity. Once the test cycle is complete, the test closure report and test metrics are prepared.

Entry Criteria:
● Test case execution is completed.
● Test case execution report.
● Defect report.

Activities:
● Evaluate cycle completion criteria based on test coverage, quality, cost, time, critical business objectives, and software.
● Prepare test metrics based on the above parameters.
● Prepare the test closure report.
● Share best practices for any similar projects in the future.

Deliverables:
● Test closure report.
● Test metrics.

QUESTION 4

A) Explain mutation testing with the help of an example.

Mutation testing

• Mutation testing is a technique that focuses on measuring the adequacy of test data
(or test cases).

• The original intention behind mutation testing was to expose and locate weaknesses in
test cases.

• Thus, mutation testing is a way to measure the quality of test cases, and the actual
testing of program units is an added benefit.

• Mutation testing is not a testing strategy like control flow or data flow testing. It should
be used to supplement traditional unit testing techniques.

• The mutations introduced to the source code are designed to imitate common
programming errors.

• A good unit test suite typically detects the program mutations and fails automatically.

• Mutation testing is used on many different platforms, including Java, C++, C# and Ruby.

• Mutation testing facilitates the following advantages:
● Program code fault identification
● Effective test case development
● Detection of loopholes in test data
● Improved software program quality
● Elimination of code ambiguity

• Mutant:​ A mutation of a program is a modification of the program created by


introducing a single, small, legal, syntactic change in the code. A modified program so
obtained is called a mutant.

• Killed mutant: ​A mutant is said to be killed when the execution of a test case causes
it to fail and the mutant is considered to be dead​.

• Equivalent mutants:​ Some mutants are equivalent to the given program, that is, such
mutants always produce the same output as the original program.

• Killable or stubborn mutants: ​The result of executing a mutant may be different from
the expected result, but a test suite does not detect the failure because it does not have
the right test case. In this scenario, the mutant is called killable or stubborn, that is, the
existing set of test cases is insufficient to kill it.

• Mutation Score:

The mutation score for a set of test cases is the percentage of non-equivalent mutants killed by the test suite. The test suite is said to be mutation adequate if its mutation score is 100%.

The mutation score is given by:

Mutation Score = 100 * D / (N - E)

where D is the number of dead (killed) mutants, N is the total number of mutants, and E is the number of equivalent mutants.

• The following example illustrates the concept of mutation testing:

Program to find the greatest of three numbers (a, b and c are assumed to be initialized from the test input):

int a, b, c;
if (a > b && a > c)
    System.out.println("The greatest number is " + a);
else if (b > c)
    System.out.println("The greatest number is " + b);
else
    System.out.println("The greatest number is " + c);

The following table identifies the killed and killable mutants for the above program. For each mutant, two test cases are run: test case 1 with input <1,2,3> (original output: 3) and test case 2 with input <1,1,1> (original output: 1).

M1: if(c > a && c < b)
Test case 1: mutant output: program terminates; test result: Pass; verdict: Killed
Test case 2: mutant output: program terminates; test result: Pass; verdict: Killed

M2: if(a == b && a > c)
Test case 1: mutant output: 3; test result: Failed; verdict: Killable
Test case 2: mutant output: program terminates; test result: Pass; verdict: Killed

M3: if(a > b || a > c)
Test case 1: mutant output: 3; test result: Failed; verdict: Killable
Test case 2: mutant output: program terminates; test result: Pass; verdict: Killed

M4: if(a > b && a > c); System.out.println("The greatest number is " + c)
Test case 1: mutant output: 3; test result: Failed; verdict: Killable
Test case 2: mutant output: program terminates; test result: Pass; verdict: Killed

M5: if(b >= a && b > c)
Test case 1: mutant output: 3; test result: Failed; verdict: Killable
Test case 2: mutant output: program terminates; test result: Pass; verdict: Killed

Here D = 5 (number of mutants killed), N = 5 (total number of mutants), and E = 0 (number of equivalent mutants), so:

Mutation Score = 100 * D / (N - E) = 100 * 5 / (5 - 0) = 100
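To tie the example to practice: killing a mutant simply means having a test whose assertion fails on the mutated code. The sketch below (illustrative; Greatest.of wraps the snippet above as a method) kills mutant M5, because for input <3,2,1> the original returns 3 while M5's condition if(b >= a && b > c) is false and its else-if returns b = 2:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Greatest {
    // The example program above, wrapped as a method so it can be tested.
    static int of(int a, int b, int c) {
        if (a > b && a > c) return a;
        else if (b > c) return b;
        else return c;
    }
}

class GreatestMutationTest {
    // Passes on the original code (returns 3). On mutant M5,
    // 'if(b >= a && b > c)' is false for <3,2,1>, the else-if returns b = 2,
    // the assertion fails, and the mutant is counted as killed.
    @Test
    void killsMutantM5() {
        assertEquals(3, Greatest.of(3, 2, 1));
    }
}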

B) Explain the need for software metrics.

ANS:

● Software testing metrics are useful for evaluating the health, quality, and progress of a software testing effort.
● Without metrics, it would be almost impossible to quantify, explain, or demonstrate software quality.
● Metrics also provide quick insight into the status of software testing efforts, hence resulting in better control through smart decision making.
● Traditional software testing metrics were based on defects and were used to measure the team's effectiveness.
● They usually revolved around the number of defects that leaked to production (termed Defect Leakage; an example calculation follows this list), or the defects that were missed during a release; these reflect the team's ability and product knowledge.
● Other team metrics were with respect to the percentage of valid and invalid defects. These metrics can also be captured at an individual level, but generally are measured at a team level.
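As an illustration (not part of the original answer), defect leakage is commonly computed along the following lines:

Defect Leakage (%) = (defects found after release / (defects found during testing + defects found after release)) * 100

For example, if the team found 90 defects during testing and 10 more escaped to production, leakage = 10 / (90 + 10) * 100 = 10%.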

Software testing metrics have always been an integral part of software testing projects, but the nature and type of metrics collected and shared have changed over time.

Top benefits of tracking software testing metrics include the following:

● Helps achieve cost savings by preventing defects
● Helps improve overall project planning
● Helps to understand whether the desired quality has been achieved
● Encourages keenness to further improve the processes
● Helps to analyze the associated risks more deeply
● Helps to analyze metrics in every phase of testing to improve defect removal efficiency
● Improves test automation ROI over a time period
● Establishes a better relationship between testing coverage, risks, and the complexity of the system

QUESTION 5

A) Explain McCall's quality factors.

ANS: A quality factor represents a behavioural characteristic of a system. The following is the list of quality factors:

1. Correctness:
● A software system is expected to meet the explicitly specified functional requirements and the implicitly expected non-functional requirements.
● If a software system satisfies all the functional requirements, the system is said to be correct.

2. Reliability:
● Customers may still consider an incorrect system to be reliable if the failure rate is very small and it does not adversely affect their mission objectives.
● Reliability is a customer perception, and an incorrect software system can still be considered reliable.

3. Efficiency:
● Efficiency concerns the extent to which a software system utilizes resources, such as computing power, memory, disk space, communication bandwidth, and energy.
● A software system must utilize as few resources as possible to perform its functionalities.

4. Integrity:
● A system's integrity refers to its ability to withstand attacks on its security.
● In other words, integrity refers to the extent to which access to software or data by unauthorized persons or programs can be controlled.

5. Usability:
● A software system is considered usable if human users find it easy to use.
● Without a good user interface, a software system may fizzle out even if it possesses many desired qualities.

6. Maintainability:
● Maintenance refers to the upkeep of products in response to the deterioration of their components due to continuous use of the products.
● Maintainability refers to how easily and inexpensively the maintenance tasks can be performed.
● For software products, there are three categories of maintenance activities: corrective, adaptive and perfective maintenance.

7. Testability:
● Testability means the ability to verify requirements. At every stage of software development, it is necessary to consider the testability aspect of a product.
● To make a product testable, designers may have to instrument a design with functionalities not available to the customer.

8. Flexibility:
● Flexibility is reflected in the cost of modifying an operational system.
● In order to measure the flexibility of a system, one has to answer the question: how easily can one add a new feature to the system?

9. Portability:
● Portability of a software system refers to how easily it can be adapted to run in a different execution environment.
● Portability gives customers the option to easily move from one execution environment to another to best utilize emerging technologies in furthering their business.

10. Reusability:
● Reusability means that a significant portion of one product can be reused, maybe with minor modifications, in another product.
● Reusability saves the cost and time to develop and test the component being reused.

11. Interoperability:
● Interoperability means whether or not the output of one system is acceptable as input to another system; the two systems are likely to run on different computers interconnected by a network.
● An example of interoperability is the ability to roam from one cellular phone network in one country to another cellular network in another country.

McCall's Quality Criteria:

A quality criterion is an attribute of a quality factor that is related to software development. For example, modularity is an attribute of the architecture of a software system.

List of McCall's Quality Criteria:

1. Access audit: Ease with which the software and data can be checked for compliance with standards.
2. Access control: Provisions for control and protection of the software.
3. Accuracy: Precision of computations and output.
4. Completeness: Degree to which full implementation of the required functionalities has been achieved.
5. Communicativeness: Ease with which the inputs and outputs can be assimilated.
6. Conciseness: Compactness of the source code, in terms of lines of code.
7. Consistency: Use of uniform design and implementation techniques.
8. Data commonality: Use of standard data representations.
9. Error tolerance: Degree to which continuity of operation is ensured under adverse conditions.
10. Execution efficiency: Run-time efficiency of the software.
11. Expandability: Degree to which storage requirements or software functions can be expanded.
12. Hardware independence: Degree to which the software is independent of the underlying hardware.
13. Modularity: Provision of highly independent modules.
14. Operability: Ease of operation of the software.
15. Simplicity: Ease with which the software can be understood.
16. Software efficiency: Run-time storage requirements of the software.
17. Traceability: Ability to link software components to requirements.


B) Explain the guidelines for automated testing.

Programming remains the biggest and most critical component of test case automation. Hence the design and coding of test cases is extremely important for their execution and maintenance to be effective. Seven fundamental attributes/guidelines of good automated test cases are discussed below:

1) Simplicity:

The test case should have a single objective. Multi-objective test cases are difficult to
understand and design. There should not be more than 10-15 test steps per test case,
excluding the setup and clean up steps. Multi-purpose test cases are likely to break or
give misleading results. If the execution of a complex test leads to a system failure, it is
difficult to isolate the cause of the failure.

2) Modularity:

Each test case should have a setup and a cleanup phase before and after the execution of the test steps, respectively. The setup phase ensures that the initial conditions are met before the start of the test steps. Similarly, the cleanup phase puts the system back in the initial state, that is, the state prior to setup. Each test step should be small and precise. One input stimulus should be provided to the system at a time and the response verified (if applicable) with an interim verdict. The test steps are building blocks from reusable libraries that are put together to form multi-step test cases.
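A minimal JUnit 5 sketch of the setup/cleanup structure (illustrative only; the temporary-directory resource stands in for whatever the real tests need):

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class FileStoreTest {
    private Path workDir;

    @BeforeEach
    void setUp() throws IOException {
        // Setup phase: establish the initial conditions before the test steps.
        workDir = Files.createTempDirectory("filestore-test");
    }

    @AfterEach
    void cleanUp() throws IOException {
        // Cleanup phase: put the system back in the state prior to setup.
        Files.deleteIfExists(workDir.resolve("data.txt"));
        Files.deleteIfExists(workDir);
    }

    @Test
    void writesOneRecord() throws IOException {
        // One stimulus, one verification, one unambiguous verdict.
        Path file = Files.writeString(workDir.resolve("data.txt"), "record-1");
        assertEquals("record-1", Files.readString(file));
    }
}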

3) Robustness and Reliability:

A test case verdict (pass or fail) should be assigned in a way that is unambiguous and understandable. Robust test cases can ignore trivial failures such as a one-pixel mismatch in a graphical display. Care should be taken so that false test results are minimized. The test cases must have built-in mechanisms to detect and recover from errors. For example, a test case need not wait indefinitely if the software under test has crashed; rather, it can wait for a while and terminate an indefinite wait by using a timer mechanism.
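The timer mechanism mentioned above can be sketched with JUnit 5's assertTimeoutPreemptively, which fails the test with a clear verdict instead of waiting indefinitely (slowOperation is a stand-in for a call to the software under test):

import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;
import java.time.Duration;
import org.junit.jupiter.api.Test;

class RobustnessTest {
    @Test
    void doesNotWaitForever() {
        // If the system under test hangs or has crashed, the test is
        // terminated after 5 seconds and reported as Failed.
        assertTimeoutPreemptively(Duration.ofSeconds(5), () -> slowOperation());
    }

    private void slowOperation() throws InterruptedException {
        Thread.sleep(100); // stand-in for a call into the system under test
    }
}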

4) Reusability:

The test steps are built to be configurable, that is, variables should not be hard coded.
They can take values from a single configurable file. Attention should be given while
coding test steps to ensure that a single global variable is used, instead of multiple,
decentralized, hard-coded variables. Test steps are made as independent of test
environments as possible. The automated test cases are categorized into different
groups so that subsets of test steps and test cases can be extracted to be reused for
other platforms and/or configurations. Finally, in GUI automation hard-coded screen
locations must be avoided.

5) Maintainability:

Any changes to the software under test will have an impact on the automated test cases and may require changes to the affected test cases. Therefore, an assessment of the test cases that need to be modified should be conducted before a change to the system is approved. The test suite should be organized and categorized in such a way that the affected test cases are easily identified. If a particular test case is data driven, it is recommended that the input test data be stored separately from the test case and accessed by the test procedure as needed. The test cases must comply with coding standard formats. Finally, all the test cases should be controlled with a version control system.

6) Documented:

The test cases and the test steps must be well documented. Each test case gets a
unique identifier, and the test purpose is clear and understandable. Creator name, date
of creation, and the last time it was modified must be documented. There should be
traceability to the features and requirements being checked by the test case. The
situation under which the test case cannot be used is clearly described. The
environment requirements are clearly stated with the source of input test data (if
applicable). Finally, the result, that is, pass or fail, evaluation criteria are clearly
described.

7) Independent and Self-sufficient:

Each test case is designed as a cohesive entity, and test cases should be largely independent of each other. Each test case consists of test steps, which are naturally linked together. The predecessor and successor of a test step within a test case should be clearly identified.

The following three important rules are to be kept in mind while automating test cases:

Rule-1: Independent of data value:​ The possible corruption of data associated with
one test case should have no impact on other test cases.

Rule-2: Independent of failure:​ The failure of one test case should not cause a ripple
of failures among a large number of subsequent test cases.

Rule-3: Independent of final state:​ The state in which the environment is left by a test
case should have no impact on test cases to be executed later.

QUESTION 6

Write short notes on

A) Test suite minimization problem

1.) A test suite is a container that holds a set of tests, which helps testers in executing and reporting the test execution status. It can take any of three states, namely Active, In Progress and Completed.

2.) A test case can be added to multiple test suites and test plans. After creating a test plan, test suites are created, which in turn can have any number of tests.

3.) Test suites are created based on the cycle or based on the scope. A suite can contain any type of tests, viz. functional or non-functional.

4.) The test suite structure is an essential item of a system test plan.

5.) Test Suite Structure:

• Detailed test groups and subgroups are outlined in the test suite structure section, based on the test categories identified in the test approach section.
• Test objectives are created for each test group and subgroup based on the system requirements and functional specification.
• If some existing test cases, automated or manual, need to be run as regression tests, those test cases must be included in the test suite.

6.) If each test case represents a piece of a scenario, such as the elements that simulate completing a transaction, use a test suite. For instance, a test suite might contain four test cases, each with a separate test script:

Test case 1: Login
Test case 2: Add New Products
Test case 3: Checkout
Test case 4: Logout

7.)Test suites can identify gaps in a testing effort where the successful completion of
one test case must occur before you begin the next test case. For instance, you cannot
add new products to a shopping cart before you successfully log in to the application.
When you run a test suite in sequential mode, you can choose to stop the suite
execution if a single test case does not pass. Stopping the execution is useful if running
a test case in a test suite depends on the success of previous test cases.

8.) Test suites often need to adapt to the software they are intended to test. The core software changes and grows, and as such, its test suite also needs to change and grow. However, test suites can often grow so large as to be unmaintainable.

9.) Maintaining the suite becomes too expensive. Once we have a suite in place, we
have to maintain it. As the size of the suite grows, the amount of maintenance of
existing test grows. It grows in proportion to the number of tests in the suite and the rate
of change of the underlying software being tested. As the amount of maintenance
grows, so does its expense.

10.) Techniques have been developed to assist in the maintenance of these test suites, specifically by allowing for test-suite reduction (while preserving coverage adequacy) and test-suite prioritization.
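To illustrate the reduction idea, here is a generic greedy sketch in Java (an illustration, not any specific tool's algorithm): repeatedly pick the test that covers the most still-uncovered requirements, so coverage adequacy is preserved with fewer tests:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GreedySuiteReduction {

    // coverage maps each test case to the requirements (or branches) it covers.
    public static List<String> reduce(Map<String, Set<String>> coverage) {
        Set<String> uncovered = new HashSet<>();
        coverage.values().forEach(uncovered::addAll);

        List<String> reduced = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            String best = null;
            int bestGain = 0;
            // Pick the test that covers the most still-uncovered items.
            for (Map.Entry<String, Set<String>> e : coverage.entrySet()) {
                Set<String> gain = new HashSet<>(e.getValue());
                gain.retainAll(uncovered);
                if (gain.size() > bestGain) {
                    bestGain = gain.size();
                    best = e.getKey();
                }
            }
            reduced.add(best);
            uncovered.removeAll(coverage.get(best));
        }
        return reduced; // covers everything the full suite covered
    }

    public static void main(String[] args) {
        Map<String, Set<String>> cov = new LinkedHashMap<>();
        cov.put("TC1", Set.of("R1", "R2"));
        cov.put("TC2", Set.of("R2"));
        cov.put("TC3", Set.of("R3"));
        System.out.println(reduce(cov)); // prints [TC1, TC3]; TC2 is redundant
    }
}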

B) Issues in object-oriented testing

ANS:

Traditional testing methods are not directly applicable to OO programs, as they involve OO concepts including encapsulation, inheritance, and polymorphism. These concepts lead to issues that are yet to be resolved. Some of these issues are listed below.
1.) Basic unit of unit testing.

● The class is the natural unit for unit test case design.
● The methods are meaningless apart from their class.
● Testing a class instance (an object) can validate a class in isolation.
● When individually validated classes are used to create more complex classes in an application system, the entire subsystem must be tested as a whole before it can be considered validated (integration testing).

2.) Implication of Encapsulation.

● Encapsulation of attributes and methods in a class may create obstacles while testing. As methods are invoked through an object of the corresponding class, testing cannot be accomplished without the object.
● In addition, the state of the object at the time of invocation of a method affects its behavior. Hence, testing depends not only on the object but also on the state of the object, which is very difficult to acquire.

3.) Implication of Inheritance.

● Inheritance introduces problems that are not found in traditional software.
● Test cases designed for a base class are not always applicable to a derived class (especially when the derived class is used in a different context). Thus, most testing methods require some kind of adaptation in order to function properly in an OO environment.

4.) Implication of Genericity.

● Genericity is basically a change in the underlying structure.
● We need to apply white box testing techniques that exercise this change.
● Parameterization may or may not affect the functionality of access methods. In the generic (parameterized) Tableclass below, elemType may have little impact on the implementations of the access methods of the Table:

class Tableclass(elemType)
    int numberelements;
    create();
    insert(elemType entry);
    delete(elemType entry);
    isEmpty() returns boolean;
    isentered(elemType entry) returns boolean;
endclass;

● But a UniqueTable class would need to evaluate the equivalence of elements, and this could vary depending on the representation of elemType:

class UniqueTable extends Table
    insert(elemType entry);
endclass;

5.) Implications of Polymorphism

● Each possible binding of a polymorphic component requires a separate set of test cases (see the sketch after this list).
● Many server classes may need to be integrated before a client class can be tested.
● It is difficult to determine all such bindings.
● It complicates integration planning and testing.
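A short Java sketch of the binding problem (names invented for illustration): the same client call shape.area() behaves differently for each concrete binding, so each binding needs its own test cases:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

interface Shape {
    double area();
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class ShapeBindingTest {
    // The polymorphic call shape.area() must be verified once per binding.
    @Test
    void squareBinding() {
        Shape shape = new Square(2.0);
        assertEquals(4.0, shape.area(), 1e-9);
    }

    @Test
    void circleBinding() {
        Shape shape = new Circle(1.0);
        assertEquals(Math.PI, shape.area(), 1e-9);
    }
}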

6.) Implications for testing processes

● Here we need to re-examine all testing techniques and processes.

C) Acceptance testing

Acceptance testing uses the Black Box Testing method. Testing does not normally follow a strict procedure and is not scripted, but is rather ad hoc.


ACCEPTANCE TESTING is a level of software testing where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.

Definition by ISTQB

● Acceptance testing: Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Analogy

During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed. Once System Testing is complete, Acceptance Testing is performed to confirm that the ballpoint pen is ready to be made available to the end-users.

Tasks

● Acceptance Test Plan
  ○ Prepare
  ○ Review
  ○ Rework
  ○ Baseline
● Acceptance Test Cases/Checklist
  ○ Prepare
  ○ Review
  ○ Rework
  ○ Baseline
● Acceptance Test
  ○ Perform

When is it performed?

Acceptance Testing is the fourth and last level of software testing, performed after System Testing and before making the system available for actual use.

Who performs it?

● Internal Acceptance Testing (also known as Alpha Testing) is performed by members of the organization that developed the software but who are not directly involved in the project (development or testing). Usually, it is the members of Product Management, Sales and/or Customer Support.
● External Acceptance Testing is performed by people who are not employees of the organization that developed the software.
  ○ Customer Acceptance Testing is performed by the customers of the organization that developed the software. They are the ones who asked the organization to develop the software. [This is in the case of the software not being owned by the organization that developed it.]
  ○ User Acceptance Testing (also known as Beta Testing) is performed by the end users of the software. They can be the customers themselves or the customers' customers.

D) Challenges in data warehouse testing

Importance of data warehouse testing:

● Organizations with already well-defined IT practices are at an innovative stage, creating the next level of technology transformation by constructing their own data warehouses to store and monitor real-time data.
● They have realized the need to test this data to ensure data completeness and data integrity. They have also realized that comprehensive testing of data at every point throughout the ETL process is important and inevitable, as more of this data is being collected and used for strategic decision-making that affects their business forecasts.
● But certain current strategies are time-consuming, resource-intensive, and inefficient. Thus, a well-planned, well-defined and effective ETL testing scope guarantees smooth conversion of the project to the final production phase. Now, let us see some of the issues that are common to ETL and data warehouse testing.

ETL Testing

● An ETL tool extracts the data from heterogeneous data sources, transforms the data (applying calculations, joining fields and keys, removing incorrect data fields, etc.), and loads it into a data warehouse.
● ETL stands for Extract-Transform-Load and is the typical process of loading data from a source system into the actual data warehouse and other data integration projects.
● It is important to know that independent verification and validation of data is gaining huge market potential.

Some of the important ETL testing challenges are:

● Loss of data might occur during the ETL process.
● Incorrect, incomplete or duplicate data.
● The DW system contains historical data, so the data volume is too large and extremely complex to perform ETL testing in the target system.
● ETL testers are normally not provided with access to see job schedules in the ETL tool. They hardly have access to BI reporting tools to see the final layout of reports and the data inside the reports.
● It is tough to generate and build test cases, as the data volume is too high and complex.
● ETL testers normally don't have an idea of end-user report requirements and the business flow of the information.
● ETL testing involves various complex SQL concepts for data validation in the target system (a sketch follows this list).
● Sometimes the testers are not provided with the source-to-target mapping information.
● An unstable testing environment delays the development and testing of a process.
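As an illustration of the SQL-based validation mentioned above, here is a minimal JDBC sketch (the URLs, table names, and the row-count reconciliation check are assumptions for the example, and a suitable JDBC driver must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EtlRowCountCheck {

    // A row-count mismatch between source and target suggests data loss
    // or duplication introduced during the ETL process.
    public static void main(String[] args) throws Exception {
        long src = count("jdbc:postgresql://src-host/sales",
                         "SELECT COUNT(*) FROM orders");
        long tgt = count("jdbc:postgresql://dwh-host/warehouse",
                         "SELECT COUNT(*) FROM fact_orders");
        System.out.println(src == tgt
                ? "PASS: counts match (" + src + ")"
                : "FAIL: source=" + src + ", target=" + tgt);
    }

    private static long count(String url, String sql) throws Exception {
        try (Connection c = DriverManager.getConnection(url);
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}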

NOTE: Question 3 is missing.
