Software Testing Sessional-2
Department of CSE/IT
Model Solution-Sessional Test-2
Course: B.Tech
Session: 2016-17
Subject: Software Testing & Audit
Sub. Code: NEOE-073
Semester: VII
Section: CSE+IT+IMT
Max Marks: 50
Time: 1.5 hours
Note: All questions are compulsory. All questions carry equal marks.
SECTION-A
1. Attempt all the parts of the following.
2x5=10
(a) What is testing? Differentiate between effective and exhaustive software testing.
Sol. Software testing is a process carried out with the intent of finding errors. It is also an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can provide an objective, independent view of the software that allows the business to appreciate and understand the risks of software implementation. Test techniques include executing a program or application with the intent of finding software bugs (errors or other defects).
Effective software testing: The effectiveness of testing can be measured only if the goal and purpose of the testing effort are clearly defined. Some typical testing goals are:
- Testing in each phase of the development cycle to ensure that bugs (defects) are eliminated at the earliest opportunity
- Testing to ensure that no bugs creep through into the final product
- Testing to ensure the reliability of the software
- Above all, testing to ensure that user expectations are met
Unreliable software can severely hurt businesses and endanger lives, depending on the criticality of the application. Even the simplest application, if poorly written, can deteriorate the performance of your environment (servers, network, etc.) and cause an unwanted mess.
Exhaustive software testing: Exhaustive testing is a test approach in which all possible data combinations are used for testing. This includes the implicit data combinations present in the state of the software/data at the start of testing. In this type of testing we try to check the output given by the software for every possible input; in effect, we use all the permutations and combinations of the inputs.
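The idea of enumerating every input combination can be sketched for a toy unit under test; the `xor_gate` function and its domain are illustrative assumptions, chosen small enough that exhaustive testing is actually feasible.

```python
from itertools import product

def xor_gate(a: bool, b: bool) -> bool:
    """Toy unit under test: logical XOR of two booleans."""
    return a != b

# Exhaustive testing: for this tiny domain (two boolean inputs),
# all 2 x 2 = 4 input combinations can actually be enumerated.
expected = {(False, False): False, (False, True): True,
            (True, False): True, (True, True): False}

results = {inputs: xor_gate(*inputs)
           for inputs in product([False, True], repeat=2)}
assert results == expected
print(f"All {len(results)} input combinations verified")
```

For real programs the input space explodes combinatorially, which is exactly why exhaustive testing is usually impractical and effective (goal-directed) testing is preferred.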
Debugging:
1. Debugging starts from possibly unknown initial conditions, and its end cannot be predicted, except statistically.
2. The procedures for, and duration of, debugging cannot be so constrained.
3. It is the programmer's vindication.
4. It is a demonstration of error or apparent correctness.
(d) Why do bugs occur? How many states does a bug have?
Sol. Various reasons for the occurrence of bugs in software are as follows:
1. Human factor: Software is developed by human beings, who are prone to make
mistakes. Since humans develop software, it would be foolish to expect the
software to be perfect and entirely free of defects.
2. Communication failure: Another common reason for software defects can be
miscommunication, lack of communication or erroneous communication during
software development.
3. Unrealistic development timeframe: Let's face it: more often than not, software is
developed under crazy release schedules, with limited or insufficient resources and
unrealistic project deadlines.
4. Poor design logic: In this era of complex software systems development, sometimes
the software is so complicated that it requires some level of R&D and brainstorming to
reach a reliable solution. Lack of patience and an urge to complete it as quickly as
possible may lead to errors.
5. Poor coding practices: Sometimes errors are slipped into the code due to simply bad
coding. Bad coding practices such as inefficient or missing error/exception handling,
lack of proper validations (datatypes, field ranges, boundary conditions, memory
overflows etc.) may lead to introduction of errors in the code.
6. Buggy third-party tools: Quite often during software development we require many
third-party tools, which are themselves software and may contain bugs. These could
be tools that aid programming (e.g. class libraries, shared DLLs, compilers, HTML
editors, debuggers, etc.).
7. Lack of skilled testing: No tester would want to admit it, but let's face it: poor testing
does take place across organizations. There can be shortcomings in the testing process
that is followed.
BUG LIFECYCLE: During its life cycle, a bug typically passes through the following states: New, Assigned/Open, Fixed, Pending Retest, Verified, and Closed; along the way a bug may also be Reopened, Deferred, or Rejected.
Code coverage
Cohesion
Coupling
Cyclomatic Complexity
Number of lines of code
Program execution time
Program load time
Program size.
SECTION-B
7x4=28
Sol. The Software Testing Life Cycle (STLC) is the testing process executed in a
systematic and planned manner. In the STLC, different activities are carried out to
improve the quality of the product. Let's quickly see the stages involved in a typical
Software Testing Life Cycle (STLC).
The following steps are involved in the STLC. Each step has its own entry criteria and
deliverables.
Requirement Analysis: Requirement analysis is the very first step in the STLC. In this
step the Quality Assurance (QA) team understands the requirements in terms of what
will be tested and identifies the testable requirements. If any requirement is conflicting,
missing, or not understood, the QA team follows up with the various stakeholders,
such as the Business Analyst or System Architect.
Test Planning: Test planning is the most important phase of the software testing life
cycle, where the overall testing strategy is defined. This phase is also called the Test
Strategy phase. Typically the Test Manager (or Test Lead, depending on the company)
is involved in determining the effort and cost estimates for the entire project.
Test Case Development: The test case development activity starts once test planning
is finished. This is the phase of the STLC in which the testing team writes the detailed
test cases. Along with the test cases, the testing team also prepares any test data
required for testing. Once the test cases are ready, they are reviewed by peer members
or the QA lead.
Environment Setup: Setting up the test environment is a vital part of the STLC. The
test environment determines the conditions under which the software is tested. This is
an independent activity and can be started in parallel with test case development.
Test Execution: Once test case development and test environment setup are
completed, the test execution phase can be kicked off. In this phase the testing team
executes the test cases based on the test plan and the test cases prepared in the
prior steps.
Test Cycle Closure: Call a testing team meeting and evaluate the cycle-completion
criteria based on test coverage, quality, cost, time, critical business objectives, and the
software itself. Discuss what went well and which areas need improvement, and feed
the lessons from the current STLC into upcoming test cycles; this helps remove
bottlenecks from the STLC process.
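The six phases above can be sketched as an ordered checklist, where each phase is gated on the previous one's deliverable (Environment Setup, as noted, can in practice run in parallel with Test Case Development). The phase names come from the text; the deliverable strings and the `run_stlc` helper are illustrative assumptions.

```python
# The STLC phases in order, paired with the deliverable each produces.
STLC_PHASES = [
    ("Requirement Analysis", "testable requirements list"),
    ("Test Planning", "test plan with effort/cost estimates"),
    ("Test Case Development", "reviewed test cases and test data"),
    ("Environment Setup", "ready test environment"),  # may run in parallel
    ("Test Execution", "executed test cases and defect reports"),
    ("Test Cycle Closure", "closure report and lessons learned"),
]

def run_stlc(completed: list) -> str:
    """Return the next phase to perform, given phases already completed."""
    for phase, _deliverable in STLC_PHASES:
        if phase not in completed:
            return phase
    return "Done"

print(run_stlc(["Requirement Analysis", "Test Planning"]))
```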
(b) Explain white-box testing. What are the types of errors detected by black-box
testing?
Sol. White-box testing (also known as clear box testing, glass box testing, transparent box
testing, and structural testing) is a method of testing software that tests internal structures
or workings of an application, as opposed to its functionality (i.e. black-box testing). In
white-box testing an internal perspective of the system, as well as programming skills, are
used to design test cases. The tester chooses inputs to exercise paths through the code and
determine the appropriate outputs.
White-box test design techniques include the following code coverage criteria:
Control flow testing
Data flow testing
Branch testing
Statement coverage
Decision coverage
Modified condition/decision coverage
Prime path testing
Path testing.
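The difference between two of the criteria above, statement coverage and branch (decision) coverage, can be shown with a tiny illustrative function: a single test input can execute every statement yet still leave one branch outcome untested. The `safe_divide` function is a hypothetical example, not from the source.

```python
def safe_divide(a, b):
    result = 0
    if b != 0:          # predicate: only its True outcome is needed
        result = a / b  # for 100% *statement* coverage
    return result

# One input with b != 0 executes every statement above...
assert safe_divide(10, 2) == 5
# ...but *branch* coverage additionally requires the False outcome of the
# predicate (the implicit "else"), which needs a second test input:
assert safe_divide(10, 0) == 0
```

This is why branch coverage is a strictly stronger criterion than statement coverage.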
Advantages: White-box testing is one of the two biggest testing methodologies used today. It has
several major advantages:
1. Knowledge of the source code, a side effect of this approach, is beneficial to thorough
testing.
2. Optimization of code by revealing hidden errors and being able to remove these
possible defects.
3. Gives the programmer introspection because developers carefully describe any new
implementation.
4. Provides traceability of tests from the source, allowing future changes to the software
to be easily captured in changes to the tests.
5. White box tests are easy to automate.
6. White-box testing gives clear, engineering-based rules for when to stop testing.
Disadvantages: Although white-box testing has great advantages, it is not perfect and has some
disadvantages:
1. White-box testing brings complexity to testing because the tester must have
knowledge of the program, including programming skill. White-box testing requires
a programmer with a high level of knowledge, due to the complexity of the level of
testing that needs to be done.
2. On some occasions, it is not realistic to be able to test every single existing condition
of the application and some conditions will be untested.
3. The tests focus on the software as it exists, and missing functionality may not be
discovered.
Black Box Testing, also known as Behavioral Testing, is a software testing method in
which the internal structure/ design/ implementation of the item being tested is not known
to the tester. These tests can be functional or non-functional, though usually functional.
This method is named so because, in the eyes of the tester, the software program is like a
black box: one cannot see inside it. This method attempts to find errors in the
following categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behavior or performance errors
Initialization and termination errors
(c) What is the objective of white-box testing? What is the significance of cyclomatic
complexity?
Sol. The major objective of white-box testing is to focus on the internal program structure
and discover all internal program errors.
The major testing focuses:
- Program structures
- Program statements and branches
- Various kinds of program paths
- Program internal logic and data structures
- Program internal behaviors and states.
The goal is to:
- Guarantee that all independent paths within a module have been exercised at least
once.
- Exercise all logical decisions on their true and false sides.
- Execute all loops at their boundaries and within their operational bounds.
- Exercise internal data structures to assure their validity.
- Exercise all data definition-use paths.
Significance of Cyclomatic complexity:
Cyclomatic complexity is a software metric that provides a quantitative measure of the
logical complexity of a program.
When this metric is used in the context of basis path testing, the value computed
for cyclomatic complexity defines the number of independent paths in the basis set of
a program.
Three ways to compute cyclomatic complexity:
- The number of regions of the flow graph corresponds to the cyclomatic complexity.
- Cyclomatic complexity, V(G), for a flow graph G is defined as
V(G) = E - N +2
where E is the number of flow graph edges and N is the number of flow graph nodes.
- Cyclomatic complexity, V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
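The two formulas above can be checked against each other on a small worked example. For the flow graph of a single if/else there are 4 nodes (decision, two branch nodes, join), 4 edges, 1 predicate node, and 2 regions, so every method gives V(G) = 2, matching the 2 independent paths through the code.

```python
def cyclomatic_complexity(edges: int, nodes: int) -> int:
    """V(G) = E - N + 2 for a connected flow graph."""
    return edges - nodes + 2

def cyclomatic_from_predicates(predicates: int) -> int:
    """V(G) = P + 1, where P is the number of predicate (decision) nodes."""
    return predicates + 1

# Flow graph of a single if/else: 4 nodes, 4 edges, 1 predicate node.
assert cyclomatic_complexity(edges=4, nodes=4) == 2
assert cyclomatic_from_predicates(predicates=1) == 2
print("V(G) =", cyclomatic_complexity(4, 4))  # 2 independent paths
```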
A model describing a SUT is usually an abstract, partial presentation of the SUT's desired
behavior. Test cases derived from such a model are functional tests on the same level of
abstraction as the model. These test cases are collectively known as an abstract test suite.
An abstract test suite cannot be directly executed against an SUT because the suite is on
the wrong level of abstraction. An executable test suite needs to be derived from a
corresponding abstract test suite.
Acceptance Testing:
1. Acceptance testing includes alpha testing; hence it is also sometimes known as
alpha testing.
SECTION-C
2. Answer any one part of the following:
12x1=12
Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features,
including old bugs that have come back. Such regressions occur whenever software
functionality that was previously working correctly stops working as intended. Typically,
regressions occur as an unintended consequence of program changes, when the newly
developed part of the software collides with the previously existing code. Common
methods of regression testing include re-running previous sets of test cases and checking
whether previously fixed faults have re-emerged. The depth of testing depends on the phase
in the release process and the risk of the added features. Regression tests can either be
complete, for changes added late in the release or deemed risky, or very shallow, consisting
of positive tests on each feature, if the changes are early in the release or deemed low-risk.
Regression testing is typically the largest test effort in commercial software development,
due to the need to check numerous details of prior software features; even new software can
be developed while using some old test cases to test parts of the new design and ensure
prior functionality is still supported.
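The "re-run previous sets of test cases" method described above can be sketched as follows: a suite of (input, expected-output) pairs captured when the previous release passed is replayed against the changed code, and any mismatch is a regression. The `discount` function and its suite are hypothetical examples.

```python
# Hypothetical function under test in its current (changed) version:
# members get 10% off.
def discount(price: float, is_member: bool) -> float:
    return price * 0.9 if is_member else price

# Test cases captured when the previous release passed:
regression_suite = [
    ((100.0, True), 90.0),
    ((100.0, False), 100.0),
    ((0.0, True), 0.0),
]

# Re-run the old suite and collect every case whose behavior changed.
regressions = [(args, expected, discount(*args))
               for args, expected in regression_suite
               if discount(*args) != expected]
print(f"{len(regressions)} regression(s) found")
```

An empty `regressions` list means the prior functionality is still supported; a non-empty list pinpoints exactly which old behavior the change broke.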
5. (ii) Defect Seeding
Sol: Defect seeding is a practice in which defects are intentionally inserted into a
program by one group for detection by another group. The ratio of the number of
seeded defects detected to the total number of defects seeded provides a rough idea of
the total number of unseeded defects that have been detected.
Suppose that on GigaTron 3.0 you intentionally seeded the program with 50 errors. For
best effect, the seeded errors should cover the full breadth of the product's
functionality and the full range of severities, ranging from crashing errors to cosmetic
errors.
Suppose that at a point in the project when you believe testing to be almost complete
you look at the seeded defect report. You find that 31 seeded defects and 600
indigenous defects have been reported. You can estimate the total number of defects
with the formula:
IndigenousDefectsTotal = ( SeededDefectsPlanted / SeededDefectsFound ) * IndigenousDefectsFound
This technique suggests that GigaTron 3.0 has approximately 50 / 31 * 600 = 967 total
defects.
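The estimation formula above can be written directly in code; the figures are the GigaTron 3.0 numbers from the text, and integer truncation reproduces the quoted total of 967.

```python
def estimate_total_defects(seeded_planted: int, seeded_found: int,
                           indigenous_found: int) -> int:
    """IndigenousDefectsTotal = (planted / found) * indigenous_found,
    truncated to a whole number of defects."""
    return int(seeded_planted / seeded_found * indigenous_found)

# GigaTron 3.0: 50 defects seeded, 31 recovered, 600 indigenous found.
total = estimate_total_defects(50, 31, 600)
print(total)  # 967: roughly 367 indigenous defects remain undetected
```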
To use defect seeding, you must seed the defects prior to the beginning of the tests
whose effectiveness you want to ascertain. If your testing uses manual methods and
has no systematic way of covering the same testing ground twice, you should seed
defects before that testing begins. If your testing uses fully automated regression tests,
you can seed defects virtually any time to ascertain the effectiveness of the automated
tests.
A common problem with defect seeding programs is forgetting to remove the seeded
defects. Another common problem is that removing the seeded defects introduces new
errors. To prevent these problems, be sure to remove all seeded defects prior to final
system testing and product release. A useful implementation standard for seeded errors
is to require them to be implemented only by adding one or two lines of code that
create the error; this standard assures that you can remove the seeded errors safely by
simply removing the erroneous lines of code.
(iii) Test Planning
Sol: The IEEE Standard for Software Test Documentation defines a test plan as a
document describing the scope, approach, resources and schedule of intended testing
activities.
A Software Test Plan is a document describing the testing scope and activities. It is the
basis for formally testing any software/product in a project.
ISTQB Definition
test plan: A document describing the scope, approach, resources and schedule of
intended test activities. It identifies amongst others test items, the features to be tested,
the testing tasks, who will do each task, degree of tester independence, the test
environment, the test design techniques and entry and exit criteria to be used, and the
rationale for their choice, and any risks requiring contingency planning. It is a record of
the test planning process.
master test plan: A test plan that typically addresses multiple test levels.
phase test plan: A test plan that typically addresses one test phase.
Test Environment:
Specify the properties of test environment: hardware, software, network etc.
List any testing or related tools.
Estimate:
Provide a summary of test estimates (cost or effort) and/or provide a link to the
detailed estimation.
Schedule:
Provide a summary of the schedule, specifying key test milestones, and/or provide a
link to the detailed schedule.
Staffing and Training Needs:
Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.
Responsibilities:
List the responsibilities of each team/role/individual.
Risks:
List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.
Assumptions and Dependencies:
List the assumptions that have been made during the preparation of this plan.
List the dependencies.
Approvals:
Specify the names and roles of all persons who must approve the plan.
Provide space for signatures and dates. (If the document is to be printed.)
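The plan sections listed above can be captured as a simple checklist so a draft plan can be mechanically screened for missing sections before review. The section names follow the outline above; the draft contents and the `missing_sections` helper are illustrative assumptions.

```python
# Section headings taken from the test plan outline above.
TEST_PLAN_SECTIONS = [
    "Test Environment", "Estimate", "Schedule",
    "Staffing and Training Needs", "Responsibilities",
    "Risks", "Assumptions and Dependencies", "Approvals",
]

def missing_sections(draft: dict) -> list:
    """Return the outline sections that the draft leaves empty or absent."""
    return [s for s in TEST_PLAN_SECTIONS if not draft.get(s)]

draft_plan = {
    "Test Environment": "Windows 10, Chrome 120, staging LAN",
    "Risks": "key tester on leave in week 3",
}
print(missing_sections(draft_plan))
```

A check like this supports the guideline below about reviewing the plan several times before baselining it.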
TEST PLAN GUIDELINES
Make the plan concise. Avoid redundancy and superfluousness. If you think you do
not need a section that has been mentioned in the template above, go ahead and delete
that section in your test plan.
Be specific. For example, when you specify an operating system as a property of a
test environment, mention the OS Edition/Version as well, not just the OS Name.
Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
Have the test plan reviewed a number of times prior to baselining it or sending it
for approval. The quality of your test plan speaks volumes about the quality of the
testing you or your team is going to perform.
Update the plan as and when necessary. An outdated and unused document stinks
and is worse than not having the document in the first place.
(iv) Software Quality
Sol: DEFINITION-Software quality is the degree of conformance to explicit or
implicit requirements and expectations.
Explanation:
Explicit: clearly defined and documented
Implicit: not clearly defined and documented but indirectly suggested
Requirements: business/product/software requirements
Expectations: mainly end-user expectations
Note: Some people tend to accept quality as compliance to only explicit requirements
and not implicit requirements. We tend to think of such people as lazy.
Definition by IEEE: Software quality is the degree to which a system, component, or
process meets specified requirements, and the degree to which it meets customer or
user needs or expectations.
(v) Test Management
Sol: Test management is often associated with automation software. Test management
tools often include requirement and/or specification management modules that allow
automatic generation of the requirement test matrix (RTM), which is one of the main
metrics used to indicate the functional coverage of a system under test (SUT).
Test definition includes the test plan and its association with product requirements and
specifications. Eventually, relationships can be set between tests so that precedence
can be established; e.g., if test A is the parent of test B and test A is failing, it may be
useless to perform test B. Tests should also be associated with priorities.
Every change on a test must be versioned so that the QA team has a comprehensive
view of the history of the test.
(vi) Test Automation
Sol: Developing software to test software is called test automation.
What to automate:
Certain types of testing are difficult to perform without automation (stress, reliability,
scalability, performance)
Certain types of testing are repetitive in nature (regression)
Testing for standards (Section 508, client-server, protocols, state transitions)
A good amount of testing is repetitive in the product development scenario (a good
product has a lifetime of about 10 years with regular/periodic enhancements)
Automate only if test cases need to be executed at least 10 times in the near future and
if the effort for automation doesn't exceed 15 times the effort of executing those test
cases manually (a conservative estimate)
Automate those areas where requirements go through the least amount of change
(normally, a change in requirements impacts scenarios and implementation, not the
basic functionality)
Automate areas that motivate use of standard methodology, test tools (not
implementation specifics)
Automate areas for which management commitment exists
User interfaces go through good amount of changes; Automate non-GUI (command
line?) portions first and integrate them with GUI modules as pluggable modules with an
option to execute non-GUI portions independently
Automate areas of critical functions & where increasing coverage is needed
Classify the test cases into HIGH, MED, LOW based on customer expectation and
automate HIGH criticality test cases first
Automate areas where permutations and combinations are high: a good return on
investment with less code
Automate areas for which static decision tables exist - dynamic results are difficult to
automate and take more effort
Automate dependent and core test cases; this helps in getting the critical defects early
Automate the areas that can benefit more than one product and for the framework
requirements
Automate those test cases that are easy and take less effort; this satisfies those people
who look for immediate returns
Automate the areas that will be required by the industry; 40% of the test tools
available in the market started out as internal tools
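The rule of thumb stated above (automate only if a test will run at least 10 times and automating it costs no more than 15 times one manual execution) can be sketched as a small decision helper; the function name, parameters, and example figures are all illustrative assumptions.

```python
def should_automate(expected_runs: int, manual_cost_hrs: float,
                    automation_cost_hrs: float,
                    min_runs: int = 10, max_cost_factor: float = 15.0) -> bool:
    """Conservative automation heuristic from the text:
    automate only when the test will run often enough AND automating it
    costs no more than max_cost_factor manual executions."""
    return (expected_runs >= min_runs
            and automation_cost_hrs <= max_cost_factor * manual_cost_hrs)

# A test run 25 times, 1 hour manually, 12 hours to automate: worth it.
print(should_automate(expected_runs=25, manual_cost_hrs=1.0,
                      automation_cost_hrs=12.0))  # True
# The same test run only 5 times: not worth automating yet.
print(should_automate(expected_runs=5, manual_cost_hrs=1.0,
                      automation_cost_hrs=12.0))  # False
```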
(vii) Software Maintenance
Sol: The primary goal is to prevent defects. Where this is not possible or practical, the
goals are to find the defect as quickly as possible and to minimize the impact of the
defect.
The defect management process should be risk driven -- i.e., strategies, priorities,
and resources should be based on the extent to which risk can be reduced.
Defect measurement should be integrated into the software development process
and be used by the project team to improve the process. In other words, the project
staff, by doing their job, should capture information on defects at the source. It should
not be done after the fact by people unrelated to the project or system.
As much as possible, the capture and analysis of the information should be
automated.
Defect information should be used to improve the process. This, in fact, is the
primary reason for gathering defect information.
Most defects are caused by imperfect or flawed processes. Thus to prevent
defects, the process must be altered.