Software Testing End Semester (Odd) Examination December 2023-1
Test case: Records all the "static" information about the tests.
• Test case ID
• Test case name (file name)
• Test case owner
• Associated files for the test case

Test case run history: Gives the history of when a test was run and what was the result; provides inputs on the selection of tests for regression runs.
• Test case ID
• Run date
• Time taken
• Run status (success/failure)
(l) State the type of tools required for review and inspections.
• Complexity analysis tools
• Code comprehension tools
(m)Define class level testing.
• Methods and attributes make up a class.
• Class-level (or intra-class) testing refers to the testing of interactions among the
components of an individual class.
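A minimal hypothetical sketch of class-level testing (the Stack class and its test are invented for illustration); the test exercises interactions among the class's methods through its shared attribute, rather than any single method in isolation:

import unittest

class Stack:
    """Toy class: push, pop, and size all interact through the shared _items attribute."""
    def __init__(self):
        self._items = []
    def push(self, value):
        self._items.append(value)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()
    def size(self):
        return len(self._items)

class StackClassLevelTest(unittest.TestCase):
    """Intra-class test: checks that the methods interact correctly."""
    def test_push_then_pop(self):
        s = Stack()
        s.push(10)
        s.push(20)
        self.assertEqual(s.pop(), 20)  # pop must return what push stored last
        self.assertEqual(s.size(), 1)  # size must reflect both operations
    def test_pop_on_empty(self):
        self.assertRaises(IndexError, Stack().pop)

if __name__ == "__main__":
    unittest.main()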
(n) State the completeness property of web page.
Completeness checks verify that certain information is available on a given web page, that links between pages exist, or even that the web pages themselves exist.
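A minimal sketch of such a completeness check using only Python's standard library (the URL and the required text are hypothetical placeholders):

from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def check_page(url, required_text):
    """Check that the page exists and contains the required text; return its links."""
    with urlopen(url) as response:  # raises an error if the page does not exist
        html = response.read().decode("utf-8", errors="replace")
    assert required_text in html, "required information missing from page"
    parser = LinkCollector()
    parser.feed(html)
    return parser.links  # each collected link can then be fetched in turn

# Hypothetical usage:
# links = check_page("http://example.com/index.html", "Contact us")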
(o) Mention two challenges in testing for web-based Software.
• Diversity and complexity
• Dynamic environment
• Very short development time
• Continuous evolution
• Compatibility and interoperability
2. Answer any two:
(a)Explain life cycle of a bug.
Classified into two phases
• bugs-in phase
• bugs-out phase
Bugs-In Phase
• This phase is where the errors and bugs are introduced in the software.
• Whenever we commit a mistake, it creates an error at a specific location in the software; if this error goes unnoticed, it causes some condition to fail, leading to a bug in the software.
• This bug is carried forward to the subsequent phases of the SDLC, if not detected.
• Thus, a phase may have its own errors as well as bugs received from the
previous phase.
• If verification is not performed in the earlier phases, there is no chance of detecting these bugs.
Bugs-Out Phase
• If failures occur while testing a software product, we come to the conclusion
that it is affected by bugs.
• However, there are situations when bugs are present, even though we don’t
observe any failures.
• In this phase, when we observe failures, the following activities are performed
to get rid of the bugs.
o Bug classification
o Bug isolation
o Bug resolution
Bug classification
• A bug can be critical or catastrophic in nature or it may have no adverse effect
on the output behaviour of the software.
• In this way, we classify all the failures.
• This is necessary because there may be many bugs to resolve, but a tester may not have sufficient time for all of them.
• Categorizing bugs helps: high-criticality bugs are handled first, and the more trivial bugs on the list are considered later, if time permits.
Bug isolation
• Bug isolation is the activity by which we locate the module in which the bug
appears.
• Incidents observed in failures help in this activity.
• We observe the symptoms and back-trace through the design of the software to reach the module/file and the condition inside it that caused the bug.
• This is known as bug isolation.
Bug resolution
• Once we have isolated the bug, we back-trace the design to pinpoint the
location of the error.
• In this way, a bug is resolved when we have found the exact location of its
occurrence.
(b)Discuss different kinds of bug based on Software Development Life Cycle (SDLC).
Bug Classification Based on SDLC
• Requirements and Specifications Bugs
o The first type of bug in SDLC is in the requirement gathering and
specification phase.
o It has been observed that most of the bugs originate in this phase.
o If these bugs go undetected, they propagate into subsequent phases.
• Design Bugs
o Design bugs may include bugs carried over from the previous phase, in addition to those errors introduced in the present phase.
o The following kinds of design errors may occur:
▪ Control flow bugs
• If we look at the control flow of a program, there may be many errors.
• For example,
o some paths through the flow may be missing;
o there may be unreachable paths, etc.
▪ Logic bugs
• Any logical mistake made in the design is a logic bug.
• For example,
o improper layout of cases,
o missing cases,
o improper combination of cases,
o misunderstanding of the semantics of the order in which a Boolean expression is evaluated.
▪ Processing bugs
• Any type of computation mistake results in a processing bug.
• Examples include
o arithmetic errors,
o incorrect conversion from one data representation to another,
o ignoring overflow,
o improper use of logical operators, etc.
▪ Data flow bugs
• There may be dataflow anomaly errors like
o un-initialized data,
o initialized in wrong format,
o data initialized but not used,
o data used but not initialized,
o redefined without any intermediate use, etc.
▪ Error handling bugs
• There may be errors in the error handling of the software.
• There are situations in the system when exception
handling mechanisms must be adopted.
• If the system fails, then there must be an error message or
the system should handle the error in an appropriate way.
• If you forget to do all this, then error handling bugs appear.
▪ Race condition bugs
• Race conditions also lead to bugs.
• Sometimes these bugs are irreproducible.
▪ Boundary-related bugs
• Most of the time, designers forget to consider what will happen if any aspect of a program goes beyond its minimum or maximum value.
• When the software fails at boundary values, these failures are known as boundary-related bugs (a code sketch of one such bug appears after this list).
• There may be boundaries in loops, time, memory, etc.
▪ User interface bugs
• There may be some design bugs that are related to users.
• If the user does not feel good while using the software,
then there are user interface bugs.
• Examples include
o inappropriate functionality of some features;
o not doing what the user expects;
o missing, misleading, or confusing information;
o wrong content in the help text;
o inappropriate error messages, etc.
• Coding Bugs
o undeclared data
o undeclared routines
o dangling code
o typographical errors
o documentation bugs, i.e. erroneous comments lead to bugs in
maintenance.
• Interface and Integration Bugs
o External interface bugs include
▪ invalid timing or sequence assumptions related to external
signals,
▪ misunderstanding external input and output formats, and
▪ user interface bugs.
o Internal interface bugs include
▪ input and output format bugs,
▪ inadequate protection against corrupted data,
▪ wrong subroutine call sequence,
▪ call parameter bugs, and
▪ misunderstood entry or exit parameter
values.
o Integration bugs result from inconsistencies or incompatibilities between
modules discussed in the form of interface bugs.
o There may be bugs in data transfer and data sharing between the
modules.
• System Bugs
o There may be bugs while testing the system as a whole based on
various parameters like
▪ performance,
▪ stress,
▪ compatibility,
▪ usability, etc.
• Testing Bugs
o Some testing mistakes are:
▪ failure to notice/report a problem,
▪ failure to use the most promising test case,
▪ failure to make it clear how to reproduce the problem,
▪ failure to check for unresolved problems just before the release,
▪ failure to verify fixes,
▪ failure to provide summary report.
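To make one of these categories concrete, here is a minimal hypothetical sketch of a boundary-related (off-by-one) bug and its fix:

def last_element_buggy(items):
    # Boundary-related bug: len(items) is one past the last valid index,
    # so this raises IndexError at the upper boundary.
    return items[len(items)]

def last_element_fixed(items):
    # Correct handling of the upper boundary.
    return items[len(items) - 1]

data = [10, 20, 30]
print(last_element_fixed(data))  # 30
# last_element_buggy(data)       # would fail with IndexError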
(c) Explain the process to verify requirements and objectives.
Following are the points against which every requirement in SRS should be verified:
• Correctness
o Testers should refer to other documentation or applicable standards and compare the specified requirement with them.
o Testers can interact with customers or users, if requirements are not
well-understood.
o Testers should also check correctness in the sense of whether the requirement is realistic.
• Unambiguous
o Every requirement has only one interpretation.
o Each characteristic of the final product is described using a single unique
term.
• Consistent
o Real-world objects may conflict; for example, one specification recommends a mouse for input while another recommends a joystick.
o There may be a logical conflict between two specified actions; e.g. one specification requires the function to perform square root, while another requires the same function to perform the square operation.
o Conflicts in terminology should also be verified; for example, at one place the term process is used, while at another place it has been termed task or module.
• Completeness
o Verify that all significant requirements, such as functionality, performance, design constraints, attributes, or external interfaces, are complete.
o Check whether responses to every possible input (valid and invalid) to the software have been defined.
o Check whether figures and tables have been labeled and referenced
completely.
• Updation
o Requirement specifications are not stable; they may be modified, or another requirement may be added later.
o Therefore, if there is any updation in the SRS, the updated specifications must be verified.
o If the specification is a new one, then all the above-mentioned steps and their feasibility should be verified.
o If the specification is a change to an already mentioned specification, then we must verify that this change can be implemented in the current design.
• Traceability
o Two types of traceability must be verified:
▪ Backward traceability
• Check that each requirement references its source in
previous documents.
▪ Forward traceability
• Check that each requirement has a unique name or
reference number in all the documents.
3. Answer any two:
(a)A program determines the next date in the calendar. Its input is entered in the form
of <ddmmyyyy> with the following range:
1 ≤ mm ≤ 12
1 ≤ dd ≤ 31
1900 ≤ yyyy ≤ 2025
Its output would be the next date or an error message 'invalid date'. Design test cases
using equivalence class partitioning method.
First we partition the domain of input in terms of valid input values and invalid values,
getting the following classes:
I1 = {<m, d, y> : 1 ≤ m ≤ 12}
I2 = {<m, d, y> : 1 ≤ d ≤ 31}
I3 = {<m, d, y> : 1900 ≤ y ≤ 2025}
I4 = {<m, d, y> : m < 1}
I5 = {<m, d, y> : m > 12}
I6 = {<m, d, y> : d < 1}
I7 = {<m, d, y> : d > 31}
I8 = {<m, d, y> : y < 1900}
I9 = {<m, d, y> : y > 2025}
The test cases can be designed from the derived classes by taking one test case that covers the maximum number of valid input classes, and a separate test case for each invalid class. The test cases are shown below (with representative values chosen from each class):

Test case ID   mm   dd   yyyy   Expected result   Classes covered by the test case
1              6    15   2000   16-6-2000         I1, I2, I3
2              0    15   2000   invalid date      I4
3              13   15   2000   invalid date      I5
4              6    0    2000   invalid date      I6
5              6    32   2000   invalid date      I7
6              6    15   1899   invalid date      I8
7              6    15   2026   invalid date      I9
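A minimal sketch of how these test cases translate into executable checks (next_date here is a hypothetical implementation of the program under test; only the class-based test selection matters):

from datetime import date, timedelta

def next_date(mm, dd, yyyy):
    """Hypothetical program under test: returns the next date or 'invalid date'."""
    if not (1 <= mm <= 12 and 1 <= dd <= 31 and 1900 <= yyyy <= 2025):
        return "invalid date"
    try:
        d = date(yyyy, mm, dd) + timedelta(days=1)
    except ValueError:  # e.g. 31-04 or 30-02: in range but not a real date
        return "invalid date"
    return f"{d.day}-{d.month}-{d.year}"

# One representative test per equivalence class, as derived above.
assert next_date(6, 15, 2000) == "16-6-2000"      # covers I1, I2, I3
assert next_date(0, 15, 2000) == "invalid date"   # I4: mm < 1
assert next_date(13, 15, 2000) == "invalid date"  # I5: mm > 12
assert next_date(6, 0, 2000) == "invalid date"    # I6: dd < 1
assert next_date(6, 32, 2000) == "invalid date"   # I7: dd > 31
assert next_date(6, 15, 1899) == "invalid date"   # I8: yyyy < 1900
assert next_date(6, 15, 2026) == "invalid date"   # I9: yyyy > 2025
print("all equivalence-class tests passed")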
Cyclomatic complexity
(i) V(G) = e – n + 2P
         = 17 – 12 + 2(1)
         = 7
(ii) V(G) = Number of predicate nodes + 1
          = 6 (nodes B and C) + 1
          = 7
Node B is a simple predicate node and contributes 1. Node C is a switch-case, so to find the number of predicate nodes it contributes, use the following formula:
Number of predicate nodes = Number of links out of the node – 1
                          = 6 – 1 = 5 (for node C)
(iii) V(G) = Number of regions = 7
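A quick sketch of the first formula in Python (the edge, node, and component counts are those of the graph described above):

def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = e - n + 2P, where P is the number of connected components
    return edges - nodes + 2 * components

print(cyclomatic_complexity(17, 12))  # prints 7, matching all three methods above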
Independent paths
Since the cyclomatic complexity of the graph is 7, there will be 7 independent paths in
the graph as shown below:
1. A-B-D-L
2. A-B-C-E-K-L
3. A-B-C-F-K-L
4. A-B-C-G-K-L
5. A-B-C-H-K-L
6. A-B-C-I-K-L
7. A-B-C-J-K-L
Test Case Design from the list of Independent Paths
Given a program P, its modified version P′, and an existing test suite T, regression testing involves the following steps:
• Select T′, a subset of T, with which to test P′. (Regression test selection problem)
• Test P′ with T′, establishing the correctness of P′ with respect to T′. (Test suite execution problem)
• If necessary, create T″, a set of new functional or structural test cases for P′. (Coverage identification problem)
• Test P′ with T″, establishing the correctness of P′ with respect to T″. (Test suite execution problem)
• Create T‴, a new test suite and test execution profile for P′, from T, T′, and T″. (Test suite maintenance problem)
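A minimal hypothetical sketch of the selection step (the test-to-module coverage map and module names are invented for illustration): T′ is chosen as the tests in T that exercise any modified module.

# Hypothetical coverage map: each test case in T and the modules it exercises.
coverage = {
    "TC-1": {"auth", "session"},
    "TC-2": {"payments"},
    "TC-3": {"auth", "payments"},
}
modified_modules = {"payments"}  # modules changed in P'

# Regression test selection: T' = tests touching any modified module.
t_prime = [tc for tc, modules in coverage.items() if modules & modified_modules]
print(t_prime)  # ['TC-2', 'TC-3']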
4. Answer any two:
(a)Explain the key elements of test management.
• Test organization
o It is the process of setting up and managing a suitable test
organizational structure and defining explicit roles.
o The project framework under which the testing activities will be carried
out is reviewed, high-level test phase plans are prepared, and resource
schedules are considered.
o Test organization also involves the determination of configuration
standards and defining the test environment.
o The testing group is responsible for the following activities:
▪ Maintenance and application of test policies
▪ Development and application of testing standards
▪ Participation in requirement, design, and code reviews
▪ Test planning
▪ Test execution
▪ Test measurement
▪ Test monitoring
▪ Defect tracking
▪ Acquisition of testing tools
▪ Test reporting
o The staff members of such a testing group are called test specialists or
test engineers or simply testers.
• Detailed test design and test specifications
o Detailed design is the process of designing a meaningful and useful structure for the tests as a whole.
o It specifies the details of the test approach for a software functionality or feature and identifies the associated test cases.
o Detailed test designing for each validation activity maps the
requirements or features to the actual test cases to be executed.
o One way to map the features to their test cases is to analyse the
following:
▪ Requirement traceability
▪ Design traceability
▪ Code traceability
o The analyses can be maintained in the form of a traceability matrix such that every requirement or feature is mapped to a function in the functional design.
o This function is then mapped to a module (internal design and code) in which the function is implemented.
o This in turn is linked to the test cases to be executed (a small sketch of such a matrix appears at the end of this answer).
• Test monitoring and assessment
o It is the ongoing monitoring and assessment to check the integrity of
development and construction.
o The status of configuration items should be reviewed against the phase plans, and the test progress reports prepared, to ensure the verification and validation activities are correct.
• Test Planning
o The requirements definition and design specifications facilitate the
identification of major test items and these may necessitate updating the
test strategy.
o A detailed test plan and schedule is prepared with key test
responsibilities being indicated.
o Just as software projects become uncontrolled if not planned properly, the testing process is not effective if not planned early.
o Moreover, if testing is not effective in a software project, it also affects
the final software product.
o Therefore, for a quality software, testing activities must be planned as
soon as the project planning starts.
o A test plan is defined as a document that describes the scope, approach,
resources, and schedule of intended testing activities.
o The test plan is driven by the business goals of the product.
o In order to meet a set of goals, the test plan identifies the following:
▪ Test items
▪ Features to be tested
▪ Testing tasks
▪ Tools selection
▪ Time and effort estimate
▪ Who will do each task
▪ Any risks
▪ Milestones
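A minimal hypothetical sketch of the traceability matrix described above (requirement, function, module, and test identifiers are all invented for illustration):

# Hypothetical traceability matrix: requirement -> function -> module -> test cases.
traceability = {
    "REQ-01": {"function": "login()",    "module": "auth.py",     "tests": ["TC-001", "TC-002"]},
    "REQ-02": {"function": "transfer()", "module": "payments.py", "tests": ["TC-010"]},
    "REQ-03": {"function": "report()",   "module": "reports.py",  "tests": []},
}

# One check the matrix supports: every requirement must map to at least one test.
uncovered = [req for req, links in traceability.items() if not links["tests"]]
print("Requirements without test coverage:", uncovered)  # ['REQ-03']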
(b)Discuss major activities of V&V planning.
• Master Schedule
o The master schedule summarizes various V&V tasks and their
relationship to the overall project.
o Describes the project life cycle and project milestones including
completion dates.
o Summarizes the schedule of V&V tasks and how verification and
validation results provide feedback to the development process to
support overall project management functions.
o Defines an orderly flow of material between project activities and V&V
tasks.
o Uses references to PERT, CPM, and Gantt charts to define the relationships between the various activities.
• Resource Summary
o This activity summarizes the resources needed to perform V&V tasks,
including staffing, facilities, tools, finances, and special procedural
requirements such as
security, access rights, and documentation control.
o In this activity,
▪ Use graphs and tables to present resource utilization.
▪ Include equipment and laboratory resources required.
▪ Summarize the purpose and cost of hardware and software tools
to be employed.
▪ Take all resources into account and allow for additional time and
money to cope with contingencies.
• Responsibilities
o Identify the organization responsible for performing V&V tasks.
o There are two levels of responsibilities—
▪ general responsibilities assigned to different organizations and
▪ specific responsibilities for the V&V tasks to be performed,
assigned to individuals.
o General responsibilities should be supplemented with
specific responsibility for each task in the V&V plan.
• Tools, Techniques, and Methodology
o Identify the special software tools, techniques, and methodologies to be
employed by the V&V team.
o The purpose of each should be defined and plans for the acquisition,
training, support, and qualification of each should be described.
o This section may be in a narrative or graphic format.
o A separate tool plan may be developed for software tool acquisition,
development, or modification.
o In this case, a separate tool plan section may be added to the plan.
(c) Explain different types of project metrics.
Project metrics are a set of metrics that indicate how a project is planned and executed. The types are:
• Effort Variance
o A typical project starts with requirements gathering and ends with
product release.
o All the phases that fall in between these points need to be planned and
tracked.
o In the planning cycle, the scope of the project is finalized.
o The project scope gets translated to size estimates, which specify the
quantum of work to be done.
o This size estimate gets translated into an effort estimate for each of the phases and activities by using the available productivity data.
o This initial effort is called the baselined effort.
o As the project progresses, if the scope of the project changes or if the available productivity numbers turn out to be incorrect, the effort estimates are re-evaluated; this re-evaluated effort estimate is called the revised effort.
o Effort variance for each of the phases provides a quantitative measure of
the relative difference between the revised and actual efforts.
o Variance % = [(Actual effort – Revised estimate)/Revised estimate] * 100
o A variance of more than 5% in any of the SDLC phases indicates scope for improvement in the estimation.
o The variance can also be negative.
o A negative variance is an indication of an overestimate.
o These variance numbers, along with analysis, can help in better estimation for the next release or the next revised estimation cycle (a worked computation of the variance formula appears at the end of this answer).
• Schedule Variance
o Schedule variance, like effort variance, is the deviation of the actual
schedule from the estimated schedule.
o Depending on the SDLC model used by the project, several phases
could be active at the same time.
o Further, the different phases in SDLC are interrelated and could share
the same set of individuals.
o Because of all these complexities involved, schedule variance is
calculated only at the overall project level, at specific milestones,
▪ not with respect to each of the SDLC phases.
o Effort and schedule variance have to be analyzed in totality, not in
isolation.
o This is because while effort is a major driver of the cost,
▪ schedule determines how best a product can exploit market
opportunities.
o Variance can be classified into
▪ negative variance,
▪ zero variance,
▪ acceptable variance, and
▪ unacceptable variance.
o Generally 0–5% is considered as acceptable variance.
• Effort distribution
o Variance calculation helps in finding out whether commitments are met
on time and whether the estimation method works well.
o In addition, some indications of product quality can be obtained if the effort distribution across the various phases is captured and analyzed.
o For example,
▪ Spending very little effort on requirements may lead to frequent
changes
▪ Spending less effort in testing may cause defects to crop up at the customer's site
o Adequate and appropriate effort needs to be spent in each of the SDLC phases for a quality product release.
o The distribution percentage across the different phases can be estimated at the time of planning, and these estimates can be compared with the actuals at the time of release to gain confidence in the release and the estimation methods.
o Mature organizations spend at least 10–15% of the total effort on requirements and approximately the same effort in the design phase.
o The effort percentage for testing depends on the type of release and
amount of change to the existing code base and functionality.
o Typically, organizations spend about 20–50% of their total effort in
testing.
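A minimal worked example of the effort variance formula given above, using hypothetical phase data:

def effort_variance_percent(actual, revised):
    # Variance % = [(Actual effort - Revised estimate) / Revised estimate] * 100
    return (actual - revised) / revised * 100

# Hypothetical (revised estimate, actual effort) in person-days for three phases.
phases = {"Requirements": (40, 43), "Design": (60, 58), "Testing": (80, 92)}
for phase, (revised, actual) in phases.items():
    v = effort_variance_percent(actual, revised)
    verdict = "scope for improvement" if abs(v) > 5 else "acceptable"
    print(f"{phase}: {v:+.1f}% ({verdict})")
# Requirements: +7.5% (scope for improvement); Design: -3.3% (acceptable);
# Testing: +15.0% (scope for improvement)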
o Automation can be effective if proven techniques are used to produce
efficient, maintainable, and reusable test scripts.
o The following are some hints:
▪ Read data values from spreadsheets or tool-provided data pools rather than hard-coding them into the test-case script, because hard-coding prevents test cases from being reused (see the sketch after this list).
▪ Hard-coded values should be replaced with variables, and data should be read from external sources whenever possible.
o Use modular script development.
▪ It increases maintainability and readability of the source code.
o Build a library of reusable functions by separating common actions into a shared script library usable by all test engineers.
o All test scripts should be stored in a version control tool.
• Automate the regression tests whenever feasible
o Regression testing consumes a lot of time.
o If tools are used for this testing, the testing time can be reduced to a great extent.
o Therefore, whenever possible, automate the regression test cases.
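A minimal sketch of the data-driven style recommended above (the cases.csv file, its column layout, and the add function under test are all hypothetical):

import csv

def add(a, b):
    """Hypothetical function under test."""
    return a + b

# Each row of the hypothetical cases.csv holds: a, b, expected (e.g. "2,3,5").
# Keeping the data outside the script lets the same test logic be reused.
with open("cases.csv", newline="") as f:
    for row in csv.reader(f):
        a, b, expected = map(int, row)
        result = add(a, b)
        assert result == expected, f"add({a}, {b}) = {result}, expected {expected}"
print("all data-driven cases passed")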