Software Testing End Semester (Odd) Examination December 2023-1

1.

Answer the following questions:


(a) Give one example of equivalent fault.
Consider the following two faults in the C language:
1. division by zero, and
2. illegal memory access.
These are equivalent faults, since each of them leads to a program crash.
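A minimal C sketch of the two faults (a hypothetical illustration; each fault, when reached, terminates the program abnormally, so a test that triggers either one reveals a failure in the same way):

#include <stdio.h>

int main(void)
{
    int divisor = 0;
    int result = 10 / divisor;   /* fault 1: division by zero -- typically aborts the program here */

    int *ptr = NULL;
    *ptr = result;               /* fault 2: illegal memory access (never reached if fault 1 aborts first) */

    printf("This line is never reached: %d\n", *ptr);
    return 0;
}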
(b) Classify the bugs based on criticality.
• Critical Bug
• Major Bug
• Medium Bug
• Minor Bug
(c) State the goals of verification.
• Everything Must be Verified
• Results of Verification May Not be Binary
• Even Implicit Qualities Must be Verified
(d)Calculate the number of test cases to be developed in worst case testing if there
are 5 variables in a module.
5^n = 5^5 = 3125 test cases
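A small sketch confirming the count (assuming the standard worst-case set of five boundary values -- min, min+, nom, max-, max -- for each variable):

#include <stdio.h>

int main(void)
{
    int n = 5;                /* number of input variables in the module */
    long test_cases = 1;
    for (int i = 0; i < n; i++)
        test_cases *= 5;      /* five boundary values per variable */
    printf("Worst-case test cases for %d variables: %ld\n", n, test_cases);
    /* prints: Worst-case test cases for 5 variables: 3125 */
    return 0;
}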
(e) Distinguish between predicate usage node and computation usage node with
suitable examples.
• If usage node n is a predicate node, then n is a predicate usage node.
• If usage node n corresponds to a computation statement in a program other
than predicate, then it is called a computation usage node.
(f) T contains 100 tests of which 64 are non-modification-revealing for P and P’ and M
omits 43 of these 64 tests, then calculate the precision of M relative to P, P’, and
T.
n = 64, m = 43
Precision of M relative to P, P’, and T = (m/n) * 100% = (43/64) * 100% = 67.1875%
(g) Give two examples of testing cost.
• Cost of planning and designing the tests
• Cost of acquiring the hardware and software required for the tests
• Cost to support the environment
• Cost of executing the tests
• Cost of recording and analysing the test results
• Cost of training the testers, if any
• Cost of maintaining the test database
(h) Suppose module X has 20 input parameters, 30 internal data items and 20 output
parameters. Similarly, module Y has 10 input parameters, 20 internal data items,
and 5 output parameters. Compare the data complexity of module X with module
Y.
• The data complexity of module X is more as compared to Y, therefore X is
more prone to errors.
• Therefore, testers should be careful while testing module X
(i) Write the formula for finding Defects per 100 Hours of Testing.
Defects per 100 hours of testing = (Total defects found in the product for a
period/Total hours spent to get those defects) * 100
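A small sketch applying the formula (the figures used in main are made-up illustrative numbers, not taken from the question):

#include <stdio.h>

/* Defects per 100 hours of testing =
   (total defects found in the period / total hours spent to get those defects) * 100 */
double defects_per_100_hours(int total_defects, double total_hours)
{
    return ((double)total_defects / total_hours) * 100.0;
}

int main(void)
{
    /* illustrative numbers: 40 defects found in 500 hours of testing */
    printf("%.1f defects per 100 hours of testing\n", defects_per_100_hours(40, 500.0));
    /* prints: 8.0 defects per 100 hours of testing */
    return 0;
}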
(j) Write any two advantages of automation testing.
• Automation saves time as software can execute test cases faster than human
do
• Test automation can free the test engineers from mundane tasks and make
them focus on more creative tasks
• Automated tests can be more reliable
• Automation helps in immediate testing
• Automation can protect an organization against attrition of test engineers
• Test automation opens up opportunities for better utilization of global resources
(k) Mention the contents of test case database (TCDB).

• Test case
o Purpose: records all the “static” information about the tests.
o Attributes: Test case ID; Test case name (file name); Test case owner; Associated files for the test case.
• Test case – Product cross-reference
o Purpose: provides a mapping between the tests and the corresponding product features; enables identification of tests for a given feature.
o Attributes: Test case ID; Module ID.
• Test case run history
o Purpose: gives the history of when a test was run and what the result was; provides inputs on selection of tests for regression runs.
o Attributes: Test case ID; Run date; Time taken; Run status (success/failure).
• Test case – Defect cross-reference
o Purpose: gives details of test cases introduced to test certain specific defects detected in the product; provides inputs on the selection of tests for regression runs.
o Attributes: Test case ID; Defect reference # (points to a record in the defect repository).
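One way to picture these entities is as record types. The following C structs are a hypothetical sketch only; the field names and sizes are assumptions that mirror the attributes listed above, not a prescribed TCDB schema:

#include <stdio.h>

typedef struct {
    int  test_case_id;
    char name[64];                 /* test case name (file name) */
    char owner[32];                /* test case owner            */
    char associated_files[256];    /* associated files for the test case */
} TestCase;

typedef struct {                   /* test case - product cross-reference */
    int test_case_id;
    int module_id;
} TestCaseProductXref;

typedef struct {                   /* test case run history */
    int  test_case_id;
    char run_date[11];             /* e.g. "2023-12-01" */
    int  time_taken_minutes;
    int  run_status;               /* 1 = success, 0 = failure */
} TestCaseRunHistory;

typedef struct {                   /* test case - defect cross-reference */
    int test_case_id;
    int defect_reference;          /* points to a record in the defect repository */
} TestCaseDefectXref;

int main(void)
{
    TestCase tc = { 101, "login_valid_user.c", "tester1", "login_data.txt" };
    TestCaseProductXref xref = { 101, 7 };
    printf("Test case %d maps to module %d\n", tc.test_case_id, xref.module_id);
    return 0;
}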

(l) State the type of tools required for review and inspections.
• Complexity analysis tools
• Code comprehension
(m)Define class level testing.
• Methods and attributes make up a class.
• Class-level (or intra-class) testing refers to the testing of interactions among the
components of an individual class.
(n) State the completeness property of web page.
Check that certain information is available on a given web page, links between pages
exist, or even check the existence of the web pages themselves.
(o) Mention two challenges in testing for web-based Software.
• Diversity and complexity
• Dynamic environment
• Very short development time
• Continuous evolution
• Compatibility and interoperability
2. Answer any two:
(a)Explain life cycle of a bug.

Classified into two phases
• bugs-in phase
• bugs-out phase
Bugs-In Phase
• This phase is where the errors and bugs are introduced in the software.
• Whenever we commit a mistake, it creates errors on a specific location of the
software and consequently, when this error goes unnoticed, it causes some
conditions to fail, leading to a bug in the software.
• This bug is carried out to the subsequent phases of SDLC, if not detected.
• Thus, a phase may have its own errors as well as bugs received from the
previous phase.
• If you are not performing verification on earlier phases, then there is no chance
of detecting these bugs.
Bugs-Out Phase
• If failures occur while testing a software product, we come to the conclusion
that it is affected by bugs.
• However, there are situations when bugs are present, even though we don’t
observe any failures.
• In this phase, when we observe failures, the following activities are performed
to get rid of the bugs.
o Bug classification
o Bug isolation
o Bug resolution
Bug classification
• A bug can be critical or catastrophic in nature or it may have no adverse effect
on the output behaviour of the software.
• In this way, we classify all the failures.
• This is necessary, because there may be many bugs to be resolved.
• But a tester may not have sufficient time.
• Thus, categorization of bugs may help by handling high criticality bugs first and
considering other trivial bugs on the list later, if time permits.
Bug isolation
• Bug isolation is the activity by which we locate the module in which the bug
appears.
• Incidents observed in failures help in this activity.
• We observe the symptoms and back-trace the design of the software and reach
the module/files and the condition inside it which has caused the bug.
• This is known as bug isolation.
Bug resolution
• Once we have isolated the bug, we back-trace the design to pinpoint the
location of the error.
• In this way, a bug is resolved when we have found the exact location of its
occurrence.
(b)Discuss different kinds of bug based on Software Development Life Cycle (SDLC).
Bug Classification Based on SDLC
• Requirements and Specifications Bugs
o The first type of bug in SDLC is in the requirement gathering and
specification phase.
o It has been observed that most of the bugs appear in this phase only.
o If these bugs go undetected, they propagate into subsequent phases.
• Design Bugs
o Design bugs may be the bugs from the previous phase and in addition
those errors which are introduced in the present phase.
o The following design errors may be there.
▪ Control flow bugs
• If we look at the control flow of a program then there may
be many errors.
• For example,
o some paths through the flow may be missing;
o there may be unreachable paths, etc.
▪ Logic bugs
• Any type of logical mistakes made in the design is a logical
bug.
• For example,
• improper layout of cases,
• missing cases,
• improper combination of cases,
• misunderstanding of the semantics of the order in which a
Boolean expression is evaluated.
▪ Processing bugs
• Any type of computation mistakes result in processing
bugs.
• Examples include
• arithmetic error,

• incorrect conversion from one data representation to
another,
• ignoring overflow,
• improper use of logical operators, etc.
▪ Data flow bugs
• There may be data-flow anomaly errors (illustrated in the C sketch at the end of this answer) such as
o un-initialized data,
o data initialized in the wrong format,
o data initialized but not used,
o data used but not initialized,
o data redefined without any intermediate use, etc.
▪ Error handling bugs
• There may be errors about error handling in the software.
• There are situations in the system when exception
handling mechanisms must be adopted.
• If the system fails, then there must be an error message or
the system should handle the error in an appropriate way.
• If you forget to do all this, then error handling bugs appear.
▪ Race condition bugs
• Race conditions also lead to bugs.
• Sometimes these bugs are irreproducible.
▪ Boundary-related bugs
• Most of the time, the designers forget to take into
consideration what will happen if any aspect of a program
goes beyond its minimum and maximum values.
• When the software fails at the boundary values, then these
are known as boundary-related bugs.
• There may be boundaries in loop, time, memory, etc.
▪ User interface bugs
• There may be some design bugs that are related to users.
• If the user does not feel good while using the software,
then there are user interface bugs.
• Examples include
o inappropriate functionality of some features;
o not doing what the user expects;
o missing, misleading, or confusing information;
o wrong content in the help text; inappropriate error
messages, etc.
• Coding Bugs
o undeclared data
o undeclared routines
o dangling code
o typographical errors
o documentation bugs, i.e. erroneous comments lead to bugs in
maintenance.
• Interface and Integration Bugs
o External interface bugs include
▪ invalid timing or sequence assumptions related to external
signals,
▪ misunderstanding external input and output formats, and
▪ user interface bugs.
o Internal interface bugs include
▪ input and output format bugs,
▪ inadequate protection against corrupted data,
▪ wrong subroutine call sequence,
▪ call parameter bugs, and
▪ misunderstood entry or exit parameter
values.
o Integration bugs result from inconsistencies or incompatibilities between
modules discussed in the form of interface bugs.
o There may be bugs in data transfer and data sharing between the
modules.
• System Bugs
o There may be bugs while testing the system as a whole based on
various parameters like
▪ performance,
▪ stress,
▪ compatibility,
▪ usability, etc.
• Testing Bugs
o Some testing mistakes are:
▪ failure to notice/report a problem,
▪ failure to use the most promising test case,
▪ failure to make it clear how to reproduce the problem,
▪ failure to check for unresolved problems just before the release,
▪ failure to verify fixes,
▪ failure to provide summary report.
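To make a couple of the design-bug categories above concrete (the data-flow anomaly and the boundary-related bug), here is a deliberately faulty, hypothetical C fragment:

#include <stdio.h>

#define SIZE 5

int main(void)
{
    int values[SIZE];
    int sum;                       /* data-flow bug: used later without being initialized */

    /* boundary-related bug: the loop runs one step past the last valid
       index (i <= SIZE instead of i < SIZE), an off-by-one error. */
    for (int i = 0; i <= SIZE; i++)
        values[i] = i;

    for (int i = 0; i < SIZE; i++)
        sum += values[i];          /* sum was never set to 0 */

    printf("sum = %d\n", sum);     /* output is unpredictable */
    return 0;
}

A boundary-value test case would expose the off-by-one fault, and a data-flow analysis would flag the used-but-not-initialized variable.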
(c) Explain the process to verify requirements and objectives.
Following are the points against which every requirement in SRS should be verified:
• Correctness
o Testers should refer to other documentations or applicable standards
and compare the specified requirement with them.
o Testers can interact with customers or users, if requirements are not
well-understood.
o Testers should check the correctness in the sense of realistic
requirement.
• Unambiguous
o Every requirement has only one interpretation.
o Each characteristic of the final product is described using a single unique
term.
• Consistent
o Check for conflicts between real-world objects; for example, one specification recommends a mouse for input while another recommends a joystick.
o Check for logical conflicts between two specified actions; e.g. one specification requires a function to compute the square root, while another requires the same function to compute the square.
o Conflicts in terminology should also be verified; for example, at one place the term process is used, while at another place the same thing is termed task or module.
• Completeness
o Verify that all significant requirements such as functionality,
performance, design constraints, attribute, or external interfaces are
complete.
o Check whether responses of every possible input (valid & invalid) to the
software have been defined.
o Check whether figures and tables have been labeled and referenced
completely.

• Updation
o Requirement specifications are not stable; they may be modified, or another requirement may be added later.
o Therefore, if the SRS is updated, the updated specifications must be verified.
o If the specification is a new one, then all the above-mentioned checks and their feasibility should be verified.
o If the specification is a change to an already mentioned specification, then we must verify that this change can be implemented in the current design.
• Traceability
o Two types of traceability must be verified:
▪ Backward traceability
• Check that each requirement references its source in
previous documents.
▪ Forward traceability
• Check that each requirement has a unique name or
reference number in all the documents.
3. Answer any two:
(a)A program determines the next date in the calendar. Its input is entered in the form
of <ddmmyyyy> with the following range:
1 ≤ mm ≤ 12
1 ≤ dd ≤ 31
1900 ≤ yyyy ≤ 2025
Its output would be the next date or an error message 'invalid date'. Design test cases
using equivalence class partitioning method.
First we partition the domain of input in terms of valid input values and invalid values,
getting the following classes:
I1 = {<m, d, y> : 1 ≤ m ≤ 12}
I2 = {<m, d, y> : 1 ≤ d ≤ 31}
I3 = {<m, d, y> : 1900 ≤ y ≤ 2025}
I4 = {<m, d, y> : m < 1}
I5 = {<m, d, y> : m > 12}
I6 = {<m, d, y> : d < 1}
I7 = {<m, d, y> : d > 31}
I8 = {<m, d, y> : y < 1900}
I9 = {<m, d, y> : y > 2025}
The test cases can be designed from the above derived classes, taking one
test case from each class such that the test case covers maximum valid input
classes, and separate test cases for each invalid class. The test cases are shown
below:
Test case ID   mm   dd   yyyy   Expected result   Classes covered by the test case
1              5    20   1996   21-5-1996         I1, I2, I3
2              0    13   2000   Invalid input     I4
3              13   13   1950   Invalid input     I5
4              12   0    2007   Invalid input     I6
5              6    32   1956   Invalid input     I7
6              11   15   1899   Invalid input     I8
7              10   19   2026   Invalid input     I9
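A minimal sketch of the input-range check only (the actual next-date computation is omitted), showing how the invalid classes I4–I9 map to the "invalid date" response; the function name is an assumption for illustration:

#include <stdio.h>

/* Returns 1 when <dd, mm, yyyy> falls in the valid classes I1-I3,
   0 when it falls in any of the invalid classes I4-I9. */
int is_valid_input(int dd, int mm, int yyyy)
{
    if (mm < 1 || mm > 12)          return 0;   /* classes I4, I5 */
    if (dd < 1 || dd > 31)          return 0;   /* classes I6, I7 */
    if (yyyy < 1900 || yyyy > 2025) return 0;   /* classes I8, I9 */
    return 1;                                   /* classes I1, I2, I3 */
}

int main(void)
{
    /* test case 1 (valid) and test case 5 (dd > 31) from the table above */
    printf("%s\n", is_valid_input(20, 5, 1996) ? "valid" : "invalid date");
    printf("%s\n", is_valid_input(32, 6, 1956) ? "valid" : "invalid date");
    return 0;
}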

(b)Consider the following program.


main()
{
char chr;
1. printf ("Enter the special character\n");
2. scanf ("%c", &chr);
3. if ((chr != 48) && (chr != 49) && (chr != 50) && (chr != 51) &&
(chr != 52) && (chr != 53) && (chr != 54) && (chr != 55) &&
(chr != 56) && (chr != 57))
4. {
5. switch(chr)
6. {
7. case '*': printf("It is a special character");
8. break;
9. case '#': printf("It is a special character");
10. break;
11. case '@': printf("It is a special character");
12. break;
13. case '!': printf("It is a special character");
14. break;
15. case '%': printf("It is a special character");
16. break;
17. default : printf("You have not entered a special character");
18. break;
19. }// end of switch
20. } // end of If
21. else
22. printf("You have not entered a character");
23. } // end of main()
i. Draw the DD graph for the program.
ii. Calculate the cyclomatic complexity of the program using all the
methods.
iii. List all independent paths.
iv. Design test cases from independent paths.

Cyclomatic complexity
(i) V(G) = e – n + 2p
= 17 – 12 + 2
=7
(ii) V(G) = Number of predicate nodes + 1
Node B (the if statement) contributes 1 predicate node. Node C is a switch-case, so the number of predicate nodes it contributes is found as:
Number of predicate nodes for a switch node = Number of links out of the node – 1 = 6 – 1 = 5 (for node C)
Therefore, V(G) = (1 + 5) + 1 = 7
(iii) V(G) = Number of regions = 7
Independent paths
Since the cyclomatic complexity of the graph is 7, there will be 7 independent paths in
the graph as shown below:
1. A-B-D-L
2. A-B-C-E-K-L
3. A-B-C-F-K-L
4. A-B-C-G-K-L
5. A-B-C-H-K-L
6. A-B-C-I-K-L
7. A-B-C-J-K-L
Test Case Design from the list of Independent Paths

Test Case ID   Input Character   Expected Output                             Independent path covered by Test Case
1              7                 You have not entered a character            A-B-D-L
2              *                 It is a special character                   A-B-C-E-K-L
3              #                 It is a special character                   A-B-C-F-K-L
4              @                 It is a special character                   A-B-C-G-K-L
5              !                 It is a special character                   A-B-C-H-K-L
6              %                 It is a special character                   A-B-C-I-K-L
7              $                 You have not entered a special character    A-B-C-J-K-L

Note: test case 1 must use a digit character ('0'–'9'), because the else branch (node D) is taken only when the if condition fails, i.e., when the entered character is a digit.

(c) Explain the features and steps in selective retest technique.


Features Of Selective Retest Techniques
• It minimizes the resources required to regression test a new version
• It is achieved by minimizing the number of test cases applied to the new
version
• It is needed because a regression test suite grows with each version, resulting
in broken, obsolete, uncontrollable, redundant test cases
• It analyses the relationship between the test cases and the software elements
they cover
• It uses the information about changes to select test cases
Steps in Selective Retest Technique
• Select T’, a subset of T, as the set of test cases to execute on P’. (Regression test selection problem)
• Test P’ with T’, establishing the correctness of P’ with respect to T’. (Test suite execution problem)
• If necessary, create T’’, a set of new functional or structural test cases for P’. (Coverage identification problem)
• Test P’ with T’’, establishing the correctness of P’ with respect to T’’. (Test suite execution problem)
• Create T’’’, a new test suite and test execution profile for P’, from T, T’, and T’’. (Test suite maintenance problem)
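A hypothetical sketch of the regression test selection step: each test in T records the modules it covers, and T’ is formed from the tests that cover at least one changed module (the module IDs and coverage table are made-up for illustration):

#include <stdio.h>

#define NUM_TESTS    5
#define MAX_COVERED  4

/* Hypothetical coverage data: module IDs covered by each test in T (0 = unused slot). */
static const int coverage[NUM_TESTS][MAX_COVERED] = {
    {1, 2, 0, 0},    /* test 0 covers modules 1 and 2 */
    {3, 0, 0, 0},    /* test 1 covers module 3        */
    {2, 4, 0, 0},    /* test 2 covers modules 2 and 4 */
    {5, 0, 0, 0},    /* test 3 covers module 5        */
    {1, 4, 5, 0},    /* test 4 covers modules 1, 4, 5 */
};

/* Modules changed between P and P’ (illustrative). */
static const int changed[] = {2, 5};
static const int num_changed = 2;

int main(void)
{
    printf("T' (tests selected for P'): ");
    for (int t = 0; t < NUM_TESTS; t++) {
        int selected = 0;
        for (int c = 0; c < MAX_COVERED && !selected; c++)
            for (int m = 0; m < num_changed; m++)
                if (coverage[t][c] != 0 && coverage[t][c] == changed[m])
                    selected = 1;
        if (selected)
            printf("%d ", t);    /* prints tests 0, 2, 3 and 4 */
    }
    printf("\n");
    return 0;
}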
4. Answer any two:
(a)Explain the key elements of test management.
• Test organization
o It is the process of setting up and managing a suitable test
organizational structure and defining explicit roles.
o The project framework under which the testing activities will be carried
out is reviewed, high-level test phase plans are prepared, and resource
schedules are considered.
o Test organization also involves the determination of configuration
standards and defining the test environment.
o The testing group is responsible for the following activities:
▪ Maintenance and application of test policies
▪ Development and application of testing standards
▪ Participation in requirement, design, and code reviews
▪ Test planning
▪ Test execution
▪ Test measurement
▪ Test monitoring
▪ Defect tracking
▪ Acquisition of testing tools
▪ Test reporting
o The staff members of such a testing group are called test specialists or
test engineers or simply testers.
• Detailed test design and test specifications
o A detailed design is the process of designing a meaningful and useful
structure for the tests as a whole.
o It specifies the details of the test approach for a software functionality or
feature and identifying the associated test cases.
o Detailed test designing for each validation activity maps the
requirements or features to the actual test cases to be executed.
o One way to map the features to their test cases is to analyse the
following:
▪ Requirement traceability
▪ Design traceability
▪ Code traceability

o The analyses can be maintained in the form of a traceability matrix such
that every requirement or feature is mapped to a function in the
functional design.
o This function is then mapped to a module (internal design and code) in
which the function is being implemented.
o This in turn is linked to the test case to be executed.
• Test monitoring and assessment
o It is the ongoing monitoring and assessment to check the integrity of
development and construction.
o The status of configuration items should be reviewed against the phase plans and test progress reports prepared, to ensure that the verification and validation activities are correct.
• Test Planning
o The requirements definition and design specifications facilitate the
identification of major test items and these may necessitate updating the
test strategy.
o A detailed test plan and schedule is prepared with key test
responsibilities being indicated.
o Since software projects become uncontrolled if not planned properly, the
testing process is also not effective if not planned earlier.
o Moreover, if testing is not effective in a software project, it also affects
the final software product.
o Therefore, for a quality software, testing activities must be planned as
soon as the project planning starts.
o A test plan is defined as a document that describes the scope, approach,
resources, and schedule of intended testing activities.
o Test plan is driven with the business goals of the product.
o In order to meet a set of goals, the test plan identifies the following:
▪ Test items
▪ Features to be tested
▪ Testing tasks
▪ Tools selection
▪ Time and effort estimate
▪ Who will do each task
▪ Any risks
▪ Milestones
(b)Discuss major activities of V&V planning.
• Master Schedule

o The master schedule summarizes various V&V tasks and their
relationship to the overall project.
o Describes the project life cycle and project milestones including
completion dates.
o Summarizes the schedule of V&V tasks and how verification and
validation results provide feedback to the development process to
support overall project management functions.
o Defines an orderly flow of material between project activities and V&V
tasks.
o Uses references to PERT, CPM, and Gantt charts to define the relationships between the various activities.
• Resource Summary
o This activity summarizes the resources needed to perform V&V tasks,
including staffing, facilities, tools, finances, and special procedural
requirements such as
security, access rights, and documentation control.
o In this activity,
▪ Use graphs and tables to present resource utilization.
▪ Include equipment and laboratory resources required.
▪ Summarize the purpose and cost of hardware and software tools
to be employed.
▪ Take all resources into account and allow for additional time and
money to cope with contingencies.
• Responsibilities
o Identify the organization responsible for performing V&V tasks.
o There are two levels of responsibilities—
▪ general responsibilities assigned to different organizations and
▪ specific responsibilities for the V&V tasks to be performed,
assigned to individuals.
o General responsibilities should be supplemented with
specific responsibility for each task in the V&V plan.
• Tools, Techniques, and Methodology
o Identify the special software tools, techniques, and methodologies to be
employed by the V&V team.
o The purpose of each should be defined and plans for the acquisition,
training, support, and qualification of each should be described.
o This section may be in a narrative or graphic format.
o A separate tool plan may be developed for software tool acquisition,
development, or modification.
o In this case, a separate tool plan section may be added to the plan.
(c) Explain different types of project metrics.
Project Metrics is a set of metrics that indicates how the project is planned and
executed. Types are

• Effort Variance
o A typical project starts with requirements gathering and ends with
product release.
o All the phases that fall in between these points need to be planned and
tracked.
o In the planning cycle, the scope of the project is finalized.
o The project scope gets translated to size estimates, which specify the
quantum of work to be done.
o This size estimate gets translated to an effort estimate for each of the phases and activities by using the available productivity data.
o This initial effort is called the baselined effort.
o As the project progresses, if the scope of the project changes or if the available productivity numbers turn out to be incorrect, the effort estimates are re-evaluated; this re-evaluated effort estimate is called the revised effort.
o Effort variance for each of the phases provides a quantitative measure of
the relative difference between the revised and actual efforts.
o Variance % = [(Actual effort – Revised estimate)/Revised estimate] * 100 (a small calculation sketch follows at the end of this answer)
o A variance of more than 5% in any of the SDLC phases indicates scope for improvement in the estimation.
o The variance can also be negative; a negative variance is an indication of an over-estimate.
o These variance numbers along with analysis can help in better
estimation for the next release or the next revised estimation cycle.
• Schedule Variance
o Schedule variance, like effort variance, is the deviation of the actual
schedule from the estimated schedule.
o Depending on the SDLC model used by the project, several phases
could be active at the same time.
o Further, the different phases in SDLC are interrelated and could share
the same set of individuals.
o Because of all these complexities involved, schedule variance is
calculated only at the overall project level, at specific milestones,
▪ not with respect to each of the SDLC phases.
o Effort and schedule variance have to be analyzed in totality, not in
isolation.
o This is because while effort is a major driver of the cost,
▪ schedule determines how best a product can exploit market
opportunities.
o Variance can be classified into
▪ negative variance,
▪ zero variance,
▪ acceptable variance, and
▪ unacceptable variance.
o Generally 0–5% is considered as acceptable variance.

• Effort distribution
o Variance calculation helps in finding out whether commitments are met
on time and whether the estimation method works well.
o In addition, some indications on product quality can be obtained if the
effort distribution across the various phases are captured and analyzed.
o For example,
▪ Spending very little effort on requirements may lead to frequent
changes
▪ Spending less effort in testing may cause defects to crop up in the
customer place
o Adequate and appropriate effort needs to be spent in each of the SDLC
phase for a quality product release.
o The distribution percentage across the different phases can be
estimated at the time of planning and these can
be compared with the actuals at the time of release for getting a comfort
feeling on the release and estimation
methods.
o Mature organizations spend at least 10–15% of the total effort in
requirements and approximately the same
effort in the design phase.
o The effort percentage for testing depends on the type of release and
amount of change to the existing code base and functionality.
o Typically, organizations spend about 20–50% of their total effort in
testing.
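A small sketch of the effort-variance calculation referred to above, with a simple classification against the 0–5% acceptable band (the effort figures in main are illustrative only):

#include <stdio.h>

/* Variance % = ((actual effort - revised estimate) / revised estimate) * 100 */
double effort_variance_percent(double actual, double revised_estimate)
{
    return ((actual - revised_estimate) / revised_estimate) * 100.0;
}

int main(void)
{
    /* illustrative person-hour figures for one SDLC phase */
    double revised = 400.0, actual = 436.0;
    double v = effort_variance_percent(actual, revised);

    printf("Effort variance: %.1f%%\n", v);        /* prints 9.0% */
    if (v < 0.0)
        printf("Negative variance: over-estimate\n");
    else if (v <= 5.0)
        printf("Acceptable variance (0-5%%)\n");
    else
        printf("Variance > 5%%: scope for improving the estimation\n");
    return 0;
}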

5. Answer the following:


(a) Discuss the scope for automation in testing.
• Identifying the Types of Testing Amenable to Automation
o Certain types of tests automatically lend themselves to automation.
▪ Stress, reliability, scalability, and performance testing
• These types of testing require the test cases to be run from
a large number of different machines for an extended
period of time, such as 24 hours, 48 hours, and so on.
• It is just not possible to have hundreds of users trying out
the product day in and day out—
o they may neither be willing to perform the repetitive
tasks, nor will it be possible to find that many people
with the required skill sets.
• Test cases belonging to these testing types become the
first candidates for automation.
▪ Regression tests
• Regression tests are repetitive in nature.
• These test cases are executed multiple times during the
product development phases.
• Given the repetitive nature of the test cases, automation
will save significant time and effort in the long run.
• Furthermore, the time thus gained can be effectively
utilized for ad hoc testing and other more creative avenues.
▪ Functional tests
• These kinds of tests may require a complex set up and
thus require specialized skill, which may not be available
on an ongoing basis.
• Automating these once, using the expert skill sets, can
enable using less-skilled people to run these tests on an
ongoing basis.
• Automating Areas Less Prone to Change
o In a product scenario, the changes in requirements are quite common.
o Automation should consider those areas where requirements go through
lesser or no changes.
o Normally change in requirements cause scenarios and new features to
be impacted, not the basic functionality of the product.
o User interfaces normally go through significant changes during a project.
o To avoid rework on automated test cases, proper analysis has to be
done to find out the areas of changes to user interfaces, and automate
only those areas that will go through relatively less change.
o The non-user interface portions of the product can be automated first.
o While automating functions involving user interface-oriented and non-user-interface-oriented ("backend") elements, clear demarcation and “pluggability” have to be provided so that they can be executed together as well as independently.
o This enables the non-GUI portions of the automation to be reused even
when GUI goes through changes.
• Automate Tests that Pertain to Standards
o One of the tests that products may have to undergo is compliance to
standards.
o For example, a product providing a JDBC interface should satisfy the
standard JDBC tests. These tests undergo relatively less change.
o Even if they do change, they provide backward compatibility by which
automated scripts will continue to run.
o Automating for standards provides a dual advantage.
o Test suites developed for standards are not only used for product testing
but can also be sold as test tools for the market.
o Hence, automating for standards creates new opportunities for them to
be sold as commercial tools.
o In case there are tools already available in the market for checking such
standards,
▪ then there is no point in reinventing the wheel and rebuilding
these tests.
o Rather, focus should be towards other areas for which tools are not
available and in providing interfaces to other tools.
• Management Aspects in Automation
o Prior to starting automation, adequate effort has to be spent to obtain
management commitment.
o Automation generally is a phase involving a large amount of effort and is
not necessarily a one-time activity.
o The automated test cases need to be maintained till the product reaches
obsolescence.
o Since it involves significant effort to develop and maintain automated
tools, obtaining management commitment is an important activity.
o Since automation involves effort over an extended period of time,
management permissions are only given in phases and part by part.
o Hence, automation effort should focus on those areas for which
management commitment exists already.
(b)Explain the guidelines of automation testing.
• Consider building a tool instead of buying one, if possible
o It may not be possible every time.
o But if the requirement is small and sufficient resources allow, then go for
building the tool instead of buying, after weighing the pros and cons.
o Whether to buy or build a tool requires management commitment,
including budget and resource approvals.
• Test the tool on an application prototype
o While purchasing the tool, it is important to verify that it works properly
with the system being developed.
o However, it is not possible as the system being developed is often not
available.
o Therefore, it is suggested that if possible, the development team can
build a system prototype for evaluating the testing tool.
• Not all the tests should be automated
o Automated testing is an enhancement of manual testing, but it cannot be expected that all tests on a project can be automated.
o It is important to decide which parts need automation before going for
tools.
o Some tests are impossible to automate, such as verifying a printout.
o It has to be done manually.
• Select the tools according to organizational needs
o Do not buy the tools just for their popularity or to compete with other
organizations.
o Focus on the needs of the organization and know the resources (budget,
schedule) before choosing the automation tool.
• Use proven test-script development techniques

o Automation can be effective if proven techniques are used to produce
efficient, maintainable, and reusable test scripts.
o The following are some hints:
▪ Read the data values from either spreadsheets or tool-provided data pools, rather than hard-coding them into the test-case script, because hard-coding prevents test cases from being reused (see the data-driven sketch at the end of this answer).
▪ Hard-coded values should be replaced with variables and, whenever possible, data should be read from external sources.
o Use modular script development.
▪ It increases maintainability and readability of the source code.
o Build library of reusable functions by separating the common actions into
shared script library usable by all test engineers.
o All test scripts should be stored in a version control tool.
• Automate the regression tests whenever feasible
o Regression testing consumes a lot of time.
o If tools are used for this testing, the testing time can be reduced to a
greater extent.
o Therefore, whenever possible, automate the regression test cases.
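A minimal data-driven sketch of the guideline above: the inputs and expected results live in an external text file instead of being hard-coded in the script. The file name, its format, and the function under test are assumptions for illustration:

#include <stdio.h>

/* Hypothetical function under test. */
static int add(int a, int b) { return a + b; }

int main(void)
{
    /* Each line of the (assumed) data file holds: input1 input2 expected */
    FILE *fp = fopen("test_data.txt", "r");
    if (fp == NULL) {
        perror("test_data.txt");
        return 1;
    }

    int a, b, expected, passed = 0, total = 0;
    while (fscanf(fp, "%d %d %d", &a, &b, &expected) == 3) {
        total++;
        if (add(a, b) == expected)
            passed++;
        else
            printf("FAIL: add(%d, %d) != %d\n", a, b, expected);
    }
    fclose(fp);

    printf("%d of %d data-driven cases passed\n", passed, total);
    return 0;
}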

6. Answer the following:


(a)Discuss state based testing. Consider the class CreditCard in a banking system.
Draw the state transition diagram and generate test cases for the class
CreditCard.
• State based testing uses the concept of state machine of electronic circuits
where the output of the state machine is dependent not only on the present
state but also on the past state.
• A state represents the effect of previous inputs.
• Hence, in state machine, the output is not only dependent on the present inputs
but also on the previous inputs.
• In electronic circuits, such circuits are called sequential circuits.
• If the output of a state is only dependent on present inputs, such circuits are
called combinational circuits.
• In state based testing, the resulting state is compared with the expected state.
The initial state of credit card is undefined (i.e., no credit card number has been
provided). Upon reading the credit card during a sale, the object takes on a defined
state; that is, the attributes card number and expiration date, along with bank specific
identifiers are defined. The credit card is submitted when it is sent for authorization
and it is approved when authorization is received. The transition of credit card from
one state to another can be tested by deriving test cases that cause the transition to
occur.
Test Cases:
undefined->defined->submitted->approved
undefined->defined->submitted->disapproved
undefined->defined->submitted->*[defined->submitted]->disapproved

The state transition diagram (not reproduced here) contains the states undefined, defined, submitted, approved, and disapproved, with transitions corresponding to the event sequences listed in the test cases above.
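A hypothetical C sketch of the CreditCard states and transitions; a state-based test drives the object through the event sequences above and compares the resulting state with the expected state. The event names and the transition back from disapproved are assumptions drawn from the description:

#include <stdio.h>

typedef enum { UNDEFINED, DEFINED, SUBMITTED, APPROVED, DISAPPROVED } CardState;
typedef enum { READ_CARD, SUBMIT, AUTH_OK, AUTH_FAIL, RESUBMIT } CardEvent;

/* Transition function: returns the next state for (state, event);
   unexpected events leave the state unchanged. */
CardState next_state(CardState s, CardEvent e)
{
    switch (s) {
    case UNDEFINED:   return (e == READ_CARD) ? DEFINED   : s;
    case DEFINED:     return (e == SUBMIT)    ? SUBMITTED : s;
    case SUBMITTED:
        if (e == AUTH_OK)   return APPROVED;
        if (e == AUTH_FAIL) return DISAPPROVED;
        return s;
    case DISAPPROVED: return (e == RESUBMIT)  ? DEFINED   : s;
    default:          return s;
    }
}

int main(void)
{
    /* Test case: undefined -> defined -> submitted -> approved */
    CardState s = UNDEFINED;
    CardEvent script[] = { READ_CARD, SUBMIT, AUTH_OK };
    for (int i = 0; i < 3; i++)
        s = next_state(s, script[i]);

    printf(s == APPROVED ? "PASS: card approved\n"
                         : "FAIL: unexpected resulting state\n");
    return 0;
}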

(b)Discuss various threat types in a web application.


• Unauthorized user/Fake identity/Password cracking
o When an unauthorized user tries to access the software by using fake
identity, then security testing should be done such that any unauthorized
user is not able to see the contents/data in the software.
• Cross-site scripting (XSS)
o When a user inserts HTML/client-side script in the user interface of a
web application and this insertion is visible to other users, it is called
cross-site scripting (XSS).
o Attacker can use this method to execute malicious script or URL on the
victim’s browser.
o The tester should additionally check the web application for XSS.
• Buffer overflows
o Due to this problem, malicious code can be executed by hackers.
o In the application, check the modules prone to buffer overflow by submitting inputs of different lengths to the application (a minimal C illustration follows this list).
• URL manipulation
o Communication through HTTP may cause fiddling of data.
o Modification of data passed through HTTP GET method in the form of
query string is known as fiddling of data.
o The tester should check if the application passes important information
in the query string and design the test cases correspondingly.
• SQL injection
o Hackers can also put some SQL statements through the web application
user interface into some queries meant for querying the database to get
vital information from the server database.
o Design the test cases such that special characters from user inputs
should be handled/escaped properly in such cases.
• Denial of service
o When a service does not respond, it is denial of service.
o There are several ways that can make an application fail.
o For example,
▪ heavy load put on the application,
▪ distorted data that may crash the application,
▪ overloading of memory, etc.
o Design the test cases considering all these factors.
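A minimal C illustration of the buffer-overflow threat above: an unsafe copy into a fixed-size buffer next to a bounded alternative (the buffer size and the attacker input are illustrative):

#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy writes past 'name' when the input is longer than 7
   characters plus the terminating '\0' -- a classic buffer overflow. */
void store_name_unsafe(const char *input)
{
    char name[8];
    strcpy(name, input);            /* no length check */
    printf("stored (unsafe): %s\n", name);
}

/* Safer: the copy is bounded to the buffer size and always terminated. */
void store_name_safe(const char *input)
{
    char name[8];
    snprintf(name, sizeof(name), "%s", input);
    printf("stored (safe, possibly truncated): %s\n", name);
}

int main(void)
{
    const char *attacker_input = "AAAAAAAAAAAAAAAAAAAAAAAA"; /* 24 'A's */
    store_name_safe(attacker_input);
    /* store_name_unsafe(attacker_input);  -- would corrupt the stack */
    return 0;
}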
