Unit 3 2 Marks
1. What are the common problems of test automation? Unrealistic expectations; poor testing practice; expectation that automated tests will find a lot of new defects; false sense of security; maintenance of automated tests; technical problems; organizational issues.
2. What are the benefits of automated test case generation? automates tedious aspects of test case design, such as activating every menu item or calculating boundary values from known data ranges; can generate a complete set of test cases with respect to their source (code, interface, or specification); can identify some types of defect, such as missing links, non-working window items, or software that does not conform to a stored specification.
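For illustration only, here is a minimal Python sketch of the boundary-value idea mentioned above: given a known valid range, a tool can derive the standard boundary inputs mechanically. The field name and the 1-100 range are assumptions made up for the example, not taken from any particular tool.

    # Minimal sketch: deriving boundary-value test inputs from a known valid range.
    # The field name and range (1-100) are illustrative assumptions.

    def boundary_values(low, high):
        """Return the classic boundary-value inputs for an inclusive integer range."""
        return [low - 1, low, low + 1, high - 1, high, high + 1]

    if __name__ == "__main__":
        for value in boundary_values(1, 100):
            expected = "accept" if 1 <= value <= 100 else "reject"
            print(f"quantity={value:4d} -> expected outcome: {expected}")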
3. What are the limitations of automated test case generation? code-based methods do not generate expected outcomes; interface-based methods can only generate partial expected outcomes; code-based and interface-based methods cannot find specification defects; specification-based methods depend on the quality of the specification; all methods may generate too many tests to be practical to run; human expertise is still needed to prioritize tests, to assess the usefulness of the generated tests, and to think of the tests that could never be generated by any tool.
4. What are the limitations of automating software testing?
Does not replace manual testing; manual tests find more defects than automated tests; greater reliance on the quality of the tests; test automation does not improve effectiveness; test automation may limit software development; tools have no imagination.
5. What are the attributes of a good script? Scripts should be: annotated, to guide both the user and the maintainer; functional, performing a single task, encouraging reuse; structured, for ease of reading, understanding, and maintenance; understandable, for ease of maintenance; documented, to aid reuse and maintenance.
6. What are the different scripting techniques? The scripting techniques described are: linear scripts; structured scripting; shared scripts; data-driven scripts; keyword-driven scripts (see the keyword-driven sketch after question 7).
7. What is script pre-processing? Script pre-processing is a term we use to describe any of a number of different script manipulation techniques that strive to make the process of writing and maintaining scripts easier and therefore less error-prone.
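To illustrate the keyword-driven technique from question 6, here is a minimal Python sketch in which the test case is pure data (keyword plus arguments) and a small interpreter maps each keyword to an implementation function. The keywords login, add_item, and check_total are hypothetical; a real tool would supply its own vocabulary and bind it to actions on the software under test.

    # Minimal keyword-driven sketch: test steps are data (keyword + arguments),
    # and a small interpreter maps each keyword to an implementation function.
    # The keywords and actions here are illustrative assumptions.

    def login(user, password):
        print(f"logging in as {user}")

    def add_item(name, quantity):
        print(f"adding {quantity} x {name}")

    def check_total(expected):
        print(f"checking that the total is {expected}")

    KEYWORDS = {"login": login, "add_item": add_item, "check_total": check_total}

    # The 'test case' is pure data; a non-programmer can write new rows like these.
    test_case = [
        ("login", "alice", "secret"),
        ("add_item", "widget", "3"),
        ("check_total", "29.97"),
    ]

    for keyword, *args in test_case:
        KEYWORDS[keyword](*args)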
8. What are the advantages and disadvantages of the data-driven approach?
The advantages of the data-driven approach: similar tests can be added very quickly; adding new tests can be done by testers without technical or programming knowledge about the tool scripting language; there is no additional script maintenance effort for the second and subsequent tests. The disadvantages of the data-driven approach are: initial set-up takes a lot of effort; specialized (programming) support is required; it must be well managed. (A data-driven sketch follows question 9.)
9. What are the advantages and disadvantages of shared scripts? The advantages of shared scripts are: similar tests will take less effort to implement; maintenance costs are lower than for linear scripts; eliminates obvious repetitions; can afford to put more intelligence into the shared scripts. The disadvantages of shared scripts: there are more scripts to keep track of, document, name, and store, and if not well managed it may be hard to find an appropriate script; test-specific scripts are still required for every test so the maintenance costs will still be high; shared scripts are often specific to one part of the software under test.
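The following minimal Python sketch illustrates the data-driven approach from question 8: one control script, with the test inputs and expected outcomes held as separate rows of data. The CSV columns and the toy discount rule being tested are assumptions invented for the example.

    # Minimal data-driven sketch: one control script, many rows of test data.
    # The CSV columns and the discount rule being tested are illustrative assumptions.
    import csv
    import io

    # In practice the data would live in a separate file maintained by testers.
    TEST_DATA = io.StringIO(
        "order_value,expected_discount\n"
        "50,0\n"
        "100,5\n"
        "500,25\n"
    )

    def discount(order_value):
        """Toy system under test: 5% discount on orders of 100 or more."""
        return order_value * 0.05 if order_value >= 100 else 0

    for row in csv.DictReader(TEST_DATA):
        actual = discount(float(row["order_value"]))
        expected = float(row["expected_discount"])
        status = "PASS" if actual == expected else "FAIL"
        print(f"order_value={row['order_value']}: {status}")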
10. What are the advantages and disadvantages of linear scripts?
The advantages are: no upfront work or planning is required; you can just sit down and record any manual task; you can quickly start automating; it provides an audit trail of what was actually done; the user doesn't need to be a programmer (providing no changes are required to the recorded script, the script itself need not be seen by the user); good for demonstrations (of the software or of the tool).
Disadvantages of linear scripts
the process is labor-intensive: typically it can take 2 to 10 times longer to produce a working automated test (including comparisons) than running the test manually; everything tends to be done 'from scratch' each time; the test inputs and comparisons are 'hard-wired' into the script; there is no sharing or reuse of scripts; linear scripts are vulnerable to software changes; linear scripts are expensive to change (they have a high maintenance cost); if anything happens when the script is being replayed that did not happen when it was recorded, such as an unexpected error message from the network, the script can easily become out of step with the software under test, causing the whole test to fail.
PART B
1. Explain the V-model and tool support for life-cycle testing.
2. Explain the benefits and problems of test automation.
4. Explain the different scripting techniques.
5. Explain script pre-processing functions in detail.
1. What is reference testing? When automating test cases, the expected outcomes have either to be prepared in advance or generated by capturing the actual outcomes of a test run. In the latter case the captured outcomes must be verified manually and saved as the expected outcomes for further runs of the automated tests. This is called reference testing.
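A minimal Python sketch of this reference-testing flow, assuming a hypothetical expected_output.txt baseline file: the first run captures the actual outcome and saves it for manual verification, and later runs compare against the saved expected outcome.

    # Minimal sketch of reference testing: capture actual outcomes on the first run,
    # verify them manually, then reuse them as expected outcomes for later runs.
    # File names and the captured output are illustrative assumptions.
    from pathlib import Path

    EXPECTED = Path("expected_output.txt")

    def run_test():
        """Stand-in for executing the real test and capturing its output."""
        return "report generated: 3 rows\n"

    actual = run_test()
    if not EXPECTED.exists():
        # First run: save the captured outcome as a candidate baseline.
        EXPECTED.write_text(actual)
        print("No baseline found; actual outcome saved for manual verification.")
    elif actual == EXPECTED.read_text():
        print("PASS: actual outcome matches the verified expected outcome.")
    else:
        print("FAIL: actual outcome differs from the expected outcome.")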
2. What is a comparator? An automated comparison tool, normally referred to as a 'comparator', is a computer program that detects differences between two sets of data. For test automation this data is usually the outcome of a test run and the expected outcome. The data may be displayed on a screen or held in files or databases, and can be in a variety of formats including standard text files. Where a comparator facility is built into a test execution tool the data is more likely to be screen images.
3. What is dynamic comparison? Dynamic comparison is comparison that is performed while a test case is executing. Test execution tools normally include comparator features that are specifically designed for dynamic comparison. Dynamic comparison is perhaps the most popular approach because it is much better supported by commercial test execution tools, particularly those with capture/replay facilities.
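A minimal Python sketch of dynamic comparison, with hypothetical steps and expected values: each step's outcome is checked while the test is still executing, so the script can report or react to a mismatch immediately rather than waiting until the end of the run.

    # Minimal sketch of dynamic comparison: each step's outcome is checked while the
    # test is still executing, so a failure is reported (or handled) immediately.
    # The steps and expected values are illustrative assumptions.

    def step_open_account():
        return "account opened"

    def step_deposit():
        return "balance: 100"

    steps = [
        (step_open_account, "account opened"),
        (step_deposit, "balance: 100"),
    ]

    for action, expected in steps:
        actual = action()
        if actual != expected:
            print(f"FAIL during execution at {action.__name__}: got {actual!r}")
            break  # the script can stop or recover as soon as the mismatch is seen
        print(f"{action.__name__}: OK")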
4. What is post-execution comparison?
Post-execution comparison is the comparison that is performed after a test case has run. It is mostly used to compare outputs other than those that have been sent to the screen, such as files that have been created and the updated content of a database.
5. Why are complex comparisons needed? Complex comparison (also known as intelligent comparison) enables us to compare actual and expected outcomes with known differences between them. A complex comparison will ignore certain types of difference (usually those that we expect to see or that are not important to us) and highlight others, providing we specify the comparison requirements correctly.
6. What is a filter?
A filter is an editing or translating step that is performed on both an expected outcome file and the corresponding actual outcome file. More than one filtering task can be performed on any expected/actual test outcome before comparison.
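A minimal Python sketch combining a filter with a post-execution comparison, assuming a made-up report format: both the expected and actual outcome text are passed through the same editing step (masking run-dependent timestamps) before a simple equality check, so only unexpected differences cause a failure.

    # Minimal sketch of a filter plus post-execution comparison: both the expected
    # and actual outcome text pass through the same editing step (masking a
    # run-dependent timestamp) before a simple difference check. The report format
    # and timestamp pattern are illustrative assumptions.
    import re

    def filter_timestamps(text):
        """Replace ISO-style date/time stamps with a fixed token before comparing."""
        return re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TIMESTAMP>", text)

    expected = "Run started 2024-01-05 09:00:00\nTotal records: 42\n"
    actual = "Run started 2025-06-17 14:32:08\nTotal records: 42\n"

    if filter_timestamps(actual) == filter_timestamps(expected):
        print("PASS: outcomes match once the known differences are filtered out")
    else:
        print("FAIL: outcomes differ in ways the filter does not account for")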
7. What attributes of automated test cases affect their maintenance cost? Number of test cases; quantity of test data; format of test data; time to run test cases; debug-ability of test cases; interdependencies between tests; naming conventions; test complexity; test documentation.