
Static Testing

Static testing is a form of manual software testing that involves reviewing code or documentation without executing the code. The main objectives are to reduce defects early in the development process and improve code quality. Common static testing techniques include inspections, walkthroughs, peer reviews, and desk checking. Fagan inspections involve a structured group review process with roles like moderator, reviewer, and author. Issues found are categorized and the author fixes them.
Copyright
© Attribution Non-Commercial (BY-NC)

Static Testing:
- Introduction of Static Testing
- Objectives of Static Testing
- Static Testing Methods: Inspections, Walkthroughs, Peer Reviews, Desk Checking

Introduction:
In software development, static testing, also called dry run testing, is a form of software testing in which the authors manually read their own documents or code to find errors. It is generally not detailed testing, but primarily syntax checking of the code or document, together with a manual review to find logic errors as well. The term "static" in this context means "not while running" or "not while executing".

Static Testing Approach:
Static testing is the least expensive form of testing and has the largest potential for reducing defects in software under development. The primary objective of static testing is defect reduction in the software by reducing defects in the documentation from which the software is developed. Consider using a two-step approach to static testing:
1. Clean up the cosmetic appearance of the document: check spelling, grammar, punctuation, and formatting.
2. Use some technique to focus expert review on the document's contents.
With the first step complete, the document is cosmetically clean, so readers can concentrate on the content. Some popular and effective techniques for content review are discussed in the next section.

Static Testing Techniques:
1. Inspection
2. Walkthrough
3. Desk Checking
4. Peer Review/Rating

1. Inspection
Variants: Fagan Inspection, Gilb Inspection, Two-Person Inspection, N-Fold Inspection, Meeting-less Inspection.
Generic steps of the inspection process:
1. Planning and Preparation (individual)
2. Collection (group/meeting)
3. Repair (follow-up)

Fagan Inspection

Fagan inspection refers to a structured process of trying to find defects in documents such as programming code, specifications, and designs during various phases of the software development process. It is named after Michael Fagan, who is credited with inventing formal software inspections. A Fagan inspection is a group review method used to evaluate the output of a given process. Defects found during inspection fall into two categories:
- Major defects: incorrect or missing functionality or specifications; the software will not function correctly unless these defects are resolved.
- Minor defects: defects that do not threaten the correct functioning of the software, mostly small errors such as spelling mistakes in documents, or cosmetic issues such as incorrect positioning of controls in a program interface.

A typical Fagan inspection consists of the following operations:
1. Planning: preparation of materials, arrangement of participants, arrangement of a meeting place.
2. Overview: the author presents an overview and roles are assigned.
3. Preparation (individual inspection): each participant independently analyzes the code and other documents, records individual results (questions and potential defects), and prepares for his or her role.
4. Inspection meeting: the individual inspection results are collected and consolidated. The focus is defect identification, not solutions, to keep the inspection effective. A Fagan inspection typically involves about four people and lasts no more than two hours; the outcome is an inspection report.
5. Rework: the defects found during the inspection meeting are resolved by the author, designer, or programmer.
6. Follow-up: defects fixed in the rework phase are verified. The moderator is usually responsible for verifying rework. Sometimes fixed work can be accepted without verification, for example when the defect was trivial. In non-trivial cases, a full re-inspection is performed by the whole inspection team, not only the moderator. If verification fails, the process returns to rework.

Gilb Inspection:
1. Planning (same as Fagan inspection)
2. Kickoff (overview)
3. Individual checking (preparation)
4. Logging meeting (inspection)
5. (a) Edit (rework) (b) Process brainstorming
6. Edit audit (follow-up)

Process brainstorming is added right after the inspection meeting. The focus of this step is root cause analysis aimed at preventive actions and process improvement, in the form of reduced defect injection in future development activities. The team size is typically about four to six people. Checklists are used extensively, particularly for step 3.

Inspection Session
During the session, two activities occur:
1. The programmer narrates, statement by statement, the logic of the program. During the narration, other participants should raise questions, and these should be pursued to determine whether errors exist. It is likely that the programmer, rather than the other team members, will find many of the errors discovered during this narration.
2. The program is analyzed with respect to a checklist of historically common programming errors.

The moderator is responsible for ensuring that the discussions proceed along productive lines and that the participants focus their attention on finding errors, not correcting them. (The programmer corrects errors after the inspection session.) The ideal time for the inspection session appears to be from 90 to 120 minutes; since the session is a mentally taxing experience, longer sessions tend to be less productive. Most inspections proceed at a rate of approximately 150 program statements per hour. For that reason, large programs should be examined in multiple inspections, each dealing with one or several modules or subroutines. Note that for the inspection process to be effective, the appropriate attitude must be established. If the programmer views the inspection as an attack on his or her character and adopts a defensive posture, the process will be ineffective. Rather, the programmer must approach the process with an egoless attitude so that the session can be productive.

Two-Person Inspection:
Some software artifacts are small enough to be inspected by one or two inspectors, and such reduced-size inspection teams can be used to inspect software artifacts of limited size, scope, or complexity. The so-called two-person inspection was proposed to simplify the Fagan inspection, using an author-inspector pair. This technique is cheaper and more suitable for smaller-scale programs, small increments of design and/or code in incremental development, or other software artifacts of similarly small size. A typical implementation of two-person inspection is the reversible author-inspector pair. This technique is easier to manage because of the mutual benefit to both individuals.

Meeting-less Inspection
Experimental evidence indicates that most discovered defects are in fact found by individual inspectors during the preparation step of formal inspections such as Fagan and Gilb; the defect detection ratio of the meeting session lies in the range of 5% to 30%. There is therefore a possibility of eliminating inspection meetings entirely, significantly reducing the overall inspection cost. This results in a so-called meeting-less inspection, in which individual inspectors do not communicate with each other. One main drawback of this approach is a high false-alarm rate; another is duplication of reported defects. Various means of communication can be used to pass the individual inspection results to the author, e.g. direct communication with the author, or some defect repository.

N-Fold Inspection
Tsai et al. [1] developed the N-fold inspection process, in which N teams each carry out independent inspections of the entire artifact. N-fold inspection uses formal inspections but replicates the inspection activities across N independent teams. The same software artifact (e.g., a URD, User Requirements Document) is given to all N teams, and an appointed moderator supervises the efforts of all of them. N-fold inspections will find more defects than regular inspections as long as the teams do not completely duplicate each other's work; however, they are far more expensive than a single-team inspection. Each team performs a formal inspection using a checklist and analyzes the software artifact. Several teams may identify the same fault, but the moderator gathers all results of the independent inspection efforts and records each fault once in a database.

Over-the-Shoulder Review
An over-the-shoulder review is an informal code review technique: a developer stands over the author's workstation while the author walks the reviewer through a set of code changes. Typically the author drives the review, sitting at the keyboard and mouse, opening various files, pointing out the changes, and explaining why each was done that way. The author can present the changes using various tools and can jump back and forth between the changes and other files in the project. If the reviewer sees something amiss, the pair can engage in a little spot pair programming: the author writes the fix while the reviewer waits and then checks it. Bigger changes, where the reviewer does not need to be involved, are taken offline. With modern desktop-sharing software, an over-the-shoulder review can be made to work over long distances, but this complicates the process because the sharing meetings must be scheduled and communication happens over the phone. Standing over a shoulder allows people to point, write examples, or even go to a whiteboard for discussion; this is more difficult over the Internet, and many of the face-to-face interactions are lost.

E-mail Pass-Around Review
This is the second most common form of informal code review. Whole files or changes are packaged up by the author and sent to reviewers via e-mail. The reviewers examine the files, ask questions, discuss with the author and other developers, and suggest changes.

3. Desk Checking
Desk checking is one of the older human error-detection practices. A desk check can be viewed as a one-person inspection or walkthrough: a person reads a program, checks it against an error list, and/or walks test data through it. In other words, it is manually testing the logic of a program.

Desk Checking Process
Desk checking involves first running a spell checker, grammar checker, syntax checker, or whatever tools are available to clean up the cosmetic appearance of the document. Then the author reviews the document, looking for inconsistencies, incompleteness, and missing information. Problems detected in the contents should be corrected directly by the author, possibly with advice from the project manager and other experts on the project. Once all corrections are made, the cosmetic testing is rerun to catch and correct any spelling, grammar, and punctuation errors introduced by the content corrections.

Desk Checking Drawbacks
Desk checking is the least formal and least time-consuming static testing technique. Of all the techniques, desk checking is the only one in which the author tests his or her own document. For most people, desk checking is relatively unproductive. One reason is that it is a completely undisciplined process. A second, and more important, reason is that it runs counter to a testing principle: people are generally ineffective at testing their own programs. For this reason, desk checking is best performed by a person other than the author of the program.

4. Code Walkthrough
The code walkthrough, like the inspection, is a set of procedures and error-detection techniques for group code reading. It shares much in common with the inspection process, but the procedures are slightly different and a different error-detection technique is employed. Like the inspection, the walkthrough is an uninterrupted meeting of one to two hours' duration. The walkthrough team consists of three to five people. One person plays a role similar to that of the moderator in the inspection process, another plays the role of secretary (recording all errors found), and a third plays the role of tester. Suggestions as to who the remaining participants should be vary; of course, the programmer is one of them. Suggestions for the other participants include:
- a highly experienced programmer
- a programming-language expert
- a new programmer (to give a fresh, unbiased outlook)
- the person who will eventually maintain the program

The initial procedure is identical to that of the inspection process: the participants are given the materials several days in advance to allow them to study the program. However, the procedure in the meeting is different. Rather than simply reading the program or using error checklists, the participants "play computer". The person designated as the tester comes to the meeting armed with a small set of paper test cases: representative sets of inputs (and expected outputs) for the program or module.

During the meeting, each test case is mentally executed; that is, the test data are walked through the logic of the program, and the state of the program (i.e., the values of the variables) is tracked on paper or a whiteboard. Of course, the test cases must be simple in nature and few in number, because people execute programs at a rate many orders of magnitude slower than a machine. The walkthrough should have a follow-up process similar to that described for the inspection process.

5. Peer Ratings/Review
Peer rating is a technique of evaluating anonymous programs in terms of their overall quality, maintainability, extensibility, usability, and clarity. The purpose of the technique is to provide programmer self-evaluation. A programmer is selected to serve as administrator of the process. The administrator, in turn, selects approximately 6 to 20 participants (6 is the minimum needed to preserve anonymity). The participants are expected to have similar backgrounds (you shouldn't group Java application programmers with assembly-language system programmers, for example). Each participant is asked to select two of his or her own programs to be reviewed: one representative of what the participant considers his or her finest work, and one the programmer considers poorer in quality. Once the programs have been collected, they are randomly distributed to the participants. Each participant is given four programs to review: two of the "finest" programs and two of the "poorer" programs, but the reviewer is not told which is which. Each participant spends 30 minutes with each program and completes an evaluation form after reviewing it. After reviewing all four programs, each participant rates their relative quality.

The evaluation form asks the reviewer to answer questions on a scale from 1 to 7 (1 meaning "definitely yes", 7 meaning "definitely no"); the reviewer is also asked for general comments and suggested improvements. After the review, the participants are given the anonymous evaluation forms for their two contributed programs. They also receive a statistical summary showing the overall and detailed ranking of their programs across the entire set, as well as an analysis of how their ratings of other programs compared with those of the other reviewers of the same programs. The purpose of the process is to allow programmers to self-assess their programming skills.

Black Box Testing Technique: Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. It strives to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed. An equivalence class represents a set of valid or invalid states for input conditions.

Test selection using equivalence partitioning allows a tester to subdivide the input domain into a relatively small number of sub-domains, say N > 1. In strict mathematical terms, the sub-domains are by definition disjoint. In the original figure (not reproduced here), the four subsets in part (a) constitute a partition of the input domain, while the subsets in part (b) do not. Each subset is known as an equivalence class.

Program Behavior and Equivalence Classes
The equivalence classes are created assuming that the program under test exhibits the same behavior on all elements (i.e., tests) within a class. This assumption allows the tester to select exactly one test from each equivalence class, resulting in a test suite of exactly N tests.

Faults Targeted
The entire set of inputs to any application can be divided into at least two subsets: one containing all the expected, or legal, inputs (E) and the other containing all unexpected, or illegal, inputs (U). Each of the two subsets can be further subdivided into subsets on which the application is required to behave differently (e.g., E1, E2, E3 and U1, U2).
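As a minimal illustration of the E/U split, consider a hypothetical integer square-root component (not from the original text); the legal subset E and illegal subset U can be written as predicates:

```python
import math

# Hypothetical component: integer square root, defined only for x >= 0.
def int_sqrt(x):
    if x < 0:
        raise ValueError("negative input")  # illegal inputs (U) are rejected
    return math.isqrt(x)

# E: expected (legal) inputs; U: unexpected (illegal) inputs.
def is_legal(x):
    return x >= 0

# E can be further subdivided where required behavior differs, e.g.:
#   E1: x == 0, E2: perfect squares, E3: non-squares
# U here has a single class: U1: x < 0.
```

A test suite would draw at least one value from each of E1, E2, E3, and U1.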

Unidimensional Partitioning
One way to partition the input domain is to consider one input variable at a time; each input variable then leads to a partition of the input domain. We refer to this style of partitioning as unidimensional equivalence partitioning, or simply unidimensional partitioning. This type of partitioning is commonly used.

Multidimensional Partitioning
Another way is to consider the input domain I as the set product of the input variables and define a relation on I. This procedure creates one partition consisting of several equivalence classes. We refer to this method as multidimensional equivalence partitioning, or simply multidimensional partitioning. Multidimensional partitioning leads to a large number of equivalence classes that are difficult to manage manually, and many of the classes so created might be infeasible. Nevertheless, the equivalence classes so created offer an increased variety of tests, as illustrated in the next section.

Partitioning Example
Consider an application that requires two integer inputs x and y, each expected to lie in the following ranges: 3 ≤ x ≤ 7 and 5 ≤ y ≤ 9.

For unidimensional partitioning we apply the partitioning guidelines to x and y individually. This leads to the following six equivalence classes:
- E1: x < 3, E2: 3 ≤ x ≤ 7, E3: x > 7 (partition based on x)
- E4: y < 5, E5: 5 ≤ y ≤ 9, E6: y > 9 (partition based on y)
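The six unidimensional classes can be sketched in code; the class boundaries follow directly from the stated ranges (3 ≤ x ≤ 7, 5 ≤ y ≤ 9), while the E1..E6 labels and representative values are conventional choices:

```python
# Unidimensional partitioning: each variable is partitioned independently.
x_classes = {
    "E1: x < 3":       lambda x: x < 3,
    "E2: 3 <= x <= 7": lambda x: 3 <= x <= 7,
    "E3: x > 7":       lambda x: x > 7,
}
y_classes = {
    "E4: y < 5":       lambda y: y < 5,
    "E5: 5 <= y <= 9": lambda y: 5 <= y <= 9,
    "E6: y > 9":       lambda y: y > 9,
}

# One representative value per x-class (chosen arbitrarily).
x_reps = {"E1: x < 3": 0, "E2: 3 <= x <= 7": 5, "E3: x > 7": 10}

# The classes based on one variable are disjoint and cover all values:
# every integer falls in exactly one x-class.
for v in range(-5, 15):
    assert sum(pred(v) for pred in x_classes.values()) == 1
```

Under the "same behavior within a class" assumption, one test per class suffices, giving a six-test suite.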

For multidimensional partitioning we consider the input domain to be the set product X × Y. This leads to 9 equivalence classes, one for each combination of the three x-ranges (x < 3, 3 ≤ x ≤ 7, x > 7) with the three y-ranges (y < 5, 5 ≤ y ≤ 9, y > 9).
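A quick way to see where the 9 classes come from is to form the set product of the per-variable ranges:

```python
from itertools import product

x_ranges = ["x < 3", "3 <= x <= 7", "x > 7"]
y_ranges = ["y < 5", "5 <= y <= 9", "y > 9"]

# Multidimensional partitioning: one class per (x-range, y-range) pair.
classes = [f"{xr} and {yr}" for xr, yr in product(x_ranges, y_ranges)]
# 3 x-ranges times 3 y-ranges gives 9 equivalence classes (E1..E9).
```

The class count grows as the product of the per-variable class counts, which is why multidimensional partitioning quickly becomes hard to manage manually.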

(For example, one of the multidimensional classes is E8: x > 7, 5 ≤ y ≤ 9.)

Equivalence Class Partitioning Testing
Equivalence classes can be defined according to the following guidelines:
- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a specific value, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
- If an input condition is Boolean, one valid and one invalid class are defined.

Systematic Procedure for Equivalence Partitioning
1. Identify the input domain: Read the requirements carefully and identify all input and output variables, their types, and any conditions associated with their use. Environment variables, such as class variables used in the method under test and environment variables in Unix, Windows, and other operating systems, also serve as input variables. Given the set of values each variable can assume, an approximation to the input domain is the product of these sets.
2. Equivalence classing: Partition the set of values of each variable into disjoint subsets; each subset is an equivalence class. Together, the equivalence classes based on an input variable partition the input domain. Partitioning the input domain using the values of one variable is done based on the expected behavior of the program: values for which the program is expected to behave "in the same way" are grouped together. Note that "same way" needs to be defined by the tester.
3. Combine equivalence classes: This step is usually omitted, and the equivalence classes defined for each variable are used directly to select test cases. However, by not combining the equivalence classes, one misses the opportunity to generate useful tests. The equivalence classes are combined using the multidimensional partitioning approach described earlier.
4. Identify infeasible equivalence classes: An infeasible equivalence class is one that contains a combination of input data that cannot be generated during testing. Such an equivalence class might arise for several reasons.

For example, suppose that an application is tested via its GUI, i.e., data is input using commands available in the GUI. The GUI might disallow invalid inputs by offering a palette of valid inputs only. There might also be constraints in the requirements that render certain equivalence classes infeasible.

Example
Consider a component, generate_grading, with the following specification: the component is passed an exam mark (out of 75) and a coursework (c/w) mark (out of 25), from which it generates a grade for the course in the range 'A' to 'D'. The grade is calculated from the overall mark, which is the sum of the exam and c/w marks, as follows:
- greater than or equal to 70: 'A'
- greater than or equal to 50, but less than 70: 'B'
- greater than or equal to 30, but less than 50: 'C'
- less than 30: 'D'
Where a mark is outside its expected range, a fault message ('FM') is generated. All inputs are passed as integers. Initially the equivalence partitions are identified; then test cases are derived to exercise the partitions. Equivalence partitions are identified from both the inputs and outputs of the component, and both valid and invalid inputs and outputs are considered.

Example (Input Identification)
The partitions for the two inputs are identified first. The valid partitions can be described by:
- 0 ≤ exam mark ≤ 75
- 0 ≤ coursework mark ≤ 25
The most obvious invalid partitions based on the inputs can be described by:
- exam mark > 75
- exam mark < 0
- coursework mark > 25
- coursework mark < 0
Partitioned ranges of values can be represented pictorially; for the input exam mark, we get:

And for the input, coursework mark, we get:

Example (Output Identification)
The partitions for the outputs are identified by considering each of the valid outputs of the component:
- 'A' is induced by 70 ≤ total mark ≤ 100
- 'B' is induced by 50 ≤ total mark < 70
- 'C' is induced by 30 ≤ total mark < 50
- 'D' is induced by 0 ≤ total mark < 30
- 'Fault Message' is induced by total mark > 100
- 'Fault Message' is induced by total mark < 0

where total mark = exam mark + coursework mark. Note that 'Fault Message' is considered a valid output, as it is a specified output. The equivalence partitions and boundaries for total mark are shown pictorially below:
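With the input and output partitions identified, a sketch implementation consistent with this specification can be written (the name generate_grading and the grading rules come from the text; the exact coding is one plausible rendering):

```python
def generate_grading(exam_mark, coursework_mark):
    """Grade a course from an exam mark (out of 75) and a c/w mark (out of 25)."""
    # A mark outside its expected range yields the fault message 'FM'.
    if not (0 <= exam_mark <= 75) or not (0 <= coursework_mark <= 25):
        return "FM"
    total = exam_mark + coursework_mark
    if total >= 70:
        return "A"
    if total >= 50:
        return "B"
    if total >= 30:
        return "C"
    return "D"

# One value from each output partition:
# generate_grading(60, 15) -> 'A'   (total 75)
# generate_grading(40, 15) -> 'B'   (total 55)
# generate_grading(20, 15) -> 'C'   (total 35)
# generate_grading(10,  5) -> 'D'   (total 15)
# generate_grading(80, 10) -> 'FM'  (exam mark > 75)
```

Note that with integer inputs confined to their valid ranges, total mark can never exceed 100 or fall below 0, so those two output partitions are only reachable through invalid inputs.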

Less obvious invalid input equivalence partitions include any other inputs that can occur but have not so far been included in a partition, for example non-integer or non-numeric inputs. So we can generate the following additional invalid input equivalence partitions:
- exam mark = real number (a number with a fractional part)
- exam mark = alphabetic
- coursework mark = real number
- coursework mark = alphabetic

Example (Total Equivalence Partitioning)
- 0 ≤ exam mark ≤ 75
- exam mark > 75
- exam mark < 0
- 0 ≤ coursework mark ≤ 25
- coursework mark > 25
- coursework mark < 0
- exam mark = real number
- exam mark = alphabetic
- coursework mark = real number
- coursework mark = alphabetic
- 70 ≤ total mark ≤ 100
- 50 ≤ total mark < 70
- 30 ≤ total mark < 50
- 0 ≤ total mark < 30
- total mark > 100
- total mark < 0

Example (Test Case Generation)
Two distinct approaches can be taken when generating the test cases. In the first, a test case is generated for each identified partition on a one-to-one basis, while in the second a minimal set of test cases is generated that covers all the identified partitions.

Example (One-to-One Basis)
Sixteen partitions were identified, leading to sixteen test cases. The test cases corresponding to partitions derived from the input exam mark are:

Note that the input coursework (c/w) mark has been set to an arbitrary valid value of 15. The test cases corresponding to partitions derived from the input coursework mark are:

Note that the input exam mark has been set to an arbitrary valid value of 40. The test cases corresponding to partitions derived from possible invalid inputs are:

The test cases corresponding to partitions derived from the valid outputs are:

The remaining test cases corresponding to partitions derived from the valid outputs are:
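Since the original test case tables are not reproduced here, the one-test-per-partition suite can be tabulated as data. The specific input values are illustrative choices, with the c/w mark held at 15 and the exam mark at 40 as stated in the notes above:

```python
# One test case per identified partition (a reconstruction of the missing
# tables; input values are illustrative, expected results follow the spec).
one_to_one_suite = [
    # (partition description,            exam,  c/w,  expected)
    ("0 <= exam <= 75 (valid)",            40,   15,  "B"),   # total 55
    ("exam > 75",                          80,   15,  "FM"),
    ("exam < 0",                          -10,   15,  "FM"),
    ("0 <= c/w <= 25 (valid)",             40,   10,  "B"),   # total 50
    ("c/w > 25",                           40,   30,  "FM"),
    ("c/w < 0",                            40,   -5,  "FM"),
    ("exam = real number",               44.5,   15,  "FM"),
    ("exam = alphabetic",                 "q",   15,  "FM"),
    ("c/w = real number",                  40, 12.5,  "FM"),
    ("c/w = alphabetic",                   40,  "g",  "FM"),
    ("70 <= total <= 100 -> 'A'",          60,   15,  "A"),   # total 75
    ("50 <= total < 70  -> 'B'",           40,   15,  "B"),   # total 55
    ("30 <= total < 50  -> 'C'",           20,   15,  "C"),   # total 35
    ("0 <= total < 30   -> 'D'",           10,    5,  "D"),   # total 15
    ("total > 100",                        90,   25,  "FM"),  # exam also invalid
    ("total < 0",                         -10,  -10,  "FM"),
]
assert len(one_to_one_suite) == 16
```

Sixteen partitions yield sixteen test cases, of which only six exercise a non-fault outcome.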

It can be seen above that several of the test cases are similar, such as test cases 1 and 14, where the main difference between them is the partition targeted. Each test case actually 'hits' three partitions: two input partitions and one output partition. It is possible to generate a smaller 'minimal' test set that still 'hits' all the identified partitions by deriving test cases designed to exercise more than one partition. The following suite of nine test cases corresponds to the minimized approach, where each test case is designed to hit as many new partitions as possible rather than just one.
Example (Minimal Set)
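A minimal covering suite can be derived mechanically. The sketch below (ignoring the four non-integer partitions for brevity, and restating the grading rules so it is self-contained) greedily keeps only test cases that hit a partition not yet covered:

```python
def hits(exam, cw):
    """Return the set of partitions hit by integer test case (exam, cw)."""
    h = set()
    h.add("exam valid" if 0 <= exam <= 75 else
          "exam > 75" if exam > 75 else "exam < 0")
    h.add("c/w valid" if 0 <= cw <= 25 else
          "c/w > 25" if cw > 25 else "c/w < 0")
    if 0 <= exam <= 75 and 0 <= cw <= 25:
        total = exam + cw
        h.add("A" if total >= 70 else "B" if total >= 50 else
              "C" if total >= 30 else "D")
    else:
        h.add("FM")
    return h

candidates = [(60, 15), (40, 15), (20, 15), (10, 5),
              (80, 10), (-10, 10), (40, 30), (40, -5)]

covered, suite = set(), []
for case in candidates:
    if hits(*case) - covered:        # keep only cases that add coverage
        suite.append(case)
        covered |= hits(*case)

# All 11 integer partitions (3 exam, 3 c/w, 5 output) end up covered.
```

Each kept case hits up to three partitions at once, which is exactly why the minimal suite is so much smaller than the one-to-one suite.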


