Unit Testing
Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing is often automated, but it can also be done manually. It is a core practice of Extreme Programming (XP), a pragmatic method of software development that takes a meticulous approach to building a product by means of continual testing and revision. Unit testing involves only those characteristics that are vital to the performance of the unit under test. This encourages developers to modify the source code without immediate concern about how such changes might affect the functioning of other units or the program as a whole. Once all of the units in a program have been found to work as efficiently and error-free as possible, larger components of the program can be evaluated by means of integration testing. Unit testing can be time-consuming and tedious: it demands patience and thoroughness from the development team, and rigorous documentation must be maintained. Unit testing must also be done with an awareness that it may not be possible to test a unit for every input scenario that will occur when the program is run in a real-world environment.
The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit is tested separately before the units are integrated into modules and the interfaces between those modules are tested. Unit testing has proven its value in that a large percentage of defects are identified during its use.
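For instance, a unit test isolates one small function and checks that it behaves exactly as expected. The following is a minimal sketch; the function and test names are invented for illustration only:

import unittest

def apply_discount(price, percent):
    """Unit under test: return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Because each test exercises exactly one behaviour of the unit, a failing test points directly at the defect rather than at some interaction between units.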
Stubs are replacements for missing components that the components being tested will call as part of the test.
During integration, if not all of the modules are ready and we need to test a particular module that is ready, we use stubs and drivers.
Stubs are used in top-down integration testing and drivers in bottom-up integration testing. For example, suppose we have modules X, Y, and Z. Module X is ready and needs to be tested, but it calls functions from Y and Z, which are not ready. To test X in isolation we write small dummy pieces of code that simulate Y and Z and return values to X; these pieces of dummy code are called stubs, and they are the called functions in top-down integration. Similarly, if Y and Z are ready but X is not, and we need to test Y and Z, which receive their inputs from X, we write a small dummy piece of code for X that supplies values to Y and Z; this piece of code is called a driver, and drivers are the calling functions in bottom-up integration.
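A minimal sketch of that X/Y/Z example in Python (the function names and return values are invented for illustration):

# Stubs (top-down): dummy stand-ins for Y and Z, which are not ready yet.
def stub_y():
    return 10   # canned value Y is expected to return

def stub_z():
    return 32   # canned value Z is expected to return

# X is ready and normally calls Y and Z; here it is wired to the stubs.
def module_x(get_y=stub_y, get_z=stub_z):
    return get_y() + get_z()

assert module_x() == 42   # X can be tested before Y and Z exist

# Driver (bottom-up): if Y and Z were ready but X were not, a small
# driver would play the missing caller's role and exercise Y and Z.
def driver_for_x(real_y, real_z):
    result = real_y() + real_z()
    assert result == 42   # check the values Y and Z hand back to X
    return result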
The most common approach to unit testing requires drivers and stubs to be written. The driver simulates a calling unit and the stub simulates a called unit. The investment of developer time in this activity sometimes results in demoting unit testing to a lower priority, and that is almost always a mistake. Even though the drivers and stubs cost time and money, unit testing provides some undeniable advantages. It allows for automation of the testing process, reduces the difficulty of discovering errors contained in more complex pieces of the application, and often enhances test coverage because attention is given to each unit. For example, if you have two units and decide it would be more cost effective to glue them together and initially test them as an integrated unit, an error could occur in a variety of places:
Is the error due to a defect in unit 1?
Is the error due to a defect in unit 2?
Is the error due to defects in both units?
Is the error due to a defect in the interface between the units?
Is the error due to a defect in the test?
Finding the error (or errors) in the integrated module is much more complicated than first isolating the units, testing each, then integrating them and testing the whole. Drivers and stubs can be reused so the constant changes that occur during the development cycle can be retested frequently without writing large amounts of additional test code. In effect, this reduces the cost of writing the drivers and stubs on a per-use basis and the cost of retesting is better controlled.
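As a small illustration of that sequence (the two units below are hypothetical), each unit is tested on its own first, so the combined test only has to exercise the interface between them:

import unittest

# Unit 1 and unit 2 are first tested in isolation, then together.
def parse_csv_row(row):
    return [field.strip() for field in row.split(",")]

def format_name(fields):
    first, last = fields
    return f"{last}, {first}"

class UnitTests(unittest.TestCase):
    def test_parser_alone(self):
        self.assertEqual(parse_csv_row(" Ada , Lovelace "), ["Ada", "Lovelace"])

    def test_formatter_alone(self):
        self.assertEqual(format_name(["Ada", "Lovelace"]), "Lovelace, Ada")

class InterfaceTest(unittest.TestCase):
    def test_parser_output_feeds_formatter(self):
        # Any failure here points at the interface, not at either unit.
        self.assertEqual(format_name(parse_csv_row("Ada,Lovelace")), "Lovelace, Ada")

if __name__ == "__main__":
    unittest.main()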
Integration Testing
Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together. Beyond that, if the program is composed of more than one process, the processes should be tested in pairs rather than all at once. Integration testing identifies problems that occur when units are combined. By using a test plan that requires you to test each unit and ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between units. This method reduces the number of possibilities to a far simpler level of analysis. You can do integration testing in a variety of ways, but the following are three common strategies:
The top-down approach to integration testing requires the highest-level modules to be tested and integrated first. This allows high-level logic and data flow to be tested early in the process and it tends to minimize the need for drivers. However, the need for stubs complicates test management, and low-level utilities are tested relatively late in the development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited functionality.

The bottom-up approach requires the lowest-level units to be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. The downside, however, is that the need for drivers complicates test management and high-level logic and data flow are tested late. Like the top-down approach, the bottom-up approach also provides poor support for early release of limited functionality.

The third approach, sometimes referred to as the umbrella approach, requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern discussed above. The outputs for each function are then integrated in the top-down manner. The primary advantage of this approach is the degree of support for early release of limited functionality. It also helps minimize the need for stubs and drivers. The potential weaknesses of this approach are significant, however, in that it can be less systematic than the other two approaches, leading to the need for more regression testing.
Regression Testing
Any time you modify an implementation within a program, you should also do regression testing. You can do so by rerunning existing tests against the modified code to determine whether the changes break anything that worked prior to the change, and by writing new tests where necessary. Adequate coverage without wasting time should be a primary consideration when conducting regression tests. Try to spend as little time as possible doing regression testing without reducing the probability that you will detect new failures in old, already tested code. Some strategies and factors to consider during this process include the following:
Test fixed bugs promptly. The programmer might have handled the symptoms but not have gotten to the underlying cause.
Watch for side effects of fixes. The bug itself might be fixed but the fix might create other bugs.
Write a regression test for each bug fixed (a sketch of such a test follows this list).
If two or more tests are similar, determine which is less effective and get rid of it.
Identify tests that the program consistently passes and archive them.
Focus on functional issues, not those related to design.
Make changes (small and large) to data and find any resulting corruption.
Trace the effects of the changes on program memory.
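The following is a hedged sketch of the "write a regression test for each bug fixed" rule; the bug number and the function are invented for illustration:

import unittest

def split_in_half(items):
    """Unit that once crashed on an empty list (hypothetical bug #231)."""
    middle = len(items) // 2
    return items[:middle], items[middle:]

class RegressionBug231Test(unittest.TestCase):
    """Guards the fix for hypothetical bug #231: empty input must not raise."""

    def test_empty_list_returns_two_empty_halves(self):
        self.assertEqual(split_in_half([]), ([], []))

    def test_boundary_single_element(self):
        # Boundary condition worth keeping in the regression library.
        self.assertEqual(split_in_half([7]), ([], [7]))

if __name__ == "__main__":
    unittest.main()

Keeping the bug's identifier in the test name and docstring makes it easier to review the library later and to decide when the test has outlived its usefulness.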
Building a Library
The most effective approach to regression testing is based on developing a library of tests made up of a standard battery of test cases that can be run every time you build a new version of the program. The most difficult aspect of building a library of test cases is determining which test cases to include. The most common suggestion from authorities in the field of software testing is to avoid spending excessive amounts of time trying to decide, and to err on the side of caution. Automated tests, as well as test cases involving boundary conditions and timing, almost certainly belong in your library. Some software development companies include only tests that have actually found bugs; the problem with that rationale is that the particular bug may have been found and fixed in the distant past. Periodically review the regression test library to eliminate redundant or unnecessary tests, roughly every third testing cycle. Duplication is quite common when more than one person is writing test code. A typical cause is the concentration of tests that develops when a bug, or variants of it, is particularly persistent and remains present across many cycles of testing: numerous tests might be written and added to the regression test library. These multiple tests are useful while fixing the bug, but once all traces of the bug and its variants are eliminated from the program, select the best of the tests associated with the bug and remove the rest from the library.

Top down Integration Testing

Top-down integration is an incremental approach to the construction of program structure. In top-down integration, the control hierarchy is identified first, that is, which module drives or controls which module. The main control module, the modules subordinate to it, and ultimately those subordinate to the main control block are integrated into some bigger structure. For integrating, a depth-first or breadth-first approach is used.
Fig. 9.6 Top-down integration

In the depth-first approach, all modules on a control path are integrated first. See Fig. 9.6. Here the sequence of integration would be (M1, M2, M3), M4, M5, M6, M7, and M8. In the breadth-first approach, all modules directly subordinate at each level are integrated together. Using breadth-first for Fig. 9.6, the sequence of integration would be (M1, M2, M8), (M3, M6), M4, M7, and M5. Another approach to integration is bottom-up integration, which we discuss next.
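The two orderings can be illustrated with a short script. The control hierarchy below is reconstructed from the integration sequences quoted above and is only an assumption; the actual Fig. 9.6 is not reproduced here:

from collections import deque

# Assumed control hierarchy: each module maps to the modules it controls.
hierarchy = {
    "M1": ["M2", "M8"],
    "M2": ["M3", "M6"],
    "M3": ["M4"], "M4": ["M5"], "M5": [],
    "M6": ["M7"], "M7": [], "M8": [],
}

def depth_first_order(root):
    # Integrate every module on one control path before starting the next.
    order, stack = [], [root]
    while stack:
        module = stack.pop()
        order.append(module)
        stack.extend(reversed(hierarchy[module]))
    return order

def breadth_first_order(root):
    # Integrate all modules directly subordinate at each level together.
    order, queue = [], deque([root])
    while queue:
        module = queue.popleft()
        order.append(module)
        queue.extend(hierarchy[module])
    return order

print(depth_first_order("M1"))    # M1, M2, M3, M4, M5, M6, M7, M8
print(breadth_first_order("M1"))  # M1, M2, M8, M3, M6, M4, M7, M5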
Bottom Up Integration: Bottom-up integration testing starts at the atomic module level. Atomic modules are the lowest levels in the program structure. Since modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available, so stubs are not required in this approach. A bottom-up integration is implemented with the following steps:
1. Low-level modules are combined into clusters that perform a specific software subfunction. These clusters are sometimes called builds.
2. A driver (a control program for testing) is written to coordinate test-case input and output (a sketch of such a driver follows this list).
3. The build is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
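A minimal sketch of step 2 (the utility functions and test cases are invented for illustration): the driver is just a small control program that feeds test-case input to a low-level cluster and checks its output, standing in for the higher-level module that will eventually call it:

# Low-level cluster (a "build"): two utility modules already implemented.
def normalize(text):
    return " ".join(text.split()).lower()

def word_count(text):
    return len(text.split())

def driver():
    """Driver: coordinates test-case input and output for the cluster
    until the real higher-level caller is integrated."""
    cases = [
        ("  Hello   WORLD ", "hello world", 2),
        ("one", "one", 1),
    ]
    for raw, expected_norm, expected_count in cases:
        cleaned = normalize(raw)
        assert cleaned == expected_norm, cleaned
        assert word_count(cleaned) == expected_count

if __name__ == "__main__":
    driver()
    print("cluster build passed")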
Fig. 9.7 (a) Program modules (b) Bottom-up integration applied to the program modules in (a)

Fig. 9.7 shows how bottom-up integration is done. Whenever a new module is added as part of integration testing, the program structure changes. There may be new data-flow paths, new I/O, or new control logic. These changes may cause problems with functions in already tested modules that were working fine previously. To detect these errors, regression testing is done. Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects in the program. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce undesirable behavior or additional errors. As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
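One way to keep the suite that small is sketched below; the tagging scheme and the error classes are assumptions made for illustration, not something prescribed by the text:

import unittest

# Map each regression test to the class of error it addresses
# (hypothetical categories for illustration).
ERROR_CLASSES = {
    "test_interface_field_order": "interface",
    "test_empty_input": "boundary",
    "test_large_input_timing": "performance",
}

class RegressionTests(unittest.TestCase):
    def test_interface_field_order(self):
        self.assertEqual(("Ada", "Lovelace")[::-1], ("Lovelace", "Ada"))

    def test_empty_input(self):
        self.assertEqual(len([]), 0)

    def test_large_input_timing(self):
        self.assertEqual(sum(range(1000)), 499500)

def subset_for(classes):
    """Build a suite holding only tests for the affected error classes."""
    suite = unittest.TestSuite()
    for name, error_class in ERROR_CLASSES.items():
        if error_class in classes:
            suite.addTest(RegressionTests(name))
    return suite

if __name__ == "__main__":
    # A change touched module interfaces, so rerun only those classes of tests.
    unittest.TextTestRunner().run(subset_for({"interface", "boundary"}))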
Advantages of the Top-down Approach:
a) This approach is advantageous if the major flaws occur toward the top of the program.
b) An early skeletal program allows demonstrations and boosts morale.
Disadvantages of the Top-down Approach:
a) Stub modules are essential.
b) Test conditions may be impossible, or very difficult, to create.
c) Observation of test output is more difficult, as only simulated values will be used initially. For the same reason, program correctness can be misleading.
3) Big Bang Approach: Once all the modules are ready after being tested individually, the approach of integrating them all at once at the end is known as the big bang approach.
Though the big bang approach seems advantageous when we construct independent modules concurrently, it is quite challenging and risky because we integrate all modules in a single step and test the resulting system. Locating interface errors, if any, becomes difficult here.

4) Hybrid Approach: To overcome the limitations, and to exploit the advantages, of the top-down and bottom-up approaches, a hybrid approach to testing is used. As the name suggests, it is a mixture of the two approaches. In this approach the system is viewed as three layers: the main target layer in the middle, another layer above the target layer, and a last layer below the target layer.
The top-down approach is used in the topmost layer and the bottom-up approach in the lowermost layer. The lowermost layer contains many general-purpose utility programs, which are helpful in verifying correctness at the beginning of testing. Testing converges on the middle target layer, which is selected on the basis of system characteristics and the structure of the code; this middle target layer contains the components that use the utilities.
Final decision on selecting an integration approach depends on system characteristics as well as on customer expectations. Sometimes the customer wants to see a working version of the application as soon as possible thereby forcing an integration approach aimed at producing a basic working system in the earlier stages of the testing process.