1.1 Motivation
Quality is one of the most important aspects of software products. If software does not work, it is not worth much. The drawbacks caused by faulty software can far outweigh the advantages gained from using it. Malfunctioning or difficult-to-use software can complicate daily life, and in life-critical systems faults may even cause loss of human lives. In highly competitive markets, quality may determine which software products succeed and which fail. Low-quality software products damage a firm's reputation and, unquestionably, its sales, and unhappy customers are more willing to switch to other software suppliers. For these reasons, organizations have to invest in the quality of their software products. Even high-quality software can fail in the market if it does not meet the customers' needs. At the beginning of a software project it is common that the customers' exact needs are unknown. This may lead to guessing the wanted features and to developing useless features, or in the worst case useless software, which should obviously be avoided.
New feature ideas usually arise when the customer understands the problem domain more thoroughly. This might be quite problematic if strict contractual agreements on the developed features exist. Even when it is contractually possible to add new features to the software, there might be a lot of rework before the features are ready for use.
Iterative and especially agile software processes have been introduced as a solution to changing requirements. The basic idea in iterative processes is to create the software in small steps. When software is developed in this way, the customers can try out the developed software, and based on the customers' feedback the development team can create features that are valuable for the customer. The most valuable features are developed first, allowing the customer to start using the software earlier than with a non-iterative development process. Iterative software development adds new challenges for software testing. In traditional software projects the main part of the testing is conducted at the end of the development project. With iterative and agile processes, however, the software should be tested in every iteration. If the customer uses the result of an iteration, at least all the major problems should be solved before the product can be delivered. In an ideal situation, each iteration outcome would be high-quality software.
In the agile methods the need for testing is understood, and there are development practices that are used to assure the quality of the software. Many of these practices are targeted at developers and are used to test that the code works as the developers intended. To also test that the features fulfill the customer's requirements, higher-level testing is needed. This higher-level testing is often called acceptance testing or customer testing. Customer input is needed to define these higher-level test cases to make sure that the customer's requirements are met. Because the software is developed in an iterative manner and there is continuous change, it would be beneficial to test all the features at least once during each iteration. Repeated testing is needed because the changes may have introduced defects. Manually testing all functionality after every change is not possible: it may be feasible at the beginning, but when the number of features grows, manual regression testing becomes harder and eventually impossible. This leads to a situation in which changes made late in the iteration may have caused faults that cannot be noticed in testing. And even if the faults could be noticed, developers may not be able to fix them during the iteration.
Test automation can be used to help the testing effort. Test automation means testing software with other software. When software and computers are used for testing, the test execution can be conducted much faster than manually. If the automated tests can be executed daily or even more often, the status of the developed software is continuously known. Problems can therefore be found faster, and the changes causing the problems can be pinpointed. That is why test automation is an integral part of agile software development. By automating the customer-defined acceptance tests, the test cases defining how the system should work from the customer's point of view can be executed often. This makes it possible to know the status of the software at any point of the development. In acceptance test-driven development this approach is taken even further: the acceptance tests are not only used for verifying that the system works but also for driving the system development. The customer-defined test cases are created before the implementation starts, and the goal of the implementation is then to develop software that passes all the acceptance test cases.
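As an illustration of this idea, the following minimal Python sketch (the OrderService class and the discount rule are invented for this example, not taken from the text) shows a customer-defined acceptance criterion written as an executable test before the corresponding feature is considered done.

import unittest

# Hypothetical production code under development; the acceptance tests
# below are written first and fail until this behaviour is implemented.
class OrderService:
    DISCOUNT_THRESHOLD = 100.0
    DISCOUNT_RATE = 0.10

    def total_price(self, item_prices):
        total = sum(item_prices)
        if total > self.DISCOUNT_THRESHOLD:
            total *= (1 - self.DISCOUNT_RATE)
        return round(total, 2)


class CustomerAcceptanceTests(unittest.TestCase):
    """Executable form of a customer-defined acceptance criterion."""

    def test_order_above_100_euro_gets_10_percent_discount(self):
        service = OrderService()
        self.assertEqual(service.total_price([60.0, 60.0]), 108.0)

    def test_order_of_100_euro_or_less_gets_no_discount(self):
        service = OrderService()
        self.assertEqual(service.total_price([40.0, 60.0]), 100.0)


if __name__ == "__main__":
    unittest.main()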
2 TRADITIONAL TESTING
In this chapter the traditional testing terminology and the divisions of different testing aspects are described. The purpose is to give an overall view of the testing field and to make it possible in the following chapters to compare agile testing to traditional testing and to place the research area in a wider context.
Testing is an integral part of software development. The goal of software testing is to find faults in the developed software and to make sure they get fixed (Kaner et al. 1999, Patton 2000). It is important to find the faults as early as possible, because fixing them is more expensive in the later phases of the development (Kaner et al. 1999, Patton 2000). The purpose of testing is also to provide information about the current state of the developed software from the quality perspective (Burnstein 2003). One might argue that software testing should make sure that the software works correctly. This is, however, impossible, because even a simple piece of software has millions of paths that should all be tested to make sure that it works correctly (Kaner et al. 1999).
Non-functional testing means testing the quality aspects of software. Examples of non-functional testing are performance, security, usability, portability, reliability, and memory management testing. Each kind of non-functional testing needs a different approach and a different kind of know-how and resources. The needed non-functional testing is always decided based on the quality attributes of the system and is therefore selected case by case. (Burnstein 2003)
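As a small illustration of one non-functional aspect, the following sketch (the handle_request function and the 50 ms budget are assumptions made for this example) checks a performance requirement as an automated test.

import time
import unittest

# Hypothetical function whose response time is a quality requirement.
def handle_request(payload):
    return {"echo": payload}


class PerformanceSmokeTest(unittest.TestCase):
    """Very small performance check: the call must stay under a time budget."""

    def test_request_is_handled_within_50_ms(self):
        start = time.perf_counter()
        handle_request({"id": 1})
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 0.050)


if __name__ == "__main__":
    unittest.main()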
There are two basic testing strategies: white-box testing and black-box testing. When the white-box strategy is used, the internal structure of the system under test is known. The purpose is to verify the correct behavior of internal structural elements. This can be done, for example, by exercising all the statements or all the conditional branches. Because white-box testing is quite time consuming, it is usually done for small parts of the system at a time. White-box testing methods are useful in finding design defects, code-based control, logic and sequence defects, initialization defects, and data flow defects. (Burnstein 2003)
In black-box testing the system under test is seen as an opaque box: there is no knowledge of the inner structure of the software, only of how the software should behave. The intention of black-box testing is to provide inputs to the system under test and verify that the system works as defined in the specifications. Because the black-box approach considers only the behavior and functionality of the system under test, it is also called functional testing. With the black-box strategy, requirement and specification defects are revealed. The black-box testing strategy can be used at all test levels defined in the following chapter. (Burnstein 2003)
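The difference between the two strategies can be illustrated with the following minimal Python sketch (the classify_temperature function is invented for this example): the black-box test is derived only from the specification, while the white-box test is chosen by inspecting the code so that both branches are exercised.

import unittest

def classify_temperature(celsius):
    """Specification: return 'freezing' below 0, otherwise 'above freezing'."""
    if celsius < 0:
        return "freezing"
    return "above freezing"


class BlackBoxTests(unittest.TestCase):
    # Derived only from the specification, without looking at the code.
    def test_specified_behaviour(self):
        self.assertEqual(classify_temperature(-5), "freezing")
        self.assertEqual(classify_temperature(20), "above freezing")


class WhiteBoxTests(unittest.TestCase):
    # Chosen by inspecting the code so that both branches of the
    # conditional are executed, including the boundary value 0.
    def test_both_branches_and_boundary(self):
        self.assertEqual(classify_temperature(-1), "freezing")
        self.assertEqual(classify_temperature(0), "above freezing")


if __name__ == "__main__":
    unittest.main()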
UNIT TESTING The smallest part of software is a unit. A unit is traditionally viewed as a function or a procedure in an (imperative) programming language. In object-oriented systems, methods and classes/objects can be seen as units. A unit can also be a small component or a programming library. The principal goal of unit testing is to detect functional and structural defects in the unit. Sometimes the name component is used instead of unit; in that case this phase is called component testing. (Burnstein 2003)
There are different opinions about who should create the unit tests. Unit testing is in most cases best handled by the developers, who know the code under test and the techniques needed (Dustin et al. 1999; Craig & Jaskiel 2002; Mosley & Posey 2002). On the other hand, Burnstein (2003) argues that an independent tester should plan and execute the unit tests. The latter is the more traditional point of view, holding that nobody should evaluate their own work.
Unit testing can be started in an early phase of software development, as soon as the unit has been created. The failures revealed by unit tests are usually easy to locate and repair, since only one unit is under consideration (Burnstein 2003). For these reasons, finding and fixing defects is cheapest at the unit test level.
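A minimal sketch of a unit test, assuming a hypothetical word_count function as the unit under test:

import unittest

# Hypothetical unit under test: a single, small function.
def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())


class WordCountUnitTest(unittest.TestCase):
    """Unit tests exercise one unit in isolation, so failures are easy to locate."""

    def test_empty_string_has_zero_words(self):
        self.assertEqual(word_count(""), 0)

    def test_normal_sentence(self):
        self.assertEqual(word_count("unit tests find defects early"), 5)


if __name__ == "__main__":
    unittest.main()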
INTEGRATION TESTING When units are combined, the resulting group of units is called a subsystem or, sometimes in object-oriented software systems, a cluster. The goal of integration testing is to verify that the component/class interfaces work correctly and that the control and data flows between the components work correctly. (Burnstein 2003)
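A minimal sketch of an integration test, assuming two hypothetical units (PriceCalculator and InvoiceFormatter) whose interface and data flow are verified together:

import unittest

# Two hypothetical units whose interface is being integrated.
class PriceCalculator:
    def net_price(self, gross, vat_rate=0.24):
        return round(gross / (1 + vat_rate), 2)


class InvoiceFormatter:
    def __init__(self, calculator):
        self.calculator = calculator

    def line(self, gross):
        # Data flows across the component interface here.
        return "net: %.2f" % self.calculator.net_price(gross)


class InvoiceIntegrationTest(unittest.TestCase):
    """Verifies that the two units co-operate correctly through their interface."""

    def test_formatter_uses_calculator_result(self):
        formatter = InvoiceFormatter(PriceCalculator())
        self.assertEqual(formatter.line(124.0), "net: 100.00")


if __name__ == "__main__":
    unittest.main()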
SYSTEM TESTING When the ready and tested subsystems are combined into the final system, system test execution can be started. System tests evaluate both the functional behavior and the non-functional qualities of the system. The goal is to ensure that the system performs according to its requirements when tested as a whole. After system testing and the corrections based on the found faults are done, the system is ready for the customer's acceptance testing, alpha testing, or beta testing (see the next paragraph). If the customer has defined acceptance tests, those can be used in the system testing phase to assure the quality of the system from the customer's point of view. (Burnstein 2003)
ACCEPTANCE TESTING When a software product is custom-made, the customer wants to verify that the developed software meets her requirements. This verification is done in the acceptance testing phase. The acceptance tests are developed in co-operation between the customer and the test planners and are executed after the system testing phase. The purpose is to evaluate the software in terms of the customer's expectations and goals. When the acceptance testing phase is passed, the product is ready for production. If the product is targeted at a mass market, it is often not possible to arrange customer-specific acceptance testing. In these cases the acceptance testing is conducted in two phases called alpha and beta testing. In alpha testing, potential customers and members of the development organization test the product on the development organization's premises. After the defects found in alpha testing are fixed, beta testing can be started: the product is sent to a cross-section of users who use it in a real-world environment and report the defects they find. (Burnstein 2003)
REGRESSION TESTING The purpose of regression testing is to ensure that the old characteristics still work after changes have been made to the software and to verify that the changes have not introduced new defects. Regression testing is not a test level as such, and it can be performed at all test levels. The importance of regression testing increases when the system is released multiple times. The functionality provided in the previous version should still work alongside all the new functionality, and verifying this is very time consuming. Therefore it is recommended to use automated testing tools to support this task (Burnstein 2003). Kaner et al. (1999) have also noted that it is common to automate acceptance and regression tests to quickly verify the status of the latest build.
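As a sketch of such tool support (the tests/ directory layout is an assumption for this example), an automated regression run can simply re-execute the whole test suite after every change:

import unittest

# Re-runs every test module found under the tests/ directory, which is
# the kind of automated regression run described above; the directory
# name and file pattern are assumptions for this sketch.
def run_regression_suite():
    suite = unittest.defaultTestLoader.discover(start_dir="tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()


if __name__ == "__main__":
    raise SystemExit(0 if run_regression_suite() else 1)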
Large-scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps:
1. Choose a high-level test plan, where a general methodology is chosen and resources such as people, computers, and software licenses are identified and acquired.
2. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
3. Assign the test cases to testers, who manually follow the steps and record the results.
4. Author a test report detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems.
A rigorous test-case-based approach is often traditional for large software engineering projects that follow a waterfall model. However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test-case-based testing.
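To make this test-case-based workflow concrete, the following minimal Python sketch (not part of the original process description; all names and values are illustrative) models a written test case (step 2), a result recorded by a tester (step 3), and a simple summary that could feed a test report (step 4).

from dataclasses import dataclass
from typing import List

@dataclass
class ManualTestCase:
    case_id: str
    steps: List[str]
    expected_outcome: str

@dataclass
class TestResult:
    case: ManualTestCase
    tester: str
    passed: bool
    notes: str = ""

def summarize(results: List[TestResult]) -> str:
    # Counts failures so a report can state whether the build is releasable.
    failed = sum(1 for r in results if not r.passed)
    return "%d executed, %d failed" % (len(results), failed)

# Example usage with an illustrative test case and result.
login_case = ManualTestCase(
    case_id="TC-001",
    steps=["Open the login page", "Enter valid credentials", "Press Login"],
    expected_outcome="The main view is shown",
)
print(summarize([TestResult(case=login_case, tester="A. Tester", passed=True)]))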
Time saving: Running unattended automated test scripts saves human time as well as machine time compared with executing the scripts manually.
Better use of people: While automated scripts are running unattended on machines, testers can work on more useful tasks.
Cost saving: On test engagements requiring a lot of regression testing, automated testing reduces the number of people and the time required to complete the engagement, and thus helps reduce the costs.
4 Automation approaches
4.3 Data-driven approach
Application input data are extracted from the test script for maintainability and for expansion of scope. Validation data are maintained separately and are compared with the actual results produced for the corresponding input data. The data are stored in flat files, spreadsheets, or databases, depending on the data complexity and the extent to which the test scripts need to access them.
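A minimal sketch of this data-driven idea, assuming a hypothetical discounted_price function under test and a flat file named discount_cases.csv with the columns order_total and expected_price:

import csv
import unittest

# Hypothetical function under test.
def discounted_price(order_total):
    return round(order_total * 0.9, 2) if order_total > 100 else order_total


class DataDrivenDiscountTest(unittest.TestCase):
    """Input and validation data live in a flat file, not in the test script."""

    def test_all_rows_from_data_file(self):
        with open("discount_cases.csv", newline="") as handle:
            for row in csv.DictReader(handle):
                with self.subTest(order_total=row["order_total"]):
                    actual = discounted_price(float(row["order_total"]))
                    self.assertEqual(actual, float(row["expected_price"]))


if __name__ == "__main__":
    unittest.main()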
6 Conclusion
Automation of software testing is needed for test reuse, better use of people, cost saving, and resource saving. Other approaches to automated testing, such as the record/replay tool approach and the manual approach, are painful because they offer little reusability and do not withstand changes, which makes them unreliable; they also do not meet all technical and business requirements. Our test-case-driven approach, by contrast, may be painless and may meet all business as well as technical requirements. This can be derived and demonstrated after data gathering by drawing graphs of return on investment (ROI) and level of effort (LOE).
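As an illustration of how such an ROI figure could be derived once effort data have been gathered (all numbers and the simple formula below are placeholders for this sketch, not measured values):

# Placeholder effort figures (hours); real values would come from data gathering.
def automation_roi(manual_hours_per_run, runs, build_hours, maintain_hours_per_run):
    manual_cost = manual_hours_per_run * runs
    automated_cost = build_hours + maintain_hours_per_run * runs
    # ROI expressed as savings relative to the automation effort invested.
    return (manual_cost - automated_cost) / automated_cost

# Example: 8 h of manual regression per run, 30 runs, 80 h to build the
# automation, 1 h of maintenance per run.
print("ROI: %.2f" % automation_roi(8, 30, 80, 1))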