Software Testing Fundamentals
WHAT IS SOFTWARE TESTING?
Software testing is the process of verifying and validating that a software application or program (1) meets the business and technical requirements that guided its design and development, and (2) works as expected. Software testing also identifies important defects, flaws, or errors in the application code that must be fixed. The modifier "important" in the previous sentence is, well, important, because defects must be categorized by severity.

Software testing has three main purposes: verification, validation, and defect finding.

The verification process confirms that the software meets its technical specifications. A specification is a description of a function in terms of a measurable output value given a specific input value under specific preconditions. A simple specification may be along the lines of "A SQL query retrieving data for a single account against the multi-month account-summary table must return these eight fields <list>, ordered by month, within 3 seconds of submission." (A sketch of a test for such a specification appears at the end of this section.)

The validation process confirms that the software meets the business requirements. A simple example of a business requirement is "After choosing a branch office name, information about the branch's customer account managers will appear in a new window. The window will present manager identification and summary information about each manager's customer base: <list of data elements>." Other requirements provide details on how the data will be summarized, formatted, and displayed.

A defect is a variance between the expected and actual result. The defect's ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.

Software testing answers questions that development testing and code reviews can't:
Does it really work as expected?
Does it meet the users' requirements?
Is it what the users expect?
Do the users like it?
Is it compatible with our other systems?
How does it perform?
How does it scale when more users are added?
Which areas need more work?
Is it ready for release?

What can we do with the answers to these questions?
Save time and money by identifying defects early
Avoid or reduce development downtime
Provide better customer service by building a better application
Know that we've satisfied our users' requirements
Build a list of desired modifications and enhancements for later versions
Identify and catalog reusable modules and components
Identify areas where programmers and developers need training

Testing can involve some or all of the following factors:
Business requirements
Functional design requirements
Technical design requirements
Code
Hardware configuration
Cultural issues and language differences
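To illustrate how a specification like the sample above can be verified as a measurable output for a specific input under specific preconditions, here is a minimal sketch in Python using the standard unittest and sqlite3 modules. The table name, the eight field names, and the use of an in-memory SQLite database are assumptions made for the example; only the "eight fields, ordered by month, within 3 seconds" requirement comes from the sample specification.

```python
import sqlite3
import time
import unittest

# Hypothetical field list and table name; the real specification would supply these.
EXPECTED_FIELDS = [
    "account_id", "month", "opening_balance", "closing_balance",
    "total_credits", "total_debits", "fee_total", "interest_paid",
]

class AccountSummaryQueryTest(unittest.TestCase):
    """Verification sketch: a measurable output (fields, order, elapsed time)
    for a specific input (one account id) under specific preconditions
    (a populated account-summary table)."""

    def setUp(self):
        # Precondition: an in-memory stand-in for the multi-month summary table.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE account_summary (%s)" % ", ".join(EXPECTED_FIELDS)
        )
        rows = [("ACCT-1", m, 0, 0, 0, 0, 0, 0) for m in (1, 2, 3)]
        self.conn.executemany(
            "INSERT INTO account_summary VALUES (?, ?, ?, ?, ?, ?, ?, ?)", rows
        )

    def tearDown(self):
        self.conn.close()

    def test_single_account_summary(self):
        start = time.monotonic()
        cursor = self.conn.execute(
            "SELECT * FROM account_summary WHERE account_id = ? ORDER BY month",
            ("ACCT-1",),
        )
        result = cursor.fetchall()
        elapsed = time.monotonic() - start

        # The eight specified fields, in the specified order.
        self.assertEqual([d[0] for d in cursor.description], EXPECTED_FIELDS)
        # Rows ordered by month.
        self.assertEqual([r[1] for r in result], sorted(r[1] for r in result))
        # Response-time requirement taken from the sample specification.
        self.assertLess(elapsed, 3.0)

if __name__ == "__main__":
    unittest.main()
```

Validation of the business requirement in the second example would work the same way in principle, but against the displayed window and its data elements rather than a query result.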
THE V-MODEL OF SOFTWARE TESTING
In a diagram of the V-Model, the V proceeds down and then up, from left to right, depicting the basic sequence of development and testing activities. The model highlights the existence of different levels of testing and depicts the way each relates to a different development phase.
THE TEST PLAN
The test plan is a mandatory document. You can't test without one.
Items Covered by a Test Plan

Responsibilities -- The specific people who are involved and their assignments. Purpose: assigns responsibilities and keeps everyone on track and focused.
Assumptions -- Code and systems status and availability. Purpose: avoids misunderstandings about schedules.
Test -- Testing scope, schedule, duration, and prioritization. Purpose: outlines the entire process and maps specific tests.
Communication -- The communications plan: who, what, when, and how. Purpose: everyone knows what they need to know when they need to know it.
Risk Analysis -- Critical items that will be tested. Purpose: provides focus by identifying areas that are critical for success.
Defect Reporting -- How defects will be logged and documented. Purpose: tells how to document a defect so that it can be reproduced, fixed, and retested.
Environment -- The technical environment, data, work area, and interfaces used in testing. Purpose: reduces or eliminates misunderstandings and sources of potential delay.
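As a hedged illustration only, the items above could be captured in a simple structure so a team can review or report on its plan programmatically. The field names below simply mirror the list above and are not taken from any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Illustrative container for the test-plan items listed above."""
    responsibilities: dict[str, str] = field(default_factory=dict)  # person -> assignment
    assumptions: list[str] = field(default_factory=list)            # code/system status and availability
    test_scope: str = ""                                            # scope, schedule, duration, prioritization
    communication: str = ""                                         # who, what, when, and how
    risk_analysis: list[str] = field(default_factory=list)          # critical items that will be tested
    defect_reporting: str = ""                                      # how defects are logged and documented
    environment: str = ""                                           # technical environment, data, work area, interfaces
```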
Defect Categories:
1. Show Stopper -- It is impossible to continue testing because of the severity of the defect.
2. Critical -- Testing can continue, but the application cannot be released into production until this defect is fixed.
3. Major -- Testing can continue, but this defect will result in a severe departure from the business requirements if released to production.
4. Medium -- Testing can continue, and the defect will cause only a minimal departure from the business requirements when in production.
5. Minor -- Testing can continue, and the defect will not affect the release into production. The defect should be corrected, but little or no change to the business requirements is envisaged.
6. Cosmetic -- Minor cosmetic issues like colors, fonts, and pitch size that do not affect testing or the production release. If, however, these features are important business requirements, then they will receive a higher severity level.
Severity means the intensity of the defect.
Priority means how soon the defect needs to be fixed.
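As a minimal sketch of how a defect log might encode these categories, the following Python fragment records severity and priority as separate attributes of a defect record. The priority names and numeric ranks are illustrative assumptions, not part of the categories listed above.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    """Intensity of the defect, following the categories above."""
    SHOW_STOPPER = 1   # testing cannot continue
    CRITICAL = 2       # cannot be released to production until fixed
    MAJOR = 3          # severe departure from business requirements
    MEDIUM = 4         # minimal departure from business requirements
    MINOR = 5          # does not affect the production release
    COSMETIC = 6       # colors, fonts, pitch size

class Priority(IntEnum):
    """How soon the defect needs to be fixed (illustrative values)."""
    IMMEDIATE = 1
    HIGH = 2
    NORMAL = 3
    LOW = 4

@dataclass
class Defect:
    summary: str
    severity: Severity            # intensity of the defect
    priority: Priority            # how soon it needs to be fixed
    steps_to_reproduce: str = ""  # supports "reproduced, fixed, and retested"

# Example: severe enough to block the release, but testing can continue.
login_defect = Defect(
    summary="Login fails for accounts created after 2020",
    severity=Severity.CRITICAL,
    priority=Priority.HIGH,
    steps_to_reproduce="1. Create account  2. Log out  3. Attempt login",
)
```

Note that the two attributes are independent: a cosmetic defect on a flagship screen may carry a high priority, while a major defect in a rarely used feature may be scheduled for a later fix.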
TYPES OF SOFTWARE TESTS
The V-Model of testing identifies five software testing phases, each with a certain type of test associated with it.
Each testing phase and each individual test should have specific entry criteria that must be met before testing can begin and specific exit criteria that must be met before the test or phase can be certified as successful. The entry and exit criteria are defined by the Test Coordinators and listed in the Test Plan.

UNIT TESTING
A series of stand-alone tests are conducted during Unit Testing. Each test examines an individual component that is new or has been modified. A unit test is also called a module test because it tests the individual units of code that comprise the application. (A minimal unit-test sketch appears at the end of this section.)

SYSTEM TESTING
System Testing tests all components and modules that are new, changed, affected by a change, or needed to form the complete application. The system test may require involvement of other systems, but this should be minimized as much as possible to reduce the risk of externally induced problems. Testing the interaction with other parts of the complete system comes in Integration Testing. The emphasis in system testing is on validating and verifying the functional design specification and seeing how all the modules work together. For example, the system test for a new web interface that collects user input for addition to a database doesn't need to include the database's ETL application: processing can stop when the data is moved to the data staging area, if there is one. The first system test is often a smoke test, an informal, quick-and-dirty run-through of the application's major functions without bothering with details.

INTEGRATION TESTING
Integration testing examines all the components and modules that are new, changed, affected by a change, or needed to form a complete system. Where system testing tries to minimize outside factors, integration testing requires involvement of other systems and interfaces with other applications, including those owned by an outside vendor, external partners, or the customer.

REGRESSION TESTING
Regression testing is also known as validation testing and provides a consistent, repeatable validation of each change to an application under development or being modified. Each time a defect is fixed, the potential exists to inadvertently introduce new errors, problems, and defects. An element of uncertainty is introduced about the ability of the application to repeat everything that went right up to the point of failure. Regression testing is the selective retesting of an application or system that has been modified to ensure that no previously working components, functions, or features fail as a result of the repairs.

USER ACCEPTANCE TESTING (UAT)
User Acceptance Testing is also called Beta testing, application testing, and end-user testing.
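To make the unit-testing idea concrete, here is a minimal sketch in Python using the standard unittest module. The function under test, summarize_balance, is hypothetical and invented purely for illustration; the entry and exit criteria mentioned in the comments would come from the project's own Test Plan. Re-running a suite like this after every fix is one simple way to support regression testing: previously passing tests must keep passing.

```python
import unittest

# Hypothetical unit under test: a single new or modified function.
def summarize_balance(transactions):
    """Return (total_credits, total_debits) for a list of signed amounts."""
    credits = sum(t for t in transactions if t > 0)
    debits = sum(-t for t in transactions if t < 0)
    return credits, debits

class SummarizeBalanceTest(unittest.TestCase):
    """Stand-alone unit (module) test: exercises one component in isolation.
    An entry criterion might be "code compiles and is checked in"; an exit
    criterion might be "all assertions pass" -- the real criteria are defined
    by the Test Coordinators in the Test Plan."""

    def test_mixed_transactions(self):
        self.assertEqual(summarize_balance([100.0, -40.0, 25.0]), (125.0, 40.0))

    def test_empty_input(self):
        self.assertEqual(summarize_balance([]), (0, 0))

if __name__ == "__main__":
    unittest.main()
```

System, integration, and acceptance tests are rarely this compact, but the same pattern of explicit, repeatable checks against defined criteria carries through each phase.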