SDLC Manual
TOPICS
SDLC (different phases)
Types of SDLC models
Role of Testing in SDLC
Key differences between QA, QC and testing
Testing life cycle (Test Analysis, Test Planning/Test Design, Test Execution)
Types of Testing
Roles and responsibilities of a QA Analyst/QA Tester
Manual testing
   a. Test Plan
   b. Test Strategy
   k. Issue Logging process
Test Director
Test case writing workshop in Test Director
Introduction to Rational ClearQuest
Introduction to Rational ClearCase
Introduction to Rational RequisitePro
Automated testing
   a. QuickTest Pro 6.5, QTP 8.0
   b. Practice test to record and play back on QTP and WinRunner
   c. Differences between WinRunner and QTP
   d. Introduction to LoadRunner
Preparing resumes and interviewing tips
Systems analysis, requirements definition: Refines project goals into defined functions and operations of the intended application. Analyzes end-user information needs.
Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudo code and other documentation.
Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform and more.
HLD means High-Level Design; LLD means Low-Level Design. HLD refers to the functionality to be achieved to meet the client requirement. Precisely speaking, it is a diagrammatic representation of the client's operational systems, staging areas, data warehouse and data marts, and of how and at what frequency the data is extracted and loaded into the target database. LLD is prepared for every mapping along with a unit test plan. It contains the names of source definitions, target definitions, transformations used, column names, data types, business logic written and the source-to-target field mapping.
To manage this, a number of system development life cycle (SDLC) models have been created: waterfall, spiral, rapid prototyping, RUP (Rational Unified Process) and incremental etc.
[Figure: waterfall life cycle]
Spiral model - The spiral model emphasizes the need to go back and reiterate earlier stages a number of times as the project progresses. It is actually a series of short waterfall cycles, each producing an early prototype representing a part of the entire project. This approach helps demonstrate a proof of concept early in the cycle, and it more accurately reflects the disorderly, even chaotic evolution of technology.
Rapid prototyping - In the rapid prototyping (sometimes called rapid application development) model, initial emphasis is on creating a prototype that looks and acts like the desired product in order to test its usefulness. The prototype is an essential part of the requirements determination phase, and may be created using tools different from those used for the final product. Once the prototype is approved, it is discarded and the "real" software is written.
Incremental - The incremental model divides the product into builds, where sections of the project are created and tested separately. This approach will likely find errors in user requirements quickly, since user feedback is solicited for each stage and because code is tested sooner after it is written.
Iterative - Iterative models by definition have an iterative component to the systems development. They allow the developer to take a small segment of the application and develop it in a fashion that, at each recursion, the application is improved. Each of the three main sections: requirements
Rational Unified Process - In its simplest form, RUP consists of these fundamental workflows:
- Business Engineering: understanding the needs of the business.
- Requirements: translating business needs into the behaviors of an automated system.
- Analysis and Design: translating requirements into software architecture.
- Implementation: creating software that fits within the architecture and has the required behaviors.
- Test: ensuring that the required behaviors are correct, and that all required behaviors are present.
- Configuration and Change Management: keeping track of all the different versions of all the work products.
- Project Management: managing schedules and resources.
- Environment: setting up and maintaining the development environment.
- Deployment: everything needed to roll out the project.
Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance.
Transition phase: In this phase, the designed system/software is ready to roll out.
Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.
Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.
Beta Testing
Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.
Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.
Compatibility Testing
Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
Functional Testing
Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.
Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.
Installation Testing
Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs.
Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between them.
Testing life cycle (Test Analysis, Test planning/Test Design, Test execution)
Similar to the system development life cycle, testing also has a life cycle. As testing is a part of the SDLC, some of the testing phases are a combination of two different SDLC phases. The testing life cycle has three different phases, viz. the Test Analysis phase, the Test Planning/Test Design phase and the Test Execution phase.
Test Analysis phase: In this phase, a tester needs to get an understanding of the project.
Test Design phase: In this phase, a tester needs to design the test cases based on the requirements and use cases.
Test Execution phase: In this phase, a tester needs to execute the test cases written by him/her or any other resource and raise defects, if any.
Types of Testing
Roles and responsibilities of a QA Analyst/QA Tester
The QA Tester will follow a documented test plan and be responsible for reporting clear and very detailed bug reports. The QA Tester will work with fellow testers and the Senior Tester to ensure that quality testing and bug reporting are maintained throughout the product's entire testing process. The QA Analyst will be required to:
- Analyze user requirements
- Understand and document procedures
- Develop, publish and implement test plans
- Write and maintain test cases
- Create and maintain test data
- Clearly document and re-verify all defects
- Document test results
- Analyze the requirements for multi-tier architected web-commerce applications from industry standard design documents
- Develop high-level test design and planning documentation
- Design, code, test, and execute test case scenarios
- Analyze test results, leading to defect isolation and resolution
Responsibilities can vary for a QA Analyst position depending on the job requirement, but these are the major ones.
Manual testing
a. Test Plan
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
Contents of a Test Plan
- Title
- Identification of software including version/release numbers
- Revision history of document including authors, dates, approvals
- Table of Contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact-info/responsibilities
- Test organization and personnel/contact-info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
- Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment validity analysis - differences between the test and production systems and their impact on test validity
- Test environment setup and configuration issues
- Software migration processes
- Software CM processes
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
b. Test Strategy
A test strategy describes how we plan to cover the product so as to develop an adequate assessment of quality. A good test strategy is specific, practical and justified. The purpose of a test strategy is to clarify the major tasks and challenges of the test project. "Test Approach" and "Test Architecture" are other terms commonly used to describe what I'm calling test strategy. It describes what kind of testing needs to be done for a project, for example user acceptance testing, functional testing, load testing, performance testing, etc.
c. Test Cases
i.
ii. Writing test cases
A test case is developed based on the high-level scenarios, which are in turn developed from the requirements. So, every requirement must have at least one test case, and that test case needs to be wholly concentrated on the requirement. For example, take yahoomail.com: the requirement says that the username can accept alphanumeric characters. So, test cases must be written to check different combinations, such as testing with only alphabets, only numerics and alphanumeric characters, and the test data you give is different for each combination. Like this, we can write any number of test cases, but optimization of these test cases is important: decide exactly which test cases we need and which we do not.
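As a minimal sketch of the example above (in Python, with a hypothetical validate_username() rule that accepts alphanumeric characters only, as the requirement states), each combination becomes one test case with its own test data and expected result:

def validate_username(username: str) -> bool:
    # Hypothetical rule from the requirement: non-empty, alphanumeric only.
    return username.isalnum()

# One test case per input combination derived from the requirement:
# (test data, expected result)
combinations = [
    ("tester",    True),   # only alphabets
    ("12345",     True),   # only numerics
    ("tester123", True),   # alphanumeric
    ("tester!",   False),  # special character, should be rejected
    ("",          False),  # empty input, should be rejected
]

for data, expected in combinations:
    actual = validate_username(data)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: validate_username({data!r}) -> {actual} (expected {expected})")

Optimizing here means keeping one representative test case per combination rather than one for every possible input.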
[Figure: requirement-to-test-case decomposition]
Requirement 1
  HLS1 -> Test Case 1
  HLS2 -> Test Case 2, Test Case 3
iii. Test execution
Once all the test cases are written, they need to be executed. Execution starts only after the testing team receives the build from development. The build is nothing but the new code, which has been developed as per the project requirements. This build is tested thoroughly by executing all combinations of these test cases. Do not assume that we write test cases only after development is ready with the build; development and testing have to go in parallel. Remember, test designing is done purely on the available valid documentation. While executing test cases, there will always be a possibility that the expected result varies from the actual result. In this case, it is a defect/bug. A defect needs to be raised against the development team, and this defect needs to be resolved as soon as possible based on the schedule of the project. A test case is identified by an ID number and prioritized. Each test case has the following criteria (see the sketch after this list):
- Purpose: the reason for the test case
- Steps: a logical sequence of steps the tester must follow to execute the test case
- Expected Result: the expected result of the test case
- Actual Result: what actually happened when the test case was executed
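As a minimal sketch (in Python, with hypothetical field names and test data), a test case record carrying these criteria can be compared after execution, and a mismatch between expected and actual results signals a defect:

from dataclasses import dataclass

@dataclass
class TestCase:
    # The criteria listed above, plus an ID number and priority.
    case_id: str
    priority: int
    purpose: str
    steps: list[str]
    expected_result: str
    actual_result: str = ""  # filled in during test execution

    def passed(self) -> bool:
        # A defect is raised whenever expected and actual results differ.
        return self.actual_result == self.expected_result

tc = TestCase(
    case_id="TC-001",
    priority=1,
    purpose="Verify login accepts an alphanumeric username",
    steps=["Open the login page", "Enter 'tester123'", "Click Submit"],
    expected_result="User is logged in",
)
tc.actual_result = "User is logged in"  # recorded while executing the build
print("PASS" if tc.passed() else "FAIL: raise a defect against development")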
d. QA Matrix
A QA matrix is used to assess how many requirements have been tested and have test cases written for them. It is in the form of an Excel sheet, which shows whether a requirement is covered or not. If we miss a requirement that needs to be tested, then we are not testing 100% of the product, so there is a chance of defects arising when the product is rolled out into production. A sample QA matrix follows.
Test Description | Test Cases/Samples | Pass/Fail | No. of Bugs | Bug# | Comments
Setup for [Product Name]
Test that file types supported by the program can be opened | 1.1 | P/F | # | # |
Verify all the different ways to open a file (mouse, keyboard and accelerated keys) | 1.2 | P/F | # | # |
Verify files can be opened from the local drives as well as the network | 1.3 | P/F | # | # |
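The coverage check a QA matrix provides can also be sketched in a few lines of Python (the requirement and test case IDs here are hypothetical); it flags exactly the gap the matrix is meant to expose, a requirement with no test case:

# Map each requirement to the test cases that cover it.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
matrix = {
    "REQ-1": ["1.1"],
    "REQ-2": ["1.2", "1.3"],
    # REQ-3 has no test cases yet: a coverage gap
}

for req in requirements:
    cases = matrix.get(req, [])
    label = "covered" if cases else "NOT COVERED"
    print(f"{req}: {label} ({', '.join(cases) or 'no test cases'})")

covered = sum(1 for r in requirements if matrix.get(r))
print(f"Requirement coverage: {covered / len(requirements):.0%}")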
e. Defect life cycle
i. Raise defects
[Figure: defect life cycle]
The figure above displays the defect life cycle. Defects need not arise during testing only; they can arise at any stage of the SDLC. Whenever the expected result of a test case doesn't match the actual result, a defect is raised; in the figure this is shown as "create".
ii. Defect statuses
- Open Defects: the list of defects remaining in the defect tracking system with a status of Open. Technical Support has access to the system, so a report noting the defect ID, the problem area, and the title should be sufficient.
- Cancelled Defects: the list of defects remaining in the defect tracking system with a status of Cancelled.
- Pending Defects: the list of defects remaining in the defect tracking system with a status of Pending. Pending refers to any defect waiting on a decision from a technical product manager before a developer addresses the problem.
- Fixed Defects: the list of defects waiting for verification by QA.
- Closed Defects: the list of defects verified as fixed by QA during the project cycle.
Once the defects are resolved, the test cases that failed initially need to be re-executed and checked for any new defects.
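As a small sketch of these statuses (in Python; the allowed transitions below are assumptions for illustration, since real tools such as Test Director or ClearQuest define their own workflow):

from enum import Enum

class DefectStatus(Enum):
    OPEN = "Open"
    CANCELLED = "Cancelled"
    PENDING = "Pending"
    FIXED = "Fixed"
    CLOSED = "Closed"

# Assumed transitions, for illustration only.
ALLOWED = {
    DefectStatus.OPEN:    {DefectStatus.PENDING, DefectStatus.FIXED, DefectStatus.CANCELLED},
    DefectStatus.PENDING: {DefectStatus.OPEN, DefectStatus.CANCELLED},
    DefectStatus.FIXED:   {DefectStatus.CLOSED, DefectStatus.OPEN},  # reopened if retest fails
}

def move(current: DefectStatus, target: DefectStatus) -> DefectStatus:
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"Cannot move defect from {current.value} to {target.value}")
    return target

status = DefectStatus.OPEN
status = move(status, DefectStatus.FIXED)   # developer resolves the defect
status = move(status, DefectStatus.CLOSED)  # QA verifies the fix and closes
print(status.value)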
[Figure: sample chart showing how many defects were raised during different phases of the SDLC]
f. Issue Log
Purpose:
The purpose of the Issue Log is to:
- Allocate a unique number to each Project Issue
- Record the type of Project Issue
- Be a summary of all the Project Issues, their analysis and status
http://www.dijest.com/tools/pmworkbench/pmtemplates/pitempl/IMLOG2.DOC
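The link above points to a Word template; as a toy sketch of the same idea (in Python, with hypothetical issue types and descriptions), an issue log allocates a unique number to each Project Issue, records its type, and can summarize all issues with their status:

import itertools

class IssueLog:
    """Allocates a unique number to each Project Issue and summarizes them."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._issues = []

    def log(self, issue_type: str, description: str, status: str = "Open") -> int:
        number = next(self._ids)  # unique number per Project Issue
        self._issues.append((number, issue_type, description, status))
        return number

    def summary(self):
        for number, issue_type, description, status in self._issues:
            print(f"#{number} [{issue_type}] {description} - {status}")

log = IssueLog()
log.log("Request for Change", "Add network drive support to File > Open")
log.log("Off-Specification", "Not all file types named in the spec can be opened")
log.summary()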
g. Change Control Process
The creation and implementation of the Change Control Process ensures the standardization of the methods and procedures used for efficient and prompt handling of changes, minimizing the impact of change-related incidents on service quality and improving service delivery. Documents are modified at every stage of the SDLC, and these documents need to be differentiated based on their content. This is where the change control process comes into the picture: it is a process of organizing the project documents effectively to deliver the output as expected. Based on the discussions between different groups, a document is assigned a version number, for example Yahoo.v2.3.
The initial draft is assigned v0.0 and is discussed among the various teams. Whenever there is a change in this document, we assign it a new version number, such as 0.1, 0.2 and so on.
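As a toy sketch of this draft-numbering convention (in Python, assuming simple "major.minor" version strings):

def next_minor(version: str) -> str:
    # "0.0" -> "0.1" -> "0.2" and so on, as each review round changes the draft.
    major, minor = version.split(".")
    return f"{major}.{int(minor) + 1}"

version = "0.0"                # initial draft
version = next_minor(version)  # first round of changes -> 0.1
version = next_minor(version)  # second round of changes -> 0.2
print(version)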