Unit 2 SQA

The document outlines various software development methodologies, including the Software Development Life Cycle (SDLC), Waterfall, Prototyping, Spiral, and Object-Oriented models, detailing their phases, advantages, and disadvantages. It also discusses quality assurance aspects such as verification, validation, and qualification, emphasizing their importance in ensuring software quality. Additionally, it provides a comprehensive overview of test planning, including roles, objectives, components, and an example test plan.

Q1) Software Development Methodologies


Types of Software Development Processes


1. Software Development Life Cycle (SDLC) Model
2. Waterfall Model
3. Prototype Model
4. Spiral Model
5. Object-Oriented Model

1. Software Development Life Cycle (SDLC) Model


Definition

 Describes the sequence of activities in a software engineering (SE) project.


 Organizes phases in a structured manner.

Phases of SDLC

1. Requirements Definition
o Customers define required functionality, behavior, performance, and interfaces.
2. Analysis
o Examines the requirements and forms an initial system model.
3. Design
o Defines outputs, inputs, processing procedures, data structures, and software architecture.
4. Coding
o Converts design into code.
o Includes quality assurance (unit tests, integration tests); a minimal unit-test sketch follows this phase list.
5. System Testing
o Identifies software errors to ensure software quality.
o Performed by the developer before delivery.
o Customers may conduct independent "acceptance tests."
6. Installation and Conversion
o The completed system is installed (in embedded products, as firmware).
o If replacing an existing system, a conversion process ensures uninterrupted activities.
7. Regular Operation & Maintenance
o Three types of maintenance:
 Corrective: Fixing faults.
 Adaptive: Modifying software to meet new needs.
 Perfective: Adding minor features to enhance performance.
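
The quality assurance built into the coding phase can be as small as one unit test per module. A minimal sketch using Python's unittest follows; the function under test and its expected behavior are invented for illustration, not taken from the notes.

```python
# A minimal unit-test sketch (Python, unittest). The function under test
# and its expected behavior are hypothetical, for illustration only.
import unittest


def apply_discount(price, percent):
    """Module under test: reduce price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```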

Variation in Phases

 Large-scale projects may have 8+ phases.


 Small projects may merge phases into 4-6 steps.
2. Waterfall Model
Strengths

 Simple and easy to understand.


 Well-structured process for inexperienced staff.
 Clear milestones.
 Ensures stable requirements.
 Suitable for projects prioritizing quality over cost and schedule.

When to Use

 When requirements are well-defined.


 For stable product definitions.
 When using well-understood technology.
 Porting an existing product to a new platform.

Disadvantages

 Doesn’t align well with real-world iterative development.


 Difficult to capture accurate early-stage requirements.
 Working software is delivered late in the project, which delays the discovery of serious errors.

3. Prototype Model
Definition

 Quickly creates a working model (prototype) for testing design concepts and gathering user feedback.

Process

1. Developers build a prototype during the requirements phase.


2. Users evaluate the prototype and provide feedback.
3. Developers refine the prototype.
4. Once users approve, the prototype is finalized into a product.

Advantages

 Users can test and provide feedback early.


 Rapid development of functional prototypes.
 Helps detect errors early.

Disadvantages

 Design decisions made quickly at the prototype stage can be uncertain.


 Information may be lost through continuous modifications.
 Hard to estimate project duration.
4. Spiral Model
Definition

 Iterative development process in which each cycle includes planning, risk analysis, engineering, and evaluation.

Phases

1. Planning
o Gather requirements such as Business Requirement Specifications (BRS) and System Requirement Specifications (SRS).
2. Risk Analysis
o Identify risks and suggest alternative solutions.
o Create a prototype for evaluation.
3. Engineering
o Develop and test the software.
4. Evaluation
o Customers review project progress before proceeding to the next cycle.

Advantages

 Reflects real-world software development.


 Combines benefits of Waterfall and Prototyping models.
 Reduces project risk.
 Enhances project visibility.

Disadvantages

 High cost.
 Requires specialized risk analysis expertise.
 Project success depends on risk evaluation.
 Unsuitable for small projects.

When to Use

 When risk and cost evaluation are critical.


 For medium-to-high-risk projects.
 When user needs and requirements are uncertain.
 In research-oriented projects expecting significant changes.

5. Object-Oriented Model
Definition

 Emphasizes software reuse through existing components (objects).


 Uses object-oriented analysis and design.

Process

1. Conduct object-oriented analysis and design.


2. Reuse existing software components from a component library.
3. Develop new components when necessary.
4. Store new components for future reuse.

Advantages

 Encourages software reuse.


 Improves software development productivity.

6. Quality Assurance in Software Development


Factors Affecting Quality Assurance Activities

Project Factors

 Project size and complexity.


 Availability of reusable software components.
 Risk severity if the project fails.

Team Factors

 Team expertise and experience.


 Availability of skilled support staff.
 Familiarity with project and technologies.

Quality Assurance Process

1. Define quality assurance activities for each development phase.


2. Assign responsibilities for each activity.
3. Allocate resources for defect removal and changes.

Example: Patient Monitoring System


Project Overview

 Develop an advanced patient monitoring system combining room units with control stations.
 Interfaces with medical equipment from different manufacturers.
 Nurses use a control unit; doctors receive data via mobile units.

Project Characteristics
 Duration: 14 months
 Team Size: 5 members
 Effort: 40 man-months
 Reusable Components: 15%
 Methodology Used: SDLC with two prototypes for user feedback.

Considerations

 High system complexity.


 Limited availability of reusable components.
 Large project scope.
 Critical failure consequences.

Planned Quality Assurance Activities

1. Design review of requirements.


2. Design review of room unit analysis.
3. Design review of control unit analysis.
4. Preliminary design review.
5. Inspection of patient room unit design.
6. Inspection of control unit design.
7. Prototype design review for room unit.
8. Prototype design review for control unit.
9. Detailed design inspection of each software component.
10. Review of test plans for room and control units.
11. Unit testing for each software module.

Q3) Verification, Validation, and Qualification



1. Overview of Quality Assurance Aspects


Quality assurance in software development involves three key aspects:

 Verification – Ensuring the product meets specified conditions during each development phase.
 Validation – Checking if the final product meets customer requirements.
 Qualification – Determining if the system or component is suitable for operational use.
2. Definitions
2.1 Verification

 Evaluates whether each development phase produces outputs that meet predefined conditions.
 Ensures consistency between products from different development phases.
 Assumes previous phases are completed correctly.
 Focuses on adherence to specifications rather than customer requirements.

2.2 Validation

 Ensures the final product meets customer expectations.


 Helps improve customer satisfaction.
 Compares the system against the original requirements.

2.3 Qualification

 Determines whether a system or component is fit for operational use.

3. Key Differences Between Verification and Validation


1. Verification is a static practice that checks documents, design, and code; validation is a dynamic practice that tests the actual product.
2. Verification does not involve code execution; validation always involves code execution.
3. Verification is human-based checking of documents and files; validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking; validation uses black-box, gray-box, and white-box testing.
5. Verification ensures the software conforms to its specifications; validation ensures the software meets customer expectations and requirements.
6. Verification catches errors that validation might miss (a low-level check); validation catches errors that verification might miss (a high-level check).
7. Verification targets the requirements specification, application/software architecture, high-level design, and database design; validation targets the actual product, including modules, integrated modules, and the final product.
8. Verification is performed by the QA team to ensure compliance with the Software Requirements Specification (SRS); validation is performed with the involvement of the testing team.
9. Verification comes first; validation comes after verification.
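
The static/dynamic split in the first three points can be made concrete with a toy Python sketch: verification inspects an artifact without running it, while validation executes the product against a requirement. The artifact and the requirement here are invented.

```python
# Toy illustration: verification is static, validation is dynamic.
import inspect


def add(a, b):
    """Return the sum of a and b."""  # the 'specification' in miniature
    return a + b


# Verification (static): examine the artifact without executing it,
# e.g. check that the module documents its intended behavior.
source = inspect.getsource(add)
assert '"""' in source, "specification/docstring missing"

# Validation (dynamic): execute the product and compare the result
# against the customer requirement.
assert add(2, 3) == 5, "requirement not met"
print("verified and validated")
```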

4. Importance of Specifications in Quality Assurance


 Validation ensures the specification aligns with customer needs.
 Verification ensures the software meets the given specifications.
 Both are essential for delivering high-quality, functional, and reliable software.
Q6)
https://www.geeksforgeeks.org/test-plan-software-testing/


Test Plan: A Comprehensive Overview


1. What is a Test Plan?
 A document outlining all testing-related activities for a project.
 Defines what will be tested, how it will be tested, and by whom.
 Created by the test manager before testing begins.
 Serves as a blueprint that adapts to project changes.
 Shared with Business Analysts, Project Managers, and stakeholders.

2. Roles and Responsibilities in Test Planning


 Who writes Test Plans? Test Lead, Test Manager, Test Engineer.
 Who reviews Test Plans? Test Lead, Test Manager, Test Engineer, Customer, Development Team.
 Who approves Test Plans? Customer, Test Manager.
 Who writes Test Cases? Test Lead, Test Engineer.
 Who reviews Test Cases? Test Engineer, Test Lead, Customer, Development Team.
 Who approves Test Cases? Test Manager, Test Lead, Customer.

3. Importance of Test Plan Creation


Key Benefits

 Defines Objectives: Ensures clear understanding of testing goals.


 Structured Approach: Systematic testing process.
 Avoids Scope Creep: Prevents unnecessary testing.
 Resource Allocation: Ensures necessary tools, environments, and personnel are available.
 Identifies Risks: Outlines mitigation strategies.
 Contingency Plans: Prepares for unexpected events.
 Stakeholder Alignment: Facilitates communication.
 Documentation: Provides transparency and knowledge sharing.
 Resource Optimization: Efficiently uses time and personnel.
 Focus on Priorities: Targets high-impact areas.
4. Objectives of a Test Plan
 Overview of testing activities: Defines the start and stop points.
 Timeline management: Helps estimate testing duration.
 Resource estimation: Determines workforce and tool requirements.
 Blueprint for activities: Covers details from start to finish.
 Solution identification: Identifies potential project challenges.
 Rulebook for execution: Defines rules for each phase.

5. Difference Between Test Strategy and Test Plan


 Definition: A test strategy is a high-level document outlining approach and goals; a test plan is a detailed document for specific testing activities.
 Purpose: The strategy provides a framework for multiple projects; the plan specifies the process for a single project.
 Scope: The strategy is broad and organization-wide; the plan is narrow and project-specific.
 Level of Detail: The strategy gives high-level guidelines; the plan gives a detailed execution plan.
 Responsibility: The strategy is created by senior management; the plan is created by test managers or leads.
 Stability: The strategy is static and updated infrequently; the plan is dynamic and updated regularly.
 Audience: The strategy addresses stakeholders and senior management; the plan addresses testers, developers, and project managers.
 Examples: The strategy covers testing tools, defect tracking, and general guidelines; the plan covers specific test cases, environments, and schedules.
 Focus: The strategy covers the what and why of testing; the plan covers the how, when, and who.
 Timeframe: The strategy is long-term; the plan is short-term, aligned with the project lifecycle.
 Updates: The strategy is updated rarely; the plan is updated frequently, as needed.

6. Components and Attributes of a Test Plan


1. Objective

 Defines the goal of the test plan.


 Aims to detect defects and deliver a bug-free product.
 The objective should be broken down into components and sub-components.

2. Scope

 In-Scope: Modules to be tested rigorously.


 Out-Scope: Modules that won’t be tested in detail.
 Example: Testing features A, B, C, and D, but feature B is purchased externally, so only integration testing will be performed.

3. Testing Methodology
 Defines testing methods based on application features.
 Ensures clarity on the types of testing to be used.

4. Approach

 Defines how testing will be performed.


 Includes:
o High-Level Scenarios: E.g., logging into a website, booking a product.
o Flow Graphs: Visual representation of testing sequences.

5. Assumptions

 Example assumptions:
o The testing team will receive proper support from developers.
o Testers will get knowledge transfer.
o The company will allocate adequate resources.

6. Risks

 Risks arise when assumptions fail.


 Example risks:
o Poor test management.
o Project delays.
o Lack of team coordination.

7. Mitigation Plan

 Steps to handle risks:


o Prioritize test activities.
o Train testers.
o Develop leadership skills in managers.

8. Roles and Responsibilities

 Test Manager: Manages the project, allocates resources.


 Tester: Executes tests, identifies issues.

9. Schedule

 Records start and end dates for testing activities.


 Example: Test case creation starts on Day 1, ends on Day 5.

10. Defect Tracking

 Steps in tracking bugs:


o Information Capture: Document the issue.
o Prioritization: Categorize bugs by severity.
o Communication: Ensure clear reporting between testers and developers.
o Testing Environment: Reproduce defects in various conditions.
 Example: Using Jira, Mantis, Trac for bug tracking.
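
To make these steps concrete, below is a minimal sketch of filing a bug through Jira's REST API (v2) with Python's requests library, mapping the capture and prioritization steps onto issue fields. The instance URL, project key, and credentials are hypothetical placeholders.

```python
# A minimal sketch of filing a bug through Jira's REST API (v2).
# JIRA_URL, the project key, and the credentials are placeholders.
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # hypothetical instance


def report_defect(summary: str, description: str, severity: str = "High") -> str:
    payload = {
        "fields": {
            "project": {"key": "QA"},        # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": summary,              # information capture
            "description": description,      # steps to reproduce, environment
            "priority": {"name": severity},  # prioritization by severity
        }
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        json=payload,
        auth=("[email protected]", "api-token"),  # hypothetical credentials
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "QA-123"
```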
11. Test Environments

 Lists hardware and software configurations for testing.


 Example:
o Software: Windows, Linux, macOS.
o Hardware: RAM, ROM specifications.
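
One lightweight way to record such configurations is as plain data that a test runner or CI job can iterate over. A sketch follows; the concrete OS/RAM/browser combinations are hypothetical examples, not values from the notes.

```python
# A sketch of a test-environment matrix as plain data; the specific
# OS/RAM/browser combinations are hypothetical.
TEST_ENVIRONMENTS = [
    {"os": "Windows 11", "ram_gb": 8, "browser": "Chrome"},
    {"os": "Ubuntu 22.04", "ram_gb": 4, "browser": "Firefox"},
    {"os": "macOS 14", "ram_gb": 16, "browser": "Safari"},
]

for env in TEST_ENVIRONMENTS:
    # In practice, a CI job or device farm would pick up each entry.
    print(f"run suite on {env['os']} ({env['ram_gb']} GB RAM, {env['browser']})")
```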

12. Entry and Exit Criteria

 Entry Conditions:
o Resources must be available.
o Test data must be ready.
 Exit Conditions:
o No major defects.
o Most test cases executed successfully.
 Example: If 45% of test cases fail, testing is suspended until fixes are applied.
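
The 45% suspension rule from the example is straightforward to express as a small check. A sketch follows, using only the figures given in the example.

```python
# Sketch of the suspension rule above: if 45% or more of the executed
# test cases fail, suspend testing until fixes are applied.
def should_suspend(executed: int, failed: int, threshold: float = 0.45) -> bool:
    if executed == 0:
        return False  # nothing has run yet, nothing to judge
    return failed / executed >= threshold


# Example: 100 cases executed, 45 failed -> suspend.
print(should_suspend(executed=100, failed=45))  # True
```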

13. Test Automation

 Decides what to automate:


o Manual Testing: If the feature has many bugs.
o Automated Testing: If the feature is frequently tested.

7. Example Test Plan


 Objective: Ensure login functionality works correctly.
 Scope: Login feature across web and mobile applications.
 Testing Methodology: Functional, Regression, and Security Testing.
 Approach: Use automation for login validation.
 Assumptions: Testers will receive full API documentation.
 Risks: API downtime may delay testing.
 Mitigation Plan: Maintain a backup testing environment.
 Roles: QA Lead, Test Engineer.
 Defect Tracking Tool: Jira.
 Entry Criteria: Login module fully developed.
 Exit Criteria: No major bugs, all test cases pass.
 Test Automation: Login validation automated using Selenium.
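
As a companion to the last row of the example plan, here is a minimal sketch of automated login validation using Selenium's Python bindings. The page URL, element IDs, and credentials are hypothetical.

```python
# A minimal sketch of automated login validation (Selenium, Python).
# The page URL, element IDs, and credentials are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # hypothetical login page
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # Pass criterion: landing on the dashboard after login.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()


if __name__ == "__main__":
    test_login()
```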
