STQA
Unit No: I
1. What is software testing? Discuss the need of software testing.
Software testing is a method of assessing the functionality of a software program. The process checks
whether the actual software matches the expected requirements and ensures the software is free of defects.
The purpose of software testing is to identify errors, faults, or missing requirements in contrast to the
actual requirements. It mainly aims at measuring the specification, functionality, and performance of a
software program or application.
The following are important reasons why software testing techniques should be incorporated into
application development:
Identifies defects early. Developing complex applications can leave room for errors. Software testing is
imperative, as it identifies any issues and defects with the written code so they can be fixed before the
software product is delivered.
Improves product quality. When it comes to customer appeal, delivering a quality product is an important
metric to consider. An exceptional product can only be delivered if it's tested effectively before launch.
Software testing helps the product pass quality assurance (QA) and meet the criteria and specifications
defined by the users.
Increases customer trust and satisfaction. Testing a product throughout its development lifecycle builds
customer trust and satisfaction, as it provides visibility into the product's strong and weak points. By the
time customers receive the product, it has been tried and tested multiple times and delivers on quality.
Detects security vulnerabilities. Insecure application code can leave vulnerabilities that attackers can
exploit. Since most applications are online today, they can be a leading vector for cyber attacks and should
be tested thoroughly during various stages of application development. For example, a web application
published without proper software testing can easily fall victim to a cross-site scripting attack where the
attackers try to inject malicious code into the user's web browser by gaining access through the vulnerable
web application. The nontested application thus becomes the vehicle for delivering the malicious code,
which could have been prevented with proper software testing.
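The escaping defense against the XSS scenario above can be sketched in a few lines. This is a minimal illustration, not part of the original text: the render_comment function is hypothetical, and Python's standard-library html.escape stands in for whatever output-encoding mechanism a real web framework provides.

```python
import html

def render_comment(comment: str) -> str:
    # Escaping converts characters like < and > into HTML entities,
    # so injected markup is displayed as text instead of executed.
    return "<p>" + html.escape(comment) + "</p>"

malicious = "<script>alert('stolen cookie')</script>"
safe_html = render_comment(malicious)

# The <script> tag has been neutralized into harmless entities.
assert "<script>" not in safe_html
assert "&lt;script&gt;" in safe_html
```
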
Helps with scalability. A type of nonfunctional software testing process, scalability testing is done to
gauge how well an application scales with increasing workloads, such as user traffic, data volume and
transaction counts. It can also identify the point where an application might stop functioning and the
reasons behind it, which may include meeting or exceeding a certain threshold, such as the total number of
concurrent app users.
Saves money. Software development issues that go unnoticed due to a lack of software testing can haunt
organizations later with a bigger price tag. After the application launches, it can be more difficult to trace
and resolve the issues, as software patching is generally more expensive than testing during the
development stages.
Reliability
Reliability requirements deal with service failure. They determine the maximum allowed failure rate of the
software system, and can refer to the entire system or to one or more of its separate functions.
Efficiency
It deals with the hardware resources needed to perform the different functions of the software system. It
includes the processing capability (given in MHz), storage capacity (given in MB or GB) and data
communication capability (given in Mbps or Gbps). It also deals with the time between recharging of the
system's portable units, such as information system units located in portable computers, or meteorological
units placed outdoors.
Integrity
This factor deals with the software system's security, that is, preventing access by unauthorized persons and
distinguishing between the groups of people to be given read as well as write permissions.
Usability- Usability requirements deal with the staff resources needed to train a new employee and to operate
the software system.
3. Elaborate the difference between QA and QC in detail.
Quality Assurance (QA) | Quality Control (QC)
It focuses on providing assurance that the quality requested will be achieved. | It focuses on fulfilling the quality requested.
It is involved during the development phase. | It is not included during the development phase.
It does not include the execution of the program. | It always includes the execution of the program.
The aim of quality assurance is to prevent defects in the system. | The aim of quality control is to identify and fix defects or bugs in the system.
Its main focus is on the intermediate process. | Its primary focus is on the final product.
All team members of the project are involved. | Generally, only the testing team of the project is involved.
The Statistical Process Control (SPC) statistical technique is applied in quality assurance. | The Statistical Quality Control (SQC) statistical technique is applied in quality control.
7. Discuss the Role of testing in each phase of software development life cycle.
1. Requirements Gathering and Analysis:
Testing begins by validating requirements for consistency, clarity, and feasibility.
Testers collaborate with stakeholders to ensure understanding and identify potential ambiguities or
conflicts in requirements.
2. System Design:
During this phase, testing involves reviewing system design documents and architecture diagrams
to ensure they align with requirements.
Testers prepare test scenarios and design test cases based on the system design.
3. Implementation (Coding):
Unit testing is performed at this stage by developers to validate individual components or modules.
Code review and static analysis tools may be utilized to detect defects early.
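As a hedged sketch of the developer-level unit testing described in this phase, the example below uses Python's built-in unittest module; the apply_discount function is a hypothetical module under test, not something from the original text.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: reduce price by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically; unittest.main() would do the same
# when the file is executed directly.
suite = unittest.TestLoader().loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```
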
4. Integration and Testing:
Integration testing verifies the interactions and interfaces between integrated components or
modules.
Testers conduct functional and non-functional testing to ensure the software behaves as expected.
5. System Testing:
This phase involves testing the entire system as a whole to ensure it meets specified requirements.
Testers perform regression testing, user acceptance testing (UAT), performance testing, security
testing, etc.
6. Deployment and Maintenance:
Post-deployment, testing continues with maintenance activities to identify and fix defects reported
by users or identified through monitoring.
Regression testing is performed to ensure changes or fixes do not introduce new issues.
8. What is quality assurance? Write down the purpose of the quality assurance.
Quality Assurance
Quality assurance consists of the auditing and reporting functions of management. The
goal of quality assurance is to provide management with the data necessary to be informed about product
quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the
data provided through quality assurance identify problems, it is management’s responsibility to address the
problems and apply the necessary resources to resolve quality issues.
The purpose of Quality Assurance includes:
1. Process Compliance: QA ensures that established processes, methodologies, and standards are followed
consistently throughout the development lifecycle. It involves defining, implementing, and maintaining
processes to meet quality goals.
2. Quality Standards Adherence: QA establishes and maintains quality standards, guidelines, and
procedures. It aims to ensure that these standards are met and continuously improved upon.
3. Defect Prevention: QA focuses on identifying and rectifying potential issues early in the development
process to prevent defects from occurring in the final product. This involves risk assessment, reviews, and
proactive measures.
4. Customer Satisfaction: By maintaining quality standards and meeting customer requirements, QA
contributes to higher customer satisfaction and confidence in the product or service.
5. Continuous Improvement: QA involves a continuous feedback loop for improvement. It aims to identify
areas for enhancement, refine processes, and incorporate lessons learned from previous projects to enhance
overall quality in subsequent iterations.
6. Metrics and Analysis: QA often involves the collection of metrics and data analysis to assess the
effectiveness of processes and identify areas that need improvement. These metrics help in making
informed decisions for quality enhancement.
Verification | Validation
The verifying process includes checking documents, design, code, and program. | It is a dynamic mechanism of testing and validating the actual product.
It does not involve executing the code. | It always involves executing the code.
Verification uses methods like reviews, walkthroughs, inspections, and desk-checking. | It uses methods like black box testing, white box testing, and non-functional testing.
It checks whether the software conforms to the specification. | It checks whether the software meets the requirements and expectations of the customer.
It finds bugs early in the development cycle. | It can find bugs that the verification process cannot catch.
Its target is the application and software architecture, specification, complete design, high-level design, and database design. | Its target is the actual product.
The QA team does verification and makes sure that the software is as per the requirements in the SRS document. | Validation is executed on the software code with the involvement of the testing team.
It comes before validation. | It comes after verification.
Inspection | Walkthrough
1. It is formal. | It is informal.
2. Initiated by the project team. | Initiated by the author.
3. A group of relevant persons from different departments participates in the inspection. | Usually team members of the same project take part in the walkthrough; the author himself acts as the walkthrough leader.
4. A checklist is used to find faults. | No checklist is used in the walkthrough.
5. The inspection process includes overview, preparation, inspection (examination), rework and follow-up. | The walkthrough process includes overview, little or no preparation, examination (the actual walkthrough meeting), rework and follow-up.
6. A formalized procedure is followed in each step. | There is no formalized procedure in the steps.
7. A reader reads the product code, and everyone inspects it and comes up with defects. | The author reads the product code, and his teammates come up with defects or suggestions.
12. What is the role of the software quality assurance (SQA) group?
1. Defining Standards and Procedures: SQA establishes and defines quality standards, guidelines,
methodologies, and best practices to be followed throughout the development process. This includes
creating documentation outlining these standards.
2. Process Improvement: SQA continually evaluates and improves development processes to enhance
efficiency and ensure higher-quality outcomes. They identify bottlenecks, inefficiencies, and areas for
improvement within the development lifecycle.
3. Quality Planning: SQA plans and strategizes quality assurance activities for each phase of the SDLC. This
involves outlining test strategies, defining testing environments, and establishing metrics to measure quality.
4. Quality Control: SQA conducts various types of testing (functional, non-functional, performance, security,
etc.) to verify that the software conforms to defined standards and meets user requirements.
5. Risk Management: SQA identifies and assesses risks associated with the software development process.
They implement risk mitigation strategies to address potential issues that could affect the quality or delivery
of the software.
6. Audits and Reviews: SQA performs regular audits and reviews of development processes, documentation,
code, and test results to ensure compliance with standards and to identify areas for improvement.
7. Training and Guidance: SQA provides training and guidance to project teams and stakeholders on quality
standards, processes, and tools to maintain consistency and adherence to quality practices.
8. Documentation and Reporting: SQA maintains comprehensive documentation of processes, test plans,
test cases, defects, and reports on the quality status of the software to stakeholders and management.
9. Customer Focus: SQA ensures that the end product meets customer expectations by validating that
requirements are met, and user needs are addressed effectively.
10. Continuous Improvement: SQA fosters a culture of continuous improvement, learning from past
experiences and implementing lessons learned to enhance quality in future projects.
McCall's Quality Factors, developed by Jim McCall and his colleagues in 1977, form a model used to evaluate
software quality. It identifies 11 key factors that contribute to the quality of software. These factors cover various
aspects of software and help in assessing, planning, and improving software development processes. McCall's
quality factors are categorized into three main groups: product operation, product revision, and product
transition.
1. Product Operation Factors (how well the software performs its day-to-day functions):
Correctness: This factor assesses the degree to which the software meets its specified requirements
and performs its intended functions accurately.
Reliability: Reliability refers to the ability of the software to maintain its performance under
specific conditions for a specific period. It evaluates how often the software fails and its ability to
recover from failures.
Efficiency: Efficiency measures the software's performance concerning system resources, such as
CPU usage, memory, and response time, while accomplishing its tasks.
Integrity: Integrity evaluates the security and protection of the software against unauthorized access
and alterations, ensuring the data remains accurate and secure.
Usability: Usability refers to how easily and effectively users can interact with and use the software.
It assesses user-friendliness, interface design, and user acceptance.
2. Product Revision Factors (how easily the software can be changed):
Maintainability: Maintainability evaluates how easy it is to modify, update, and fix issues within
the software. It includes factors like modularity, code readability, and documentation.
Flexibility: Flexibility measures the software's capability to accommodate future changes or
modifications in its functionality or environment without extensive rework.
Testability: Testability assesses how easily the software can be tested to ensure that it meets its
specifications and requirements.
3. Product Transition Factors (how well the software adapts to new environments):
Portability: Portability assesses the software's ability to be transferred from one environment to
another, allowing it to operate in different configurations and platforms.
Reusability: Reusability measures the extent to which software components or modules can be
reused in other applications or contexts, reducing development time and effort.
Interoperability: Interoperability evaluates the software's ability to operate and communicate with
other systems or software, ensuring seamless integration and data exchange.
Stage 1: Planning and Requirement Analysis Requirement analysis is the most important and fundamental stage
in SDLC. It is performed by the senior members of the team with inputs from the customer, the sales
department, market surveys and domain experts in the industry. This information is then used to plan the basic
project approach and to conduct a product feasibility study in the economic, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks associated with the project is also
done in the planning stage. The outcome of the technical feasibility study is to define the various technical
approaches that can be followed to implement the project successfully with minimum risks.
Stage 2: Defining Requirements Once the requirement analysis is done, the next step is to clearly define and
document the product requirements and get them approved by the customer or the market analysts. This is
done through an SRS (Software Requirement Specification) document which consists of all the product
requirements to be designed and developed during the project life cycle.
Stage 3: Designing the Product Architecture SRS is the reference for product architects to come out with the
best architecture for the product to be developed. Based on the requirements specified in SRS, usually more
than one design approach for the product architecture is proposed and documented in a DDS - Design
Document Specification. This DDS is reviewed by all the important stakeholders and, based on various
parameters such as risk assessment, product robustness, design modularity, budget and time constraints, the best
design approach is selected for the product. A design approach clearly defines all the architectural modules of
the product along with its communication and data flow representation with the external and third party
modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined
with the minutest of the details in DDS.
Stage 4: Building or Developing the Product In this stage of SDLC the actual development starts and the
product is built. The programming code is generated as per DDS during this stage. If the design is performed in
a detailed and organized manner, code generation can be accomplished without much hassle. Developers must
follow the coding guidelines defined by their organization and programming tools like compilers, interpreters,
debuggers, etc. are used to generate the code. Different high level programming languages such as C, C++,
Pascal, Java and PHP are used for coding. The programming language is chosen with respect to the type of
software being developed.
Stage 5: Testing the Product This stage is usually a subset of all the stages as in the modern SDLC models, the
testing activities are mostly involved in all the stages of SDLC. However, this stage refers to the testing only
stage of the product where product defects are reported, tracked, fixed and retested, until the product reaches the
quality standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance Once the product is tested and ready to be deployed it is
released formally in the appropriate market. Sometimes product deployment happens in stages as per the
business strategy of that organization. The product may first be released in a limited segment and tested in the
real business environment (UAT- User acceptance testing). Then based on the feedback, the product may be
released as it is or with suggested enhancements in the targeted market segment. After the product is released
in the market, its maintenance is done for the existing customer base.
18. Explain any five desirable software qualities.
1. Reliability:
Reliability refers to the software's ability to perform consistently and predictably under various
conditions without failure. A reliable software system delivers accurate results, operates as expected,
and maintains its performance over time. It should be robust enough to handle unexpected inputs or
conditions without crashing or causing errors.
2. Maintainability:
Maintainability is the ease with which software can be modified, updated, or enhanced. A highly
maintainable system is structured in a way that allows developers to make changes or fix issues
efficiently without causing unintended side effects. This quality involves good code organization,
documentation, and adherence to coding standards.
3. Usability:
Usability focuses on how easily and effectively users can interact with the software to accomplish
their tasks. A user-friendly interface, intuitive design, clear navigation, and responsiveness
contribute to a highly usable software product. Usability ensures that users can operate the software
efficiently and with minimal training or assistance.
4. Scalability:
Scalability refers to the software's capability to handle increased workload or accommodate growth
without a significant impact on performance or functionality. A scalable system can adapt to
increased demands by adding resources or expanding its capacity, ensuring it remains efficient and
responsive as user numbers or data volume grows.
5. Security:
Security is crucial for protecting the software from unauthorized access, data breaches, and
malicious attacks. A secure software system implements robust measures to safeguard sensitive
information, prevent vulnerabilities, and ensure compliance with security standards. It includes
encryption, authentication, access control, and regular security updates.
1. Quality Assessment: Metrics offer objective evaluations of software quality by quantifying aspects such as
defect density, code complexity, and adherence to coding standards. This information aids in identifying
areas needing improvement and tracking the progress of quality enhancement efforts.
2. Performance Monitoring: Metrics track project progress, resource utilization, and productivity. They help
in identifying bottlenecks, inefficiencies, or deviations from project plans, allowing timely interventions for
better resource allocation and project management.
3. Process Improvement: By analyzing metrics related to software development processes, teams can identify
process inefficiencies, streamline workflows, and implement best practices for increased efficiency and
better outcomes.
4. Risk Management: Metrics provide early indicators of potential risks and issues. For instance, metrics
related to defect density or regression rates can forecast potential challenges, allowing teams to take
proactive measures to mitigate risks.
5. Decision Support: Metrics serve as a basis for informed decision-making. They help stakeholders assess
the feasibility of project goals, make trade-off decisions, prioritize tasks, and allocate resources effectively.
6. Benchmarking and Comparison: Metrics allow for comparisons within a project or across different
projects. By comparing metrics across similar projects, teams can identify successful practices and areas for
improvement.
1. Early Detection of Major Issues: Smoke Testing helps in quickly identifying major flaws or issues in the
application's critical functionalities. It aims to catch severe defects that could hinder further testing or
integration efforts.
2. Time and Cost Efficiency: By executing a minimal set of tests focused on critical functionalities, smoke
testing saves time and resources during the initial phase of testing. It allows testers to detect show-stopping
issues early, reducing the time spent on subsequent testing phases if the basic functionalities fail.
3. Risk Mitigation: It reduces the risk of progressing with a build that has severe issues. Verifying essential
functionalities through smoke testing minimizes the chances of wasting effort on a build that is not viable
for further testing or deployment.
4. Quick Feedback Loop: Smoke testing provides quick feedback to development teams, allowing them to
address critical defects promptly. This accelerates the development process by ensuring that basic
functionalities are working before proceeding with more comprehensive testing.
5. Streamlined Development Process: It encourages a continuous integration and continuous testing
approach by verifying the basic stability of each new build. This promotes a more streamlined and efficient
development cycle.
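The idea above can be sketched as a tiny smoke suite. Everything here is illustrative and not from the original text: the three check functions are hypothetical stand-ins for real connectivity and page-load probes.

```python
def database_reachable() -> bool:
    # Stand-in for a real connectivity probe (hypothetical).
    return True

def login_page_loads() -> bool:
    # Stand-in for a real HTTP check of the login page (hypothetical).
    return True

def checkout_starts() -> bool:
    # Stand-in for a real check of the checkout flow (hypothetical).
    return True

# A smoke suite is deliberately small: a handful of fast checks of the
# critical paths. Any failure rejects the build before deeper, more
# expensive testing begins.
SMOKE_CHECKS = [database_reachable, login_page_loads, checkout_starts]

def run_smoke_suite() -> bool:
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    if failures:
        print("Build REJECTED; failed checks:", failures)
        return False
    print("Build accepted for further testing.")
    return True

run_smoke_suite()
```
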
32. What are test plans and test cases? Explain with example.
Test Plan:
Definition: A Test Plan is a comprehensive document that outlines the overall approach, scope, resources,
schedules, and objectives of the testing process for a software project. It provides a roadmap for testing activities
and sets the direction for the testing team.
Test Case:
Definition: A Test Case is a detailed set of conditions, actions, and expected results developed to verify specific
functionalities or aspects of the software. Each test case represents a unique test scenario that helps in evaluating
whether the software behaves as expected.
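Since the question asks for an example, here is a hedged sketch of one test case expressed both as structured data and as an executable check. The login scenario and the login function are hypothetical, chosen only to illustrate the id / steps / expected-result shape of a test case.

```python
# A test case captured as structured data (all names hypothetical):
test_case = {
    "id": "TC-001",
    "title": "Valid login succeeds",
    "preconditions": "User 'alice' is registered with password 's3cret'",
    "steps": ["Open the login page",
              "Enter username 'alice' and password 's3cret'",
              "Click the Login button"],
    "expected_result": "User is redirected to the dashboard",
}

def login(username: str, password: str) -> str:
    """Stand-in for the system under test."""
    registered = {"alice": "s3cret"}
    return "dashboard" if registered.get(username) == password else "error"

# Executing the scenario and comparing actual vs expected behaviour:
actual = login("alice", "s3cret")
assert actual == "dashboard", f"{test_case['id']} failed: got {actual}"
print(test_case["id"], "passed")
```
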
1. No Knowledge of Internal Structure: Testers conduct black box testing without any knowledge of the
internal workings, algorithms, or code implementation of the software. They solely rely on externally
visible behaviors.
2. Based on Specifications and Requirements: Testing is performed based on predefined specifications,
requirements documents, user stories, or functional specifications provided for the software.
3. Focus on Functionalities: It emphasizes verifying whether the software meets user expectations and
performs its functions as intended, rather than delving into code-level details.
4. Test Cases Creation: Testers create test cases based on input conditions, test data, and expected outputs
without considering the software's internal logic.
5. Various Techniques: Black box testing utilizes techniques such as equivalence partitioning, boundary
value analysis, decision tables, state transition testing, and more to design test cases.
6. Types of Testing: It encompasses various types of testing, including functional testing, non-functional
testing (e.g., usability, performance), and regression testing, among others.
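Equivalence partitioning, listed among the techniques above, can be illustrated with a short sketch. The age validator and its 18..60 valid range are hypothetical; the point is that one representative input per partition suffices, because all members of a partition are expected to behave the same way.

```python
def accept_age(age: int) -> str:
    """Hypothetical validator: valid ages are 18..60 inclusive."""
    if age < 18:
        return "invalid: too low"
    if age > 60:
        return "invalid: too high"
    return "valid"

# Three equivalence classes -> three representative test inputs.
representatives = {"below range": 5, "in range": 30, "above range": 99}
assert accept_age(representatives["below range"]) == "invalid: too low"
assert accept_age(representatives["in range"]) == "valid"
assert accept_age(representatives["above range"]) == "invalid: too high"
```
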
35. Distinguish between structural and functional testing.
Structural Testing | Functional Testing
Structural test cases depend on the internal workings of the component as well as its external specifications. | Functional test cases are designed from the external specifications; the internal code structure is not considered.
Structural test cases are based on the internal logic and paths of a component. | Functional test cases are based on input/output conditions and the actions that a component can perform.
Structural testing is used to find errors in data structure usage and internal coding logic. | Functional testing verifies that the system adheres to acceptable standards of information processing and does not contain defects.
Structural test cases do not depend on specific data values. | Functional test cases may have to use specific values for a test case to pass or fail (e.g., error checking).
Structural testing may extend to hardware-level error checking. | Functional testing is achieved through software techniques.
Structural testing involves the analysis of static structures such as data structures and algorithms. | Functional testing involves the analysis of the system's dynamic, externally observable behavior.
Alpha Testing | Beta Testing
Reliability and security testing are not checked in alpha testing. | Reliability, security and robustness are checked during beta testing.
Developers can immediately address critical issues or fixes found in alpha testing. | Most of the issues or feedback collected from beta testing will be implemented in future versions of the product.
41. What is system testing? List its various types. Explain any two in short.
System testing is a level of software testing where a complete, integrated software system is tested as a whole to
evaluate its compliance with specified requirements. It focuses on verifying that the entire software system meets
its intended purpose, functions correctly, and operates as expected in its intended environment.
Various types of system testing include:
1. Functional Testing
2. Non-Functional Testing
3. Usability Testing
4. Performance Testing
5. Security Testing
6. Compatibility Testing
7. Regression Testing
8. Acceptance Testing
Two types of system testing explained briefly:
1. Performance Testing: This type of testing evaluates how well the system performs under various
conditions, assessing aspects like speed, responsiveness, scalability, and stability. For instance, load testing
examines the system's behavior under normal and peak load conditions, stress testing pushes the system
beyond its limits to determine breaking points, and scalability testing assesses the system's ability to handle
growing demands.
2. Security Testing: Security testing focuses on assessing the system's resistance to unauthorized access,
vulnerabilities, and potential threats. It involves various techniques such as vulnerability scanning,
penetration testing, authentication checks, encryption testing, and access control testing to ensure the
system's robustness against security risks and breaches.
46. Write a short note on boundary value testing and decision table testing.
Boundary Value Analysis
For reasons that are not completely clear, a greater number of errors tends to occur at the boundaries of the
input domain rather than in the "center." It is for this reason that boundary value analysis (BVA) has been
developed as a testing technique.
Boundary value analysis leads to a selection of test cases that exercise bounding values. Boundary value
analysis is a test case design technique that complements equivalence partitioning. Rather than selecting any
element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than
focusing solely on input conditions, BVA derives test cases from the output domain as well.
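A sketch of how BVA picks test values around the edges of an input range. The helper below is illustrative (not from the original text) and uses the common min-1 / min / min+1 / nominal / max-1 / max / max+1 selection.

```python
def boundary_values(lo: int, hi: int) -> list:
    # Classic BVA selection: values at and immediately around each
    # boundary, plus one nominal in-range value.
    nominal = (lo + hi) // 2
    return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

# Example: an input field that accepts ages 18..60.
print(boundary_values(18, 60))  # [17, 18, 19, 39, 59, 60, 61]
```
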
(1) Knowing the specified function that a product has been designed to perform, tests can be conducted that
demonstrate each function is fully operational while at the same time searching for errors in each function;
(2) Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is,
internal operations are performed according to specifications and all internal components have been adequately
exercised. The first test approach is called black-box testing and the second, white-box testing.
52. What are coverage criteria? list and explain any two coverage criteria in short.
Coverage criteria, also known as coverage metrics or coverage measures, are quantitative indicators used to
measure the extent to which a specific aspect of the software has been tested. These criteria determine the
effectiveness and completeness of the testing process by specifying what portions of the software should be
exercised by the test cases.
Some common coverage criteria in software testing include:
1. Statement Coverage (or Line Coverage):
Explanation: Statement coverage measures the percentage of executable code lines that have been
executed at least once during testing.
How It Works: It aims to ensure that each line of code is executed by at least one test case, helping
to identify unexecuted code.
Example: If a piece of code contains ten executable lines and the test suite causes all ten lines to
execute, the statement coverage is 100%.
2. Branch Coverage (or Decision Coverage):
Explanation: Branch coverage evaluates the proportion of decision points or branches in the code
that have been exercised by the test cases.
How It Works: It ensures that both true and false outcomes of conditional statements (branches) are
tested.
Example: In an 'if-else' statement, if the test suite executes both the true and false paths of the
condition, branch coverage for that decision point is complete.
3. Path Coverage:
Explanation: Path coverage aims to test every possible path through the code from start to finish.
How It Works: It verifies that every unique path in the program, including loops and conditional
statements, is traversed by at least one test case.
Example: If a function has multiple loops and conditional statements, achieving path coverage
requires executing all feasible paths, which might be impractical for complex code.
4. Condition Coverage (or Predicate Coverage):
Explanation: Condition coverage ensures that each boolean sub-expression in a decision takes on
both true and false values during testing.
How It Works: It focuses on testing individual conditions within compound conditions, aiming to
evaluate all combinations of conditions.
Example: In a complex condition like (A && B) || (C || D), condition coverage would ensure that
both A && B and C || D are evaluated to both true and false during testing.
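The difference between statement and branch coverage can be shown with a short sketch (the discount function and its tests are hypothetical): one test suite reaches 100% statement coverage yet only 50% branch coverage.

```python
def apply_discount(price, is_member):
    discount = 0
    if is_member:                  # decision point with two branches
        discount = price * 0.1
    return price - discount

# A single test executes every statement (statement coverage = 100%):
assert apply_discount(100, True) == 90.0

# But the False branch of the 'if' was never taken, so branch coverage
# is only 50%. A second test case closes the gap:
assert apply_discount(100, False) == 100
```

This is why branch coverage is considered stronger than statement coverage: every suite that achieves full branch coverage also achieves full statement coverage, but not vice versa.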
55. Explain the defect management process in detail with a neat diagram.
The defect management process is the core of software testing. Once the defects have been identified, the
most significant activity for any organization is to manage the flaws, not only for the testing team but also
for everyone involved in the software development or project management process.
The Defect Management Process is the process by which most organizations manage defect discovery, defect removal, and subsequent process improvement.
Various Stages of the Defect Management Process
The defect management process includes several stages, which are as follows:
1. Defect Prevention
2. Deliverable Baseline
3. Defect Discovery
4. Defect Resolution
5. Process Improvement
6. Management Reporting
56. Explain formal technical review and its benefits in detail.
Formal Technical Review (FTR) is a software quality control activity performed by
software engineers.
In addition, the purpose of FTR is to enable junior engineers to observe the analysis,
design, coding, and testing approaches more closely. FTR also serves to promote backup
and continuity, because a number of people become familiar with parts of the software
they might not have otherwise seen. In practice, FTR is a class of reviews that includes
walkthroughs, inspections, round-robin reviews, and other small-group technical
assessments of software. Each FTR is conducted as a meeting and is considered
successful only if it is properly planned, controlled, and attended.
EXAMPLE: Suppose that, without FTR, design costs 10 units, coding costs 15 units, and
testing costs 10 units, giving a total of 35 units before maintenance. If a quality issue
caused by the bad design then forces a redesign, the final cost may double to 70 units.
This is why FTR is so helpful while developing the software: it catches defects before
they become expensive to fix.
57. List quality improvement methodologies and explain any three in detail.
Quality Improvement Methodologies
PDSA: The basic Plan-Do-Study-Act (PDSA) cycle was first developed by Shewhart and then modified
by Deming. It is an effective improvement technique.
The four steps in the cycle are exactly as stated. First, plan carefully what is to be done. Next, carry out
the plan (do it). Third, study the results—did the plan work as intended, or were the results different?
Finally, act on the results by identifying what worked as planned and what didn’t. Using the knowledge
learned, develop an improved plan and repeat the cycle.
Kaizen : Kaizen is a Japanese word for the philosophy that defines management’s role
in continuously encouraging and implementing small improvements involving
everyone. It is the process of continuous improvement in small increments that makes
the process more efficient, effective, under control, and adaptable. Improvements
are usually accomplished at little or no expense, without sophisticated techniques or
expensive equipment. It focuses on simplification by breaking down complex
processes into their sub-processes and then improving them.
Six Sigma : Six Sigma is the process of producing high and improved quality output.
This can be done in two phases: identification and elimination. The causes of defects
are identified and eliminated, which reduces variation across the whole process. A Six
Sigma process is one in which 99.99966% of all products produced are statistically
expected to be free from defects.
2. Kick-Off:
Getting everybody on the same page regarding the document under review is the
main goal of this meeting; the entry and exit criteria are also discussed here. It
is basically an optional step. It also gives the team a better understanding of the
relationship between the document under review and other documents. During
kick-off, distribution of the document under review, source documents, and all
other related documentation can also be done.
3. Preparation:
In the preparation phase, participants work individually on the document under
review with the help of related documents, procedures, rules, and the provided
checklists. Reviewers identify and check for any defects, issues, or errors and
note their comments, which are later combined and recorded with the help of a
logging form. Spelling mistakes are recorded on the document under review but
are not raised during the meeting.
4. Review Meeting:
This phase generally involves three stages: logging, discussion, and decision.
The various tasks related to the document under review are performed here.
5. Rework:
The author improves the document under review based on the defects detected
and the improvements suggested in the review meeting. The document needs to
be reworked if the total number of defects found exceeds an expected level.
Changes made to the document must be easy to identify during follow-up, so
the author needs to indicate where changes were made.
6. Follow-Up:
After rework, the moderator must ensure that satisfactory action has been taken
on all logged defects, improvement suggestions, and change requests. The
moderator makes sure that the author has taken care of all defects. In order to
control, handle, and optimize the review process, the moderator collects a
number of measurements at every step of the process. Examples of
measurements include the total number of defects found, the number of defects
found per page, and the overall review effort.
Defect Life Cycle or Bug Life Cycle in software testing is the specific set of states that
a defect or bug goes through in its entire life. The purpose of the defect life cycle is to
easily coordinate and communicate the current status of a defect as it changes hands
between assignees, and to make the defect-fixing process systematic and efficient.
Defect States
#1) New: This is the first state of a defect in the Defect Life Cycle. When any new
defect is found, it falls in a ‘New’ state, and validations & testing are performed on
this defect in the later stages of the Defect Life Cycle.
#2) Assigned: In this stage, a newly created defect is assigned to the development
team to work on the defect. This is assigned by the project lead or the manager of the
testing team to a developer.
#3) Open/Active: Here, the developer starts the process of analyzing the defect and
works on fixing it, if required.
If the developer feels that the defect is not appropriate, then it may be moved to one of
four states, namely Duplicate, Deferred, Rejected, or Not a Bug, based upon a specific
reason. We will discuss these four states in a while.
#4) Fixed: When the developer finishes the task of fixing a defect by making the
required changes, they can mark the status of the defect as "Fixed".
#5) Pending Retest: After fixing the defect, the developer assigns the defect to the tester
to retest the defect at their end, and until the tester works on retesting the defect, the state
of the defect remains in “Pending Retest”.
#6) Retest: At this point, the tester starts the task of retesting the defect to verify if the
defect is fixed accurately by the developer as per the requirements or not.
#7) Reopen: If any issue persists in the defect, then it will be assigned to the developer
again for testing and the status of the defect gets changed to ‘Reopen’.
#8) Verified: If the tester finds no remaining issue after the defect has been assigned
back for retesting, and is satisfied that the defect has been fixed accurately, then
the status of the defect is set to 'Verified'.
#9) Closed: When the defect does not exist any longer, then the tester changes the status
of the defect to “Closed”.
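The state transitions described above can be sketched as a small state machine (a hypothetical illustration; the state names follow the list above, and the transition table is one plausible reading of it):

```python
# Allowed transitions in a simple defect life cycle.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Duplicate", "Deferred", "Rejected", "Not a Bug"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Reopen", "Verified"},
    "Reopen": {"Assigned"},          # goes back to the developer
    "Verified": {"Closed"},
    "Closed": set(),
}

def move(current, new_state):
    """Change a defect's state, enforcing the life-cycle rules."""
    if new_state not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new_state}")
    return new_state

# Walk one happy path from New to Closed.
state = "New"
for step in ["Assigned", "Open", "Fixed", "Pending Retest",
             "Retest", "Verified", "Closed"]:
    state = move(state, step)
print(state)  # Closed
```

Modeling the life cycle this way makes the coordination purpose explicit: a defect can only reach "Closed" through retesting and verification, never directly from "Fixed".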
Software reliability is also defined as the probability that a software system fulfills its
assigned task in a given environment for a predefined number of input cases, assuming that
the hardware and the input are free of error.
63. What are quality improvement tools? List and explain any two.
1. Pareto Analysis:
Explanation: The Pareto Principle, also known as the 80/20 rule, suggests
that roughly 80% of effects come from 20% of causes. Pareto Analysis helps
identify and prioritize the most significant factors contributing to a problem.
How It Works: Data related to defects, issues, or problems are collected and
categorized. A Pareto chart is created, displaying the frequency or impact of
each category in descending order. This chart helps identify the vital few (the
most significant issues causing the majority of problems) versus the trivial
many.
Example: In software development, if defects are categorized by type (e.g.,
functionality, usability, performance), a Pareto chart can highlight which types
of defects contribute most to overall issues, allowing teams to prioritize efforts
for maximum impact.
2. Root Cause Analysis (RCA):
Explanation: RCA is a problem-solving technique used to identify the
underlying causes of issues or problems rather than just addressing symptoms.
It helps prevent recurrence by tackling the fundamental reasons for problems.
How It Works: RCA involves a systematic approach of investigating and
analyzing problems to determine their root causes. Techniques such as the "5
Whys" (repeatedly asking "why" to trace problems to their origins) or fishbone
diagrams (Ishikawa or cause-and-effect diagrams) are used to map out and
understand cause-and-effect relationships leading to the issue.
Example: If a software application frequently crashes, RCA might involve
identifying multiple potential causes such as coding errors, resource
constraints, or hardware issues. The 5 Whys technique could be employed to
dig deeper into each cause until the core issue causing the crashes is
uncovered.
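The Pareto analysis described in item 1 can be sketched in a few lines (the defect counts are invented for illustration): categories are ranked from largest to smallest and a cumulative percentage identifies the vital few.

```python
from collections import Counter

# Hypothetical defect counts by category.
defects = Counter({"functionality": 120, "usability": 45,
                   "performance": 25, "documentation": 10})

total = sum(defects.values())
cumulative = 0
for category, count in defects.most_common():   # largest to smallest
    cumulative += count
    print(f"{category:14s} {count:4d}  cumulative {100 * cumulative / total:5.1f}%")
```

With these numbers, functionality and usability together account for 82.5% of all defects, so a team following the 80/20 rule would focus its improvement effort there first.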
Figure C shows that as speed increases, gas mileage decreases. Automotive speed is plotted
on the x-axis and is the independent variable. The independent variable is usually
controllable. Gas mileage is on the y-axis and is the dependent, or response,
variable. Other examples of relationships are as follows:
Cutting speed and tool life.
Temperature and lipstick hardness.
Striking pressure and electrical current.
Temperature and percent foam in soft drinks.
Yield and concentration.
Training and errors.
Breakdowns and equipment age.
Accidents and years with the organization.
A cause-and-effect (C&E) diagram relates an effect to its possible causes. The figure
above illustrates a C&E diagram with the effect on the right and causes on the left.
The effect is the quality characteristic that needs improvement. Causes are sometimes
broken down into the major causes of work methods, materials, measurement, people,
equipment, and the environment.
Each major cause is further subdivided into numerous minor causes. For example,
under work methods, we might have training, knowledge, ability, physical
characteristics, and so forth. C&E diagrams are the means of picturing all these major
and minor causes.
Figure below shows a C&E diagram for house paint peeling using four major
causes.
The first step in the construction of a C&E diagram is for the project team to identify the
effect or quality problem. It is placed on the right side of a large piece of paper by the
team leader. Next, the major causes are identified and placed on the diagram. Determining
all the minor causes requires brainstorming by the project team. Brainstorming is an idea
generating technique that is well suited to the C&E diagram. It uses the creative thinking
capacity of the team.
67. Explain run charts.
RUN CHART
A run chart, which is shown in Figure D, is a very simple technique for analyzing the
process in the development stage or, for that matter, when other charting techniques
are not applicable. The important point is to draw a picture of the process and let it
“talk” to you. A picture is worth a thousand words, provided someone is listening.
Plotting the data points is a very effective way of finding out about the process. This
activity should be done as the first step in data analysis. Without a run chart, other data
analysis tools—such as the average, sample standard deviation, and histogram—can
lead to erroneous conclusions.
The particular run chart shown in Figure D is referred to as an X̄ (X-bar) chart and is used to
record the variation in the average value of samples. Other charts, such as the R chart
(range) or p chart (proportion), would have also served for explanation purposes. The
horizontal axis is labeled “Subgroup Number,” which identifies a particular sample
consisting of a fixed number of observations. These subgroups are plotted by order of
production, with the first one inspected being 1 and the last one on this chart being 25.
The vertical axis of the graph is the variable, which in this particular case is weight
measured in kilograms.
Each small solid diamond represents the average value within a subgroup. Thus, subgroup
number 5 consists of, say, four observations, 3.46, 3.49, 3.45, and 3.44, and their average
is 3.46 kg. This value is the one posted on the chart for subgroup number 5. Averages are
used on control charts rather than individual observations because average values will
indicate a change in variation much faster. Also, with two or more observations in a
sample, a measure of the dispersion can be obtained for a particular subgroup.
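The subgroup-average computation described above can be written out directly (the observations for subgroup 5 are taken from the text; the helper function itself is illustrative):

```python
def subgroup_average(observations):
    """Average of one subgroup's observations, as plotted on an X-bar chart."""
    return round(sum(observations) / len(observations), 2)

# Subgroup number 5 from the text: four weight observations in kilograms.
subgroup_5 = [3.46, 3.49, 3.45, 3.44]
print(subgroup_average(subgroup_5))  # 3.46
```

Each such average becomes one plotted point on the X̄ chart, one per subgroup in order of production.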
PFD is a crucial metric within the framework of functional safety, especially in industries
such as automotive, aerospace, process control, and healthcare, where the reliability of safety
systems is paramount to prevent hazardous or dangerous situations.
PFD is typically used in conjunction with Safety Integrity Levels (SILs) as defined by
standards such as IEC 61508 (for general industries) or IEC 61511 (for the process industry).
SILs categorize the safety integrity requirements of safety instrumented systems, with SIL 1
representing the lowest and SIL 4 the highest level of safety integrity.
PFD is calculated as:

PFD = (Total number of dangerous failures) / (Total number of demands)

Where:
Total number of dangerous failures: The number of failures that lead to the loss of the
safety function.
Total number of demands: The total number of times the safety system or function is
expected to perform its safety function within a specific period.
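As a sketch of how PFD relates to SILs (the failure and demand counts are invented; the bands follow the IEC 61508 low-demand-mode PFDavg ranges):

```python
def pfd(dangerous_failures, demands):
    """Probability of Failure on Demand: dangerous failures / demands."""
    return dangerous_failures / demands

# IEC 61508 low-demand-mode PFDavg bands (lower bound inclusive).
SIL_BANDS = [(1e-5, 1e-4, "SIL 4"), (1e-4, 1e-3, "SIL 3"),
             (1e-3, 1e-2, "SIL 2"), (1e-2, 1e-1, "SIL 1")]

def sil_for(pfd_value):
    for low, high, sil in SIL_BANDS:
        if low <= pfd_value < high:
            return sil
    return "outside SIL 1-4 range"

value = pfd(dangerous_failures=2, demands=10_000)  # 0.0002
print(value, sil_for(value))  # 0.0002 SIL 3
```

A lower PFD means a more dependable safety function, which is why SIL 4 (the most demanding level) corresponds to the smallest PFD band.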
1. Prevention Costs:
Definition: Prevention costs are expenses incurred to prevent defects or issues
from occurring in the first place.
Examples: Training programs, quality planning, process improvements,
implementing quality management systems, supplier evaluations, design
reviews, and quality audits.
Purpose: By investing in prevention activities, organizations aim to identify
and eliminate potential issues early in the development cycle, thereby reducing
the likelihood of defects and failures.
2. Appraisal Costs:
Definition: Appraisal costs are expenses associated with evaluating and
assessing the product or service quality to ensure compliance with standards
and requirements.
Examples: Inspection, testing, quality control measures, equipment
calibration, audits, and supplier evaluation.
Purpose: Appraisal costs are incurred to identify defects, errors, or non-
conformities, ensuring that products or services meet specified quality
standards before reaching the customer.
3. Internal Failure Costs:
Definition: Internal failure costs arise from defects or issues discovered
before delivering products or services to the customer, occurring within the
organization's internal processes.
Examples: Rework, scrap, retesting, downtime due to defects, waste,
production delays, and corrective actions for issues found during
manufacturing or service delivery.
Purpose: These costs represent the expenses incurred due to failures or
defects that impact the organization internally, before reaching the customer,
emphasizing the importance of early defect detection and prevention.
4. External Failure Costs:
Definition: External failure costs result from defects or issues identified after
products or services have reached the customer or entered the market.
Examples: Warranty claims, customer complaints, returns or recalls, product
replacements, legal costs, reputation damage, and lost sales opportunities due
to poor quality.
Purpose: These costs reflect the impact of poor quality on the organization's
reputation, customer satisfaction, and financial losses incurred due to defects
discovered by customers.
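The four categories above can be summed into a cost-of-quality breakdown (the figures below are hypothetical, purely for illustration):

```python
# Hypothetical quarterly quality costs, grouped by COQ category.
costs = {
    "prevention": 12_000,         # training, quality planning
    "appraisal": 18_000,          # inspection, testing
    "internal_failure": 25_000,   # rework, scrap, retesting
    "external_failure": 45_000,   # warranty claims, returns
}

total = sum(costs.values())
for category, amount in costs.items():
    print(f"{category:17s} {amount:7,d}  ({100 * amount / total:.0f}%)")
print(f"total cost of quality: {total:,d}")
```

A breakdown like this makes the usual argument visible: when failure costs dominate (70% here), shifting spend toward prevention and appraisal typically lowers the total cost of quality.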
78. Explain the following: a) ISO b) ISO 9000 c) ISO 9000 series
a) ISO:
ISO (the International Organization for Standardization) is an independent international
standards body that develops and publishes standards across industries, including the
ISO 9000 family for quality management.
b) ISO 9000:
ISO 9000 is the standard within the family that describes the fundamentals of quality
management systems (QMS) and defines the associated vocabulary.
c) ISO 9000 series:
The ISO 9000 series encompasses a set of standards within the ISO 9000 family that
collectively address various aspects of quality management systems. This series
includes ISO 9001, ISO 9004, ISO 9000, and other related standards that provide
guidance on quality management principles, requirements, guidelines for performance
improvement, and terminology.
ISO 9001 is the most well-known and widely used standard within the ISO 9000
series. It specifies the requirements for a QMS that organizations can use to
demonstrate their ability to consistently provide products and services that meet
customer and regulatory requirements.
79. What is the measure of reliability and availability? Explain.
Explanation:
A simple measure of reliability is the mean time between failures (MTBF), where
MTBF = MTTF + MTTR (mean time to failure plus mean time to repair). Availability
is the probability that a program is operating according to requirements at a given
point in time, and a common measure is:
Availability = [MTTF / (MTTF + MTTR)] × 100%
82. What are the elements of software reliability? State factors affecting it.
Software reliability encompasses various elements that contribute to the dependable and
consistent performance of software systems. Some key elements or factors influencing
software reliability include:
1. Correctness: The extent to which the software performs its intended functions
accurately and without errors.
2. Robustness: The software's ability to withstand unexpected inputs, error conditions,
or adverse situations without failing or crashing.
3. Fault Tolerance: The software's capability to continue operating or recover
gracefully from failures, ensuring minimal impact on the system and users.
4. Availability: The proportion of time the software is operational and accessible for use
when required, considering downtime due to failures, maintenance, or updates.
5. MTBF (Mean Time Between Failures): The average time interval between two
consecutive software failures during operation.
6. MTTF (Mean Time To Failure): The average time expected until a software
component or system experiences its first failure.
7. MTTR (Mean Time To Repair/Recovery): The average time taken to repair or
restore the software after a failure.
8. Reliability Growth: The process of improving software reliability over time through
defect identification, fixing, and system enhancements.
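The timing metrics above can be computed from an operating history of uptime/repair intervals (a hypothetical sketch; the interval data are invented):

```python
# Hypothetical operating history: (hours of uptime before each failure,
# hours of repair after it).
history = [(100.0, 2.0), (150.0, 4.0), (110.0, 3.0)]

uptimes = [up for up, _ in history]
repairs = [rep for _, rep in history]

mttf = sum(uptimes) / len(uptimes)    # mean time to failure
mttr = sum(repairs) / len(repairs)    # mean time to repair
mtbf = mttf + mttr                    # mean time between failures
availability = mttf / (mttf + mttr)   # fraction of time operational

print(f"MTTF={mttf:.1f}h MTTR={mttr:.1f}h MTBF={mtbf:.1f}h "
      f"availability={availability:.1%}")
```

With these numbers, MTTF is 120 hours, MTTR is 3 hours, and availability is about 97.6%: reducing repair time raises availability even if the failure rate itself is unchanged.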
85. Discuss how reliability changes over the lifetime of a software product and a hardware
product.
86. Explain test case template. Design test case for login page.
87. Explain top-down integration testing.
88. Explain bottom-up integration testing.
89. What are the various approaches of integration testing and the challenges
90. Discuss types of software quality factors.
91. Explain the concept of quality.