
STQA

Unit No: I
1. What is software testing? Discuss the need of software testing.
Software testing is a method of assessing the functionality of a software program. The process checks
whether the actual software matches the expected requirements and ensures that the software is free of
defects. The purpose of software testing is to identify errors, faults, or missing requirements by comparing
the built software against the actual requirements. It mainly aims at evaluating the specification,
functionality, and performance of a software program or application.

The following are important reasons why software testing techniques should be incorporated into
application development:
 Identifies defects early. Developing complex applications can leave room for errors. Software testing is
imperative, as it identifies any issues and defects with the written code so they can be fixed before the
software product is delivered.
 Improves product quality. When it comes to customer appeal, delivering a quality product is an important
factor to consider. An exceptional product can only be delivered if it is tested effectively before launch.
Software testing helps the product pass quality assurance (QA) and meet the criteria and specifications
defined by the users.
 Increases customer trust and satisfaction. Testing a product throughout its development lifecycle builds
customer trust and satisfaction, as it provides visibility into the product's strong and weak points. By the
time customers receive the product, it has been tried and tested multiple times and delivers on quality.
 Detects security vulnerabilities. Insecure application code can leave vulnerabilities that attackers can
exploit. Since most applications are online today, they can be a leading vector for cyber attacks and should
be tested thoroughly during various stages of application development. For example, a web application
published without proper software testing can easily fall victim to a cross-site scripting attack where the
attackers try to inject malicious code into the user's web browser by gaining access through the vulnerable
web application. The untested application thus becomes the vehicle for delivering the malicious code,
which could have been prevented with proper software testing.
 Helps with scalability. A type of nonfunctional software testing process, scalability testing is done to
gauge how well an application scales with increasing workloads, such as user traffic, data volume and
transaction counts. It can also identify the point where an application might stop functioning and the
reasons behind it, which may include meeting or exceeding a certain threshold, such as the total number of
concurrent app users.
 Saves money. Software development issues that go unnoticed due to a lack of software testing can haunt
organizations later with a bigger price tag. After the application launches, it can be more difficult to trace
and resolve the issues, as software patching is generally more expensive than testing during the
development stages.

2. What is quality? Discuss various quality factors.


The American Heritage Dictionary defines quality as “a characteristic or attribute of something.” As an
attribute of an item, quality refers to measurable characteristics— things we are able to compare to known
standards such as length, color, electrical properties, and malleability. However, software, largely an
intellectual entity, is more challenging to characterize than physical objects. Nevertheless, measures of a
program’s characteristics do exist. These properties include cyclomatic complexity, cohesion, number of
function points, lines of code, and many others. When we examine an item based on its measurable
characteristics, two kinds of quality may be encountered: quality of design and quality of conformance.
Software quality factors
According to McCall’s model, product operation category includes five software quality factors, which deal
with the requirements that directly affect the daily operation of the software. They are as follows −
Correctness
These requirements deal with the correctness of the output of the software system. They include −
 Output mission
 The required accuracy of output that can be negatively affected by inaccurate data or inaccurate
calculations.
 The completeness of the output information, which can be affected by incomplete data.
 The up-to-dateness of the information, defined as the time between the event and the software system's response.
 The availability of the information.
 The standards for coding and documenting the software system.

Reliability
Reliability requirements deal with service failure. They determine the maximum allowed failure rate of the
software system, and can refer to the entire system or to one or more of its separate functions.
Efficiency
It deals with the hardware resources needed to perform the different functions of the software system. It
includes processing capabilities (given in MHz), storage capacity (given in MB or GB), and data
communication capability (given in Mbps or Gbps). It also deals with the time between recharging of the
system's portable units, such as information system units located in portable computers or meteorological
units placed outdoors.
Integrity
This factor deals with software system security, that is, preventing access by unauthorized persons and
distinguishing between the groups of people to be given read permission and those to be given write permission.
Usability- Usability requirements deal with the staff resources needed to train a new employee and to operate
the software system.
3. Elaborate the difference between QA and QC in detail.

Quality Assurance (QA) vs. Quality Control (QC):

1. QA focuses on providing assurance that the quality requested will be achieved. QC focuses on fulfilling the quality requested.
2. QA is the technique of managing quality. QC is the technique of verifying quality.
3. QA is involved during the development phase. QC is not involved during the development phase.
4. QA does not include execution of the program. QC always includes execution of the program.
5. QA is a managerial tool. QC is a corrective tool.
6. QA is process oriented. QC is product oriented.
7. The aim of QA is to prevent defects. The aim of QC is to identify and fix defects.
8. QA is a preventive technique. QC is a corrective technique.
9. QA is a proactive measure. QC is a reactive measure.
10. QA is responsible for the entire software development life cycle. QC is responsible for the software testing life cycle.
11. QA's main focus is on the intermediate processes. QC's primary focus is on the final product.
12. In QA, all team members of the project are involved. In QC, generally only the testing team of the project is involved.
13. QA aims to prevent defects in the system. QC aims to identify defects or bugs in the system.
14. QA is a less time-consuming activity. QC is a more time-consuming activity.
15. Statistical Process Control (SPC) is the statistical technique applied in QA. Statistical Quality Control (SQC) is the statistical technique applied in QC.
16. Example of QA: verification. Example of QC: validation.

4. Discuss the quality control process.


Quality Control is a software engineering process that is used to ensure that the approaches, techniques,
methods, and processes designed for the project are followed correctly. Quality control activities check
and verify that the application meets the defined quality standards.
 It focuses on an examination of the quality of the end products and the final outcome rather than focusing
on the processes used to create a product.
 It is a reactive process and is detection-oriented in nature.
 These activities monitor and verify that the project deliverables meet the defined quality standards.

5. Illustrate the concept of software quality assurance.


Software quality assurance (SQA) is an umbrella activity that is applied throughout the software process.
SQA encompasses
1. a quality management approach,
2. effective software engineering technology (methods and tools),
3. formal technical reviews that are applied throughout the software process,
4. a multitiered testing strategy,
5. control of software documentation and the changes made to it,
6. a procedure to ensure compliance with software development standards, and
7. measurement and reporting mechanisms
Software quality assurance is a "planned and systematic pattern of actions" [SCH98] that are required to
ensure high quality in software. The implication for software is that many different constituencies have
software quality assurance responsibility—software engineers, project managers, customers, salespeople,
and the individuals who serve within an SQA group.
6. What are Software quality factors? Explain their impact on testing.
According to McCall’s model, product operation category includes five software quality factors, which deal
with the requirements that directly affect the daily operation of the software. They are as follows −
Correctness
These requirements deal with the correctness of the output of the software system. They include −
 Output mission
 The required accuracy of output that can be negatively affected by inaccurate data or inaccurate
calculations.
 The completeness of the output information, which can be affected by incomplete data.
 The up-to-dateness of the information, defined as the time between the event and the software system's response.
 The availability of the information.
 The standards for coding and documenting the software system.

Reliability
Reliability requirements deal with service failure. They determine the maximum allowed failure rate of the
software system, and can refer to the entire system or to one or more of its separate functions.
Efficiency
It deals with the hardware resources needed to perform the different functions of the software system. It
includes processing capabilities (given in MHz), storage capacity (given in MB or GB), and data
communication capability (given in Mbps or Gbps). It also deals with the time between recharging of the
system's portable units, such as information system units located in portable computers or meteorological
units placed outdoors.
Integrity
This factor deals with software system security, that is, preventing access by unauthorized persons and
distinguishing between the groups of people to be given read permission and those to be given write permission.
Usability- Usability requirements deal with the staff resources needed to train a new employee and to operate
the software system.

7. Discuss the Role of testing in each phase of software development life cycle.
1. Requirements Gathering and Analysis:
 Testing begins by validating requirements for consistency, clarity, and feasibility.
 Testers collaborate with stakeholders to ensure understanding and identify potential ambiguities or
conflicts in requirements.
2. System Design:
 During this phase, testing involves reviewing system design documents and architecture diagrams
to ensure they align with requirements.
 Testers prepare test scenarios and design test cases based on the system design.
3. Implementation (Coding):
 Unit testing is performed at this stage by developers to validate individual components or modules.
 Code review and static analysis tools may be utilized to detect defects early.
4. Integration and Testing:
 Integration testing verifies the interactions and interfaces between integrated components or
modules.
 Testers conduct functional and non-functional testing to ensure the software behaves as expected.
5. System Testing:
 This phase involves testing the entire system as a whole to ensure it meets specified requirements.
 Testers perform regression testing, user acceptance testing (UAT), performance testing, security
testing, etc.
6. Deployment and Maintenance:
 Post-deployment, testing continues with maintenance activities to identify and fix defects reported
by users or identified through monitoring.
 Regression testing is performed to ensure changes or fixes do not introduce new issues.

8. What is quality assurance? Write down the purpose of the quality assurance.
Quality assurance consists of the auditing and reporting functions of management. The goal of quality
assurance is to provide management with the data necessary to be informed about product quality, thereby
gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided
through quality assurance identify problems, it is management's responsibility to address the problems and
apply the necessary resources to resolve quality issues.
The purpose of Quality Assurance includes:
1. Process Compliance: QA ensures that established processes, methodologies, and standards are followed
consistently throughout the development lifecycle. It involves defining, implementing, and maintaining
processes to meet quality goals.
2. Quality Standards Adherence: QA establishes and maintains quality standards, guidelines, and
procedures. It aims to ensure that these standards are met and continuously improved upon.
3. Defect Prevention: QA focuses on identifying and rectifying potential issues early in the development
process to prevent defects from occurring in the final product. This involves risk assessment, reviews, and
proactive measures.
4. Customer Satisfaction: By maintaining quality standards and meeting customer requirements, QA
contributes to higher customer satisfaction and confidence in the product or service.
5. Continuous Improvement: QA involves a continuous feedback loop for improvement. It aims to identify
areas for enhancement, refine processes, and incorporate lessons learned from previous projects to enhance
overall quality in subsequent iterations.
6. Metrics and Analysis: QA often involves the collection of metrics and data analysis to assess the
effectiveness of processes and identify areas that need improvement. These metrics help in making
informed decisions for quality enhancement.

9. Differentiate between verification and validation.

Verification vs. Validation:

1. Verification includes checking documents, design, code, and program. Validation is a dynamic mechanism of testing and validating the actual product.
2. Verification does not involve executing the code. Validation always involves executing the code.
3. Verification uses methods like reviews, walkthroughs, inspections, and desk-checking. Validation uses methods like black box testing, white box testing, and non-functional testing.
4. Verification checks whether the software conforms to the specification. Validation checks whether the software meets the requirements and expectations of the customer.
5. Verification finds bugs early in the development cycle. Validation can find bugs that the verification process cannot catch.
6. The target of verification is the application and software architecture, specification, complete design, high-level design, database design, etc. The target of validation is the actual product.
7. The QA team does verification and makes sure that the software meets the requirements in the SRS document. Validation is executed on the software code with the involvement of the testing team.
8. Verification comes before validation. Validation comes after verification.

10. What is software review? List different types of it and explain.


Software Review is a systematic inspection of software by one or more individuals who work together to
find and resolve errors and defects in the software during the early stages of Software Development
Life Cycle (SDLC). Software review is an essential part of Software Development Life Cycle (SDLC)
that helps software engineers in validating the quality, functionality and other vital features and
components of the software. It is a whole process that includes testing the software product and it
makes sure that it meets the requirements stated by the client.
Types of Software Reviews:
There are mainly 3 types of software reviews:
1. Software Peer Review:
Peer review is the process of assessing the technical content and quality of the product and it is usually
conducted by the author of the work product along with some other developers.
Peer review is performed in order to examine or resolve the defects in the software, whose quality is
also checked by other members of the team.
Peer Review has following types:
a) Code Review:
Computer source code is examined in a systematic way.
b) Pair Programming:
It is a form of code review in which two developers write code together at the same workstation.
c) Walkthrough:
Members of the development team and other interested parties are guided through the product by the
author, and the participants ask questions and make comments about possible defects.
d) Technical Review:
A team of highly qualified individuals examines the software product for its client’s use and
identifies technical defects from specifications and standards.
e) Inspection:
In inspection the reviewers follow a well-defined process to find defects.
2. Software Management Review:
Software Management Review evaluates the work status. In this review, decisions regarding
downstream activities are taken.
3. Software Audit Review:
Software Audit Review is a type of external review in which one or more critics, who are not a part of
the development team, organize an independent inspection of the software product and its processes to
assess their compliance with stated specifications and standards. This is done by managerial level
people.

11. Differentiate between Inspection and walkthrough

Inspection vs. Walkthrough:

1. Inspection is formal. A walkthrough is informal.
2. Inspection is initiated by the project team. A walkthrough is initiated by the author.
3. In an inspection, a group of relevant persons from different departments participates. In a walkthrough, usually team members of the same project participate, and the author himself acts as the walkthrough leader.
4. In an inspection, a checklist is used to find faults. No checklist is used in a walkthrough.
5. The inspection process includes overview, preparation, the inspection meeting, rework, and follow-up. The walkthrough process includes an overview, little or no preparation, the examination itself (the actual walkthrough meeting), and rework and follow-up.
6. An inspection follows a formalized procedure in each step. A walkthrough has no formalized procedure in its steps.
7. An inspection takes a longer time, as the list of items in the checklist is tracked to completion. A shorter time is spent on a walkthrough, as no formal checklist is used to evaluate the program.
8. An inspection is a planned meeting with fixed roles assigned to all the members involved. A walkthrough is unplanned.
9. In an inspection, a reader reads the product code, and everyone inspects it and comes up with defects. In a walkthrough, the author reads the product code, and his teammates come up with defects or suggestions.
10. In an inspection, a recorder records the defects. In a walkthrough, the author makes a note of the defects and suggestions offered by teammates.
11. An inspection has a moderator who makes sure that the discussions proceed on productive lines. A walkthrough is informal, so there is no moderator.

12. What is the role of the software quality assurance (SQA) group?
1. Defining Standards and Procedures: SQA establishes and defines quality standards, guidelines,
methodologies, and best practices to be followed throughout the development process. This includes
creating documentation outlining these standards.
2. Process Improvement: SQA continually evaluates and improves development processes to enhance
efficiency and ensure higher-quality outcomes. They identify bottlenecks, inefficiencies, and areas for
improvement within the development lifecycle.
3. Quality Planning: SQA plans and strategizes quality assurance activities for each phase of the SDLC. This
involves outlining test strategies, defining testing environments, and establishing metrics to measure quality.
4. Quality Control: SQA conducts various types of testing (functional, non-functional, performance, security,
etc.) to verify that the software conforms to defined standards and meets user requirements.
5. Risk Management: SQA identifies and assesses risks associated with the software development process.
They implement risk mitigation strategies to address potential issues that could affect the quality or delivery
of the software.
6. Audits and Reviews: SQA performs regular audits and reviews of development processes, documentation,
code, and test results to ensure compliance with standards and to identify areas for improvement.
7. Training and Guidance: SQA provides training and guidance to project teams and stakeholders on quality
standards, processes, and tools to maintain consistency and adherence to quality practices.
8. Documentation and Reporting: SQA maintains comprehensive documentation of processes, test plans,
test cases, defects, and reports on the quality status of the software to stakeholders and management.
9. Customer Focus: SQA ensures that the end product meets customer expectations by validating that
requirements are met, and user needs are addressed effectively.
10. Continuous Improvement: SQA fosters a culture of continuous improvement, learning from past
experiences and implementing lessons learned to enhance quality in future projects.

13. Explain in details McCall’s Quality factor.

McCall's Quality Factors, developed by Jim McCall and his colleagues in 1977, form a model used to evaluate
software quality. It identifies 11 key factors that contribute to the quality of software. These factors cover
various aspects of software and help in assessing, planning, and improving software development processes.
McCall's quality factors are categorized into three main groups: product operation, product revision, and
product transition.
1. Product Operation Factors:
 Correctness: This factor assesses the degree to which the software meets its specified requirements
and performs its intended functions accurately.
 Reliability: Reliability refers to the ability of the software to maintain its performance under
specific conditions for a specific period. It evaluates how often the software fails and its ability to
recover from failures.
 Efficiency: Efficiency measures the software's performance concerning system resources, such as
CPU usage, memory, and response time, while accomplishing its tasks.
 Integrity: Integrity evaluates the security and protection of the software against unauthorized access
and alterations, ensuring the data remains accurate and secure.
 Usability: Usability refers to how easily and effectively users can interact with and use the software.
It assesses user-friendliness, interface design, and user acceptance.
2. Product Revision Factors:
 Maintainability: Maintainability evaluates how easy it is to modify, update, and fix issues within
the software. It includes factors like modularity, code readability, and documentation.
 Flexibility: Flexibility measures the software's capability to accommodate future changes or
modifications in its functionality or environment without extensive modifications.
 Testability: Testability assesses how easily the software can be tested to ensure that it meets its
specifications and requirements.
3. Product Transition Factors:
 Portability: Portability assesses the software's ability to be transferred from one environment to
another, allowing it to operate in different configurations and platforms.
 Reusability: Reusability measures the extent to which software components or modules can be
reused in other applications or contexts, reducing development time and effort.
 Interoperability: Interoperability evaluates the software's ability to operate and communicate with
other systems or software, ensuring seamless integration and data exchange.

14. Write in brief about QA, QC and QM


1. Quality Assurance (QA):
 Definition: Quality Assurance refers to a systematic and planned set of activities focused on
ensuring that the processes, methods, and standards are implemented effectively to produce quality
products or services.
 Purpose: QA emphasizes preventing defects by implementing procedures, processes, and
guidelines. It ensures that the development team follows established standards, methodologies, and
best practices throughout the software development lifecycle (SDLC). QA aims for continuous
improvement in processes to enhance the overall quality of the end product.
2. Quality Control (QC):
 Definition: Quality Control involves a set of activities and techniques used to identify defects and
deviations in the product or service being developed. It involves inspecting, testing, and analyzing
the product to ensure it meets specified quality standards.
 Purpose: QC focuses on detecting and rectifying defects in the final product. It involves various
testing methodologies and techniques to verify that the software meets the predefined quality criteria
and user requirements. QC ensures that the output complies with established standards before its
release.
3. Quality Management (QM):
 Definition: Quality Management refers to the overall management approach that encompasses QA
and QC activities. It involves planning, controlling, and improving processes to ensure that the
organization consistently delivers products or services that meet or exceed customer expectations.
 Purpose: QM integrates QA and QC methodologies within an organization. It involves setting
quality objectives, defining processes, measuring performance, and implementing corrective actions.
QM aims for a systematic approach to quality that encompasses the entire organization, not just
specific projects or departments.

15. What are the various nature of error?


Common Categories of Software Errors:
#1) Functionality Errors: Functionality is the way the software is intended to behave. Software has a functionality
error if something that you expect it to do is hard, awkward, confusing, or impossible. For example, the expected
functionality of a Cancel button in a 'Create new project' window is that the window should close and none of the
changes should be saved (i.e., no new project is created). If the Cancel button is not clickable, then it is a
functionality error.
#2) Communication Errors: These errors occur in communication from the software to the end-user. Anything that
the end user needs to know in order to use the software should be made available on screen. A few examples of
communication errors are: no Help instructions/menu provided, features that are part of the release but are not
documented in the help menu, a button named 'Save' that erases a file, etc.
#3) Missing command errors: These occur when an expected command is missing. For example, suppose the
'Create new project' window gives the user no option to exit without creating a project. Since a 'Cancel'
option/button is not provided to the user, this is a missing command error.
#4) Syntactic Error: Syntactic errors are misspelled words or grammatically incorrect sentences and are very
evident while testing software GUI. Please note that we are NOT referring to syntax errors in code. The
compiler will warn the developer about any syntax errors that occur in the code
#5) Error handling errors: Any error that occurs while the user is interacting with the software needs to be
handled in a clear and meaningful manner; if it is not, it is called an error handling error. For example, a generic
error message that gives no indication of what the error actually is (a missing mandatory field, a saving error, a
page loading error, or a system error) is an error handling error.
#6) Calculation Errors: These errors occur due to any of the following reasons:
• Bad logic
• Incorrect formulae
• Data type mismatch
• Coding errors
• Function call issues, etc.
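
A small illustration of the first two causes above: in the hypothetical (and deliberately buggy) average function below, integer division silently drops the fractional part of the result, producing a calculation error. The function names and values are assumptions chosen only for this sketch.

# Deliberately buggy sketch: a calculation error caused by bad logic / a data type mismatch.
def average(values):
    return sum(values) // len(values)   # integer division silently drops the fraction

print(average([1, 2, 2]))   # prints 1, but the expected result is 1.666...

# Corrected version using true division:
def average_fixed(values):
    return sum(values) / len(values)

print(average_fixed([1, 2, 2]))  # 1.6666666666666667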
16. Write a short note on SQA plan.
A Software Quality Assurance (SQA) Plan is a comprehensive document outlining the approach, methodologies,
processes, and resources that will be employed to ensure the quality of a software product throughout its
development lifecycle. It serves as a roadmap guiding the quality assurance activities within a project or
organization.
Key components and aspects of an SQA plan include:
1. Objectives and Scope:
 Defines the overall objectives and goals of the SQA activities.
 Outlines the scope of the plan, specifying the phases or stages of the software development lifecycle
covered.
2. Roles and Responsibilities:
 Identifies the roles and responsibilities of individuals or teams involved in SQA activities, such as
QA managers, testers, developers, etc.
 Clarifies their duties and contributions to ensuring quality.
3. Quality Standards and Processes:
 Defines the quality standards, methodologies, best practices, and industry standards that will be
followed throughout the project.
 Describes the processes and procedures for quality assurance, including testing, reviews, audits, and
compliance checks.
4. Resource Allocation:
 Specifies the resources required for SQA activities, including human resources, tools, infrastructure,
and budgets.
 Allocates resources to different phases of the project as per the SQA requirements.
5. Testing and Review Strategies:
 Outlines the testing strategies, test plans, and review processes to be employed during different
stages of software development.
 Describes the types of testing (functional, non-functional, regression, etc.) and reviews (code
reviews, design reviews, etc.) to be conducted.
6. Metrics and Reporting:
 Defines key performance indicators (KPIs) and metrics to measure the effectiveness of SQA
activities.
 Specifies reporting mechanisms and frequency, detailing how and when quality-related information
will be communicated to stakeholders.
7. Risk Management:
 Identifies potential risks that could impact software quality and outlines risk mitigation strategies.
 Describes contingency plans and procedures for managing unexpected issues that may arise during
the project.
8. Change Management:
 Describes the process for handling changes in requirements, scope, or methodologies concerning
quality assurance.
 Specifies the procedures for documenting and implementing changes while maintaining quality
standards.

17. Explain different phases of SDLC

Stage 1: Planning and Requirement Analysis Requirement analysis is the most important and fundamental stage
in SDLC. It is performed by the senior members of the team with inputs from the customer, the sales
department, market surveys and domain experts in the industry. This information is then used to plan the basic
project approach and to conduct product feasibility study in the economical, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks associated with the project is also
done in the planning stage. The outcome of the technical feasibility study is to define the various technical
approaches that can be followed to implement the project successfully with minimum risks.

Stage 2: Defining Requirements Once the requirement analysis is done the next step is to clearly define and
document the product requirements and get them approved from the customer or the market analysts. This is
done through an SRS (Software Requirement Specification) document which consists of all the product
requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture SRS is the reference for product architects to come out with the
best architecture for the product to be developed. Based on the requirements specified in SRS, usually more
than one design approach for the product architecture is proposed and documented in a DDS - Design
Document Specification. This DDS is reviewed by all the important stakeholders and based on various
parameters such as risk assessment, product robustness, design modularity, budget and time constraints, the best
design approach is selected for the product. A design approach clearly defines all the architectural modules of
the product along with its communication and data flow representation with the external and third party
modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined
with the minutest of the details in DDS.

Stage 4: Building or Developing the Product In this stage of SDLC the actual development starts and the
product is built. The programming code is generated as per DDS during this stage. If the design is performed in
a detailed and organized manner, code generation can be accomplished without much hassle. Developers must
follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters,
debuggers, etc. are used to generate the code. Different high level programming languages such as C, C++,
Pascal, Java and PHP are used for coding. The programming language is chosen with respect to the type of
software being developed.

Stage 5: Testing the Product This stage is usually a subset of all the stages, since in modern SDLC models the
testing activities are mostly involved in all stages of the SDLC. However, this stage refers to the testing-only
stage of the product where product defects are reported, tracked, fixed and retested, until the product reaches the
quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance Once the product is tested and ready to be deployed it is
released formally in the appropriate market. Sometimes product deployment happens in stages as per the
business strategy of that organization. The product may first be released in a limited segment and tested in the
real business environment (UAT- User acceptance testing). Then based on the feedback, the product may be
released as it is or with suggested enhancements in the targeted market segment. After the product is released
in the market, its maintenance is done for the existing customer base.
18. Explain any five desirable software qualities.
1. Reliability:
 Reliability refers to the software's ability to perform consistently and predictably under various
conditions without failure. A reliable software system delivers accurate results, operates as expected,
and maintains its performance over time. It should be robust enough to handle unexpected inputs or
conditions without crashing or causing errors.
2. Maintainability:
 Maintainability is the ease with which software can be modified, updated, or enhanced. A highly
maintainable system is structured in a way that allows developers to make changes or fix issues
efficiently without causing unintended side effects. This quality involves good code organization,
documentation, and adherence to coding standards.
3. Usability:
 Usability focuses on how easily and effectively users can interact with the software to accomplish
their tasks. A user-friendly interface, intuitive design, clear navigation, and responsiveness
contribute to a highly usable software product. Usability ensures that users can operate the software
efficiently and with minimal training or assistance.
4. Scalability:
 Scalability refers to the software's capability to handle increased workload or accommodate growth
without a significant impact on performance or functionality. A scalable system can adapt to
increased demands by adding resources or expanding its capacity, ensuring it remains efficient and
responsive as user numbers or data volume grows.
5. Security:
 Security is crucial for protecting the software from unauthorized access, data breaches, and
malicious attacks. A secure software system implements robust measures to safeguard sensitive
information, prevent vulnerabilities, and ensure compliance with security standards. It includes
encryption, authentication, access control, and regular security updates.

19. Write a short note on V-V model of software testing.


V-Model Design:
In the V-Model, each development phase on the left arm of the 'V' has a corresponding testing phase on the
right arm; for example, unit testing verifies the detailed design, while acceptance testing validates the
requirements.
1. Requirements Gathering and Analysis: The first phase of the V-Model is the requirements
gathering and analysis phase, where the customer's requirements for the software are gathered and
analyzed to determine the scope of the project.
2. Design: In the design phase, the software architecture and design are developed, including the high-
level design and detailed design.
3. Implementation: In the implementation phase, the software is actually built based on the design.
4. Testing: In the testing phase, the software is tested to ensure that it meets the customer’s requirements
and is of high quality.
5. Deployment: In the deployment phase, the software is deployed and put into use.
6. Maintenance: In the maintenance phase, the software is maintained to ensure that it continues to meet
the customer’s needs and expectations.
The V-Model is often used in safety-critical systems, such as aerospace and defence systems, because of
its emphasis on thorough testing and its ability to clearly define the steps involved in the software
development process.
20. Define quality and explain software quality attributes.
Quality can be defined as the degree to which a product or service meets specific requirements, standards, or
customer expectations. It refers to the characteristics or features of a product or service that fulfill the intended
purpose and satisfy the needs of the user. In the context of software, quality is the measure of how well a
software product meets its defined requirements and performs reliably while meeting user expectations.

Some of the key software quality attributes include:


1. Functionality:
 Functionality refers to the extent to which the software satisfies its specified requirements and
performs the intended tasks accurately. It includes features, capabilities, and suitability for the user's
needs.
2. Reliability:
 Reliability reflects the software's ability to perform consistently and accurately under specified
conditions without failure. A reliable software system should deliver accurate results and maintain
its performance over time.
3. Usability:
 Usability focuses on how easily and effectively users can interact with the software. It involves
aspects like user interface design, intuitiveness, ease of learning, and efficiency of use.
4. Efficiency:
 Efficiency relates to the software's ability to utilize system resources (such as memory, CPU, and
network) effectively while accomplishing its tasks. It includes factors like response time,
throughput, and resource consumption.
5. Maintainability:
 Maintainability represents the ease with which software can be modified, updated, or extended. It
includes factors like code readability, modularity, documentation, and ease of debugging.
6. Portability:
 Portability measures the software's ability to operate and be transferred from one environment or
platform to another without requiring extensive modifications. It involves adaptability to different
operating systems, hardware, or configurations.
7. Security:
 Security refers to the protection of software against unauthorized access, data breaches, and
malicious attacks. A secure software system implements measures to safeguard data, prevent
vulnerabilities, and ensure confidentiality, integrity, and availability.

21. Define the terms: error, fault and failure.


Error is a situation that happens when the Development team or the developer fails to understand a requirement
definition and hence that misunderstanding gets translated into buggy code. This situation is referred to as an
Error and is mainly a term coined by the developers.
 Errors are generated due to wrong logic, syntax, or loops and can impact the end-user experience.
 An error is identified by comparing the expected results with the actual results.
 It arises for several reasons, such as design issues, coding issues, or system specification issues, and
leads to problems in the application.
Sometimes, due to factors such as a lack of resources or not following proper steps, a fault occurs in software,
which means that the logic needed to handle errors was not incorporated in the application. This is an undesirable
situation, but it mainly happens due to invalid documented steps or a lack of data definitions.
 It is an unintended behavior by an application program.
 It causes a warning in the program.
 If a fault is left untreated it may lead to failure in the working of the deployed code.
 A minor fault may, in some cases, lead to a serious error.
 There are several ways to prevent faults like adopting programming techniques, development
methodologies, peer review, and code analysis.
Failure is the accumulation of several defects that ultimately leads to software failure, resulting in the loss of
information in critical modules and making the system unresponsive. Generally, such situations happen very
rarely because before releasing a product all possible scenarios and test cases for the code are simulated. Failure
is detected by end-users once they face a particular issue in the software.
 Failure can happen due to human errors or can also be caused intentionally in the system by an
individual.
 It is a term that comes after the production stage of the software.
 It can be identified in the application when the defective part is executed.
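
The chain from error to fault to failure can be seen in a few lines of code. In the hypothetical sketch below (the function and the age rule are assumptions chosen for illustration), the developer's misunderstanding of a requirement (the error) is written into the code as a defect (the fault), which produces an observable failure only when the faulty path executes:

# Fault: the developer misunderstood the requirement (an error) and wrote '>' where
# the specification required '>=', leaving a defect in the code.
def is_adult(age):
    return age > 18          # should be: age >= 18

print(is_adult(18))  # False: a failure, since an 18-year-old should be classified as adult
print(is_adult(30))  # True: no failure; the fault stays hidden on this input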

22. State the objective of testing.


The key objectives of testing include:
1. Finding Defects: One of the fundamental goals of testing is to identify defects, errors, or bugs within the
software. This process involves executing test cases to uncover discrepancies between expected and actual
results. Early detection and rectification of defects help in enhancing the software's quality.
2. Validation and Verification: Testing verifies that the software meets specified requirements (validation)
and conforms to its design and functional specifications (verification). It ensures that the software does what
it is supposed to do and meets user expectations.
3. Aiding Decision-Making: Test results provide valuable insights into the software's readiness for release.
They assist stakeholders in making informed decisions regarding the software's quality, risks, and whether it
is suitable for deployment or needs further refinement.
4. Ensuring Reliability and Stability: Testing aims to establish the reliability and stability of the software by
ensuring its consistent performance under varying conditions. It includes assessing the software's behavior,
performance, and robustness.
5. Enhancing User Satisfaction: Testing helps in delivering a product that aligns with user needs,
preferences, and expectations. By identifying and addressing issues before release, it contributes to a more
user-friendly and reliable software experience.
6. Cost-Efficiency: Detecting and addressing defects earlier in the development cycle is more cost-effective
than fixing them after the software is deployed. Effective testing helps in reducing overall project costs by
identifying issues in their early stages.
7. Compliance and Standards Adherence: Testing ensures that the software complies with industry
standards, regulatory requirements, and any specific guidelines set forth for the software application.
8. Continuous Improvement: Testing contributes to ongoing improvement by providing feedback on
development processes and identifying areas for enhancement. It helps in refining the software development
lifecycle and practices for future projects.
Unit No: II
23. What is White Box testing and Black Box testing?
1. White Box Testing:
 Definition: White Box Testing, also known as Clear Box Testing or Structural Testing, is a testing
technique that examines the internal structure, logic, and code implementation of the software being
tested.
 Approach: Testers with knowledge of the internal code structure create test cases based on the
understanding of the software's internal workings. This involves testing individual code segments,
branches, paths, and statements to validate the software's behavior.
 Objectives: It aims to ensure that all code paths are tested, uncover errors due to logical or coding
mistakes, and assess the completeness and correctness of the code.
2. Black Box Testing:
 Definition: Black Box Testing is a testing method that evaluates the functionality of a software
application without considering its internal code structure or implementation details.
 Approach: Testers perform black-box testing based solely on the software's specifications,
requirements, and external behavior. They treat the software as a "black box," examining inputs and
outputs to validate the system's functionality against expected outcomes.
 Objectives: It focuses on validating whether the software meets user requirements, functions
correctly, and produces expected results without considering how the software achieves those results
internally.
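
The contrast can be sketched with a small example. The classify_triangle function and the test values below are assumptions chosen for illustration, not from any particular standard:

# Hypothetical function under test: classifies a triangle by its side lengths.
def classify_triangle(a, b, c):
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box view: tests are derived purely from the specification (inputs -> outputs),
# with no knowledge of the branches inside the function.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 5) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box view: tests are derived from the code itself, so that every branch above
# (each invalidity condition, each equality comparison) is exercised at least once.
assert classify_triangle(0, 1, 1) == "invalid"   # non-positive side branch
assert classify_triangle(1, 2, 3) == "invalid"   # triangle-inequality branch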

24. Discuss in detail Experience Based Testing.


When applying experience based test techniques, the test cases are derived from the tester's skill and intuition.
Their past work with similar applications and technologies also plays a role in this. These techniques can be
helpful in identifying tests that are not easily identified by more structured techniques. Depending on the tester's
approach, they may achieve widely varying degrees of coverage and effectiveness. Coverage can be difficult to
assess and may not be measurable with these techniques.

We should use experience based techniques when:


• Requirements and specifications are not available.
• Requirements are inadequate.
• Limited knowledge of the software product.
• Time constraints to follow a structured approach.

Types of experienced based testing


Error Guessing – The tester applies his experience to guess the areas in the application that are prone to error.
Exploratory Testing – As the name implies, the tester explores the application and uses his experience to
navigate through different functionalities.

25. Explain BVA and Equivalence Partitioning.


Boundary Value Analysis For reasons that are not completely clear, a greater number of errors tends to occur at
the boundaries of the input domain rather than in the "center." It is for this reason that boundary value analysis
(BVA) has been developed as a testing technique. Boundary value analysis leads to a selection of test cases that
exercise bounding values. Boundary value analysis is a test case design technique that complements
equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection
of test cases at the "edges" of the class. Rather than focusing solely on input conditions, BVA derives test cases
from the output domain as well.
Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes
of data from which test cases can be derived. An ideal test case singlehandedly uncovers a class of errors (e.g.,
incorrect processing of all character data) that might otherwise require many cases to be executed before the
general error is observed. Equivalence partitioning strives to define a test case that uncovers classes of errors,
thereby reducing the total number of test cases that must be developed. Test case design for equivalence
partitioning is based on an evaluation of equivalence classes for an input condition. Using concepts introduced
in the preceding section, if a set of objects can be linked by relationships that are symmetric, transitive, and
reflexive, an equivalence class is present. An equivalence class represents a set of valid or invalid states for
input conditions. Typically, an input condition is a specific numeric value, a range of values, a set of related
values, or a Boolean condition.
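
As a minimal sketch, suppose a field accepts integers from 1 to 100; the validator and the range below are assumptions chosen for illustration. Equivalence partitioning picks one representative per class, while BVA adds the values at and just beyond each edge:

# Hypothetical validator: accepts integers in the range 1..100 inclusive.
def is_valid_quantity(n):
    return isinstance(n, int) and 1 <= n <= 100

# Equivalence partitioning: one representative value per class.
# Valid class: 1..100; invalid classes: below the range, above the range.
equivalence_tests = {50: True, -5: False, 150: False}

# Boundary value analysis: values at and just beyond each edge of the valid class.
boundary_tests = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in {**equivalence_tests, **boundary_tests}.items():
    assert is_valid_quantity(value) == expected, f"failed for {value}"
print("all partition and boundary tests passed")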
26. Explain unit testing in detail.
Unit Testing
In conventional applications, unit testing focuses on the smallest compilable program unit: the subprogram
(e.g., module, subroutine, procedure, component). Once each of these units has been tested individually, it is
integrated into a program structure while a series of regression tests are run to uncover errors due to interfacing
between the modules and side effects caused by the addition of new units. Finally, the system as a whole is
tested to ensure that errors in requirements are uncovered
Unit testing refers to testing program units in isolation. However, there is no consensus on the definition of a
unit. Some examples of commonly understood units are functions, procedures, or methods. Even a class in an
object-oriented programming language can be considered as a program unit. Syntactically, a program unit is a
piece of code, such as a function or method of class, that is invoked from outside the unit and that can invoke
other program units. Moreover, a program unit is assumed to implement a well-defined function providing a
certain level of abstraction to the implementation of higher level functions. The function performed by a
program unit may not have a direct association with a system-level function. Thus, a program unit may be
viewed as a piece of code implementing a “low”-level function

Key reasons to perform unit testing


1. Unit tests help to fix bugs early in the development cycle and save costs.
2. It helps the developers to understand the testing code base and enables them to make
changes quickly
3. Good unit tests serve as project documentation
4. Unit tests help with code re-use. Migrate both your code and your tests to your new
project. Tweak the code until the tests run again.
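
A minimal sketch of a unit test using Python's built-in unittest module, testing a hypothetical apply_discount function in isolation (the function and its figures are assumptions, not taken from the text above):

import unittest

# Hypothetical unit under test: a standalone function with a well-defined contract.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()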

27. Explain validation testing and its requirement?


Validation Testing
At the culmination of integration testing, software is completely assembled as a package, interfacing errors have
been uncovered and corrected, and a final series of software tests— validation testing—may begin. Validation
can be defined in many ways, but a simple (albeit harsh) definition is that validation succeeds when software
functions in a manner that can be reasonably expected by the customer. At this point a battle-hardened software
developer might protest.
Reasonable expectations are defined in the Software Requirements Specification— a document that describes
all user-visible attributes of the software. The specification contains a section called Validation Criteria.
Information contained in that section forms the basis for a validation testing approach.
Validation Test Criteria Software validation is achieved through a series of black-box tests that demonstrate
conformity with requirements. A test plan outlines the classes of tests to be conducted and a test procedure
defines specific test cases that will be used to demonstrate conformity with requirements. Both the plan and
procedure are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are
achieved, all performance requirements are attained, documentation is correct, and human engineering and other
requirements are met (e.g., transportability, compatibility, error recovery, maintainability).
After each validation test case has been conducted, one of two possible conditions exists: (1) the function or
performance characteristics conform to specification and are accepted, or (2) a deviation from specification is
uncovered and a deficiency list is created. A deviation or error discovered at this stage in a project can rarely be
corrected prior to scheduled delivery. It is often necessary to negotiate with the customer to establish a method
for resolving deficiencies.
28. Explain software metrics and its importance
Software metrics are quantitative measures used to assess various aspects of software development, maintenance,
and quality. These measurements provide valuable insights into the software development process, helping teams
understand, monitor, and improve the quality, performance, and efficiency of software projects. Software metrics
are employed throughout the software development lifecycle to facilitate decision-making, manage risks, and
enhance overall software quality.

Importance of Software Metrics:

1. Quality Assessment: Metrics offer objective evaluations of software quality by quantifying aspects such as
defect density, code complexity, and adherence to coding standards. This information aids in identifying
areas needing improvement and tracking the progress of quality enhancement efforts.
2. Performance Monitoring: Metrics track project progress, resource utilization, and productivity. They help
in identifying bottlenecks, inefficiencies, or deviations from project plans, allowing timely interventions for
better resource allocation and project management.
3. Process Improvement: By analyzing metrics related to software development processes, teams can identify
process inefficiencies, streamline workflows, and implement best practices for increased efficiency and
better outcomes.
4. Risk Management: Metrics provide early indicators of potential risks and issues. For instance, metrics
related to defect density or regression rates can forecast potential challenges, allowing teams to take
proactive measures to mitigate risks.
5. Decision Support: Metrics serve as a basis for informed decision-making. They help stakeholders assess
the feasibility of project goals, make trade-off decisions, prioritize tasks, and allocate resources effectively.
6. Benchmarking and Comparison: Metrics allow for comparisons within a project or across different
projects. By comparing metrics across similar projects, teams can identify successful practices and areas for
improvement.
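
For instance, the defect density mentioned above is commonly computed as defects found per thousand lines of code (KLOC); the figures in this sketch are made up purely for illustration:

# Illustrative metric: defect density = defects found / KLOC (thousand lines of code).
def defect_density(defects_found, lines_of_code):
    return defects_found / (lines_of_code / 1000)

# Hypothetical project figures (assumptions, not real data):
# 45 defects found in a 15,000-line module gives 3.0 defects per KLOC.
print(defect_density(45, 15_000))  # 3.0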

29. What is integration testing? Explain its various types.


Integration Testing is a software testing technique that evaluates the interaction between individual software
components or modules to ensure they function together as expected when integrated. It involves combining and
testing individual units or components to uncover defects that arise due to their interactions.
Various Types of Integration Testing:
1. Big Bang Integration Testing:
 In this approach, all individual modules or components are integrated simultaneously, forming a
complete system. Testing is performed on the entire system as a whole. This method is efficient but
can be challenging to pinpoint specific integration issues.
2. Top-Down Integration Testing:
 This method begins with testing the higher-level modules or components first, gradually integrating
and testing lower-level modules. Stub modules (dummy implementations) are used for lower-level
modules not yet integrated. It verifies the major functionalities early but might delay the testing of
detailed functionalities.
3. Bottom-Up Integration Testing:
 Opposite to Top-Down Integration Testing, this approach starts with testing lower-level modules
first and gradually integrates higher-level modules. Driver modules (test code to simulate higher-
level modules) are used for higher-level modules not yet integrated. It allows detailed functionalities
to be tested early but delays the verification of major functionalities.
4. Sandwich/Hybrid Integration Testing:
 This method combines both Top-Down and Bottom-Up approaches. It starts integration testing from
both ends (top and bottom) towards the middle, aiming to identify and resolve integration issues in
the central parts of the system.
5. Incremental Integration Testing:
 In this iterative approach, modules are integrated and tested in small increments, one
at a time. After each integration, tests are conducted to ensure the newly added components work
correctly with the existing system. It is often used in Agile methodologies and allows continuous
testing as new components are added.
6. Component-Based Integration Testing:
 This focuses on testing individual software components or modules and their interactions.
Components are tested in isolation before integrating them with other modules. Once the individual
components function correctly, integration testing verifies their interactions.
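To make the stubs and drivers mentioned above concrete, here is a minimal sketch in Python of a top-down integration test: a high-level ReportService is exercised against a stubbed lower-level database module. All class and function names here are hypothetical, invented purely for illustration.

# Minimal top-down integration sketch: the high-level ReportService is
# tested while the not-yet-integrated database layer is replaced by a stub.
# All names are hypothetical.
import unittest

class ReportService:
    def __init__(self, db):
        self.db = db                       # lower-level dependency

    def total_sales(self):
        return sum(self.db.fetch_sales())  # calls into the lower module

class StubDatabase:
    # Stub standing in for the real, not-yet-integrated database module.
    def fetch_sales(self):
        return [100, 250, 50]              # canned data

class TopDownIntegrationTest(unittest.TestCase):
    def test_total_sales_with_stub(self):
        service = ReportService(StubDatabase())
        self.assertEqual(service.total_sales(), 400)

if __name__ == "__main__":
    unittest.main()

In Bottom-Up testing the roles reverse: the real low-level module is tested first, and a small driver (test code such as the test class above) simulates the higher-level caller.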
30. Write a short note on system testing.
SYSTEM TESTING
Software is only one element of a larger computer-based system. Ultimately, software is incorporated with other
system elements (e.g., hardware, people, information), and a series of system integration and validation tests are
conducted. These tests fall outside the scope of the software process and are not conducted solely by software
engineers. However, steps taken during software design and testing can greatly improve the probability of
successful software integration in the larger system. A classic system testing problem is "finger-pointing." This
occurs when an error is uncovered, and each system element developer blames the other for the problem. Rather
than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and
(1) Design error-handling paths that test all information coming from other elements of the system,
(2) Conduct a series of tests that simulate bad data or other potential errors at the software
interface,
(3) Record the results of tests to use as "evidence" if finger-pointing does occur, and
(4) Participate in planning and design of system tests to ensure that software is adequately
tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-
based system. Although each test has a different purpose, all work to verify that system elements have been
properly integrated and perform allocated functions.
31. What is smoke testing and its benefits?
Smoke Testing, also known as Build Verification Testing (BVT), is a type of preliminary testing performed on the
initial or 'freshly baked' build of an application to check whether the most critical functionalities work properly
without conducting exhaustive testing. The term "smoke" comes from the concept of a machine being checked for
basic functionality by turning it on and observing if it emits smoke (indicating a major problem).

Benefits of Smoke Testing:

1. Early Detection of Major Issues: Smoke Testing helps in quickly identifying major flaws or issues in the
application's critical functionalities. It aims to catch severe defects that could hinder further testing or
integration efforts.
2. Time and Cost Efficiency: By executing a minimal set of tests focused on critical functionalities, smoke
testing saves time and resources during the initial phase of testing. It allows testers to detect show-stopping
issues early, reducing the time spent on subsequent testing phases if the basic functionalities fail.
3. Risk Mitigation: It reduces the risk of progressing with a build that has severe issues. Verifying essential
functionalities through smoke testing minimizes the chances of wasting effort on a build that is not viable
for further testing or deployment.
4. Quick Feedback Loop: Smoke testing provides quick feedback to development teams, allowing them to
address critical defects promptly. This accelerates the development process by ensuring that basic
functionalities are working before proceeding with more comprehensive testing.
5. Streamlined Development Process: It encourages a continuous integration and continuous testing
approach by verifying the basic stability of each new build. This promotes a more streamlined and efficient
development cycle.
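As an illustration, a smoke suite is typically a handful of fast checks over the critical paths, run against every new build before deeper testing begins. The sketch below uses Python's unittest; the two application functions are hypothetical stand-ins for real build functionality.

# Minimal smoke-test sketch: a few fast checks on critical functionality.
# The two application functions are hypothetical stand-ins.
import unittest

def app_starts():            # stand-in for "the build launches"
    return True

def login(user, password):   # stand-in for the critical login path
    return user == "admin" and password == "secret"

class SmokeTests(unittest.TestCase):
    def test_application_starts(self):
        self.assertTrue(app_starts())

    def test_login_critical_path(self):
        self.assertTrue(login("admin", "secret"))

if __name__ == "__main__":
    unittest.main()  # if any smoke test fails, the build is rejected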

32. What are test plans and test cases? Explain with example.

Test Plan:

Definition: A Test Plan is a comprehensive document that outlines the overall approach, scope, resources,
schedules, and objectives of the testing process for a software project. It provides a roadmap for testing activities
and sets the direction for the testing team.

Test Case:
Definition: A Test Case is a detailed set of conditions, actions, and expected results developed to verify specific
functionalities or aspects of the software. Each test case represents a unique test scenario that helps in evaluating
whether the software behaves as expected.
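Example (hypothetical): For a login feature, the test plan would define the scope (the authentication module), the resources (testers, a staging environment), the schedule, and the entry/exit criteria. A corresponding test case might look like this:

Test Case ID: TC_LOGIN_01
Objective: Verify login with valid credentials.
Precondition: A registered user account exists.
Steps: 1. Open the login page. 2. Enter a valid username and password. 3. Click "Login".
Expected Result: The user is redirected to the dashboard.
Actual Result / Status: Recorded at execution time (Pass/Fail).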

33. Explain cyclomatic complexity with example.


The cyclomatic complexity of a code section is the quantitative measure of the number of linearly independent paths in it. It is a software metric used to indicate the complexity of a program. It is computed using the Control Flow Graph (CFG) of the program: the nodes in the graph represent the smallest groups of commands of a program, and a directed edge connects two nodes if the second command might immediately follow the first command.
For example, if the source code contains no control flow statement, then its cyclomatic complexity is 1, since the source code contains a single path. Similarly, if the source code contains one if condition, the cyclomatic complexity is 2, because there are two paths: one for the true outcome and one for the false outcome.
Mathematically, for a structured program, each directed edge of the control flow graph joins two basic blocks of the program between which control may pass. Cyclomatic complexity M is then defined as
M = E - N + 2P
where E = the number of edges in the control flow graph, N = the number of nodes in the control flow graph, and P = the number of connected components.
In the case where the exit point is connected directly back to the entry point, the graph is strongly connected, and the cyclomatic complexity is defined as
M = E - N + P
For a single method or subroutine, P is equal to 1, so the formula reduces to
M = E - N + 2
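As a worked illustration, consider the small (hypothetical) Python routine below, which contains one if condition and one loop, i.e., two decision points:

# Hypothetical routine used to illustrate cyclomatic complexity.
def grade(score):
    if score >= 40:         # decision 1
        result = "pass"
    else:
        result = "fail"
    for _ in range(3):      # decision 2 (loop condition)
        pass
    return result

# With no decisions M would be 1; each decision adds one more
# independent path, so M = number of decision points + 1 = 2 + 1 = 3.
# The same value follows from M = E - N + 2 applied to the routine's
# control flow graph.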

34. Write a short note on black box testing.


Black Box Testing is a software testing technique that focuses on assessing the functionality of an application
without having detailed knowledge of its internal code, structure, or implementation. Testers perform this type of
testing by treating the software as a "black box," where they examine inputs and outputs to evaluate the correctness
of the system's functionalities based on specifications, requirements, and expected behaviors.

Key Characteristics of Black Box Testing:

1. No Knowledge of Internal Structure: Testers conduct black box testing without any knowledge of the
internal workings, algorithms, or code implementation of the software. They solely rely on externally
visible behaviors.
2. Based on Specifications and Requirements: Testing is performed based on predefined specifications,
requirements documents, user stories, or functional specifications provided for the software.
3. Focus on Functionalities: It emphasizes verifying whether the software meets user expectations and
performs its functions as intended, rather than delving into code-level details.
4. Test Cases Creation: Testers create test cases based on input conditions, test data, and expected outputs
without considering the software's internal logic.
5. Various Techniques: Black box testing utilizes techniques such as equivalence partitioning, boundary
value analysis, decision tables, state transition testing, and more to design test cases.
6. Types of Testing: It encompasses various types of testing, including functional testing, non-functional
testing (e.g., usability, performance), and regression testing, among others.
35. Distinguish between structural and functional testing.

Structural Testing vs. Functional Testing:
1. Structural testing is usually done manually by developers who know the code; functional testing can be performed either manually or automatically.
2. Structural test cases are designed from the internal code structure; functional test cases are designed from the external specifications, and the internal workings of the component are not considered.
3. Structural test cases are based on the program's logic and code paths; functional test cases are based on input/output conditions and the actions that a component can perform.
4. Structural testing is used to find errors in data structure usage and internal coding logic; functional testing verifies that the system adheres to acceptable standards of information processing and does not contain defects.
5. Structural testing is done after the coding process is completed, often by maintenance groups; functional testing is performed during development and/or maintenance.
6. Structural test cases do not depend on specific data values; functional test cases may have to use a specific value for a test case to pass or fail (error checking).
7. Structural testing can extend to low-level (hardware-related) error checking; functional testing is achieved by software techniques.
8. Structural testing examines static structures, such as the code's data structures and algorithms; functional testing involves the analysis of dynamic data structures and run-time, object-oriented behavior.

36. Write a short note on white box testing.


White box testing techniques analyze the internal structures of the software: the data structures used, the internal design, the code structure, and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing, or structural testing. White box testing is also known as transparent testing or open box testing.
White box testing is a software testing technique that involves testing the internal structure and workings of a
software application. The tester has access to the source code and uses this knowledge to design test cases that
can verify the correctness of the software at the code level.
White box testing is also known as structural testing or code-based testing, and it is used to test the software’s
internal logic, flow, and structure. The tester creates test cases to examine the code paths and logic flows to
ensure they meet the specified requirements.

37. Difference between alpha beta testing.


Alpha Testing vs. Beta Testing:
1. Alpha testing involves both white box and black box testing; beta testing commonly uses black-box testing.
2. Alpha testing is performed by testers who are usually internal employees of the organization; beta testing is performed by clients or end users who are not part of the organization.
3. Alpha testing is performed at the developer's site; beta testing is performed at the end user's site.
4. Reliability and security testing are not checked in alpha testing; reliability, security, and robustness are checked during beta testing.
5. Alpha testing ensures the quality of the product before it is forwarded to beta testing; beta testing also concentrates on the quality of the product but collects users' input on the product and ensures that the product is ready for real-time users.
6. Alpha testing requires a testing environment or a lab; beta testing doesn't require a testing environment or lab.
7. Alpha testing may require a long execution cycle; beta testing requires only a few weeks of execution.
8. Developers can immediately address critical issues or fixes in alpha testing; most of the issues or feedback collected from beta testing will be implemented in future versions of the product.
9. Multiple test cycles are organized in alpha testing; only one or two test cycles are there in beta testing.

38. What is complexity metrics and their significance in testing


Complexity metrics in software testing refer to quantitative measures used to assess the complexity of a software
system's code or design. These metrics help in evaluating the intricacy and structural characteristics of the
codebase, aiding testers and developers in understanding, managing, and improving the software quality.
Significance of Complexity Metrics in Testing:
1. Code Understanding and Maintainability:
 Complexity metrics provide insights into the code's structure and logic, helping developers
understand the codebase's intricacies. This understanding is crucial for maintaining, enhancing, and
debugging the software.
2. Identification of Potential Defects:
 High complexity can correlate with an increased likelihood of defects. Metrics such as cyclomatic complexity or nesting depth can highlight areas in the code that may have a higher probability of containing defects, guiding focused testing efforts.
3. Test Case Design and Coverage:
 Complexity metrics aid in designing comprehensive test cases by identifying complex areas that
require thorough testing. It helps prioritize testing efforts for critical or complex parts of the
software.
4. Impact Analysis:
 Understanding complexity metrics assists in assessing the potential impact of code changes.
Changes in highly complex areas may have a higher risk of introducing new defects, requiring
careful testing and validation.
5. Software Quality Improvement:
 Monitoring complexity metrics allows for continual improvement efforts. It helps in setting
complexity thresholds, guiding developers to write more maintainable, less complex, and higher-
quality code.

39. Discuss “strategic approach to software testing.”


Strategic approach to software testing
A software testing strategy is the set of steps that need to be carried out to assure the highest possible quality of an end product. It is a plan of action that an in-house QA department or an outsourced QA team follows to provide the level of quality set by you. If you choose a strategy that does not match what your project actually requires, you waste time and resources for nothing.
1. Waterfall testing strategy
2. Agile testing strategy
3. DevOps testing strategy
4. Risk based testing strategy
5. Exploratory testing strategy
6. Alpha beta testing strategy
7. Regression testing strategy
To make it clearer: if the test plan is a destination, then the QA test strategy is the map to reach that destination.
The classical strategy for testing computer software begins with “testing in the small” and works outward
toward “testing in the large.” Stated in the jargon of software testing, we begin with unit testing, then progress
toward integration testing, and culminate with validation and system testing.
40. Define software metrics. Give its purpose. Explain its types.
Software metrics refer to quantitative measures used in software engineering to assess various attributes of software
products, processes, and projects. These metrics provide objective data to evaluate, manage, and improve different
aspects of the software development lifecycle.
Purpose of Software Metrics:
1. Measure Performance and Quality: Metrics help in quantifying and evaluating the performance, quality,
and efficiency of software development processes and products.
2. Decision Making: They aid in making informed decisions related to project management, resource
allocation, prioritization, and process improvement.
3. Process Improvement: Metrics facilitate the identification of areas for improvement, allowing teams to
optimize processes, increase productivity, and enhance quality.
4. Predictive Analysis: They assist in predicting project outcomes, estimating effort, identifying potential
risks, and managing project timelines.
Types of Software Metrics:
1. Product Metrics:
 Measure attributes of the software product itself. Examples include:
 Lines of Code (LOC)
 Cyclomatic Complexity
 Code Coverage
 Defect Density
 Function Points
2. Process Metrics:
 Evaluate characteristics of the software development process. Examples include:
 Development Time
 Lead Time
 Review Efficiency
 Turnaround Time
 Defect Removal Efficiency
3. Project Metrics:
 Assess the attributes and performance of the software project. Examples include:
 Cost Variance
 Schedule Variance
 Effort Variance
 Return on Investment (ROI)
 Productivity Metrics

41. What is system testing? List its various types. Explain any two in short.
System testing is a level of software testing where a complete, integrated software system is tested as a whole to
evaluate its compliance with specified requirements. It focuses on verifying that the entire software system meets
its intended purpose, functions correctly, and operates as expected in its intended environment.
Various types of system testing include:
1. Functional Testing
2. Non-Functional Testing
3. Usability Testing
4. Performance Testing
5. Security Testing
6. Compatibility Testing
7. Regression Testing
8. Acceptance Testing
Two types of system testing explained briefly:
1. Performance Testing: This type of testing evaluates how well the system performs under various
conditions, assessing aspects like speed, responsiveness, scalability, and stability. For instance, load testing
examines the system's behavior under normal and peak load conditions, stress testing pushes the system
beyond its limits to determine breaking points, and scalability testing assesses the system's ability to handle
growing demands.
2. Security Testing: Security testing focuses on assessing the system's resistance to unauthorized access,
vulnerabilities, and potential threats. It involves various techniques such as vulnerability scanning,
penetration testing, authentication checks, encryption testing, and access control testing to ensure the
system's robustness against security risks and breaches.

42. What is error guessing?


Error Guessing – Tester applies his experience to guess the areas in the application that are prone to error.
It's a simple technique of guessing and detecting the potential defects that may creep into the software product.
In this technique, a tester makes use of his skills, acquired knowledge and past experience to identify the
vulnerable areas of the software product that are likely to be affected by the bugs.
The error guessing technique may be considered a risk analysis method, in which an experienced tester applies his wisdom and gained experience to spot the areas or functionalities of the software product that are likely to contain potential defects. Thereafter, the tester classifies each area as a low-risk, medium-risk, or high-risk defect-prone area, and accordingly prepares test cases to locate those defects.
Defects that cannot be found through formal testing techniques may be spotted through error guessing. However, it is preferred that formal testing techniques be applied first and then supplemented by error guessing.

43. Explain exploratory testing in detail.


Exploratory Testing – As the name implies, the tester explores the application and uses his experience to navigate through different functionalities.
Exploratory testing is a testing technique and, simultaneously, a progressive learning approach to perform maximum testing with minimal planning. During the course of exploratory testing, a tester constantly studies and analyzes the software product and accordingly applies his skills, traits, and experience to develop a strategy and test cases to carry out the necessary testing.
Exploratory testing is best used when specifications and requirements are inadequate and time is severely limited.
44. What is check list testing?
Checklist-Based Testing – In this technique, we apply a tester's experience to create a checklist of different functionalities and use cases for testing.
An experienced tester, based on his past experience, prepares the checklist, which works as a manual to direct the testing process. The checklist is kept at a high, standard level and consistently reminds the tester of what is to be tested. The checklist prepared by a tester is not a static, final list: changes may be made to it in proportion to the needs and requirements that arise during the course of testing.
Further, it is pertinent to mention that the checklist is the only tool used to ensure complete test coverage in this type of testing.
45. What is equivalence testing.
Equivalence testing, also known as equivalence partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes
of data from which test cases can be derived. An ideal test case singlehandedly uncovers a class of errors (e.g.,
incorrect processing of all character data) that might otherwise require many cases to be executed before the
general error is observed. Equivalence partitioning strives to define a test case that uncovers classes of errors,
thereby reducing the total number of test cases that must be developed. Test case design for equivalence
partitioning is based on an evaluation of equivalence classes for an input condition. Using concepts introduced
in the preceding section, if a set of objects can be linked by relationships that are symmetric, transitive, and
reflexive, an equivalence class is present. An equivalence class represents a set of valid or invalid states for
input conditions. Typically, an input condition is a specific numeric value, a range of values, a set of related
values, or a Boolean condition.
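As a small illustration, suppose a hypothetical input field accepts ages from 18 to 60 inclusive. Equivalence partitioning yields three classes, and one representative value per class suffices:

# Equivalence partitioning sketch for a hypothetical age field (valid: 18-60).
def is_valid_age(age):
    return 18 <= age <= 60

# One representative test value per equivalence class:
cases = {
    10: False,   # class 1: age < 18 (invalid)
    35: True,    # class 2: 18 <= age <= 60 (valid)
    70: False,   # class 3: age > 60 (invalid)
}

for value, expected in cases.items():
    assert is_valid_age(value) == expected, f"failed for {value}"
print("all equivalence-class representatives passed")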

46. Write a short note on boundary value testing and decision table testing.
Boundary Value Analysis
For reasons that are not completely clear, a greater number of errors tends to occur at the boundaries of the
input domain rather than in the "center." It is for this reason that boundary value analysis (BVA) has been
developed as a testing technique.
Boundary value analysis leads to a selection of test cases that exercise bounding values. Boundary value
analysis is a test case design technique that complements equivalence partitioning. Rather than selecting any
element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than
focusing solely on input conditions, BVA derives test cases from the output domain as well
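Continuing the hypothetical 18-to-60 age field from the previous answer, BVA selects values at and immediately around each boundary rather than arbitrary members of each class:

# Boundary value analysis sketch for the same hypothetical 18-60 field.
def is_valid_age(age):
    return 18 <= age <= 60

boundary_cases = {
    17: False, 18: True, 19: True,   # around the lower boundary
    59: True, 60: True, 61: False,   # around the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"failed for {value}"
print("all boundary values passed")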

Decision Table Testing,


In many software applications, a module may be required to evaluate a complex combination of conditions and
select appropriate actions based on these conditions. Decision tables provide a notation that translates actions
and conditions (described in a processing narrative) into a tabular form. The table is difficult to misinterpret and
may even be used as machine-readable input to a table-driven algorithm.
A decision table is divided into four sections. The upper left-hand quadrant contains a list of all conditions. The lower left-hand quadrant contains a list of all actions that are possible based on combinations of conditions. The right-hand quadrants form a matrix
list of all actions that are possible based on combinations of conditions. The right-hand quadrants form a matrix
that indicates condition combinations and the corresponding actions that will occur for a specific combination.
Therefore, each column of the matrix may be interpreted as a processing rule. The following steps are applied to
develop a decision table:
1. List all actions that can be associated with a specific procedure (or module).
2. List all conditions (or decisions made) during execution of the procedure.
3. Associate specific sets of conditions with specific actions, eliminating impossible combinations of
conditions; alternatively, develop every possible permutation of conditions.
4. Define rules by indicating what action(s) occurs for a set of conditions
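As a small sketch of step 4, the rules of a hypothetical discount policy can be written directly as a machine-readable table, each entry mapping one combination of conditions to an action:

# Decision-table sketch with hypothetical rules: the two conditions are
# membership and whether the order exceeds 100; each entry is one rule.
RULES = {
    # (is_member, order_over_100): action
    (True,  True):  "20% discount",
    (True,  False): "10% discount",
    (False, True):  "5% discount",
    (False, False): "no discount",
}

def discount_action(is_member, order_total):
    # Table-driven lookup: the matching rule selects the action.
    return RULES[(is_member, order_total > 100)]

assert discount_action(True, 150) == "20% discount"
assert discount_action(False, 80) == "no discount"

Each key of the table corresponds to one column (rule) of the decision table described above, and test cases are derived by exercising every rule at least once.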
47. Explain state transition testing.
State Transition Testing is a black box testing technique in which changes made in input conditions cause state
changes or output changes in the Application Under Test (AUT). State transition testing helps to analyze the behavior of an application for different input conditions. Testers can provide positive and negative input test values and record the system behavior.
It is the model on which the system and the tests are based. Any system where you get a different output for the
same input, depending on what has happened before, is a finite state system.
State Transition Testing Technique is helpful where you need to test different system transitions
When to Use State Transition?
⚫ This can be used when a tester is testing the application for a finite set of input values.
⚫ When the tester is trying to test a sequence of events that occur in the application under test, i.e., to test the application's behavior for a sequence of input values.
⚫ When the system under test has a dependency on the events/values in the past.
When Not to Rely on State Transition?
⚫ When the testing is not done for sequential input combinations.
⚫ If the testing is to be done for different functionalities, as in exploratory testing.
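As a hedged example, consider a hypothetical ATM PIN screen that locks the account after three consecutive wrong entries. The transition table drives both the model and the tests:

# State-transition sketch (hypothetical): an account locks after three
# consecutive wrong PIN entries.
TRANSITIONS = {
    ("try1", "wrong"):   "try2",
    ("try2", "wrong"):   "try3",
    ("try3", "wrong"):   "locked",
    ("try1", "correct"): "granted",
    ("try2", "correct"): "granted",
    ("try3", "correct"): "granted",
}

def run(sequence, state="try1"):
    # Walk a sequence of inputs through the transition table.
    for pin_result in sequence:
        state = TRANSITIONS[(state, pin_result)]
    return state

# Sequences of events are tested, not just single inputs:
assert run(["wrong", "wrong", "correct"]) == "granted"
assert run(["wrong", "wrong", "wrong"]) == "locked"

Note how the same input ("wrong") produces different resulting states depending on what happened before, which is exactly the finite-state behavior this technique targets.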
48. Write a note on basic path testing.
Basis Path Testing is a white-box testing technique based on the control structure of a program or a module.
Using this structure, a control flow graph is prepared and the various possible paths present in the graph are
executed as a part of testing. Therefore, by definition, Basis path testing is a technique of selecting the paths in
the control flow graph, that provide a basis set of execution paths through the program or module. Since this
testing is based on the control structure of the program, it requires complete knowledge of the program’s
structure. To design test cases using this technique, five steps are followed :
1. Control Flow Graph (CFG): Before conducting path testing, a control flow graph is constructed to
represent all possible paths through the code. It provides a visual representation of the program's control
structures, including branches, loops, and conditionals.
2. Paths through Code: In basic path testing, the goal is to identify and test each feasible path or route that
the program's execution can take. This includes every possible combination of branches and conditions
within the code.
3. Criterion for Coverage: Testers aim for different coverage criteria, such as statement coverage, decision
coverage, condition coverage, and path coverage. Path coverage, in particular, ensures that every feasible
path from the start to the end of each module or function is executed at least once.
4. Identifying Independent Paths: Testers identify independent paths, focusing on unique sequences of
decisions and conditions that are not covered by other paths. This prevents redundant testing and ensures
that each distinct path is exercised.
5. Test Case Design: Test cases are designed to follow specific paths through the code, covering different
combinations of conditions, loops, and branches. Each test case represents a particular path in the control
flow graph.
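As a brief sketch, the hypothetical function below has two decision points, so its cyclomatic complexity is V(G) = 2 + 1 = 3 and a basis set needs three independent paths, each covered by one test case:

# Basis-path sketch: V(G) = 3, so the basis set holds three paths.
def classify(x):
    if x < 0:            # decision 1
        label = "negative"
    else:
        label = "non-negative"
    if x % 2 == 0:       # decision 2
        label += ", even"
    return label

# One test case per independent path in the basis set:
assert classify(-3) == "negative"            # d1 true,  d2 false
assert classify(4)  == "non-negative, even"  # d1 false, d2 true
assert classify(3)  == "non-negative"        # d1 false, d2 false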

49. Write a note on branch testing.


Branch Testing is defined as a testing method, which has the main goal to ensure that each one of the possible
branches from each decision point is executed at least once and thereby ensuring that all reachable code is
executed. In the branch testing, each outcome from a code module is tested as if the outcomes are binary, you
need to test both True and False outcomes.
Branch testing also provides a method to measure the fraction of independent code segments covered, and helps you find out which sections of code don't have any branches.
Features of Branch Testing:
Some features of branch testing that generally help any software project are given below:
1. It allows you to validate all the branches in the code.
2. It is a white-box (structural) testing technique.
3. It ensures that no branch leads to abnormal behavior of the application.
4. It provides a quantitative measure of code coverage.
5. Branch testing generally ignores branches inside Boolean expressions.
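A minimal hedged example: the hypothetical function below has one decision point, so branch testing requires at least one test for its True outcome and one for its False outcome:

# Branch-testing sketch: both outcomes of the single decision must run.
def can_vote(age):
    if age >= 18:    # branch point
        return True  # True branch
    return False     # False branch

assert can_vote(21) is True    # exercises the True branch
assert can_vote(15) is False   # exercises the False branch
# With both assertions executed, branch coverage of can_vote is 100%.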
50. Write in brief about test case design. Give example.
The design of tests for software and other engineered products can be as challenging as the initial design of the
product itself. Yet, for reasons that we have already discussed, software engineers often treat testing as an
afterthought, developing test cases that may "feel right" but have little assurance of being complete. Recalling
the objectives of testing, we must design tests that have the highest likelihood of finding the most errors with a
minimum amount of time and effort.
A rich variety of test case design methods have evolved for software. These methods provide the developer with
a systematic approach to testing. More important, methods provide a mechanism that can help to ensure the
completeness of tests and provide the highest likelihood for uncovering errors in software.
Any engineered product (and most other things) can be tested in one of two ways:

(1) Knowing the specified function that a product has been designed to perform, tests can be conducted that
demonstrate each function is fully operational while at the same time searching for errors in each function;
(2) Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is,
internal operations are performed according to specifications and all internal components have been adequately
exercised. The first test approach is called black-box testing and the second, white-box testing.

51. Discuss levels of testing.


Software Testing is an activity performed to identify errors so that errors can be removed to obtain a product with
greater quality. To assure and maintain the quality of software and to represents the ultimate review of
specification, design, and coding, Software testing is required. There are different levels of testing :
1. Unit Testing :
In this type of testing, errors are detected individually from every component or unit by testing the components or units of the software individually, to ensure that they are fit for use by the developers. A unit is the smallest testable part of the software.
2. Integration Testing :
In this testing, two or more unit-tested modules are integrated and tested together, verifying whether the integrated modules work as expected when their components interact, and detecting interface errors between them.
3. System Testing :
In system testing, the complete, integrated software is tested, i.e., all the system elements forming the system are tested as a whole to verify that the system meets its requirements.
4. Acceptance Testing :
This is a kind of testing conducted to ensure that the requirements of the users are fulfilled prior to delivery and that the software works correctly in the user’s working environment.

52. What are coverage criteria? list and explain any two coverage criteria in short.
Coverage criteria, also known as coverage metrics or coverage measures, are quantitative indicators used to
measure the extent to which a specific aspect of the software has been tested. These criteria determine the
effectiveness and completeness of the testing process by specifying what portions of the software should be
exercised by the test cases.
Some common coverage criteria in software testing include:
1. Statement Coverage (or Line Coverage):
 Explanation: Statement coverage measures the percentage of executable code lines that have been
executed at least once during testing.
 How It Works: It aims to ensure that each line of code is executed by at least one test case, helping
to identify unexecuted code.
 Example: If a piece of code contains ten executable lines and the test suite causes all ten lines to
execute, the statement coverage is 100%.
2. Branch Coverage (or Decision Coverage):
 Explanation: Branch coverage evaluates the proportion of decision points or branches in the code
that have been exercised by the test cases.
 How It Works: It ensures that both true and false outcomes of conditional statements (branches) are
tested.
 Example: In an 'if-else' statement, if the test suite executes both the true and false paths of the condition, branch coverage for that decision point is complete.
3. Path Coverage:
 Explanation: Path coverage aims to test every possible path through the code from start to finish.
 How It Works: It verifies that every unique path in the program, including loops and conditional
statements, is traversed by at least one test case.
 Example: If a function has multiple loops and conditional statements, achieving path coverage
requires executing all feasible paths, which might be impractical for complex code.
4. Condition Coverage (or Predicate Coverage):
 Explanation: Condition coverage ensures that each boolean sub-expression in a decision takes on
both true and false values during testing.
 How It Works: It focuses on testing individual conditions within compound conditions, aiming to
evaluate all combinations of conditions.
 Example: In a complex condition like (A && B) || (C || D), condition coverage would ensure that
both A && B and C || D are evaluated to both true and false during testing.
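The classic gap between statement and branch coverage can be shown with a small hypothetical function: a single test executes every statement, yet leaves one branch outcome untested:

# Coverage sketch: one test yields 100% statement coverage but only
# 50% branch coverage of this hypothetical function.
def apply_discount(price, is_member):
    if is_member:            # decision point
        price = price * 0.9  # the only statement inside the branch
    return price

# This single test executes every statement (statement coverage = 100%):
assert apply_discount(100, True) == 90.0

# A second test is needed to take the False branch as well,
# bringing branch coverage to 100%:
assert apply_discount(100, False) == 100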

53. Write a short note on regression testing.


Regression Testing
Regression testing is a method of testing that is used to ensure that changes made to the software do not
introduce new bugs or cause existing functionality to break. It is typically done after changes have been made to
the code, such as bug fixes or new features, and is used to verify that the software still works as intended.
Regression testing can be performed in different ways, such as:
Retesting: This involves testing the entire application or specific functionality that was affected by the changes.
Re-execution: This involves running a previously executed test suite to ensure that the changes did not break any existing functionality.
Comparison: This involves comparing the current version of the software with a previous version to ensure that
the changes did not break any existing functionality.
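A minimal sketch of the re-execution idea, assuming a hypothetical tax function that was recently modified: previously recorded outputs serve as the baseline, and the old checks are simply run again:

# Regression sketch: re-run old checks against known-good baseline outputs.
# The function and baseline values are hypothetical.
def tax(amount):
    return round(amount * 0.18, 2)   # recently modified code under test

BASELINE = {100: 18.0, 250: 45.0, 0: 0.0}  # results from the last release

for amount, expected in BASELINE.items():
    assert tax(amount) == expected, f"regression at input {amount}"
print("no regressions detected")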

Unit No: III


54. Explain in detail SQA challenges.
Here are the key challenges in Software Quality Assurance:
1. Changing Requirements and Scope Creep: Frequent changes in requirements during the software
development lifecycle can challenge SQA efforts. Scope creep—uncontrolled changes or continuous
expansion of project scope—can impact established test plans and quality assurance strategies.
2. Complexity of Systems: Modern software systems are intricate, featuring complex architectures,
integrations, and diverse technologies. Ensuring comprehensive testing coverage across these complex
systems poses a significant challenge for SQA teams.
3. Time and Resource Constraints: Balancing time-to-market pressures with the need for comprehensive
testing and quality assurance within budgetary constraints is challenging. Limited resources, including
skilled personnel, tools, and testing environments, can hinder effective SQA.
4. Test Automation and Maintenance: While test automation is essential for efficient testing, challenges lie
in identifying suitable test cases for automation, maintaining test scripts, and ensuring automation tools are
aligned with the evolving software.
5. Compatibility and Configuration Management: Ensuring software compatibility across various
platforms, devices, browsers, and configurations presents challenges. Additionally, managing different
configurations and versions of software components can be complex.
6. Security and Compliance: Addressing security concerns and compliance with industry regulations require
constant vigilance. Evolving security threats demand robust security testing and compliance adherence,
adding complexity to SQA efforts.
7. Data Management and Privacy: Managing test data and ensuring its privacy and integrity is crucial.
Challenges arise in obtaining realistic test data, especially in compliance with data privacy regulations like
GDPR or HIPAA.
8. Maintaining Traceability and Documentation: Maintaining traceability between requirements, test cases,
and defects while ensuring comprehensive documentation is challenging, particularly in larger projects with
numerous stakeholders.
9. Global Collaboration and Communication: Collaboration among globally distributed teams introduces
communication barriers, cultural differences, and timezone challenges, impacting the coordination and
effectiveness of SQA efforts.
10. Continuous Improvement and Adaptation: Adapting to new methodologies, tools, and industry best
practices for SQA, and fostering a culture of continuous improvement poses a challenge in ensuring that
SQA practices remain efficient and effective over time.

55. Explain the defect management process in detail with a neat diagram.
The defect management process is the core of software testing. Once the defects have been identified, the
most significant activity for any organization is to manage the flaws, not only for the testing team but also
for everyone involved in the software development or project management process.
The Defect Management Process is the process by which most organizations manage defect discovery, defect removal, and, subsequently, process improvement.
o Various Stages of Defect Management Process
The defect management process includes several stages, which are as follows:
1. Defect Prevention
2. Deliverable Baseline
3. Defect Discovery
4. Defect Resolution
5. Process Improvement
6. Management Reporting
56. Explain formal technical review and its benefits in detail.
Formal Technical Review (FTR) is a software quality control activity performed by
software engineers.

Objectives of formal technical review (FTR):


Some of these are:
• Useful to uncover errors in logic, function, and implementation for any representation of the software.
• To verify that the software meets its specified requirements.
• To ensure that the software is represented according to predefined standards.
• To ensure that the software is developed in a uniform manner.
• To make the project more manageable.

In addition, FTR enables junior engineers to observe the analysis, design, coding, and testing approach more closely. FTR also serves to promote backup and continuity, because a number of people become familiar with parts of the software that they might not have seen otherwise. In practice, FTR is a class of reviews that includes walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of software. Each FTR is conducted as a meeting and is considered successful only if it is properly planned, controlled, and attended.
EXAMPLE: Suppose that during the development of software without FTR, design costs 10 units, coding costs 15 units, and testing costs 10 units; the total cost so far is 35 units, excluding maintenance. But because of a quality issue caused by a bad design, the software has to be redesigned, and the final cost becomes 70 units. That is why FTR is so helpful while developing software.

57. List quality improvement methodologies and explain any three in detail.
Quality Improvement Methodologies

1. Six Sigma
2. Lean Manufacturing
3. TQM or Reengineering
4. Kaizen
5. Agile Methodologies
6. PDSA

PDSA: The basic Plan-Do-Study-Act (PDSA) cycle was first developed by Shewhart and then modified
by Deming. It is an effective improvement technique

The four steps in the cycle are exactly as stated. First, plan carefully what is to be done. Next, carry out
the plan (do it). Third, study the results—did the plan work as intended, or were the results different?
Finally, act on the results by identifying what worked as planned and what didn’t. Using the knowledge
learned, develop an improved plan and repeat the cycle.

Kaizen : Kaizen is a Japanese word for the philosophy that defines management’s role
in continuously encouraging and implementing small improvements involving
everyone. It is the process of continuous improvement in small increments that makes the process more efficient, effective, under control, and adaptable. Improvements
are usually accomplished at little or no expense, without sophisticated techniques or
expensive equipment. It focuses on simplification by breaking down complex
processes into their sub-processes and then improving them.

Six Sigma : Six Sigma is the process of producing high and improved quality output. This is done in two phases – identification and elimination. The causes of defects are identified and appropriately eliminated, which reduces variation in the whole process. A Six Sigma method is one in which 99.99966% of all the products produced have the same features and are free from defects.

58. Write a short note on ISO 9000 standards.


The ISO 9000 series of standards is based on the assumption that if a proper process is followed for production, then good-quality products are bound to follow automatically. The types of industries to which the various ISO standards apply are as follows.

1. ISO 9001: This standard applies to the organizations engaged in design,


development, production, and servicing of goods. This is the standard that
applies to most software development organizations.
2. ISO 9002: This standard applies to those organizations which do not design products but are only involved in production. Examples in this category include steel and car manufacturing industries that buy the product and plant designs from external sources and are engaged only in manufacturing those products. Therefore, ISO 9002 does not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the
installation and testing of the products. For example, Gas companies.

59. Explain the process of software review in detail.


Software Review is a systematic inspection of software by one or more individuals who work together to find and resolve errors and defects in the software during the early stages of the Software Development Life Cycle (SDLC). Software review is an essential part of the SDLC that helps software engineers validate the quality, functionality, and other vital features and components of the software. It is a complete process that includes testing the software product and making sure that it meets the requirements stated by the client.
Usually performed manually, software review is used to verify various documents like
requirements, system designs, codes, test plans and test cases.
Objectives of Software Review:
The objective of software review is:

1. To improve the productivity of the development team.

2. To make the testing process time and cost effective.

3. To make the final software with fewer defects.

4. To eliminate the inadequacies.

Process of Software Review:
The review process generally moves through planning, kick-off, individual preparation, a review meeting, rework, and follow-up; these phases are described in detail under Question 60 below.


Types of Software Reviews:
There are mainly 3 types of software reviews:

1. Software Peer Review: conducted by the author's peers (colleagues) to evaluate the technical content and quality of the work.

2. Software Management Review: conducted by management representatives to evaluate the status of the work and to make decisions regarding downstream activities.

3. Software Audit Review: conducted by personnel external to the software project to evaluate compliance with specifications, standards, and contractual agreements.

60. Discuss phases of formal review.


A formal review generally takes place in a step-by-step approach that consists of six essential steps and obeys a formal process. It is also one of the most important techniques used in static testing.
The six steps are essential because they allow the team to check and ensure software quality, efficiency, and effectiveness. These steps are given below :
1. Planning :
For a specific review, the review process generally begins with a 'request for review' by the author to the moderator or inspection leader. Individual participants, according to their understanding of the document and their role, identify and record defects, questions, and comments. The moderator also performs entry checks and considers the exit criteria.

2. Kick-Off :
Getting everybody on the same page regarding the document under review is the main goal of this meeting. The entry results and exit criteria are also discussed here. This is basically an optional step. It provides the team with a better understanding of the relationship between the document under review and the other documents. During kick-off, the document under review, the source documents, and all other related documentation can be distributed.

3. Preparation :
In the preparation phase, participants work individually on the document under review with the help of the related documents, procedures, rules, and provided checklists. Spelling mistakes are recorded on the document under review but are not mentioned during the meeting. While reviewing the document, the reviewers identify and check for any defects, issues, or errors and note their comments, which are later combined and recorded with the assistance of a logging form.
4. Review Meeting :
This phase generally involves three different parts, i.e., logging, discussion, and decision, during which the various tasks related to the document under review are performed.

5. Rework :
The author improves the document under review based on the defects detected and the improvements suggested in the review meeting. The document needs to be reworked if the total number of defects found exceeds the accepted level. Changes made to the document must be easy to identify during follow-up, so the author needs to indicate where changes were made.

6. Follow-Up :
After rework, the moderator must ensure that satisfactory action has been taken on all logged defects, improvement suggestions, and change requests; the moderator makes sure that the author has taken care of all the defects. In order to control, handle, and optimize the review process, the moderator collects a number of measurements at every step of the process. Examples of measurements include the total number of defects found, the number of defects found per page, and the overall review effort.

61. Write in brief about defect life cycle.


Defect life cycle, also known as Bug Life Cycle, is the journey that a defect goes through during its lifetime. It varies from organization to organization and also from project to project, as it is governed by the software testing process and also depends upon the tools used.

Defect Life Cycle or Bug Life Cycle in software testing is the specific set of states that
defect or bug goes through in its entire life. The purpose of Defect life cycle is to
easily coordinate and communicate current status of defect which changes to various
assignees and make the defect fixing process systematic and efficient.
Defect States
#1) New: This is the first state of a defect in the Defect Life Cycle. When any new
defect is found, it falls in a ‘New’ state, and validations & testing are performed on
this defect in the later stages of the Defect Life Cycle.
#2) Assigned: In this stage, a newly created defect is assigned to the development
team to work on the defect. This is assigned by the project lead or the manager of the
testing team to a developer.
#3) Open/Active: Here, the developer starts the process of analyzing the defect and
works on fixing it, if required.
If the developer feels that the defect is not appropriate then it may get transferred to
any of the below four states namely Duplicate, Deferred, Rejected, or Not a Bug-
based upon a specific reason. We will discuss these four states in a while.
#4) Fixed: When the developer finishes the task of fixing a defect by making the required changes, he marks the status of the defect as “Fixed”.
#5) Pending Retest: After fixing the defect, the developer assigns the defect to the tester
to retest the defect at their end, and until the tester works on retesting the defect, the state
of the defect remains in “Pending Retest”.
#6) Retest: At this point, the tester starts the task of retesting the defect to verify if the
defect is fixed accurately by the developer as per the requirements or not.
#7) Reopen: If any issue persists in the defect, then it will be assigned to the developer
again for testing and the status of the defect gets changed to ‘Reopen’.
#8) Verified: If the tester finds no issues in the defect after it has been reassigned to the developer and retested, and feels that the defect has been fixed accurately, then the status of the defect is set to ‘Verified’.
#9) Closed: When the defect does not exist any longer, then the tester changes the status
of the defect to “Closed”.

62. Write a short note on software reliability.

Software Reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specified period of time.

Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the input are free of error.

Software reliability is an essential facet of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a certain level of reliability, system developers tend to push complexity into the software layer, owing to the speedy growth of system size and the ease of doing so by upgrading the software.

63. What are quality improvement tools? List and explain any two.

Quality improvement tools are techniques, methodologies, or frameworks used by


organizations to analyze processes, identify problems, and implement solutions to enhance
product quality, efficiency, and overall performance. These tools assist in data analysis,
problem-solving, decision-making, and process optimization. Here are explanations of two
common quality improvement tools:

1. Pareto Analysis:
 Explanation: The Pareto Principle, also known as the 80/20 rule, suggests
that roughly 80% of effects come from 20% of causes. Pareto Analysis helps
identify and prioritize the most significant factors contributing to a problem.
 How It Works: Data related to defects, issues, or problems are collected and
categorized. A Pareto chart is created, displaying the frequency or impact of
each category in descending order. This chart helps identify the vital few (the
most significant issues causing the majority of problems) versus the trivial
many.
 Example: In software development, if defects are categorized by type (e.g., functionality, usability, performance), a Pareto chart can highlight which types of defects contribute most to overall issues, allowing teams to prioritize efforts for maximum impact. (A small Pareto sketch in code follows this list.)
2. Root Cause Analysis (RCA):
 Explanation: RCA is a problem-solving technique used to identify the
underlying causes of issues or problems rather than just addressing symptoms.
It helps prevent recurrence by tackling the fundamental reasons for problems.
 How It Works: RCA involves a systematic approach of investigating and
analyzing problems to determine their root causes. Techniques such as the "5
Whys" (repeatedly asking "why" to trace problems to their origins) or fishbone
diagrams (Ishikawa or cause-and-effect diagrams) are used to map out and
understand cause-and-effect relationships leading to the issue.
 Example: If a software application frequently crashes, RCA might involve
identifying multiple potential causes such as coding errors, resource
constraints, or hardware issues. The 5 Whys technique could be employed to
dig deeper into each cause until the core issue causing the crashes is
uncovered.
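To make the Pareto chart idea concrete, here is a minimal sketch that ranks hypothetical defect categories by count and prints each category's cumulative share, exposing the "vital few":

# Pareto-analysis sketch: rank hypothetical defect categories by count
# and show each category's cumulative share of all defects.
counts = {"functionality": 58, "usability": 22, "performance": 12, "docs": 8}
total = sum(counts.values())

cumulative = 0
for category, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += n
    print(f"{category:14s} {n:3d}  cumulative {100 * cumulative / total:5.1f}%")
# Here the top category alone accounts for 58% of all defects.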

Other commonly used quality improvement tools include:

 Histograms: Graphic representations of data distributions, displaying frequencies or


counts of specific characteristics.
 Scatter Diagrams: Used to visualize relationships or correlations between two
variables.
 Control Charts: Monitoring tools to track process variations and identify trends,
outliers, or patterns over time.
 Six Sigma Tools: Various statistical and analytical tools used within the Six Sigma
methodology for process improvement and variation reduction.

64. Explain scatter diagrams in details.


Scatter Diagrams

The simplest way to determine if a cause-and-effect relationship exists between


two variables is to plot a scatter diagram. Consider, for example, the relationship between automotive speed and gas mileage: such a diagram shows that as speed increases, gas mileage decreases. Automotive speed is plotted
on the x-axis and is the independent variable. The independent variable is usually
controllable. Gas mileage is on the y-axis and is the dependent, or response,
variable. Other examples of relationships are as follows:
 Cutting speed and tool life.
 Temperature and lipstick hardness.
 Striking pressure and electrical current.
 Temperature and percent foam in soft drinks.
 Yield and concentration.
 Training and errors.
 Breakdowns and equipment age.
 Accidents and years with the organization.

65. Short note on six sigma and kaizen.


Six Sigma : Six Sigma is the process of producing high and improved quality output.
This can be done in two phases – identification and elimination. The cause of defects
is identified and appropriate elimination is done which reduces variation in whole
processes. A Six Sigma method is one in which 99.99966% of all the products produced have the same features and are free from defects.

Characteristics of Six Sigma:


The Characteristics of Six Sigma are as follows:

1. Statistical Quality Control:


Six Sigma is derived from the Greek letter σ, which denotes standard deviation in statistics. Standard deviation is used for measuring the quality of output.
2. Methodical Approach:
Six Sigma is a systematic approach applied through DMAIC and DMADV to improve the quality of production. DMAIC stands for Define-Measure-Analyze-Improve-Control, while DMADV stands for Define-Measure-Analyze-Design-Verify.

3. Fact and Data-Based Approach:


The statistical and methodical method shows the scientific basis of the
technique.
4. Project and Objective-Based Focus:
The Six Sigma process is implemented through well-defined projects that focus on specific requirements and measurable objectives.
Kaizen : Kaizen is a Japanese word for the philosophy that defines management’s role in continuously encouraging and implementing small improvements involving everyone. It is the process of continuous improvement in small increments that makes the process more efficient, effective, under control, and adaptable. Improvements are usually accomplished at little or no expense, without sophisticated techniques or expensive equipment. It focuses on simplification by breaking down complex processes into their sub-processes and then improving them.

66. Explain cause and effect diagrams.


Cause-And-Effect (C&E) Diagram

A cause-and-effect (C&E) diagram is a picture composed of lines and symbols


designed to represent a meaningful relationship between an effect and its causes. It
was developed by Dr. Kaoru Ishikawa in 1943 and is sometimes referred to as an
Ishikawa diagram or a fishbone diagram because of its shape. C&E diagrams are
used to investigate either a “bad” effect, to take action to correct its causes, or a “good” effect, to learn which causes are responsible. For every effect, there are likely to be numerous causes. A C&E diagram places the effect on the right and the causes on the left. The effect is the quality characteristic that needs improvement. Causes are sometimes broken down into the major causes of work methods, materials, measurement, people, equipment, and the environment.

Each major cause is further subdivided into numerous minor causes. For example,
under work methods, we might have training, knowledge, ability, physical
characteristics, and so
forth. C&E diagrams are the means of picturing all these major and minor causes. A typical example is a C&E diagram for house paint peeling, drawn using four major causes.
The first step in the construction of a C&E diagram is for the project team to identify the
effect or quality problem. It is placed on the right side of a large piece of paper by the
team leader. Next, the major causes are identified and placed on the diagram. Determining
all the minor causes requires brainstorming by the project team. Brainstorming is an idea
generating technique that is well suited to the C&E diagram. It uses the creative thinking
capacity of the team.
67. Explain run charts.
RUN CHART

A run chart, which is shown in Figure D, is a very simple technique for analyzing the
process in the development stage or, for that matter, when other charting techniques
are not applicable. The important point is to draw a picture of the process and let it
“talk” to you. A picture is worth a thousand words, provided someone is listening.
Plotting the data points is a very effective way of finding out about the process. This
activity should be done as the first step in data analysis. Without a run chart, other data
analysis tools—such as the average, sample standard deviation, and histogram—can
lead to erroneous conclusions.
The particular run chart shown in Figure D is referred to as an X̄ (X-bar) chart and is used to
record the variation in the average value of samples. Other charts, such as the R chart
(range) or p chart (proportion) would have also served for explanation purposes. The
horizontal axis is labeled “Subgroup Number,” which identifies a particular sample
consisting of a fixed number of observations. These subgroups are plotted by order of
production, with the first one inspected being 1 and the last one on this chart being 25.
The vertical axis of the graph is the variable, which in this particular case is weight
measured in kilograms.
Each small solid diamond represents the average value within a subgroup. Thus, subgroup
number 5 consists of, say, four observations, 3.46, 3.49, 3.45, and 3.44, and their average
is 3.46 kg. This value is the one posted on the chart for subgroup number 5. Averages are
used on control charts rather than individual observations because average values will
indicate a change in variation much faster. Also, with two or more observations in a
sample, a measure of the dispersion can be obtained for a particular subgroup.
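
As a hedged illustration, the sketch below (Python with matplotlib, using made-up
subgroup data) plots subgroup averages the way an X̄ run chart does; only subgroup 5
reuses the observations quoted in the example above:

import matplotlib.pyplot as plt

# Each inner list is one subgroup of four weight observations (kg); values are illustrative.
subgroups = [
    [3.48, 3.45, 3.47, 3.46],
    [3.50, 3.49, 3.46, 3.47],
    [3.44, 3.45, 3.47, 3.43],
    [3.47, 3.48, 3.46, 3.49],
    [3.46, 3.49, 3.45, 3.44],  # subgroup 5 from the example: average 3.46 kg
]

averages = [sum(s) / len(s) for s in subgroups]  # one plotted point per subgroup

plt.plot(range(1, len(averages) + 1), averages, marker="D")
plt.xlabel("Subgroup Number")
plt.ylabel("Average Weight (kg)")
plt.title("X-bar Run Chart")
plt.show()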

68. What is defect? List and explain common types of defect.


Defects are defined as a deviation between the actual and expected results of a system or
software application. Defects can also be defined as any deviation or irregularity from
the specifications mentioned in the product functional specification document.
Types of Defects: Following are some of the basic types of defects in the software
development:
1. Arithmetic Defects: These include defects made by the developer in an
arithmetic expression, or mistakes in finding the solution of such an arithmetic
expression. Defects of this type are usually introduced by the programmer due to
excess work or insufficient knowledge. Code congestion may also lead to arithmetic
defects, as the programmer is unable to properly review the written code.
2. Logical Defects: Logical defects are mistakes in the implementation of the
code. They occur when the programmer does not understand the problem
clearly or reasons about it in a wrong way. Logical defects also arise when the
programmer does not take care of corner cases while implementing the code
(see the sketch after this list). They relate to the core logic of the software.
3. Syntax Defects: Syntax defects are mistakes in the writing style of the code,
including the small slips a developer makes while typing it. Developers often
introduce syntax defects when small symbols are omitted. For example, while
writing code in C++ there is a possibility that a semicolon (;) is omitted.
4. Multithreading Defects: Multithreading means running or executing multiple
tasks at the same time. Multithreaded code is therefore prone to complex
debugging, and conditions such as deadlock and starvation can arise that may
lead to the system's failure.
5. Interface Defects: Interface defects are defects in the interaction between the
software and its users. They can appear in forms such as a complicated
interface, an unclear interface, or a platform-dependent interface.
6. Performance Defects: Performance defects occur when the system or software
application is unable to meet the desired and expected results, that is, when it
does not fulfill the user's performance requirements. This category also covers
the response of the system under varying load.
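
As referenced under logical defects above, here is a minimal sketch (hypothetical
Python, not tied to any real product) of a corner-case logical defect and its fix:

def average(values):
    # Logical defect: the empty-list corner case is not handled,
    # so average([]) raises ZeroDivisionError.
    return sum(values) / len(values)

def average_fixed(values):
    # Corrected version: the corner case is handled explicitly.
    if not values:
        return 0.0
    return sum(values) / len(values)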

69. Explain Rate of occurrence of failure.


The Rate of Occurrence of Failure (ROCOF) is a metric used in reliability engineering and
risk assessment to measure the frequency or likelihood of failures within a system or a
component over a specific period. It represents the number of failures that occur within a
given time frame, typically expressed as failures per unit of time, such as failures per hour,
day, month, or year.

ROCOF is essential in assessing and predicting the reliability, availability, and
maintainability of systems, especially in critical industries like aerospace, manufacturing,
telecommunications, and healthcare. It helps in understanding the failure patterns, estimating
the probability of failures, and planning maintenance strategies to minimize downtime and
disruptions.

The formula to calculate the Rate of Occurrence of Failure is:

ROCOF = Number of Failures / Total Operating Time

Where:

 Number of Failures: The count of failures experienced within a specified period.


 Total Operating Time: The duration or total time the system/component was in
operation during that specific period.
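
As a minimal calculation sketch (hypothetical figures, Python):

def rocof(number_of_failures, total_operating_hours):
    # Failures per hour of operation.
    return number_of_failures / total_operating_hours

# Example: 4 failures over 2,000 operating hours -> 0.002 failures per hour.
print(rocof(4, 2000))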

70. Explain Probability of Failure on Demand.


Probability of Failure on Demand (PFD) is a quantitative measure used in risk assessment,
particularly in the context of safety-critical systems and components. It represents the
likelihood or probability that a safety system or a safety instrumented function will fail to
perform its intended safety function when demanded upon to do so.

PFD is a crucial metric within the framework of functional safety, especially in industries
such as automotive, aerospace, process control, and healthcare, where the reliability of safety
systems is paramount to prevent hazardous or dangerous situations.

PFD is typically used in conjunction with Safety Integrity Levels (SILs) as defined by
standards such as IEC 61508 (for general industries) or IEC 61511 (for the process industry).
SILs categorize the safety integrity requirements of safety instrumented systems, with SIL 1
representing the lowest and SIL 4 the highest level of safety integrity.

The Probability of Failure on Demand is calculated based on the following formula:


PFD = Total number of dangerous failures in a given period / Total number of demands
made on the safety system during that period

Where:

 Total number of dangerous failures: The number of failures that lead to the loss of the
safety function.
 Total number of demands: The total number of times the safety system or function is
expected to perform its safety function within a specific period.
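
A minimal calculation sketch (hypothetical figures, Python); the SIL band quoted in the
comment assumes IEC 61508 low-demand mode:

def pfd(dangerous_failures, demands):
    # Probability that the safety function fails when it is demanded.
    return dangerous_failures / demands

# Example: 2 dangerous failures over 10,000 demands -> PFD = 0.0002,
# which falls in the SIL 3 band (10^-4 <= PFD < 10^-3) for low-demand mode.
print(pfd(2, 10_000))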

71. What is TQM?


Total Quality Management (TQM) is a management approach in which the entire
organization focuses on continuously improving its processes, products, and services so
that customer requirements are consistently met. TQM is often contrasted with
reengineering.
TQM or Reengineering
Reengineering: Reengineering is the fundamental rethinking and radical redesign of
business processes to achieve dramatic improvements in critical measures of
performance. Many practitioners believe that TQM is associated with only incremental
improvements. Nothing could be further from the truth—for many years, the Malcolm
Baldrige National Quality Award has defined continuous improvement as referring to
both incremental and “breakthrough” improvement. The Japanese have not only relied on
kaizen but have developed policy management (hoshin kanri) and policy deployment
(hoshin tenkai) in large part to produce the kind of large-scale breakthroughs that
Hammer and Champy promote. Nor is this concept uniquely Japanese. Joseph Juran has
had a long-standing emphasis on breakthrough efforts aimed at achieving unprecedented
levels of performance.
72. Explain pareto diagram with example.
A Pareto Diagram, also known as a Pareto Chart, is a type of chart that combines both bar
and line graphs. It visually represents the frequency or impact of various factors, issues,
or categories in descending order, highlighting the most significant contributors to a
problem or situation. The Pareto principle, often known as the 80/20 rule, suggests that
roughly 80% of effects come from 20% of causes.

Construction of a Pareto diagram is very simple. There are five steps:

1. Determine the method of classifying the data: by problem, cause,
nonconformity, and so forth.
2. Decide if dollars (best), frequency, or both are to be used to rank the
characteristics.

3. Collect data for an appropriate time interval or use historical data.

4. Summarize the data and rank order categories from largest to smallest.

5. Construct the diagram and find the vital few (a calculation sketch follows the examples below).

Examples of the vital few are:


· A few customers account for the majority of sales.
· A few processes account for the bulk of the scrap or rework cost.
· A few nonconformities account for the majority of customer complaints.
· A few suppliers account for the majority of rejected parts.
· A few problems account for the bulk of the process downtime.
· A few products account for the majority of the profit.
· A few items account for the bulk of the inventory cost.
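
As a hedged illustration (hypothetical nonconformity counts, Python), the sketch below
rank-orders the categories and computes the cumulative percentage used to spot the vital few:

# Hypothetical nonconformity counts by category.
counts = {"Documentation": 12, "Soldering": 40, "Paint": 9, "Assembly": 27, "Packaging": 5}

# Rank order categories from largest to smallest (step 4).
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(counts.values())

cumulative = 0
for category, count in ranked:
    cumulative += count
    print(f"{category:15s} {count:3d} {100 * cumulative / total:6.1f}% cumulative")
# The first one or two rows (the vital few) typically account for most of the total.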

73. Discuss formal technical review in details.


Formal Technical Review (FTR) is a software quality control activity performed by
software engineers.
Objectives of formal technical review (FTR):
 Useful to uncover errors in logic, function, and implementation for any representation
of the software.
 The purpose of FTR is to verify that the software meets specified requirements.
 To ensure that software is represented according to predefined standards.
 It helps to ensure that software is developed in a uniform manner.
 To make the project more manageable.
In addition, FTR serves as a training ground, enabling junior engineers to observe the
analysis, design, coding, and testing approach more closely. FTR also promotes backup
and continuity, because a number of people become familiar with parts of the software
they might not have otherwise seen. In practice, FTR is a class of reviews that includes
walkthroughs, inspections, round-robin reviews, and other small-group technical
assessments of software. Each FTR is conducted as a meeting and is considered
successful only if it is properly planned, controlled, and attended.
Example:
Suppose that during development without FTR the design costs 10 units, coding costs
15 units, and testing costs 10 units, so the total cost so far is 35 units excluding
maintenance. If a quality issue caused by bad design then forces a redesign, the final
cost can double to 70 units. Catching the design problem in a review avoids this
rework, which is why FTR is so helpful while developing the software.

74. Explain the steps of defect management process.


Stages of DMP :
There are different stages of DMP that take place, as given below :
1. Defect Prevention :
Eliminating defects at an early stage is one of the best ways to reduce their
impact. At an early stage, fixing or resolving defects requires less cost, and the
impact can also be minimized. At a later stage, finding defects and then fixing
them requires very high cost, and the impact of a defect can also be greater. It is
not possible to remove all defects, but we can at least try to reduce their effects
and the cost required to fix them. This process improves the quality of software
by removing defects at an early stage and also increases productivity by
preventing the injection of defects into the software product.
2. Deliverable Baseline :
When a deliverable such as a product or document reaches its pre-defined
milestone, the deliverable is considered baselined. A pre-defined milestone
generally defines what the project or software is supposed to achieve. Any
failure to meet a pre-defined milestone means that the project is not proceeding
to plan and generally triggers corrective action by management. Once a
deliverable is baselined, further changes are controlled.
3. Defect Discovery :
Discovering defects at an early stage is very important; later on, they can cause
much greater damage. A defect is only considered "discovered" once developers
have acknowledged it to be a valid one.
4. Defect Resolution :
The defect is resolved and fixed by the developers, and the fix is then delivered
back to the place where the defect was initially identified so it can be verified.
5. Process Improvement :
All identified defects cause some impact on the system, and even defects with a
low impact should not be dismissed as unimportant. For process improvement,
each identified defect needs to be analyzed and fixed. The process in which the
defect occurred should be identified and analyzed so that we can determine
ways to improve the process and prevent any future occurrence of similar
defects.
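
As a minimal sketch (hypothetical states, Python) of how a defect record might move
through such a process:

from enum import Enum

class DefectState(Enum):
    NEW = "new"
    ACKNOWLEDGED = "acknowledged"  # discovery: valid once developers accept it
    FIXED = "fixed"                # resolution
    VERIFIED = "verified"
    CLOSED = "closed"              # analyzed afterwards for process improvement

# Allowed transitions; anything else is rejected.
TRANSITIONS = {
    DefectState.NEW: {DefectState.ACKNOWLEDGED},
    DefectState.ACKNOWLEDGED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.VERIFIED},
    DefectState.VERIFIED: {DefectState.CLOSED},
    DefectState.CLOSED: set(),
}

def advance(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.name} -> {target.name}")
    return target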

75. What is the format of defect report? Explain


A typical defect report contains the following information, often maintained in an xls sheet.
1. Defect ID :
Nothing but a serial number of defects in the report.
2. Defect Description :
A short and clear description of the defect detected.
3. Action Steps :
The step-by-step actions the client or QA performed in the application that resulted in
the defect.
4. Expected Result :
What results are expected as per the requirements when performing the action steps
mentioned.
5. Actual Result :
What results are actually showing up when performing the action steps.
6. Severity :
The severity levels commonly used are:
1. Trivial –
A small bug that does not affect the usage of the software product.
2. Low –
A small bug that needs to be fixed but will not affect the performance of the
software.
3. Medium –
A bug that does affect performance, for example by being an obstacle to a
certain action even though there is another way to do the same thing.
4. High –
A bug that highly impacts the software, although there is a workaround to
successfully do what the bug prevents.
5. Critical –
Bugs that heavily impact the performance of the application, such as crashing
the system, freezing it, or requiring a restart for it to work properly.
7. Attachments :
A sequence of screenshots of performing the step by step actions and getting the
unexpected result. One can also attach a short screen recording of performing the steps and
encountering defects. Short videos help developers and/or QA to understand the bugs easily
and quickly.
8. Additional information :
The platform used, the operating system and version, and any other information that
describes the defect in detail, assisting the developer in understanding the problem and
fixing the code to get the desired results.
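
As a hedged illustration, a filled-in record might look like the following (hypothetical
values, expressed as a Python dictionary):

defect_report = {
    "defect_id": "D-101",
    "description": "Login button unresponsive after a failed password attempt",
    "action_steps": [
        "Open the login page",
        "Enter a valid username and an invalid password",
        "Click Login, correct the password, and click Login again",
    ],
    "expected_result": "User is logged in on the second attempt",
    "actual_result": "Login button does nothing; the page must be reloaded",
    "severity": "High",
    "attachments": ["screenshot_01.png", "screen_recording.mp4"],
    "additional_info": "Windows 11, Chrome 120, application build 2.4.1",
}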

76. List types of quality cost. Explain in details.


Quality costs refer to the expenses incurred by an organization due to deficiencies in its
products or services. These costs are categorized into various types that are associated with
prevention, appraisal, internal failure, and external failure. Here are the types of quality costs,
along with detailed explanations:

1. Prevention Costs:
 Definition: Prevention costs are expenses incurred to prevent defects or issues
from occurring in the first place.
 Examples: Training programs, quality planning, process improvements,
implementing quality management systems, supplier evaluations, design
reviews, and quality audits.
 Purpose: By investing in prevention activities, organizations aim to identify
and eliminate potential issues early in the development cycle, thereby reducing
the likelihood of defects and failures.
2. Appraisal Costs:
 Definition: Appraisal costs are expenses associated with evaluating and
assessing the product or service quality to ensure compliance with standards
and requirements.
 Examples: Inspection, testing, quality control measures, equipment
calibration, audits, and supplier evaluation.
 Purpose: Appraisal costs are incurred to identify defects, errors, or non-
conformities, ensuring that products or services meet specified quality
standards before reaching the customer.
3. Internal Failure Costs:
 Definition: Internal failure costs arise from defects or issues discovered
before delivering products or services to the customer, occurring within the
organization's internal processes.
 Examples: Rework, scrap, retesting, downtime due to defects, waste,
production delays, and corrective actions for issues found during
manufacturing or service delivery.
 Purpose: These costs represent the expenses incurred due to failures or
defects that impact the organization internally, before reaching the customer,
emphasizing the importance of early defect detection and prevention.
4. External Failure Costs:
 Definition: External failure costs result from defects or issues identified after
products or services have reached the customer or entered the market.
 Examples: Warranty claims, customer complaints, returns or recalls, product
replacements, legal costs, reputation damage, and lost sales opportunities due
to poor quality.
 Purpose: These costs reflect the impact of poor quality on the organization's
reputation, customer satisfaction, and financial losses incurred due to defects
discovered by customers.

77. How to measure quality cost?


Measuring quality costs involves assessing and quantifying the expenses incurred due to
quality-related activities or deficiencies in products or services. By measuring quality
costs, organizations can analyze their investments in quality management, identify areas
for improvement, and make informed decisions to enhance overall quality while reducing
unnecessary expenditures. Here are steps and methods to measure quality costs:
1. Categorize Costs: Identify and categorize quality costs into prevention, appraisal,
internal failure, and external failure costs, covering expenses related to avoiding,
identifying, and addressing quality issues.
2. Data Collection: Gather cost-related data from various sources, including financial
records, project reports, quality assurance records, customer complaints, warranty
claims, and product returns.
3. Cost Calculation: Calculate the total expenses for each cost category, ensuring
accuracy in capturing all relevant expenses incurred within the organization.
4. Compute Ratios and Indices: Express quality costs as a percentage of total sales or
compare against industry benchmarks to determine cost effectiveness and identify
areas for improvement (see the sketch after this list).
5. Analysis and Interpretation: Analyze cost data to discern patterns, trends, and areas
with high costs. Use this analysis to identify root causes and prioritize improvements.
6. Implement Improvement Actions: Implement corrective actions and process
enhancements based on the analysis to reduce internal and external failure costs and
increase investment in preventive measures.
7. Continuous Monitoring: Regularly monitor and review quality cost data to track
progress, measure the effectiveness of improvement initiatives, and adjust strategies
as needed to sustain quality improvements.
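
A minimal sketch of steps 3 and 4 (hypothetical cost figures, Python):

# Hypothetical annual quality costs by category.
costs = {
    "prevention": 40_000,
    "appraisal": 25_000,
    "internal_failure": 60_000,
    "external_failure": 35_000,
}
total_sales = 2_000_000

total_coq = sum(costs.values())
print(f"Total cost of quality: {total_coq}")
print(f"Cost of quality as % of sales: {100 * total_coq / total_sales:.1f}%")
for category, amount in costs.items():
    print(f"{category:18s} {100 * amount / total_coq:5.1f}% of quality cost")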

78. Explain the following: a) ISO b) ISO 9000 c) ISO 9000 series

a) ISO (International Organization for Standardization):

 ISO is an independent, non-governmental international organization that develops and
publishes international standards to ensure quality, safety, efficiency, and
interoperability of products, services, and systems worldwide.
 It creates standards covering various industries and sectors, providing guidelines,
specifications, and requirements for quality management, environmental management,
information security, and more.
 ISO standards are developed through a consensus-based approach involving experts,
stakeholders, and national standards bodies from different countries to create globally
accepted benchmarks for best practices.

b) ISO 9000:

 ISO 9000 is a series of standards developed by the International Organization for
Standardization (ISO) related to quality management systems (QMS).
 The ISO 9000 family of standards provides guidelines and frameworks for
establishing, implementing, maintaining, and continually improving quality
management within organizations.
 It consists of several standards, including ISO 9001 (QMS requirements), ISO 9004
(QMS guidelines for performance improvement), ISO 9000 (fundamentals and
vocabulary related to QMS), among others.

c) ISO 9000 Series:

 The ISO 9000 series encompasses a set of standards within the ISO 9000 family that
collectively address various aspects of quality management systems (QMS).
 This series includes ISO 9001, ISO 9004, ISO 9000, and other related standards that
provide guidance on quality management principles, requirements, guidelines for
performance improvement, and terminology.
 ISO 9001 is the most well-known and widely used standard within the ISO 9000
series. It specifies the requirements for a QMS that organizations can use to
demonstrate their ability to consistently provide products and services that meet
customer and regulatory requirements.
79. What is the measure of reliability and availability? Explain.

Explanation:

 Reliability: Measures the probability that a system or component will function without
failure. It focuses on the likelihood of failure-free operation over a specified time,
indicating the system's robustness and consistency.
 Availability: Reflects the system's ability to remain operational and accessible for use
when needed. It considers both planned and unplanned downtime, indicating how
reliably the system can be accessed during its operational period.
Both reliability and availability metrics are crucial in assessing the performance and
dependability of systems, guiding maintenance schedules, optimizing operational
efficiency, and ensuring continuity of services or production. High reliability ensures
fewer failures, while high availability ensures the system is accessible and operational
when required. Organizations often strive to achieve high reliability and availability to
meet user expectations, minimize disruptions, and maximize productivity.
1. Reliability:
 Measure: Reliability refers to the probability that a system, equipment, or
component will perform its intended functions without failure over a specific
period in a given environment.
 Calculation: It is often measured as the probability of functioning without
failure for a specified duration, represented as MTBF (Mean Time Between
Failures) or as a failure rate (e.g., failures per unit of time).
 Example: If a machine operates continuously for 500 hours without any
failure, its MTBF for that period would be 500 hours.
2. Availability:
 Measure: Availability represents the proportion of time a system is
operational and accessible for use when required during a given period.
 Calculation: Availability is typically calculated as the ratio of uptime (time
system is operational) to the sum of uptime and downtime (time system is
unavailable due to failures or maintenance).
 Example: If a system operates for 900 hours out of 1,000 hours in total
(including downtime), its availability would be 900/1000 = 90%.
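
A minimal sketch of the availability calculation (Python), reusing the example above:

def availability(uptime_hours, downtime_hours):
    # Availability = uptime / (uptime + downtime).
    return uptime_hours / (uptime_hours + downtime_hours)

# 900 operational hours out of 1,000 total hours (100 hours of downtime).
print(f"{availability(900, 100):.0%}")  # 90%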

80. What are the advantages of ISO 9000 standards?


Some of the key advantages include:

1. Enhanced Quality Management: ISO 9000 standards provide a structured
framework for establishing and maintaining robust quality management systems
(QMS). This helps organizations streamline processes, improve product/service
quality, and enhance customer satisfaction.
2. Global Recognition and Credibility: Achieving ISO 9000 certification demonstrates
an organization's commitment to meeting internationally recognized quality standards.
It enhances the organization's credibility in the global marketplace, increasing trust
among customers, partners, and stakeholders.
3. Improved Consistency and Efficiency: Implementation of ISO 9000 leads to
standardized processes and procedures. This promotes consistency in operations,
reduces errors, enhances efficiency, and facilitates better resource utilization.
4. Customer Satisfaction: By focusing on meeting customer requirements and
continuously improving processes, ISO 9000 standards contribute to higher customer
satisfaction. Meeting or exceeding customer expectations leads to increased loyalty
and positive brand perception.
5. Risk Management and Compliance: ISO 9000 standards emphasize risk-based
thinking and compliance with legal and regulatory requirements. Organizations can
better identify, assess, and mitigate risks, ensuring compliance while reducing
potential legal and financial risks.
6. Facilitates Continuous Improvement: The standards encourage a culture of
continual improvement by implementing the Plan-Do-Check-Act (PDCA) cycle. This
cycle fosters a systematic approach to identifying areas for improvement,
implementing changes, and monitoring outcomes for further enhancements.

81. Discuss any 5 guidelines for formal technical review.


Guidelines for conducting formal technical reviews should be established in advance.
These guidelines must be distributed to all reviewers, agreed upon, and then followed. A
review that is uncontrolled can often be worse than no review at all. The following is a
minimum set of guidelines for FTR.
1. Review the product, not the producer: An FTR involves people and egos.
Conducted properly, the FTR should leave all participants with a good feeling
of accomplishment; conducted improperly, it can single out the producer and
take on an accusatory tone. The tone of the meeting should be loose and
constructive, and the review team leader should ensure that proper
communication is maintained and halt any review that gets out of control.
2. Set an agenda and maintain it: A key problem with meetings of all types is
drift. An FTR must be kept on track and on schedule. The review leader
establishes and maintains the meeting schedule and should not be afraid to
nudge people when drift sets in.
3. Limit debate and rebuttal: When an issue is raised by a reviewer, there may
not be universal agreement on its impact. Rather than spending time debating
the question, the issue should be recorded for further discussion off-line.
4. Enunciate problem areas, but don’t attempt to solve every problem
noted: A review is not a problem-solving session. The solution to a problem can
often be accomplished by the producer alone or with the help of only one other
individual; problem-solving should be postponed until after the review
meeting.
5. Take written notes (record purpose): It is a good idea to take notes on a
wallboard so that wording and priorities can be assessed by the other reviewers
as the information is recorded.

82. What are the elements of software reliability? State factors affecting it.

Software reliability encompasses various elements that contribute to the dependable and
consistent performance of software systems. Some key elements or factors influencing
software reliability include:

1. Correctness: The extent to which the software performs its intended functions
accurately and without errors.
2. Robustness: The software's ability to withstand unexpected inputs, error conditions,
or adverse situations without failing or crashing.
3. Fault Tolerance: The software's capability to continue operating or recover
gracefully from failures, ensuring minimal impact on the system and users.
4. Availability: The proportion of time the software is operational and accessible for use
when required, considering downtime due to failures, maintenance, or updates.
5. MTBF (Mean Time Between Failures): The average time interval between two
consecutive software failures during operation.
6. MTTF (Mean Time To Failure): The average time expected until a software
component or system experiences its first failure.
7. MTTR (Mean Time To Repair/Recovery): The average time taken to repair or
restore the software after a failure.
8. Reliability Growth: The process of improving software reliability over time through
defect identification, fixing, and system enhancements.

Factors affecting software reliability:

1. Software Complexity: Higher complexity leads to increased chances of errors or
bugs, impacting reliability. Complex interactions among modules or components may
introduce vulnerabilities.
2. Software Development Process: The methodology, practices, and rigor of the
development process significantly affect reliability. Processes emphasizing quality
assurance, testing, and defect management tend to produce more reliable software.
3. Testing and Quality Assurance: The effectiveness of testing strategies, test
coverage, and the thoroughness of quality assurance efforts influence software
reliability.
4. Software Maintenance: Regular updates, patches, and maintenance activities impact
reliability. Poorly managed maintenance may introduce new defects or affect system
stability.
5. External Dependencies: Reliability can be affected by external factors like hardware
compatibility, third-party libraries, or interfaces with other systems.
6. Usage Environment: Variations in the operating environment, such as different
platforms, configurations, or user behaviors, may affect software reliability.
7. Documentation and User Support: Clarity of documentation and the availability of
support resources impact how users interact with the software, affecting reliability
through correct usage.
8. Resource Constraints: Limitations in resources such as memory, processing power,
or network bandwidth might impact software reliability under certain conditions.

83. Write in brief any three-reliability metrics.


Here are brief explanations of three commonly used reliability metrics in software
engineering:

1. Mean Time Between Failures (MTBF):


 Definition: MTBF refers to the average time interval between two
consecutive failures experienced by a system, component, or software
application during operation.
 Calculation: It is calculated by dividing the total operating time by the
number of failures that occurred within that period.
 Purpose: MTBF is used to estimate the expected time duration between
failures, providing insights into the system's reliability and helping in
maintenance planning and system design improvements.
2. Mean Time To Failure (MTTF):
 Definition: MTTF represents the average time expected until the first failure
of a software component, system, or device.
 Calculation: It is calculated by dividing the cumulative operating time by the
number of observed failures.
 Purpose: MTTF helps in predicting the expected reliability of a system or
component during normal operation. It aids in reliability modeling, risk
assessment, and determining the expected lifespan of the software.
3. Mean Time To Repair/Recovery (MTTR):
 Definition: MTTR signifies the average time taken to repair or restore a failed
system, component, or software to operational status after a failure.
 Calculation: It is calculated by dividing the total downtime due to failures by
the number of failures that occurred within that period.
 Purpose: MTTR measures the system's maintainability and how quickly
failures can be addressed, aiding in assessing the efficiency of the maintenance
process and minimizing downtime.
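
A minimal sketch of these three metrics (hypothetical figures, Python):

def mtbf(operating_hours, failures):
    # Average operating time between consecutive failures.
    return operating_hours / failures

def mttr(total_downtime_hours, failures):
    # Average time to repair after a failure.
    return total_downtime_hours / failures

# Hypothetical month: 720 hours total, 3 failures, 6 hours of downtime.
print(f"MTBF: {mtbf(720 - 6, 3):.1f} hours")  # operating time excludes downtime
print(f"MTTR: {mttr(6, 3):.1f} hours")
# Availability can then be derived as MTBF / (MTBF + MTTR).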

84. How to use defect for process improvement.


1. Defect Identification and Documentation:
 Gather data on identified defects through testing, customer feedback, or
quality assurance activities.
 Document each defect systematically, including its nature, root cause, impact,
and where it occurred in the development or operational process.
2. Defect Analysis and Categorization:
 Analyze defects to identify patterns, trends, or commonalities among them.
 Categorize defects based on type, severity, frequency, and the phase of the
development or production process where they occur.
3. Root Cause Analysis (RCA):
 Conduct Root Cause Analysis (RCA) to determine the underlying reasons for
the identified defects.
 Use techniques such as the 5 Whys, Fishbone (Ishikawa) diagrams, or Pareto
analysis to uncover the primary causes leading to defects.
4. Identify Process Weaknesses:
 Evaluate the processes and workflows associated with the identified defects.
 Determine any weaknesses, inefficiencies, or gaps in the processes that
contribute to defect occurrence.
5. Implement Corrective Actions:
 Develop and implement corrective actions or process improvements based on
the identified root causes.
 Focus on addressing the underlying issues to prevent similar defects from
recurring.
6. Test and Validate Changes:
 Test and validate the implemented process improvements or changes in a
controlled environment.
 Ensure that the modifications effectively address the identified root causes
without introducing new issues.
7. Measure and Monitor Results:
 Track and measure the impact of process improvements on defect reduction or
elimination.
 Monitor metrics related to defect rates, defect density, or customer reported
issues to gauge improvement.
8. Continuous Improvement and Feedback Loop:
 Foster a culture of continuous improvement by regularly reviewing processes
and incorporating feedback.
 Encourage team collaboration and communication to share lessons learned and
apply improvements across the organization.
9. Documentation and Knowledge Sharing:
 Document all process improvements, actions taken, and outcomes for future
reference.
 Share insights gained from defect analysis and process improvement efforts
across teams to facilitate learning and prevent similar issues in other areas.

85. Discuss how reliability changes over the lifetime of a software product and a hardware
product.
86. Explain test case template. Design test case for login page.
87. Explain top-down integration testing.
88. Explain bottom-up integration testing.
89. What are the various approaches of integration testing and the challenges
90. Discuss types of software quality factors.
91. Explain the concept of quality.
