UNIT 4 Answers

This document covers key concepts in software testing, including definitions of debugging, testing, and various methodologies such as unit testing, integration testing, and black-box testing. It also discusses software quality assurance, reliability, and the importance of testing strategies in object-oriented software development. Additionally, it details unit testing procedures and considerations, as well as white-box testing techniques like basis path testing.

UNIT – IV

Short Questions: 2M
1. What is debugging?
Sol: Debugging is the process of finding and fixing errors or bugs in the
source code of any software.

2. Define Testing? Or Software Testing?

Sol: Testing is a process of evaluating whether a system, device, or software meets its
objectives. It can also be used to manage risk.

Software testing is a method to assess the functionality of a software program.
The process checks whether the actual software matches the expected
requirements and ensures the software is bug-free. The purpose of software
testing is to identify errors, faults, or missing requirements in contrast to the
actual requirements. It mainly aims at measuring the specification,
functionality, and performance of a software program or application.

3. List out the types of Testing methodologies?

Functional Testing:

 Unit Testing

 Integration Testing

 System Testing

 Acceptance Testing

 Alpha Testing

 Beta Testing

Non-Functional Testing:

 Performance Testing

o Load Testing

o Stress Testing

 Security Testing

 Usability Testing

 Compatibility Testing
 Ad-hoc Testing

 Documentation Testing

 Sanity Testing

 Black-box testing

 White-box testing

4. Define Code review?

Sol: Code review is a systematic process where one or more developers review the
code written by another developer.

5. Define Integration testing?

Sol: Integration testing is the process of testing the interface between two
software units or modules. It focuses on determining the correctness of the
interface. (Or)

Integration testing is a software testing technique that focuses on verifying the
interactions and data exchange between different components or modules of a
software application. The goal of integration testing is to identify any problems
or bugs that arise when different components are combined and interact with
each other. Integration testing is typically performed after unit testing and
before system testing. It helps to identify and resolve integration issues early in
the development cycle, reducing the risk of more severe and costly problems
later on.

6. What is smoke testing?

Sol: Smoke testing is a quick way to check if the core functions of a software
application are working as expected. It's also known as build verification testing or
confidence testing.

7. Define White box Testing?

Sol: White box testing techniques analyze the internal structures of the software:
the data structures used, the internal design, the code structure, and the working
of the software, rather than just the functionality as in black box testing.

It is also called glass box testing, clear box testing, or structural testing. White
box testing is also known as transparent testing or open box testing.
8. What are the Program analysis tools?

Program analysis tools in software engineering are used to examine and evaluate
programs to ensure they are efficient, correct, maintainable, and free of bugs.
These tools help developers understand code behavior, detect errors, optimize
performance, and ensure code quality. They can be categorized into different
types based on the type of analysis they perform.

 Static Analysis Tools

 Dynamic Analysis Tools

 Code Coverage Tools

 Performance Profilers

 Security Analysis Tools

 Code Refactoring Tools

 Dependency Analysis Tools

 Model Checking Tools

 Memory Analyzers

 Build and Continuous Integration Tools

9. What is Black box testing?

Black-box testing is a type of software testing in which the tester is not
concerned with the software's internal knowledge or implementation details but
rather focuses on validating the functionality based on the
provided specifications or requirements.

10. What is unit testing?

Unit testing is the process of testing the smallest parts of your code, like
individual functions or methods, to make sure they work correctly. It’s a key
part of software development that improves code quality by testing each unit
in isolation.

11. Define SQA?

Software Quality Assurance (SQA) is the practice of monitoring all software
engineering processes, activities, and methods used in a project to ensure
proper quality of the software and conformance against the defined standards.

12. Write about Some important quality standards?

Quality standards are essential benchmarks that ensure products, services, or
processes meet consistent and acceptable levels of performance, safety, and
reliability.
 ISO 9001 (Quality Management Systems)

 ISO 14001 (Environmental Management Systems)

 ISO 45001 (Occupational Health and Safety Management Systems)

 Six Sigma (Process Improvement)

 ISO 22000 (Food Safety Management Systems)

 ISO/IEC 27001 (Information Security Management Systems)

 CMMI (Capability Maturity Model Integration)

 IATF 16949 (Automotive Quality Management System)

 SA8000 (Social Accountability Standard)

 Lean Manufacturing (Efficiency and Waste Reduction)

13. Define Software reliability?

In software engineering, software reliability refers to the probability of a software
system operating without failures for a specified period in a specified
environment. It's a customer-oriented view of software quality, focusing on the
software's operational dependability rather than its design or structure.

Topic: Coding And Testing: Coding, Code review, Software documentation,
Testing, Black-box testing, White-Box testing, Debugging, Program analysis
tools, Integration testing, Testing object-oriented programs, Smoke testing,
and Some general issues associated with testing.

42. Differentiate between black box testing and white box testing?
Sol:
Black Box Testing vs. White Box Testing:

1. Black box testing is a software testing method in which the program or its
internal structure stays hidden, and the tester has no knowledge about it.
White box testing is a method in which the tester knows about the code or the
internal structure of the program involved.

2. Black box testing is mostly done by software testers. White box testing is
mostly done by software developers.

3. In black box testing, the user does not require knowledge of the
implementation. In white box testing, knowledge of the implementation is
required.

4. Black box testing is a form of external or outer software testing. White box
testing is the inner or internal way of software testing.

5. In black box testing, the tester cannot access the source code of the
software. In white box testing, the tester is aware of the source code and
internal workings of the software.

6. Black box testing is conducted at the software interface, with no concern for
the software's internal logical structure. White box testing is conducted by
ensuring that all internal operations take place according to the specifications.

7. Black box testing tests the functions of the software. White box testing tests
the structure of the software.

8. Black box testing can be initiated on the basis of the requirement
specification document. White box testing can only start after a detailed design
document is available.

9. Black box testers do not require knowledge of programming. White box
testers mandatorily require knowledge of programming.

10. Black box testing assesses software behavior. White box testing assesses
software logic.

11. Higher levels of software testing generally involve black box testing. Lower
levels of software testing usually involve white box testing.

12. Black box testing is also called closed testing. White box testing is also
called clear box testing.

13. Black box testing consumes less time. White box testing consumes more
time.

14. Black box testing does not work well for algorithm testing. White box
testing is completely suitable and preferable for algorithm testing.

15. Black box testing can be performed using various trial and error methods.
White box testing can better test data domains and internal boundaries.

Black box testing types: Regression Testing, Functional Testing,
Non-Functional Testing.
White box testing types: Condition Testing, Path Testing, Loop Testing.

Example of black box testing: a user searching on a search engine like Google
using certain keywords. Example of white box testing: providing inputs to
check and verify the loops.

43. Describe testing strategies used for Object-Oriented Software Development?

Sol: Testing in OOAD involves verifying the behavior of individual objects,
classes, and their interactions within the system. It also includes testing the
overall system architecture and the integration of various components. Effective
testing strategies are essential to ensure the reliability, performance, and
maintainability of the software.
Types of Testing in OOAD
Various types of testing are used in OOAD to verify different aspects of the software.
Each type of testing focuses on a specific level of the software hierarchy, from
individual objects and classes to the overall system architecture.
1. Unit testing
Unit testing is a software development process that tests individual software units or
components to make sure they perform as intended. Units in OOAD are frequently
classes or methods. Unit testing guarantees that each component is correct and aids
in the early detection of bugs in the development process.

2. Integration testing
This type of testing makes sure that various units or components interact with one
another and function as a cohesive unit. This is testing the integration of different
classes and modules in OOAD. Problems with interfaces and how components
interact with one another are found with the aid of integration testing.
3. System testing
System testing assesses the system as a whole to make sure it functions as intended
and satisfies the requirements. This entails testing the security, performance, and
other non-functional components of the system. In OOAD, system testing guarantees
that the system satisfies the intended goals and is prepared for implementation.

4. Performance testing
Performance testing ensures that a software system performs well under various
conditions. Load testing checks how the system behaves under expected user loads.
Stress testing pushes the system beyond its limits to find its breaking point.
Scalability testing assesses how well the system can handle future growth. Resource
utilization testing measures how efficiently the system uses resources.

5. Security testing
Security testing identifies and fixes security vulnerabilities. Vulnerability assessment
prioritizes security issues. Penetration testing simulates cyberattacks to find
weaknesses. Authentication and authorization testing ensures secure user access.
Data security testing protects sensitive data. Security configuration testing identifies
and fixes misconfigurations.
44. Explain with examples Basis Path Testing?

Sol:

Basis Path Testing in Software Engineering

Basis Path Testing is a white-box testing technique used to test the control flow of a
program. It is designed to ensure that the program’s logic is correct by testing all
possible independent paths through the code. It focuses on defining a set of test cases
that cover all the independent paths and help identify errors in the program’s logic.

The main goal of Basis Path Testing is to ensure that all decision points, loops, and
branches in the code are tested at least once.

Steps in Basis Path Testing:

1. Control Flow Graph (CFG): The first step in basis path testing is to construct a
Control Flow Graph (CFG) for the program. In this graph:
o Each node represents a block of code or a decision point.
o Each edge represents a transition between blocks based on the
program's flow.
2. Cyclomatic Complexity: Cyclomatic complexity is a measure used to
determine the number of independent paths in a program. It can be calculated
using the formula:

V(G) = E - N + 2P

where:

o V(G) = Cyclomatic complexity
o E = Number of edges
o N = Number of nodes
o P = Number of connected components (usually 1 for a single program)
3. Identify Independent Paths: Based on the cyclomatic complexity, identify the
independent paths in the program. These paths represent unique combinations
of decisions and loops.
4. Create Test Cases: Create test cases that cover all independent paths. The goal
is to ensure that each independent path is executed at least once.

Example

Let's look at an example of a simple code snippet and apply basis path testing.

Code Snippet:
int example(int a, int b) {
    if (a > b) {
        if (a > 0) {
            return a;
        } else {
            return b;
        }
    } else {
        if (b > 0) {
            return b;
        } else {
            return a;
        }
    }
}

Step 1: Create the Control Flow Graph (CFG)

The program has the following decision points:

 The first if (a > b) decision.
 The second if (a > 0) and if (b > 0) decisions.

Here’s the Control Flow Graph (CFG) for this code:

1. Start -> Decision: a > b (first if)
o Yes -> a > 0 (second if)
 Yes -> return a
 No -> return b
o No -> b > 0 (third if)
 Yes -> return b
 No -> return a
2. End

Step 2: Calculate Cyclomatic Complexity

For this program, counting one node per decision, one node per return
statement, and one end node:

 E (Edges) = 10 (from the graph)
 N (Nodes) = 8 (three decision nodes, four return nodes, and the end node)
 P (Connected Components) = 1 (since there's only one function)

Cyclomatic complexity V(G) is:

V(G) = E - N + 2P = 10 - 8 + 2(1) = 4

So, the cyclomatic complexity is 4, which means there are 4 independent paths.
(This also agrees with the rule V(G) = number of decision nodes + 1 = 3 + 1 = 4.)

Step 3: Identify Independent Paths

Based on the cyclomatic complexity, we need to identify 4 independent paths.
These paths are:

1. Path 1: (Start -> a > b (Yes) -> a > 0 (Yes) -> return a -> End)
2. Path 2: (Start -> a > b (Yes) -> a > 0 (No) -> return b -> End)
3. Path 3: (Start -> a > b (No) -> b > 0 (Yes) -> return b -> End)
4. Path 4: (Start -> a > b (No) -> b > 0 (No) -> return a -> End)

Step 4: Create Test Cases

We now create test cases that cover these 4 independent paths:

1. Test Case 1: a = 3, b = 2
o Expected result: a (because a > b is true and a > 0 is true)
o This follows Path 1.
2. Test Case 2: a = -1, b = -2
o Expected result: b (because a > b is true and a > 0 is false)
o This follows Path 2.
3. Test Case 3: a = 1, b = 2
o Expected result: b (because a > b is false and b > 0 is true)
o This follows Path 3.
4. Test Case 4: a = -3, b = -1
o Expected result: a (because a > b is false and b > 0 is false)
o This follows Path 4.

45. Define unit testing. Explain about unit testing considerations and
procedures.
Sol:
Unit Testing is a software testing technique in which individual units or
components of a software application are tested in isolation. These units are the
smallest pieces of code, typically functions or methods, ensuring they perform as
expected.

Unit testing helps in identifying bugs early in the development cycle, enhancing code
quality, and reducing the cost of fixing issues later. It is an essential part of Test-
Driven Development (TDD), promoting reliable code.
(Or)
Unit testing in software engineering involves testing individual, small units of code
(like functions or methods) in isolation to ensure they function correctly, promoting
early bug detection and improving code quality.

Unit testing Considerations:


 Identify Testable Units: Determine which parts of the code should be tested
independently.
 Isolate Units: Use techniques like mocks and stubs to simulate dependencies,
ensuring the unit functions independently.
 Cover Different Scenarios: Write test cases that cover normal, edge, and error
conditions.
 Automate Tests: Automate unit tests to run frequently, ensuring that code
changes don't introduce regressions.
 Use a Testing Framework: Employ a unit testing framework (e.g., JUnit,
NUnit, PyTest) to streamline the testing process.
 Readability and Maintainability: Write tests that are easy to understand and
maintain.

Procedures for Unit Testing:

Planning:
Identify the units to be tested.
Determine the inputs, outputs, and expected behavior of each unit.
Plan the test cases to cover different scenarios.

Writing Test Cases:


Create test cases that cover normal, edge, and error conditions.
Use assertions to verify that the unit behaves as expected.

Running Tests:
Execute the test cases using a unit testing framework.
Analyze the results to identify any failures.

Analyzing Results and Fixing Bugs:


Identify the root cause of any test failures.
Fix the code and rerun the tests to ensure the issue is resolved.

Refactoring and Retesting:


After making changes, rerun the tests to validate code integrity.
Refactor the code as needed, ensuring that the tests still pass.

46. Describe White Box and Basis Path Testing methods?

Sol:

White box testing techniques analyze the internal structures of the software:
the data structures used, the internal design, the code structure, and the
working of the software, rather than just the functionality as in black box
testing. It is also called glass box testing, clear box testing, or structural
testing. White box testing is also known as transparent testing or open box
testing.
Types Of White Box Testing
White box testing can be done for different purposes at different places. There
are three main types of white box testing, which are as follows:

 Unit Testing: Unit testing checks if each part or function of the application
works correctly, and that the application meets design requirements during
development.
 Integration Testing: Integration testing examines how different parts of the
application work together. It is done after unit testing to make sure components
work well both alone and together.
 Regression Testing: Regression testing verifies that changes or updates
don't break existing functionality of the code. It checks that the application still
passes all existing tests after updates.

White Box Testing Techniques

To achieve complete code coverage, white box testing uses the following techniques:

 Statement Coverage: Testing each line of code at least once.


 Branch Coverage: Testing all possible outcomes of decision points (e.g., if-else
statements).
 Path Coverage: Verifying all possible execution paths through the code.
 Loop Testing: Ensuring loops in the code operate correctly and efficiently.
 Input/Output Validation: Checking that the software produces the correct
output for valid inputs and handles invalid inputs appropriately.

Basis Path Testing is a white-box testing technique based on a program's or
module's control structure. A control flow graph is created from this structure,
and test cases are designed to cover the possible independent paths in the
graph.

Four stages are followed to create test cases using this technique −

 Create a Control Flow Graph.


 Calculate the Graph's Cyclomatic Complexity
 Identify the independent paths in the graph.
 Create test cases based on independent paths.

Control Flow Graph – A control flow graph (or simply, flow graph) is a directed
graph which represents the control structure of a program or module. A control flow
graph (V, E) has V number of nodes/vertices and E number of edges in it. A control
graph can also have :
 Junction Node – a node with more than one arrow entering it.
 Decision Node – a node with more than one arrow leaving it.
 Region – area bounded by edges and nodes (the area outside the graph is also
counted as a region).
Below are the notations used while constructing a flow graph (the
corresponding diagrams are omitted here):

 Sequential Statements
 If – Then – Else
 Do – While
 While – Do
 Switch – Case

Cyclomatic Complexity – The cyclomatic complexity V(G) is said to be a measure of
the logical complexity of a program. It can be calculated using three different
formulae:

1. Formula based on edges and nodes:
V(G) = e - n + 2*P
Where, e is number of edges, n is number of vertices, P is number of connected
components. For example, consider first graph given above,
where, e = 4, n = 4 and p = 1

So,
Cyclomatic complexity V(G)
=4-4+2*1
=2
2. Formula based on Decision Nodes :
V(G) = d + P
where, d is number of decision nodes, P is number of connected components.
For example, consider first graph given above,
where, d = 1 and p = 1

So,
Cyclomatic Complexity V(G)
=1+1
=2
3. Formula based on Regions :
V(G) = number of regions in the graph
For example, consider first graph given above

Cyclomatic complexity V(G)
= 1 (for Region 1) + 1 (for Region 2)
= 2
Hence, using all the three above formulae, the cyclomatic complexity obtained
remains same. All these three formulae can be used to compute and verify the
cyclomatic complexity of the flow graph.

Note –
1. For one function [e.g. Main( ) or Factorial( ) ], only one flow graph is
constructed. If in a program, there are multiple functions, then a separate flow
graph is constructed for each one of them. Also, in the cyclomatic complexity
formula, the value of 'p' is set depending on the number of graphs present in total.
2. If a decision node has exactly two arrows leaving it, then it is counted as one
decision node. However, if there are more than 2 arrows leaving a decision node,
it is computed using this formula :
d=k-1
Here, k is number of arrows leaving the decision node.

Independent Paths: An independent path in the control flow graph is one which
introduces at least one new edge that has not been traversed before the path is
defined. The cyclomatic complexity gives the number of independent paths
present in a flow graph, because it serves as an upper bound on the number of
tests that must be executed to make sure that every statement in the program
has been executed at least once. Considering the first graph given above, the
number of independent paths is 2, equal to the cyclomatic complexity. So, the
independent paths in the first graph are:
 Path 1:
A -> B
 Path 2:
C -> D

47. Differentiate software testing strategies and Methods. Discuss any two
methods of software testing.
Sol:
Software testing is the process of evaluating a software application to identify if it
meets specified requirements and to identify any defects. The following are
common testing strategies:
1. Black box testing – Tests the functionality of the software without looking at
the internal code structure.

2. White box testing – Tests the internal code structure and logic of the
software.

3. Unit testing – Tests individual units or components of the software to ensure
they are functioning as intended.

4. Integration testing – Tests the integration of different components of the
software to ensure they work together as a system.

5. Functional testing – Tests the functional requirements of the software to
ensure they are met.

6. System testing – Tests the complete software system to ensure it meets the
specified requirements.

7. Acceptance testing – Tests the software to ensure it meets the customer's or
end-user's expectations.

8. Regression testing – Tests the software after changes or modifications have
been made to ensure the changes have not introduced new defects.

9. Performance testing – Tests the software to determine its performance
characteristics such as speed, scalability, and stability.

10. Security testing – Tests the software to identify vulnerabilities and ensure it
meets security requirements.

Two methods of software testing:

Two common methods of software testing are unit testing, which focuses on individual
components or modules, and integration testing, which verifies the interactions
between different modules or components.

Unit testing:

Unit Testing is a software testing technique in which individual units or
components of a software application are tested in isolation. These units are the
smallest pieces of code, typically functions or methods, ensuring they perform
as expected.

Unit testing helps in identifying bugs early in the development cycle, enhancing code
quality, and reducing the cost of fixing issues later. It is an essential part of Test-
Driven Development (TDD), promoting reliable code.

The goal is to ensure that each unit performs its intended function correctly and
independently. Unit tests typically involve writing test cases that provide specific
inputs to a unit and then verifying that the output matches the expected outcome.

Unit tests are often automated, meaning they are executed automatically as part of the
software development process.

Integration testing:

Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once
all the modules have been unit-tested, integration testing is performed.

The goal of integration testing is to identify any problems or bugs that arise when
different components are combined and interact with each other. Integration testing is
typically performed after unit testing and before system testing. It helps to identify and
resolve integration issues early in the development cycle, reducing the risk of more
severe and costly problems later on.

 Integration testing can be done module by module, so that a proper sequence
is followed.

 Following the proper sequence also ensures that no integration scenarios are
missed.

 The major focus of integration testing is exposing defects at the time of
interaction between the integrated units.

48. What is unit testing? Explain in detail with traditional test strategies.[7M]
[July -2023 Set -1] [Understanding].
Unit Testing:

Unit Testing is a software testing technique in which individual units or
components of a software application are tested in isolation. These units are the
smallest pieces of code, typically functions or methods, ensuring they perform
as expected.

Traditional test strategies:

In software engineering, traditional test strategies refer to the approaches and
techniques used to ensure that software functions as expected and is free of
defects. These strategies often rely on well-defined processes and are typically
employed in the development lifecycle before the software is released. Below is a
detailed explanation of the traditional test strategies commonly used in software
engineering:
Unit Testing

Unit testing involves testing individual units or components of the software in
isolation from the rest of the system. The goal is to ensure that each function or
method behaves as expected.

Key Features:

 Focuses on testing small, isolated pieces of code.


 Usually automated and run frequently during development.
 Developers write unit tests before or during the development of each component
(Test-Driven Development or TDD).
 Unit tests typically mock external dependencies to focus only on the logic of the
unit being tested.

Example: Testing a function that adds two numbers, ensuring that the function
returns the correct result.

2. Integration Testing

Definition: Integration testing focuses on verifying the interactions between
different software components or systems. It ensures that when individual units
are combined, they function correctly together.

Key Features:

 Involves testing multiple modules or components that interact with each other.
 Can be performed incrementally (incremental integration testing) or all at once
(big bang integration testing).
 Often performed after unit testing but before system testing.
 Focuses on issues like data flow, control flow, and error handling between
integrated components.

Example: Testing the interaction between a database module and a user interface to
ensure data is properly retrieved and displayed.

3. System Testing

Definition: System testing evaluates the complete system as a whole to ensure that
the software meets the specified requirements. It is often the first level of testing done
after integration testing and involves testing the system in an environment that
simulates real-world usage.

Key Features:

 Tests the software as a whole (end-to-end testing).


 Ensures the system behaves as expected in different scenarios and under
various conditions.
 Covers functional and non-functional aspects (e.g., performance, security).
 Often includes both positive (correct input) and negative (incorrect input) test
cases.

Example: Testing a banking application to ensure users can perform transactions,
view balances, and log out correctly.

4. Acceptance Testing

Definition: Acceptance testing determines whether the software meets the user or
business requirements. This is the final stage of testing before the software is delivered
to the customer or deployed to production.

Key Features:

 Focuses on validating the software against the business requirements and user
needs.
 Can be done in the form of alpha and beta testing.
 Often involves real users (or stakeholders) to verify the software’s behavior.
 Acceptance testing can be done manually or automatically.

Example: A user testing a new feature of an e-commerce website to ensure it works
as expected from an end-user perspective.

5. Regression Testing

Definition: Regression testing ensures that changes made to the software (like bug
fixes, enhancements, or new features) do not introduce new defects or cause existing
functionality to break.

Key Features:

 Involves re-executing previous test cases after changes are made to the
software.
 Essential in agile and iterative development cycles where software is frequently
updated.
 Can be manual or automated (automated regression testing is highly
recommended for efficiency).
 Ensures that no unintended side effects occur due to changes in the software.

Example: After adding a new feature to a website, regression tests are run to verify
that previously working functionality, such as logging in or navigating, still works as
expected.

6. Alpha and Beta Testing

Definition: Alpha and beta testing are types of acceptance testing done before the
software is released to the public.
 Alpha Testing: Conducted by internal developers or QA teams to identify bugs
and issues in the software before it is released to real users.
 Beta Testing: Conducted by a limited number of external users (actual end-
users) to gather feedback and identify potential issues in real-world
environments.

Key Features:

 Alpha Testing: Occurs in a controlled environment, typically in-house by the
development team.
 Beta Testing: Involves external users who provide feedback on usability and
issues in real-world scenarios.

Example: An e-commerce application is given to a select group of users (beta
testers) who provide feedback before the official release.

7. Performance Testing

Definition: Performance testing evaluates how well the software performs under
various conditions, focusing on its responsiveness, stability, scalability, and resource
usage.

Key Features:

 Includes tests for load, stress, scalability, and endurance.


 Tests whether the software can handle high user traffic or large amounts of
data without performance degradation.
 Helps identify bottlenecks, memory leaks, and issues that could impact the
user experience.

Example: Testing how a web application performs when handling hundreds or
thousands of concurrent users.
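A minimal load-test sketch, with a stand-in `handle_request` function assumed in place of a real server call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Stand-in for a real request handler (illustrative only)."""
    time.sleep(0.001)                 # simulate a small amount of work
    return {"status": 200, "echo": payload}

def load_test(n_requests=200, n_workers=20):
    """Fire n_requests concurrently and report throughput."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(handle_request, range(n_requests)))
    elapsed = time.perf_counter() - start
    assert all(r["status"] == 200 for r in results)   # no failures under load
    return {"requests": n_requests,
            "seconds": round(elapsed, 3),
            "req_per_sec": round(n_requests / elapsed, 1)}

print(load_test())
```

Dedicated tools measure far more (latency percentiles, resource usage), but the core loop, drive concurrent work and measure throughput, is the same.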

8. Security Testing

Definition: Security testing focuses on identifying vulnerabilities in the software that
could be exploited by malicious users.

Key Features:

 Involves testing for weaknesses in authentication, authorization, data
encryption, and overall system defenses.
 Commonly involves penetration testing, vulnerability scanning, and threat
modeling.
 Ensures that the system is protected against various security threats like SQL
injection, cross-site scripting (XSS), and denial of service (DoS) attacks.
Example: Testing a financial application to ensure that users' personal information is
secure and cannot be accessed by unauthorized individuals.
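The SQL-injection point can be illustrated with a small self-contained check using Python's sqlite3 module; the table and the hostile payload are invented for the example:

```python
import sqlite3

def find_user(conn, username):
    """Safe lookup: a parameterized query treats input as data, not SQL."""
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Throwaway in-memory database for the test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# A classic injection payload; with parameter binding it matches nothing.
payload = "alice' OR '1'='1"
assert find_user(conn, payload) == []
assert find_user(conn, "alice") == [(1, "alice")]

# The users table must still exist after the hostile input.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
assert ("users",) in tables
```

A security test suite would feed many such payloads at every input point and assert that data is neither leaked nor corrupted.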

9. Usability Testing

Definition: Usability testing evaluates how user-friendly and intuitive the software is.
It ensures that the end users can easily navigate and use the system without
difficulty.

Key Features:

 Focuses on the overall user experience (UX) of the software.


 Involves real users who provide feedback on the interface, navigation, and ease
of use.
 Commonly performed during the development phase and can continue after
release to improve the product.

Example: Testing a mobile app to ensure that the interface is intuitive, users
can easily perform key tasks, and there are no confusing or unnecessary steps.

49. Describe test strategies for Conventional Software?

Sol:
Unit Testing:
The unit test focuses on the internal processing logic and data structures within
the boundaries of a component. This type of testing can be conducted in parallel
for multiple components.
Unit-test considerations:-
1. The module interface is tested to ensure proper information flows (into and
out).
2. Local data structures are examined to ensure temporary data store during
execution.
3. All independent paths are exercised to ensure that all statements in a module
have been executed at least once.
4. Boundary conditions are tested to ensure that the module operates properly at
boundaries. Software often fails at its boundaries.
5. All error-handling paths are tested.
If data do not enter and exit properly, all other tests are controversial. Among the
potential errors that should be tested when error handling is evaluated are:
(1) Error description is unintelligible,
(2) Error noted does not correspond to error encountered,
(3) Error condition causes system intervention prior to error handling,
(4) exception-condition processing is incorrect, (5) Error description does not
provide enough information to assist in the location of the cause of the error

Unit-test procedures:-

The design of unit tests can occur before coding begins or after source code has
been generated. Because a component is not a stand-alone program, driver and/or
stub software must often be developed for each unit test.

Driver is nothing more than a “main program” that accepts test case data, passes
such data to the component (to be tested), and prints relevant results. Stubs
serve to replace modules that are subordinate (invoked by) the component to be
tested.

A stub may do minimal data manipulation, print verification of entry, and
return control to the module undergoing testing.

Drivers and stubs represent testing “overhead.” That is, both are software that
must be written (formal design is not commonly applied) but that is not delivered
with the final software product.
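A hypothetical driver and stub might look like this in Python (`tax_service_stub`, `compute_total`, and the test data are invented names, not from the text):

```python
def tax_service_stub(amount):
    """Stub: replaces the real, not-yet-integrated tax module.
    Does minimal data manipulation, records that it was entered,
    and returns control to the module under test."""
    tax_service_stub.calls.append(amount)
    return 0.10 * amount              # fixed, predictable behaviour
tax_service_stub.calls = []

def compute_total(amount, tax_fn):
    """Component under test: depends on a subordinate tax module."""
    return amount + tax_fn(amount)

def driver():
    """Driver: a 'main program' that passes test case data to the
    component and prints relevant results."""
    for case in [100.0, 0.0, 19.99]:
        print(f"input={case} -> total={compute_total(case, tax_service_stub)}")

driver()
assert tax_service_stub.calls == [100.0, 0.0, 19.99]  # stub verified entry
```

Both pieces are the "testing overhead" the text mentions: they are discarded once the real subordinate module and the real main program are integrated.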
17.3.2 Integration Testing:

Data can be lost across an interface; one component can have an inadvertent,
adverse effect on another; sub functions, when combined, may not produce the
desired major function. The objective of Integration testing is to take unit-tested
components and build a program structure that has been dictated by design. The
program is constructed and tested in small increments, where errors are easier to
isolate and correct. A number of different incremental integration strategies are:-

a) Top-down integration testing is an incremental approach to construction of the
software architecture. Modules are integrated by moving downward through the
control hierarchy. Modules subordinate to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver and stubs are substituted for
all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.

The top-down integration strategy verifies major control or decision points early in
the test process. Stubs replace low-level modules at the beginning of top-down
testing. Therefore, no significant data can flow upward in the program structure.
As a tester, you are left with three choices:

(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module,
or
(3) Integrate the software from the bottom of the hierarchy upward.

b) Bottom-up integration:
Begins construction and testing with components at the lowest levels in the
program structure. Because components are integrated from the bottom up, the
functionality provided by components subordinate to a given level is always
available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters (sometimes called builds)
that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input
and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure. Integration follows the following pattern—D are drivers and M are
modules. Drivers will be removed prior to integration of modules.
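The cluster-plus-driver idea can be sketched with two invented low-level components and a throwaway driver:

```python
# Lowest-level components, already unit tested (illustrative names).
def parse_record(line):
    name, qty = line.split(",")
    return {"name": name.strip(), "qty": int(qty)}

def total_quantity(records):
    return sum(r["qty"] for r in records)

def cluster_driver(lines):
    """Driver coordinating test input/output for the cluster
    {parse_record, total_quantity}; removed once the cluster is
    combined upward with the real control modules."""
    records = [parse_record(line) for line in lines]
    return total_quantity(records)

assert cluster_driver(["bolts, 4", "nuts, 6"]) == 10
```

Because the subordinate components are real (not stubs), the cluster's full functionality is available from the start, which is the main advantage of the bottom-up strategy.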
Regression testing:- Each time a new module is added as part of integration
testing, the software changes. New data flow paths are established, new I/O may
occur, and new control logic is invoked. These changes may cause problems with
functions that previously worked flawlessly.

Regression testing is the re-execution of some subset of tests that have already
been conducted to ensure that changes have not propagated unintended side
effects.

Regression testing may be conducted manually or using automated
capture/playback tools. Capture/playback tools enable the software engineer to
capture test cases and results for subsequent playback and comparison.

The regression test suite contains three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by
the change.
• Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large.

Smoke testing:- It is an integration testing approach that is commonly used
when product software is developed. It is designed as a pacing mechanism for
time-critical projects, allowing the software team to assess the project on a
frequent basis. In essence, the smoke-testing approach encompasses the
following activities:

1. Software components that have been translated into code are integrated into a
build. A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.

2. A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover “showstopper”
errors that have the highest likelihood of throwing the software project behind
schedule.

3. The build is integrated with other builds, and the entire product is smoke tested
daily. The integration approach may be top down or bottom up.
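A daily smoke test can be sketched as a script that exercises only the show-stopper functions of a build; the build object and check names below are entirely hypothetical:

```python
def smoke_test(build):
    """Run only 'show-stopper' checks; return the names that fail."""
    checks = {
        "version": lambda: build["version"] is not None,
        "login":   lambda: build["login"]("demo", "demo") is True,
        "search":  lambda: isinstance(build["search"]("widget"), list),
    }
    return [name for name, check in checks.items() if not safe(check)]

def safe(check):
    try:
        return bool(check())
    except Exception:
        return False          # any crash is itself a smoke-test failure

# Stand-in build wired to trivial implementations.
build = {"version": "2025.1",
         "login": lambda u, p: True,
         "search": lambda q: ["widget-a"]}
assert smoke_test(build) == []        # empty list: the build is healthy
```

The point is breadth over depth: a handful of fast checks run against every daily build, so a broken increment is caught the day it is added.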

Smoke testing provides a number of benefits when it is applied on complex,
time-critical software projects:
• Integration risk is minimized. Because smoke tests are conducted daily,
incompatibilities and other show-stopper errors are uncovered early,
• The quality of the end product is improved. Smoke testing is likely to uncover
functional errors as well as architectural and component-level design errors.
• Error diagnosis and correction are simplified. Errors uncovered during smoke
testing are likely to be associated with “new software increments”—that is, the
software that has just been added to the build(s) is a probable cause of a newly
discovered error.
• Progress is easier to assess. With each passing day, more of the software has
been integrated and more has been demonstrated to work. This improves team
morale and gives managers a good indication that progress is being made.

Strategic options:- The major disadvantage of the top-down approach is the need
for stubs and the attendant testing difficulties that can be associated with them.
The major disadvantage of bottom-up integration is that “the program as an entity
does not exist until the last module is added”.
Selection of an integration strategy depends upon software characteristics and,
sometimes, project schedule. In general, a combined approach or sandwich testing
may be the best compromise.

As integration testing is conducted, the tester should identify critical modules. A
critical module has one or more of the following characteristics:

(1) Addresses several software requirements,
(2) Has a high level of control,
(3) Is complex or error prone, and
(4) Has definite performance requirements.

Critical modules should be tested as early as is possible. In addition, regression
tests should focus on critical module function.
Integration test work products:- It is documented in a Test Specification. This work
product incorporates a test plan and a test procedure and becomes part of the
software configuration. Program builds (groups of modules) are created to
correspond to each phase.

The following criteria and corresponding tests are applied for all test phases:

1. Interface integrity. Internal and external interfaces are tested as each
module (or cluster) is incorporated into the structure.
2. Functional validity. Tests designed to uncover functional errors are
conducted.
3. Information content. Tests designed to uncover errors associated with
local or global data structures are conducted.
4. Performance. Tests designed to verify performance bounds established
during software design are conducted.

A history of actual test results, problems, or peculiarities is recorded in a
Test Report that can be appended to the Test Specification.
Topic : Software Reliability And Quality Management:- Software reliability.
Statistical testing, Software quality, Software quality management system,
ISO 9000.SEI Capability maturity model. Few other important quality
standards, and Six Sigma.

50. What are the different goals and Metrics used for Statistical SQA? Discuss.
Sol:

Primary Goals of Statistical SQA

Statistical Software Quality Assurance aims to systematically evaluate and improve
software quality through quantitative methods. The primary goals include:

1. Reliability Assessment
o Measure the probability of software performing its intended function
without failure
o Quantify system dependability and consistency
o Identify potential points of failure and their likelihood
2. Performance Evaluation
o Analyze software efficiency and resource utilization
o Measure response times, throughput, and computational complexity
o Assess scalability and system performance under various conditions
3. Defect Detection and Prevention
o Statistically estimate the number and severity of potential defects
o Predict error rates and identify high-risk components
o Develop proactive quality improvement strategies
4. Process Optimization
o Use statistical techniques to streamline development processes
o Identify bottlenecks and inefficiencies
o Continuously improve software development lifecycle

Metrics in Statistical SQA

Quantitative Quality Metrics

1. Defect Density
o Calculation: Number of defects per unit of software size
o Measured in defects per thousand lines of code (KLOC)
o Provides normalized comparison across different software projects
2. Failure Rate
o Measures frequency of software failures over a specific time period
o Expressed as failures per operational hour or per execution cycle
o Helps assess software reliability and stability
3. Mean Time Between Failures (MTBF)
o Average time interval between system failures
o Indicates overall system reliability
o Calculated by dividing total operational time by number of failures
4. Mean Time To Repair (MTTR)
o Average time required to diagnose and fix a software failure
o Measures maintenance efficiency
o Helps in resource allocation and support strategy planning
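These four metrics are simple ratios and can be computed directly; the project numbers below are illustrative, not from the text:

```python
def defect_density(defects, kloc):
    """Defects per thousand lines of code."""
    return defects / kloc

def mtbf(total_operational_hours, failures):
    """Mean Time Between Failures."""
    return total_operational_hours / failures

def mttr(total_repair_hours, failures):
    """Mean Time To Repair."""
    return total_repair_hours / failures

# Illustrative data: 45 defects in 30 KLOC; 4 failures over
# 2000 operational hours; 10 hours of total repair time.
assert defect_density(45, 30) == 1.5    # defects per KLOC
assert mtbf(2000, 4) == 500.0           # hours between failures
assert mttr(10, 4) == 2.5               # hours per repair

# Availability combines both: MTBF / (MTBF + MTTR)
availability = mtbf(2000, 4) / (mtbf(2000, 4) + mttr(10, 4))
assert round(availability, 4) == 0.995
```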

Statistical Process Control Metrics

1. Process Capability Index (Cp)
o Measures how well a process can potentially meet specification limits
o Indicates process variation and consistency
o Cp > 1.33 generally considered high-quality process
2. Process Performance Index (Cpk)
o Extends Cp by considering the process mean's location relative to
specification limits
o Accounts for both process variation and centering
o Higher Cpk indicates better process control
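The two indices have standard formulas, Cp = (USL − LSL) / 6σ and Cpk = min(USL − μ, μ − LSL) / 3σ, sketched below with invented process data:

```python
def cp(usl, lsl, sigma):
    """Process Capability Index: potential capability from spread alone.
    Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Process Performance Index: also penalizes an off-center mean.
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Illustrative process: spec limits 8..12, observed mean 10.5, sigma 0.4.
assert abs(cp(12, 8, 0.4) - 1.6667) < 1e-3        # > 1.33: capable if centered
assert abs(cpk(12, 8, 10.5, 0.4) - 1.25) < 1e-9   # lower: mean is off-center
```

Cpk ≤ Cp always holds, with equality only when the process is perfectly centered between the specification limits.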

Complexity Metrics

1. Cyclomatic Complexity
o Quantifies program complexity by measuring linearly independent paths
o Helps predict testing effort and potential defect locations
o Lower values indicate more maintainable code
2. Halstead Complexity Measures
o Calculates software metrics based on operators and operands
o Provides insights into program complexity and potential reliability
o Includes volume, difficulty, effort, and time estimates
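For cyclomatic complexity in particular, a common shortcut is V(G) = decision points + 1. A rough sketch that approximates it for Python source with the standard ast module (counting if/for/while and boolean operators as decisions; this is a simplification, not a full implementation):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate V(G) as 1 + number of decision points; for a single
    connected routine this matches the classic E - N + 2 formulation."""
    decisions = (ast.If, ast.For, ast.While, ast.BoolOp)
    return 1 + sum(isinstance(node, decisions)
                   for node in ast.walk(ast.parse(source)))

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(x):
        pass
    return "positive"
'''
assert cyclomatic_complexity(sample) == 4   # 3 decision points + 1
assert cyclomatic_complexity("def f():\n    return 1\n") == 1
```

The value also gives an upper bound on the number of tests needed to exercise every independent path, which is why it predicts testing effort.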

Statistical Testing Metrics

1. Code Coverage
o Percentage of code executed during testing
o Types include:
 Statement coverage
 Branch coverage
 Path coverage
o Higher coverage indicates more comprehensive testing
2. Mutation Score
o Evaluates test suite effectiveness by introducing artificial defects
o Measures ability of tests to detect intentional code modifications
o Higher scores suggest more robust testing strategies

51. Explain software reliability. How can it be enhanced?

Sol:

Software Reliability

Software reliability is the probability that a software system will perform its required
functions under specified conditions for a designated period without experiencing
failures. It is a critical quality attribute that measures the system's ability to maintain
performance and prevent unexpected behavior.

In software engineering, software reliability refers to the probability of a software
system operating without failures for a specified period under specified conditions.
It's a crucial aspect of software quality, directly impacting user experience and
business outcomes.

1. Error Avoidance: During the development of the software, every possibility of
introducing an error should be avoided as far as possible. Developing highly
reliable software therefore requires the following:
 Experienced developers: To achieve high reliability and avoid errors as far
as possible, experienced developers are needed.
 Software engineering tools: Highly reliable software requires the best
software engineering tools.
 CASE tools: The CASE tools used should be suitable and adaptable.

2. Error Detection: Even when the best possible methods are used to avoid
errors, some errors may still be present in the software. To obtain highly
reliable software, every such error should be detected. Error detection is
done through testing, which involves various testing processes and follows
defined steps. The testing most commonly used here for error detection is
reliability testing.

3. Error Removal: Once errors are detected in the software, they must be fixed.
To remove the errors, the testing processes are repeated again and again:
one error is removed, and the tester then checks whether the error has been
completely removed. This technique is thus a repetition of the reliability
testing process.

4. Fault-tolerance: Fault tolerance means producing the desired and correct
result in spite of a failure in the system. Despite error avoidance, error
detection, and error removal, it is practically impossible to develop software
that is one hundred percent error free; some errors may remain even after
carrying out different testing processes repeatedly. To make software highly
reliable, we therefore need it to be fault-tolerant. Several techniques are
used to make a system fault-tolerant. These include:

 N-version programming: N copies of the software are made as different versions.
 Recovery blocks: Different algorithms are used to develop the different
blocks.
 Rollback recovery: Each time the system is accessed, it is tested.
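N-version programming can be sketched as majority voting over independently written versions; the square-root implementations below (one deliberately faulty) are invented for illustration:

```python
from collections import Counter

def sqrt_v1(x):
    return x ** 0.5                   # version 1: power operator

def sqrt_v2(x):
    guess = x or 1.0                  # version 2: independent Newton iteration
    for _ in range(50):
        guess = 0.5 * (guess + x / guess)
    return guess

def sqrt_v3(x):
    return x / 2                      # version 3: deliberately faulty

def n_version(x, versions, places=6):
    """Run every version and return the majority answer, so a fault
    in any single version is tolerated."""
    votes = Counter(round(v(x), places) for v in versions)
    answer, count = votes.most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority among versions")
    return answer

assert n_version(9.0, [sqrt_v1, sqrt_v2, sqrt_v3]) == 3.0
```

The scheme only helps if the versions fail independently, which is why they must be developed by separate teams from the same specification.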

52. Briefly discuss SQA and software testing strategic issues.

Sol:
Software Quality Assurance (SQA) is simply a way to assure quality in the software.
It is the set of activities that ensure processes, procedures as well as standards are
suitable for the project and implemented correctly.

Software Quality Assurance is a process that works parallel to Software
Development. It focuses on improving the process of development of software so that
problems can be prevented before they become major issues. Software Quality
Assurance is a kind of umbrella activity that is applied throughout the software
process.

Software Testing strategic Issues:

 State testing objectives explicitly.
 Understand the users of the software and develop a profile for each user
category.
 Develop a testing plan that emphasizes “rapid cycle testing.”
 Build “robust” software that is designed to test itself.
 Use effective formal technical reviews as a filter prior to testing.
 Conduct formal technical reviews to assess the test strategy and the test
cases themselves.
 Develop a continuous improvement approach for the testing process.

53. Describe the role of software quality assurance in software engineering.

Sol:
Software Quality Assurance (SQA) is a systematic process designed to ensure that
software products meet specified requirements, standards, and customer
expectations.

1. Establishing Quality Standards

 Develop comprehensive quality guidelines and best practices


 Create detailed testing protocols and evaluation criteria
 Define clear quality metrics and performance benchmarks
 Ensure alignment with industry standards and organizational requirements
2. Process Monitoring and Improvement

 Continuously evaluate software development processes


 Identify potential risks and inefficiencies
 Recommend process improvements
 Implement quality management frameworks like Six Sigma or CMMI

3. Comprehensive Testing

 Conduct various types of testing:


o Unit testing
o Integration testing
o System testing
o Performance testing
o Security testing
o User acceptance testing
 Develop and maintain test cases and test plans
 Automate testing processes where possible
 Ensure thorough coverage of potential scenarios and edge cases

4. Defect Management

 Identify, document, and track software defects


 Prioritize and categorize issues
 Collaborate with development teams to resolve problems
 Maintain detailed defect tracking and resolution logs
 Analyze root causes of recurring issues

5. Quality Metrics and Reporting

 Measure software quality using objective metrics


 Generate comprehensive quality reports
 Track key performance indicators (KPIs)
 Provide insights into software reliability and performance
 Create dashboards for stakeholder communication

Tools and Techniques

Testing Tools

 Automated testing frameworks


 Performance monitoring tools
 Static and dynamic code analysis tools
 Continuous integration/continuous deployment (CI/CD) platforms

Methodological Approaches

 Agile quality assurance


 Shift-left testing
 Risk-based testing
 Model-based testing

Challenges in Software Quality Assurance

 Rapidly evolving technology landscapes


 Increasing software complexity
 Balancing speed and quality
 Managing diverse testing environments
 Keeping pace with emerging development methodologies

54. Discuss the elements of software quality assurance and software reliability.
Sol:
Elements of Software Quality Assurance (SQA)

1. Standards: The IEEE, ISO, and other standards organizations have produced
a broad array of software engineering standards and related documents. The job
of SQA is to ensure that standards that have been adopted are followed and that
all work products conform to them.

2. Reviews and audits: Technical reviews are a quality control activity
performed by software engineers for software engineers. Their intent is to uncover
errors. Audits are a type of review performed by SQA personnel (people employed
in an organization) with the intent of ensuring that quality guidelines are being
followed for software engineering work.

3. Testing: Software testing is a quality control function that has one primary
goal—to find errors. The job of SQA is to ensure that testing is properly planned
and efficiently conducted to achieve this primary goal.

4. Error/defect collection and analysis : SQA collects and analyzes error and
defect data to better understand how errors are introduced and what software
engineering activities are best suited to eliminating them.

5. Change management: SQA ensures that adequate change management
practices have been instituted.

6. Education: Every software organization wants to improve its software
engineering practices. A key contributor to improvement is the education of
software engineers, their managers, and other stakeholders. The SQA
organization takes the lead in software process improvement and is a key
proponent and sponsor of educational programs.

7. Security management: SQA ensures that appropriate process and technology
are used to achieve software security.
8. Safety: SQA may be responsible for assessing the impact of software failure
and for initiating those steps required to reduce risk.

9. Risk management: The SQA organization ensures that risk management
activities are properly conducted and that risk-related contingency plans have
been established.

55. Explain the activities of software quality assurance group to assist the software
team in achieving high quality?

Sol:

Software Quality Assurance (SQA) Activities


1. SQA Management Plan: Make a plan for how you will carry out SQA
throughout the project. Think about which set of software engineering activities
are best for the project, and check the skill level of the SQA team.
2. Set The Check Points: SQA team should set checkpoints. Evaluate the
performance of the project on the basis of collected data on different check points.
3. Measure Change Impact: A change made to correct an error sometimes
reintroduces more errors, so keep a measure of the impact of each change on the
project. Retest each change to check its compatibility with the whole project.
4. Multi-testing Strategy: Do not depend on a single testing approach. When
many testing approaches are available, use them.
5. Manage Good Relations: In the working environment, maintaining good
relations with the other teams involved in project development is mandatory. A
bad relationship between the SQA team and the programming team will directly
and badly impact the project. Don’t play politics.
6. Maintaining records and reports: Comprehensively document and share all
QA records, including test cases, defects, changes, and cycles, for stakeholder
awareness and future reference.
7. Reviews software engineering activities: The SQA group identifies and
documents the processes. The group also verifies the correctness of software
product.
8. Formalize deviation handling: Track and document software deviations
meticulously. Follow established procedures for handling variances.

56. How does quality assurance differ from software testing? Explain with a suitable
example.

Sol:

Quality Assurance vs. Software Testing

Quality Assurance (QA)

Quality Assurance is a broader, proactive approach that focuses on preventing defects
throughout the entire software development process. It encompasses:
 Process-Oriented: QA is concerned with establishing and maintaining
standards, processes, and methodologies that ensure high-quality software
development.
 Preventive Approach: The goal is to identify and address potential issues
before they occur by implementing robust development practices.
 Systematic Review: QA involves reviewing and improving the entire software
development lifecycle, including requirements gathering, design, coding, and
deployment.
 Continuous Improvement: It aims to continuously enhance the development
process to minimize the likelihood of defects.

Software Testing

Software Testing is a more specific, reactive approach that focuses on identifying
defects in the actual software product. It includes:

 Product-Oriented: Testing is primarily concerned with finding bugs, verifying
functionality, and ensuring the software meets specified requirements.
 Defect Detection: Testers actively search for errors, inconsistencies, and
performance issues in the software.
 Validation and Verification: Testing ensures that the software works as
expected and meets the defined acceptance criteria.
 Different Testing Types: Includes various methodologies like unit testing,
integration testing, system testing, and acceptance testing.

Comparative Example: E-Commerce Website Development

QA Perspective

In developing an e-commerce website, Quality Assurance would involve:

 Establishing coding standards and best practices


 Creating a comprehensive development workflow
 Implementing code review processes
 Defining clear requirements and acceptance criteria
 Setting up continuous integration and deployment pipelines
 Ensuring team members are trained on quality standards

Software Testing Perspective

For the same e-commerce website, Software Testing would include:

 Checking that product search functionality returns correct results


 Verifying the shopping cart calculates prices accurately
 Testing payment gateway integration
 Ensuring user authentication works correctly
 Performing load testing to check system performance
 Validating responsive design across different devices
57. What are the types of software risks? How do they affect software quality?

Sol: The risks are categorized into five main areas:

1. Technical Risks
2. Project Management Risks
3. Operational Risks
4. Human-Related Risks
5. Quality Assurance Risks

1. Technical Risks

a) Architectural Risks

 Inefficient or poorly designed software architecture


 Challenges in system scalability and performance
 Difficulty in system maintenance and future enhancements
 Impact: Reduces system flexibility, increases development complexity

b) Technology Selection Risks

 Choosing inappropriate or outdated technologies


 Incompatibility between different technological components
 Limited support or rapid technological obsolescence
 Impact: Increases maintenance costs, limits future growth potential

c) Performance Risks

 Inadequate system performance under various load conditions


 Memory leaks and inefficient resource utilization
 Slow response times and poor user experience
 Impact: Diminishes user satisfaction, reduces system reliability

2. Project Management Risks

a) Scheduling Risks

 Unrealistic project timelines


 Incorrect effort and resource estimations
 Unexpected delays and scope creep
 Impact: Increases project costs, reduces development quality

b) Budget Risks

 Insufficient financial resources


 Unexpected additional expenses
 Cost overruns during development
 Impact: Compromises project scope, limits feature development
c) Resource Allocation Risks

 Inadequate team skills


 Poor team communication
 Ineffective resource distribution
 Impact: Reduces productivity, increases likelihood of errors

3. Operational Risks

a) Security Risks

 Vulnerabilities to cyber attacks


 Insufficient data protection mechanisms
 Weak authentication and authorization processes
 Impact: Compromises system integrity, exposes sensitive information

b) Compliance Risks

 Non-adherence to industry regulations


 Failure to meet legal requirements
 Inadequate data privacy controls
 Impact: Legal consequences, potential financial penalties

c) Integration Risks

 Challenges in integrating with existing systems


 Compatibility issues between different software components
 Complex data migration processes
 Impact: Increases system complexity, reduces interoperability

4. Human-Related Risks

a) Skill Gap Risks

 Lack of required technical expertise


 Insufficient training and knowledge
 Inexperienced development team
 Impact: Increases probability of errors, reduces code quality

b) Communication Risks

 Poor communication between stakeholders


 Misunderstood requirements
 Ineffective knowledge transfer
 Impact: Leads to incorrect implementation, increases rework
5. Quality Assurance Risks

a) Testing Risks

 Incomplete or inadequate testing


 Limited test coverage
 Insufficient validation of edge cases
 Impact: Increases likelihood of undetected bugs

b) Documentation Risks

 Poor or incomplete documentation


 Lack of clear system specifications
 Inadequate user manuals
 Impact: Reduces system maintainability, complicates knowledge transfer

58. Write a short note on software quality assurance tasks, goals, metrics, and
statistical SQA.

Sol:

Introduction

Software Quality Assurance (SQA) is a systematic process designed to ensure that
software products meet specified requirements, standards, and customer expectations.
It encompasses a wide range of activities aimed at maintaining and improving software
quality throughout the development lifecycle.

SQA Tasks

1. Process Verification

 Review and validate software development processes


 Ensure adherence to established methodologies and best practices
 Conduct process audits and documentation reviews

2. Product Testing

 Develop comprehensive test plans and test cases


 Execute various testing types:
o Unit testing
o Integration testing
o System testing
o Acceptance testing
o Performance testing
o Security testing
3. Quality Control

 Identify and track software defects


 Perform root cause analysis
 Implement corrective and preventive actions
 Manage quality gates and release criteria

4. Configuration Management

 Control software versions and configurations


 Manage changes through formal change control processes
 Maintain version control and release management

SQA Goals

Technical Goals

 Minimize software defects


 Ensure software reliability and stability
 Validate compliance with technical specifications
 Optimize software performance

Business Goals

 Reduce development and maintenance costs


 Improve customer satisfaction
 Mitigate risks associated with software failures
 Enhance organizational reputation for quality

Quality Metrics

Defect-Related Metrics

 Defect density (defects per lines of code)


 Defect detection rate
 Defect resolution time
 Defect leakage rate

Performance Metrics

 Mean Time Between Failures (MTBF)


 Mean Time To Repair (MTTR)
 System availability
 Response time
 Resource utilization
Process Metrics

 Test coverage percentage


 Code review effectiveness
 Requirements traceability
 Effort spent on quality activities

Statistical SQA Approaches

Quantitative Quality Assessment

 Statistical process control


 Variance analysis
 Reliability prediction models
 Reliability growth modeling

Probability-Based Techniques

 Failure mode and effects analysis (FMEA)


 Fault tree analysis
 Reliability block diagrams
 Bayesian statistical inference

Quality Estimation Methods

 Orthogonal defect classification


 Software reliability engineering
 Stochastic modeling of software failures
 Predictive quality estimation techniques

Statistical Quality Control Techniques

 Control charts for software processes


 Six Sigma methodologies
 Hypothesis testing for quality validation
 Regression analysis for defect prediction
