Software Verification, Validation and Testing (UNIT-1)

1) Terminology

Below are some of the key terms commonly encountered in SVVT:

1. Verification

● Definition: The process of evaluating software at various stages of development to ensure that it meets specified requirements.
● Key Point: "Are we building the product right?"
● Example: Code reviews, walkthroughs, and inspections.

2. Validation

● Definition: The process of evaluating the software during or at the end of the development process to determine whether it meets business and user needs.
● Key Point: "Are we building the right product?"
● Example: User Acceptance Testing (UAT), system testing.

3. Test Case

● Definition: A set of conditions or variables under which a tester determines whether an application, system, or feature is working as expected.
● Key Point: Includes test inputs, execution conditions, and expected results.
● Example: "Login with valid credentials" (see the sketch below).

4. Test Plan

● Definition: A detailed document that outlines the scope, objectives, resources, approach, and schedule of the testing activities.
● Key Point: Acts as a blueprint for the testing process.
● Example: Test plan for a new feature rollout.

5. Test Script

● Definition: A set of instructions that describes how to test a specific functionality of the software, often automated.
● Key Point: Can be manual or automated.
● Example: Selenium test scripts for login functionality (see the sketch below).
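
For instance, a simplified Selenium (Python) test script for a login page might look like the sketch below; the URL and element IDs are hypothetical placeholders, and a local ChromeDriver setup is assumed.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")          # placeholder URL
    driver.find_element(By.ID, "username").send_keys("alice")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "login-btn").click()
    # Verify the post-login page; the expected title is an assumption.
    assert "Dashboard" in driver.title
finally:
    driver.quit()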

6. Test Suite

● Definition: A collection of test cases or scripts designed to be executed together to validate certain behaviours or functionalities of the software.
● Key Point: Groups related test cases.
● Example: Regression test suite for a major product release (a small grouping sketch follows).
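
As a minimal sketch, Python's built-in unittest module can group related test cases into a suite and execute them together (the test bodies below are placeholders):

import unittest

class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)          # placeholder assertion

class CheckoutTests(unittest.TestCase):
    def test_checkout_total(self):
        self.assertEqual(2 + 2, 4)     # placeholder assertion

# Group the related test cases and run them as one suite.
suite = unittest.TestSuite()
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests))
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests))
unittest.TextTestRunner().run(suite)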

7. Smoke Testing

● Definition: A preliminary set of tests to check whether the critical functionalities of a build are working.
● Key Point: Often informally called "sanity testing," though some organisations treat the two as distinct checks.
● Example: Testing whether the application launches and core features are accessible.

8. Regression Testing

● Definition: Testing existing functionality to ensure that recent changes (such as new code or fixes) have not broken other parts of the system.
● Key Point: Ensures new code does not negatively impact existing functionality.
● Example: Re-running previously passed test cases after adding new features.

9. Unit Testing

● Definition: Testing individual units or components of the software in isolation.
● Key Point: Typically written and executed by developers.
● Example: Testing a function that calculates the total price of a shopping cart (see the sketch below).
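
A minimal sketch of such a unit test (pytest style), with a simple cart_total() function defined here purely for illustration:

# Unit under test: a pure function, exercised in isolation.
def cart_total(prices):
    """Return the total price of the items in the cart."""
    return round(sum(prices), 2)

def test_total_of_empty_cart_is_zero():
    assert cart_total([]) == 0

def test_total_sums_item_prices():
    assert cart_total([10.00, 2.50, 0.49]) == 12.99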

10. Integration Testing

● Definition: Testing combined components or units of the software to ensure they work together.
● Key Point: Focuses on the interaction between modules.
● Example: Testing the interaction between the login and user profile modules (sketched below).
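
A minimal sketch of an integration test between two hypothetical modules, where the output of login() feeds get_profile():

# Stand-in modules, defined inline for illustration.
def login(username, password):
    # Returns a user id on success, None on failure.
    return 42 if (username, password) == ("alice", "s3cret") else None

def get_profile(user_id):
    profiles = {42: {"name": "Alice", "email": "alice@example.com"}}
    return profiles.get(user_id)

def test_login_then_profile_integration():
    # Exercise the interaction: login's output is get_profile's input.
    user_id = login("alice", "s3cret")
    profile = get_profile(user_id)
    assert profile is not None and profile["name"] == "Alice"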

11. System Testing

● Definition: Testing the entire integrated system as a whole to validate end-to-end functionality.
● Key Point: Performed by testers in a simulated environment.
● Example: Testing a complete e-commerce site, from product search to checkout.

12. Acceptance Testing

● Definition: The process of verifying whether a software system meets business requirements and is ready for release.
● Key Point: Often involves User Acceptance Testing (UAT).
● Example: End users testing a CRM system before going live.

13. Exploratory Testing

● Definition: A type of testing where testers actively explore the system without predefined test cases to find defects.
● Key Point: Relies on tester’s intuition and experience.
● Example: Randomly clicking through an application to
discover unexpected behavior.

14. Black Box Testing

● Definition: Testing the software’s functionality without knowing its internal code or logic.
● Key Point: Focuses solely on inputs and expected outputs.
● Example: Inputting data into a form and checking for the
correct response.

15. White Box Testing

● Definition: Testing the internal structures or workings of an application, often at the code level.
● Key Point: Also known as "clear box" or "glass box" testing.
● Example: Verifying a sorting algorithm by examining its implementation (see the sketch below).
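
As a sketch, white-box test cases for a small sorting routine can be chosen by reading its branches; the insertion sort below is written for illustration.

# Implementation under test, examined at the code level.
def insertion_sort(items):
    items = list(items)
    for i in range(1, len(items)):
        j = i
        # The inner loop runs only when an element is out of order.
        while j > 0 and items[j - 1] > items[j]:
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items

# Tests chosen by inspecting the branches above:
def test_sorted_input_never_enters_inner_loop():
    assert insertion_sort([1, 2, 3]) == [1, 2, 3]

def test_reversed_input_exercises_every_swap():
    assert insertion_sort([3, 2, 1]) == [1, 2, 3]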

16. Gray Box Testing

● Definition: A combination of black-box and white-box testing, where testers have limited knowledge of the internal workings of the software.
● Key Point: Strikes a balance between functional and structural testing.
● Example: Knowing some system details but testing it like an end user.

17. Performance Testing

● Definition: Testing to ensure the system meets required performance criteria under expected workloads.
● Key Point: Includes load, stress, and endurance testing.
● Example: Checking how an application performs with 1,000 concurrent users (a toy sketch follows).
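
A toy sketch of such a check using only the Python standard library is shown below; real performance testing typically uses dedicated tools (e.g., JMeter, Locust), and the target URL and user count here are placeholders.

# Simulate concurrent users with a thread pool and measure latency.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # hypothetical target
USERS = 100                    # number of simulated concurrent users

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    latencies = list(pool.map(one_request, range(USERS)))

print(f"avg latency: {sum(latencies) / len(latencies):.3f}s, max: {max(latencies):.3f}s")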

18. Stress Testing

● Definition: A type of performance testing that examines how a system behaves under extreme conditions, such as very high traffic or data processing loads.
● Key Point: Identifies the system’s breaking point.
● Example: Testing a website’s stability under a denial-of-service (DoS) attack simulation.

19. Bug/Defect

● Definition: A flaw or error in the software that causes it to behave unexpectedly or produce incorrect results.
● Key Point: A key focus of both verification and validation efforts.
● Example: A calculation error in a financial application.

20. Alpha Testing

● Definition: Pre-release testing performed by developers and testers in a controlled environment to catch bugs before public release.
● Key Point: Often the first phase of user testing.
● Example: Testing an app internally before making it available to beta users.

21. Beta Testing

● Definition: Pre-release testing done by actual end users under real-world conditions to gather feedback.
● Key Point: Usually the last stage of testing before official release.
● Example: Distributing an app to a select group of users for feedback on features and bugs.

2) Evolving Nature of the Area

In the context of Software Verification, Validation, and Testing
(SVVT), the field has seen significant evolution in recent years due
to rapid technological advancements and changes in software
development methodologies. Here's a breakdown of key trends
shaping the evolving nature of SVVT:

1. Shift to Agile and DevOps

● Traditional vs. Modern Approaches: Traditional SVVT methods followed a linear or waterfall model in which testing occurred late in the development cycle. Agile and DevOps, by contrast, emphasize continuous integration and continuous testing: testing is integrated into every phase of the software development lifecycle (SDLC), so verification and validation occur in parallel with development.
● Test Automation: Because Agile and DevOps demand faster releases and feedback, automated testing has become crucial. Automation frameworks are now more advanced, with tools supporting continuous testing pipelines.

2. AI and Machine Learning in Testing

● Predictive Testing: AI and machine learning algorithms are increasingly used to predict areas prone to defects, optimize test cases, and prioritize testing efforts. This focuses effort on high-risk areas, improving overall efficiency.
● Self-Healing Tests: AI-driven tools can adapt to changes in the application (such as UI updates) without manual intervention, making test suites more resilient and reducing maintenance overhead.

3. Test Environments and Virtualization

● Cloud-Based Testing: Cloud computing has revolutionized how test environments are created and managed. Cloud-based testing environments offer scalability, flexibility, and cost efficiency, allowing easier testing across different devices, platforms, and configurations.
● Virtualization and Containers: With technologies like Docker and Kubernetes, creating isolated, reproducible test environments has become easier. Virtual machines and containers also help run multiple tests in parallel, improving efficiency and coverage.

4. Emphasis on Security and Compliance

● Security Testing: With growing concern over data breaches and cyberattacks, security testing has become an integral part of the SVVT process. Tools for vulnerability scanning, penetration testing, and static analysis have advanced significantly.
● Regulatory Compliance: Many industries, such as healthcare, finance, and automotive, face strict regulatory requirements. Testing now focuses not only on functionality but also on ensuring that applications comply with applicable standards and regulations (e.g., GDPR, HIPAA, ISO 26262).

5. Test-Driven Development (TDD) and Behavior-Driven Development (BDD)

● Test-First Approaches: TDD and BDD are gaining popularity because they emphasize writing tests before code. This improves code quality and makes verification and validation integral to the development process rather than a post-development activity (a tiny sketch of the TDD cycle follows this list).
● Collaboration: BDD, in particular, fosters better collaboration between developers, testers, and business stakeholders by using a common language (e.g., Gherkin) to define test scenarios.
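
A tiny sketch of the TDD cycle ("red, green, refactor"), using a hypothetical apply_discount() function and an assumed 10%-over-100 discount rule:

# Step 1 (red): write the tests first; they fail because the function
# does not exist yet. The discount rule is an assumption for this example.
def test_ten_percent_discount_above_100():
    assert apply_discount(200.0) == 180.0

def test_no_discount_at_or_below_100():
    assert apply_discount(100.0) == 100.0

# Step 2 (green): write just enough code to make the tests pass.
def apply_discount(amount):
    return amount * 0.9 if amount > 100 else amount

# Step 3 (refactor): improve the code while keeping the tests green.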

6. Performance and Scalability Testing

● Microservices Architecture: The rise of microservices means that performance and scalability testing must be done at both the individual service level and the system level. Tools are evolving to handle the complexities of distributed systems.
● Real-World Simulations: Testing tools now simulate real-world conditions (e.g., load spikes, network issues) to ensure that applications can handle production demands without failing.

7. User Experience (UX) Testing

● Shift Toward UX: Modern testing focuses on user experience in addition to functionality. Usability testing, accessibility testing, and user acceptance testing (UAT) have become critical for ensuring that the software meets user needs and is intuitive.
● Automated UX Testing: Automated tools can now perform some aspects of UX testing, such as checking responsiveness across different devices and browsers, though manual testing for subjective experiences remains vital.

8. Continuous Feedback and Monitoring

● Post-Deployment Monitoring: Testing doesn't stop after release. Continuous monitoring tools provide feedback on application performance and user behavior in production, helping identify issues that escaped earlier testing phases.
● Shift-Left and Shift-Right Testing: "Shift-left" testing means moving testing earlier in the development cycle, while "shift-right" refers to testing after deployment. Combining both ensures a holistic approach to quality.

9. Exploratory Testing and Human-Centered Validation

● Beyond Automation: While automation plays a key role in modern SVVT, exploratory testing by humans remains crucial. Human testers can uncover issues that automated tools might miss, particularly around edge cases, usability, and adaptability.

10. Emerging Technologies

● IoT, Blockchain, and 5G: As technologies like the Internet of Things (IoT), blockchain, and 5G become mainstream, testing strategies are evolving to address new challenges. These technologies introduce complex environments and interactions, requiring new validation techniques.
● Quantum Computing: Though still in its infancy, quantum computing promises to change how we validate complex algorithms and cryptographic systems. SVVT will need to evolve alongside these breakthroughs.

3) Errors, Faults & Failures

Error:
● An error refers to a human mistake made during software
development, such as in coding, design, or documentation.
● Example: A developer accidentally writes incorrect logic in the
code.

Fault (Defect/Bug):
● A fault occurs when an error in the code or design leads to an
incorrect implementation or an issue in the software.
● Example: The wrong logic (error) results in a bug that causes
a function to miscalculate results.

Failure:

● A failure happens when the software behaves incorrectly or does not perform as expected during execution, due to an underlying fault.
● Example: The software crashes or returns wrong outputs when the faulty function is called. The whole chain is sketched below.
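
The error → fault → failure chain fits in a few lines of illustrative Python: the developer's mistake (error) leaves a wrong operator in the code (fault), which surfaces as wrong output at run time (failure).

# Error: the developer meant to divide but typed "+" -- a human mistake.
def average(total, count):
    return total + count      # Fault: the wrong operator now lives in the code.

# Failure: the fault surfaces as incorrect behaviour during execution.
print(average(10, 2))         # prints 12; the expected result is 5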

4) Correctness & Reliability

Correctness:

● Refers to the degree to which the software adheres to its specified requirements and performs its intended functions without defects.
● Example: A correct program will always return the right result for all valid inputs, as per the specifications.

Reliability:

● Refers to the ability of the software to consistently perform its intended functions under defined conditions over time, without failures.
● Example: Reliable software continues to function properly over long periods and in various usage scenarios, handling unexpected inputs or conditions gracefully.

5) Testing & Debugging

Testing:

● Definition: The process of executing a program or system with the intent of identifying defects (faults) by comparing actual outcomes with expected results.
● Purpose: To detect errors and ensure that the software meets its requirements and functions correctly.
● Approach: Involves designing test cases, running tests, and reporting the bugs found.
● Example: Running a test suite to check if a login function works as expected.

Debugging:
● Definition: The process of finding and fixing the cause of a
detected defect (fault) in the software.
● Purpose: To correct errors in the code that were identified
during testing.
● Approach: Involves locating the root cause of the issue,
modifying the code, and verifying the fix.
● Example: Tracing through the code to identify why the login function fails and then fixing the underlying bug (sketched below).
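
As a small illustration of the debugging workflow, the sketch below plants a deliberate fault in a toy login function, reproduces the failure, and notes where a debugger would be used; all names are hypothetical.

# Toy function with a deliberate fault, for illustration only.
def login(username, password):
    users = {"alice": "s3cret"}
    # Fault: compares the stored password against the username.
    return users.get(username) == username

# Step 1: reproduce the defect reported by testing.
print(login("alice", "s3cret"))   # False, but True was expected

# Step 2: locate the root cause, e.g. by stepping through with the
# built-in debugger: insert breakpoint() (Python 3.7+) inside login.

# Step 3: fix and re-verify -- the corrected comparison would be:
#   return users.get(username) == password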

1. Static Testing:
● Definition: A form of testing where the software is examined
without executing the code. It involves reviewing and
analysing code, design documents, and specifications.
● Purpose: To find errors early in the development process,
such as coding flaws, inconsistencies, or compliance issues.
● Techniques: Code reviews, walkthroughs, inspections, and
static analysis tools.
● Example: Reviewing a function's code logic to detect potential issues or deviations from the design (a toy automated check is sketched below).
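
As a toy sketch of automated static analysis, Python's ast module can examine source code without executing it; here it flags functions that lack a docstring (the analysed source is an inline example):

# A tiny static check: parse source code without running it and report
# functions that have no docstring.
import ast

source = '''
def documented():
    """Has a docstring."""

def undocumented():
    pass
'''

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
        print(f"line {node.lineno}: function '{node.name}' has no docstring")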

2. Dynamic Testing:
● Definition: A form of testing that involves executing the code
to validate the software’s behavior and functionality under
various conditions.
● Purpose: To identify defects that occur during the software’s
runtime and ensure the software works as expected.
● Techniques: Unit tests, integration tests, system tests, and
user acceptance tests.
● Example: Running test cases to check if the login feature
works correctly by providing valid and invalid inputs.

Exhaustive Testing: Theoretical Foundations
Exhaustive testing is a testing approach where all possible inputs,
paths, and scenarios of a software application are tested to ensure
correctness and reliability. While it sounds ideal, the practical
application of exhaustive testing is limited by several factors. Below
are the theoretical foundations that highlight its impracticality:

1. Impracticality of Testing All Data

● Input Domain Size: Software applications often accept a wide
range of inputs. For instance, consider a function that takes
two integers as input. The input domain includes all integer
pairs, which is virtually infinite. Testing every possible pair is
infeasible, if not impossible.
● Combination Explosion: As the number of input variables increases, the number of input combinations grows exponentially. For example, a function that takes three boolean inputs has 2^3 = 8 combinations; with four inputs, that jumps to 2^4 = 16, and so forth. This explosion in the number of test cases quickly becomes unmanageable, as the short sketch below demonstrates.
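
This growth is easy to demonstrate in a few lines of Python; the sketch counts the 2^n combinations for several values of n and, for a tiny case, enumerates them with itertools:

# Demonstrating combination explosion: n boolean inputs yield 2**n
# possible input combinations.
from itertools import product

for n in (3, 4, 10, 20):
    print(f"{n} boolean inputs -> {2 ** n} combinations")

# Enumerating the combinations explicitly is feasible only for tiny n:
print(list(product([False, True], repeat=2)))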

2. Impracticality of Testing All Paths

● Control Flow Complexity: Programs can have complex control structures (e.g., loops, branches, and function calls). Each path through the code may represent a different scenario, and with each additional conditional or loop, the number of possible paths grows significantly. For instance, a simple function with just a few conditional statements can have hundreds or thousands of potential execution paths.
● Path Combinations: To achieve complete path coverage,
every possible route through the code must be executed,
which is often not feasible in real-world applications. This is
especially true for applications with recursive functions or
those involving user interactions, where paths may not be
easily enumerable or executable.

3. No Absolute Proof of Correctness

● Inherent Limitations: Even with extensive testing, proving that a program is entirely free of defects is fundamentally impossible. As Dijkstra famously observed, testing can show the presence of bugs, but never their absence; results from computability theory, such as the undecidability of the halting problem, mean there is no general procedure for verifying arbitrary program behavior. Thus, testing cannot guarantee that every possible error has been identified or that the program will behave correctly in all scenarios.
● Changing Requirements: Software is often subject to
changes in requirements or updates. A piece of software that
was tested exhaustively may still become incorrect after any
modification. This dynamic nature of software development
means that exhaustive testing is a moving target; what was
once exhaustive testing may no longer be relevant after
changes.
