Software Testing Unit 1
Definition:
Software Testing is the process of evaluating a software application or system to identify and rectify defects or bugs, in order to ensure that it meets specified requirements and functions correctly. It is a process of assessing a software application's quality, correctness, and suitability for use, and it involves executing the software with the intention of finding errors and verifying that it delivers the desired output.
Objectives of Software Testing:
- Finding defects: The primary objective is to identify and report defects so that they can be fixed before the
software is released to users.
- Verification and validation: Testing ensures that the software meets its requirements and functions as expected.
- Improving software quality: By finding and fixing defects, software testing enhances the quality and reliability of
the final product.
- Ensuring software reliability: Thorough testing reduces the chances of unexpected failures or crashes in
production.
- User satisfaction: High-quality software meets user expectations and results in higher user satisfaction.
- Cost-effectiveness: Identifying and fixing defects early in the development process reduces the cost of fixing
issues later.
- Compliance and standards: Testing helps ensure that software complies with industry standards and regulations.
- Risk mitigation: Testing helps identify and manage risks associated with the software, especially in critical
applications.
The Software Testing Life Cycle is the sequence of activities followed during software testing, including test
planning, test design, test execution, defect reporting, and test closure.
Why should we Test?
- Verification and validation: Testing ensures that the software meets the specified requirements and functions as
intended.
- Defect identification: Testing helps identify and fix defects early in the development process.
- Risk management: Testing mitigates the risk of software failures or issues in production.
- Cost savings: Early detection of defects reduces the cost of fixing issues later.
Types of Testing:
- Functional Testing: Validates that the software functions as expected and fulfills its intended purpose.
- Non-Functional Testing: Tests non-functional aspects such as performance, security, usability, etc.
- Regression Testing: Ensures that new changes or enhancements do not adversely impact existing functionality.
Who Performs Testing?
- Software Testers: Professionals responsible for designing and executing test cases, identifying defects, and
reporting issues.
- Test Leads/Managers: Oversee the testing process, create test plans, and coordinate the testing effort.
- Developers: In Agile environments, developers may be involved in writing and executing unit tests.
- Business Analysts: Collaborate with testers to define test requirements based on user needs.
- End Users: Participate in User Acceptance Testing (UAT) to ensure the software meets their needs.
Skills and Qualities of a Software Tester:
- Attention to detail (further skills and qualities of a tester are discussed in detail later in this unit)
Testing Levels:
- Unit Testing: Testing individual units or components of the software to ensure they work correctly in isolation.
- Integration Testing: Verifying that integrated units or modules function correctly as a group.
- System Testing: Testing the entire system to validate that it meets the specified requirements.
- Acceptance Testing: Evaluating the software's readiness for acceptance by end-users or stakeholders.
Testing Types:
- Performance Testing: Evaluating the software's speed, responsiveness, and scalability under different load
conditions.
- Security Testing: Identifying vulnerabilities and ensuring the software is secure from potential threats.
Test Coverage:
- Code Coverage: Ensuring that all code statements and branches are exercised during testing.
- Risk-based Testing: Prioritizing testing efforts based on identified risks and criticality.
In conclusion, software testing is a crucial phase in the software development life cycle. It helps ensure that the
software is of high quality, meets user expectations, and functions reliably in the intended environment. Testing
should be performed by dedicated testers with the appropriate skills and knowledge, and it should cover different
aspects of the software, including functionality, performance, security, and usability, among others. By conducting
thorough and systematic testing, software defects can be identified early, leading to a more robust and successful
software product.
BLACK BOX TESTING
Definition:
- Black box testing, also known as behavioral testing or functional testing, is a software testing technique where
the tester examines the functionality of the software without having access to the internal code or structure.
- The primary focus of black box testing is on testing the software's input and output behavior and ensuring that it
meets the specified requirements and functions correctly from an end-user's perspective.
Key Characteristics:
1. No access to internal code: Testers performing black box testing are unaware of the internal workings of the
software and do not have access to the source code.
2. Requirements-based testing: Test cases are designed based on the software's requirements and specifications,
ensuring that all functionalities are adequately tested.
3. Test independence: Black box testing allows testers to work independently of developers, promoting impartial
testing.
4. External perspective: The testing approach simulates how an end-user would interact with the software without
being concerned with its internal implementation.
Common Test Design Techniques:
1. Equivalence class partitioning: Input data is divided into classes (partitions) whose members are expected to be processed in the same way, and representative values from each class are tested.
2. Boundary value analysis: This technique focuses on testing values at the boundaries of equivalence classes to identify potential issues related to boundary conditions (a minimal sketch of both techniques follows this list).
3. Decision table testing: It involves creating a table to capture different combinations of inputs and their corresponding expected outputs, helping identify different scenarios to be tested.
4. State transition testing: Applicable to systems with different states, this technique aims to test how the software transitions from one state to another and its behavior in each state.
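As a concrete illustration of the first two techniques, here is a minimal sketch in Python (pytest-style test functions) that applies equivalence class partitioning and boundary value analysis to a hypothetical is_valid_age function; the function and its accepted range of 18 to 60 are assumptions made for this example.

```python
# A minimal sketch of equivalence partitioning and boundary value analysis.
# The function under test, is_valid_age, is a hypothetical example: it
# accepts ages in the inclusive range 18..60.

def is_valid_age(age: int) -> bool:
    """Return True if the age is within the accepted range 18..60."""
    return 18 <= age <= 60

def test_equivalence_classes():
    # One representative value from each equivalence class.
    assert is_valid_age(35) is True      # valid class: 18..60
    assert is_valid_age(5) is False      # invalid class: below 18
    assert is_valid_age(75) is False     # invalid class: above 60

def test_boundary_values():
    # Values at and just beyond each boundary of the valid class.
    assert is_valid_age(17) is False     # just below the lower boundary
    assert is_valid_age(18) is True      # lower boundary
    assert is_valid_age(60) is True      # upper boundary
    assert is_valid_age(61) is False     # just above the upper boundary
```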
Advantages:
- Testers do not need knowledge of the internal code, making it easier to apply in situations where code access is
restricted.
- It provides a customer-centric perspective, ensuring that the software meets user requirements and specifications.
- Testers with domain knowledge can effectively design test cases based on user expectations.
Limitations:
- It may not cover all code paths and conditions, as testers are unaware of the internal structure.
- The effectiveness of testing highly depends on the quality and completeness of the requirements specifications.
WHITE BOX TESTING
Definition:
- White box testing, also known as structural testing or glass box testing, is a software testing technique where the
tester has access to the internal code and logic of the software.
- The primary focus of white box testing is to verify the correctness of the code, ensuring that all statements,
branches, and paths are adequately tested.
Key Characteristics:
1. Access to internal code: Testers performing white box testing have knowledge of the software's internal
workings and can design test cases based on the code's structure.
2. Coverage analysis: White box testing aims to achieve code coverage, including statement coverage, branch
coverage, and path coverage.
3. Collaboration with developers: White box testing often involves collaboration between testers and developers to
understand the code and design effective test cases.
4. Emphasis on code logic: The focus is on testing the software's logic and algorithms to identify any errors or
omissions in the code.
Common Test Design Techniques:
1. Control flow testing: This technique focuses on testing different control flow paths in the code to ensure that all
possible routes are exercised.
2. Data flow testing: It aims to examine how data is processed and propagated through the program to detect
potential data-related issues.
3. Branch testing: The goal is to test each decision point (branch) in the code, ensuring that both true and false conditions are evaluated (illustrated in the sketch below).
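As a small illustration of branch testing, the sketch below (Python, pytest-style) exercises both outcomes of a single decision point; the apply_discount function and its discount rule are invented for the example.

```python
# A minimal sketch of branch (decision) testing. The function under test,
# apply_discount, is a hypothetical example with a single decision point.

def apply_discount(total: float) -> float:
    """Apply a 10% discount to orders of 100 or more."""
    if total >= 100:          # decision point (branch)
        return total * 0.9    # true branch
    return total              # false branch

def test_true_branch():
    # Exercises the branch where the condition is true.
    assert apply_discount(200.0) == 180.0

def test_false_branch():
    # Exercises the branch where the condition is false.
    assert apply_discount(50.0) == 50.0

# Together these two tests achieve full branch coverage of apply_discount;
# a tool such as coverage.py can confirm this (e.g. "coverage run -m pytest"
# followed by "coverage report").
```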
Advantages:
- It allows thorough code coverage, ensuring that all paths and conditions are tested.
- Testers can identify complex logical errors and performance bottlenecks by analyzing the code.
- Collaboration between testers and developers promotes better communication and understanding of the software.
Limitations:
- Testers must possess programming and technical skills to perform effective white box testing.
- The testing process is tightly coupled with the implementation, making it more challenging to test early in the
development lifecycle.
In conclusion, black box testing and white box testing are two essential software testing techniques that serve
different purposes. While black box testing focuses on verifying functionality and meeting user requirements,
white box testing emphasizes code correctness and internal logic. A comprehensive testing strategy often involves
a combination of both techniques to ensure high-quality software that meets user expectations while being
structurally robust.
SOFTWARE TESTING LIFE CYCLE (STLC)
The Software Testing Life Cycle (STLC) is a systematic approach to testing a software application to ensure that it
meets the requirements and is free of defects. It is a process that follows a series of steps or phases, and each phase
has specific objectives and deliverables. The STLC is used to ensure that the software is of high quality, reliable,
and meets the needs of the end-users.
The main goal of the STLC is to identify and document any defects or issues in the software application as early
as possible in the development process. This allows for issues to be addressed and resolved before the software is
released to the public.
The stages of the STLC include Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution, and Test Closure. Each of these stages includes specific activities and deliverables
that help to ensure that the software is thoroughly tested and meets the requirements of the end users.
Overall, the STLC is an important process that helps to ensure the quality of software applications and provides a
systematic approach to testing. It allows organizations to release high-quality software that meets the needs of
their customers, ultimately leading to customer satisfaction and business success.
STLC Phases
Every Software Testing Life Cycle model (STLC model) has the following six major phases:
1. Requirement Analysis
2. Test Planning
3. Test case development
4. Test Environment setup
5. Test Execution
6. Test Cycle closure
Each of these stages has definite entry and exit criteria, activities, and deliverables associated with it; entry and exit criteria are defined for every level of the Software Testing Life Cycle (STLC).
Phases of STLC
1. Requirement Analysis: Requirement Analysis is the first step of the Software Testing Life Cycle (STLC). In
this phase the quality assurance team studies the requirements to understand what is to be tested. If anything is missing or
not understandable, the quality assurance team meets with the stakeholders to gain detailed knowledge of the requirements.
The activities that take place during the Requirement Analysis stage include:
Reviewing the software requirements document (SRD) and other related documents
Interviewing stakeholders to gather additional information
Identifying any ambiguities or inconsistencies in the requirements
Identifying any missing or incomplete requirements
Identifying any potential risks or issues that may impact the testing process
Creating a requirement traceability matrix (RTM) to map requirements to test cases (a minimal sketch follows this list)
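A traceability matrix can be represented very simply; the sketch below (Python, with invented requirement and test-case identifiers) shows one minimal way to map requirements to test cases and flag uncovered requirements.

```python
# A minimal sketch of a requirement traceability matrix (RTM).
# Requirement and test-case identifiers here are invented for illustration.

rtm = {
    "REQ-001": ["TC-101", "TC-102"],   # login requirements
    "REQ-002": ["TC-201"],             # password-reset requirement
    "REQ-003": [],                     # no test case written yet
}

# Any requirement with an empty test-case list is not yet covered.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test coverage:", uncovered)
# -> Requirements without test coverage: ['REQ-003']
```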
At the end of this stage, the testing team should have a clear understanding of the software requirements and
should have identified any potential issues that may impact the testing process. This will help to ensure that the
testing process is focused on the most important areas of the software and that the testing team is able to deliver
high-quality results.
2. Test Planning: Test Planning is the phase of the software testing life cycle in which all testing
plans are defined. In this phase the manager of the testing team calculates the estimated effort and cost of the testing
work. This phase starts once the requirement-gathering phase is completed.
The activities that take place during the Test Planning stage include:
Identifying the testing objectives and scope
Developing a test strategy: selecting the testing methods and techniques that will be used
Identifying the testing environment and resources needed
Identifying the test cases that will be executed and the test data that will be used
Estimating the time and cost required for testing
Identifying the test deliverables and milestones
Assigning roles and responsibilities to the testing team
Reviewing and approving the test plan
At the end of this stage, the testing team should have a detailed plan for the testing activities that will be
performed, and a clear understanding of the testing objectives, scope, and deliverables. This will help to ensure
that the testing process is well-organized and that the testing team is able to deliver high-quality results.
3. Test Case Development: The test case development phase starts once the test planning phase is
completed. In this phase the testing team writes down the detailed test cases and prepares the
required test data for the testing. Once the test cases are prepared, they are reviewed by the quality assurance
team.
The activities that take place during the Test Case Development stage include:
Identifying the test cases that will be developed
Writing test cases that are clear, concise, and easy to understand
Creating test data and test scenarios that will be used in the test cases
Identifying the expected results for each test case
Reviewing and validating the test cases
Updating the requirement traceability matrix (RTM) to map requirements to test cases
At the end of this stage, the testing team should have a set of comprehensive and accurate test cases that provide
adequate coverage of the software or application. This will help to ensure that the testing process is thorough and
that any potential issues are identified and addressed before the software is released.
4. Test Environment Setup: Test environment setup is a vital part of the STLC. Basically, the test environment
decides the conditions under which the software is tested. This is an independent activity and can be started along with test
case development. The testing team is not involved in this process; either the developer or the customer creates the
testing environment.
5. Test Execution: After test case development and test environment setup, the test execution phase starts. In
this phase the testing team starts executing the test cases prepared in the earlier steps.
The activities that take place during the test execution stage of the Software Testing Life Cycle (STLC)
include:
Test data preparation: Test data is prepared and loaded into the system for test execution.
Test environment setup: The necessary hardware, software, and network configurations are set up for test execution.
Test execution: The test cases and scripts created in the test design stage are run against the software application, and the results are collected and analyzed to identify any defects or issues.
Defect logging: Any defects or issues that are found during test execution are logged in a defect tracking system, along with details such as the severity, priority, and description of the issue (a sketch of such a record follows this list).
Test result analysis: The results of the test execution are analyzed to determine the software’s performance and identify any defects or issues.
Defect retesting: Any defects that are identified during test execution are retested to ensure that they have been fixed correctly.
Test reporting: Test results are documented and reported to the relevant stakeholders.
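As an illustration of what a logged defect might contain, here is a minimal sketch in Python; the field names and values are assumptions for the example, since real defect tracking tools such as Jira or Bugzilla define their own schemas.

```python
# A minimal sketch of the kind of record a defect tracking system stores.
# Field names and values are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class DefectReport:
    defect_id: str       # unique identifier assigned by the tracking system
    summary: str         # one-line description of the issue
    severity: str        # impact on the system, e.g. "critical", "major", "minor"
    priority: str        # urgency of the fix, e.g. "high", "medium", "low"
    status: str          # lifecycle state, e.g. "open", "fixed", "retested", "closed"
    found_in_test: str   # the test case that revealed the failure

bug = DefectReport(
    defect_id="DEF-042",
    summary="Login page crashes when the username field is left empty",
    severity="major",
    priority="high",
    status="open",
    found_in_test="TC-101",
)
print(bug.status)  # -> open
```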
It is important to note that test execution is an iterative process and may need to be repeated multiple times until
all identified defects are fixed and the software is deemed fit for release.
6. Test Closure: Test closure is the final stage of the Software Testing Life Cycle (STLC) where all testing-
related activities are completed and documented. The main objective of the test closure stage is to ensure that all
testing-related activities have been completed and that the software is ready for release.
At the end of the test closure stage, the testing team should have a clear understanding of the software’s quality
and reliability, and any defects or issues that were identified during testing should have been resolved. The test
closure stage also includes documenting the testing process and any lessons learned so that they can be used to
improve future testing processes.
The main activities that take place during the test closure stage include:
Test summary report: A report is created that summarizes the overall testing process, including the number of test cases executed, the number of defects found, and the overall pass/fail rate (a minimal sketch of this calculation follows this list).
Defect tracking: All defects that were identified during testing are tracked and managed until they are
resolved.
Test environment clean-up: The test environment is cleaned up, and all test data and test artifacts are
archived.
Test closure report: A report is created that documents all the testing-related activities that took place,
including the testing objectives, scope, schedule, and resources used.
Knowledge transfer: Knowledge about the software and testing process is shared with the rest of the team
and any stakeholders who may need to maintain or support the software in the future.
Feedback and improvements: Feedback from the testing process is collected and used to improve future
testing processes.
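As a small illustration of the figures that feed a test summary report, the sketch below (Python, with invented sample results) computes the executed count, pass/fail counts, and pass rate.

```python
# A minimal sketch of computing the figures that go into a test summary
# report. The result values are invented sample data.

results = {"TC-101": "pass", "TC-102": "fail", "TC-201": "pass", "TC-202": "pass"}

executed = len(results)
passed = sum(1 for outcome in results.values() if outcome == "pass")
failed = executed - passed
pass_rate = passed / executed * 100

print(f"Executed: {executed}, Passed: {passed}, Failed: {failed}")
print(f"Pass rate: {pass_rate:.1f}%")   # -> Pass rate: 75.0%
```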
It is important to note that test closure is not just about documenting the testing process, but also about ensuring
that all relevant information is shared and any lessons learned are captured for future reference. The goal of test
closure is to ensure that the software is ready for release and that the testing process has been conducted in an
organized and efficient manner.
THE V-MODEL
The V-Model is a linear and sequential model that consists of the following phases:
1. Requirements Gathering and Analysis: The first phase of the V-Model is the requirements gathering and
analysis phase, where the customer’s requirements for the software are gathered and analyzed to determine
the scope of the project.
2. Design: In the design phase, the software architecture and design are developed, including the high-level
design and detailed design.
3. Implementation: In the implementation phase, the software is actually built based on the design.
4. Testing: In the testing phase, the software is tested to ensure that it meets the customer’s requirements and is
of high quality.
5. Deployment: In the deployment phase, the software is deployed and put into use.
6. Maintenance: In the maintenance phase, the software is maintained to ensure that it continues to meet the
customer’s needs and expectations.
The V-Model is often used in safety-critical systems, such as aerospace and defense systems, because of its
emphasis on thorough testing and its ability to clearly define the steps involved in the software development
process.
Verification: It involves static analysis techniques (reviews) done without executing code. It is the process of
evaluating the products of a development phase to determine whether they meet the specified requirements.
Validation: It involves dynamic analysis techniques (functional and non-functional testing) done by executing
code. Validation is the process of evaluating the software after the completion of the development phase to
determine whether it meets the customer's expectations and requirements.
So the V-Model contains verification phases on one side and validation phases on the other. The verification
and validation phases are joined by the coding phase at the base of the V shape; thus it is called the V-Model.
Design Phase:
Requirement Analysis: This phase involves detailed communication with the customer to understand their
requirements and expectations. This stage is also known as Requirement Gathering.
System Design: This phase covers the system design and the complete hardware and communication setup
for developing the product.
Architectural Design: The system design is broken down further into modules that take up different
functionalities. The data transfer and communication between the internal modules and with the outside
world (other systems) is clearly defined.
Module Design: In this phase the system is broken down into small modules. The detailed design of the modules
is specified; this is also known as Low-Level Design (LLD).
Testing Phases:
Unit Testing: Unit Test Plans are developed during the module design phase. These Unit Test Plans are
executed to eliminate bugs at the code or unit level.
Integration Testing: After completion of unit testing, integration testing is performed. In integration testing,
the modules are integrated and the system is tested. Integration testing corresponds to the architectural
design phase. This test verifies the communication of modules among themselves.
System Testing: System testing tests the complete application with its functionality, interdependency, and
communication. It tests the functional and non-functional requirements of the developed application.
User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the production
environment. UAT verifies that the delivered system meets the user's requirements and that the system is ready for use in
the real world.
Industrial Challenge: As the industry has evolved, technologies have become more complex, increasingly
fast, and forever changing; however, there remains a set of basic principles and concepts that are as applicable
today as when IT was in its infancy.
Large to Small: In the V-Model, testing is done from a hierarchical perspective. For example, the requirements
identified by the project team drive the High-Level Design and Detailed Design phases of the project; as each
of these phases is completed, the requirements they define become more and more refined and detailed.
Data/Process Integrity: This principle states that the successful design of any project requires the
incorporation and cohesion of both data and processes. Process elements must be identified for each and
every requirement.
Scalability: This principle states that the V-Model concept has the flexibility to accommodate any IT
project irrespective of its size, complexity or duration.
Cross Referencing: Direct correlation between requirements and corresponding testing activity is known as
cross-referencing.
Tangible Documentation: This principle states that every project needs to create documentation. This
documentation is required and applied by both the project development team and the support team.
Documentation is used to maintain the application once it is available in a production environment.
Why preferred?
It is easy to manage due to the rigidity of the model. Each phase of V-Model has specific deliverables and a
review process.
Proactive defect tracking, that is, defects are found at an early stage.
When to use?
Where requirements are clearly defined and fixed.
The V-Model is used when ample technical resources are available with technical expertise.
Small to medium-sized projects with set and clearly specified needs are recommended to use the V-shaped
model.
Since it is challenging to keep requirements stable in large projects, the project should be small.
Advantages:
This is a highly disciplined model and Phases are completed one at a time.
V-Model is used for small projects where project requirements are clear.
Simple and easy to understand and use.
This model focuses on verification and validation activities early in the life cycle thereby enhancing the
probability of building an error-free and good quality product.
It enables project management to track progress accurately.
Clear and Structured Process: The V-Model provides a clear and structured process for software
development, making it easier to understand and follow.
Emphasis on Testing: The V-Model places a strong emphasis on testing, which helps to ensure the quality
and reliability of the software.
Improved Traceability: The V-Model provides a clear link between the requirements and the final product,
making it easier to trace and manage changes to the software.
Better Communication: The clear structure of the V-Model helps to improve communication between the
customer and the development team.
Disadvantages:
It is a rigid model; once an application is in the testing stage, it is difficult to go back and change earlier phases.
It carries high risk and uncertainty when requirements are likely to change, since requirements must be fixed up front.
Working software is produced only late in the life cycle, and no early prototypes are available.
It is not a good fit for large, complex, or long-running projects with evolving requirements.
TESTING AS A PROCESS
The software development process is described as a series of phases, procedures, and steps that result
in the production of software products. Embedded within the software development process are several other
processes, including testing.
Testing is described as a group of procedures carried out to evaluate some aspect of a piece of
software.
Testing can be described as a process used for revealing defects in software, and for establishing that
the software has attained a specified degree of quality with respect to selected attributes.
Testing covers both validation and verification activities. Testing includes the following,
Technical reviews
Test planning
Test tracking
Test case design
Unit test
Integration test
System test
Acceptance test
Usability test
Testing can also be described as a dual-purpose process.
It reveals defects and evaluates quality attributes of the software such as:
Reliability
Security
Usability
Correctness
The debugging process begins after testing has been carried out and the tester has noted that the
software is not behaving as specified.
Debugging is the process of:
Locating the fault or defect
Repairing the code
Retesting the code
Testing has economic, technical and managerial aspects.
Testing must be managed. Organizational policy for testing must be defined and documented.
Testing is related to two processes
1. Verification
2. Validation
Verification
Verification is the process of evaluating a software system or component to determine whether the
products of a given development phase satisfy the conditions imposed at the start of that phase.
Verification is usually associated with inspections and reviews of software deliverables.
We apply verification activities from the early phases of software development, checking and reviewing
the documents generated after the completion of each phase. Hence, it is the process of reviewing the
requirement document, design document, source code, and other related documents of the project. This is
manual testing and involves only looking at the documents in order to ensure that what comes out is what we
expected to get.
Validation
Validation is the process of evaluating a software system or components during or at the end of the
development cycle in order to determine whether it satisfies specified requirements.
Validation is usually associated with traditional execution-based testing, that is, exercising the code with test
cases.
Both are essential and complementary activities of software testing. If effective verification is carried out, it may
minimize the need for validation, and more errors may be detected in the early phases of software
development. Unfortunately, testing is primarily validation oriented.
Errors: An error is a mistake, misconception, or misunderstanding on the part of a software developer. The fault it introduces may stay hidden, for example:
A memory bit is stuck, but the CPU never accesses that data.
A software "bug" in a subroutine is not "visible" while the subroutine is not called.
Faults (Defects): A fault (defect) is introduced into the software as the result of an error. It is an
irregularity in the software that may cause it to behave incorrectly, and not according to its specification.
Failures: A failure is the inability of a software system or component to perform its required functions
within specified performance requirements.
Presence of an error might cause a whole system to deviate from its required operation
One of the goals of safety-critical systems is that error should not result in system failure
During execution of a software component or system, a tester, developer, or user observes that
it does not produce the expected results.
In some cases a particular type of misbehaviour indicates a certain type of fault is present. We
can say that the type of misbehaviour is a symptom of the fault.
An experienced developer/tester will have a knowledge base of fault/symptom/failure cases
stored in memory.
Incorrect behaviour can include producing incorrect values for output variables, an incorrect
response on the part of a device, or an incorrect image on a screen.
During development failures are usually observed by testers, and faults are located and repaired
by developers.
When the software is in operation, users may observe failures which are reported back to the
development organization so repairs can be made.
A fault in the code does not always produce a failure. In fact, faulty software may operate over
a long period of time without exhibiting any incorrect behaviour.
However when the proper conditions occur the fault will manifest itself as a failure.
Voas is among the researchers who discuss these conditions, which are as follows:
The input to the software must cause the faulty statement to be executed.
The faulty statement must produce a different result than the correct
statement. This event produces an incorrect internal state for the software.
The incorrect internal state must propagate to the output, so that the result of the fault is
observable.
Software that easily reveals its faults as failures is said to be more testable.
From the tester's point of view, this is a desirable software attribute. Testers need to work with
designers to ensure that software is testable.
Test Cases: To detect defects in a piece of software the tester selects a set of input data and then
executes the software with the input data under a particular set of conditions. A test case is a test-
related item which contains the following information:
A set of test inputs. These are data items received from an external source by the code under
test. The external source can be hardware, software, or human.
Execution conditions. These are conditions required for running the test, for example, a
certain state of a database, or a configuration of a hardware device.
Expected outputs. These are the specified results to be produced by the code under test.
Test: A test is a group of related test cases, or a group of related test cases and test procedures.
Test Oracle: A test oracle is a document, or piece of software that allows testers to determine whether
a test has been passed or failed.
Test Bed: A test bed is an environment that contains all the hardware and software needed to test a
software component or a software system.
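To tie these definitions together, here is a minimal sketch in Python of a test case that bundles test inputs, execution conditions, and expected outputs, together with a simple oracle that decides pass/fail; all names and values are invented for the example.

```python
# A minimal sketch tying these definitions together: a test case bundles
# inputs, execution conditions, and expected outputs, and a simple oracle
# compares actual results against the expected ones.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str
    inputs: dict                                    # test inputs for the code under test
    conditions: dict = field(default_factory=dict)  # required execution conditions
    expected_output: object = None                  # the specified result

def oracle(test_case: TestCase, actual_output: object) -> str:
    """Decide pass/fail by comparing the actual output with the expected output."""
    return "pass" if actual_output == test_case.expected_output else "fail"

tc = TestCase(
    test_id="TC-001",
    inputs={"a": 2, "b": 3},
    conditions={"database_state": "empty"},
    expected_output=5,
)
print(oracle(tc, actual_output=2 + 3))  # -> pass
```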
Quality relates to the degree to which a system, system component, or process meets specified
requirements.
Quality relates to the degree to which a system, system component, or process meets customer
or user needs, or expectations.
Metric: A metric is a quantitative measure of the degree to which a system, system component, or
process has a given attribute. A quality metric is a quantitative measurement of the degree to which an
item possesses a given quality attribute. Some examples of quality metric are,
Correctness—the degree to which the system performs its intended function
Reliability—the degree to which the software is expected to perform its required functions
under stated conditions for a stated period of time
Usability—relates to the degree of effort needed to learn, operate, prepare input, and interpret
output of the software
Integrity—relates to the system’s ability to withstand both intentional and accidental attacks
Portability—relates to the ability of the software to be transferred from one environment to
another
Maintainability—the effort needed to make changes in the software
Interoperability—the effort needed to link or couple one system to another.
Program Inspection: The program inspection is a formal process that is carried out by a team of at least four people, each of whom takes a defined role in the inspection process.
Reliability and safety are two critical aspects of software testing, but they have different focuses and objectives.
Let's explore the differences between reliability and safety in software testing:
Reliability refers to the ability of software to perform its intended function consistently and accurately over a
specified period and under certain conditions. In the context of software testing, the goal of testing for reliability is
to identify and address defects, bugs, or vulnerabilities that could cause the software to behave unpredictably or
inconsistently. The focus is on ensuring that the software works as expected and meets the user's requirements.
Key aspects of reliability testing include:
1. Functionality testing: Reliability testing involves functional testing to verify that the software functions as
intended and produces accurate results.
2. Stress testing: Reliability testing also includes stress testing, where the software is subjected to various load
conditions to identify potential performance issues.
3. Regression testing: Regression testing is often a part of reliability testing to ensure that new changes or updates
to the software do not introduce new defects.
4. Metrics: Reliability metrics include mean time between failures (MTBF), mean time to failure (MTTF), mean
time to repair (MTTR), and availability, which provide insights into the software's reliability.
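As a small worked example of these metrics, the sketch below (Python, with invented uptime and repair figures) computes MTBF, MTTR, and availability, where availability = MTBF / (MTBF + MTTR).

```python
# A minimal sketch of the standard reliability metrics. The hour values are
# invented sample data.

uptime_hours = [200.0, 150.0, 250.0]   # operating periods between failures
repair_hours = [2.0, 4.0, 3.0]         # time taken to repair each failure

failures = len(repair_hours)
mtbf = sum(uptime_hours) / failures    # mean time between failures
mttr = sum(repair_hours) / failures    # mean time to repair
availability = mtbf / (mtbf + mttr)    # steady-state availability

print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.1f} h, Availability: {availability:.2%}")
# -> MTBF: 200.0 h, MTTR: 3.0 h, Availability: 98.52%
```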
Safety, on the other hand, focuses on identifying potential hazards or risks associated with the software and
ensuring that it does not cause harm to users, systems, or the environment. Safety testing is crucial for software
used in critical systems such as medical devices, automotive systems, aerospace, and industrial control systems,
where a software failure can have severe consequences.
Key aspects of safety testing include:
1. Hazard analysis: Safety testing involves identifying potential hazards and risks associated with the software and
analyzing the impact of these hazards.
2. Fault tolerance: Safety testing aims to verify the software's fault tolerance and its ability to handle unexpected
situations without causing harm.
3. Compliance with safety standards: In many industries, safety-critical software must adhere to specific safety
standards and regulations. Safety testing ensures that the software meets these requirements.
4. Extensive testing scenarios: Safety testing involves testing the software under various conditions and edge cases
to identify and mitigate potential safety issues.
In summary, while reliability testing focuses on the consistent and accurate functioning of the software, safety
testing aims to identify and prevent potential hazards and risks that could lead to harm or adverse consequences.
Both aspects are essential in software testing, especially in critical systems where both functionality and safety are
paramount. A comprehensive testing approach should address both reliability and safety concerns to deliver high-
quality and safe software products.
SOFTWARE TESTING PRINCIPLES
Testing principles are important to test specialists and engineers because they are the foundation
for developing testing knowledge and acquiring testing skills. They also provide guidance for defining testing
activities.
A principle can be defined as,
A general or fundamental law.
A rule or code of conduct.
The laws or facts of nature underlying the working of an artificial device.
In the software domain, principles may also refer to rules or codes of conduct relating to professionals,
who design, develop, test, and maintain software systems.
The following are a set of testing principles
Principle 1: Testing is the process of exercising a software component using a selected set of test cases, with
the intent of revealing defects, and evaluating quality.
This principle supports testing as an execution-based activity to detect defects. It also supports the
separation of testing from debugging since the intent of debugging is to locate defects and repair the
software.
The term "software component" means any unit of software ranging in size and complexity from an
individual procedure or method, to an entire software system.
The term "defects" represents any deviations in the software that have a negative impact on its
functionality, performance, reliability, security, and/or any other of its specified quality attributes.
Principle 2: When the test objective is to detect defects, then a good test case is one that has a high
probability of revealing yet undetected defects.
Testers must carry out testing in the same way as scientists carry out experiments.
Testers need to create a hypothesis and work towards proving or disproving it; that is, they must
prove the presence or absence of a particular type of defect.
Principle 3: Test results should be inspected meticulously.
Testers need to carefully inspect and interpret test results. Several erroneous and costly scenarios may
occur if care is not taken.
A failure may be overlooked, and the test may be granted a "pass" status when in reality the software
has failed the test.
Testing may continue based on erroneous test results.
The defect may be revealed at some later stage of testing, but in that case it may be more costly and
difficult to locate and repair.
Principle 4: A test case must contain the expected output or result.
The test case is of no value unless there is an explicit statement of the expected outputs or results.
Expected outputs allow the tester to determine
Whether a defect has been revealed,
Pass/fail status for the test.
It is very important to have a correct statement of the output so that time is not spent due to
misconceptions about the outcome of a test.
The specification of test inputs and outputs should be part of test design activities.
Principle 5: Test cases should be developed for both valid and invalid input conditions.
A tester must not assume that the software under test will always be provided with valid inputs.
Inputs may be incorrect for several reasons.
Software users may have misunderstandings, or lack information about the nature of the inputs
They often make typographical errors even when complete/correct information is available.
Devices may also provide invalid inputs due to erroneous conditions and malfunctions.
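As an illustration of this principle, the minimal sketch below (Python, pytest) tests one valid and one invalid input condition for a hypothetical divide function, where the expected behaviour for the invalid input is an exception.

```python
# A minimal sketch of testing both valid and invalid input conditions with
# pytest. The divide function is a hypothetical example.

import pytest

def divide(a: float, b: float) -> float:
    return a / b

def test_valid_input():
    assert divide(10, 2) == 5

def test_invalid_input():
    # The tester must not assume inputs are always valid: division by zero
    # is an invalid condition, and the expected behaviour is an exception.
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)
```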
Principle 6: The probability of the existence of additional defects in a software component is proportional to
the number of defects already detected in that component.
The higher the number of defects already detected in a component, the more likely it is to have
additional defects when it undergoes further testing.
If there are two components A and B, and testers have found 20 defects in A and 3 defects in B, then
the probability of the existence of additional defects in A is higher than B.
Principle 7: Testing should be carried out by a group that is independent of the development group.
This principle is true for psychological as well as practical reasons. It is difficult for a developer to
admit that software he/she has created and developed can be faulty.
Testers must realize that
Developers have great pride in their work,
Practically it is difficult for the developer to conceptualize where defects could be found.
Principle 8: Tests must be repeatable and reusable.
The tester needs to record the exact conditions of the test, any special events that occurred, the equipment
used, and carefully note the results.
This information is very useful to the developers when the code is returned for debugging so that they
can duplicate test conditions.
It is also useful for tests that need to be repeated after defect repair.
Principle 9: Testing should be planned.
Test plans should be developed for each level of testing. The objective for each level should be
described in the associated plan. The objectives should be stated as quantitatively as possible.
Principle 10: Testing activities should be integrated into the software life cycle.
Testing activities should be integrated into the software life cycle, starting as early as the requirements
analysis phase, and continuing throughout the software life cycle in parallel with development
activities.
Principle 11: Testing is a creative and challenging task.
A tester needs to have knowledge, from both experience and education, of how software is specified,
designed, and developed.
A tester needs to be able to manage many details.
A tester needs to have knowledge of fault types and where faults of a certain type might occur in code
construction.
A tester needs to reason like a scientist and make hypotheses that relate to presence of specific types of
defects.
A tester needs to have a good understanding of the problem domain of the software that he/she is testing.
Familiarity with a domain may come from educational, training, and work-related experiences. A tester
needs to create and document test cases.
To design the test cases the tester must select inputs often from a very wide domain.
The selected test cases should have the highest probability of revealing a defect. Familiarity with the
domain is essential.
A tester needs to design and record test procedures for running the tests.
A tester needs to plan for testing and allocate proper resources.
A tester needs to execute the tests and is responsible for recording results.
A tester needs to analyse test results and decide on success or failure for a test.
This involves understanding and keeping track of a huge amount of detailed information.
A tester needs to learn to use tools and keep up to date with the newest test tools.
A tester needs to work and cooperate with requirements engineers, designers, and developers, and
often must establish a working relationship with clients and users.
A tester needs to be educated and trained in this specialized area.
LEVELS OF TESTING
Testing levels are the procedure for finding the missing areas and avoiding overlap and repetition between the
development life cycle stages. We have already seen the various phases of the SDLC (Software Development Life Cycle),
such as requirement collection, designing, coding, testing, deployment, and maintenance.
The levels of software testing involve the different methodologies, which can be used while we are performing the
software testing.
In software testing, we have four different levels of testing, which are as discussed below:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
Unit Testing
Unit Testing is the first level of testing usually performed by the developers.
In unit testing, a module or component is tested in isolation.
As the testing is limited to a particular module or component, exhaustive testing is possible.
The first level of testing involves analyzing each unit or an individual component of the software
application.
Unit testing is also the first level of functional testing. The primary purpose of executing unit testing is to
validate that unit components perform as expected.
Advantage – Errors can be detected at an early stage, saving the time and money needed to fix them.
Limitation – Integration issues are not detected at this stage; modules may work perfectly in isolation but
can have issues in the interfaces between the modules.
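As a minimal illustration of unit testing, the sketch below uses Python's built-in unittest module to test a hypothetical word_count function in isolation; the function and its tests are invented for the example.

```python
# A minimal sketch of a unit test using Python's built-in unittest module.
# The unit under test, word_count, is a hypothetical example tested in
# isolation from the rest of the system.

import unittest

def word_count(text: str) -> int:
    """Return the number of whitespace-separated words in the text."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_normal_sentence(self):
        self.assertEqual(word_count("software testing unit one"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()   # run with: python test_word_count.py
```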
Integration Testing
Integration Testing is the second level of testing. It is mainly used to test the data flow from one module or component to other modules.
In integration testing, the test engineer tests the units or separate components or modules of the software
in a group.
The primary purpose of executing the integration testing is to identify the defects at the interaction between
integrated components or units.
It is of four types – Big-bang, top-down, bottom-up, and Hybrid.
1. In big bang integration, all the modules are first required to be completed and then integrated. After
integration, testing is carried out on the integrated unit as a whole.
2. In top-down integration testing, the testing flow starts from the top-level modules, higher in the
hierarchy, and moves towards the lower-level modules. There is a possibility that the lower-level modules
have not yet been developed when testing begins with the top-level modules.
So, in those cases, stubs are used, which are nothing but dummy modules or functions that simulate the
functioning of a module by accepting the parameters the module would receive and giving an acceptable
result (see the sketch after this list).
3. Bottom-up integration testing is also based on an incremental approach but it starts from lower-level
modules, moving upwards to the higher-level modules. Again the higher-level modules might not have
been developed by the time lower modules are tested. So, in those cases, drivers are used. These
drivers simulate the functionality of higher-level modules in order to test lower-level modules.
4. Hybrid integration testing is also called the Sandwich integration approach. This approach is a
combination of both top-down and bottom-up integration testing. Here, the integration starts from the
middle layer, and testing is carried out in both directions, making use of both stubs and drivers,
whenever necessary.
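As an illustration of stubs and drivers, here is a minimal sketch in Python; the module and function names are invented for the example.

```python
# A minimal sketch of a stub (for top-down integration) and a driver (for
# bottom-up integration). All names are invented for illustration.

# --- Stub: stands in for a lower-level module that is not yet developed ---
def get_exchange_rate_stub(currency: str) -> float:
    """Dummy replacement for a real rate-lookup module: accepts the same
    parameter and returns a fixed, acceptable result."""
    return 1.0   # hard-coded value instead of a real lookup

def convert_price(amount: float, currency: str,
                  rate_lookup=get_exchange_rate_stub) -> float:
    """Higher-level module under test; it uses the stub until the real
    lower-level module is available."""
    return amount * rate_lookup(currency)

# --- Driver: stands in for a higher-level module that is not yet developed ---
def driver_for_convert_price():
    """Simulates the (not yet written) checkout module that would normally
    call convert_price, so the lower-level unit can be tested."""
    result = convert_price(100.0, "EUR")
    assert result == 100.0   # stub returns a rate of 1.0
    print("convert_price tested via driver:", result)

driver_for_convert_price()
```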
System Testing
System Testing is the third level of testing.
System testing is used to test the software's functional and non-functional requirements.
It is end-to-end testing, where the testing environment is similar to the production environment. In this
third level of software testing, we test the application as a whole system.
Checking the end-to-end flow of an application, or of the software as a user would use it, is known as system testing.
In system testing, we will go through all the necessary modules of an application and test if the end
features or the end business works fine, and test the product as a complete system.
It is the level of testing where the complete integrated application is tested as a whole.
It aims at determining if the application conforms to its business requirements.
System testing is carried out in an environment that is very similar to the production environment.
Acceptance Testing
Acceptance testing is the final and one of the most important levels of testing on successful completion of
which the application is released to production.
It aims at ensuring that the product meets the specified business requirements within the defined standard
of quality.
There are two kinds of acceptance testing: alpha testing and beta testing.
1. When acceptance testing is carried out by testers or some other internal employees of the organization
at the developer’s site it is known as alpha testing.
2. User acceptance testing done by end-users at the end-user’s site is called beta testing.