SQT QB Final C
Verification and validation (V&V) are processes of checking that a software system meets
specifications and requirements and fulfills its intended purpose. Together they are also
known as software quality control. Verification targets the software's quality, architecture,
design, database, etc., and is done by the QA team before the software is built. Validation
targets the software's functionality, usability, and customer requirements, and is done by
the testing team with the QA team after the software is built and before the production release.
Defect: A deviation or departure of a quality characteristic from its specified value that
results in a product not satisfying its normal usage requirements.
Bug: A defect recognized by the development team.
Failure: A transition from correct to incorrect service delivery, or a difference from the
expected result.
CI/CD practices involve automating the integration, testing, and deployment of code changes,
ensuring that software is continually built, tested, and delivered with high quality.
20. What is user experience (UX) design?
UX design focuses on creating software that provides a positive and user-friendly experience. It
involves considering factors like usability, accessibility, and user satisfaction.
Part-B
1. Describe White box and Black box testing.
Software Testing can be majorly classified into two categories:
Black box testing and white box testing are two different approaches to software testing,
and their differences are as follows:
Black box testing is a testing technique in which the internal workings of the software are not
known to the tester. The tester only focuses on the input and output of the software. Whereas,
White box testing is a testing technique in which the tester has knowledge of the internal
workings of the software, and can test individual code snippets, algorithms and methods.
Testing objectives: Black box testing is mainly focused on testing the functionality of the
software, ensuring that it meets the requirements and specifications. White box testing is
mainly focused on ensuring that the internal code of the software is correct and efficient.
Knowledge level: Black box testing does not require any knowledge of the internal workings
of the software, and can be performed by testers who are not familiar with programming
languages. White box testing requires knowledge of programming languages, software
architecture and design patterns.
Testing methods: Black box testing uses methods like equivalence partitioning, boundary
value analysis, and error guessing to create test cases. Whereas, white box testing uses methods
like control flow testing, data flow testing and statement coverage.
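As an illustration of boundary value analysis, here is a minimal sketch in Python; the `is_valid_age` function and its 18-60 range are hypothetical, not taken from any particular system:

```python
# Hypothetical requirement: a function that accepts ages in the range 18-60.
# Boundary value analysis picks test inputs at and just around each boundary.

def is_valid_age(age):
    """Accept ages from 18 to 60 inclusive (illustrative requirement)."""
    return 18 <= age <= 60

# Boundary values: just below, at, and just above each limit.
boundary_cases = {
    17: False,  # just below lower boundary
    18: True,   # lower boundary
    19: True,   # just above lower boundary
    59: True,   # just below upper boundary
    60: True,   # upper boundary
    61: False,  # just above upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age {age} failed"
print("All boundary cases passed")
```

Equivalence partitioning would complement this by picking one representative value from each partition (for example, an age well inside the valid range and one well outside it).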
Scope: Black box testing is generally used for testing the software at the functional level.
White box testing is used for testing the software at the unit level, integration level and system
level.
Advantages and disadvantages:
Black box testing is easy to use, requires no programming knowledge and is effective in
detecting functional issues. However, it may miss some important internal defects that are not
related to functionality. White box testing is effective in detecting internal defects, and ensures
that the code is efficient and maintainable. However, it requires programming knowledge and
can be time-consuming.
In conclusion, both black box testing and white box testing are important for software testing,
and the choice of approach depends on the testing objectives, the testing stage, and the
available resources.
Differences between Black Box Testing vs White Box Testing:
Black Box Testing:
Implementation of the code is not needed.
Testing can be initiated based on the requirement specification document.
It is behavior testing of the software.
It is applicable to the higher levels of software testing.
Can be done by trial-and-error ways and methods.
It is less exhaustive as compared to white box testing.
White Box Testing:
Code implementation is necessary.
Testing is started after a detailed design document.
It is logic testing of the software.
It is generally applicable to the lower levels of software testing.
Data domains along with inner or internal boundaries can be better tested.
It is comparatively more exhaustive than black box testing.
Test plan
A test plan is a document that identifies the quality assurance team's project
schedule as well as the various tasks the team will be taking on. According to Software Testing Help,
this deliverable often includes all activities in the project as well as defines the scope, roles,
risks, entry and exit criteria, test objectives and more. A test plan can also include a test strategy,
which outlines the testing approach, and gives generic details for teams to follow. While a test
plan gives specific responsibilities to team members, the test strategy ensures that anyone is able
to execute the tests, aligning with agile practices.
It's important to note that setting up the test environment is also part of test planning. According
to InformIT contributor Elfriede Dustin, installing hardware, software and network resources,
integrating testing assets, refining test databases and creating test bed scripts will all be a part of
this phase. In this way, organizations will be sure that they have the tools on hand to support the
test environment and create quality projects.
Test design
The test design revolves around tests themselves, including how many will need to be performed,
the test conditions and ways that testing will be approached. According to the ISTQB blog, test
design also involves creating and writing test suites for testing a software, but will require
specificity and detailed input. After choosing the input value, QA teams can then determine what
the expected result would be and document it as part of the test case. Doing so will help give
qualifications for passing and failing tests, allowing QA to quickly mitigate errors and refine
their projects to achieve overarching goals.
"Much like a software development effort, the test program must be mapped out and consciously
designed to ensure that test activities performed represent the most efficient and effective tests
for the system under test," Dustin wrote. "Test program resources are limited, yet ways of testing
the system are endless. A test design is developed to portray the test effort, in order to give
project and test personnel a mental framework on the boundary and scope of the test program."
Although there are a number of terms to understand in software development, test planning and
test design are two critical assets that must be fully utilized. By leveraging an enterprise test
management tool, organizations can enact their test design, prioritize their test cases and
collaborate effectively.
Points 3 and 4 talk about the metrics. The following metrics can be used for Test
Monitoring:
1. Test Coverage Metric
2. Test Execution Metrics (Number of test cases pass, fail, blocked, on hold)
3. Defect Metrics
4. Requirement Traceability Metrics
5. Miscellaneous metrics like level of confidence of testers, date milestones, cost, schedule,
and turnaround time.
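These execution metrics are straightforward to compute once results are collected. A minimal sketch, where the test-case IDs and statuses are made up for illustration:

```python
# Illustrative sketch: computing simple test-monitoring metrics from a
# list of test-case results (IDs and statuses are hypothetical).

results = [
    {"id": "TC-1", "status": "pass"},
    {"id": "TC-2", "status": "fail"},
    {"id": "TC-3", "status": "pass"},
    {"id": "TC-4", "status": "blocked"},
    {"id": "TC-5", "status": "pass"},
]

total = len(results)
counts = {}
for r in results:
    counts[r["status"]] = counts.get(r["status"], 0) + 1

pass_rate = counts.get("pass", 0) / total * 100
print(f"Executed: {total}, Pass rate: {pass_rate:.1f}%")  # Pass rate: 60.0%
print(f"Status breakdown: {counts}")
```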
Test Control involves guiding and taking corrective measures activity, based on the results
of Test Monitoring. Test Control examples include:
1. Prioritizing the Testing efforts
2. Revisiting the Test schedules and Dates
3. Reorganizing the Test environment
4. Re-prioritizing the Test cases/Conditions
Test Monitoring and Control go hand in hand. Being primarily a manager’s activity, a Test
Analyst contributes to this activity by gathering and calculating the metrics which will be
eventually used for monitoring and control.
While these tests can be performed by a human, they are quite complex and are therefore prone
to errors. For example, someone testing a site in a foreign language is bound to make mistakes,
especially if the site is sizable. In instances like this, it's easy to see why automation testing is
the right option.
That said, there are some instances where manual testing is better, including:
New test cases that have not yet been executed manually
Test cases where the criteria are always changing
Test cases that are not routine
In these instances, you can see why it would be beneficial to have a pair of human eyes on the
testing. For example, the first time a test code is written, it should be run manually to ensure
that it delivers the expected result. Once this is verified, it can then be used as an automated
solution.
In the cases where automation testing is appropriate, you’ll see some specific benefits,
(perhaps even more so if you are already using AI in test automation) including:
Speed
Wider test coverage
Consistency
Cost savings
Frequent and thorough testing
Faster time to market
Now that you know when to use an automation tool and the reasons why you should, let’s look
at how to choose the right tool for your needs.
9 Types Of Automation Testing
Generally, there are two types of testing. Functional testing tests the real-world applications of
the software while non-functional testing tests different software requirements, like security
and data storage.
Many specific types of testing fit into these categories, and some of them may overlap. The
types of automated testing include:
1. Unit Testing
Unit testing is testing small, individual components of the software. It’s the first stage of
testing, and while it’s usually done manually, it can be automated, so I wanted to include it
here.
2. Smoke Tests
A smoke test is a functional test that determines whether or not a build is stable. It verifies the
function of essential features to make sure the program can endure further testing. The name
comes from the idea that this test prevents the program from catching fire if it’s not ready for
additional testing.
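A minimal smoke-test sketch, assuming a hypothetical application with a `start_app` entry point (all names here are illustrative):

```python
# Illustrative smoke test: verify that the essential entry points of a
# hypothetical application respond at all before running the full suite.

def start_app():
    """Stand-in for launching the application; returns a fake app object."""
    return {"running": True, "homepage": "<html>ok</html>"}

def smoke_test():
    app = start_app()
    # Essential checks only: the build starts and the main page renders.
    assert app["running"], "application failed to start"
    assert "<html>" in app["homepage"], "homepage did not render"
    return "smoke test passed: build is stable enough for further testing"

print(smoke_test())
```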
3. Integration Tests
These functional tests make sure that all of the individual pieces of software work well
together when operating as a whole.
4. Regression Tests
Regression tests are both functional and non-functional, ensuring that no part of the software
has regressed after changes are made.
5. API Testing
The application programming interface or API acts as the conduit between all the other systems
that your software needs to function. It’s usually tested after software development to make
sure that everything is working together as it should.
6. Security Tests
Security tests are also functional and non-functional. Their purpose is to check everything for
security weaknesses that can be exploited.
7. Performance Tests
Non-functional performance tests evaluate stability and responsiveness. They ensure that the
software can handle stress and deliver a better and more reliable user experience.
8. Acceptance Tests
Acceptance tests are functional tests that try to determine how end-users will respond to the
final product. This test must be passed successfully before the product can be released to end-
users.
9. UI Tests
User interface tests are one of the last tests in the process. This test is designed to accurately
replicate a typical user experience. It ensures that the end product that users interact with
works as it should.
Test Automation Frameworks
Once you know what kind of automated testing you need to do, the next step is to choose a
framework to organize the testing process.
The biggest benefit of doing this is that it standardizes the testing process, which provides a
structure so that everyone applying automated testing to the project is on the same page.
Some of the most common types of test automation framework are:
Linear Framework
This type is sometimes called Record and Playback. Testers create a test script for each test
case. It’s a very basic approach that’s more suited to a small team that doesn’t have a lot of
experience with test automation.
Modular Framework
This framework organizes each test case into small, independent modules. Each one has a
different scenario, but they are all handled by the framework’s single master script. This
approach is very efficient, but a lot of planning is required, and it’s best used by testers who
have experience with automation testing tools.
Manual vs. Automated Unit Testing
Unit testing is commonly automated but may still be performed manually. Software Engineering
does not favor one over the other but automation is preferred. A manual approach to unit testing
may employ a step-by-step instructional document.
A developer writes a section of code in the application just to test the function. They
would later comment out and finally remove the test code when the application is
deployed.
A developer could also isolate the function to test it more rigorously. This is a more
thorough unit testing practice that involves copying the code to its own testing
environment rather than its natural environment. Isolating the code helps reveal
unnecessary dependencies between the code being tested and other units or data
spaces in the product. These dependencies can then be eliminated.
A coder generally uses a UnitTest Framework to develop automated test cases. Using an
automation framework, the developer codes criteria into the test to verify the correctness
of the code. During execution of the test cases, the framework logs failing test cases.
Many frameworks will also automatically flag failed test cases and report them in a
summary. Depending on the severity of a failure, the framework may halt subsequent testing.
The workflow of Unit Testing is 1) Create Test Cases 2) Review/Rework 3) Baseline 4)
Execute Test Cases.
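This workflow can be sketched with Python's built-in unittest framework; the `discount` function under test is a made-up example, not from any real project:

```python
import unittest

# Illustrative unit under test; the function and its rules are assumptions.
def discount(price, percent):
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestDiscount(unittest.TestCase):
    def test_typical_case(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(discount(99.0, 0), 99.0)

    def test_invalid_percent_is_rejected(self):
        # The framework logs this case as a failure if no error is raised.
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["discount-tests"], exit=False)
```

Running the script executes all three cases; the framework reports each failure with the assertion that triggered it, matching the logging behavior described above.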
Software development projects can gain a number of advantages through unit testing, including
the following:
1. Early Bug Detection: Unit testing aids in the early detection of flaws or defects in the
software code, helping to avoid them developing into larger problems or spreading to
later stages of the software development cycle. This reduces the entire development time
and lowers costs.
2. Quicker Software Development: Unit tests help engineers find and quickly fix errors,
which speeds up software development. Unit testing also aids in the early detection of
flaws, which makes it simpler to fix problems before they worsen.
3. Higher Quality Code: Unit testing helps guarantee that code is of a high standard and
complies with the specifications of the software. Early bug detection allows engineers to
create more dependable, scalable, and effective code.
4. Better Team Communication: Unit testing gives team members a clear and concise way
to discuss the code, which enhances team communication. Because developers can easily
understand what is expected of their code, they can cooperate to make sure their code
complies with the criteria.
5. Code Reusability: Unit testing can assist in locating code that is applicable to different
areas of the programme. Developers can increase the code's modularity and make it
simpler to maintain and modify in the future by spotting these code snippets early on.
6. Better Documentation: Unit tests act as documentation that shows how the code is
supposed to operate. Developers can use these tests as a guide for understanding the
code, which can help prevent misunderstandings and confusion.
Overall, unit testing is a crucial part of creating modern software. Unit testing can save time and
money while ensuring that software satisfies the needs of the end user by detecting errors early,
guaranteeing code quality, enhancing collaboration, and lowering technical debt.
While unit testing offers many benefits to software development, here are some potential
disadvantages that should be considered. Here are some of the key disadvantages of unit testing:
1. Time Consuming: Unit testing can take a lot of time, particularly in complicated, large-
scale projects. Unit test creation, execution, and maintenance can be labour-intensive and
extend development time.
2. Increased Code Complexity: Unit testing might result in increased code complexity
since developers must add more code to support test scenarios. For individuals who are
unfamiliar with the project, in particular, this can make the code more difficult to read
and comprehend.
3. False Sense of Security: Passing unit tests simply validates the functionality of the tested
unit; it does not take into account how the tested unit interacts with other components of
the system. An issue in production may arise if a unit passes all tests but fails in the larger
system.
4. Maintenance Challenges: Maintaining unit tests can be difficult, particularly when code
modifications happen often. To keep the tests relevant, developers must update them,
which can be time-consuming and challenging.
5. Limitations on Test Coverage: It could be challenging to obtain 100% test coverage,
particularly in complex systems with lots of interdependent components. The lack of
testing in some areas of the code can cause problems in the production environment.
6. Cost: Putting in place a thorough unit testing approach may call for more resources and
raise the price of software development.
Reference or define anomalies in the flow of the data are detected at the time of associations
between values and variables. These anomalies are:
A variable is defined but not used or referenced,
A variable is used but never defined,
A variable is defined twice before it is used
Advantages of Data Flow Testing:
Data Flow Testing is used to find the following issues-
To find a variable that is used but never defined,
To find a variable that is defined but never used,
To find a variable that is defined multiple times before it is used,
To find a variable that is deallocated before it is used.
Disadvantages of Data Flow Testing
Time consuming and costly process
Requires knowledge of programming languages
Example:
1. read x, y;
2. if(x>y)
3. a = x+1
else
4. a = y-1
5. print a;
Def/use table for the above example (numbers refer to the code lines):
Variable   Defined at line   Used at line
x          1                 2, 3
y          1                 2, 4
a          3, 4              5
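For comparison, a short runnable sketch of the "defined but never used" anomaly that data flow testing is designed to catch (the function is illustrative):

```python
# Illustrative data flow anomaly: 'unused' is defined but never referenced.
# Data flow testing would flag this def-without-use association.

def compute(x, y):
    unused = x * 2          # anomaly: defined, then never used
    if x > y:
        a = x + 1           # 'a' defined here...
    else:
        a = y - 1           # ...or here
    return a                # ...and used here

print(compute(5, 3))  # 6
```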
Major States of SIT: There are three major states of system integration testing:
1. Data state within the integration layer: The integration layer is the medium used for
data transformation. Different web services are involved in this layer, which serves as the
medium for sending and receiving data. There are several checkpoints where data is
validated, and several protocols are used in it. Middleware is also used as a medium for
transformation, which allows data mapping and cross-checking.
2. Data state within the database layer: The database layer involves several steps. It
checks whether the data has been transferred from the integration layer to the database
layer. Data properties are checked and the data validation process is performed. Mainly,
SQL is used for data storage and data manipulation.
3. Data state within the application layer: The application layer is used to create a data
map for the databases and to check its interaction with the user interface. Data properties
are also checked here.
System Integration Testing (SIT) is a type of testing that focuses on verifying the interactions
and interfaces between different systems or components of a software application. The purpose
of SIT is to ensure that the different components are integrated and work together correctly as a
system.
SIT is typically performed after unit testing and integration testing, and before user acceptance
testing (UAT). SIT tests the end-to-end flow of data and transactions through the system,
including the interfaces between the components, the communication protocols, the data
transfer mechanisms, and the shared resources.
During SIT, various test scenarios are executed to ensure that the components of the system
interact with each other seamlessly and correctly, and that they conform to the functional and
non-functional requirements of the system. The testing team may use automated testing tools
and techniques to simulate various real-world scenarios and edge cases.
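One such scenario can be sketched in miniature; the `OrderService` and `InventoryStore` components below are hypothetical stand-ins for two real systems exercised through their interface:

```python
# Illustrative SIT-style scenario: two hypothetical components tested
# through their interface rather than in isolation.

class InventoryStore:
    def __init__(self):
        self.stock = {"widget": 2}

    def reserve(self, item):
        if self.stock.get(item, 0) > 0:
            self.stock[item] -= 1
            return True
        return False

class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory  # the interface under test

    def place_order(self, item):
        return "confirmed" if self.inventory.reserve(item) else "rejected"

# End-to-end flow through both components, including an edge case:
service = OrderService(InventoryStore())
assert service.place_order("widget") == "confirmed"
assert service.place_order("widget") == "confirmed"
assert service.place_order("widget") == "rejected"  # stock exhausted
print("integration scenario passed")
```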
The test environment for SIT should closely resemble the production environment, including
all the hardware, software, and networking components, to ensure that the system behaves as
expected in the production environment. Any defects or issues that are discovered during SIT
are logged and tracked, and are usually fixed before the system is deployed for user acceptance
testing or production.
In summary, SIT is an important testing phase in the software development lifecycle that
ensures that the different components of a system are integrated and work together correctly.
SIT helps to identify and resolve any issues or defects before the system is deployed for user
acceptance testing or production, thereby reducing the risk of costly and time-consuming
rework.
1. Ensures system functionality: SIT verifies that the different components of the system
are integrated and work together correctly, ensuring that the overall system meets the
functional and non-functional requirements.
2. Reduces risk: SIT helps to identify and resolve any issues or defects before the system
is deployed for user acceptance testing or production, reducing the risk of costly and time-
consuming rework.
3. Improves system quality: By verifying the interactions and interfaces between different
systems or components, SIT helps to improve the quality of the system and ensures that it
performs as expected.
4. Enhances system performance: SIT helps to identify and address any performance
issues that may arise due to the interactions and interfaces between different components of
the system.
5. Increases team collaboration: SIT requires close collaboration between the different
teams responsible for developing and testing the system, enhancing team collaboration and
communication.
6. Supports agile development: SIT is an essential testing phase in agile development
methodologies, helping to ensure that the system is tested comprehensively and meets the
specified requirements.
Unit-II
Answer all the questions
Part-A
1. What is a test plan?
A software test plan is a document that describes the scope, approach, resources,
schedule, and activities of testing a software product. It is the basis for formally testing any
software in a project and serves as a blueprint for the test manager. A software test plan
should include the following elements:
Software background information
A test strategy document is a high-level document used to define the test types or levels
to be executed for the product and to specify the testing approach for the Software
Development Life Cycle.
Once the test strategy has been written, it is not normally modified; it is approved by the
Project Manager and the development team.
The test environment is used by the testing teams to check the quality and impact of the
application before handing it to the user. It can test a specific part of an application using
different configurations and data setups. It’s an essential part of the agile development
methodology.
Test automation is the process of using automation tools to maintain test data, execute tests, and
analyze test results to improve software quality.
Automated testing is also called test automation or automated QA testing. When executed well, it
relieves much of the manual requirements of the testing lifecycle.
20. What are approvals?
Identify the stakeholders who must review and approve the test plan before testing can
commence. Include their names and signatures.
Part-B
1. Elaborately explain a software test plan.
The software testing process is a crucial stage in the development of a solid and powerful
application. Documentation plays a critical role in achieving effective software testing. It makes
the testing process easy and organized, also saves company money and time spent on that
software project. With proper documentation it is easy for the client to review the software
process. In this article, we will discuss a type of software documentation, test plan in software
testing.
Listed below are the topics covered in this article:
Test Plan
Test Case
Test Scenario
Traceability Matrix
Moving further with this article on ‘Test Plan in Software Testing’ let’s learn more about test plan
in particular.
What is Test Plan in Software Testing?
A test plan in software testing is a document which outlines the what, when, how, who, and more
of a testing project. It contains the details of what the scope of testing is, what the test items are,
who will do which testing task, what the items test/pass criteria will be, and what is needed to set
up the test environment and much more.
Planning is the first step of the software testing process. A test plan document outlines the
planning for the entire test process. It has the guidelines for the testing process such as approach,
testing tasks, environment needs, resource requirements, schedule, and constraints. It explains
the full process of what you’re going to do to put the software through its paces, in a step-by-step
format. In software testing project, when you have a plan in place, chances are it will go
smoother. But, why is it required to write a test plan?
What are the Benefits of Test Plan?
Value of writing a test plan is tremendous. It offers a lot of benefits like:
It serves as a roadmap to the testing process to ensure your testing project is successful
and helps you control risk.
Proper test plan provides a schedule for testing activities, so, you can have a rough
estimate of time and effort needed to complete the software project
It clearly defines the roles and responsibilities of every team member, outlines the
resource requirements which are essential to carry out the testing process
Planning and a test plan encourages better communication with other project team
members, testers, peers, managers, and other stakeholders
Helps people outside the test team such as developers, business managers, customers
properly understand the details of testing
Even though writing a test plan has a lot of advantages, testers may choose not to write
one, citing reasons like time constraints, difficulty, and redundancy.
So, what happens when one doesn’t have a test plan?
So, it’s unquestionable that writing a test plan has a lot of pros than cons. Now you already know
that making a test plan is the most important task of Test Management Process. So how do you
write a test plan in software testing?
How to Write a Good Test Plan?
You can follow these six steps to devise an efficient test plan:
With the knowledge of testing strategy and scope in hand, your next step is to develop a schedule
for testing. Creating a schedule helps you control the progress of the testing process. While
drawing up a schedule, you should consider factors like:
Project estimate
Project risk estimate
Resource estimate
Employee roles & responsibilities
Test activity deadlines
Well, you can follow these simple steps to prepare a test plan. With that said, what do you
actually include in the plan? Different people may come up with different sections to be included
in a test plan. But who will decide what is the right format?
Test Plan Template
IEEE is an international institution that defines globally recognized standards and
template documents. It has defined the IEEE 829 standard for system and software
documentation. This IEEE 829 standard specifies the format of a set of documents that are
required at each stage of software and system testing. The list below presents the test plan
parameters according to the IEEE 829 standard test plan template.
Scope: Details the objectives of the particular project. Also, it details user scenarios to be
used in tests. The scope can specify scenarios or issues the project will not cover if necessary.
Schedule: Details start dates and deadlines for testers to deliver results.
Resource Allocation: Details which tester will work on which test.
Environment: Details the test environment‘s nature, configuration, and availability.
Tools: Details what tools will be used for testing, bug reporting, and other relevant
activities.
Defect Management: Details how bugs will be reported, to whom, and what each bug
report needs to be accompanied by. For example, should bugs be reported with screenshots, text
logs, or videos of their occurrence in the code?
Risk Management: Details what risks may occur during software testing and what risks
the software itself may suffer if released without sufficient testing.
Exit Parameters: Details when testing activities must stop. This part describes the
expected results from the QA operations, giving testers a benchmark to compare actual results.
Examinations: Examinations are a review to find small hidden bugs that might be
overlooked when the other reviews happen. The review especially targets small mistakes.
Peer Review: These reviews are done by your colleagues. This is an informal review.
Software testing refers to the process of verifying and evaluating the function of a software
application or product. It’s used to reduce or eliminate bugs and minimize the amount of money
a company must invest in addressing issues and releasing updates.
In some cases, Software Testers are called in to improve a program’s performance — even if it
doesn’t have any noticeable bugs. In short, Software Testers are crucial because they help
optimize software, profit, and processes.
Software testing is important because the impact of untested or underperforming software can
have a trickle-down or domino effect on thousands of users and employees.
For example, if a web application that sells a product works too slowly, customers may get
impatient and buy a similar product elsewhere. Or, if a database within an application outputs the
wrong information for a search query, people may lose trust in the website or company in
general.
Software Testers help prevent these kinds of corporate faux pas. Plus, software testing can help
ensure the safety of users or those impacted by its use, particularly if an application is used to run
a critical element of a town or city’s infrastructure.
In the software engineering process, testing is a key element of the development lifecycle. In a
waterfall development system, Software Testers may be called in after an application has been
created to see if it has any bugs and how it performs. The Testers’ feedback is critical to the
process because it helps engineers fine-tune the end product.
For example, if a web app needs to integrate well with mobile devices, one group of Software
Testers may focus on the app’s performance on iOS and Android devices, while another group of
Testers checks how it performs on macOS or Windows.
Similarly, granular elements of an application can be run through tests. This can include how
well it processes information from interactive databases or the flow and feel of the user interface.
The input from Testers can make it easier and faster to fine-tune key elements of an application’s
performance, particularly from the perspective of an end-user.
There are several types of software testing, each requiring varying degrees of specificity. Here’s
a list of some of the most common:
Usability testing
Usability testing involves figuring out how well the system works when a customer uses it for a
specific task. A usability test can be performed on one or a combination of tasks to see how the
programming functions in different scenarios.
Acceptance testing
Acceptance testing involves checking to make sure the system works as it’s supposed to. While
this may involve a general test of several functions, it can also focus on a specific set, especially
if one type of user tends to use the software in a particular fashion.
For instance, imagine an app is used by several people working in a factory. It might include a
feature that aligns existing inventory with customer orders, pointing out any discrepancies. It
also might show an item’s status in the manufacturing process, including its current station or
even who’s working on it. One acceptance test can be done for each of these functions.
Regression testing
Regression testing is meant to assess the impact of new features that get added to an application.
At times, a new feature may interfere with one that’s already proven effective.
This kind of feedback can help engineers adjust how each feature interacts with the program’s
dependencies — or decide which features to alter.
Integration testing
Integration testing aims to figure out how well different components of the app work with each
other. Each element of an app requires different resources, and sometimes they can compete with
each other in ways that hurt functionality. Integration testing can reveal these kinds of
weaknesses.
Unit testing
A unit refers to the smallest component of an application that can be tested. Unit testing attempts
to see how different components perform in isolation. This gives engineers a view into how well
their code executes from a specific, granular perspective.
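As a minimal illustration, a unit test written with Python's standard unittest module might look like this (the add function is a made-up unit for the example, not part of any project discussed here):

```python
import unittest

def add(a, b):
    """A hypothetical unit under test: the smallest testable component."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run
    unittest.main(argv=["tests"], exit=False)
```

Each test method exercises one specific behavior of the unit in isolation, which is exactly the granular perspective unit testing is meant to give engineers.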
Functional testing
Functional testing brings real-world scenarios into the mix. Through functional testing, engineers
can see how software accomplishes specific, intended purposes.
For instance, an app may be designed to integrate a customer relationship management (CRM)
solution with an email system. In this case, functional testing may be used to see:
If the email application is opened when an employee clicks on someone’s email address
in the CRM.
Which app it defaults to.
If the “To” field is automatically populated.
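The third check above can be sketched as a functional test. All names here (Contact, compose_email) are hypothetical stand-ins; a real test would drive the actual CRM UI or API:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    email: str

def compose_email(contact):
    """Hypothetical stand-in for the CRM's click-to-email action:
    returns the draft that the email client would open."""
    return {"to": contact.email, "subject": "", "body": ""}

def test_to_field_is_populated():
    contact = Contact(name="Ada Lovelace", email="ada@example.com")
    draft = compose_email(contact)
    # The "To" field should be populated automatically from the CRM record.
    assert draft["to"] == "ada@example.com"

test_to_field_is_populated()
```

Note that the test is phrased in terms of the real-world scenario (clicking a contact's email address) rather than any internal implementation detail, which is what distinguishes functional testing.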
Stress testing
Stress testing is in some ways the opposite of functional testing. During a stress test, your only
job is to figure out if and how the app breaks when put under stress. In most situations, a stress
test will not imitate a real-world scenario, as is the case with functional testing.
Performance testing
Performance testing is similar to stress testing, but your objective is to see how much load the
app can take in a real-world scenario. Like stress testing, if the app were to malfunction, this
would provide valuable data to the dev team.
For example, a team may run performance testing on how well a shopping cart functions during
a peak buying season, such as during the holidays. They could simulate many user requests for
purchases simultaneously and observe how the app handles them.
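A load simulation of this kind can be sketched with Python's standard library. The checkout function below is a hypothetical stand-in for the shopping-cart endpoint; a real performance test would issue HTTP requests against a staging environment:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def checkout(order_id):
    """Hypothetical stand-in for one purchase request."""
    time.sleep(0.01)  # simulate processing latency
    return order_id

def simulate_peak_load(num_requests=100, concurrency=20):
    """Fire many simultaneous 'purchases' and measure elapsed time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(checkout, range(num_requests)))
    elapsed = time.perf_counter() - start
    return len(results), elapsed

if __name__ == "__main__":
    completed, elapsed = simulate_peak_load()
    print(f"{completed} requests completed in {elapsed:.2f}s")
```

Varying num_requests and concurrency lets the team observe how throughput degrades as load approaches and then exceeds the expected real-world peak.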
Regardless of the kind of testing performed, the development team will first establish a base set
of requirements. Outlining the essential functions the application has to perform in any given
situation — as well as the parameters that are considered “acceptable” — provides important
benchmarks for evaluation. This is a key element to any testing strategy.
There are also two specific techniques used to assess the stability and performance of software:
black-box and white-box testing. Each offers a different perspective into how well the coding
holds up.
Black-box testing: This involves testing software without looking at what’s inside — the
coding, systems, and dependencies.
White-box testing: With white-box testing, the aim is to examine the structure within the
application, looking at the inner workings of the app, as opposed to how it functions overall.
Even if you don’t write any code as a Software Tester, in many cases, you still have to be able to
read it. As a Software Tester, your job involves more than clicking buttons and tabs. You have to
be able to examine code and look for potential issues or see what may have caused an error or
malfunction.
Here are some languages you should learn to maximize your job prospects as a Software Tester:
Java. You can learn Java with our Learn Java course. You can also get into the nitty-gritty
of Java development with our Build Basic Android Apps with Java Skill Path.
C#. Using our Learn C# course, you can get familiar with the popular Microsoft language
and how it’s used to create sites, apps, and games.
Python. Using our Learn Python 3 course, you can see how many of the world’s most
popular apps run, making it easier to bring value to the table as a Tester.
Ruby. Ruby is a popular, user-friendly language. You can get comfortable with it using
our Learn Ruby course. You can also dig into the Ruby on Rails framework with courses
like Learn Ruby on Rails and Learn Authentication with Ruby on Rails.
Regardless of the kind of Software Tester you want to be, we can provide you with the
foundational knowledge you’ll need to help dev teams meet their goals. In this way, you’ll play a
crucial role in the development process, helping create usable, effective products for end-users.
should be compiled and run. It involves working with the software by supplying input values and
checking whether the output is as expected, executing particular test cases either manually or
through an automated process. Of the two V's, Verification and Validation, Validation is
Dynamic Testing.
Levels of Dynamic Testing
There are several levels of dynamic testing that are commonly used in the software
development process, including:
1. Unit testing: Unit testing is the process of testing individual software components or
“units” of code to ensure that they are working as intended. Unit tests are typically small
and focus on testing a specific feature or behavior of the software.
2. Integration testing: Integration testing is the process of testing how different
components of the software work together. This level of testing typically involves testing
the interactions between different units of code, and how they function when integrated
into the overall system.
3. System testing: System testing is the process of testing the entire software system to
ensure that it meets the specified requirements and is working as intended. This level of
testing typically involves testing the software’s functionality, performance, and usability.
4. Acceptance testing: Acceptance testing is the final stage of dynamic testing, which is
done to ensure that the software meets the needs of the end-users and is ready for release.
This level of testing typically involves testing the software’s functionality and usability
from the perspective of the end-user.
5. Performance testing: Performance testing is a type of dynamic testing that is focused on
evaluating the performance of a software system under a specific workload. This can
include testing how the system behaves under heavy loads, how it handles a large number
of users, and how it responds to different inputs and conditions.
6. Security testing: Security testing is a type of dynamic testing that is focused on
identifying and evaluating the security risks associated with a software system. This can
include testing how the system responds to different types of security threats, such as
hacking attempts, and evaluating the effectiveness of the system’s security features.
IMPORTANT POINTS:
Some important points to keep in mind when performing dynamic testing include:
1. Defining clear and comprehensive test cases: It is important to have a clear set of test
cases that cover a wide range of inputs and use cases. This will help to ensure that the
software is thoroughly tested and any issues are identified and addressed.
2. Automation: Automated testing tools can be used to quickly and efficiently execute test
cases, making it easier to identify and fix any issues that are found.
3. Performance testing: It’s important to evaluate the software’s performance under
different loads and conditions to ensure that it can handle the expected usage and the
expected number of users.
4. Security testing: It is important to identify and evaluate the security risks associated
with a software system, and to ensure that the system is able to withstand different types of
security threats.
5. Defect tracking: A defect tracking system should be implemented to keep track of any
issues that are identified during dynamic testing, and to ensure that they are addressed and
resolved in a timely manner.
6. Regular testing: It’s important to regularly perform dynamic testing throughout the
software development process, to ensure that any issues are identified and addressed as
soon as they arise.
7. Test-Driven Development: It’s important to design and implement test cases before the
actual development starts; this approach ensures that the software meets the requirements
and is thoroughly tested.
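The test-first idea in the last point can be shown in miniature. The test below is written first, encoding the requirement, and fails until the (hypothetical) is_valid_pin function is implemented to satisfy it:

```python
# Step 1: write the test first. It encodes the requirement
# "a PIN is exactly four digits" before any code exists.
def test_pin_must_be_four_digits():
    assert is_valid_pin("1234") is True
    assert is_valid_pin("12a4") is False
    assert is_valid_pin("12345") is False

# Step 2: implement just enough code to make the test pass.
def is_valid_pin(pin):
    return len(pin) == 4 and pin.isdigit()

test_pin_must_be_four_digits()
```
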
Spiral Model:
As the name implies, the spiral model follows an approach in which there are a number of cycles
(or spirals) of all the sequential steps of the waterfall model. Starting at the center, each turn
around the spiral goes through several task regions. Once the initial cycle is completed, a
thorough analysis and review of the achieved product or output is performed. If it is not as per
the specified requirements or expected standards, a second cycle follows, and so on. This
methodology follows an iterative approach and is generally suited for large projects with
complex and constantly changing requirements.
The methodology provides a framework for testing in this environment. The major steps include
information gathering, test planning, test design, test development, test execution/evaluation, and
preparing for the next spiral. It includes a set of tasks associated with each step or a checklist
from which the testing organization can choose based on its needs. The spiral approach flushes
out the system functionality. When this has been completed, it also provides for classical system
testing, acceptance testing, and summary reports.
The spiral methodology is generally a guideline system for solving a problem, with specific
components such as phases, tasks, methods, techniques, and tools. The spiral model combines the
idea of iterative development (prototyping) with the systematic, controlled aspects of the
waterfall model. The spiral methodology is the harder choice to plan and budget because of the
uncertain nature of how many iterations it will take.
Test Plan
A Test Plan is a detailed document that describes the test strategy, objectives, schedule,
estimation, deliverables, and resources required to perform testing for a software product. Test
Plan helps us determine the effort needed to validate the quality of the application under test. The
test plan serves as a blueprint to conduct software testing activities as a defined process, which is
minutely monitored and controlled by the test manager.
As per the ISTQB definition, a Test Plan is “a document describing the scope, approach,
resources, and schedule of intended test activities.”
Let’s start with the following Test Plan scenario: in a meeting, you want to discuss the Test
Plan with the team members, but they are not interested.
In such a case, what will you do? The answer is to show the team why the Test Plan matters;
among other things, it can:
Help people outside the test team such as developers, business managers,
customers understand the details of testing.
Test Plan guides our thinking. It is like a rule book, which needs to be followed.
Important aspects like test estimation, test scope, Test Strategy are documented in Test
Plan, so it can be reviewed by Management Team and re-used for other projects.
How can you test a product without any information about it? The answer: it is impossible. You
must learn a product thoroughly before testing it.
The product under test is the Guru99 banking website. You should research the clients and the
end users to learn their needs and expectations from the application.
Now let’s apply the above knowledge to a real product: analyze the banking
website http://demo.guru99.com/V4.
You should take a look around this website and also review product documentation. Review of
product documentation helps you to understand all the features of the website as well as how to
use it. If you are unclear on any items, you might interview customer, developer, designer to get
more information.
Back to your project, you need to develop Test Strategy for testing that banking website. You
should follow steps below
The components of the system to be tested (hardware, software, middleware, etc.) are
defined as “in scope“
The components of the system that will not be tested also need to be clearly defined as
being “out of scope.”
Defining the scope of your testing project is very important for all stakeholders. A precise scope
helps you:
Give everyone confident and accurate information about the testing you are doing
Ensure all project members have a clear understanding of what is tested and what is not
Now you should clearly define the “in scope” and “out of scope” items of the testing.
As per the software requirement specs, the project Guru99 Bank focuses only on testing all
the functions and the external interface of the Guru99 Bank website (in-scope testing)
Nonfunctional testing such as stress, performance, or logical database testing will not
currently be tested. (out of scope)
Problem Scenario
The customer wants you to test his API, but the project budget does not permit this. In such
a case, what will you do?
Well, in such a case you need to convince the customer that API Testing is extra work and will
consume significant resources. Give him data supporting your facts: tell him that if API Testing
is included in scope, the budget will increase by XYZ amount.
The customer agrees, and accordingly the new in-scope and out-of-scope items are
Now let’s practice with your project. The product you want to test is a banking website.
B) API Testing
C) Integration Testing
D) System Testing
E) Install/Uninstall Testing
F) Agile testing
In your project, the member who will be in charge of test execution is the tester. Based on
the project budget, you can choose an in-source or outsourced member as the tester.
When will the test occur?
Test activities must be matched with associated development activities.
You will start to test when you have all required items shown in following figure
1. List all the software features (functionality, performance, GUI…) which may need to be tested.
2. Define the target or the goal of the test based on the above features.
Let’s apply these steps to find the test objective of your Guru99 Bank testing project.
You can choose the ‘TOP-DOWN’ method to find the website’s features which may need to be
tested. In this method, you break down the application under test into components and
sub-components.
In the previous topic, you already analyzed the requirement specs and walked through the
website, so you can create a Mind-Map of the website features as follows.
This figure shows all the features which the Guru99 website may have.
Based on the above features, you can define the Test Objective of the Guru99 project as follows:
Suspension Criteria
Specify the critical suspension criteria for a test. If the suspension criteria are met during testing,
the active test cycle will be suspended until the criteria are resolved.
Test Plan Example: If your team members report that 40% of the test cases have failed, you
should suspend testing until the development team fixes all the failed cases.
Exit Criteria
It specifies the criteria that denote a successful completion of a test phase. The exit criteria are
the targeted results of the test and are necessary before proceeding to the next phase of
development. Example: 95% of all critical test cases must pass.
Some methods of defining exit criteria are by specifying a targeted run rate and pass rate.
Run rate is the ratio between the number of test cases executed and the total test cases in
the test specification. For example, if the test specification has 120 TCs in total but the
tester only executed 100 TCs, the run rate is 100/120 = 0.83 (83%)
Pass rate is the ratio between the number of test cases passed and the test cases executed.
For example, of the 100 TCs executed above, 80 TCs passed, so the pass rate is 80/100 = 0.8
(80%)
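These two ratios can be computed directly; the sketch below uses the same figures as the examples above (120 TCs specified, 100 executed, 80 passed):

```python
def run_rate(executed, total):
    """Ratio of test cases executed to total test cases in the spec."""
    return executed / total

def pass_rate(passed, executed):
    """Ratio of test cases passed to test cases executed."""
    return passed / executed

print(f"Run rate:  {run_rate(100, 120):.0%}")   # 83%
print(f"Pass rate: {pass_rate(80, 100):.0%}")   # 80%
```
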
Test Plan Example: Your team has already done the test executions. They report the test results
to you, and they want you to confirm the Exit Criteria.
In the above case, the mandatory Run rate is 100%, but the test team only completed 90% of the
test cases. The Run rate is therefore not satisfied, so do NOT confirm the Exit Criteria.
Human Resource
The following table represents various members in your project team
No. Member Tasks
You should ask the developer some questions to understand the web application under
test clearly. Here are some recommended questions; of course, you can ask other questions if
you need to.
What is the maximum user connection which this website can handle at the same time?
What are hardware/software requirements to install this website?
Does the user’s computer need any particular setting to browse the website?
Employee and project deadline: the working days, the project deadline, and resource
availability are factors which affect the schedule
Project estimation: based on the estimation, the Test Manager knows how long it will take
to complete the project, so he can make an appropriate project schedule
Project Risk: understanding the risks helps the Test Manager add enough extra time to the
project schedule to deal with them
Test Scripts
Simulators.
Test Data
Test Traceability Matrix
Error logs and execution logs.
Test Results/reports
Defect Report
Installation/ Test procedures guidelines
Release notes
Test coverage can be achieved by exercising static review techniques like peer reviews,
inspections, and walkthroughs
By transforming ad-hoc defects into executable test cases
At the code level or unit test level, test coverage can be achieved by using automated
code coverage or unit test coverage tools
Functional test coverage can be achieved with the help of proper test management tools
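To make code-level coverage concrete, here is a minimal, illustrative line-coverage tracer built on Python's standard sys.settrace hook. Real projects should use a dedicated tool such as coverage.py; this sketch only shows the underlying idea of recording which lines execute:

```python
import sys

def trace_lines(func, *args):
    """Run func and record which of its source lines execute."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):  # hypothetical unit under test
    if n < 0:
        return "negative"
    return "non-negative"

# Calling with only a positive number leaves the "negative" branch uncovered:
covered = trace_lines(classify, 5)
print(f"lines executed: {len(covered)}")  # the 'return "negative"' line was never hit
```

Running the function with both a positive and a negative argument executes all three statement lines, which is the intuition behind driving coverage up by adding test inputs.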
What Are Main Differences Between Code Coverage And Test Coverage?
Code coverage and test coverage are measurement techniques which allow you to assess the
quality of your application code.
Here are some critical differences between both of these coverage methods:
Parameters Code Coverage Test Coverage
Example 1:
For example, suppose a “knife” is an item that you want to test. Then you need to focus on
checking whether it cuts vegetables or fruits accurately. However, there are other aspects to look
for, like whether the user can handle it comfortably.
Example 2:
For example, if you want to check the Notepad application, then checking its essential features
is a must. However, you also need to cover other aspects: that the application responds as
expected while other applications are in use, that the user understands how to use the
application, that it does not crash when the user tries to do something unusual, etc.
Most of the tasks in test coverage are manual, as there are no tools to automate them.
Therefore, it takes a lot of effort to analyze the requirements and create test cases.
Test coverage allows you to count features and then measure them against several tests.
However, there is always space for judgment errors.
Test Evaluation Report (TER) is a document that contains a summary of all the testing activities,
methods used for testing, and a summary of the final test results of a Software project. TER is
prepared after the completion of testing and the Test Summary Report and provides all the
necessary information regarding software testing to the developers and the key stakeholders.
These stakeholders can then evaluate the quality of the tested product and make a decision on the
software release.
Conducts analysis and assessment of Test Summary Report (TSR), source codes, test
results, and the measures used for product testing.
Enables objective evaluation and assessment of product quality.
Consists of corresponding recommendations that may be required for the next testing
efforts.
Validates that no bug or error was missed by the tester.
Test Evaluation Reports are essential for making sure that the product under development
is achieving an acceptable level of quality before it is released to the market.
Stakeholders and customers can take corrective actions if needed for future development
processes.
Done right, this can add true value to your development lifecycle by providing the right
feedback at the right time.
An effective Test Evaluation Report should contain the following components:
1. Project Information
All the information regarding the project and the customer, such as Project Name, Customer
Name, and Project No. etc., is mentioned under this section. For a Change Request (CR), the CR
number can be mentioned as well.
2. Introduction
The introduction section can consist of the following:
Purpose: This describes the purpose of the Test Evaluation in terms of test coverage and
defect analysis.
Scope: This describes the scope of this document; associated project(s), and other items
of interest to the test team.
Definitions: This can contain definitions, abbreviations, and acronyms required to
interpret the Test Evaluation document.
References: This identifies the documents referenced in the Test Evaluation Summary by
title, report number, date, and author.
Overview: This describes the rest of the Test Evaluation document and how the
document is organized.
3. Test Results
Test Results are summarized in this section. Test Results are generally the outcome of the whole
process of the Software Testing Life Cycle. The produced results offer an insight into the
deliverables of a software project which represents the project status to its stakeholders.
4. Test Coverage
Test Coverage is covered in this section which includes both the Requirements-based Test
Coverage and the Code-based Test Coverage. Test Results of both the coverages are mentioned
here and are compared with the previous test results.
Read More: Test Coverage Techniques Every Tester Must Know
5. Recommendations
This section identifies any suggested actions that need to be made based on the evaluation of the
test results. These recommendations help the developers/stakeholders to understand and work
accordingly for the next phase in the development life cycle.
6. Diagrams and Graphs
Under this section, diagrams, charts, graphs, or other data visualization of the test results can be
added. This helps in better debugging and root cause analysis
Example: a sample Test Evaluation Report follows the sections outlined above: Introduction
(with Purpose, Scope, References, and Overview), Test Results, Recommendations, and
Diagrams.
Concise and Clear: The information captured in the test report should be short, clear,
and easy to understand.
Detailed: The report should provide detailed information about the testing activities
whenever and wherever necessary. The information provided should not be abstract as it won’t
help the stakeholders in drawing a clear picture of it.
Standard: The report should follow a standard template as it is easy for stakeholders to
review and understand.
Specific: The report should describe and summarize the test result specification and focus
on the main point only.
On a closing note
Software Testing is extremely crucial nowadays and is done by all businesses. It not only
validates the product quality, but also makes sure that the customer demands are fulfilled.
Therefore, a Test Evaluation Report, provided after the Test Summary Report is completed, is
very important.
The goal of this report is to deliver to the stakeholders a detailed evaluation and assessment of
the test results and the methods used for testing. The information collected here is presented to
the customer with an evaluation from the testing team, which indicates their product assessment
against the evaluation mission.
Tools like BrowserStack allow you to test on real devices and browsers, while also providing
text logs, console logs, video logs, and screenshots, and letting you share those on Slack,
GitHub, Jira, and Trello for better defect management within the team. This helps in collecting
all the necessary information that can be used to create a comprehensive Test Evaluation Report.
Acceptance testing is a quality assurance (QA) process that determines to what degree
an application meets end users' approval. Depending on the organization, acceptance testing
might take the form of beta testing, application testing, field testing or end-user testing.
A QA team conducts acceptance tests to ensure the software or app matches business
requirements and end-user needs. An acceptance test returns either a pass or fail result. A fail
suggests that there is a flaw present, and the software should not go into production.
Acceptance testing enables an organization to engage end users in the testing process and gather
their feedback to relay to developers. This feedback helps QA identify flaws that it might have
missed during the development stage tests, such as unit and functional testing. Additionally,
acceptance testing helps developers understand business needs for each function in the tested
software. Acceptance testing can also help ensure the software or application
meets compliance guidelines.
Acceptance testing occurs after system tests, but before deployment. A QA team writes
acceptance tests and sets them up to examine how the software acts in a simulated production
environment. Acceptance testing confirms the software's stability and checks for flaws.
Acceptance testing includes the following phases: plan, test, record, compare and result.
Once the test is written according to the plan, end users interact with the software to gauge its
usability. The software should meet expectations, as defined by the business in the requirements.
When the tests return results, IT should report and fix any flaws that show up. If the results
match the acceptance criteria for each test case, the test will pass. But, if test cases exceed an
unacceptable threshold, they will fail.
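The pass/fail decision described above can be sketched as a simple comparison against acceptance criteria. The test-case names and the threshold below are invented for illustration:

```python
def evaluate_acceptance(results, max_failures=0):
    """results maps test-case names to True (pass) / False (fail).
    The cycle fails if failures exceed the acceptable threshold."""
    failures = [name for name, passed in results.items() if not passed]
    verdict = "PASS" if len(failures) <= max_failures else "FAIL"
    return verdict, failures

# Hypothetical acceptance tests for the factory app described earlier:
results = {
    "align_inventory_with_orders": True,
    "show_item_station": True,
    "show_current_operator": False,
}
verdict, failures = evaluate_acceptance(results, max_failures=0)
print(verdict, failures)  # FAIL ['show_current_operator']
```
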
Acceptance testing encompasses various types, including user acceptance and operational
acceptance.
How QA differs from UAT
User acceptance testing (UAT), also called end-user testing, assesses if the software operates as
expected by the target base of users. Users could mean internal employees or customers of a
business or another group, depending on the project.
Operational acceptance testing reviews how a software product works. This type of testing
ensures processes operate as expected and that staff can sufficiently use and maintain the system.
Operational acceptance testing examines backups and disaster recovery, as well as
maintainability, failover and security.
Prepare a Test Report, which is a document that contains a summary of all test activities
and final test results of a testing project. It is an assessment of how well the testing is
performed. Based on the test report, stakeholders can evaluate the quality of the tested
product and make a decision on the software release.
Prepare a Test Summary Report, which is a formal document that summarizes the results
of all testing efforts for a particular testing cycle of a project/module or a sub-module.
Generally, test leads or test managers prepare this document at the end of the testing cycle;
some test managers prepare it at the end of the project.
Use metrics to understand the test execution results, the status of test cases & defects, etc.
Required metrics can be added as necessary. Example: Defect Summary-Severity wise;
Defect Distribution-Function/Module wise; Defect Ageing etc.. Charts/Graphs can be
attached for better visual representation. One such metric is the number of test cases planned
vs executed.
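A severity-wise defect summary like the one mentioned can be computed from a defect log in a few lines. The defect records here are invented purely for illustration:

```python
from collections import Counter

# Hypothetical defect log: (defect id, severity, module)
defects = [
    ("D-101", "Critical", "Login"),
    ("D-102", "Major", "Payments"),
    ("D-103", "Minor", "Login"),
    ("D-104", "Major", "Reports"),
    ("D-105", "Major", "Payments"),
]

severity_summary = Counter(sev for _, sev, _ in defects)
module_summary = Counter(mod for _, _, mod in defects)

print("Defect Summary - Severity wise:", dict(severity_summary))
print("Defect Distribution - Module wise:", dict(module_summary))
```
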
Unit-III
Before embarking on the cost- and resource-intensive effort that automation can be, a thorough
evaluation and assessment ensures that maximum value is derived from the effort. Test
automation assessment can be carried out both for existing automation processes and for projects
that want to introduce automation to the process or framework, in order to:
Understand current systems and processes, and evaluate which parts of the testing
process should be automated.
Determine tools that best suit the requirement.
Create the right framework that complements the process flow and your CI/CD pipeline.
Present strategy, recommendations, and implementation plan.
Using a test automation framework has many benefits.
6. What are the advantages enterprises get with Cloud test automation frameworks?
1. Time is money and cloud based testing saves it ...
2. Saving costs by cloud automation tools ...
3. Global access 24*7 to perform automation testing ...
4. Scalability at its best ...
5. Integration options of Cloud based testing tools offer a streamlined and focused
collaboration ...
(ATDD). These methodologies aim to test the code as early as possible and document and
develop the application around the user behavior.
10.What is software testing methodology?
Testing methodologies are specific strategies for testing all of the pieces of your software to
make sure it behaves as expected. These strategies include many ways to test software, such as
unit testing, integration testing, performance testing, and more. In this article, we’ll take a closer
look at testing practices that use a test-first approach to software development.
11.What is assessment model or framework?
Choose a recognized assessment model or framework to guide the assessment process. Common
frameworks include the Test Maturity Model Integration (TMMi), Capability Maturity Model
Integration (CMMI), and ISO/IEC 29119 standards for software testing.
12.What is plan the assessment?
• Develop a detailed plan that outlines the assessment's goals, schedule, resources, and
roles and responsibilities of the assessment team.
• Identify the assessment criteria and the specific practices and processes that will be
evaluated.
13.What is collect data and evidence?
Gather data and evidence related to the organization's testing practices. This may involve
reviewing documentation, conducting interviews, surveys, and observations, and examining
artifacts like test plans and test cases.
14.What is shift left testing?
• Shift-left testing involves moving testing activities earlier in the software development
lifecycle (SDLC), ideally at the requirements and design stages.
• Teams are integrating testing with development processes to identify and address defects
sooner, reducing the cost and effort required to fix issues.
15.What is test automation and continuous testing?
• Test automation continues to be a dominant trend, with organizations investing in test
automation frameworks and tools.
• Continuous Testing practices, integrated with DevOps and CI/CD pipelines, ensure that
testing is performed continuously throughout the development process.
16.What is AI and machine learning in testing?
• AI and ML are increasingly used to enhance software testing. They are applied for test
data generation, test case optimization, anomaly detection, and predictive analytics.
• AI-driven testing tools can help identify areas of the application that are more prone to
defects.
Part-B
1.What are the steps in test process assessment?
The Capability Maturity Model (CMM) and its successor, the Capability Maturity Model
Integration (CMMI), are often regarded as the industry standard for software process
improvement. Although testing often accounts for at least 30-40% of total project costs, only
limited attention is given to testing in software process improvement models such as the CMM
and the CMMI. To overcome this, the testing community has created many complementary
models ([1], [3], [4] and [5]); TMMi is one such model.
The TMMi [1] is a detailed model for test process improvement and is positioned as
complementary to the CMMI. It has a staged architecture for process improvement: it contains
stages or levels through which an organization passes as its testing process evolves from one that
is ad hoc and unmanaged to one that is managed, defined, measured, and optimized. Achieving
each stage ensures that an adequate foundation has been laid for the next stage. The internal
structure of the TMMi is rich in testing practices that can be learned and applied systematically
to support a quality testing process that improves in incremental steps.
There are five levels in the TMMi that prescribe a maturity hierarchy and an evolutionary path to
test process improvement. Each level has a set of process areas that an organization must focus
on to achieve maturity at that level, and each process area has to comply with a set of specific
goals and generic goals. Each specific goal has its own specific practices which, when
implemented, achieve that goal. Generic goals and practices are common to every process area
and cover institutionalizing a managed process and institutionalizing a defined process for each
process area under each maturity level.
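The five TMMi maturity levels described above can be captured as a simple lookup table; the level names follow the TMMi model, while the helper function here is purely illustrative:

```python
# The five TMMi maturity levels, from ad-hoc testing to a fully optimized process.
TMMI_LEVELS = {
    1: "Initial",       # testing is ad hoc and unmanaged
    2: "Managed",       # test policy, planning, and monitoring exist
    3: "Defined",       # test organization, training program, test lifecycle
    4: "Measured",      # test measurement, product quality evaluation
    5: "Optimization",  # defect prevention, test process optimization
}

def next_target_level(current: int) -> str:
    """Return the name of the next maturity level to aim for."""
    return TMMI_LEVELS.get(current + 1, "Already at the highest level")

# An organization with ad-hoc testing (level 1) should target the "Managed" level next.
print(next_target_level(1))
```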
2.What are the major steps of test automation assessment? Explain it.
Automation assessment
Because automation can be a cost- and resource-intensive effort, a thorough evaluation and
assessment before embarking on it ensures that maximum value is derived from the effort. A test
automation assessment can be carried out both for existing automation processes and for projects
that want to introduce automation to their process or framework, in order to:
• Understand current systems and processes, and evaluate which parts of the testing process
should be automated.
• Determine the tools that best suit the requirement.
• Create the right framework that complements the process flow and your CI/CD pipeline.
• Present a strategy, recommendations, and an implementation plan.
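One concrete part of such an assessment is a simple return-on-investment estimate comparing cumulative manual and automated testing costs; a minimal sketch, with all figures hypothetical:

```python
def automation_roi(dev_cost, manual_cost_per_cycle, auto_cost_per_cycle, cycles):
    """Estimate hours saved by automating a suite over a number of test cycles."""
    manual_total = manual_cost_per_cycle * cycles
    auto_total = dev_cost + auto_cost_per_cycle * cycles
    return manual_total - auto_total  # positive => automation pays off

# Hypothetical figures: the framework costs 400 hours to build, each manual
# cycle costs 40 hours, and each automated cycle costs 5 hours to run.
print(automation_roi(400, 40, 5, 20))  # 300 -> worth automating over 20 cycles
print(automation_roi(400, 40, 5, 5))   # -225 -> not worth it for only 5 cycles
```

This mirrors the advice later in this answer: automation only pays off for functionality that is tested many times.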
Automation as a service
Drive seamless integration of domain and functionality tools into various automation layers.
Ensure increased application availability and infrastructure performance. Effortlessly automate
events, processes, tasks, and business functions across all levels: Web, Mobile, and API.
• Seamlessly integrates with popular test management tools and with your CI/CD pipeline.
• Provides business-language scripting capabilities.
• Provides AI/ML capabilities to predict failures and determine the test cases to execute for
each new build.
• Integrates tightly with many tools, including Selenium, Robotium, Calabash, Karate, etc.
• New tech trends: leverages AI/ML utilities that allow for effective test impact analysis
and test selection.
• Integrates with cloud execution tools such as Sauce Labs and BrowserStack.
• Integrated static code analysis to maintain code correctness and code standards.
• Improved visual quality: image-based testing and AI-based UI testing.
• Integrates with CI/CD tools (Jenkins and Bamboo) to enable automated executions.
• Seamlessly integrates with popular test management tools, scheduling, and customizable
reporting.
4. What is the need for a Test Automation Framework for enterprises? Explain it.
Automation Framework is not a single tool or process. It is a customised collection of processes
and tools that work harmoniously together to help automate the testing of any application. Think
of it this way, it has specific features like libraries, reusable modules, and test data.
It helps enterprises to standardise all their test automation assets, regardless of the tools they are
using.
Why do you need a Test Automation Framework?
Test Automation Framework is used in situations where multiple test cycles must be conducted
for a large number of test cases.
• It helps increase the team’s efficiency and speed, also it reduces test maintenance costs
and improves test accuracy.
• It makes your processes and applications easier to test, more readable, scalable,
maintainable, and reusable.
Test automation frameworks are used to run commands and scripts many times with various
builds to verify the output and test applications.
Developing an automation framework takes considerable time, effort, and resources. So, it is
advised not to automate functionality that is used only once; automation should be reserved for
functionality that is exercised multiple times.
So, here the question is:
If the automation frameworks are so time and resource-consuming, why can’t a simple script
work?
And the answer:
Scripts are not appropriate for test cases where you are testing many scenarios. When you create
a test script for every scenario, your application's test suite becomes too big. It also means that if
a property of any object in the application changes, you need to alter all the scripts, which is a
heavy task. Test automation frameworks let you avoid that.
So, for an efficient test automation process, test automation frameworks are vital.
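The maintenance argument above can be seen in a tiny sketch: when locators live in one shared object map, a UI change is fixed in one place instead of in every script (all names and selectors below are hypothetical):

```python
# Shared object repository: every script refers to logical names,
# not raw locators, so a UI change is fixed in exactly one place.
OBJECT_MAP = {
    "login_button": "#btn-login",   # hypothetical CSS selector
    "username_field": "#user",
}

def click(logical_name: str) -> str:
    """Stand-in for a real driver call; returns the locator it would act on."""
    return f"click {OBJECT_MAP[logical_name]}"

# Ten different test scripts can all call click("login_button");
# if the button's id changes, only OBJECT_MAP needs to be edited.
print(click("login_button"))  # click #btn-login
```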
Types of Test Automation Frameworks
Each automation framework has its own architecture, advantages and disadvantages. Below
given are the different types of automation frameworks.
Behaviour-Driven Framework (BDD)
A behaviour-driven development framework is used to create a platform that allows everyone,
testers, developers, business analysts, etc., to engage actively. In addition, it helps to increase
collaboration between the testers and the developers on your project.
• Pros: It doesn’t require the users to be familiar with a programming language. With non-
technical, natural language, test specifications can be created.
• Cons: To work with BDD, sufficient technical skills and prior experience in Test Driven
Development (an iterative approach which is a combination of programming, creation of unit
tests, and refactoring) is required.
In BDD, good communication is crucial between the people who write the feature files and those
who develop the automation code. The coder must interpret these files and their scenarios to
implement them as automation steps. If there is no mutual understanding about the structure and
the approach being used, the scenarios become increasingly difficult to turn into working
automated tests, which can lead to problems.
Another issue is Data-driven testing, the BDD’s support for defining test data makes it easier to
create data-driven automated test scenarios. But, issues arise when trying to execute these tests in
test environments that are never left in a known state. Also, with the execution of the automated
tests, dependencies on external data feeds can often cause issues.
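The Given/When/Then structure that BDD is built around can be sketched in plain Python without any framework (a real project would use a tool such as Cucumber or behave; the login scenario here is invented):

```python
# A minimal Given/When/Then scenario, written as plain step functions.
def given_a_logged_out_user():
    return {"logged_in": False}

def when_the_user_logs_in(context, password_ok=True):
    context["logged_in"] = password_ok
    return context

def then_the_dashboard_is_shown(context):
    assert context["logged_in"], "user should be logged in"
    return "dashboard"

# Scenario: successful login
ctx = given_a_logged_out_user()
ctx = when_the_user_logs_in(ctx)
print(then_the_dashboard_is_shown(ctx))  # dashboard
```

In a real BDD tool, the Given/When/Then sentences live in a natural-language feature file and are bound to step functions like these.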
Modular-Driven Framework
In this, the testers create individual test scripts by dividing the application into multiple modules.
These individual test scripts can be combined to make larger test scripts. It is done by using a
master script to achieve the needed scenarios.
This master script is used to invoke the individual modules to run end-to-end test scenarios. This
framework builds an abstraction layer to protect the master module from any alterations made in
individual tests.
• Pros: Modular-driven frameworks make sure that the division of scripts leads to easier
maintenance and scalability. Thus the testers can write independent test scripts. Also, the changes
made in one module bring no or low impact to the other modules.
• Cons: Modular-driven frameworks take more time to examine the test cases and to
identify reusable flows. It requires coding skills to set up the framework.
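A modular-driven layout can be sketched as independent module scripts combined by a master script (the module names below are hypothetical):

```python
# Each module tests one part of the application independently.
def test_login_module():
    return "login ok"

def test_search_module():
    return "search ok"

def test_checkout_module():
    return "checkout ok"

# The master script composes the modules into an end-to-end scenario;
# a change inside one module does not affect the others.
def master_end_to_end():
    return [test_login_module(), test_search_module(), test_checkout_module()]

print(master_end_to_end())
```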
Data-Driven Framework
Setting up a data-driven framework lets the tester store and pass the input/output parameters to
test scripts from an external source, such as Excel files, text files, SQL tables, CSV files, or
ODBC repositories.
The test scripts are connected to the external data source and, when needed, read from it and
populate the necessary data.
• Pros: As they reduce the number of scripts needed, multiple scenarios can be tested in
less code.
Hard-coding data can be avoided so that changes made to the test scripts do not affect the data
being used and vice versa.
• Cons: You will need a well-versed tester who is proficient in various programming
languages to completely utilise this framework’s design.
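A data-driven setup can be sketched with Python's standard csv module: the script stays fixed while the data source supplies the values (the in-memory data below stands in for an external file, and the login function is a stub):

```python
import csv
import io

# In practice this would be an external .csv or Excel file; here an
# in-memory string stands in for it.
TEST_DATA = """agent_name,password
Jimmy,Mercury
Tina,MERCURY
"""

def login(agent_name: str, password: str) -> bool:
    """Stand-in for driving the real login dialog."""
    return bool(agent_name) and bool(password)

# One login attempt per data row; the script never changes, only the data does.
results = [login(row["agent_name"], row["password"])
           for row in csv.DictReader(io.StringIO(TEST_DATA))]
print(results)  # [True, True]
```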
Keyword-Driven Framework
The Keyword-Driven Framework, also known as table-driven testing, is suitable only for small
projects or applications. The automation test scripts performed are based on keywords stored in
an Excel sheet of the project.
These keywords are part of a script representing the various actions performed to test the GUI of
an application. They can be labeled simply, such as 'click' or 'login', or with more complex labels
such as 'click link' or 'verify link'.
• Pros: For this framework, minimal scripting knowledge is required. The code is reusable
as a single keyword can be used across multiple test scripts.
• Cons: The initial setup cost of the framework is high. It is also time-consuming and
complex. To work with this framework, you need an employee with good test automation skills.
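The keyword table at the heart of this approach can be sketched as rows of (object, keyword, argument) dispatched to action functions (the keywords, objects, and data here are hypothetical):

```python
# Action functions, one per keyword.
def do_click(obj, arg=None):
    return f"clicked {obj}"

def do_input_text(obj, arg):
    return f"typed '{arg}' into {obj}"

# The keyword-to-function dispatch table.
KEYWORDS = {"Click": do_click, "InputText": do_input_text}

# The keyword table, as it might be read from a spreadsheet:
# (object in the application map, keyword, argument)
TABLE = [
    ("AgentName", "InputText", "Jimmy"),
    ("OK",        "Click",     None),
]

# The driver walks the table and dispatches each row to its action.
log = [KEYWORDS[kw](obj, arg) for obj, kw, arg in TABLE]
print(log)
```

Non-programmers can then add test steps by editing the table, without touching the action code.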
Hybrid Testing Framework
A hybrid testing framework combines the data-driven and keyword-driven frameworks to get the
most out of both. It leverages the benefits and strengths of each framework for the particular
environment it manages.
• Pros: This model leverages the advantages of all kinds of related frameworks.
• Cons: In this model, tests are entirely scripted. Thus, it increases the automation effort.
1) Linear Scripting
2) The Test Library Architecture Framework.
3) The Data-Driven Testing Framework.
The Test Library Architecture Framework
In this framework, common steps are grouped as functions in a library, and the test scripts call
these functions. For example, the login steps can be wrapped in a reusable function (a typical
implementation, using the same application as the data-driven example below, might look like
this):
Function Login()
SystemUtil.Run "flight4a.exe","","","open"
Dialog("Login").WinEdit("Agent Name:").Set "Jimmy"
Dialog("Login").WinEdit("Password:").Set "Mercury"
Dialog("Login").WinButton("OK").Click
End Function
Now, you will call this function in the main script as follows:
Call Login()
---------------------------
Other Function calls / Test Steps.
---------------------------
Advantages
Higher level of code reuse is achieved in Structured Scripting as compared to “Record &
Playback”
The automation scripts are less costly to develop due to higher code re-use
Easier Script Maintenance
Disadvantages
The test data is hard-coded inside the scripts, so running the same flow with different data sets
requires changing the scripts themselves.
The Data-Driven Testing Framework
Step 1) Create your test data source, for example a table of agent names and passwords:
AgentName    Password
Jimmy        Mercury
Tina         MERCURY
Bill         MerCURY
Step 2) Develop the test script and make references to your test-data source.
SystemUtil.Run "flight4a.exe","","","open"
Dialog("Login").WinEdit("Agent Name:").Set DataTable("AgentName", dtGlobalSheet)
Dialog("Login").WinEdit("Password:").Set DataTable("Password", dtGlobalSheet)
Dialog("Login").WinButton("OK").Click
'Check Flight Reservation Window has loaded
Window("Flight Reservation").Check CheckPoint("Flight Reservation")
Disadvantages
More time is needed to plan and prepare both Test Scripts and Test Data
What is a Keyword?
A keyword is an action that can be performed on a GUI component. For example, for the GUI
component Textbox, some keywords (actions) would be InputText, VerifyValue, VerifyProperty,
and so on.
Object (Application Map)    Action (Keyword)    Argument
WinButton(OK)               Click
WebButton(OK)               Click
Advantages
Minimal scripting knowledge is required, and a single keyword can be reused across multiple
test scripts.
Disadvantages
Initial investment being pretty high, the benefits of this can only be realized if the
application is considerably big and the test scripts are to be maintained for quite a few
years.
High Automation expertise is required to create the Keyword Driven Framework.
NOTE: Even though Micro Focus UFT advertises itself as a keyword-driven framework, you
cannot achieve complete test-tool and application independence using it.
Agile Testing includes various principles that help us to increase the productivity of our
software.
1. Constant response
2. Less documentation
3. Continuous Testing
4. Customer Satisfaction
5. Easy and clean code
6. Involvement of the entire team
7. Test-Driven
8. Quick feedback
For our better understanding, let's see them one by one in detail:
1. Constant Response
The involvement of the business team in every iteration gives the agile team a constant response.
In other words, the product and business requirements are understood through this constant
response.
2. Less Documentation
The execution of agile testing requires less documentation, as the agile teams or test engineers
use a reusable specification or checklist. The team emphasizes the test itself rather than
secondary information.
3. Continuous Testing
The agile test engineers execute testing continuously, as this is the only way to ensure constant
improvement of the product.
4. Customer Satisfaction
In any project delivery, customer satisfaction is important as the customers are exposed to their
product throughout the development process.
As the development phase progresses, the customer can easily modify and update requirements.
And the tests can also be changed as per the updated requirements.
5. Easy and Clean Code
The bugs or defects found by the agile team or the testing team are fixed within the same
iteration, which leads to easy and clean code.
6. Involvement of the Entire Team
Traditionally, the testing team is the only team responsible for the testing process in the Software
Development Life Cycle. In agile testing, on the other hand, the business analysts (BA) and the
developers can also test the application or the software.
7. Test-Driven
While doing the agile testing, we need to execute the testing process during the implementation
that helps us to decrease the development time. However, the testing is implemented after
implementation or when the software is developed in the traditional process.
8. Quick Feedback
In each iteration of agile testing, the business team is involved. Therefore, we get continuous
feedback, which reduces the feedback-response time on development work.
In this framework, testers are shared resources across teams, while testing protocols, toolsets,
and KPIs are maintained at a centralized level. This allows organizations to quickly deploy any
tester to any team while consistently maintaining QA principles and processes.
You feel pressure to reduce time to production: The QA cycle of writing test cases,
scripting and executing takes a considerable percentage of the overall software
development lifecycle (SDLC). Having a TCoE in place cuts out the repetitive processes
across teams, allowing them to focus solely on testing tasks that matter.
Your organization is challenged by hiring and onboarding strong testing
resources: A TCoE can establish reliable recruiting, hiring, and onboarding protocols. This
leads to strong testers across your organization, all onboarded with consistency.
You want to encourage persistent innovation: A tester’s day is filled with writing test
cases or scripting, executing tests, and reporting defects. There is typically very little time
for innovating and advancing the way they work. Having a Testing Center of Excellence
ensures that someone in your organization is focused on this critical component.
Shifting projects and priorities leaves your testers shifting teams or deliverables
often: In an agile environment, sometimes customer feedback loops lead to frequently
shifting priorities. Having the ability to shift resources and maintain quality is the key to
being successful.
How To Set Up TCoE?
Once an organization agrees to the framework of a Testing Center of Excellence, then hard work
comes in the form of successfully implementing it.
Resources/Cost Involved
Your resources and costs may vary depending on how your company approaches the
implementation. For example, if you decide to partner with a third-party vendor to start-up
and/or maintain the TCoE, the internal resources dedicated to this may be minimal, however,
your partnership may result in higher costs.
On the contrary, if you’re considering implementing this framework in-house, then the
following resources and costs should be considered:
Resources: A Testing Center of Excellence should be comprised of individuals who are
fully dedicated to this initiative. When considering who should be included, contemplate
recruiting testing managers, testing leads, and ensure someone from each testing
competency is involved (automation, manual, performance, security, etc).
Cost: The cost associated with starting up an internal TCoE includes resources that will
be dedicated to its implementation and those that will formally sit within that group
moving forward. In addition, there may be costs to consider while standardizing testing
tools or purchasing a document repository solution.
TCoE Pros & Cons
While analyzing whether to implement a Testing Center of Excellence, you must fully consider
its pros and cons.
Here are some cons to consider before deciding to make the leap:
A TCoE may overcomplicate things: If you have one or two teams with static testers,
odds are there that the processes and tools are fairly aligned. Or maybe you have high
functioning teams that would find standard ways of working an impediment to being
successful. Either way, adding in an additional layer may add unnecessary complexity,
thereby resulting in delayed releases and frustration.
Insufficient support could lead to burnout and failure: Deciding to implement a TCoE
without backing from all levels of your organization could leave its members feeling
discouraged and burned out if their process and tooling recommendations aren't
supported or adopted properly.
TCoE Stages Of Evolution
A TCoE typically evolves through three stages of maturity. When implementing one, avoid these
common pitfalls:
Not defining how much authority the TCoE has: You will inevitably have a tester or
team who fails to follow processes or use tools outlined by the TCoE. Failing to provide
the Testing Center of Excellence with the ability to enforce guidelines will be
counterproductive and lead to low adoption rates over time.
Failing to create feedback loops for communication, both ways: Having a group of
individuals defining process or implementing new tools, without buy-in or direction from
the other teams in the organization, will drive an unsuccessful implementation. It is
important that all testers are engaged and help in driving decisions, not just in the
beginning, but over time as well.
Creating a TCoE with bad collaborators and communicators: It’s not enough for this
group to be comprised of people who understand the testing principles in-depth, it is also
a must that they value communication and collaboration.
Trying to move too quickly during the implementation phase: Identifying, planning,
and implementing a Testing Center of Excellence takes time. Ensuring that you’ve gone
through the steps above, and taking the time needed to plan upfront, will pay off in the
end.
Identifying what KPIs you should measure is challenging and unique to every organization.
While selecting your set of KPIs, you must consider the team sizes and distribution, company
culture, and current gaps or challenges you are trying to fix.
A project delivery model is a term that is widely used within the IT industry. It is a way of
project delivery based on the location of labor resources. The choice of a delivery model can
affect the success of the entire project.
Our article is all about how the software engagement models operate and how they can operate
more efficiently. Let’s figure out the relationship between project success and practices and
understand the pain points of project managers. We will answer the question “What is an onsite
and offshore delivery model?”, and compare onsite, offshore, and hybrid cooperation with
outsourcing vendors as well. Besides, you will learn what suits your needs best, onsite software
development or offshore project development.
You may find it useful: How to set up a dedicated team for your project?
On-hand information. Both a vendor and a customer can get first-hand information from their
employees to learn about the current work progress.
Face-to-face communication enables on-time detection of emerging issues and efficient
problem-solving.
Effective collaboration. As there is no time and distance gap between both sites, there is almost
no misunderstanding within a team.
Time effectiveness. Clients often propose changes at the latest stages of the software
development lifecycle. When a team uses an onsite delivery model, such late changes pose no
risk: everything is done on time.
Enhanced time to market. The product is delivered faster due to the above-mentioned
advantages.
Whereas there are a lot of advantages of using an offshore delivery model, clients must be aware
of the risks related to this type of partnership. Make sure you know about these risks.
Learn more about offshore dedicated teams.
People often choose the hybrid delivery model because of the cost savings of utilizing offshore
resources while reducing the total infrastructure cost (for the onsite team). Among the pros of an
onsite-offshore model are the following benefits:
Direct communication
High effectiveness
Best resources
Cost-effectiveness
Best practices in resource management
The management and administration costs involved in maintaining both the onsite and offshore
employees may inhibit many service providers from going for the onsite-offshore model. Also,
cultural differences between the onsite employees and offsite team members need to be managed
effectively to get the best results.
Are you looking for developers? Check out our staff augmentation options.
Project success
Until recently, costs, time, and savings were the most important factors determining success. A
project should not cost more than budgeted, should be completed on time, and its results should
lead to benefits higher than the costs of the project. These were the main factors defining
success, although recently extra elements of success have been added.
So, let's define a successful software development project as one where the delivered product
meets the scope, has at least the expected quality, is completed on time, does not exceed its total
budgeted cost, and is supported by effective communication.
In our company, we adhere to these principles. The teamwork of business analysts, tech leads,
and project managers assures accurate estimations. They select the right people and bring a team
together according to a customer’s project requirements.
Our senior managers and business analysts are mostly former developers with a strong
background in software engineering. Thus, they follow all the steps of the software development
life cycle and understand how important each of the stages is. These people select the right
candidates having all the necessary skills to complete a particular project.
Before a team is brought together, every prospective member takes a special test to determine
how psychologically compatible he or she is with the other members of the team. Professional
recruiters develop these tests, which provide accurate results.
Learn more about DICEUS.
Communication is the key instrument to overcome distances between onsite and offshore
employees and a client.
Control means adhering to goals, policies, standards, and quality levels. Coordination means all
managing activities that influence the project and, thus, communication.
If coordination is not sufficient, team performance and the final result will not be sufficient
either. This turns out to be one of the main reasons why offshore projects fail: project
management does not adapt to the new offshore situation, which differs from a distributed
situation in the home country.
The following coordination areas need attention: organizational structure, risk management,
infrastructure, process, conflict management, team structure, and team organization. Here are
five main categories:
Standards: Standards include all methodologies, rules, dictionaries, procedures, etc.
They are focused on delivering the right product (scope) with the right quality.
Plans: This category includes all schedules, milestones, and other plans. They all are
focused on delivering the product on time and within the budget.
Formal mutual adjustment is all about coordinating formal communication. This
category includes the creation of hierarchies, the planning of formal meetings, etc. Delegation is
also an important aspect that affects team performance over distance: people have certain
responsibilities in the project or the process.
Informal mutual adjustment: Some small measures can be taken to increase the chance
of informal communication between people.
Team selection: The knowledge and experience of all team members together influence
the success of the project. Three dimensions define the team’s maturity: team technical
competency, team motivation, and distributed teamwork skills (the ability to cooperate in a
distributed environment).
The five previously mentioned categories of coordination measures address the challenges of the
onsite-offshore support model and aim at improving communication and knowledge exchange.
Conclusion
To sum it up, offshore project meaning is a wide notion, and the project success depends on a
great number of factors. Whatever delivery model you choose, you should consider these factors
before starting a project.
Let’s sum up the advantages of the three models under this review:
Offshore model
High-cost savings
A single point of contact
A minimum 4-hour overlap with the on-site team
Clear, responsive communications
Onsite model
Hybrid model
Software testing is a process in which pre-scripted tests are executed on a software application
before its production release to check for any defects.
Types of Testing
Manual Testing
It is a manual process of checking for defects. While performing it, the tester must take the
stance of an end user and ensure all the features are working as stated in the requirements
document.
Automation Testing
Its objective is to simplify testing efforts as much as possible, using a minimal set of scripts and
automation tools to find defects. For example, if a large percentage of a quality assurance (QA)
team's resources is consumed by unit testing, that process might be a good candidate for
automation. Automated testing tools are capable of executing tests, reporting outcomes, and
comparing results with earlier test runs. Thus, tests carried out with these tools can be run
repeatedly, at any time of day.
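The "compare results with earlier test runs" capability mentioned above can be sketched as a regression check against a stored baseline (the function under test and the baseline values are invented for illustration):

```python
# Hypothetical function under test.
def price_with_tax(price: float, rate: float = 0.2) -> float:
    return round(price * (1 + rate), 2)

# Baseline results recorded on an earlier, known-good run.
BASELINE = {(100, 0.2): 120.0, (19.99, 0.1): 21.99}

def regression_check():
    """Re-run every baseline case; an empty list means no regression."""
    return [args for args, expected in BASELINE.items()
            if price_with_tax(*args) != expected]

print(regression_check())  # []
```

Because the check is scripted, it can run unattended after every build, which is exactly what makes automation pay off for frequently repeated tests.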
The software tool market is flooded with numerous software testing tools. There are 100+ tools
in the market, but not all of them will offer you the best service. Here is a sorted list of the best
software testing tools:
1. Apache JMeter
2. LoadRunner
3. Loadster
4. Load Impact
5. qTest
6. httperf
7. Selenium
8. QTP (HP)
9. Appium
10. Pylot
The list of available options goes on, yet only a few stand out: Selenium, JMeter, LoadRunner,
QTP (HP), and Appium are the most preferred of all.
1. Selenium: this software testing tool is designed to test web applications. It originated back in
the 2000s and has evolved over a decade. Selenium has been the choice of web automation
testers, especially those possessing advanced programming and scripting skills, as an automation
framework. It has also become a core framework for other open-source test automation tools
such as Watir, Protractor, Robot Framework, and Katalon Studio. Selenium supports multiple
system environments (Windows, Linux, and Mac) and browsers (Firefox, headless browsers, IE,
and Chrome), and its scripts can be written in various programming languages such as Groovy,
Python, PHP, Ruby, Java, and Perl.
2. JMeter: it is open-source software, a pure Java application designed for load testing and
measuring performance. It was originally designed for testing web applications but has since
been extended to other test functions. Apache JMeter may be used to test performance on both
static and dynamic resources and on dynamic web applications.
4. QTP (HP): QuickTest Professional, whose latest name is UFT (Unified Functional Testing)
after being acquired by HP, is an automated functional testing tool primarily used for functional,
regression, and service testing. Using it, you can automate user actions on a web or client-based
computer application and test the same actions for different users, different browsers, and
various Windows operating systems.
5. Appium: it is an open-source automation tool designed for mobile applications. All three types
of mobile applications (hybrid, mobile web, and native) can be tested with Appium. It also
allows you to run automated tests on actual devices as well as on simulators and emulators.
There are various online course providers, such as Coursera, edX, Vskills, and ISTQB, which
will help you learn all these technologies. These professional certificates can boost your resume
and help you find the right job.
DevOps involves practices, rules, processes, and tools that help to integrate development and
operating activities to reduce time from development to operations. DevOps has become a
widely accepted solution for organizations that are looking at ways to shorten the software
lifecycle from development to delivery and operation.
The adoption of both Agile and DevOps helps the teams to develop and deliver quality software
faster, which in turn is also known as “Quality of Speed”. This adoption has gained much interest
over the past five years and continues to intensify in the coming years too.
Teams need to find opportunities to replace manual testing with automated testing. Test
automation is considered an essential component of DevOps; at a minimum, most regression
testing should be automated.
Given the popularity of DevOps and the fact that test automation is underutilized, with less than
20% of testing being automated, there is a lot of room to increase the adoption of test automation
in organizations. More advanced methods and tools should emerge to allow better utilization of
test automation in projects.
Existing popular automation tools such as Selenium, Katalon, and TestComplete continue to
evolve with new features that make automation much easier and more effective too.
#3) API and Services Test Automation
Decoupling the client and server is a current trend in designing both Web and mobile
applications.
APIs and services are reused in more than one application or component. These changes, in turn,
require teams to test APIs and services independent of the applications using them.
When APIs and services are used across client applications and components, testing them is
more effective and efficient than testing the client. The trend is that the need for API and service
test automation continues to increase, possibly outpacing that of the functionality used by the
end-users on user interfaces.
Having the right processes, tools, and solutions for API automation tests is more critical than
ever. Therefore, it is worth your effort in learning the best API Testing Tools for your testing
projects.
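Testing an API independently of its client usually means asserting on the response contract (fields and types) rather than on a UI. A minimal sketch, using a stubbed response in place of a real HTTP call (the endpoint, fields, and data are invented):

```python
# A stubbed response, standing in for e.g. requests.get(url).json().
def fake_get_user(user_id: int) -> dict:
    return {"id": user_id, "name": "Tina", "active": True}

# The contract the service promises to its clients.
REQUIRED_FIELDS = {"id": int, "name": str, "active": bool}

def contract_ok(payload: dict) -> bool:
    """Check every required field is present with the right type."""
    return all(isinstance(payload.get(f), t) for f, t in REQUIRED_FIELDS.items())

print(contract_ok(fake_get_user(7)))             # True
print(contract_ok({"id": "7", "name": "Tina"}))  # False: wrong type, missing field
```

The same contract check can then run against every application or component that consumes the service.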
#4) Artificial Intelligence for Testing
Although applying artificial intelligence and machine learning (AI/ML) approaches to address
challenges in software testing is not new in the software research community, recent
advancements in AI/ML with a large amount of data available pose new opportunities to apply
AI/ML in testing.
However, the application of AI/ML in testing is still in its early stages. Organizations will keep
finding ways to optimize their testing practices with AI/ML.
AI/ML algorithms are developed to generate better test cases, test scripts, test data, and reports.
Predictive models will help you make decisions about where, what, and when to test. Smart
analytics and visualizations support teams to detect faults, understand test coverage, areas of
high risk, etc.
We hope to see more applications of AI/ML in addressing problems such as quality prediction,
test case prioritization, fault classification, and assignments in the upcoming years.
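Test case prioritization, one of the AI/ML applications mentioned above, can be sketched in its simplest statistical form: order tests by historical failure rate so the most failure-prone run first (the history data below is invented):

```python
# Historical pass/fail records per test case (1 = failed on that run).
HISTORY = {
    "test_login":    [0, 1, 1, 0, 1],
    "test_search":   [0, 0, 0, 0, 0],
    "test_checkout": [1, 1, 0, 1, 1],
}

def prioritize(history: dict) -> list:
    """Order tests so the historically most failure-prone ones run first."""
    failure_rate = lambda runs: sum(runs) / len(runs)
    return sorted(history, key=lambda t: failure_rate(history[t]), reverse=True)

print(prioritize(HISTORY))  # ['test_checkout', 'test_login', 'test_search']
```

A real ML-based approach would add features such as code churn and coverage, but the goal is the same: spend limited test time where faults are most likely.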
#5) Mobile Test Automation
To fully support DevOps, mobile test automation must be a part of DevOps toolchains. However,
the current utilization of mobile test automation is very low, partly due to the lack of methods
and tools.
The trend of automated testing for mobile apps continues to increase. This trend is driven by the
need to shorten time-to-market and more advanced methods and tools for mobile test automation.
The integration between cloud-based mobile device labs like Kobiton and test automation tools
like Katalon may help in bringing mobile automation to the next level.
For example, using AI/ML to detect where to focus testing needs data not only from the testing
phase but also from the requirements, design, and implementation phases.
Along with the trend of increasing transformation towards DevOps, test automation, and AI/ML,
we will look at testing tools that allow integration with other tools and activities in ALM.
In general, these quality properties indicate the extent to which the component or system under
test meets its specified requirements and satisfies user needs and expectations.
There are many types of software testing, each with a different purpose and scope; which ones are used depends on the context and objectives of the software project. Some of the most common types of software testing are:
Smoke testing: Smoke testing is a method of testing that checks whether the basic
functionality or features of a software application are working properly before performing more
detailed tests. It is usually done by developers or testers and is performed after each build or
deployment of the software. Smoke testing helps to verify that the software is stable enough for
further testing and does not have any major defects that would prevent it from functioning.
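A smoke check can be as simple as a handful of fast assertions against the critical subsystems. The sketch below assumes a hypothetical start_app entry point and two illustrative health checks:

```python
# Minimal smoke-test sketch: fast checks that the critical entry points of a
# hypothetical application respond at all before detailed testing begins.
def start_app():
    # Stand-in for booting the application under test.
    return {"status": "up", "db": True}

def smoke_test(app_state):
    """Return True only if every critical subsystem is reachable."""
    checks = [
        app_state.get("status") == "up",   # application booted
        app_state.get("db") is True,       # database reachable
    ]
    return all(checks)

ok = smoke_test(start_app())
```

If any check fails, the build is rejected immediately and no further testing effort is spent on it.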
Sanity testing: Sanity testing is a method of testing that checks whether the new
functionality or features of a software application are working as expected after any changes or
modifications in the code. It is usually done by testers and is performed after regression testing
or smoke testing. Sanity testing helps to verify that the changes have not introduced any new
defects or errors in the software and that it meets the user requirements.
User acceptance testing: User acceptance testing is a method of testing that involves the
end users or customers of a software application. It is usually done by users or business
representatives and is performed before releasing or delivering the software to them. User
acceptance testing helps to verify that the software meets their expectations and requirements
and that they are satisfied with it.
Exploratory testing: Exploratory testing is a method of testing that involves exploring or
experimenting with a software application without following any predefined test cases or scripts.
It is usually done by testers and is based on their intuition, curiosity, creativity, and
experience. Exploratory testing helps to discover new defects or errors in the software that might
not be detected by other types of tests and to learn more about how it works and behaves in
different scenarios.
Ad hoc testing: Ad hoc testing is a method of testing that involves testing a software
application without any formal planning or documentation. It is usually done by testers and is
based on their random or spontaneous actions. Ad hoc testing helps to find defects or errors in
the software that might not be covered by other types of tests and to test the software in an
unpredictable or realistic way.
Security testing: Security testing is a method of testing that involves checking the
security or vulnerability of a software application. It is usually done by testers or security experts
and is based on the security requirements or standards. Security testing helps to verify that the
software is protected from any unauthorized access, modification, or damage and that it can
handle any malicious attacks or threats.
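To make the idea concrete, the following sketch is a miniature security test that feeds a classic SQL-injection payload to a login lookup backed by parameterized queries. The schema and credentials are illustrative:

```python
import sqlite3

# Security-test sketch: verify a login lookup is not fooled by a classic
# SQL-injection payload. Table and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(name, pw):
    # Parameterized query: user input is bound, never spliced into the SQL.
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND pw = ?", (name, pw)
    ).fetchone()
    return row is not None

payload = "' OR '1'='1"            # classic injection attempt
injection_rejected = not login("alice", payload)
```

Had the query been built by string concatenation, the payload would have matched every row; the test makes that regression visible.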
Globalization testing: Globalization testing is a method of testing that involves checking
the functionality or features of a software application in different languages, cultures, or regions.
It is usually done by testers or localization experts and is based on the globalization requirements
or specifications. Globalization testing helps to verify that the software can support multiple
languages, currencies, formats, etc. and that it can adapt to different user preferences and
expectations.
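One common globalization check verifies that every locale's message catalog covers the same keys as the default locale. The catalogs below are illustrative stand-ins for real translation files:

```python
# Globalization-test sketch: check that each locale's message catalog covers
# the same keys as the default locale. Catalog contents are illustrative.
CATALOGS = {
    "en": {"greeting": "Hello", "farewell": "Goodbye"},
    "fr": {"greeting": "Bonjour", "farewell": "Au revoir"},
    "de": {"greeting": "Hallo"},  # deliberately missing "farewell"
}

def missing_keys(catalogs, default="en"):
    """Map each locale to the message keys it lacks relative to the default."""
    base = set(catalogs[default])
    return {loc: sorted(base - set(msgs))
            for loc, msgs in catalogs.items() if base - set(msgs)}

gaps = missing_keys(CATALOGS)
```

A passing run returns an empty mapping; here the German catalog is flagged for the missing "farewell" message.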
While there are several models for testing, the list below emphasizes some of the popular and widely used testing methodologies.
1. Waterfall Methodology
Amongst the rigid software testing methodologies, the waterfall methodology is one of those
models where the focus is on a development process that follows a vertical path. It is
characterized by its sequential processes where the next phase of the development process only
begins after the previous phase is completed. There are five predefined phases within the
waterfall model: requirements, design, implementation, verification, and maintenance.
Testing Approach
The ‘requirements’ phase ensures all the necessary requirements, such as testing objectives,
organizational planning, draft documents, and the testing strategy, are defined and fixed. The rigid
documentation and planning make this model more suitable for small applications than for large, complex projects.
The project ‘design’ is then selected and approved by the decision-maker. Next, the development
team implements the detailed design plan, and once it’s done, it’s ‘verified’ by the QA team and
stakeholders. Once the project is verified and launched, the development team starts with
‘maintaining’ the software product until it is thoroughly tested for the final launch.
Advantages
The methodology makes it easy to plan and manage project requirements.
Disadvantages
Use Cases
2.Agile Methodology
The agile testing methodology is based on the idea of iterative development, where development
progress is made in rapid incremental cycles, also known as sprints. With complex applications
and swift market demands, the agile methodology opens interactions with stakeholders to better
understand their requirements. The communication cycle allows the team to focus on responding
to the changes instead of relying on extensive planning that might eventually change.
Testing Approach
The agile methodology originated as a way to break away from rigid models of development and
testing that leave no room for iteration. This is one of the reasons that the
testing team prefers using this approach for dynamic applications to accommodate constant
feedback from stakeholders. Less priority is given to documentation and more to delivering working software.
Instead of testing the entire system towards the end, every suggested iteration release is tested
thoroughly. Furthermore, each iteration has its own cycle of requirements, design, coding, and
testing cycle, making it a cyclical process. The test-driven development model is mostly used for
adding new functionalities to an existing system, making it suitable for small projects with tight
deadlines.
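The test-driven development cycle mentioned above can be sketched in miniature: the test is written first and the implementation is grown until it passes. The function and values are illustrative:

```python
# Test-driven development in miniature: the test is written first, then the
# implementation is grown until it passes (red -> green). Names are illustrative.
def test_discount():
    assert apply_discount(200.0, 50) == 100.0   # 50% off
    assert apply_discount(80.0, 0) == 80.0      # no discount
    try:
        apply_discount(100.0, 150)              # invalid rate must raise
        assert False, "expected ValueError"
    except ValueError:
        pass

# Implementation written after (and driven by) the test above.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

test_discount()  # the cycle ends when the test passes
```

Each new feature repeats the same loop: add a failing test, make it pass, then refactor.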
Advantages
Incremental tests minimize the cost and risks associated with numerous changes.
The constant communication between developers and clients determines the progress of the project.
Disadvantages
Increased back-and-forth with clients may lead to longer delivery times.
Use Cases
3. Iterative Methodology
The methodology works by breaking down a large project into smaller components, where each
component goes through a series of testing cycles. It is a data-driven model, and each iteration is
based on the results of the previous test cycle. The repeated tests ease organizational
management and streamline the software requirements before merging them into a final product.
Testing Approach
The iterative development model follows a cyclical pattern for testing smaller components of a
large project. Each iteration cycle is identical to a complete development cycle and consists of
planning, design, testing, and evaluation phases. Once a cycle is complete, the component is
refined and carried into the next iteration.
When software application requirements are loosely defined, the feedback from each iteration
improves the functionality of the final product. This is the reason the model is most suitable for
large applications with evolving requirements.
Advantages
Smaller iterations for complex software applications decrease the time and cost of
development.
With each iteration cycle, the errors and bugs get removed at the initial development
stages.
The model allows more flexibility and focuses on the design instead of documentation.
Disadvantages
Use Cases
Gaming applications
Streaming applications
SaaS applications
Prototype testing
4. V-Model Methodology
The V-methodology is considered an extension of the waterfall model, used for small projects
with defined software requirements. It follows a ‘V-shaped’ model that is categorized into
coding, verification, and validation. With coding being the base of the model, each development
phase goes hand-in-hand with testing, resulting in the early detection of errors in each step.
Testing Approach
The ‘V-model’ stands apart from the waterfall model in terms of parallel testing procedures
conducted at each development stage. The verification process ensures that the product is
developed correctly, while the validation process ensures that it’s the right product as per
requirements.
The model starts with the static verification processes that include a business requirement
analysis, system design, architectural design, and module design. Next, the coding phase ensures
that a specific language and tools are chosen based on the coding standards and guidelines.
Finally, the last phase of the validation ensures that each module and stage of development goes
through unit testing, integration testing, system testing, and application testing.
Advantages
Errors and defects are detected early in the development cycle.
Disadvantages
The model is not suitable for large projects with higher chances of changes.
There is no going back after a module has entered the testing phase.
Use Cases
Commercial applications.
5. Rapid Application Development (RAD) Methodology
The testing model is a form of an incremental methodology that originated out of the agile
family. It is centered on rapidly developing components for the software, therefore focusing more on testing rather than planning
and documentation. While each software function is divided and designed as separate
components, they are merged to form a prototype, collect end-user feedback, and make further
iterations accordingly.
Testing Approach
The RAD methodology consists of five phases through which the system components are
developed and tested simultaneously. Each of these phases is time-bound and must be done
within its scheduled time frame.
The first phase, ‘Business Modeling’, identifies the business requirements and determines the
flow of information towards other business channels. Once the flow is determined, the ‘Data
Modeling’ phase refines the information into a set of data objects.
The third, ‘Process Modeling’, converts data objects to establish a business information flow.
The phase defines the QA process through which the data objects can be further modified as per
client feedback. This is done while keeping in mind that the application might go through
further changes.
The fourth stage of ‘Application Generation’ is known as the prototyping phase, and the models
are coded with automated tools. Finally, each prototype is individually tested in the ‘Testing and
Turnover’ phase, reducing the errors and risks in the overall software application.
Advantages
Simultaneous designing and reusability of the prototypes reduce development and testing
time.
The time-box approach at each incremental stage reduces the overall risks in the software
project.
Disadvantages
Use Cases
System modularization.
6. Spiral Methodology
The spiral methodology is one of the most popular software testing methodologies focusing on
risk handling. As the name suggests, the spiral method comprises many loops, where each loop
enhances the software and delivers small prototypes of the powerful software.
In the Spiral model, each loop is one phase of the software development process. Its radius
represents the project’s total expense, while the angular dimension represents its progress.
Testing Approach
Each phase of the spiral model gets divided into four quadrants, as shown in the figure. Each of
these quadrants has unique functions. Let’s discuss them one by one in detail.
Objective determination is the first quadrant. It starts with gathering customer requirements
to identify, elaborate, and analyze the objectives, and spends some time identifying alternative
solutions. At the end of this phase, you’ll have a software requirement specification document
ready for your project.
Risk identification and rectification is the second quadrant that helps evaluate all the
possible solutions to determine the best solution that aligns with your project requirements.
New product version development is the third quadrant that focuses on developing the
features identified in earlier phases and then verifying them through rigorous testing.
Finally, the last phase allows end-customers to review the newest version of the product
and provide feedback that will help plan the next version.
Advantages
Disadvantages
Difficult to identify the end of the project as the spiral goes on indefinitely.
Use Cases
Desktop applications
Payment gateways
7. Extreme Programming (XP) Methodology
Extreme programming (XP) is an agile framework in which development teams learn from previous experiences and incorporate best practices such as code reviews and pair programming.
Testing Approach
The XP framework involves five major phases or development stages that iterate continuously:
In the planning phase, customers meet the software team to present their requirements
and envision the final product. The business analyst team develops a software requirement
specification document and creates a release plan for various features by breaking them into
milestones. If the software team can’t estimate any requirements, they introduce spikes (short research tasks) into the release plan.
Next is the designing phase; it’s part of the planning process, but software teams
consider it a separate phase as it impacts the product’s user experience. Here, the focus is on one
of the XP best practices, i.e., simplicity. By simplicity in design, we mean clear-cut layout,
simple navigation, proper usage of white space, minimal components, a bright color palette, etc.
The coding phase focuses on the coding standards, pair programming, continuous
integration, and collective code ownership to develop clean and comprehensive code for the
project.
The testing phase ensures that rigorous unit, integration, and acceptance tests are
conducted to determine whether a particular feature is working correctly, whether the integrated
functionality provides the expected output, and whether the customer is pleased with the result.
Lastly, listening is all about constant communication and feedback. The customers
provide clear-cut feedback on the improvements required, and the project managers note it down.
Advantages
Disadvantages
Use Cases
Mission-critical applications
Web development
Game development
Security-related applications
Bottom of Form
Unit-IV
Answer all the questions
Part-A
1.What is software quality?
Software quality is defined as a field of study and practice that describes the desirable attributes
of software products. There are two main approaches to software quality: defect management
and quality attributes.
2.What is software quality management system?
Software quality management (SQM) is a management process that aims to develop and
manage the quality of software in such a way so as to best ensure that the product meets the
quality standards expected by the customer while also meeting any necessary regulatory and
developer requirements, if any. Software quality managers require software to be tested
before it is released to the market, and they do this using a cyclical process-based quality
assessment in order to reveal and fix bugs before release. Their job is not only to ensure their
software is in good shape for the consumer but also to encourage a culture of quality throughout
the enterprise.
3.What is verification?
Quality verification techniques for software systems include:
Functional Testing: It is a QA technique that validates what the system does without
considering how it does it.
Standardization: Standardization plays a crucial role in quality assurance. This decreases
ambiguity and guesswork, thus ensuring quality.
Inspection: Inspection verifies the requirements of a product or system by visual examination.
Demonstration: Demonstration verifies requirements by operating the system under
representative conditions.
Test: Test verifies requirements through controlled, measurable exercises; inspection,
demonstration, and test apply increasing rigor.
4.What is validation?
Software validation is a method used to ensure that automated software processes work as
expected. In quality management systems, software validation is achieved through a set of
planned activities that are conducted throughout various stages of the software development and
implementation stages. Validating software helps reduce risk and legal liability, and provides
evidence that the computer system is fit for purpose. It is a critical first step toward eQMS
software adoption and implementation.
5.What are the components of software quality assurance?
Software quality assurance is composed of a variety of functions associated with two different
constituencies. The components of software quality assurance include:
Security
Reliability
Maintainability
Efficiency
Portability
At the highest level of maturity, organizations continuously improve their processes based on
quantitative feedback and innovation. They are highly adaptable and innovative, and they
actively seek ways to optimize their processes for efficiency and effectiveness.
16.What is competency framework in PCMM?
Defining the skills, knowledge, and behaviors required for different job roles within the
organization.
17.What is performance management in PCMM?
Implementing performance appraisal processes that provide feedback and reward based on merit.
18.What is training and development in PCMM?
Providing structured training programs to enhance the skills and capabilities of employees.
19.What is Mentoring and coaching in PCMM?
Encouraging mentoring relationships and coaching to facilitate knowledge transfer and skill
development.
20.What is measurement and analysis?
Collecting data on HR processes and using metrics to make data-driven decisions for
improvement.
Part-B
1.What are the factors of software quality?
The various factors, which influence the software, are termed as software factors. They can be
broadly divided into two categories. The first category of the factors is of those that can be
measured directly such as the number of logical errors, and the second category clubs those
factors which can be measured only indirectly. For example, maintainability but each of the
factors is to be measured to check for the content and the quality control.
Several models of software quality factors and their categorization have been suggested over the
years. The classic model of software quality factors, suggested by McCall, consists of 11 factors
(McCall et al., 1977). Similarly, models consisting of 12 to 15 factors, were suggested by
Deutsch and Willis (1988) and by Evans and Marciniak (1987).
All these models do not differ substantially from McCall’s model. The McCall factor model
provides a practical, up-to-date method for classifying software requirements (Pressman, 2000).
McCall’s Factor Model
This model classifies all software requirements into 11 software quality factors. The 11 factors
are grouped into three categories – product operation, product revision, and product transition
factors.
Product operation factors − Correctness, Reliability, Efficiency, Integrity, Usability.
Product revision factors − Maintainability, Flexibility, Testability.
Product transition factors − Portability, Reusability, Interoperability.
Flexibility
This factor deals with the capabilities and efforts required to support adaptive maintenance
activities of the software. These include adapting the current software to additional
circumstances and customers without changing the software. This factor’s requirements also
support perfective maintenance activities, such as changes and additions to the software in order
to improve its service and to adapt it to changes in the firm’s technical or commercial
environment.
Testability
Testability requirements deal with the testing of the software system as well as with its operation.
It includes predefined intermediate results, log files, and also the automatic diagnostics
performed by the software system prior to starting the system, to find out whether all
components of the system are in working order and to obtain a report about the detected faults.
Another type of these requirements deals with automatic diagnostic checks applied by the
maintenance technicians to detect the causes of software failures.
Product Transition Software Quality Factor
According to McCall’s model, three software quality factors are included in the product
transition category that deals with the adaptation of software to other environments and its
interaction with other software systems. These factors are as follows −
Portability
Portability requirements tend to the adaptation of a software system to other environments
consisting of different hardware, different operating systems, and so forth. It should be
possible to continue using the same basic software in diverse situations.
Reusability
This factor deals with the use of software modules originally designed for one project in a new
software project currently being developed. They may also enable future projects to make use of
a given module or a group of modules of the currently developed software. The reuse of software
is expected to save development resources, shorten the development period, and provide higher
quality modules.
Interoperability
Interoperability requirements focus on creating interfaces with other software systems or with
other equipment firmware. For example, the firmware of the production machinery and testing
equipment interfaces with the production control software.
A focus on a quality management system shouldn’t just mean a ‘box ticking’ exercise for an
organisation. And it shouldn’t be regarded as ‘just another cost’ to a business, either.
Instead, it can be treated as an opportunity for your business to evolve and grow - sometimes
with spectacular results.
As this research from McKinsey demonstrates there are common moments in every business
journey that can trigger a fresh focus on quality. If these are identified and handled correctly,
the research argues, it can lead to whole new levels of efficiency and profitability.
McKinsey identifies five stages of ‘quality maturity’, as well as their triggers - and the changes
in operational, quality and cultural practices needed to make them possible.
Starting out
Basic quality
Stronger quality
Embedded quality
1. Starting out
McKinsey’s research starts by describing a company which is not yet driven by quality goals.
They are delivering products but doing so to an inconsistent standard and without much
attention to customer needs. They may be losing money due to wastefulness, lost deals, and
even financial penalties from clients or regulators for the mistakes they make.
This may be the position of many startups and their trigger to change may be a large fine, a
product recall or a new opportunity with an important client who requires they meet a new set
of standards (e.g. ISO 9001:2015).
These businesses have the opportunity to save money, win new clients and increase sales by
raising the levels of their compliance and the quality of their end products to at least a basic
and consistent standard.
2. Basic quality
Achieving this basic level of quality maturity requires a repeatable, standardized approach to
development and manufacturing within your operations. To achieve this, you may need a
dedicated quality champion in your business, who can define and implement quality processes
and a consistent way of satisfying compliance requirements.
As efficiency improves and the business becomes less reactive in their approach to quality
these businesses can see opportunities to further reduce risk exposures and failures. They can
begin to see new ways to increase productivity while boosting savings and profitability.
3. Stronger quality
This second stage of quality maturity is driven by robust development and manufacturing
processes enabled by digitised QMS systems which embed accountability for improving
quality across the organisation. This approach entails developing company-wide methods to
review processes, identifying and solving quality problems as they arise.
Businesses can now see opportunities to further empower their staff to improve customer
satisfaction and the level of quality they are delivering.
4. Embedded quality
In this third stage of quality maturity, operations are now subject to a continuous cycle of
improvement thanks to a digital quality system which facilitates constant review of process and
customer needs. Quality and customer satisfaction drive product design and solutions, as well
as strategic decision-making. Quality has become the way of life for the company.
On the back of this progress, there is now an opportunity for an organisation to start defining
and setting standards for an entire industry.
This stage of maturity sees the adoption of advanced manufacturing, development and control
technologies, underpinned by unique insight and innovation in quality processes. Company
culture prizes quality as one of its highest achievements, and there is a focus on developing
solutions beyond the company’s traditional boundaries.
Verification and validation – These are the two important aspects of software quality
management. Verification gives the answer to the question whether the software is being
developed in a correct way and validation provides the answer whether the right software is
being produced. In a nutshell, verification denotes precision whereas validation indicates value
of the end or final product. Verification and validation are important steps used in various
processes in different industries.
Importance of Validation
Validation is requisite in the quality management process. It makes sure that the process or
product meets the purpose intended. There are different categories of validation in general
described below-
Prospective Validation
This type of validation is done to ensure the characteristics of interest before the product gets
launched. Proper functioning of the product meeting the safety standards is also checked in the
process of validation.
Retrospective Validation
This kind of validation is done against the written specifications. Retrospective validation is
actually based on the historical data or evidence that had been documented.
Partial Validation
This kind of validation is commonly used in research and pilot studies. During this process the
most important effects get tested.
Periodical Validation
There are certain items of interest that at times get repaired, relocated, dismissed or integrated in
specified time laps. This kind of validation is carried out for such items.
Concurrent Validation
This kind of validation process is usually carried out during routine service, manufacturing, or
engineering processing.
Cross Validation
This type of validation technique is suitable for estimation of performance of a predictive model
in statistics.
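A minimal k-fold cross-validation can be sketched in plain Python; here a trivial majority-class "model" is scored across five folds. The data and fold count are illustrative:

```python
# Cross-validation sketch: split the data into k folds, train on k-1 folds,
# score on the held-out fold, and average the scores.
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def cross_val_score(labels, k=5):
    scores = []
    for train, test in k_fold_indices(len(labels), k):
        # "Model" = predict the majority class seen in the training fold.
        train_labels = [labels[j] for j in train]
        majority = max(set(train_labels), key=train_labels.count)
        correct = sum(labels[j] == majority for j in test)
        scores.append(correct / len(test))
    return sum(scores) / len(scores)

labels = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
score = cross_val_score(labels, k=5)
```

Averaging over folds gives a less optimistic performance estimate than scoring on the training data itself.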
Six Sigma initiatives deserve all the accolades for bringing the verification and
validation aspects in the picture with Design for Six Sigma model. Before use or release, Design
for Six Sigma is utilized to make products and processes perfect. It is actually an application of
Six Sigma principles in order to design the products and also look after the manufacturing and
support process. For better understanding, let’s have a look at key Six Sigma methodologies.
DMAIC Methodology
Define
Measure
Analyze
Improve
Control
DMADV Methodology
Define
Measure
Analyze
Design
Verify
What makes DFSS different from DMAIC is the verification and validation part. It can
successfully get applied to Software Engineering as the methodology can cover the overall
Software Development Cycle. The major emphasis of DFSS is on designing and re-designing the
services or products and making them ready for the commercial market.
The word ‘defects’ is very dangerous for the software development processes. Defects are
something that is not right with the product or services. Therefore, it becomes highly essential to
keep the defect at a bay during the software development process, and Design for Six Sigma
helps to eliminate the defects. The objective is to pursue continuing process improvement by
verifying and validating the processes. Six Sigma quality management methods offer a
methodical way to figure out the defects and aid to rectify them accordingly. It is an approach to
achieve quality level of zero defects.
1. Pre-Project Components
These assure that the project commitment has been defined clearly regarding the schedule, budget,
development risks, and total staff required for that particular project, and that the development
and quality plans have been correctly determined.
2. Software Project Life Cycle Components
In the project development life cycle, these include components like reviews, expert
opinions, and finding defects in software design and programming, whereas in the software
maintenance life cycle they include specialized maintenance components as well as development life
cycle components.
3. Infrastructure Components for Error Prevention and Improvement
These apply preventive and corrective measures in order to reduce the rate of errors in software, based on the organization’s
accumulated experience of maintenance and development activities.
4. Management SQA Components
These include the introduction of managerial involvement in order to control development and
maintenance activities.
5. Standardization, Certification, and SQA Assessment Components
These help in the coordination between the different organization quality systems at a professional
level.
6. Organizing for SQA – the Human Components
Their main objective is to support and initiate the SQA activities, detect the gaps/deviations in them,
and suggest improvements.
Abbreviated as SQAP, the Software Quality Assurance Plan comprises the procedures,
techniques, and tools that are employed to make sure that a product or service aligns with the
requirements defined in the SRS(Software Requirement Specification).
The plan identifies the SQA responsibilities of the team and lists the areas that need to be
reviewed and audited. It also identifies the SQA work products.
Based on the information gathered, the software architects can prepare the project estimation
using techniques such as WBS (Work Breakdown Structure), SLOC (Source Line of Codes), and
FP(Functional Point) estimation.
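As a hedged illustration of function-point estimation, the sketch below uses the common average-complexity weights and an assumed productivity rate; real projects would also apply a complexity adjustment factor:

```python
# Simplified, unadjusted function-point count. Weights are the common
# "average complexity" values; the productivity rate is an illustrative
# assumption, not a standard figure.
FP_WEIGHTS = {
    "external_inputs": 4, "external_outputs": 5, "external_inquiries": 4,
    "internal_files": 10, "external_interfaces": 7,
}

def function_points(counts):
    return sum(FP_WEIGHTS[k] * n for k, n in counts.items())

def effort_person_days(fp, fp_per_person_day=2.0):
    """Convert function points to effort given a team productivity rate."""
    return fp / fp_per_person_day

counts = {"external_inputs": 6, "external_outputs": 4,
          "external_inquiries": 3, "internal_files": 2,
          "external_interfaces": 1}
fp = function_points(counts)   # 6*4 + 4*5 + 3*4 + 2*10 + 1*7 = 83
days = effort_person_days(fp)  # 41.5 person-days at the assumed rate
```

The point of the exercise is that the estimate is derived from counted requirements, not from lines of code that do not yet exist.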
By validating the change requests, evaluating the nature of change, and controlling the change
effect, it is ensured that the software quality is maintained during the development and
maintenance phases.
For this purpose, we use software quality metrics that allow managers and developers to observe
the activities and proposed changes from the beginning till the end of SDLC and initiate
corrective action wherever required.
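One such metric is defect density (defects per KLOC), which can flag the phase that needs corrective action. The figures below are illustrative:

```python
# Quality-metric sketch: defect density (defects per thousand lines of code)
# tracked per phase, the kind of signal that triggers corrective action.
# All numbers are illustrative.
def defect_density(defects, loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)

phases = {
    "design review": {"defects": 12, "loc": 8000},
    "unit test":     {"defects": 30, "loc": 8000},
    "system test":   {"defects": 6,  "loc": 8000},
}
densities = {p: defect_density(d["defects"], d["loc"])
             for p, d in phases.items()}
worst = max(densities, key=densities.get)  # phase needing corrective action
```

Tracking the same metric across phases is what lets managers see whether corrective action is working.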
The SQA audit inspects the actual SDLC process followed vs. the established guidelines that
were proposed. This is to validate the correctness of the planning and strategic process vs. the
actual results. This activity could also expose any non-compliance issues.
ISO 9000: Based on seven quality management principles that help organizations ensure that
their products or services are aligned with customer needs.
The 7 principles of ISO 9000 are: customer focus, leadership, engagement of people, process approach, improvement, evidence-based decision making, and relationship management.
CMMI level: CMMI stands for Capability Maturity Model Integration. This model originated
in software engineering. It can be employed to direct process improvement throughout a project,
department, or entire organization.
The 5 CMMI maturity levels are: Initial, Managed, Defined, Quantitatively Managed, and Optimizing.
An organization is appraised and awarded a maturity level rating (1-5) based on the type of
appraisal.
Test Maturity Model integration (TMMi): Based on CMMi, this model focuses on maturity
levels in software quality management and testing.
The 5 TMMi levels are: Initial, Managed, Defined, Measured, and Optimization.
As an organization moves to a higher maturity level, it achieves a higher capability for producing
high-quality products with fewer defects and closely meets the business requirements.
10. Education: Continuous education to stay current with tools, standards, and industry
trends
SQA Techniques
SQA Techniques include:
Auditing: Auditing is the inspection of the work products and its related information to
determine if a set of standard processes were followed or not.
Reviewing: A meeting in which the software product is examined by both internal and
external stakeholders to seek their comments and approval.
Code Inspection: It is the most formal kind of review that does static testing to find bugs
and avoid defect seepage into the later stages. It is done by a trained mediator/peer and is
based on rules, checklists, entry and exit criteria. The reviewer should not be the author
of the code.
Design Inspection: Design inspection is done using a checklist that inspects the below
areas of software design:
General requirements and design
Functional and Interface specifications
Conventions
Requirement traceability
Structures and interfaces
Logic
Performance
Error handling and recovery
Testability, extensibility
Coupling and cohesion
Simulation: A simulation is a tool that models a real-life situation in order to virtually
examine the behaviour of the system under study. In cases when the real system cannot
be tested directly, simulators are great sandbox system alternatives.
Functional Testing: A QA technique that validates what the system does without considering how it does it. This form of Black Box testing mainly focuses on testing the system specifications or features.
Standardization: Standardization plays a crucial role in quality assurance. This
decreases ambiguity and guesswork, thus ensuring quality.
Static Analysis: A software analysis that is done by an automated tool without executing the program. Software metrics and reverse engineering are some popular forms of static analysis. Many teams now use static code analysis tools such as SonarQube, Veracode, etc.
Walkthroughs: A software walkthrough or code walkthrough is a peer review where the
developer guides the members of the development team to go through the product, raise
queries, suggest alternatives, and make comments regarding possible errors, standard
violations, or any other issues.
Unit Testing: This is a White Box Testing technique where complete code coverage is
ensured by executing each independent path, branch, and condition at least once.
Stress Testing: This type of testing is done to check how robust a system is by testing it
under heavy load i.e. beyond normal conditions.
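The unit-testing idea above, covering each independent path, branch, and condition at least once, can be sketched in Python. The function and its tests below are illustrative examples, not taken from the source:

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"          # branch 1: non-positive side
    if a + b <= c or b + c <= a or a + c <= b:
        return "invalid"          # branch 2: violates the triangle inequality
    if a == b == c:
        return "equilateral"      # branch 3
    if a == b or b == c or a == c:
        return "isosceles"        # branch 4
    return "scalene"              # branch 5

# One test per branch gives full branch coverage of the function.
assert classify_triangle(0, 1, 1) == "invalid"
assert classify_triangle(1, 2, 3) == "invalid"
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
```

A coverage tool (e.g. coverage.py) can confirm that every branch was executed by this test set.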
7. Vendor Management: Work with contractors and tool vendors to ensure collective
success.
8. Safety/Security Management: SQA is often tasked with exposing vulnerabilities and
bringing attention to them proactively.
9. Risk Management: Risk identification, analysis, and Risk mitigation are spearheaded by
the SQA teams to aid in informed decision making
10. Education: Continuous education to stay current with tools, standards, and industry
trends
Difficulty in measuring process improvement: The SEI/CMM model may not provide
an accurate measure of process improvement, as it relies on self-assessment by the
organization and may not capture all aspects of the development process.
Focus on documentation rather than outcomes: The SEI/CMM model may focus too
much on documentation and adherence to procedures, rather than on actual outcomes such
as software quality and customer satisfaction.
May not be suitable for all types of organizations: The SEI/CMM model may not be
suitable for all types of organizations, particularly those with smaller development teams or
those with less structured development processes.
May not keep up with rapidly evolving technologies: The SEI/CMM model may not be
able to keep up with rapidly evolving technologies and development methodologies, which
could limit its usefulness in certain contexts.
Lack of agility: The SEI/CMM model may not be agile enough to respond quickly to
changing business needs or customer requirements, which could limit its usefulness in
dynamic and rapidly changing environments.
Key Process Areas (KPA’s): Each of these KPA’s defines the basic requirements that should
be met by a software process in order to satisfy the KPA and achieve that level of maturity.
Conceptually, key process areas form the basis for management control of the software project
and establish a context in which technical methods are applied, work products like models,
documents, data, reports, etc. are produced, milestones are established, quality is ensured and
change is properly managed.
Project Planning- It includes defining resources required, goals, constraints, etc. for
the project. It presents a detailed plan to be followed systematically for the successful
completion of good quality software.
Configuration Management- The focus is on maintaining the integrity of the software product, including all its components, for the entire lifecycle.
Requirements Management- It includes the management of customer reviews and
feedback which result in some changes in the requirement set. It also consists of
accommodation of those modified requirements.
Subcontract Management- It focuses on the effective management of qualified
software contractors i.e. it manages the parts of the software which are developed by third
parties.
Software Quality Assurance- It guarantees a good quality software product by
following certain rules and quality standard guidelines while developing.
Level-3: Defined –
At this level, documentation of the standard guidelines and procedures takes place.
It is a well-defined integrated set of project-specific software engineering and
management processes.
Peer Reviews- In this method, defects are removed by using a number of review
methods like walkthroughs, inspections, buddy checks, etc.
Intergroup Coordination- It consists of planned interactions between different
development teams to ensure efficient and proper fulfillment of customer needs.
Organization Process Definition- Its key focus is on the development and
maintenance of the standard development processes.
Organization Process Focus- It includes activities and practices that should be
followed to improve the process capabilities of an organization.
Training Programs- It focuses on the enhancement of knowledge and skills of the
team members including the developers and ensuring an increase in work efficiency.
Level-4: Managed –
At this stage, quantitative quality goals are set for the organization for software
products as well as software processes.
The measurements made help the organization to predict the product and process
quality within some limits defined quantitatively.
Software Quality Management- It includes the establishment of plans and strategies
to develop quantitative analysis and understanding of the product’s quality.
Quantitative Management- It focuses on controlling the project performance in a
quantitative manner.
Level-5: Optimizing –
This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
Use of new tools, techniques, and evaluation of software processes is done to prevent
recurrence of known defects.
Process Change Management- Its focus is on the continuous improvement of the
organization’s software processes to improve productivity, quality, and cycle time for the
software product.
Technology Change Management- It consists of the identification and use of new
technologies to improve product quality and decrease product development time.
Defect Prevention- It focuses on the identification of causes of defects and prevents
them from recurring in future projects by improving project-defined processes.
PCMM is a maturity structure that focuses on continuously improving the management and
development of the human assets of an organization.
The People Capability Maturity Model (PCMM) is a framework that helps organizations
successfully address their critical people issues. Based on the best current practices in fields such
as human resources, knowledge management, and organizational development, the PCMM guides
organizations in improving their processes for managing and developing their workforces.
The People CMM defines an evolutionary improvement path from ad hoc, inconsistently
performed workforce practices to a mature infrastructure of practices for continuously elevating
workforce capability.
The PCMM consists of five maturity levels that lay successive foundations for continuously
improving talent, developing effective methods, and successfully directing the people assets of
the organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes
a level of capability for developing the talent within the organization.
The Initial Level of maturity includes no process areas. Although workforce practices implemented
in Maturity Level 1 organizations tend to be inconsistent or ritualistic, virtually all of these
organizations perform processes that are defined in the Maturity Level 2 process areas.
To achieve the Managed Level, Maturity Level 2, managers start to perform basic people
management practices such as staffing, managing performance, and adjusting compensation as a
repeatable management discipline. The organization establishes a culture focused at the unit level
for ensuring that people can meet their work commitments. In achieving Maturity Level 2, the
organization develops the capability to manage skills and performance at the unit level. The
process areas at Maturity Level 2 are Staffing, Communication and Coordination, Work
Environment, Performance Management, Training and Development, and Compensation.
The fundamental objective of the Defined Level is to help an organization gain a competitive
advantage from developing the various competencies that must be combined in its workforce to
accomplish its business activities. These workforce competencies represent critical pillars
supporting the business strategy; by aligning workforce competencies with current and future
business objectives, the improved workforce practices implemented at Maturity Level 3 become
crucial enablers of business strategy.
At the Predictable Level, the organization manages and exploits the capability developed by its
framework of workforce competencies. The organization is now able to manage its capability and
performance quantitatively. The organization can predict its capability for performing work
because it can quantify the capability of its workforce and of the competency-based processes
they use in performing their assignments.
At the Optimizing Level, the entire organization is focused on continual improvement. These
improvements are made to the capability of individuals and workgroups, to the performance of
competency-based processes, and to workforce practices and activities.
No single process model can address all disciplines as per the requirements. This is why CMMI is
used, as it allows the integration of multiple disciplines as and when needed.
Objectives of CMMI :
1. Fulfilling customer needs and expectations.
2. Value creation for investors/stockholders.
3. Market growth is increased.
4. Improved quality of products and services.
5. Enhanced reputation in Industry.
CMMI Representation – Staged and Continuous :
A representation allows an organization to pursue a different set of improvement objectives.
There are two representations for CMMI :
Staged Representation :
uses a pre-defined set of process areas to define improvement path.
provides a sequence of improvements, where each part in the sequence serves as
a foundation for the next.
an improvement path is defined by maturity level.
maturity level describes the maturity of processes in organization.
Staged CMMI representation allows comparison between different organizations
for multiple maturity levels.
Continuous Representation :
allows selection of specific process areas.
uses capability levels that measure improvement of an individual process area.
Continuous CMMI representation allows comparison between different
organizations on a process-area-by-process-area basis.
allows organizations to select processes which require more improvement.
In this representation, order of improvement of various processes can be
selected which allows the organizations to meet their objectives and eliminate
risks.
1. Initial. Processes are seen as unpredictable, poorly controlled, and reactive. Businesses in
this stage have an unpredictable environment that leads to increased risks and inefficiency.
2. Managed. Processes are characterized by projects and are frequently reactive.
3. Defined. Processes are well-characterized and well-understood. The organization is more
proactive than reactive, and there are organization-wide standards that provide guidance.
4. Quantitatively Managed. Processes are measured and controlled. The organization is
using quantitative data to implement predictable processes that meet organizational goals.
5. Optimizing. Processes are stable and flexible. The organizational focus is on continued
improvement and responding to changes.
It’s worth noting that while the goal of organizations is to reach level 5, the model is still
applicable and beneficial for organizations that have achieved this maturity level. Organizations
at this level are primarily focused on maintenance and improvements, and they also have the
flexibility to focus on innovation and to respond to industry changes.
CMMI Model – Maturity Levels :
In CMMI with staged representation, there are five maturity levels described as follows :
1. Maturity level 1 : Initial
processes are poorly managed or controlled.
unpredictable outcomes of processes involved.
ad hoc and chaotic approach used.
No KPAs (Key Process Areas) defined.
Lowest quality and highest risk.
2. Maturity level 2 : Managed
requirements are managed.
processes are planned and controlled.
projects are managed and implemented according to their documented plans.
The risk involved is lower than at the Initial level, but still exists.
Quality is better than at the Initial level.
3. Maturity level 3 : Defined
processes are well characterized and described using standards, proper
procedures, and methods, tools, etc.
Medium quality and medium risk involved.
Focus is process standardization.
4. Maturity level 4 : Quantitatively managed
quantitative objectives for process performance and quality are set.
quantitative objectives are based on customer requirements, organization needs,
etc.
process performance measures are analyzed quantitatively.
higher quality of processes is achieved.
lower risk
5. Maturity level 5 : Optimizing
continuous improvement in processes and their performance.
improvement has to be both incremental and innovative.
highest quality of processes.
lowest risk in processes and their performance.
The Malcolm Baldrige National Quality Award® is the highest level of national recognition for
performance excellence that a U.S. organization can receive. The award focuses on performance
in five key areas: product and process outcomes, customer outcomes, workforce outcomes,
leadership and governance outcomes, and financial and market outcomes.
Organizations don't receive the award for specific products or services. To receive the award, an
organization must have a role-model management system that ensures continuous improvement,
demonstrates efficient and effective operations, and provides a way of engaging and responding
to customers and other stakeholders.
Benefits of Applying
Applicants for the Malcolm Baldrige National Quality Award (MBNQA)—those that have
received the award and those that haven't—say the Baldrige evaluation process is one of the best,
most cost-effective, most comprehensive performance assessments your organization can find. In
annual surveys conducted by the Judges Panel of the MBNQA, applicants have noted
many benefits of applying for the award.
Up to 18 awards are given annually across six eligibility categories: manufacturing, service,
small business, education, health care, and nonprofit. Award recipients that are nominated for a
subsequent award are not included in the total cap of 18.
Award recipients must share information about their exceptional performance practices with
other U.S. organizations, but they don't need to share proprietary information, even if they
included this information in their award applications. The principal mechanisms for sharing
information are the annual Quest for Excellence® Conference, and the Baldrige Fall
Conference (held in collaboration with state and local Baldrige-based programs). Sharing beyond the
Quest for Excellence Conference is voluntary.
Site-visited organizations that are not recommended for the award may be recognized for
category best practices. Best practices identified by the Judges Panel in the Baldrige Criteria
process categories are eligible for recognition. To receive this recognition, an applicant's overall
organizational performance in the identified category must demonstrate mature processes that are
linked to the appropriate organizational results, demonstrating favorable levels and trends. Also,
the applicant organization must have credible performance across all categories. An applicant
may be recognized for more than one category best practice or for none at all. Organizations
receiving such category recognition will present at the Quest for Excellence
Conference following their recognition. Such organizations are recognized in the annual award
ceremony program, but not on stage, and the Baldrige Program highlights them on our website
and in a press release.
The Baldrige Program keeps the identity of all applicant organizations confidential unless they
receive the award or category best-practice recognition. We treat all information submitted by
applicants as strictly confidential, and we have numerous protocols and processes in place to
protect applicants and ensure the integrity of the award.
Unit-V
Answer all the questions
Part-A
1.What is meant by ‘plan’?
At this stage, you will literally plan what needs to be done. Depending on the project's size,
planning can take a major part of your team’s efforts. It will usually consist of smaller steps so
that you can build a proper plan with fewer possibilities of failure.
3.Shortly explain about ‘Check’?
This is probably the most important stage of the PDCA cycle. If you want to clarify your plan,
avoid recurring mistakes, and apply continuous improvement successfully, you need to pay
enough attention to the CHECK phase.
4.Shortly explain about ‘Act’?
Finally, you arrive at the last stage of the Plan-Do-Check-Act cycle. Previously, you developed,
applied, and checked your plan. Now, you need to act.
If everything seems perfect and your team managed to achieve the original goals, then you can
proceed and apply your initial plan.
5.What is PDCA?
PDCA or plan–do–check–act (sometimes called plan–do–check–adjust) is an iterative design
and management method used in business for the control and continual improvement of
processes and products.
6.How to read software requirements?
To understand software requirements, you need to follow these steps1:
1. Define software requirements by clarifying and defining vague business requirements.
2. Analyze requirements by breaking them down into smaller categories by features and
functionalities.
3. Break down tasks.
Software requirements often describe a current business or technological problem in need of a
solution and a description of how the proposed software solves that problem2.Software
requirements can be divided into different categories3:
For example, a unit test for a summing function might assert an expected result:
ASSERT(sum(a1, 4) == 10);
Writing many such cases for one function costs time, which raises a practical question: how many
cases are enough to make the test reliable? Should you write the normal case first, and add
further cases later when time allows?
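One pragmatic answer, sketched in Python with a hypothetical sum-style function standing in for the one in the ASSERT above: cover the normal case plus the boundary cases most likely to fail, rather than exhaustively enumerating inputs:

```python
def sum_first_n(values, n):
    """Return the sum of the first n elements of values (illustrative)."""
    return sum(values[:n])

# Normal case: mirrors the ASSERT above.
assert sum_first_n([1, 2, 3, 4], 4) == 10
# Boundary cases: these are where defects typically hide.
assert sum_first_n([], 0) == 0               # empty input
assert sum_first_n([7], 1) == 7              # single element
assert sum_first_n([-1, 1, -1, 1], 4) == 0   # negative values
assert sum_first_n([1, 2, 3, 4], 2) == 3     # n smaller than the list
```

A handful of well-chosen boundary cases usually catches far more defects per test than many near-duplicate normal cases.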
Statistical process control (SPC) uses graphical displays known as control charts to determine
whether a process should be continued or should be adjusted to achieve the desired quality.
Part-B
1.What are the role of statistical methods in software quality?
Statistical methods are important in quality control and improvement. The benefits and
advantages of statistical quality control include:
Early detection of defects
Minimization of rework and scrap
More uniform quality of production
Improved relationship with the customer
Reduced inspection costs
The main objective of statistical process control is to determine whether variations in output are
due to assignable causes or common causes.
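The assignable-vs-common-cause distinction can be illustrated with a minimal Shewhart-style control chart calculation. This is a sketch with invented defect counts, using 3-sigma limits estimated from a stable baseline period:

```python
import statistics

def control_limits(baseline):
    """Center line and 3-sigma limits estimated from in-control baseline data."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - 3 * sd, mean + 3 * sd

# Defects per build while the process was stable (hypothetical data).
baseline = [4, 5, 3, 6, 4, 5, 4, 5, 4, 6]
lcl, ucl = control_limits(baseline)

# New observations: points outside the limits suggest an assignable cause,
# while points inside reflect ordinary common-cause variation.
new_points = [5, 4, 14, 5]
flagged = [x for x in new_points if x < lcl or x > ucl]
print(flagged)  # [14]
```

Real X-bar/R charts estimate limits from subgroup ranges rather than a plain standard deviation, but the decision rule, investigate only points beyond the control limits, is the same.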
relationships to support the transformation. Phase 2 uses NLP and a data model to generate an
enhanced user requirements model, which is then transformed into abstract test cases using the
transformation rules and the knowledge base. Phase 3 includes the generation of concrete test
cases using the abstract test cases and a data model instance.
To date, a prototype has been created that accepts as input a source model for the user
requirement (use cases and user stories), validates the model against a meta-model, and performs
NLP processing with a pipeline of extraction features from the user requirements while using the
knowledge base. We then create an enhanced user requirements (EUR) model, which is validated
against a EUR meta-model, and then transformed to a target model (abstract test case) using the
predefined rules of an M2M transformation. Finally, to complete the prototype, a component will
be added to generate concrete test cases by querying a database instance and using abstract test
case models. We will validate our test case generation process by comparing generated concrete
test cases against existing testing cases from student projects using the relevant database
instances.
Regional Blue Hospital forms a PDCA team to handle the high number of health care-associated
infections (HAIs), where patients get a secondary infection because of their hospital stay. The
team goes through the following PDCA cycle:
Plan: The team decides they want to see a 25% reduction in the number of HAIs. The
team comes up with reasons HAIs are happening, which can include ineffective employee
training, poor air filtration system or longer hospital stays than necessary.
Do: The team may then decide that they think improper employee training is at the core
of their HAI issue. They may figure that if employees went through a training program,
they could prevent a lot of secondary infections in their patients.
Check: Over several months, they may develop new training and use a core group of
nurses to test this training to see if these nurses follow protocol more and if their patients
have a significantly lower incidence of infections associated with health care.
Act: If this solution seems to work out and the team is happy with the results, then the
cycle can close for a time until it's time to revisit the training and improve it further. If it
doesn't seem like the training is effective at decreasing the number of HAIs in a time
frame, the team may want to revisit their proposed solutions and try something else.
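The team's 25% target from the Plan step comes down to simple arithmetic at Check time. A sketch, with infection counts invented for illustration:

```python
def reduction_pct(before, after):
    """Percentage reduction in HAI count between two periods."""
    return (before - after) / before * 100

baseline_hais = 120   # hypothetical count before the new training
current_hais = 84     # hypothetical count after the training pilot
achieved = reduction_pct(baseline_hais, current_hais)
print(f"{achieved:.0f}% reduction; target met: {achieved >= 25}")  # 30% reduction; target met: True
```

If the computed reduction falls short of the target, the Act step loops back to Plan with a revised hypothesis, as the hospital example describes.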
Converting a requirement into a set of test cases is a multi-step process. Here’s an example of how
to convert the requirement “User logs onto the website using email and password” into a set of
test cases:
1. Identify the inputs (email and password) and the action (logging on).
2. Determine valid and invalid values for each input.
3. Determine the expected output for each combination of inputs.
4. Create test cases based on the inputs and expected outputs (for example, “Test case 1:
valid email and valid password; expected result: user is logged in”).
Remember to read requirements carefully, break them down into smaller pieces, and create a
structure for them in your head. By doing so, you’ll be able to write test cases that cover all the
necessary scenarios and ensure that the software meets the customer’s needs.
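A table of such test cases can be expressed directly in code. The sketch below uses a hypothetical login function (the real website logic is not in the source); each tuple pairs inputs with the expected output:

```python
# A hypothetical login function standing in for the real website logic.
VALID_USERS = {"alice@example.com": "s3cret"}

def login(email, password):
    if "@" not in email:
        return "invalid email format"
    if VALID_USERS.get(email) != password:
        return "authentication failed"
    return "logged in"

# Each tuple is one test case: (inputs, expected output).
test_cases = [
    (("alice@example.com", "s3cret"), "logged in"),             # valid email + password
    (("alice@example.com", "wrong"), "authentication failed"),  # wrong password
    (("bob@example.com", "s3cret"), "authentication failed"),   # unknown user
    (("not-an-email", "s3cret"), "invalid email format"),       # malformed email
]

for (email, password), expected in test_cases:
    assert login(email, password) == expected
print("all login test cases passed")
```

Keeping the cases in a data table like this makes it easy to add new scenarios without duplicating test logic.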
With the help of TestCaseLab, its intuitive interface, and powerful features, you can quickly
create test cases and ensure that your software meets the highest quality standards.
A survey of defect detection studies comparing inspection and testing techniques yields practical
recommendations: use inspections for requirements and design defects, and use testing for code.
Evidence-based software engineering can help software practitioners decide which methods to
use and for what purpose. EBSE involves defining relevant questions, surveying and appraising
available empirical evidence, and integrating and evaluating new practices in the target
environment. This article helps define questions regarding defect detection techniques and
presents a survey of empirical studies on testing and inspection techniques. We then interpret the
findings in terms of practical use. The term defect always relates to one or more underlying faults
in an artifact such as code. In the context of this article, defects map to single faults.
Defect removal:
Defect Removal Efficiency (DRE) is a metric used to estimate test efficacy. It allows the
development team to gauge how well bugs were eliminated before release by comparing the
number of defects detected internally with the number detected externally after release. DRE can
be expressed as a percentage: DRE = (defects found internally / (defects found internally +
defects found externally)) x 100.
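A minimal sketch of this calculation in Python, using the common internal/(internal + external) formulation with hypothetical defect counts:

```python
def defect_removal_efficiency(found_internally, found_externally):
    """DRE = internal defects / (internal + external defects) * 100."""
    total = found_internally + found_externally
    return found_internally / total * 100

# Hypothetical counts: 90 bugs caught before release, 10 reported by users.
dre = defect_removal_efficiency(90, 10)
print(f"DRE = {dre:.0f}%")  # DRE = 90%
```

A DRE close to 100% indicates that internal verification and testing caught nearly all defects before customers could encounter them.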
Deming management is the management technique that focuses on the creation and continuous
improvement of organizational mechanisms for the high quality of outputs. It is the application
of the principles of W. Edwards Deming, an American Scholar in management.
It is a fact that most scholars believe that the beginning of quality in productivity occurred in
Japan only after the 2nd World War. Most of the Japanese industries were completely destroyed
in the war and had to rebuild themselves from scratch. As such, a number of American scholars
reached Japan and helped Japanese Entrepreneurs to operate modern manufacturing facilities.
Deming was one of the scholars who went to Japan in 1950.
Deming questions the basic assumption that high quality means higher prices. He focuses on
statistical control of organizational performance and “joy in work,” which will drive ever-improving
quality forward and lower costs.
He believes that a manager’s job is to seek out and correct the causes of failure, rather than
merely identify failures after they occur. The goal of Deming’s fourteen points lies in altering the
behaviors of managers and employees so that companies can become low-cost, high-quality, and
highly productive suppliers of goods and services.
In recognition of his contribution to the management and substantial achievement in quality, the
Japanese government instituted a Deming prize shortly after the 2nd World War.
Robert Kreitner has suggested the following four quality management principles of Deming
management.
Quality improvement is essential to reduce waste and inefficiency. It helps to increase higher
productivity, greater market share, and new business and employment opportunities.
A satisfied customer is essential for organizational success. An organization must produce its
products and services that meet the expectations of customers easily but effectively. Only slogans
and inspirational words are not enough, what is necessary is action to implement.
Deming disagreed with blaming a particular person or department for inferior quality.
Management, work, rules, technology, organizational structure, and culture, all are responsible
for inferior quality. Employees will produce superior quality if the system is redesigned to
improve it.
Therefore, the management must treat employees as internal customers and provide them with
sufficient ideas and suggestions for quality improvement.
Plan-Do-Check-Act
Deming suggested making informed decisions on the basis of hard data. He proposed a
four-step process for the application of total quality management (TQM) which is popularly
known as the PDCA cycle. This is also known as Deming’s four phases of quality management.
This is given by,
Plan – Management must plan for product development. Planning objectives, policies,
tools, and customer needs, and training employees to produce products that meet
customers’ needs, is essential for quality improvement.
Do – Management must produce the products according to the product development
plans. If any problem is identified in the planning phase, necessary steps must be taken to
solve the problem in this phase.
Check – Once the production process has started, the management must check to find out
deviations in outputs or inputs. This phase helps to find out whether or not the
improvement process was successful. It is also helpful in finding out the causes of
deviations and evaluating their impact on the final product and market share.
Act – This step deals with market research and aims to prevent problems rather than
correct them. After studying the lessons learned, the management must act on that basis.
Deming’s PDCA cycle is aimed at developing teamwork with respect to product
development, manufacturing, sales, and market research.
Deming’s 14 Points
(b)Discuss briefly about Continuous Improvement through Plan Do Check Act (PDCA)?
PDCA (Plan Do Check Act)
By the Mind Tools Content Team
Imagine that your customer satisfaction score on a business ratings website has dipped.
When you look at recent comments, you see that your customers are complaining about
late deliveries.
So, you decide to run a small pilot project for a month, using a new supplier to deliver
your products to a sample set of customers. And you're pleased to see that the feedback is
positive. As a result, you decide to use the new supplier for all your orders in the future.
What you've just done is a single loop called the PDCA Cycle. This is an established tool
for achieving continuous improvement.
The PDCA approach was pioneered by Dr William Deming, and we've worked closely
with The Deming Institute to produce this article. In it, we outline the key principles of
PDCA, and explain when and how to put them into practice.
Deming developed the cycle as a way of identifying why some products or processes don't work
as hoped. His approach has since become a popular strategy tool, used by many different types of
organizations. It allows them to formulate theories about what needs to change, and then test
them in a continuous feedback loop.
Note:
Deming himself used the concept of Plan-Do-Study-Act (PDSA). He found that the focus
of "Check" was too much on simply inspecting whether a change had succeeded or failed.
He preferred to focus instead on studying the results of any innovations, and to keep
looking back at the initial plan. He stressed that the search for new knowledge is always
guided by a theory – so you should be as sure as you can that your theory is right! [1]
With the PDCA cycle you can solve problems and implement solutions in a rigorous,
methodical way.
1. Plan
First, identify and understand your problem or opportunity. Perhaps the standard of a
finished product isn't high enough, or an aspect of your marketing process should be
improved.
Explore the information available in full. Generate and screen ideas, and develop a robust
implementation plan.
Be sure to state your success criteria and make them as measurable as possible. You'll
return to them in the Check stage.
2. Do
Once you've identified a potential solution, test it safely with a small-scale pilot project.
This will show whether your proposed changes achieve the desired outcome – with
minimal disruption to the rest of your operation if they don't. For example, you could
trial the change within one department, in a limited geographic area, or with a particular
demographic.
As you run the pilot project, gather data to show whether the change has worked or not.
3. Check
Next, analyze your pilot project's results against the criteria that you defined in Step 1, to
assess whether your idea was a success.
You may decide to try out more changes, and repeat the Do and Check phases. But if your
original plan clearly isn't working, you'll need to return to Step 1.
4. Act
This is where you implement your solution. But remember that PDCA/PDSA is a loop,
not a process with a beginning and end. Your improved process or product becomes the
new baseline, but you continue to look for ways to make it even better.
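The four steps above can be sketched as a simple control loop in code. The sketch below is purely illustrative and not part of the PDCA literature: the `plan`, `do`, `check`, and `act` callbacks and the delivery-time scenario are hypothetical, standing in for whatever criteria and pilot process an organization would actually use.

```python
def pdca(plan, do, check, act, max_cycles=5):
    """Iterate Plan-Do-Check-Act until the Check phase passes or the
    cycle budget runs out. Returns the cycle count on success, else None."""
    for cycle in range(1, max_cycles + 1):
        criteria = plan()            # Plan: define measurable success criteria
        result = do(criteria)        # Do: run a small-scale pilot
        if check(result, criteria):  # Check: analyze results against criteria
            act(result)              # Act: adopt the result as the new baseline
            return cycle
    return None

# Hypothetical scenario: cut average delivery time to two days or less.
state = {"delivery_days": 5.0}

def plan():
    return 2.0                       # measurable criterion: <= 2.0 days

def do(target):
    state["delivery_days"] *= 0.6    # each pilot trims delivery time by 40%
    return state["delivery_days"]

def check(result, target):
    return result <= target

def act(result):
    state["baseline"] = result       # improved process becomes the new baseline

cycles_needed = pdca(plan, do, check, act, max_cycles=10)
```

Because PDCA is a loop rather than a process with an end, `act` here only records a new baseline; a caller can run `pdca` again from that baseline to keep improving.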
The PDCA/PDSA framework works well in all types of organizations. It can be used to
improve any process or product by breaking it down into smaller steps or stages and
improving each one incrementally.
However, going through the PDCA/PDSA cycle can be much slower than a
straightforward, "gung ho" implementation. So, it might not be the appropriate approach
for dealing with an urgent problem.
It also requires significant buy-in from team members, and offers fewer opportunities for
radical innovation.
While PDCA/PDSA is an effective business tool, you can also use it to improve your own
performance:
First, Plan: Identify what's holding you back personally, and how you want to progress.
Look at the root causes of any issues, and set goals to overcome these obstacles.
Next, Do: When you've decided on your course of action, safely test different ways of
making the changes you want.
Then, Check: Review your progress regularly, adjust your behavior accordingly, and
note what you learn along the way.
Finally, Act: Implement what's working, continually refine what isn't, and carry on the
cycle of continuous improvement.
Key Points
The PDCA/PDSA cycle is a continuous loop of planning, doing, checking (or studying),
and acting. It provides a simple and effective approach for solving problems and
managing change. The model is useful for testing improvement measures on a small scale
before rolling them out across the whole organization.
The approach begins with a Planning phase in which problems are clearly identified and
understood, and a theory for improvement is defined. Potential solutions are tested on a
small scale in the Do phase, and the outcome is then studied and Checked.
Go through the Do and Check stages as many times as necessary before the full, polished
solution is implemented in the Act phase.