STA - Unit IV
1. Planning the stress test: This step involves gathering the system data, analyzing the
system, and defining the stress test goals.
2. Create Automation Scripts: This step involves creating the stress testing automation
scripts and generating the test data for the stress test scenarios.
3. Script Execution: This step involves running the stress test automation scripts and
storing the stress test results.
4. Result Analysis: This phase involves analyzing stress test results and identifying the
bottlenecks.
5. Tweaking and Optimization: This step involves fine-tuning the system and
optimizing the code with the goal of meeting the desired benchmarks.
Examples of stress testing: A web server may be stress tested using scripts, bots, and various
tools to observe the performance of a website during peak loads. These tests generally last
under an hour, or run until a limit on the amount of data that the web server can tolerate is found.
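The steps above can be sketched in miniature. The sketch below stress tests a local stand-in handler rather than a real web server; `handle_request`, the 1024-byte payload limit, and the request counts are all assumptions made for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Stand-in for the system under test (a hypothetical request handler)."""
    time.sleep(0.001)            # simulate a small amount of server work
    if len(payload) > 1024:      # assumed limit the server can tolerate
        raise ValueError("payload too large")
    return "OK"

def stress_test(n_requests, n_workers, payload):
    """Fire n_requests concurrently and count successes and failures."""
    def one_call(_):
        try:
            handle_request(payload)
            return True
        except ValueError:
            return False
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        outcomes = list(pool.map(one_call, range(n_requests)))
    return {"ok": sum(outcomes), "failed": len(outcomes) - sum(outcomes)}

# Small payload: every request should succeed; an oversized one would fail.
print(stress_test(n_requests=200, n_workers=20, payload="x" * 100))
```

In a real stress test the handler would be replaced by HTTP calls to the server under test, and the result analysis step would inspect latencies and error rates rather than a simple pass/fail count.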
Volume testing has to handle large volumes of data. It is difficult to maintain a well-structured
database at such volumes, and reviewing the data types and the connections between them is also difficult.
1. Consider the factors: Before performing failover testing, consider factors like
budget, time, team, technology, etc.
2. Analysis of failover reasons and design solutions: Determine probable failure
situations that the system might experience. Examine the causes of failure, including
software bugs, hardware malfunctions, network problems, etc., and design fixes for any
flaws or vulnerabilities found in the failover procedure.
3. Testing failover scenarios: Develop extensive test cases to replicate various
failover scenarios. This covers both unplanned failovers (system or component
failures) and scheduled failovers (maintenance). Test cases ought to address many
facets of failover, such as load balancing, user impact, network rerouting, and data
synchronization.
4. Executing the test plan: To reduce the impact on production systems, carry out the
failover test plan in a controlled setting. Keep an eye on how the system behaves during
failover to make sure it satisfies the recovery point and recovery time objectives.
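The failover behaviour described in these steps can be sketched with two stand-in servers. The names `primary-dc` and `backup-dc` and the `Server` class are hypothetical; the point is the redirection logic: route to the primary, and fall back to the backup on failure.

```python
class Server:
    """A minimal stand-in for a data-center server (hypothetical)."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def process(self, txn):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{txn} handled by {self.name}"

def process_with_failover(txn, primary, backup):
    """Route the transaction to the primary; on failure, redirect to the backup."""
    try:
        return primary.process(txn)
    except ConnectionError:
        return backup.process(txn)

primary = Server("primary-dc")
backup = Server("backup-dc")
print(process_with_failover("txn-1", primary, backup))  # served by the primary
primary.healthy = False  # simulate a primary server crash
print(process_with_failover("txn-2", primary, backup))  # redirected to the backup
```

A real failover test would also measure how long the redirection takes, since the recovery time objective is what the test plan verifies.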
● Example of Failover Testing: A bank needs to ensure its online banking system can
handle server failures without affecting customer transactions. The testing team
simulates a server crash in the primary data center while monitoring how quickly
transactions are redirected to a backup server in a different location. The failover
process takes 30 seconds, during which some transactions are lost. The team
implements improvements. After re-testing, the failover time is reduced to 5 seconds,
meeting the bank's requirements.
Disadvantages:
1. Increased Complexity
2. Resource Intensive
3. Time-Consuming
QA Engineers: Primarily responsible for designing, executing, and analyzing test cases across
different hardware, software, and platforms.
Developers: Collaborate with QA to identify and fix compatibility issues.
Product Managers: Define compatibility requirements and prioritize testing efforts.
End-users: Can provide valuable feedback on real-world compatibility issues.
1. Prepare your product or design to test: The first phase of usability testing is choosing
a product and then making it ready for usability testing.
2. Find your participants: Generally, the number of participants that you need is based
on several case studies. Mostly, five participants can find almost as many usability problems as
you’d find using many more test participants.
3. Write a test plan: The main purpose of the plan is to document what you are going to
do, how you are going to conduct the test, what metrics you are going to capture, the number of
participants you are going to test, and what scenarios you will use.
4. Take on the role of the moderator: The moderator plays a vital role that involves
building a partnership with the participant. To be an effective moderator, derive most of
your research findings by observing the participant’s actions and gathering verbal
feedback.
5. Present final report: This phase generally involves combining your results into an
overall score and presenting it meaningfully to your audience.
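The final-report step above reduces many observations to a single number. The sketch below combines task-completion rates and 1-5 satisfaction ratings into one 0-100 score; the session data and the 60/40 weighting are invented for illustration, not a standard formula.

```python
# Hypothetical session data: per participant, per-task completion (1 = done)
# and a 1-5 satisfaction rating gathered during the debrief.
sessions = [
    {"completed": [1, 1, 0, 1], "satisfaction": 4},
    {"completed": [1, 0, 0, 1], "satisfaction": 3},
    {"completed": [1, 1, 1, 1], "satisfaction": 5},
    {"completed": [1, 1, 0, 0], "satisfaction": 3},
    {"completed": [1, 1, 1, 1], "satisfaction": 4},
]

def usability_score(sessions):
    """Combine completion rate and satisfaction into a 0-100 score
    (the 0.6/0.4 weighting is an assumed convention)."""
    completion = sum(
        sum(s["completed"]) / len(s["completed"]) for s in sessions
    ) / len(sessions)
    satisfaction = sum(s["satisfaction"] for s in sessions) / (5 * len(sessions))
    return round(100 * (0.6 * completion + 0.4 * satisfaction), 1)

print(usability_score(sessions))
```

Note that five participants, as suggested above, is usually enough to surface the major usability problems even though the numeric score itself carries wide error bars at that sample size.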
Figure: Sample documentation on the disk label for the software tester to check.
Availability: Here, the data must be maintained by an authorized person, who also guarantees
that the data and communication services will be ready for use whenever we need them.
Integrity: Here, we protect data from being changed by an unauthorized person. The primary
objective of integrity is to allow the receiver to verify the data provided by the system.
Integrity systems regularly use some of the same fundamental approaches as confidentiality
structures, and they also verify that correct data is conveyed from one application to
another.
Authorization: It is the process of determining whether a client is permitted to perform an
action and to receive services. An example of authorization is access control.
Confidentiality: It is a security process that protects data from being leaked to outsiders,
because it is one of the main ways we can ensure the security of our data.
Authentication: The authentication process comprises confirming the identity of a person, or
tracing the origin of a product, which is necessary before allowing access to private
information or to the system.
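The distinction between authentication and authorization above can be shown in a few lines. Everything here is a toy: the user store, the fixed salt, the roles, and the access-control list are all invented for illustration (a real system would use per-user salts and a proper password-hashing scheme).

```python
import hashlib

def _digest(password):
    # Fixed salt for illustration only; real systems use per-user random salts.
    return hashlib.sha256(b"demo-salt" + password.encode()).hexdigest()

# Hypothetical user store: username -> (password digest, role)
USERS = {
    "alice": (_digest("s3cret"), "admin"),
    "bob": (_digest("hunter2"), "viewer"),
}

# Hypothetical access-control list: role -> permitted actions
ACL = {"admin": {"read", "write"}, "viewer": {"read"}}

def authenticate(username, password):
    """Authentication: confirm the individual's identity; return their role."""
    record = USERS.get(username)
    if record is None or _digest(password) != record[0]:
        return None
    return record[1]

def authorize(role, action):
    """Authorization: check that the client is permitted to perform the action."""
    return action in ACL.get(role, set())

role = authenticate("alice", "s3cret")
print(authorize(role, "write"))   # an admin may write
print(authorize(authenticate("bob", "hunter2"), "write"))  # a viewer may not
```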
System software security: In this, we will evaluate the vulnerabilities of the application based
on different software such as Operating system, Database system, etc.
Network security: In this, we will check the weakness of the network structure, such as
policies and resources.
Server-side application security: We will do the server-side application security to ensure
that the server encryption and its tools are sufficient to protect the software from any
disturbance.
Client-side application security: In this, we will make sure that any intruders cannot operate
on any browser or any tool which is used by customers.
Risk Assessment: To moderate the risk of an application, we perform a risk assessment. In
this, we explore the security risks that can be detected in the organization. Each risk can
be classified into one of three levels: high, medium, and low. The primary purpose
of the risk assessment process is to assess the vulnerabilities and control significant threats.
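The high/medium/low classification above is often derived from a likelihood-times-impact score. The 1-5 scales, the cut-off values, and the sample vulnerabilities below are assumptions chosen for illustration; organizations define their own scoring conventions.

```python
def risk_level(likelihood, impact):
    """Classify a risk as high/medium/low from 1-5 likelihood and impact scores
    (the 15 and 6 cut-offs are an assumed convention)."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical findings: name -> (likelihood, impact)
vulnerabilities = {
    "SQL injection on login form": (4, 5),
    "verbose error messages": (3, 2),
    "outdated TLS on test server": (1, 2),
}
for name, (likelihood, impact) in vulnerabilities.items():
    print(f"{name}: {risk_level(likelihood, impact)}")
```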
1. Iteration 0
It is the first stage of the testing process and the initial setup is performed in this stage. The
testing environment is set in this iteration.
• This stage involves executing the preliminary setup tasks such as finding people for
testing, preparing the usability testing lab, preparing resources, etc.
• The business case for the project, boundary situations, and project scope are verified.
• Important requirements and use cases are summarized.
• An initial project plan and cost estimate are prepared.
• Risks are identified.
2. Construction Iteration
It is the major phase of the testing and most of the work is performed in this phase. It is a set
of iterations to build an increment of the solution. This process is divided into two types:
a. Confirmatory testing: This type of testing concentrates on verifying that the system
meets the stakeholders’ requirements as described to the team to date, and it is performed by
the team. It is further divided into 2 types of testing:
Agile acceptance testing: It is the combination of acceptance testing and functional
testing. It can be executed by the development team and the stakeholders.
Developer testing: It is the combination of unit testing and integration testing and
verifies both the application code and database schema.
b. Investigative testing: Investigative testing detects the problems that are skipped or
ignored during confirmatory testing. In this type of testing, the tester determines the
potential problems in the form of defect stories. It focuses on issues like integration
testing, load testing, security testing, and stress testing.
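Developer testing as described above can be sketched with plain assertions. The `apply_discount` function and its requirements are hypothetical; the point is the confirmatory style: each test verifies the code against a stated requirement, including boundary and invalid inputs.

```python
# Unit under test: a hypothetical discount function from the team's code base.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    """Confirmatory tests: verify behaviour against the stated requirements."""
    assert apply_discount(100.0, 10) == 90.0    # normal case
    assert apply_discount(100.0, 0) == 100.0    # boundary: no discount
    assert apply_discount(100.0, 100) == 0.0    # boundary: full discount
    try:
        apply_discount(100.0, 150)              # invalid input must be rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
print("all confirmatory tests passed")
```

Investigative testing would then go beyond these requirement-driven cases, probing for problems the confirmatory suite never anticipated.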
4. Production
It is the last phase of agile testing. The product is finalized in this stage after the removal of all
defects and issues raised.
1. Quadrant 1 (Automated)
The first agile quadrant focuses on the internal quality of the code, which contains the test
cases and test components that are executed by the test engineers. All test cases are
technology-driven and used for automation testing. In the first agile quadrant, the following
testing can be executed:
• Unit testing / Component testing.
3. Quadrant 3 (Manual)
The third agile quadrant provides feedback to the first and the second quadrants. This quadrant
involves executing many iterations of testing, and these reviews and responses are then used to
strengthen the code. Since this is the manual quadrant, its test cases are typically executed
manually. The testing that can be carried out in this quadrant is:
• Usability testing.
• Collaborative testing.
• User acceptance testing.
• Pair testing with customers.
4. Quadrant 4 (Tools)
The fourth agile quadrant focuses on the non-functional requirements of the product like
performance, security, stability, etc. Various types of testing are performed in this quadrant to
deliver non-functional qualities and the expected value. The testing activities that can be
performed in this quadrant are:
• Non-functional testing such as stress testing, load testing, performance testing, etc.
• Security testing.
• Scalability testing.
• Infrastructure testing.
• Data migration testing.
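A quadrant-4 performance check can be sketched as a latency budget on a repeated operation. The `checkout` function, the 100-sample run, and the 50 ms 95th-percentile budget below are all assumptions for illustration; real performance testing would use a dedicated load-testing tool against the deployed system.

```python
import time

def checkout(cart):
    """Stand-in for the operation whose performance we measure (hypothetical)."""
    time.sleep(0.001)  # simulate a small amount of work
    return sum(cart.values())

# Measure 100 calls and check the 95th-percentile latency against a budget.
latencies = []
for _ in range(100):
    start = time.perf_counter()
    checkout({"book": 12, "pen": 2})
    latencies.append(time.perf_counter() - start)

latencies.sort()
p95 = latencies[94]  # 95th of 100 sorted samples
assert p95 < 0.05, f"p95 latency {p95:.4f}s exceeds the 50 ms budget"
print(f"p95 latency: {p95 * 1000:.1f} ms")
```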
Mobile Applications
A mobile application is a program that was built to be used on mobile devices (smartphones,
tablets and various wearables). Mobile apps are not as straightforward as desktop web apps
and fall into three varieties: mobile web, native and hybrid apps.
Mobile web applications
A mobile web application is a program that can be accessed via a mobile browser, meaning
that you don’t have to download them to your device to start using them. Like web apps, mobile
web applications are usually built using JavaScript, CSS and HTML5; however, there is no
standard software development kit (SDK). Compared with other mobile applications, web apps for mobile use are easier
to build and test, but they’re usually much more primitive in terms of functionality.
Native applications
Native mobile applications run on the device itself, so you have to download them before
using them. Since they are platform-specific, native mobile apps are built using specific
languages and integrated development environments (IDEs). For example, Android native
applications are developed using Java and Android Studio or Eclipse IDE. At the same time,
to build an app for an Apple device, you’ll need to use Objective-C or Swift and the Xcode
IDE. Native apps are secure, integrate with the hardware perfectly and have the best UI/UX
experience.
Hybrid applications
Hybrid apps combine the characteristics of native and mobile web apps. Built with the help of
the “standard web” stack (JavaScript, CSS and HTML5), they are then wrapped in a native
environment, so you can use the same code for different platforms. Unlike mobile web apps,
which run in the browser, hybrid applications are downloadable and have access to your camera, GPS,
contact list, etc. Though such applications are easier to build and maintain, they are slower and
offer less advanced functionality than their native counterparts.
1. Functional Testing
Functional testing involves checking of the specified functionality of a web application.
Functional test cases for web applications may be generated using boundary value analysis,
equivalence class testing, decision table testing and many other techniques.
Example: Let us consider an eCommerce application that sells products such as computers,
mobile phones, cameras, etc. The home page of this web application is given in Fig below:
Table: Sample functional test cases of the order process of an online shopping web application
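Boundary value analysis, mentioned above, picks test inputs at and just beyond the edges of each valid range. Assuming a hypothetical rule that an order may contain 1 to 10 items, a minimal sketch looks like this:

```python
def validate_quantity(qty):
    """Hypothetical order-quantity rule: 1 to 10 whole items per order."""
    return isinstance(qty, int) and 1 <= qty <= 10

# Boundary value analysis: test at, just below, and just above each boundary.
cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}
for qty, expected in cases.items():
    assert validate_quantity(qty) == expected, f"unexpected result for {qty}"
print("all boundary value cases passed")
```

Equivalence class testing would add one representative value per class (e.g., a mid-range quantity and a clearly invalid one) rather than concentrating on the edges.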
2. User-interface Testing
User interface testing tests that the user interaction features work correctly. These
features include hyperlinks, tables, forms, frames and user interface items such as text fields,
radio buttons, check boxes, list boxes, combo boxes, command buttons and dialog boxes.
3. Usability Testing
Usability testing refers to the procedure employed to evaluate the degree to which the software
satisfies the specified usability criteria.
Table: Sample test cases based on a user operation in an online shopping website.