Performance Testing
Performance testing aims to identify bottlenecks, measure system performance under various loads and conditions, and ensure that
the system can handle the expected number of users or transactions.
TYPES OF PERFORMANCE TESTING
1. Load testing
Load testing simulates an expected real-world load on the system to see how it performs. It checks the product's ability to perform under
anticipated user loads, helps identify bottlenecks, and determines the maximum number of users or transactions the system can handle.
The objective is to find performance bottlenecks before the software product is launched in the market.
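A minimal load-test sketch in Python, assuming a stand-in function in place of a real application (in practice this would be an HTTP request to the system under test):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical system under test: a function standing in for a
# request to the real application (an HTTP call in practice).
def handle_request(payload):
    time.sleep(0.001)  # simulated processing time
    return "ok"

def run_load_test(num_users, requests_per_user):
    """Fire concurrent user sessions and record each response time."""
    timings = []
    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handle_request("data")
            timings.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        for _ in range(num_users):
            pool.submit(user_session)
    return timings

timings = run_load_test(num_users=10, requests_per_user=5)
print(f"requests: {len(timings)}, avg response: {sum(timings)/len(timings):.4f}s")
```

Real tools such as JMeter or Locust follow the same pattern: simulate many concurrent users and record per-request response times for analysis.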
2. Stress testing
Stress testing is a type of load testing that pushes the system above normal usage levels. It involves testing a product under extreme
workloads to see whether it can handle high traffic, and it reveals issues that only appear under heavy load. The objective is to identify
the breaking point of the software product.
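The ramp-up idea behind stress testing can be sketched as follows; the capacity limit and failure mode here are invented for illustration, not taken from any real system:

```python
# Hypothetical system under test: it rejects requests once the load
# exceeds a fixed capacity, standing in for a real service's limits.
CAPACITY = 100

def handle_request(current_load):
    if current_load > CAPACITY:
        raise RuntimeError("server overloaded")
    return "ok"

def find_breaking_point(start=10, step=10, max_load=500):
    """Increase load step by step until the system starts failing."""
    load = start
    while load <= max_load:
        try:
            handle_request(load)
        except RuntimeError:
            return load  # first load level at which the system breaks
        load += step
    return None  # no breaking point found within the tested range

breaking_point = find_breaking_point()
print("breaking point:", breaking_point)
```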
3. Spike testing
Spike testing is a type of load testing that tests the system's ability to handle sudden, large spikes in the load generated by users. It
helps identify any issues that may occur when the system is suddenly hit with a high number of requests.
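A spike test is distinguished from an ordinary load test mainly by its load profile. A sketch of such a profile, with illustrative numbers:

```python
# Hypothetical load profile for a spike test: quiet baseline traffic
# interrupted by a sudden burst, then back to baseline.
def spike_profile(baseline, spike, duration, spike_start, spike_len):
    """Return requests-per-second for each second of the test."""
    profile = []
    for second in range(duration):
        if spike_start <= second < spike_start + spike_len:
            profile.append(spike)
        else:
            profile.append(baseline)
    return profile

profile = spike_profile(baseline=50, spike=1000, duration=10,
                        spike_start=4, spike_len=2)
print(profile)
```

A spike-test run drives the load generator with a profile like this and then checks that the system recovers to baseline behavior after the burst.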
4. Soak testing
Soak testing is a type of load testing that tests the system’s ability to handle a sustained load over a prolonged period. It helps identify
any issues that may occur after prolonged usage of the system.
5. Endurance testing
Endurance testing is similar to soak testing, but it focuses on the long-term behavior of the system under a constant load. It is
performed to ensure the software can handle the expected load over a long period.
6. Volume testing
In volume testing, a large volume of data is saved in a database and the overall software system's behavior is observed. The
objective is to check the product's performance under varying database volumes.
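A minimal volume-test sketch using Python's built-in sqlite3 module: load a large batch of rows into a database and time a query against the grown data set.

```python
import sqlite3
import time

# In-memory database standing in for the production data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

def load_rows(n):
    """Bulk-insert n rows to grow the data volume."""
    conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                     [(i * 0.5,) for i in range(n)])
    conn.commit()

def timed_count():
    """Run a query against the loaded data and time it."""
    start = time.perf_counter()
    (count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
    return count, time.perf_counter() - start

load_rows(100_000)
count, elapsed = timed_count()
print(f"rows: {count}, count query took {elapsed:.4f}s")
```

Repeating the measurement at several data volumes shows how query performance degrades as the database grows.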
7. Scalability testing
In scalability testing, the software application's effectiveness in scaling up to support an increase in user load is determined. It helps
in planning capacity additions to your software system.
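The core measurement of a scalability test can be sketched as throughput at increasing worker counts; the simulated I/O-bound task below is a stand-in for real requests:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def do_work(_):
    time.sleep(0.005)  # simulated I/O-bound request

def throughput(num_workers, total_tasks=40):
    """Measure tasks completed per second at a given worker count."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        list(pool.map(do_work, range(total_tasks)))
    return total_tasks / (time.perf_counter() - start)

results = {n: throughput(n) for n in (1, 2, 4, 8)}
for workers, tps in results.items():
    print(f"{workers} workers: {tps:.0f} tasks/s")
```

If throughput stops improving as workers are added, the system has hit a scalability limit, which is exactly the information needed for capacity planning.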
WHY USE PERFORMANCE TESTING?
It eliminates performance bottlenecks and uncovers what needs to be improved before the product is launched in the market.
It makes software fast, stable, and reliable.
It evaluates the performance and scalability of a system or application under various loads and conditions.
It helps identify bottlenecks, measure system performance, and ensure that the system can handle the expected number of users or
transactions.
It helps ensure that the system is reliable, stable, and able to handle the expected load in a production environment.
HOW TO CONDUCT PERFORMANCE TESTING?
Conducting performance testing involves several steps to ensure that a software application can handle expected loads and perform
well under stress. Here’s a simplified guide on how to conduct performance testing:
Step 1: Set Up the Testing Environment. Prepare the environment where the tests will run. Make sure you have all the needed tools
and a clear understanding of the setup, such as which devices and software you will be using for the performance testing.
Step 2: Decide What to Measure. Think about what you want to learn from the tests, such as how fast the system responds and how
much load it can handle. You can also look at successful similar systems to set your goals.
Step 3: Plan Your Tests. Figure out the different scenarios to test, considering things like how users might behave and what data you
will use. This helps you create tests that cover a range of situations and decide what data to collect.
Step 4: Set Up Your Tools. Get everything ready for testing, including tools and ways to track what is happening during the tests.
Step 5: Create and Run Tests. Build the tests based on your plan and run them, keeping track of all the data they produce.
Step 6: Look at the Results. After each test, review what you found. Adjust your tests based on what you learn, and run them again to
see if things change.
Step 7: Keep Testing. Keep analyzing and adjusting your tests to get the best results. Repeat the process until you are satisfied with
the performance.
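The steps above can be sketched as a minimal measure-analyze-repeat loop; the goal value and the stand-in system function are illustrative assumptions, not from any real application:

```python
import statistics
import time

# Stand-in for the real application being tested.
def system_under_test():
    time.sleep(0.001)

GOAL_AVG_RESPONSE = 0.05  # Step 2: decide what to measure (seconds)

def run_scenario(requests):
    # Step 5: create and run the test, recording every measurement.
    timings = []
    for _ in range(requests):
        start = time.perf_counter()
        system_under_test()
        timings.append(time.perf_counter() - start)
    return timings

def analyze(timings):
    # Step 6: look at the results against the goal.
    avg = statistics.mean(timings)
    return avg, avg <= GOAL_AVG_RESPONSE

timings = run_scenario(requests=20)
avg, goal_met = analyze(timings)
print(f"avg response: {avg:.4f}s, goal met: {goal_met}")
```

Step 7 corresponds to wrapping this loop in further iterations: tune the system or the scenario, rerun, and compare results until the goal is met.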
PERFORMANCE TESTING METRICS
Throughput. How many units of data a system processes over a specified time.
Memory. The working storage space available to a processor or workload.
Response time, or latency. The amount of time elapsed between a user-entered request and the start of a system's response
to that request.
Bandwidth. The volume of data per second that can move between workloads, usually across a network.
Central processing unit (CPU) interrupts per second. The number of hardware interrupts a processor receives per second.
Average latency. Also called wait time, a measure of the time it takes to receive the first byte after sending a request.
Average load time. The average time it takes for every request to be delivered.
Peak response time. The longest time frame it takes to fulfill a request.
Error rate. The percentage of requests that result in an error compared to the total number of requests.
Disk time. Time it takes for a disk to execute a read or write request.
Session amounts. The maximum number of active sessions that can be open at one time.
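Several of these metrics can be computed directly from a recorded test run. A sketch using illustrative sample data (not measurements from a real system):

```python
# Each sample records (response_time_seconds, succeeded) for one request.
samples = [
    (0.12, True), (0.30, True), (0.08, True), (0.95, False), (0.20, True),
]
duration_seconds = 2.0  # total wall-clock time of the test run

throughput = len(samples) / duration_seconds             # requests per second
avg_latency = sum(t for t, _ in samples) / len(samples)  # mean response time
peak_response = max(t for t, _ in samples)               # worst case
error_rate = sum(1 for _, ok in samples if not ok) / len(samples)

print(f"throughput: {throughput} req/s")
print(f"average latency: {avg_latency:.3f}s")
print(f"peak response time: {peak_response}s")
print(f"error rate: {error_rate:.0%}")
```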