
UNIT IV

Software Testing Activities
Levels of Testing

• The levels of software testing refer to the different methodologies that can be used while performing software testing.
• In software testing, we have four different levels, which are discussed below:
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
Level 1: Unit Testing

• The first level of testing involves analysing each unit or individual component of the software application.
• Unit testing is also the first level of functional testing. The primary purpose of executing unit testing is to validate that each unit component performs as expected.
• A unit component is an individual function or procedure of the application; in other words, it is the smallest testable part of the software. The reason for performing unit testing is to check the correctness of isolated code.
• Unit testing helps test engineers and developers understand the code base, which enables them to fix defect-causing code quickly. The developers implement the unit tests.
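A minimal sketch of a unit test in Python, using the standard unittest module. The calculate_discount function is hypothetical and exists only to illustrate testing a single, isolated unit of code.

```python
# A minimal unit test sketch using Python's built-in unittest module.
# calculate_discount is a hypothetical unit under test.
import unittest


def calculate_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percentage")
    return price * (1 - percent / 100)


class TestCalculateDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(calculate_discount(200, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(calculate_discount(50, 0), 50)

    def test_invalid_percentage_raises_error(self):
        with self.assertRaises(ValueError):
            calculate_discount(50, 150)


if __name__ == "__main__":
    unittest.main()
```

Running the file with Python executes the three test cases against the unit in isolation.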
Level 2: Integration Testing

• The second level of software testing is integration testing. The integration testing process comes after unit testing.
• It is mainly used to test the data flow from one module or component to other modules.
• In integration testing, the test engineer tests the units or separate
components or modules of the software in a group.
• The primary purpose of executing the integration testing is to identify the
defects at the interaction between integrated components or units.
• Even when each component or module works correctly on its own, we still need to check the data flow between the dependent modules, and this process is known as integration testing.
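A minimal sketch of an integration test, again using Python's unittest. Both modules here are toy, in-memory components invented for illustration, so the focus is on verifying the data flow across their interface rather than either unit alone.

```python
# A minimal integration test sketch: two hypothetical modules are exercised
# together to verify the data flow between them, not in isolation.
import unittest


class OrderRepository:
    """A toy in-memory data-access component (module A)."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, amount):
        self._orders[order_id] = amount

    def get(self, order_id):
        return self._orders.get(order_id)


class BillingService:
    """A toy business-logic component (module B) that depends on module A."""
    def __init__(self, repository):
        self.repository = repository

    def bill(self, order_id, amount, tax_rate=0.1):
        total = amount * (1 + tax_rate)
        self.repository.save(order_id, total)
        return total


class TestBillingIntegration(unittest.TestCase):
    def test_billed_amount_flows_into_repository(self):
        repo = OrderRepository()
        service = BillingService(repo)
        service.bill("A-100", 100.0)
        # The integration point: data written by BillingService must be
        # readable back through OrderRepository.
        self.assertAlmostEqual(repo.get("A-100"), 110.0)


if __name__ == "__main__":
    unittest.main()
```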
Level 3: System Testing

• The third level of software testing is system testing, which is used to test the software's functional and non-functional requirements.
• It is end-to-end testing where the testing environment mirrors the production environment. At this level, we test the application as a whole system.
• Checking the end-to-end flow of an application, as a user would, is known as system testing.
• In system testing, we go through all the necessary modules of an application, verify that the end-to-end business features work correctly, and test the product as a complete system.
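A hedged sketch of what an automated end-to-end (system-level) check might look like, assuming the application is deployed to a production-like staging environment and the third-party requests package is available; the URL and endpoints are hypothetical.

```python
# A minimal end-to-end system test sketch against a production-like
# environment. Assumes `requests` is installed; URL/endpoints are invented.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging environment


def test_user_can_register_and_log_in():
    # Exercise the flow end to end, through the real HTTP interface.
    r = requests.post(f"{BASE_URL}/api/register",
                      json={"user": "alice", "password": "s3cret"}, timeout=10)
    assert r.status_code == 201

    r = requests.post(f"{BASE_URL}/api/login",
                      json={"user": "alice", "password": "s3cret"}, timeout=10)
    assert r.status_code == 200
    assert "token" in r.json()
```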
Level 4: Acceptance Testing

• The last and fourth level of software testing is acceptance testing, which is used to evaluate whether the software meets the specified requirements before delivery.
• By this point the software has passed through three testing levels (unit testing, integration testing, system testing). Even so, some minor errors can still be identified when the end user works with the system in a real scenario.
• In simple words, we can say that acceptance testing is the culmination of all the testing processes that were previously done.
• Acceptance testing is also known as user acceptance testing (UAT) and is done by the customer before accepting the final product.
Debugging
• In the development of any software, the program is rigorously tested, troubleshot, and maintained in order to deliver a bug-free product. Nothing is error-free on the first attempt.
• So it is natural that, when software is first written, it contains many errors; nobody is perfect, and making errors in code is not the problem, but failing to find and remove them is.
• All those errors and bugs must be removed regularly, so we can say that debugging is simply the process of finding and fixing the errors contained in a software program.
• Debugging works stepwise: first identifying the errors, then analysing them, and finally removing them. Whenever software fails to deliver the expected result, the tester must examine the application and help resolve the problem.
• Since errors are resolved at each step of debugging, it is a tiresome and complex task, regardless of how efficient the result is.
Why do we need Debugging?

• Debugging starts as soon as we begin writing the code for the software program and continues progressively through the subsequent stages, because the code gets merged with several other programming units to form the software product.
• Following are the benefits of Debugging:
• Debugging can immediately report an error condition whenever it occurs. It prevents
hampering the result by detecting the bugs in the earlier stage, making software
development stress-free and smooth.
• It offers relevant information related to the data structures that further helps in easier
interpretation.
• Debugging assists the developer in reducing impractical and disrupting information.
• With debugging, the developer can easily avoid complex one-use testing code to save
time and energy in software development.
Steps involved in Debugging
• Identify the Error: Misidentifying an error results in wasted time. Production errors reported by users are often hard to interpret, and sometimes the information we receive is misleading. Thus, it is essential to identify the actual error.
• Find the Error Location: Once the error is correctly identified, you need to review the code thoroughly, often repeatedly, to locate the exact position of the error. In general, this step focuses on locating the error rather than understanding why it occurs.
• Analyze the Error: The third step is error analysis, a bottom-up approach that starts from the location of the error and works outward through the code. This step makes the errors easier to comprehend. Error analysis has two main goals: re-evaluating the error to find any related bugs, and anticipating the collateral damage a fix might cause.
• Prove the Analysis: After analyzing the primary bugs, it is necessary
to look for some extra errors that may show up on the application. By
incorporating the test framework, the fourth step is used to write
automated tests for such areas.
• Cover Lateral Damage: The fifth phase is about accumulating all of the unit tests for the code that requires modification. When you run these unit tests, they must all pass.
• Fix & Validate: The last stage is the fix and validation that emphasizes
fixing the bugs followed by running all the test scripts to check
whether they pass.
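As a rough illustration of the "Prove the Analysis" and "Fix & Validate" steps, the sketch below shows a regression test written around a hypothetical bug fix so that the fix can be validated automatically and the defect cannot silently reappear.

```python
# A sketch of writing an automated regression test once a bug is located
# and fixed. The parse_age function and its bug are hypothetical.
import unittest


def parse_age(text):
    """Fixed version: previously crashed on surrounding whitespace."""
    return int(text.strip())


class TestParseAgeRegression(unittest.TestCase):
    def test_age_with_surrounding_whitespace(self):
        # This input reproduces the originally reported production error.
        self.assertEqual(parse_age(" 42 "), 42)

    def test_plain_age(self):
        self.assertEqual(parse_age("42"), 42)


if __name__ == "__main__":
    unittest.main()
```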
Debugging Strategies

• For a better understanding of a system, it is necessary to study the system in depth. This makes it easier for the debugger to build an accurate mental model of the system being debugged.
• Backward analysis traces the program backwards from the location where the failure message occurred in order to determine the defective region. It is necessary to know the defective area in order to understand the cause of the defect.
• Forward analysis tracks the problem in the forward direction by using breakpoints or print statements placed at different points in the program. It focuses on those regions where wrong outputs are produced.
• To diagnose and fix similar kinds of problems, it is recommended to draw on past experience. The success rate of this approach is directly proportional to the proficiency of the debugger.
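A small illustration of forward analysis, assuming print statements are used as the instrumentation: checkpoints are placed at successive points so the first wrong intermediate value narrows down the defect region. The average function and its bug are invented for the example.

```python
# A sketch of forward analysis: print statements are placed at successive
# points so data can be inspected in the direction of execution until the
# first wrong intermediate value appears.
def average(values):
    total = 0
    for v in values:
        total += v
        print(f"after adding {v}: total={total}")      # checkpoint 1: correct
    count = len(values) + 1                            # defect lives here
    print(f"count={count}")                            # checkpoint 2: wrong
    return total / count


if __name__ == "__main__":
    # Checkpoint 1 shows correct running totals, checkpoint 2 shows count=4
    # for a 3-element list, so the defect region is narrowed to that line.
    print("result:", average([2, 4, 6]))
```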
Debugging Tools

Here is a list of some of the widely used debuggers:


• Radare2
• WinDbg
• Valgrind
Types of Software Testing Techniques
• Static Testing Techniques are testing techniques that are used to find
defects in an application under test without executing the code. Static
Testing is done to avoid errors at an early stage of the development
cycle thus reducing the cost of fixing them.
• Dynamic Testing Techniques are testing techniques that are used to test the dynamic behaviour of the application under test, that is, by executing the code base. The main purpose of dynamic testing is to test the application with dynamic inputs, some of which may be allowed as per the requirements (positive testing) and some of which are not (negative testing).
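A brief sketch of dynamic testing with positive and negative inputs, written in pytest style (assuming pytest is installed); the validate_username function and its rules are hypothetical.

```python
# A sketch of dynamic testing with positive and negative inputs (pytest style).
import pytest


def validate_username(name):
    """Accept 3-12 character alphanumeric usernames (illustrative rule)."""
    return name.isalnum() and 3 <= len(name) <= 12


@pytest.mark.parametrize("name", ["bob", "alice99", "x" * 12])
def test_valid_usernames_are_accepted(name):          # positive testing
    assert validate_username(name)


@pytest.mark.parametrize("name", ["", "ab", "x" * 13, "bad name!"])
def test_invalid_usernames_are_rejected(name):        # negative testing
    assert not validate_username(name)
```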
Static Testing Techniques

• Static Testing Techniques are divided into two major categories:


1. Reviews: They can range from purely informal peer reviews between two developers/testers on the artifacts
(code/test cases/test data) to formal Inspections which are led by moderators who can be internal/external to the
organization.
• Peer Reviews: Informal reviews are generally conducted without any formal setup. It is between peers. For
Example- Two developers/Testers review each other’s artifacts like code/test cases.
• Walkthroughs: Walkthrough is a category where the author of work (code or test case or document under review)
walks through what he/she has done and the logic behind it to the stakeholders to achieve a common
understanding or for the intent of feedback.
• Technical review: It is a review meeting that focuses solely on the technical aspects of the document under review
to achieve a consensus. It has less or no focus on the identification of defects based on reference documentation.
Technical experts like architects/chief designers are required to do the review. It can vary from Informal to fully
formal.
• Inspection: Inspection is the most formal category of review. The document under review is thoroughly prepared before the inspection meeting. Defects identified in the inspection meeting are logged in a defect management tool and followed up until closure. Discussion of defects is deferred to a separate discussion phase, which makes inspections a very effective form of review.
Static Testing Techniques
2. Static Analysis: Static analysis is an examination of requirements, code, or design to identify defects that may or may not cause failures. For example, reviewing the code against coding standards: not following a standard is a defect that may or may not cause a failure. Tools for static analysis are mainly used by developers before or during component or integration testing. Even a compiler is a static analysis tool, as it points out incorrect usage of syntax without executing the code. There are several aspects to code structure, namely data flow, control flow, and data structure.
• Data Flow: It means how the data trail is followed in a given program – How data gets accessed and
modified as per the instructions in the program. By Data flow analysis, you can identify defects like a
variable definition that never got used.
• Control flow: It is the structure of how program instructions get executed i.e. conditions, iterations,
or loops. Control flow analysis helps to identify defects such as Dead code i.e. a code that never gets
used under any condition.
• Data Structure: It refers to the organization of data irrespective of the code. The complexity of data structures adds to the complexity of the code. Thus, it provides information on how to test the control flow and data flow in a given code.
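The snippet below is an invented example of defects that static analysis can flag without executing the code: a data-flow defect (a variable that is assigned but never used) and a control-flow defect (dead code). A linter such as pylint would typically report both.

```python
# A sketch of defects that static analysis can flag without running the code.
def ship_order(order_total):
    discount = 0.05              # data-flow defect: assigned but never used
    if order_total > 0:
        return order_total * 1.2
    else:
        return 0.0
    print("unreachable")         # control-flow defect: dead code after return
```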
Dynamic Testing Techniques
Dynamic techniques are subdivided into three categories:
1. Structure-based Testing:
• These are also called white-box testing techniques. Structure-based testing techniques focus on how the code structure works and test accordingly. To understand structure-based techniques, we first need to understand the concept of code coverage.
• Code coverage is normally measured during component and integration testing. It establishes how much of the total code written is covered by the structural testing techniques. One drawback of code coverage is that it says nothing about code that has not been written at all (missed requirements). There are tools in the market that can help measure code coverage.
There are multiple ways to measure code coverage:
1. Statement coverage: Number of statements exercised / Total number of statements. For example, if a code segment has 10 lines and the test you designed covers only 5 of them, then the statement coverage given by the test is 50%.
2. Decision coverage: Number of decision outcomes exercised / Total number of decision outcomes. For example, if a code segment has 4 decisions (if conditions) and your tests exercise only the outcomes of 1 of them, then decision coverage is 25%.
3. Condition/Multiple condition coverage: Its aim is to ensure that each outcome of every logical condition in the program has been exercised.
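A worked sketch of statement versus decision coverage on a tiny, hypothetical function; the percentages are computed by hand, though a tool such as coverage.py could measure them.

```python
# Statement vs. decision coverage on a tiny illustrative function.
def grade(score):
    result = "fail"
    if score >= 50:          # decision 1
        result = "pass"
    if score >= 80:          # decision 2
        result = "distinction"
    return result


def test_high_score_only():
    # This single test executes every statement (100% statement coverage),
    # but only the True outcome of each decision is exercised, so decision
    # coverage is 2 of 4 outcomes = 50%.
    assert grade(90) == "distinction"


def test_low_score_adds_false_outcomes():
    # Adding this test exercises the False outcome of both decisions,
    # raising decision coverage to 100%.
    assert grade(30) == "fail"
```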
Dynamic Testing Techniques
2. Experience-Based Techniques:
These are techniques for executing testing activities with the help of experience gained over the
years. Domain skill and background are major contributors to this type of testing. These techniques
are used majorly for UAT/Business user testing. These work on top of structured techniques like
Specification-based and Structure-based, and they complement them. Here are the types of
experience-based techniques:
1. Error guessing: It is used by a tester who has either very good experience in testing or with the
application under test and hence they may know where a system might have a weakness. It cannot be
an effective technique when used stand-alone but is helpful when used along with structured
techniques.
2. Exploratory testing: It is hands-on testing where the aim is to have maximum execution coverage
with minimal planning. The test design and execution are carried out in parallel without documenting
the test design steps. The key aspect of this type of testing is the tester’s learning about the strengths
and weaknesses of an application under test. Similar to error guessing, it is used along with other
formal techniques to be useful.
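A small sketch of error guessing in practice: an experienced tester's checklist of typically troublesome inputs (empty values, whitespace, zero, negatives, non-numeric text) applied to a hypothetical parse_quantity function, written in pytest style.

```python
# A sketch of error guessing: a checklist of inputs that frequently expose
# weaknesses, applied to a hypothetical input parser.
import pytest


def parse_quantity(text):
    """Parse a positive integer quantity from user input."""
    value = int(text)
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value


@pytest.mark.parametrize("suspicious_input", [
    "",            # empty string
    "   ",         # whitespace only
    "0",           # zero / boundary
    "-5",          # negative number
    "3.7",         # non-integer numeric
    "ten",         # non-numeric text
])
def test_suspicious_inputs_are_rejected(suspicious_input):
    with pytest.raises(ValueError):
        parse_quantity(suspicious_input)
```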
Challenges in Debugging:

There are a number of problems encountered while performing debugging. These are the following:

• Debugging is done by the individual who developed the software, and it is difficult for that person to acknowledge that an error was made.
• Debugging is typically performed under a tremendous amount of pressure to fix the reported error as quickly as possible.
• It can be difficult to accurately reproduce the input conditions.
• Compared to other software development activities, relatively little research, literature, and formal training exist on the process of debugging.
Debugging Approaches:
The following are a number of approaches popularly adopted by programmers for debugging.

• Brute Force Method: This is the most common technique of debugging, but it is the least efficient. In this approach, the program is loaded with print statements to print intermediate values, with the hope that some of the printed values will help to identify the statement in error. This approach becomes more systematic with the use of a symbolic debugger (also known as a source code debugger), because the values of different variables can be easily checked and breakpoints and watch-points can be set to examine variable values effortlessly (a short sketch using such a debugger follows this list).
• Backtracking: This is also a fairly common approach. In this approach, starting from the statement at which an error symptom has been observed, the source code is traced backwards until the error is discovered. Unfortunately, as the number of source lines to be traced back increases, the number of potential backward paths increases and may become unmanageably large, thus limiting the use of this approach.
• Cause Elimination Method: In this approach, a list of causes that could possibly have contributed to the error symptom is developed, and tests are conducted to eliminate each one. A related technique for identifying the error from the error symptom is software fault tree analysis.
• Program Slicing: This technique is similar to backtracking. Here the search space is reduced by defining slices. A slice of a program for a particular variable at a particular statement is the set of source lines preceding this statement that can influence the value of that variable.
Debugging Guidelines:
Debugging is commonly carried out by programmers based on their ingenuity. The following are some general guidelines for effective debugging:

• Debugging often requires a thorough understanding of the program design. Attempting to fix errors based on only a partial understanding of the system design and implementation may require an excessive amount of effort, even for simple problems.

• Debugging may sometimes even require a full redesign of the system. In such cases, a common mistake that novice programmers make is attempting to fix the symptoms rather than the error itself.

• One must be aware of the possibility that an error correction may introduce new errors. Therefore, after each round of error-fixing, regression testing should be carried out.
Test Data
• Test Data :Data created or selected to satisfy the execution
preconditions and inputs to execute one or more test cases.

• Three types of test data are:


• Normal data - typical, sensible data that the program should accept and be able to process.
• Boundary data - valid data that falls at the boundary of any possible ranges, sometimes known as extreme data.
• Erroneous data - data that the program cannot process and should not accept.
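A tiny illustration of the three kinds of test data, assuming a hypothetical validator that accepts exam marks in the range 0-100.

```python
# Normal, boundary, and erroneous test data for a hypothetical mark validator.
def is_valid_mark(mark):
    return isinstance(mark, int) and 0 <= mark <= 100

normal_data    = [25, 50, 73]         # typical values the program must accept
boundary_data  = [0, 1, 99, 100]      # extreme values at the edges of the range
erroneous_data = [-1, 101, "fifty"]   # values the program must reject

assert all(is_valid_mark(m) for m in normal_data)
assert all(is_valid_mark(m) for m in boundary_data)
assert not any(is_valid_mark(m) for m in erroneous_data)
```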
Test Data Generation

• Test data generation is the process of manually or automatically creating realistic but synthetic test data for testing software under development.
• DevOps and testing teams use test data generation to simulate lifelike scenarios to make sure that the software application performs as expected under different conditions.
• Test data generation also involves collecting and managing large amounts of data from various resources in order to execute the test cases and ensure the functional soundness of the system under test.
Approaches to Test Data Generation
1) Manual test data generation:
In this technique, all the datasets are created manually by the tester for the required test cases, based on experience and anticipation.
Pros:
• Easy to implement; no additional tools need to be deployed.
• Increases the confidence of the tester.
Cons:
• The accuracy of datasets generated by this scheme is often doubtful.
• Time-consuming process.
2) Automated test data generation:
The major feature that makes this approach more efficient than the manual technique is speed: automated data generation produces data in an expedited manner by analysing large volumes of data in a short time interval. This scheme uses automated tools, many of which are available in the market.
Pros:
• The datasets generated by this scheme are highly accurate.
• Data generation speed is very fast.
Cons:
• One demerit of this method is that it is costlier to implement.
• Another is that these tools take time to understand the system.
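As a sketch of automated test data generation, the example below uses the third-party Faker package (installable with pip install faker); the customer fields are illustrative only.

```python
# A sketch of automated test data generation with the Faker package.
from faker import Faker

fake = Faker()


def generate_customers(count=5):
    """Produce `count` synthetic but realistic-looking customer records."""
    return [
        {
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address(),
            "signup_date": fake.date_this_year().isoformat(),
        }
        for _ in range(count)
    ]


if __name__ == "__main__":
    for row in generate_customers():
        print(row)
```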
3) Back-end data injection approach:

This method is carried out using SQL queries. The tester writes the relevant queries and injects them into the database in order to populate the datasets required by the test cases. It is also a relatively easy method that can generate a large amount of data in just a few minutes. If new datasets are found through other resources, such as sample XML documents, the database can be updated with them for future use.
Pros:
• It is a less time-consuming technique.
• Less expertise is required compared to the automated technique, as you only need to write a correct query to populate the required data.
Cons:
• An invalid or incorrect query may populate an illogical dataset or even cause your database system to fail, so take care when injecting any query into the database.
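A minimal sketch of back-end data injection. SQLite from the Python standard library stands in for whatever database the system under test actually uses, and the customers table is hypothetical.

```python
# A sketch of back-end data injection: test rows inserted directly via SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, tier TEXT)")

# Parameterised INSERTs populate the datasets needed by the test cases.
rows = [(1, "Asha", "gold"), (2, "Ravi", "silver"), (3, "Meera", "trial")]
conn.executemany("INSERT INTO customers (id, name, tier) VALUES (?, ?, ?)", rows)
conn.commit()

# Quick sanity check that the injected data is visible to the application.
print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # -> 3
```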
4) Third-party tools:
A number of third-party tools are available in the market. These tools first understand the scenarios of your system under testing and then generate datasets as per the requirements. They are customizable to the needs of the business and provide wide coverage and accuracy in generating datasets.
Pros:
• These tools are accurate because they first understand the entire system and then generate the datasets accordingly.
Cons:
• Costlier technique to implement, because the price of such a tool is high compared to the other techniques.
• Less coverage in heterogeneous testing environments, because these tools are not generic in nature.
Test Data Generation Challenges

• Today's data teams understand the importance of test data management, especially when it comes to provisioning test environments with fresh, high-quality test data on demand. But for real-life production data to become test data, it must be:
• Complete, fresh, and trustworthy
• Masked, effectively hiding personal information
• Populated, to meet the requirements of the development project
• Synthesized, when additional test data is required
• Compliant, to address data privacy legislation
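As a rough illustration of the masking requirement above, the toy sketch below replaces the personal part of an email address with a non-reversible token; real projects would normally rely on a dedicated test data management or masking tool rather than ad-hoc code.

```python
# A toy sketch of masking personal information before production data is
# reused as test data.
import hashlib


def mask_email(email):
    """Replace the local part with a stable, non-reversible token."""
    local, _, domain = email.partition("@")
    token = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"user_{token}@{domain}"


print(mask_email("alice.smith@example.com"))  # e.g. user_<8 hex chars>@example.com
```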
Test Data Generation Solutions
Today’s testing teams are tasked with delivering high-quality results, on time, in
compliance with privacy regulations, at minimal cost. These demands often lead them
to seek a test data generation solution based on production or synthetic data.
• Production test data
In this case, the enterprise uses data already in its production databases, processing
it to ensure that it is properly masked and subsetted, to comply with legal and
organizational requirements. Test data management tools are recommended for
both test data management and data masking purposes.
• Synthetic test data
As the name suggests, this type of test data is artificially generated, but closely
mimics the attributes of the company’s real data. Synthetic data, which is typically
used when production data is not accessible, is generated via any number of
synthetic data generation methods, including generative AI, business rules, and data
cloning.
Test Data Generation Tools
• Testsigma
• Mostly AI
• Datprof
• EMS Data Generator
• RedGate SQL Data Generator
• DTM Data Generator
• Mockaroo
• GenerateData
• Upscene – Advanced Data Generator
Software Testing Tools
• TestComplete
• LambdaTest
• TestRail
• Xray
• Zephyr Scale
• Selenium
• Ranorex
• TestProject
