
Assignment No. - 6
Subject – EST&T
--------------------------------------------------------------------------------------------------------------------------------------------

1) What do you mean by SQA? Explain various activities involved in the SQA process in detail.

Software Quality Assurance (SQA) is a way to assure quality in the software. It is the set of activities which ensure that processes, procedures, and standards are suitable for the project and are implemented correctly.
Software quality assurance is the process of ensuring that the software meets the desired quality measures. SQA builds quality into the software rather than checking the quality only after completion: SQA processes test for quality in each phase of development until the software is complete.

2) Explain different software quality factors & corresponding attributes in detail

1) Product Operation Software Quality Factors :


According to McCall’s model, the product operation category includes five software quality factors, which deal with requirements that directly affect the daily operation of the software.
Correctness :
These requirements deal with the correctness of the output of the software system:
 The output mission.
 The required accuracy of output, which can be negatively affected by inaccurate data or inaccurate calculations.
 The completeness of the output information, which can be affected by incomplete
data.
 The up-to-dateness of the information defined as the time between the event and
the response by the software system.
 The availability of the information.
 The standards for coding and documenting the software system.
Reliability :
Reliability requirements deal with service failure. They determine the maximum
allowed failure rate of the software system, and can refer to the entire system or to one
or more of its separate functions
Efficiency :
It deals with the hardware resources needed to perform the different functions of the
software system. It includes processing capabilities (given in MHz), its storage capacity
(given in MB or GB) and the data communication capability (given in MBPS or GBPS).
Integrity :
This factor deals with software system security, that is, preventing access by unauthorized persons, and distinguishing between the groups of people who are to be given read permission and those who are to be given write permission as well.
Usability :
Usability requirements deal with the staff resources needed to train a new employee and
to operate the software system.

2)Product Revision Quality Factors

According to McCall’s model, three software quality factors are included in the product
revision category
Maintainability :
This factor considers the efforts that will be needed by users and maintenance
personnel to identify the reasons for software failures, to correct the failures, and to
verify the success of the corrections.
Flexibility :
This factor deals with the capabilities and efforts required to support adaptive
maintenance activities of the software. These include adapting the current software to
additional circumstances and customers without changing the software. This factor’s
requirements also support perfective maintenance activities, such as changes and
additions to the software in order to improve its service and to adapt it to changes in the
firm’s technical or commercial environment.
Testability :
Testability requirements deal with the testing of the software system as well as with its
operation. It includes predefined intermediate results, log files, and also the automatic
diagnostics performed by the software system prior to starting the system, to find out
whether all components of the system are in working order and to obtain a report about
the detected faults. Another type of these requirements deals with automatic diagnostic
checks applied by the maintenance technicians to detect the causes of software failures.

3)Product Transition Software Quality Factor


According to McCall’s model, three software quality factors are included in the product
transition category that deals with the adaptation of software to other environments and
its interaction with other software systems. These factors are as follows −
Portability :
Portability requirements deal with the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.
Reusability :
This factor deals with the use of software modules originally designed for one project in a
new software project currently being developed. They may also enable future projects to
make use of a given module or a group of modules of the currently developed software.
The reuse of software is expected to save development resources, shorten the
development period, and provide higher quality modules.
Interoperability :
Interoperability requirements focus on creating interfaces with other software systems or
with other equipment firmware. For example, the firmware of the production machinery
and testing equipment interfaces with the production control software.

3) Write short notes on :

a) Quality Metrics:

Software quality metrics are a subset of software metrics that focus on the quality aspects of the product, process, and project. These are more closely associated with process and product metrics than with project metrics.

Software quality metrics can be further divided into three categories −

1) Product quality metrics


2) In-process quality metrics
3) Maintenance quality metrics

 Product Quality Metrics

These metrics include the following −

 Mean Time to Failure


 Defect Density
 Customer Problems
 Customer Satisfaction
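
As a rough illustration (not part of the original notes), two of these metrics are commonly computed as shown in the sketch below; the sample figures are made-up values chosen only to show the formulas.

# Illustrative calculation of two product quality metrics (sample figures are hypothetical).
total_defects = 120          # defects found in the release
size_kloc = 48.0             # size of the release in thousands of lines of code (KLOC)
total_uptime_hours = 4000    # cumulative operating hours observed in the field
failure_count = 16           # failures observed during that period

defect_density = total_defects / size_kloc        # usually expressed as defects per KLOC
mttf_hours = total_uptime_hours / failure_count   # Mean Time To Failure

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"MTTF          : {mttf_hours:.1f} hours")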

 In-process Quality Metrics

In-process quality metrics deal with the tracking of defect arrival during formal machine testing for some organizations. These metrics include −

 Defect density during machine testing


 Defect arrival pattern during machine testing
 Phase-based defect removal pattern
 Defect removal effectiveness

 Maintenance Quality Metrics

Although not much can be done to alter the quality of the product during this phase, the following metrics track how quickly and how well defects are fixed, so that defects can be eliminated as soon as possible with excellent fix quality.

 Fix backlog and backlog management index


 Fix response time and fix responsiveness
 Percent delinquent fixes
 Fix quality
------------------------------------------------------------------------------

b) Six Sigma :

Six Sigma is a process for producing output of high and improved quality. This is done in two phases – identification and elimination: the causes of defects are identified, and appropriate elimination is done, which reduces variation in the whole process.
Characteristics of Six Sigma:
The Characteristics of Six Sigma are as follows:
1. Statistical Quality Control:
Six Sigma is derived from the Greek letter σ (sigma), which denotes standard deviation in statistics. Standard deviation is used for measuring the quality of output.
2. Methodical Approach:
Six Sigma is a systematic approach applied through DMAIC and DMADV, which can be used to improve the quality of production. DMAIC stands for Define-Measure-Analyze-Improve-Control, while DMADV stands for Define-Measure-Analyze-Design-Verify.
3. Fact and Data-Based Approach:
The statistical and methodical method shows the scientific basis of the
technique.
4. Project and Objective-Based Focus:
The Six Sigma process is implemented to focus on the requirements
and conditions.
5. Customer Focus:
The customer focus is fundamental to the Six Sigma approach. The
quality improvement and control standards are based on specific
customer requirements.
6. Teamwork Approach to Quality Management:
The Six Sigma process requires organizations to get organized for
improving quality.
--------------------------------------------------------------------------
c) CMMI :

Capability Maturity Model Integration (CMMI) is a successor of CMM and is a


more evolved model that incorporates the best components of individual disciplines of CMM like Software CMM, Systems Engineering CMM, People CMM, etc. Since each CMM is a reference model of matured practices in a specific discipline, it becomes difficult to integrate these disciplines as per the requirements. This is why CMMI is used, as it allows the integration of multiple disciplines as and when needed.
Objectives of CMMI :
1. Fulfilling customer needs and expectations.
2. Value creation for investors/stockholders.
3. Increased market growth.
4. Improved quality of products and services.
5. Enhanced reputation in Industry.

The Capability Maturity Model Integration (CMMI) is a model that helps organizations to effectuate process improvement and develop behaviors that decrease risks in service, product, and software development.

--------------------------------------------------------------------------------
d) TMM :

When software is tested, many processes are followed in order to attain maximum quality and minimize defects or errors. The Test Maturity Model (TMM) is one such model, with a set of structured levels. TMM has since been superseded by the Test Maturity Model Integration (TMMi), a 5-level model which provides a framework to measure the maturity of the testing processes.
Benefits of TMM:
1. Organized:
Each of the 5 levels of TMM is well defined and has a particular aim to achieve. This makes TMM a well-organized model with clear goals.
2. Assurance of quality:
When testing is integrated with all the phases of the software life cycle, higher quality is achievable. Testing of the test processes themselves optimizes the results, which in turn gives assurance of a good quality product.
3. Defect prevention:
TMM focuses on defect prevention rather than defect detection by making the testing process a part of all phases of the software life cycle. This ensures that the maximum number of defects are prevented and the final product is mostly defect free.

4. Clear requirements:
When requirements and designs are reviewed, and test plans and test cases are checked against requirements, the main test objectives are clearer and hence testing is more accurate.
Assignment No. - 5
Explain software test metrics.
Software testing metrics are a way to measure and monitor test activities. More importantly, they give insights into the team's test progress, productivity, and the quality of the system under test. Result metrics, for example, are mostly an absolute measure of a completed activity or process.
The goal of software testing metrics is to improve the efficiency and effectiveness of the software testing process and to help make better decisions for the further testing process by providing reliable data about the testing process.
Types of Test Metrics

 Process Metrics: It can be used to improve the process efficiency of the SDLC ( Software
Development Life Cycle)
 Product Metrics: It deals with the quality of the software product
 Project Metrics: It can be used to measure the efficiency of a project team or any
testing tools being used by the team members

Why Test Metrics are Important


 Take decisions for the next phase of activities
 Provide evidence for a claim or prediction
 Understand the type of improvement required
 Take decisions on process or technology changes
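
As an illustrative sketch (the counts below are hypothetical, not from the original notes), a few commonly used result metrics can be derived from raw counts collected during a test cycle:

# Hypothetical raw counts collected during one test cycle.
total_test_cases = 250
executed = 230
passed = 207
defects_reported = 41
defects_rejected = 5   # reported defects that turned out to be invalid

# Commonly used derived metrics.
execution_rate = executed / total_test_cases * 100   # % of planned tests executed
pass_rate = passed / executed * 100                  # % of executed tests that passed
defect_rejection_ratio = defects_rejected / defects_reported * 100

print(f"Test execution rate   : {execution_rate:.1f}%")
print(f"Pass rate             : {pass_rate:.1f}%")
print(f"Defect rejection ratio: {defect_rejection_ratio:.1f}%")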

1) Explain Basic COCOMO model.

The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model:


Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 Months

Where

KLOC is the estimated size of the software product expressed in Kilo Lines of Code,

a1,a2,b1,b2 are constants for each group of software products,

Tdev is the estimated time to develop the software, expressed in months,

Effort is the total effort required to develop the software product, expressed in person
months (PMs)

Organic: Effort = 2.4 * (KLOC)^1.05 PM, Tdev = 2.5 * (Effort)^0.38 Months

Semi-detached: Effort = 3.0 * (KLOC)^1.12 PM, Tdev = 2.5 * (Effort)^0.35 Months

Embedded: Effort = 3.6 * (KLOC)^1.20 PM, Tdev = 2.5 * (Effort)^0.32 Months

Example1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and
development time for each of the three model i.e., organic, semi-detached & embedded.

Solution: The basic COCOMO equation takes the form:

Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 Months
Estimated size of project = 400 KLOC

(i)Organic Mode

E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 Months

(ii)Semidetached Mode

E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 Months

(iii) Embedded Mode

E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.81)^0.32 = 38 Months
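
The same calculation can be scripted. The short sketch below is only an illustration (not part of the original assignment); it uses the basic COCOMO coefficients quoted above and reproduces the 400 KLOC example.

# Basic COCOMO: Effort = a1 * (KLOC)^a2 person-months, Tdev = b1 * (Effort)^b2 months.
# Coefficients (a1, a2, b1, b2) per mode, as used in the example above.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months)."""
    a1, a2, b1, b2 = COEFFICIENTS[mode]
    effort = a1 * kloc ** a2
    tdev = b1 * effort ** b2
    return effort, tdev

for mode in COEFFICIENTS:
    effort, tdev = basic_cocomo(400, mode)
    print(f"{mode:13s}: effort = {effort:8.2f} PM, tdev = {tdev:5.2f} months")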
2) Describe Intermediate COCOMO model in detail.

Intermediate Model: 
The basic COCOMO model assumes that the effort is only a function of the number of lines of code and some constants calculated according to the various software systems. In reality, however, other product, hardware, personnel, and project factors also affect the effort. The intermediate COCOMO model recognizes this and refines the initial estimates obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.

Classification of Cost Drivers and their attributes:

(i) Product attributes -


o Required software reliability extent
o Size of the application database
o The complexity of the product

(ii) Hardware attributes -
o Run-time performance constraints
o Memory constraints
o The volatility of the virtual machine environment
o Required turnaround time

(iii) Personnel attributes -
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience

(iv) Project attributes -
o Use of software tools
o Application of software engineering methods
o Required development schedule

Intermediate COCOMO equation:

E = ai * (KLOC)^bi * EAF
D = ci * (E)^di
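
A minimal sketch of how the intermediate model is applied is shown below. The cost-driver multipliers and the 50 KLOC size are illustrative assumptions, not the official COCOMO tables; the constants ai = 3.2, bi = 1.05 and ci = 2.5, di = 0.38 are the values commonly quoted for the organic class of the intermediate model.

import math

# Illustrative ratings for a few of the 15 cost drivers; each rating maps to a
# multiplier (the values below are assumptions chosen for this example).
# Drivers rated "nominal" contribute a multiplier of 1.0 and are omitted.
cost_driver_multipliers = {
    "required_reliability":  1.15,   # rated high
    "product_complexity":    1.30,   # rated very high
    "analyst_capability":    0.86,   # rated high
    "use_of_software_tools": 0.91,   # rated high
}

# EAF (Effort Adjustment Factor) is the product of all cost-driver multipliers.
eaf = math.prod(cost_driver_multipliers.values())

kloc = 50                      # assumed project size
a_i, b_i = 3.2, 1.05           # organic-class constants (commonly quoted values)
c_i, d_i = 2.5, 0.38

effort = a_i * kloc ** b_i * eaf     # E = ai * (KLOC)^bi * EAF
tdev = c_i * effort ** d_i           # D = ci * (E)^di

print(f"EAF = {eaf:.3f}, Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")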
3) Explain Function Point Metrics.

The conceptual idea behind function point metrics is that the size of a software product is directly dependent on the number of different functions or features it supports. Besides using the number of input and output data values, the function point metric computes the size of a software product in units of function points (FPs).
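
As a rough sketch (not from the original answer), the unadjusted function point count is a weighted sum of the five component types. The component counts below are hypothetical, and the weights are the commonly quoted "average complexity" weights; a full function point analysis also applies complexity adjustment factors.

# Unadjusted function point (UFP) count: weighted sum of five component types.
# Counts are hypothetical; weights are the commonly quoted "average" weights.
AVERAGE_WEIGHTS = {
    "external_inputs":           4,
    "external_outputs":          5,
    "external_inquiries":        4,
    "internal_logical_files":   10,
    "external_interface_files":  7,
}

counts = {
    "external_inputs":          24,
    "external_outputs":         18,
    "external_inquiries":       10,
    "internal_logical_files":    6,
    "external_interface_files":  3,
}

ufp = sum(counts[name] * weight for name, weight in AVERAGE_WEIGHTS.items())
print("Unadjusted function points:", ufp)   # 24*4 + 18*5 + 10*4 + 6*10 + 3*7 = 307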

4) Explain Test Point Analysis.

Test Point Analysis (TPA) : 


Test point analysis is a technique used to estimate black box testing. It
corresponds to Function point analysis which is used for White box testing
estimations. It makes use of 3 entities to make an estimate, namely –
1) Size –
This is determined in terms of function points adjusted for complexity, interfaces, and uniformity.
2) Strategy –
In particular, which quality attributes or risks are to be tested and to what extent.
3) Productivity –
This gives the productivity levels of the participants, which are determined by the skills of the testing team and are influenced by the process followed by the organization and the technology being used.
How to calculate Test Points :
1. First we calculate, For each functional area of the system, a value that gives
the effort weightage for the dynamic quality characteristics i.e Quality
characteristics which will be tested with dynamic tests. (Actual code gets
executed in dynamic tests)
 Four characteristics are considered – Functionality, Usability,
Security and Efficiency.
 A weight is given using numbers 0,3,4,5,6 depending upon the
importance of dynamic testing for each quality characteristic.
 Smaller numbers indicate less importance, large numbers indicate more
importance.
 The default value for the weight is 4.
Dynamic Quality Characteristics:
Qd = (0.75*Qf + 0.05*Qp + 0.10*Qu + 0.10*Qe) / 4
where Qf = functionality factor, Qp = security factor, Qu = usability factor, Qe = efficiency factor.
2. Secondly, we calculate for each functional area of the system, a function-
dependent weighing factor based on various attributes that will make it easier or
harder to test.
Five Factors considered are – Usage Importance, Usage Intensity,
Interfacing, Complexity, and Uniformity.
 A weightage number is given for each factor from the values listed below.
 Each attribute has a default (nominal) value.
 Lower numbers indicate it is easy or less critical to test.
 For uniformity, the smaller number indicates harder to test. 
Function-Dependent Factors:
Df = ((Ue + Uy + I + C) / 20) * U
Ue = usage importance (3, 6, or 12)
Uy = usage intensity (2, 4, or 12)
I = interfacing (2, 4, or 6)
C = complexity (3, 6, or 12)
U = uniformity (0.6 or 1)
 
3. Third calculation is dynamic test point calculation for a given functional area of
the system. This could be a lengthy process if the system is complex.
 Multiply the dynamic quality weighting factor with a function-dependent
weighting factor and function point count of the given functional area.
 This process is repeated to calculate dynamic test point count for all
functional areas of the entire system.
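
Putting the three calculations together, the sketch below shows how the dynamic test point count for one functional area might be computed. It follows the Qd and Df formulas quoted above; the ratings and the function point count are hypothetical values chosen only for illustration.

# Test Point Analysis: dynamic test points for one functional area.
# All ratings below are hypothetical; the formulas follow the ones given above.

# 1. Dynamic quality characteristic weight Qd.
Qf, Qp, Qu, Qe = 5, 4, 4, 3   # functionality, security, usability, efficiency ratings
Qd = (0.75 * Qf + 0.05 * Qp + 0.10 * Qu + 0.10 * Qe) / 4

# 2. Function-dependent weighting factor Df.
Ue, Uy, I, C = 6, 4, 4, 6     # usage importance, usage intensity, interfacing, complexity
U = 1                         # uniformity (0.6 or 1)
Df = ((Ue + Uy + I + C) / 20) * U

# 3. Dynamic test points = function point count * Df * Qd.
fp_count = 120                # function points of this functional area (assumed)
dynamic_test_points = fp_count * Df * Qd

print(f"Qd = {Qd:.3f}, Df = {Df:.2f}, dynamic test points = {dynamic_test_points:.1f}")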
5) Write short note on:

a) Test Organization structure

A test organization defines who is responsible for what activity in the test process. The
organization defines the test functions, test facilities and test activities. It defines the
competencies and knowledge of the people involved.

 In this structure, the test group reports into the Development Manager, the person managing the work of the programmers. Given what you've learned about software testing, this should raise a red flag of warning to you: the people writing the code and the people finding bugs in that code reporting to the same person has the potential for big problems.

The organizational structure for a small project often has the test team reporting to the
development manager.
Another common organizational structure has both the test group and the
development group report to the manager of the project. In this arrangement, the
test group often has its own lead or manager whose interest and attention is
focused on the test team and their work. This independence is a great advantage
when critical decisions are made regarding the software's quality. The test team's
voice is equal to the voices of the programmers and other groups contributing to
the product.

In an organization where the test team reports to the project manager, there's some
independence of the testers from the programmers.

The downside, however, is that the project manager is making the final decision on
quality. This may be fine, and in many industries and types of software, it's
perfectly acceptable. In the development of high-risk or mission-critical systems,
however, it's sometimes beneficial to have the voice of quality heard at a higher
level. In such a structure, the test team reports to a higher-level manager, independent of the individual project.
These three organizational structures are just simplified examples of the many types possible, and the positives and negatives discussed for each can vary widely.

b) Cyclomatic Complexity Measures for Testing

Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code, and it is computed using the Control Flow Graph of the program. The nodes in the graph indicate the smallest groups of commands of the program, and a directed edge connects two nodes if the second command might immediately follow the first command.

M = E – N + 2P

where,

E = the number of edges in the control flow graph

N = the number of nodes in the control flow graph

P = the number of connected components

Steps that should be followed in calculating cyclomatic complexity and test cases
design are: 
 
 Construction of graph with nodes and edges from code.
 Identification of independent paths.
 Cyclomatic Complexity Calculation
 Design of Test Cases
b, c = 7, 3   # sample values so that the snippet can actually run

a = 10
if b > c:
    a = b
else:
    a = c

print(a)
print(b)
print(c)

Control flow graph: (figure not included in this copy)

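As a small illustration (not part of the original text), the cyclomatic complexity of the snippet above can be computed directly from its control flow graph. The node grouping below is one reasonable way to form basic blocks; a per-statement graph gives the same result.

# Control flow graph of the snippet above, one node per basic block:
#   1: b, c = 7, 3; a = 10; evaluate b > c
#   2: a = b   (then-branch)
#   3: a = c   (else-branch)
#   4: print(a); print(b); print(c)
nodes = {1, 2, 3, 4}
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
p = 1  # the graph is a single connected component

# Cyclomatic complexity: M = E - N + 2P
m = len(edges) - len(nodes) + 2 * p
print("Cyclomatic complexity M =", m)   # prints 2: two linearly independent paths
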
Advantages of Cyclomatic Complexity:
 It can be used as a quality metric; it gives the relative complexity of various designs.
 It can be computed faster than Halstead's metrics.
 It is used to measure the minimum effort and the best areas of concentration for testing.

Disadvantages of Cyclomatic Complexity:

 It is a measure of the program's control complexity and not of its data complexity.
 Nested conditional structures are harder to understand than non-nested structures, yet cyclomatic complexity does not weight them differently.

c) Test planning
A Test Plan is a detailed document that describes the test strategy, objectives,
schedule, estimation, deliverables, and resources required to perform testing for a
software product. Test Plan helps us determine the effort needed to validate the
quality of the application under test. The test plan serves as a blueprint to conduct
software testing activities as a defined process, which is minutely monitored and
controlled by the test manager.

 Importance of a Test Plan

Making a Test Plan document has multiple benefits:

 Helps people outside the test team, such as developers, business managers, and customers, understand the details of testing.
 Test Plan guides our thinking. It is like a rule book, which needs to be
followed.

Test Planning Activities:


 To determine the scope and the risks that need to be tested and that are NOT to
be tested.
 Documenting Test Strategy.
 Making sure that the testing activities have been included.
 Deciding Entry and Exit criteria.
 Evaluating the test estimate.
 Planning when and how to test and deciding how the test results will be evaluated,
and defining test exit criterion.
 The Test artefacts delivered as part of test execution.
 Defining the management information, including the metrics required and defect
resolution and risk issues.
 Ensuring that the test documentation generates repeatable test assets.
 Explain Object Oriented testing techniques.

 Whenever large scale systems are designed, object oriented testing is done rather than conventional testing strategies, as the concepts of object oriented programming are quite different from those of conventional programming.
 The whole of object oriented testing revolves around the fundamental entity known as a “class”.
 With the help of the “class” concept, larger systems can be divided into small, well defined units which may then be implemented separately.
 Object oriented testing can be classified into levels, much like the testing of conventional systems. These are called the levels of testing.

Object Oriented Testing : Levels / Techniques

 The levels of object oriented testing can be broadly classified into three categories.
These are:

Object Oriented Testing : Techniques

1. Class Testing
o Class testing is also known as unit testing.
o In class testing, every individual class is tested for errors or bugs.
o Class testing ensures that the attributes of the class are implemented as per the design and specifications. It also checks whether the interfaces and methods are error free or not.
2. Inter-Class Testing
o It is also called integration or subsystem testing.
o Inter class testing involves the testing of modules or sub-systems and their
coordination with other modules.
3. System Testing
o In system testing, the system is tested as a whole, and primarily functional testing techniques are used to test the system. Non-functional requirements like performance, reliability, usability and testability are also tested.

 Explain Web Based software testing.

Web-based testing is a software testing technique adopted exclusively to test applications that are hosted on the web, in which the application's interfaces and other functionalities are tested.

1. Functionality Testing - Some of the checks that are performed include, but are not limited to, the following:
 Verify there are no dead pages or invalid redirects.
 Check all the validations on each field.
 Provide wrong inputs to perform negative testing.
 Verify the workflow of the system.
 Verify the data integrity.
2. Usability testing - To verify how easy the application is to use.
 Test the navigation and controls.
 Content checking.
 Check for user intuition.
3. Interface testing - Performed to verify the interface and the dataflow from one system to another.
4. Compatibility testing- Compatibility testing is performed based on the context of the
application.
 Browser compatibility
 Operating system compatibility
 Compatible to various devices like notebook, mobile, etc.
5. Performance testing - Performed to verify the server response time and throughput
under various load conditions.
 Load testing - It is the simplest form of testing, conducted to understand the behaviour of the system under a specific load. Load testing measures important business-critical transactions, and the load on the database, application server, etc. is also monitored.
 Stress testing - It is performed to find the upper limit capacity of the system and
also to determine how the system performs if the current load goes well above the
expected maximum.
 Soak testing - Soak testing, also known as endurance testing, is performed to determine the system parameters under a continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.
 Spike testing - Spike testing is performed by increasing the number of users
suddenly by a very large amount and measuring the performance of the system.
The main aim is to determine whether the system will be able to sustain the workload.
6. Security testing - Performed to verify that the application is secure on the web, as data theft and unauthorized access are common issues. Below are some of the areas checked to verify the security level of the system.
 Injection
 Broken Authentication and Session Management
 Cross-Site Scripting (XSS)
 Insecure Direct Object References
 Security Misconfiguration
 Sensitive Data Exposure
 Missing Function Level Access Control
 Cross-Site Request Forgery (CSRF)
 Using Components with Known Vulnerabilities
 Unvalidated Redirects and Forwards

 Explain Mobile application testing.

Mobile application testing is a process by which application software developed for

handheld mobile devices is tested for its functionality, usability and consistency.

Mobile application testing can be an automated or manual type of testing.

Types of mobile application testing


 Functional testing ensures that the application is working as per the requirements. Most
of the tests conducted for this is driven by the user interface and call flow.
 Laboratory testing, usually carried out by network carriers, is done by simulating the
complete wireless network. This test is performed to find out any glitches when a mobile
application uses voice and/or data connection to perform some functions.
 Performance testing is undertaken to check the performance and behavior of
the application under certain conditions such as low battery, bad network coverage, low
available memory, simultaneous access to the application's server by several users and
other conditions. Performance of an application can be affected from two sides:
the application's server side and client's side. Performance testing is carried out to check
both.
 Memory leakage testing: Memory leakage happens when a computer program
or application is unable to manage the memory it is allocated resulting in poor performance
of the application and the overall slowdown of the system. As mobile devices have
significant constraints of available memory, memory leakage testing is crucial for the
proper functioning of an application
 Interrupt testing: An application while functioning may face several interruptions like
incoming calls or network coverage outage and recovery. The different types of
interruptions are:

 Incoming and outgoing SMS and MMS


 Incoming and outgoing calls
 Incoming notifications
 Battery removal
 Cable insertion and removal for data transfer
 Network outage and recovery
 Media player on/off
 Device power cycle
An application should be able to handle these interruptions by going into a suspended state
and resuming afterwards.

 Usability testing is carried out to verify if the application is achieving its goals and getting a favorable response from users. This is important as the usability of an application is key to its commercial success (it is essentially user friendliness). Another important part of usability testing is to make sure that the user experience is uniform across all devices. This section of testing hopes to address the key challenges of the variety of mobile devices and the diversity in mobile platforms/OS, which is also called device fragmentation. One key portion of this type of usability testing is to be sure that there are no major errors in the functionality, placement, or sizing of the user interface on different devices.
 Installation testing: Certain mobile applications come pre-installed on the device whereas others have to be installed from the store. Installation testing verifies that the installation process goes smoothly without the user having to face any difficulty. This testing process covers installation, updating and uninstalling of an application.
 Certification testing: To get a certificate of compliance, each mobile device
needs to be tested against the guidelines set by different mobile platforms.
 Security testing: Checks vulnerabilities to hacking, authentication and authorization policies, data security, session management and other security standards.
 Location testing: Connectivity changes with network and location, but you can't mimic those fluctuating conditions in a lab. Only in-country, non-automated testers can perform comprehensive usability and functionality testing.
 Outdated software testing: Not everyone regularly updates their operating
system. Some Android users might not even have access to the newest version.
Professional testers can test outdated software.
 Load testing: When many users all attempt to download, load, and use an app or game simultaneously, slow load times or crashes can occur, causing many customers to abandon your app, game, or website. In-country human testing done manually is the most effective way to test load.
 Black-box testing: Where the application is tested without looking at the application's code and logic. The tester has specific test data to input and the corresponding output that the application should produce, and inputs the test data looking for the program to output data consistent with what was expected. This method of testing can be applied to virtually every level of software testing: unit, integration, system and acceptance.

 Write Short note on:

a) Challenges in testing for Web based software

Cross Browser Compatibility


Earlier, when internet explorer was the only browser available, just unit testing would have
done the job. But, currently, with hundreds of browsers along with their different versions
available for desktop and mobile, cross-browser compatibility is a common issue. It is ideal for a tester to use a cloud testing platform like LambdaTest to test whether the application is compatible across different browsers.
Cross-Device Compatibility
Nowadays, people mostly use mobile devices to access websites. Although there are a
limited number of devices in iOS, the count increases tenfold when it comes to Android. It
is important for a tester to target the devices where the application is meant to run and
start testing in each of them.

Responsiveness
The one thing to look out for while testing is whether the application fits properly in the
device resolution. A tester must check for any horizontal scrolling, alignment, or padding issues, and verify the sizes of fonts and buttons on different devices.

Integration Testing
The rating of an application depends on its usability as well as its functionality. Integration testing must be carried out at the user's end to check whether the application is reliable, all the critical functionalities work properly, and there is no significant impact on performance after merging new features.

Security
If the application has features like online transactions and payment gateways, testing should be executed to ensure that there is no chance of fraudulent activity and that payment-related data is not stored locally on the device.

Performance Testing
Often a web application gets too slow or crashes when the internet traffic increases all of
a sudden. Performance testing should be carried out to ensure that there is no impact on
the speed of performing an activity using the application.

Application Getting Slow


No matter what device is used to access the application, poor network coverage or a low-end processor or limited physical memory may cause an application to run slowly or take a very long time to load a page. Testing should be conducted to ensure that the application is optimized to run properly under any condition.

Usability Testing
Interactive and dynamic web applications are always popular among users. Proper unit
testing should be carried out across devices from the user’s perspective to ensure there
are no such issues that may impact the usability of the application.
Entry and Exit Points
There are stages when a user will need to navigate out from the application to a third-
party website and redirect from another website or gateway to the application. It is a real
challenge to test whether this feature works properly.

Checking the Standards and Compliance


W3C has stated several standards and guidelines that every web application must comply with. To ensure proper site ranking in the search engine index, the code should be tested properly to check whether the website follows those standards and guidelines.

Firewalls
Often a web application is blocked by certain firewalls or ports. This may be because of the security certificate or something else. Testing should be conducted to ensure that the application behaves properly across all firewalls.

Accessibility Testing
Guidelines such as Section 508 and the W3C's WCAG require a website to be accessible to all people, especially people with disabilities. Testing should be conducted to ensure that users with hearing or sight disabilities can access the website with the use of screen readers and other assistive devices.

Project Deadline
Testing is often not conducted properly when a project is nearing its deadline. Testing should be planned beforehand to ensure that there is enough time to test the functionality, performance, and usability of the application before it is deployed to production.

b) Challenges in testing for Mobile applications

 Too many devices globally


1.38 billion smartphones were sold worldwide in 2020 and 1.53 billion so far in 2021. The numbers make it easy to guess the variety of mobile devices in use around the world. However, this creates trouble for the testing team, since applications are expected to run smoothly on most such devices.
 Device fragmentation
Device fragmentation is one of the leading mobile app testing challenges since the number
of active devices running an app at any given time increases every year. This can pose a
significant compatibility issue since testing teams have to ensure these applications can
not only be deployed across different operating systems (like Android, iOS, Windows, etc.)
but also across various versions of the same operating system (like iOS 5.X and 6.X).

Different screen sizes


Companies across the globe design smartphones of varying screen specifications. Multiple
variants of the same model have different resolutions and screen sizes to attract a broader
range of consumers. Hence, there is a requirement for apps to be developed in
conjunction with every new screen specification released in the market.

The screen size affects the way an application will appear on different devices. It is one of
the most complicated mobile app testing challenges since developers must now
concentrate on its adaptability to various mobile screens. This includes resizing the apps
and adjusting to multiple screen resolutions to maintain consistency across all devices.
This might turn out to be a challenge unless an application is thoroughly tested.

Mobile network bandwidth


Mobile network bandwidth testing is a significant part of mobile app testing. Users expect high-speed mobile applications, which the backend team must ensure. But that is not all: an application that is slow to produce results also performs poorly in terms of data communication.

Security concerns
Security concerns are a huge roadblock for the mobile app testing team. Although private
cloud-based mobile app testing tools like LambdaTest are secure, there are several
concerns that app developers regularly face.

 Easier access to the cache: Mobile devices are more prone to breaches since it is simpler to
access the cache. Suspicious programs can therefore find easy routes to private
information through mobile applications unless built and tested to nullify the
vulnerabilities.
 Poor encryption: Encryption is the first wall between user data and malignant sources. Poor or no encryption in mobile applications can attract hackers like moths to a flame. The initial half of 2020 witnessed data breaches that disclosed 36 billion records. Therefore, developers must build apps with more robust encryption, and the app testing team must then ensure the encryption works well.
Too many app testing tools
There is a wide range of cloud-based mobile app testing tools, and they are not built from a one-size-fits-all perspective. There are separate tools for different kinds of applications: some only test Android apps, while others only check iOS apps. There is no shortage of platforms and tools that test applications of all specifications.

c) Emulators and simulators
