
STQA-Chapter 3

Chapter 3 discusses the importance of test management in software testing, emphasizing its role in ensuring high-quality software delivery and efficient testing processes throughout the software development life cycle (SDLC). It outlines the test management process, including planning and execution phases, and details the various activities involved, such as test planning, designing, execution, and reporting. Additionally, it covers the components of a test plan, test case design techniques, and best practices for effective test case design.


Chapter 3

TEST MANAGEMENT: DESIGN AND EXECUTION

SOFTWARE TESTING AND QUALITY ASSURANCE

Software Testing - Test Management
Software testing is a critical phase of the software development life cycle (SDLC). During the testing phase, it is important that all testing activities are well managed so that they are performed seamlessly without delaying the committed timelines.
What is Software Test Management?
Test management is a technique for managing testing processes to ensure software quality and high-standard testing activities. It involves monitoring, organizing, and governing the testing procedures so that the team can carry out testing activities without any bottlenecks.
Test management is therefore critical: it helps guarantee the delivery of high-quality software with a minimal probability of defects reaching production, and it ensures that the software is built according to the requirements given by the customers. Moreover, it helps the project meet its timelines, encourages a collaborative environment, and supports effective allocation of resources.

Process of Software Test Management
The test management process is adopted to manage all testing activities from the beginning to the end of the SDLC. It covers planning, organizing, and tracking every software testing task, including test planning, test case preparation, and test execution. It also helps to set up the initial resources and to clarify the requirements and specifications, streamlining the testing process.
The test management process mainly consists of two parts: planning and execution.
The planning phase consists of risk evaluation, estimation of the testing tasks, test planning, and test organization.
The execution phase consists of monitoring the testing activities, managing defects, and creating and analyzing test reports.

Activities Involved in Software Test Management
The activities involved in the software test management process are listed below −
Test Planning − Planning the testing activities across the entire SDLC. It sets out the aims and objectives of the entire testing life cycle.
Test Designing − Creating the test cases.
Test Execution − Executing the test cases and comparing the results against the requirements.
Exit Criteria − Determining the checklist that must be fulfilled before testing can be considered complete.
Test Reporting − Generating reports that list all the testing activities and processes and describe the outcome of a specific test cycle.

Tools Used for Software Test Management
The various tools used for software test management are listed below −
 TestRail
 Test Collab
 ALM/HP
 Zephyr
 TestLink
 Testpad
 qTest
 Jira

Software Testing - Test Plans
The Software Testing Life Cycle (STLC) starts with the creation of a Test Plan.
It is a document that contains all the information regarding testing scope, resources, budget, test approaches, roles and responsibilities, deadlines, environments, and potential risks. Thus, a Test Plan is a set of guidelines defined by project stakeholders for the successful testing of a software product.
What is a Test Plan?
A Test Plan is an important document for carrying out software testing activities. It is created with the intent to detect as many defects as possible in the initial stages of the software development life cycle (SDLC).
Products guided by a detailed test plan are observed to incur lower costs after being shipped to customers, because the bugs are detected early; it is a costly affair to fix defects at the later stages of the SDLC.

Who Uses a Test Plan?
A Test Plan is a crucial tool for guiding team members in delivering quality software. It
helps developers measure testing scope and target areas in the software.
For the testing team, it lays the foundation for activities, detailing strategies, timelines, and
roles. It helps detect bugs, verify software features, and improve test coverage.
Project managers use a Test Plan to manage deadlines, plan resources, and improve software
quality.
Business analysts use it to assess test cases' coverage of customer requirements and detect
irrelevant ones.
Compliance teams validate testing procedures and processes, while support teams anticipate
potential bugs and propose solutions.

What Makes up a Test Plan?
A Test Plan has multiple elements, discussed below:
Test Objectives : The test objectives section states the direction of testing and the standard processes and methodologies that will be followed. It mainly focuses on detecting the maximum number of defects and enhancing quality.
Scope : The scope section lists all the items to be tested and the items that will be excluded from the testing phase.
Test Items : Clearly identify the specific software components, modules, or features that will be tested. Reference relevant documentation, such as requirements specifications and design documents.
Test Methodology : The test methodology section contains information on the testing types, tools, and methodologies that will be adopted.
Approach : The approach section contains the high-level test scenarios and the flow of events from one module to the next.

What Makes up a Test Plan?.....
 Assumptions : The assumptions section lists the assumptions taken into consideration for testing the software; for example, that the test team will receive all necessary knowledge, support, and assistance from the development team, and that there will be enough resources to carry out the testing process.
 Pass/Fail Criteria : Define clear and objective criteria for determining whether a test case has passed or failed. Consider factors such as expected results, acceptable tolerances, and error handling.
 Risks and Contingencies : The risks section lists all possible risks, for example wrong budget estimation, production defects, or resource attrition, together with the mitigation plans for each of these risks.
 Roles and Responsibilities : The roles and responsibilities section contains information about the individual roles and responsibilities to be carried out by test team members.
 Schedule : The schedule section contains the timelines for every testing activity, for example test case creation and test execution.
 Defect Logging : The defect logging section contains all the information about defect logging and tracking activities.
 Test Environment : The test environment section contains the environment specifications on which the tests will be performed, for example hardware, software, configurations, and installation steps.
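A pass/fail criterion with an acceptable tolerance, as described above, can be sketched as follows (the page-load metric, expected value, and 5% tolerance are assumed examples, not values from any real plan):

```python
def passes(expected: float, actual: float, tolerance: float = 0.05) -> bool:
    """Pass when the actual value is within +/- tolerance (relative) of the expected one."""
    return abs(actual - expected) <= tolerance * expected

# Example criterion: page load time expected at 2.0 s with a 5% tolerance.
print(passes(2.0, 2.08))  # within tolerance -> True
print(passes(2.0, 2.50))  # outside tolerance -> False
```

The same shape works for any measurable criterion: state the expected result and the tolerance up front, then evaluate objectively instead of judging by eye.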

What Makes up a Test Plan?....
Entry and Exit Conditions : The entry and exit conditions section lists the requirements or checklists that must be satisfied before test activities can begin and before they can be declared complete.
Suspension and Resumption Criteria : Specify the conditions under which testing should be suspended and the requirements for resuming testing. Address factors such as critical defects, environmental issues, or resource constraints.
Automation : The automation section describes which features of the software are part of the automation effort.
Effort Estimation : The effort estimation section contains the effort estimates for the testing team.
Test Deliverables : The deliverables section lists the test deliverables, namely the test plan, test strategy, test scenarios, test cases, test data, defects, logs, reports, etc.
Template : The template section describes the templates that will be used for creating the test deliverables, so that uniformity and standards are maintained across all deliverables.

How to Create a Good Test Plan?
A good Test plan can be created by following the below steps −
 Analyze and have the best understanding of the requirements.
 Identify the test objectives and scope of the project.
 Identify the test deliverables of the project along with timelines.
 Identify all the information of the test environment.
 Identify all the possible risks in the project and its mitigation plans.
 Carry out retrospective meetings to figure out what went right, what went wrong, and what can be improved upon.

Test Steps
What are the Test Steps?

 Test Steps describe the execution steps and the expected results documented against each of those steps.
 Each step is marked pass or fail based on a comparison between the expected and actual outcomes.

While developing the test cases, we usually have the following fields:

1.Test Scenario
2.Test Steps
3.Parameters
4.Expected Result
5.Actual Result
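The five fields above can be modeled as a simple record; a minimal sketch in Python (the class and field names are assumptions mirroring the list, not a standard format):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One documented test case, mirroring the fields listed above."""
    scenario: str
    steps: list          # ordered execution steps
    parameters: dict     # input data used by the steps
    expected_result: str
    actual_result: str = ""  # filled in during execution

    def status(self) -> str:
        # A case passes when the expected and actual outcomes match.
        if not self.actual_result:
            return "NOT RUN"
        return "PASS" if self.actual_result == self.expected_result else "FAIL"

tc = TestCase(
    scenario="Login with valid credentials",
    steps=["Open login page", "Enter username and password", "Click Login"],
    parameters={"username": "alice", "password": "secret"},
    expected_result="User is redirected to the dashboard",
)
tc.actual_result = "User is redirected to the dashboard"
print(tc.status())  # PASS
```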

Example:
Suppose we need to check an input field that can accept a maximum of 10 characters. The test cases for this scenario are documented as follows; the first case is a PASS, while the second is a FAIL.

Scenario 1: Verify that the input field can accept a maximum of 10 characters.
Test Step: Log in to the application and key in 10 characters.
Expected Result: The application should accept all 10 characters.
Actual Outcome: The application accepts all 10 characters. (PASS)

Scenario 2: Verify that the input field does not accept more than 10 characters.
Test Step: Log in to the application and key in 11 characters.
Expected Result: The application should NOT accept 11 characters.
Actual Outcome: The application accepts all 11 characters. (FAIL)

If the expected result doesn't match the actual result, we log a defect. The defect goes through the defect life cycle, and the testers verify it again after the fix.
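The two cases above can also be written as executable checks; a sketch using a hypothetical `accept_input` function standing in for the application's input field:

```python
MAX_LEN = 10  # the 10-character limit from the scenario above

def accept_input(text: str) -> bool:
    """Model of the input field: accepts text up to MAX_LEN characters."""
    return len(text) <= MAX_LEN

# Scenario 1 (expected PASS): exactly 10 characters are accepted.
print(accept_input("a" * 10))  # True

# Scenario 2 (expected rejection): 11 characters must not be accepted.
# If the real application accepted them, we would log a defect.
print(accept_input("a" * 11))  # False
```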

Test Strategy
What is Test Strategy?
A test strategy, also known as a test approach, defines how testing will be carried out. A test approach uses one of two techniques:
Proactive − An approach in which the test design process is initiated as early as possible, in order to find and fix defects before the build is created.
Reactive − An approach in which testing is not started until after design and coding are completed.

Key Considerations for Test Strategy
• Risk Assessment: Identify and prioritize risks based on their potential impact
and likelihood of occurrence.
• Test Levels: Determine the appropriate levels of testing, including unit,
integration, system, and acceptance testing
• Expertise and Experience: The expertise and experience of the team with the proposed tools and techniques.
• Testing Techniques: Select suitable techniques based on project constraints,
and risk assessment.
• Regulatory and Legal aspects, such as external and internal regulations of the
development process
• The Nature of the product and the Domain of industry.
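Risk assessment, the first consideration above, is often reduced to a score combining likelihood and impact; a minimal sketch (the 1-5 scale and the sample risks are assumptions for illustration):

```python
# Each risk is scored 1-5 for likelihood and impact; exposure = likelihood x impact.
risks = [
    {"name": "Wrong time estimation", "likelihood": 4, "impact": 3},
    {"name": "Resource attrition",    "likelihood": 2, "impact": 5},
    {"name": "Requirement changes",   "likelihood": 5, "impact": 4},
]

for risk in risks:
    risk["exposure"] = risk["likelihood"] * risk["impact"]

# Test the highest-exposure areas first.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"{risk['name']}: exposure {risk['exposure']}")
```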
Test Case Design Technique
Software testing involves the creation and execution of test cases to confirm that all the features and functionalities of the software are working as expected. Test case design techniques include the planning, creation, and execution of tests. All of these improve the effectiveness of the tests and help to detect bugs in the software.
What are Software Test Case Design Techniques?
Test case design techniques describe various ways to generate test cases. They help to ensure that every functionality of the software works correctly without any bottlenecks. Let us take the example of an e-commerce application where only valid users should be able to log in.
Test Case Title − Verify that only valid users are able to log in to the e-commerce site.
Test Case Design − Verify that only users with a valid phone number and email address can register and later log in to the e-commerce site.
Test Case Prerequisites − The user possesses an accurate email address, and phone number.
Test Case Assumptions − The user is using a mobile device or desktop to login.

Sources of Information for Test Case Design
Requirements and functional specifications: These documents outline what the software should do and
provide a foundation for functional testing.
Source code: Analysing the code structure can inform structural testing, helping to identify paths and
conditions that need to be tested.
Input and output domains: Understanding the range of possible inputs and outputs can guide the selection of
test data and the definition of expected results.
User stories: These describe how users interact with the system and can be used to design user-centric test
cases.
Risk assessment: Identifying potential areas of failure can guide the design of test cases that focus on high-risk
areas.
Previous defects: Analysing past bugs can help anticipate potential problems and inform the design of test
cases to prevent regressions.

Types of Test Cases
Different types of test cases are designed to address various aspects of software
functionality and quality:
 Positive test cases verify that the software functions as expected under normal conditions.
 Negative test cases check how the software handles invalid inputs, errors, and boundary
conditions.
 Functional test cases focus on the specific functions and features of the software.
 Structural test cases are based on the code structure and aim to cover different code paths and
conditions.
 Performance test cases assess the software's speed, responsiveness, and resource usage under
different load conditions.
 Security test cases evaluate the software's resistance to attacks and vulnerabilities.
 Usability test cases focus on the user experience, ease of use, and accessibility.

Types of Software Test Case Design Techniques
The various types of test case design techniques are listed below −
Requirement Based
Also known as the black-box testing technique, it validates the features of the software without considering its internal working. It consists of the procedures listed below −
• Boundary Value Analysis − In this methodology, verification is done around the boundary values of the valid and invalid data sets. The behavior of the software at the edges of the equivalence partitions has a higher probability of revealing errors.
• Equivalence Partitioning − This methodology allows testers to segregate input data into groups that are expected to be treated the same way. It reduces the total count of tests without compromising test coverage.
• Decision Table − This methodology builds test cases from decision tables created using various combinations of input data and their outcomes, which originate from different situations and use cases.
• State Transition Diagram − This methodology tests the changes in the states of the software under different inputs. When the inputs or conditions change, the software transitions from one state to another, and these transitions are verified.
• Use Case Testing − This methodology focuses on verifying test scenarios that involve the entire software end to end.
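Boundary value analysis and equivalence partitioning can be illustrated together on a field that accepts integers from 1 to 100 (the range and the validator are assumed examples):

```python
LOW, HIGH = 1, 100  # assumed valid range of the field under test

def is_valid(n: int) -> bool:
    """Stand-in for the application's validation rule."""
    return LOW <= n <= HIGH

# Equivalence partitioning: one representative value per partition is enough.
partitions = {"below range": 0, "in range": 50, "above range": 101}

# Boundary value analysis: probe at and around each edge of the valid range.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for n in boundary_values:
    print(n, "->", "accept" if is_valid(n) else "reject")
```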

Structure Based
Also known as the white-box testing technique, it validates the internal working of the software and is typically performed by developers. It consists of the procedures listed below −

• Statement Coverage Testing − This methodology validates that every executable line in the program source code is executed at least once.
• Decision Coverage Testing − This methodology tests all decision outcomes in the program.
• Condition Coverage Testing − This methodology primarily verifies all the conditions in the program source code.
• Multiple Condition Testing − This methodology verifies different combinations of conditions to achieve very good test coverage. It relies on multiple test scripts and hence requires more time to complete.
• Path Testing − This methodology uses the control flow graph to identify a set of linearly independent paths.
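The difference between statement and decision coverage shows up even on a tiny function; a sketch (the `absolute` function is a made-up example, not from the text):

```python
def absolute(n: int) -> int:
    """Toy function with one decision and no else branch."""
    if n < 0:
        n = -n
    return n

# Statement coverage: absolute(-3) alone executes every statement.
print(absolute(-3))  # 3

# Decision coverage additionally requires the False outcome of `n < 0`,
# so a second test with a non-negative input is needed.
print(absolute(4))   # 4
```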

Experience Based
It consists of the procedures listed below −
• Error Guessing − This methodology is informal testing in which testers use their knowledge, experience, expertise, and domain understanding to identify potential defects in the software. Those defects may not be found by the formal test cases or by simply analyzing the requirements.
• Exploratory Testing − This methodology is an informal testing technique practiced on the software to discover bugs. It is an unsystematic approach.

Test Case Example (figure slides: a worked test case table, not reproduced in this text)

Best Practices for Test Case Design
To design effective test cases and overcome common design challenges, consider the following best practices:
Start early: Involve testers in the early stages of the software development lifecycle to ensure that testability
is considered from the outset.
Prioritize test cases: Focus on high-risk areas and critical functionalities.
Use a variety of techniques: Employ different test design techniques to ensure comprehensive coverage.
Keep test cases simple and focused: Each test case should have a single, clear objective.
Document test cases thoroughly: Clear documentation makes test cases easier to understand, execute, and
maintain.
Automate where possible: Automate repetitive and time-consuming test cases to improve efficiency.
Regularly review and update test cases: Ensure test cases remain relevant and effective as software evolves.

Software Testing - Test Data Generation
What is Test Data Generation?
Test data generation is the process of gathering, generating, and managing data sets from multiple sources for the test cases that check the software's functionality. The data sets act as input to the test cases to verify the software's behavior.
Test data is generated for positive, negative, and edge test cases for a particular requirement. Generating relevant test data is a critical step; irrelevant data leads to incomplete test coverage or to missing a requirement entirely.
Test data can be pre-generated or randomly generated.
Pre-generated test data is designed case by case, with a test oracle defined for each test case by applying all the rules that apply to that case.
Randomly generated test data is driven by the rules of the specification, where the rules determine the form the test data takes.
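A randomly generated data set can be produced with nothing but the standard library; a minimal sketch (the field names and value ranges are assumptions about a hypothetical user form):

```python
import random

random.seed(42)  # fixed seed so the generated set is reproducible between runs

def random_user() -> dict:
    """Generate one random test record shaped by the specification's rules."""
    return {
        "username": "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(8)),
        "age": random.randint(18, 99),  # assumed rule: age must be 18-99
        "phone": "".join(random.choice("0123456789") for _ in range(10)),  # assumed rule: 10 digits
    }

test_data = [random_user() for _ in range(5)]
for row in test_data:
    print(row)
```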

Techniques of Test Data Generation
The techniques of test data generation are listed below −
Manual Test Data Generation :
In manual test data generation, the data sets are produced manually by testers, based on their knowledge of the product, their testing skills, and the requirements.
The advantage of this approach is that it can be adopted very easily, without requiring additional tools. Testers also become more confident in the data they are using to test the product.
The disadvantage is that generating test data manually takes a lot of time, and since it relies on manual effort, there is also a chance of human error.

Techniques of Test Data Generation…
Automated Test Data Generation
In automated test data generation, the data sets are produced with the help of tools, so large chunks of data can be generated over a short time.
The advantage of this approach is a higher level of accuracy and the speed at which large data sets are created.
The disadvantage is that it is a costly affair, and the test data generation tools need time to understand the application under test before creating the data sets.

Techniques of Test Data Generation…
Backend Test Data Generation Through Injection
In backend test data generation through injection, the data sets are created using SQL queries. A valid SQL query is injected into the database to provide the required inputs to the test cases. This is a relatively easy technique, since it creates large data sets in a short time, and the database schemas are updated to match the newer data sets.
The advantage of this approach is that it produces data in a very short time without requiring much technical knowledge from the user.
The disadvantage is that using a wrong query populates incorrect data sets for the test cases.
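Seeding a backend directly with SQL can be sketched with an in-memory SQLite database standing in for the application's real store (the table and column names are assumptions):

```python
import sqlite3

# In-memory database standing in for the application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, status TEXT)")

# Inject a batch of rows so the test cases have known data to work with.
rows = [(i, f"user{i}@example.com", "active") for i in range(1, 101)]
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
conn.commit()

# A test case can now query the seeded data as its input.
count = conn.execute("SELECT COUNT(*) FROM users WHERE status = 'active'").fetchone()[0]
print(count)  # 100
```

As the text warns, a wrong query here silently seeds incorrect data, so the seeding script itself deserves review.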

Techniques of Test Data Generation…
 Third-Party-Tool Test Data Generation
In third-party-tool test data generation, external off-the-shelf tools are used to generate data. These tools first understand the application under test and then create data sets as per the user's needs. They can be customized to create varied data according to the business requirements.
The advantage of this approach is that it produces accurate test data and increases test coverage.
The disadvantage is that it is a costly approach, and these tools produce lower-coverage test data sets for non-homogeneous environments, since they are not generic in their behavior.

Test Environment
What is Test Environment?
Test Environment consists of elements that support test execution with software, hardware
and network configured. Test environment configuration must mimic the production
environment in order to uncover any environment/configuration-related issues.
Multiple test environments are often necessary to support different categories of testing, such
as performance testing and security testing.
Factors for designing the Test Environment:
Determine if the test environment needs archiving to take backups.
Verify the network configuration.
Identify the required server operating system, databases, and other components.
Identify the number of licenses required by the test team.
Environmental Configuration:
It is the combination of hardware and software environment on which the tests will be executed.
It includes hardware configuration, operating system settings, software configuration, test terminals and other
support to perform the test.

Example:
A typical Environmental Configuration for a web-based application is given below:
 Web Server - IIS/Apache
 Database - MS SQL
 OS - Windows/ Linux
 Browser - IE/FireFox
 Java - version 6
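The configuration list above can be captured as a single machine-readable record so that every tester runs against the same setup; a minimal sketch mirroring the example values:

```python
# Environment configuration mirroring the example list above.
test_environment = {
    "web_server": "Apache",
    "database": "MS SQL",
    "os": "Linux",
    "browser": "Firefox",
    "java_version": "6",
}

def describe(env: dict) -> str:
    """Render the configuration for inclusion in a test report header."""
    return ", ".join(f"{key}={value}" for key, value in env.items())

print(describe(test_environment))
```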

Software Testing - Documentation Testing
What is Test Documentation?
Test documentation refers to the documentation of all the testing artifacts that
guide the overall testing process.
It includes project estimation, resources, timelines, project progress, and so on.
It comprises a whole set of documents that record, and document the test plan,
test case, test strategy, test execution report, test summary report, and so on.

Types of Test Documentation
The different types of test documentation are listed below −
Test Scenario : A test scenario document describes the various ways or combinations in which the product can be tested. It gives an overview of the end-to-end application flow, but does not include any data, inputs, or step-by-step actions to be performed on the application.
Test Case Specification : A test case document contains the inputs, data, and line-by-line actions to be performed on the application, together with the expected and actual outcomes of those actions. It is derived from a test scenario.
Test Plan : A test plan document contains information on project scope, resources, cost, strategy, timelines, methodologies, and so on. It is a set of testing guidelines defined by project stakeholders for successful testing.
Requirement Traceability Matrix : A requirement traceability matrix (RTM) is a document prepared to ensure that for every requirement there is at least one test case written.
Test Strategy : A test strategy document contains information on the various testing types, approaches, levels, scopes, and so on. Once created and approved, a test strategy document is not modified.
Bug Report : A defect report contains information on all the defects logged during the testing process and is used extensively by both developers and testers. It is a very critical document that helps to track and manage bugs: reporting a bug, changing its status, verifying fixes, avoiding duplicates, and bringing bugs to closure.
Execution Report (Test Log) : An execution report is prepared at the end of the testing process by a senior member of the testing team. It contains the total count of test cases, the numbers passed, failed, and unexecuted, the modules tested, the total number of defects, and so on. This log provides a historical record of testing activities and can be used to identify trends and areas for improvement.
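The requirement traceability check described above (at least one test case per requirement) can be sketched as follows; the requirement and test case IDs are made up:

```python
# Requirements mapped to the test cases that cover them.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # gap: no test case written yet
}

# The RTM guarantees every requirement has at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without coverage:", uncovered)  # ['REQ-003']
```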

Test Incident Report (Bug Report)
This document reports a defect or issue found during testing. It describes the problem, the steps to reproduce it,
the expected behavior, and the actual behavior.
Key attributes of a good bug report:
 Clear and concise description: The bug should be described in a way that is easy to understand.
 Steps to reproduce: Clear and detailed steps should be provided so that the developer can easily reproduce the bug.
 Expected behavior: The expected behavior of the software should be described.
 Actual behavior: The actual behavior of the software should be described.
 Environment details: The environment in which the bug was found should be described, including the operating system,
browser, and version of the software.
 Severity and priority: The severity and priority of the bug should be assigned.
 Screenshots or videos: Screenshots or videos can be helpful in illustrating the bug.
 Supporting files: Any supporting files, such as log files or configuration files, should be attached.

Test Reporting
1) Executing Test Cases
Test execution is the process of executing the code and comparing the expected and actual results.
The following factors are to be considered for a test execution process:
 Based on risk, select a subset of a test suite to be executed for this cycle.
 Assign the test cases in each test suite to testers for execution.
 Execute tests, report bugs, and capture test status continuously.
 Resolve blocking issues as they arise.
 Report status, adjust assignments, and reconsider plans and priorities daily.
 Report test cycle findings and status.

2) Test Reporting
Test reporting is a means of communication throughout the testing cycle. There are three types of test reporting.

1. Test incident report: A test incident report is the communication that happens throughout the testing cycle as and when defects are encountered. It is an entry made in the defect repository; each defect has a unique ID identifying the incident. High-impact test incidents are highlighted in the test summary report.

2. Test cycle report: A test cycle entails planning and running certain tests in cycles, each cycle using a different build of the product. As the product progresses through the various cycles, it is expected to stabilize. A test cycle report gives:

1. A summary of the activities carried out during that cycle.
2. The defects uncovered during that cycle, classified by severity and impact.
3. Progress from the previous cycle to the current cycle in terms of defects fixed.
4. Outstanding defects that are yet to be fixed in the cycle.
5. Any variation observed in effort or schedule.

3) Test summary report:

The final step in a test cycle is to recommend the suitability of a product for release. A report that summarizes the results of a test cycle is the test summary report. There are two types of test summary reports:

1. Phase-wise test summary report, which is produced at the end of every phase.
2. Final test summary report.

A summary report should present:

1. Test summary report identifier.
2. Description: identify the test items being reported, with their test IDs.
3. Variances: mention any deviations from the test plans or test procedures, if any.
4. Summary of results: all results are mentioned here, with the resolved incidents and their solutions.
5. Comprehensive assessment and recommendation for release, including a fit-for-release assessment and a release recommendation.

Software Testing - Risk Testing
What is Software Risk?
Risks are unknown incidents in a software project that have a probability of occurring in the future. These incidents are not guaranteed to take place, but if they do occur, they lead to a loss for the overall software project.
The detection and management of risks are crucial steps during software project development, as they determine the failure or success of the project.
Risk analysis in software testing involves recognizing potential failures that could impact the enterprise and its customers. This does not require precise mathematical quantification; it is about recognizing the potential negative consequences of failures and the likelihood of those failures happening.
Types of Software Risks
The different types of software risks are listed below −
1. Schedule Risks: These are the time-related risks in the project. Incorrect schedules hamper software development and delivery. They mainly denote slow progress, indicating that the project is running behind its committed time frame and that software delivery may be delayed. If these risks are not handled properly, they lead to project failure and directly affect the business. Schedule risks are mainly due to the reasons listed below −
• Wrong time estimation
• Improper resource alignment
• Improper tracking of resources
• Changes in project scope
• Inappropriate requirement analysis

2. Budget Risks
 These are the risks involved when the budget is exceeded. They mainly denote that the financial resources of the project are not properly distributed and managed.
 If these risks are not handled properly, they lead to project failure. Budget risks are mainly due to the reasons listed below:

• Wrong budget estimation
• Unplanned expansion of the project
• Bad management of the budget
• Additional unplanned expenses
• Improper tracking of the budget

3. Operational Risks

These are the risks involved in the methods adopted while carrying out the normal daily activities of project development. They mainly denote incorrect implementation of processes. Operational risks are mainly due to the reasons listed below −
• Inadequate count of resources
• Problems in allocating tasks to resources
• Mismanagement of tasks
• Inadequate planning
• Insufficient experienced and skilled resources
• Miscommunication
• Lack of cooperation and coordination
• Roles and responsibilities not properly defined
• Lack of training and guidance

4. Technical Risks
These are the risks involved with the functional or performance aspects of the software. Technical risks are mainly due to the reasons listed below −
• Changes in the requirements
• Not taking advantage of the latest technologies
• Insufficient experienced and skilled resources
• Complex implementation
• Incorrect integration of various modules

5. Programmatic Risks
These are the risks involved with external factors or unavoidable situations. They originate from outside the project and are beyond the control of the development team. Programmatic risks are mainly due to the reasons listed below −
• Changing nature of the market
• Limited available funds
• Updates to government rules and regulations
• Contract discontinuation midway

6. Communication Risks : These risks originate from a lack of understanding, missed messages, and confusion. They lead to insufficient or no communication during project development.

7. Security Risks : These risks originate from vulnerabilities such as compromises in reliability, privacy, accessibility, etc.

8. Quality Risks : These risks arise when the developed software is not working properly and is unable to satisfy customer needs.

9. Risks Around Law and Compliance : These risks arise from not adhering to laws and compliance requirements during project development. They lead to penalties, legal hassles, and other problems.

10. Cost Risks : These risks arise from unanticipated expenses, updates to the scope of the project, and a lack or excess of funds. They hamper the financial plans made at the start of the project.

11. Market Risks : These risks arise from changes in market conditions, new technology trends, the arrival of competitors, changes in customer needs, etc.
