Deployment Pipeline
Continuous Delivery is a set of practices and principles aimed at building, testing, and
releasing software faster and more frequently. If you're lucky enough to start out in a
"greenfield" organization without an established coding culture, it's a good idea to try to
create and automate your software delivery pipeline upfront. If you're successful out of the gate in creating a Continuous Delivery pipeline, your business will be much more competitive
since you'll be able to get higher-quality software into the hands of your users and customers
faster than your competitors, and you'll be able to react to business demand and change much
more rapidly.
If you are adding a Continuous Delivery (CD) pipeline to an existing organization,
where you start depends on your current development and testing practices and the
bottlenecks in your software delivery process. These bottlenecks can include slow, error-
prone manual processes as well as poor-quality, big-bang rollouts that fail in production,
leading to unhappy users.
There are several ways to get a handle on the current state of your deployment
processes, including using workflow visualization tools like flowcharts and business process
maps to break down and understand your current delivery processes. One of the simplest
visual process management tools you can use to help make these kinds of decisions is
a Kanban board.
Kanban boards, like the one pictured below, are typically just sticky notes on a whiteboard
that are used to communicate project status, progress, and other issues.
Many organizations today are also experimenting with Value Stream Maps (VSMs) to
better understand the infrastructure changes needed to automate their software delivery
process. Borrowed from the lean manufacturing camp, a VSM is a technique used to analyze
value chains, which are the series of events required to bring a product or service to a
consumer.
A Value Stream Map, like the one pictured below, shows both material and information flow.
Not only does it show process flow but it includes data associated with each process such as
inventory between processes. It describes the method by which material moves from one
process to another, and information flows between Production Control (a central production
scheduling or control department, person or operation) and various processes, suppliers, and
customers, including customer demand.
Figure: Current State Value Stream Map with Environmental, Health and Safety (EHS) data.
Mary and Tom Poppendieck, who adapted concepts of lean manufacturing and Value
Stream Mapping to the software development process in their highly-regarded
book, Implementing Lean Software Development, stress the importance of starting and
ending with real customer demand. This means that a software organization that delivers
multiple software products, such as a company doing game development, may want to use a
VSM to optimize the delivery process for a popular product that brings in more revenue to
the company, before adapting the new process to less-popular products.
Building a successful CD pipeline means creating a DevOps culture of collaboration
among the various teams involved in software delivery (developers, operations, quality
assurance teams, management, etc.), as well as reducing the cost, time, and risk of delivering
software changes by allowing for more incremental updates to applications in production. In
practice, this means teams produce software in short cycles, ensuring that the software can be
reliably released at any time.
How short can you make your cycles? It depends on the degree of collaboration and trust
you can build among the teams involved, as well as the amount of resources and time you
devote to automating your delivery process. You can also use Value Stream Mapping to measure your progress in creating a CD pipeline, which can be done in two steps:
The first step measures the efficiency of the different build, deploy and test stages of the
current state of your software delivery. When taking time measurements (in whatever unit you choose: minutes, hours, or days), you initially try to determine the execution time and the wait time in each step. Wait time, in this case, is any non-value added activity such as
handoffs, signoffs, manual processes, or delays caused by hardware and software issues.
The second step measures the efficiency of the different build, deploy, and test stages of your software delivery target state. As you remove non-value added activity by implementing
the core stages of DevOps Continuous Delivery (Continuous Integration, Test
Automation, Continuous Deployment, etc.), you'll then be able to measure your progress
in implementing your CD pipeline.
It may seem as if you're trying to hit a moving target if the VSM target state is not fully
continuous or automatic, but that's acceptable because this approach provides a clear and
measurable improvement path toward a CD pipeline that should highlight many or most of
the bottlenecks in your current software delivery approach.
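To make the measurement idea concrete, here is a minimal sketch that computes the kind of numbers described above: value-added execution time versus wait time per stage, and the resulting process cycle efficiency. The stage names and durations are hypothetical examples, not real data.

```python
# Sketch: measuring value-added time vs. wait time per delivery stage.
# Stage names and durations are hypothetical examples, not real data.

stages = [
    # (stage name, execution hours, wait hours: handoffs, signoffs, delays)
    ("build",              1.0,  4.0),
    ("deploy to test",     0.5,  8.0),
    ("manual regression",  6.0, 24.0),
    ("deploy to prod",     1.0, 16.0),
]

total_execution = sum(execution for _, execution, _ in stages)
total_wait = sum(wait for _, _, wait in stages)
lead_time = total_execution + total_wait

for name, execution, wait in stages:
    print(f"{name:18s} execution={execution:5.1f}h wait={wait:5.1f}h")

# Process cycle efficiency: the share of lead time spent on value-added work.
efficiency = total_execution / lead_time
print(f"lead time = {lead_time:.1f}h, efficiency = {efficiency:.0%}")
```

Re-running the same calculation against the target-state map shows how much waste your automation work has actually removed.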
Core Stages of Continuous Delivery
Because large and slow software releases make for buggy and unreliable code,
Continuous Delivery pipelines rely on frequent releases of smaller amounts of functionality.
A typical CD pipeline can be broken down into the following sequence of stages:
Stage One: Build Automation
Build automation is the first stage in moving toward implementing a culture of
Continuous Delivery and DevOps. If your developers are practicing test-driven
development (TDD), they'll write unit tests for each piece of code they write, even before the
code itself is written. An important part of the agile methodology, TDD helps developers
think through the desired behavior of each unit of software they're building, including inputs,
outputs, and error conditions. New features implemented by developers are then checked into
a central code base prior to the software build, which compiles the source code into binary
code. With build automation, the software build happens automatically, using tools such as
Makefiles or Ant, rather than when a developer manually invokes the compiler.
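To illustrate the TDD rhythm described above, here is a minimal sketch in Python (the language is only for illustration; the same pattern applies to any stack). The test for a hypothetical `parse_price` function is written first, capturing inputs, outputs, and error conditions, and the implementation is then written to make it pass.

```python
import unittest

def parse_price(text: str) -> float:
    """Convert a price string such as '$1,299.00' into a float.
    Written after the tests below, to make them pass."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)

class ParsePriceTest(unittest.TestCase):
    # The tests capture desired behavior: inputs, outputs, and error conditions.
    def test_parses_dollar_amount(self):
        self.assertEqual(parse_price("$1,299.00"), 1299.0)

    def test_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            parse_price("")

if __name__ == "__main__":
    unittest.main()
```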
Stage Two: Continuous Integration
In Continuous Integration, developers check code into a shared repository several
times a day. Each check-in is then verified by an automated build, allowing teams to detect
errors and conflicts as soon as possible. Originally one of the fundamental practices outlined
in the Extreme Programming (XP) methodology pioneered by developers like Martin Fowler,
Continuous Integration (CI) has become an essential ingredient for teams doing iterative and
incremental software delivery.
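A rough sketch of what "each check-in is verified by an automated build" can look like in practice: a hypothetical script that a CI server might run on every push, failing fast if the build or the tests fail. The commands are placeholders; a real pipeline would invoke your actual build and test tools.

```python
import subprocess
import sys

# Hypothetical per-check-in verification steps; substitute your real commands.
STEPS = [
    ("compile / build", ["python", "-m", "compileall", "src"]),
    ("unit tests",      ["python", "-m", "pytest", "tests", "-q"]),
]

def run_pipeline() -> int:
    for name, command in STEPS:
        print(f"--> {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast so the team learns about the broken check-in immediately.
            print(f"FAILED at step '{name}'")
            return result.returncode
    print("check-in verified")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```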
Stage Three: Test Automation
The next stage in implementing your deployment pipeline is test automation. Manual
testing is a time-consuming and labor-intensive process and, in most cases, also a non-value
added activity since you're only trying to verify that a piece of software does what it's supposed to do. If developers are integrating code into a shared repository several times a day,
testing needs to be done continuously as well. This means running unit tests, component tests
(unit tests that touch the filesystem or database), and a variety of acceptance and integration
tests on every check-in. Use the following Agile Testing Quadrants matrix (developed by
Brian Marick and refined by Lisa Crispin) to prioritize the many different types of tests that
you intend to automate in your CD pipeline. There are no hard and fast rules about what tests
(performance, functionality, security, user acceptance, etc.) to automate, when to automate
them, or even whether certain manual tests really need automation. Crispin and other agile
testing experts favor automating unit tests and component tests before other tests since these
represent the highest return on investment.
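As an example of the distinction drawn above, the sketch below shows a component test: unlike a pure unit test, it touches the filesystem, so it runs against a temporary directory and cleans up after itself. The `save_report`/`load_report` functions are hypothetical stand-ins for your own code.

```python
import json
import tempfile
import unittest
from pathlib import Path

# Hypothetical code under test: persists a small report as JSON.
def save_report(path: Path, data: dict) -> None:
    path.write_text(json.dumps(data))

def load_report(path: Path) -> dict:
    return json.loads(path.read_text())

class ReportRoundTripTest(unittest.TestCase):
    # A component test: it exercises real filesystem I/O, not just pure logic.
    def test_round_trip(self):
        with tempfile.TemporaryDirectory() as tmp:
            path = Path(tmp) / "report.json"
            save_report(path, {"orders": 42})
            self.assertEqual(load_report(path), {"orders": 42})

if __name__ == "__main__":
    unittest.main()
```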
Automation Tools
Automation is a cornerstone to DevOps, as it facilitates continuous integration and delivery
of applications into various environments (dev, test, prod, etc.). An example of such
automation tools is VMware’s vRealize CodeStream, which allows the creation of release
pipelines (e.g., from dev, to test, to production), with tasks to retrieve application builds, deploy environments, run automated tests, and so on. These tools are typically implemented and maintained by the operations teams.
Blueprints
Historically, operations teams held responsibility for maintaining the various tools used by the development and release teams, such as build tools, source-code management tools, and automated testing systems. However, the lines are blurring here as developers take on more of the coding responsibility for such management. This means operations teams increasingly house development teams capable of building this management automation.
Monitoring
This is one of the areas that are frequently overlooked, or at least rarely mentioned, in a
DevOps environment. Monitoring applications through the various promotion environments
is very important to ensure a fail-fast approach: potential issues are reported and investigated
in early stages (dev and test), before they become real problems.
The operations team also builds dashboards for developers and operations so the application and its environment can be monitored throughout the Continuous-Integration/Continuous-Delivery process. This provides developers with feedback on the application's impact on the environment in which it runs, allows operations to become familiar with the same from an environment (VM/vApp) perspective, and gives the operations team confidence that the Continuous-Integration/Continuous-Delivery process is working and there will be no issues when the application is released into production.
It is worth mentioning that collaboration between development and operations should start
very early, as developers need to embed operations considerations in their application code
(such as adequate logging information), while the operations team needs to ensure infrastructure availability for developers to start their work.
Migrating to Microservices
This article details the steps to migrate an existing monolithic application to
microservices, an architectural approach that consists of building systems from small
services, each running in its own process and communicating over lightweight protocols. It will provide a broad view of microservices implementation in Azure using Service Fabric, the challenges faced in migration, the approaches and strategies we can implement, and ways to build, release, monitor, and scale microservices. This article deals with an innovative practice
that has the potential to be used across different projects.
Microservices are loosely coupled components built by independent teams using a variety of languages and tools. The gains in speed and flexibility far outweigh the disadvantages. We will use
Azure Service Fabric to migrate an existing application to microservices.
Monolith Architecture Challenges
Growing codebase.
Tightly coupled code.
Long-term commitment to a single technology stack.
The whole application gets affected by the failure of a single module.
Scalability issues.
Quantitative Benefits
Let's take a look at a Walmart case study to see the quantitative benefits of migrating to
microservices.
Migrating to microservices caused a significant business uplift for the company:
Conversions were up by 20% literally overnight.
Mobile orders were up by 98% instantly.
There was no downtime on Black Friday or Boxing Day (the Black Friday of Canada)
and there has been zero downtime since the re-platforming.
The operational savings were significant as well. The company moved off of its expensive hardware onto commodity hardware (cheap virtual x86 servers), saved 40% of its computing power, and experienced 20-50% cost savings overall.
Which Modules to Extract First
Rank modules by their benefits. It is usually beneficial to extract modules that change
frequently.
Extract modules that have resource requirements significantly different from those of
the rest of the monolith — for example, turn a module that has an in-memory database into a
service that can then be deployed on hosts with large amounts of memory.
Extract modules that implement computationally expensive algorithms since the
service can then be deployed on hosts with lots of CPU.
Look for existing coarse-grained boundaries, as they make it easier and cheaper to turn modules into services. For example, a module that only communicates with the rest of the application via asynchronous messages can be relatively cheap and easy to turn into a microservice.
Below is an approach for extracting a module:
Define a coarse-grained interface (bidirectional API) between the module and the
monolith that enables communication between the monolith and the service.
Turn the module into a free-standing service. Write code to enable the monolith and
the service to communicate through an API that uses an inter-process communication
(IPC) mechanism.
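A minimal sketch of the two steps above, using Python and HTTP as a stand-in for whatever IPC mechanism you choose: the module is first hidden behind a small coarse-grained interface inside the monolith, and the same interface is then re-implemented as a client that calls the extracted service. The names and URL here are hypothetical.

```python
from urllib.request import urlopen
import json

# Step 1: define a coarse-grained interface between the module and the monolith.
class PricingModule:
    """In-process implementation, still living inside the monolith."""
    def quote(self, sku: str, quantity: int) -> float:
        return 9.99 * quantity  # placeholder business logic

# Step 2: the same interface, now backed by the free-standing service over IPC.
class PricingServiceClient:
    """Calls the extracted pricing microservice over HTTP (hypothetical URL)."""
    def __init__(self, base_url: str = "http://pricing.internal/quote"):
        self.base_url = base_url

    def quote(self, sku: str, quantity: int) -> float:
        with urlopen(f"{self.base_url}?sku={sku}&qty={quantity}") as resp:
            return float(json.load(resp)["price"])

# The monolith depends only on the interface, so swapping implementations
# (in-process module vs. remote service) does not ripple through its code.
def checkout(pricing, sku: str, quantity: int) -> float:
    return pricing.quote(sku, quantity)

if __name__ == "__main__":
    print(checkout(PricingModule(), "ABC-123", 3))
```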
Service Discovery
Each service refers to an external registry holding the endpoints of the other services.
The client makes a request to a service via a load balancer. The load balancer queries
the service registry and routes each request to an available service instance.
The service registry is a database containing the network locations of service
instances.
A service instance is responsible for registering and deregistering itself with the service registry.
Also, if required, a service instance sends heartbeat requests to prevent its registration
from expiring.
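The sketch below illustrates the registry behavior described above with a toy in-memory registry: instances register and deregister themselves, send heartbeats, and entries whose heartbeats lapse are expired. A real registry (Consul, Eureka, ZooKeeper, and the like) would be a separate, replicated service.

```python
import time

class ServiceRegistry:
    """Toy in-memory service registry; real registries are external services."""
    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self.instances = {}  # (service, endpoint) -> last heartbeat time

    def register(self, service: str, endpoint: str) -> None:
        self.instances[(service, endpoint)] = time.time()

    def heartbeat(self, service: str, endpoint: str) -> None:
        # Heartbeats refresh the registration so it does not expire.
        self.instances[(service, endpoint)] = time.time()

    def deregister(self, service: str, endpoint: str) -> None:
        self.instances.pop((service, endpoint), None)

    def lookup(self, service: str) -> list:
        # Drop entries whose heartbeats have lapsed, then return live endpoints.
        now = time.time()
        self.instances = {k: t for k, t in self.instances.items()
                          if now - t <= self.ttl}
        return [ep for (svc, ep) in self.instances if svc == service]

if __name__ == "__main__":
    registry = ServiceRegistry()
    registry.register("orders", "10.0.0.5:8080")
    registry.register("orders", "10.0.0.6:8080")
    print(registry.lookup("orders"))  # a load balancer would pick one of these
```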
Third-Party Registration Pattern
In the third-party registration pattern, service instances aren't responsible for registering themselves with the service registry.
The service registrar tracks changes to the set of running instances by either polling
the deployment environment or subscribing to events. When it notices a newly available
service instance, it registers the instance with the service registry.
The service registrar also deregisters terminated service instances.
Service Deployment
Patterns for service deployment:
Multiple service instances per host pattern: Provision one or more physical or
virtual hosts and run multiple service instances on each one.
Service instance per virtual machine pattern: Package each service as a virtual
machine (VM) image. Each service instance is a VM that is launched using that VM
image.
Service instance per container pattern: Each service instance runs in its own
container.
o Containers are a virtualization mechanism at the operating system level.
o A container consists of one or more processes running in a sandbox.
o Each container has its own port namespace and root filesystem.
Best Practices for Constructing Continuous Deployment Pipeline
Use one repository per service.
Each service should have independent CI and Deployment pipelines.
Plug the entire toolchain into the DevOps automation platform.
The solution must be tool- and environment-agnostic.
The solution needs to be flexible enough to support any workflow.
Automation platforms should integrate with all the test automation tools and service
virtualization.
Audits should be in place.
Compliance should be built into the pipeline by binding in security checks and acceptance tests.
Provide automatic and manual approval gates to support regulatory requirements or general governance processes.
Provide a real-time view of all the pipelines’ statuses and any dependencies or
exceptions.
Monitoring and logging should be enabled for pipeline automation.
Continuous Deployment Using VSTS
VSTS easily packages the app in an automated build and publishes it in a release.
Challenges
Publishing (new vs. upgrade).
Versioning.
Steps
ARM templates help create the clusters used during deployment.
Add different publishing profiles to separate environments (prod, test, dev, etc.).
In order to create the Azure Resource Group containing the cluster from the ARM
template, VSTS will need a secure connection to the Azure subscription.
This connection is service-principal-based, so you need to have an Azure Active Directory (AAD) tenant backing your Azure subscription, and you need permissions to add new applications to the AAD.
If you don’t have an AAD backing your subscription or can’t create applications, you
can manually create the cluster in your Azure subscription.
Things We Can Do
Create release definitions for different environments.
Continuous deployment.
Create/update clusters.
Application Upgrade
Rolling Upgrades
The upgrade is performed in stages. At each stage, the upgrade is applied to a subset
of nodes in the cluster, called an update domain.
The application remains available throughout the upgrade.
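The sketch below illustrates the rolling-upgrade idea in plain Python: nodes are grouped into update domains, each domain is upgraded in turn, and the rollout stops if a health check fails. It is a conceptual illustration, not Service Fabric's actual upgrade engine; the node names and checks are placeholders.

```python
# Conceptual sketch of a rolling upgrade across update domains.
update_domains = {
    "UD0": ["node1", "node2"],
    "UD1": ["node3", "node4"],
    "UD2": ["node5"],
}

def upgrade_node(node: str, version: str) -> None:
    print(f"  upgrading {node} to {version}")

def domain_is_healthy(domain: str) -> bool:
    # Placeholder: a real check would query health reports for the domain.
    return True

def rolling_upgrade(version: str) -> bool:
    for domain, nodes in update_domains.items():
        print(f"update domain {domain}:")
        for node in nodes:
            upgrade_node(node, version)
        if not domain_is_healthy(domain):
            print(f"health check failed in {domain}; halting (or rolling back)")
            return False
        # Only a subset of nodes is ever down, so the service stays available.
    return True

if __name__ == "__main__":
    rolling_upgrade("2.0.0")
```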
Non-Rolling Upgrades
The upgrade is applied to all nodes in the cluster, which is the case when the
application has only one update domain.
Not recommended, since the service goes down and isn't available at the time of
upgrade.
Health Checks During Upgrades
Whether the application package was copied correctly.
Whether the instance was started.
Service Fabric evaluates the health of the application through the health reported on the application.
Service Fabric further evaluates the health of the application services by aggregating the health of their children, such as the service replicas.
Service Monitoring
Azure Service Fabric introduces a health model that provides rich, flexible, and extensible
health evaluation and reporting.
Obtain health information and correct potential issues before they cascade.
Real-time monitoring of the state of the cluster and the services running in it.
Azure Service Fabric provides the following functionalities for health monitoring:
Health store: Keeps health-related information about entities in the cluster for easy
retrieval and evaluation.
Health entities:
o The health entities are organized in a logical hierarchy that captures
interactions and dependencies among different entities.
o The entities and hierarchy are automatically built by the health store based on reports received from Service Fabric components.
o The health entities mirror the Service Fabric entities.
Health states: OK, warning, and error.
Health policies: cluster health policy, application health policy, service type health
policy.
Health evaluation.
Health reporting.
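As a rough illustration of hierarchical health evaluation, the sketch below aggregates child health states up a toy entity tree, treating error as worse than warning and warning as worse than OK. Service Fabric's actual health store and policies are richer than this; the entity names are invented for the example.

```python
# Toy hierarchical health aggregation: error > warning > ok.
SEVERITY = {"ok": 0, "warning": 1, "error": 2}

class HealthEntity:
    def __init__(self, name: str, state: str = "ok", children=None):
        self.name = name
        self.state = state            # health reported directly on this entity
        self.children = children or []

    def evaluate(self) -> str:
        # An entity is as unhealthy as its own report or its worst child.
        worst = self.state
        for child in self.children:
            child_state = child.evaluate()
            if SEVERITY[child_state] > SEVERITY[worst]:
                worst = child_state
        return worst

if __name__ == "__main__":
    replica_bad = HealthEntity("replica-2", state="warning")
    service = HealthEntity("orders-service",
                           children=[HealthEntity("replica-1"), replica_bad])
    application = HealthEntity("shop-app", children=[service])
    print(application.evaluate())  # -> "warning", bubbled up from the replica
```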
Scaling Microservices
The scalability options for Azure Service Fabric are the following:
Stateless service:
o Defining a higher number of service instance counts (two or more).
o Each of these load-balanced instances gets deployed to different nodes in the
cluster.
Stateful service:
o Stateful services can divide the load among their partitions or named service instances.
o The partitions are separate service instances running with replicas on various nodes in the cluster.
o Each partition works on a subset of the total state managed by the stateful service.
o Partitioning can be achieved by:
  Named service instances: A specific named instance of a service type deployed to ASF.
  Implementing a partitioning scheme for the service:
    Singleton: The service doesn't need partitioning.
    Named: The service load can be grouped into subsets identified by a predefined name.
    Ranged: The service load is divided into partitions identified by an integer range (a low key, a high key, and a number of partitions, n).
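To make the ranged scheme concrete, the sketch below divides an integer key space between a low key and a high key into n contiguous partitions and maps a key to its partition, which is roughly what ranged partitioning does; the key range and partition count here are arbitrary examples.

```python
def ranged_partition(key: int, low: int, high: int, partitions: int) -> int:
    """Map an integer key to one of `partitions` contiguous ranges in [low, high]."""
    if not low <= key <= high:
        raise ValueError("key outside the partitioned range")
    span = high - low + 1
    size = span // partitions                         # approximate partition size
    return min((key - low) // size, partitions - 1)   # last partition absorbs the remainder

if __name__ == "__main__":
    # Example: keys 0..9999 split into 4 partitions of roughly 2,500 keys each.
    for key in (0, 2499, 2500, 9999):
        print(key, "->", ranged_partition(key, 0, 9999, 4))
```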
Testing Microservices
Azure Service Fabric has a Fault Analysis Service that performs the following testing:
o Induce meaningful faults and run complete test scenarios against applications.
For example:
Restart a node to simulate any number of situations where a machine
or VM is rebooted.
Service-to-service communication.
Chaos test.
Failover test.
Move a replica of the stateful service to simulate load balancing,
failover, or application upgrade.
Invoke quorum loss on a stateful service to create a situation where
write operations can't proceed because there aren't enough "back-up"
or "secondary" replicas to accept new data.
Invoke data loss on a stateful service to create a situation where all in-
memory state is completely wiped out.
o Simulate and generate failures that might occur in real-world scenarios.
o Generate correlated failures.
o Provide a unified experience across various levels of development and deployment.
Tools for load testing microservices include SoapUI and JMeter. The following are a
couple of testability scenarios.
Service-to-Service Communication
Service Fabric provides built-in service communication components that can be used to test interactions between services.
Chaos Test Scenario
This generates faults across the entire Service Fabric cluster and compresses faults
generally seen in months or years to a few hours. Faults include:
Restart a node.
Restart a deployed code package.
Remove a replica.
Restart a replica.
Move a primary replica (optional).
Move a secondary replica (optional).
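As a rough illustration of the chaos scenario described above, the sketch below repeatedly picks one of the listed fault actions at random and applies it to a randomly chosen target. The node and replica names, and the fault functions themselves, are placeholders rather than real Service Fabric calls.

```python
import random

NODES = ["node1", "node2", "node3"]                   # hypothetical cluster nodes
REPLICAS = ["svc-partition0-replica0", "svc-partition0-replica1"]  # hypothetical replicas

# Placeholder fault actions mirroring the list above.
FAULTS = [
    lambda: print(f"restart node {random.choice(NODES)}"),
    lambda: print(f"restart deployed code package on {random.choice(NODES)}"),
    lambda: print(f"remove replica {random.choice(REPLICAS)}"),
    lambda: print(f"restart replica {random.choice(REPLICAS)}"),
    lambda: print(f"move primary replica of {random.choice(REPLICAS)}"),
    lambda: print(f"move secondary replica of {random.choice(REPLICAS)}"),
]

def chaos_run(iterations: int, seed: int = 7) -> None:
    """Compress many random faults into a short, repeatable run."""
    random.seed(seed)          # a fixed seed makes a failing run reproducible
    for i in range(iterations):
        print(f"[{i:02d}]", end=" ")
        random.choice(FAULTS)()
        # A real harness would now verify the cluster and services are still healthy.

if __name__ == "__main__":
    chaos_run(10)
```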
Failover Test
Tests the effect of failover on a specific service partition while leaving the other services
unaffected. Faults include:
Restart a deployed code package where the partition is hosted.
Remove a primary/secondary replica or stateless instance.
Restart a primary or secondary replica (if a persisted service).
Move a primary replica.
Move a secondary replica.
Restart the partition.
Using Azure Service Fabric for Microservices Implementation
Azure Service Fabric runs on Azure, on-premises, or in any cloud (even in third-party hosted clouds like AWS). It supports Windows and Linux, and it can run any Windows application in your cluster, not only Service Fabric-aware apps.
Azure Service Fabric improves reliability by adding redundancy to application deployments across multiple nodes (virtual machines). A Service Fabric cluster is a group of five or more VMs that provides a guarantee against node-level failures.
Azure Service Fabric can be used for any service type.
Stateless Services
A service that doesn't hold any state between its requests and responses.
We need either caching or external storage to hold the state.
Configure two or more instances for any stateless service. These instances are
automatically load-balanced.
Each of these instances will be deployed to different nodes in the cluster.
If any instance failure is detected, the runtime automatically creates a new instance on another node in the same cluster.
Stateful Services
A stateful service is modeled as a set of one primary and many active secondary
replicas.
These replicas consist of two things: an instance of the application code and a locally held copy of the service state.
Co-location of code and state makes the model powerful, as it results in low latency.
All data read and write operations are performed at the primary replica, and writes are replicated to the active secondaries.
If any replica goes down, Service Fabric Runtime automatically replaces it with a new
replica on another node.
If the primary fails, a secondary replica takes over as the primary and a brand-new replica is added as a secondary.
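The sketch below mimics the replication and failover behavior described above with plain Python objects: writes go to the primary and are copied to the secondaries, and when the primary is lost a secondary is promoted and a fresh secondary is rebuilt. This is only a conceptual model of what the Service Fabric runtime manages for you; the node names are made up.

```python
class Replica:
    def __init__(self, node: str):
        self.node = node
        self.state = {}   # the locally held copy of the service state

class StatefulService:
    """Toy model of one primary plus active secondary replicas."""
    def __init__(self, nodes):
        self.primary = Replica(nodes[0])
        self.secondaries = [Replica(n) for n in nodes[1:]]

    def write(self, key, value):
        # All writes go to the primary and are replicated to the secondaries.
        self.primary.state[key] = value
        for secondary in self.secondaries:
            secondary.state[key] = value

    def fail_primary(self, spare_node: str):
        # A secondary takes over as primary; a fresh secondary is rebuilt
        # from the new primary's state and added on another node.
        self.primary = self.secondaries.pop(0)
        replacement = Replica(spare_node)
        replacement.state = dict(self.primary.state)
        self.secondaries.append(replacement)

if __name__ == "__main__":
    svc = StatefulService(["node1", "node2", "node3"])
    svc.write("cart:42", ["book", "pen"])
    svc.fail_primary("node4")
    print(svc.primary.node, svc.primary.state)   # state survives the failover
```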
Cloud and DevOps
The Digital Innovation Economy
What is the relationship between Cloud Computing and DevOps? Is DevOps really
just “IT for the Cloud”? Can you only do DevOps in the cloud? Can you only do cloud using
DevOps? The answer to all three questions is “no”. Cloud and DevOps are independent but
mutually reinforcing strategies for delivering business value through IT.
To really understand the relationship between cloud and DevOps, it’s helpful to take a
step back and consider the larger context in which both are happening. Cloud and DevOps
have evolved in response to three fundamental societal transformations.
First, we are in the midst of a transition from a product economy to a service
economy. People are placing less emphasis on things and more emphasis on experiences.
While companies still produce products, they wrap them inside services. BMW includes
routine maintenance in the price of a new car. Cadillac integrates the OnStar service into its
vehicles. Much of the power of the iPhone comes from its integration with iCloud and
iTunes.
The transition from products to services is impacting software delivery as well.
Previously, development companies built software products, and delivered them to customers
who took responsibility for operations. With the advent of cloud, the majority of companies
that build software also operate it on their customers’ behalf.
Software as service is happening at all layers of the IT stack. At the bottom,
Infrastructure-as-a-Service delivers on-demand virtual machines, networks, and storage.
Platform-as-a-Service delivers on-demand databases, caches, workflow engines, and
application containers. Software-as-a-Service delivers on-demand business functionality. At
every level, providers allow customers to consume services based on demand, pay for them
based on consumption, and offload responsibility for their management to the provider.
Second, the 21st-century business environment is forcing companies to shift their
focus from stability and efficiency to agility and innovation. The pace of disruption is
accelerating. Kodak lasted a hundred years before having to face the erosion of its place in
the market. Microsoft, on the other hand, felt the ground shift under its feet after only thirty.
Apple has gone from the most valuable company in the world to a question mark in just a
couple of years.
In order to present an adaptable face to the market, companies need to change their
approach to work. They need to shorten work cycles, increase delivery frequency, and adopt
an attitude of continual experimentation. Social media is shifting power from producers to
consumers. Marketing is changing from a process of driving behavior to one of responding to
it. From the corporation as a whole down to the individual employee, companies need to
empower creative responsiveness, and minimize any waste that impedes the ability to act on
it.
Third, the digital dimension is completely infusing the physical dimension. Is your
car a vehicle made of metal and plastic, or is it a Pandora music-service client device? Is your
office building HVAC system a marvel of fluid dynamics, or a marvel of Big Data? Is your
local library a place to find books on the shelf, or a place to look them up online? The
answer, of course, is “yes”.
Digital infusion dramatically raises the stakes for IT. We’re reaching the point where
daily activities are becoming impossible without digital technology. Companies depend on IT
for their very existence. IT can’t afford to fail at providing a compelling platform for the
adaptive business.
Enabling Agility
What do these transformations have to do with Cloud or DevOps? Cloud is a direct
response to the need for agility. Initially, people saw cloud primarily as a way to save money
and transfer CapEx to OpEx. They’ve since come to realize that its real value lies in reducing
waste that impedes speed and defocuses effort. Very few companies would identify data
center operations as part of their core value proposition. Cloud services let IT departments
shift their focus away from commodity work such as provisioning hardware or patching
operating systems, and spend it instead on adding business-specific value.
The transformation from product to service economy, along with digital infusion, means that companies need to become software service providers as well as consumers. I've
reached the point where 99% of my interaction with my bank takes place via their website or
mobile app. I judge their brand by the quality of our digital interactions. I judge those
interactions across the dimensions of functionality, operability, and deliverability. I expect
seamless quality across all three dimensions.
Cloud enables greater business agility by making IT infrastructure more pliable. It lets
companies conduct digital service relationships with their customers. Cloud is only part of the
answer, though, to the question of how IT enables adaptive businesses. Whether an IT
organization runs a company’s applications on data center hardware, or on a private or public
cloud, it still needs to align itself with the business’ needs, rather than forcing the business to
align itself with IT's. Silo-based organizations and manual processes still create waste that impedes the ability to deliver continuous change and conduct continuous experiments.
Onerous, time-consuming, arms-length change management procedures still generate
resentment and frustration, and lead users and developers alike to seek ways to get around IT
altogether.
IT Ops organizations often get tagged with the unfortunate moniker “The Department
of No”. Frustrated businesses used to tag development with this same moniker. The Agile
Development movement has made great strides towards creating mutually trusting
relationships between business and development. Agile comes in many flavors, and has its
own imperfections. At its root, though, Agile is about tuning development to be receptive
rather than resistant to change.
The Inseparability of Functionality and Operability
From a DevOps perspective, the most important implication of Software-as-Service is
the way in which it dissolves the separation between function and operation. Users
experience them as seamless aspects of a unified whole. At the same time as they expect high
levels of functional and operational quality, users also expect service providers to deliver
continuous change on top of that quality platform.
These expectations necessitate a fundamentally different approach to delivering
software. Separating development from operations clashes with the outside-in view of
inseparability. Function + operations maps more naturally to Development + Operations.
DevOps is exactly that. DevOps represents an effort to accomplish the same mutually trusting
relationship for Software-as-Service as Agile has done for software as product. Agile has
taught development how to move at the same speed and with the same flexibility as business;
DevOps tries to teach operations to move at the same speed and with the same flexibility as
development. Success in the 21st century requires radical alignment of goals, viewpoints,
language, and cadence from marketing all the way through to operations.
Cloud And DevOps Aren’t Just for Web Apps
What about IT organizations that work in regulated industries? Can they not use
cloud? What about IT organizations that primarily operate commercial software instead of
doing their own custom development? Can they not adopt cloud or DevOps? The answer to
both questions is “yes they can”.
IT organizations need pliable infrastructure all the way from development to
production. Centralized, shared development and test environments generate tremendous
waste through polluted test data and contention for resources. IT organizations needn’t wait
for the ability to use cloud in production, whether private or public. They can use tools like
Vagrant and Docker to improve productivity on top of desktops and shared test infrastructure.
Organizations that manage commercial software still need to coordinate function and
operations. They also need to reliably, frequently deliver change, even if that change consists
of business rules configurations. Production support needs to understand the totality of
change, from business rules at the top down to infrastructure at the bottom. These kinds of
organizations can benefit from cross-functional collaboration, comprehensive version control,
and automation just like any other.
The truth is, though, that a digitally infused service economy may make pure
commercial application support IT a thing of the past. As IT becomes more essential to
business value, more companies will need to invest in some amount of custom development,
if only at the integration and API level. Inseparable development and operations practices are
universally relevant.
Cloud computing, Agile development, and DevOps are interlocking parts of a strategy
for transforming IT into a business adaptability enabler. If cloud is an instrument, then
DevOps is the musician that plays it. Together, they help IT shift its emphasis from asking
questions like “how long can we go without an outage?” to “how often can we deliver new
functionality?” or “how quickly can we deploy a new service?"
Adopting Cloud and DevOps
If you are a “legacy” IT organization that is struggling to adapt to new business
demands, how do you take up cloud or DevOps? Do you need to shatter your org chart, or
make a massive investment in deploying a private cloud? The principle of continuous
improvement is key to Agile, cloud, and DevOps. It should guide your approach to adopting
them as well. Continuous improvement speaks about “starting where you are.” The truth is
that there’s nowhere else to start.
Adaptive business is about always asking questions of oneself:
What’s changed since we last looked?
How can we get better?
What can we do differently?
What haven’t we thought of?
Adaptive IT is about asking the same kinds of questions. Rather than reacting to new ideas or
methodologies with “we can’t because…”, instead ask yourself:
Why can’t we?
How can we?
What’s the first step towards getting from here to there?
What can we stop doing?
These questions may lead you to do things like experimenting with a cloud platform for a
single test environment, or inviting InfoSec staff to standups for a single project. Learning
what works, and how it works for your organization, shows you how to propagate it more
widely.
No Time To Waste
Just like the business as a whole, IT needs to engage in continuous experimentation.
Public clouds like AWS, along with Shadow IT, are pulling businesses away from internal IT
departments. The time for fighting to retain control is past. IT’s need to change from being
the Department of No to being the Department of Look Over Here is urgent. Cloud and
DevOps are two enabling practices that can help IT address the larger transformative shifts – the service economy, continuous disruption, and digital infusion – that are driving business in
the 21st century.