Dynamics 365 Implementation Guide v2
Success by Design
Microsoft Dynamics 365
Success by Design Implementation Guide
All rights reserved. No part of this book may be reproduced or used in any manner
without permission of the copyright owner.
Azure Cosmos DB, Bing, BizTalk, Excel, GitHub, Hololens, LinkedIn, Microsoft 365,
Microsoft Access, Microsoft Azure, Microsoft Dynamics 365, Microsoft Exchange Server,
Microsoft Teams, Office 365, Outlook, Power Automate, Power BI, Power Platform,
PowerApps, SharePoint, SQL Server, Visio and Visual Studio are trademarks of Microsoft
Corporation and its affiliated companies. All other trademarks are property of their
respective owners.
The materials contained in this book are provided “as-is” and Microsoft disclaims all
liability arising from you or your organization’s use of the book.
Information contained in this book may be updated from time to time, and you should
refer to the latest version for the most accurate description of Microsoft’s FastTrack for
Dynamics 365 program.
Case studies used in this book are fictional and for illustrative purposes only.
First Edition
ii
This book is dedicated to Matthew Bogan, our colleague and friend. The way you think
and the way you approach everything you touch embodies a growth mindset—that
passion to learn and bring your best every day to make a bigger difference in the world.
Thank you for being a shining example.
iii
What’s inside
Strategy
1 Introduction to Implementation Guide
5 Implementation strategy
Initiate
6 Solution architecture design pillars
8 Project governance
Implement
10 Data management
12 Security
Operate
20 Service the solution
21 Transition to support
Conclusion
Acknowledgments
Appendix
1
1 Introduction to Implementation Guide
“Our success is dependent on our customers’
success, and we need to obsess about them—
listening and then innovating to meet their
unmet and unarticulated needs. No customer of
ours cares about our organizational boundaries,
and we need to operate as One Microsoft to
deliver the best solutions for them.”
– Satya Nadella, Chief Executive Officer of Microsoft
Overview
Microsoft believes that every business is in the
business of creating great customer experiences.
Introduction
Help readers understand what an
investment in Dynamics 365 invites
in terms of enabling holistic, unified
digital transformation.
In this chapter, we explore how the shift from the reactive past to the
predictive era is currently affecting our industry—and will eventually
affect every industry. We also look at Microsoft’s vision for Dynamics
365 and how our apps allow holistic, unified digital transformation. In
our view, these topics are essential to anyone about to embark on an
implementation of Dynamics 365.
Every industry
For companies that “grew up” in the predictive era, that paradigm is
second nature. But for mature enterprises, the path can be more difficult, as it requires transformation of existing capabilities and processes.
For Microsoft, this isn’t just theory or prediction. We have been pursuing
our own transformation and have faced many of these challenges
ourselves. At the heart of our transformation is the way we build products.
Instead of the multiyear product cycle of traditional, on-premises
products, Dynamics 365 is delivered as a service that’s continuously
refined and always up to date. It also allows us to capture rich telemetry
on bugs and usage to inform ongoing feature development.
Beyond its influence on the way we run our own business, this digital
transformation paradigm has affected how we think about success for
our customers. Microsoft’s vision has always been about democratizing
Fig. 1-2: Microsoft's data-first cloud strategy for the predictive era, built on data ingestion (a planet-scale foundation), Azure, and the Power Platform.
The opportunity contained in the shift from reactive to predictive—know-
ing that the car is on its way to failure before it fails—is hard to overstate.
For Microsoft and for our customers, every product, service, and business
process is ripe for reinvention. But the reinvention requires a wholesale
reimagination—a literal turning upside down—of the systems that power
the interconnected fabric of products, services, and applications.
Innovate everywhere
Cloud-based business apps for holistic, unified digital transformation: Dynamics 365 and the Power Platform (Power Apps, Power Automate, Power BI, Power Virtual Agents), built on Microsoft Azure with identity, security, management, and compliance.
Fig. 1-5: The digital feedback loop: intelligent systems and experiences, powered by data and AI, that engage customers, empower people, optimize operations, and transform products.

Keeping the opportunities of the predictive era top of mind, we want to take a step back and focus on what Microsoft got right with Dynamics 365, further explain our data-first strategy in terms of Dynamics 365, and provide advice to customers embarking on an implementation of Dynamics 365.

Looking back, it's clear that Microsoft's company-wide push into the cloud set in motion the trajectory:
Dynamics 365 is a portfolio of business applications that meets organizations where they are—and invites them to digitally transform.

▪ Dynamics 365 provides front-office and back-office cloud applications that are consumable in a composable manner, which means that our customers can maximize their investment without having to do an extensive rip and replace of what were historically behemoth customer relationship management (CRM) and enterprise resource planning (ERP) processes.
▪ Dynamics 365 is built upon the low-code Power Platform to enable pro and citizen developers to build solutions, automate processes, and generate insights using Power Automate, robotic process automation (RPA), and Power BI—which no other vendor can offer with its core platform.
▪ All of these are natively built on Azure, the cloud with an unmatched level of security, trust, compliance, and global availability.
▪ Add to this the assets of Office 365 (such as Microsoft Teams, LinkedIn, and Bing), which can be used to enrich the business application experience.
Advice to customers embarking on an implementation of Dynamics 365
Microsoft’s ideas on what customers should keep doing—whether they
are embarking on an implementation of Dynamics 365 or any other busi-
ness application—are detailed later in this book, but they include many
of the traditional activities, such as business-process definition, quality-
requirements management, end-user readiness, and change management.
From a cloud services perspective, this old, on-premises implementation style needs to be jettisoned in favor of agility.

The ability to start quickly in the app allows organizations to bring value to their business in an accelerated fashion. The focus of project teams should be on finding the function or area of the application that the business can start using as quickly as possible, and then expand and enhance from there. The application and the business are going to change anyway, so it is in the best interest of organizations to claim value early.
More pointedly, Microsoft’s view is that being agile does not mean
engaging in sprint cycles that still require a 9-month to 18-month team
effort to deploy part of the application to end users.
…want to harness it so that they can disrupt the market—are the business application projects that Microsoft wants to be a part of. This is where we believe the world is going, and our products reflect that.

The journey forward

We understand that every organization comes with a unique set of challenges, opportunities, and priorities. No matter where you are in your digital transformation, Implementation Guide offers the
guidance, perspective, questions, and resources to help you drive a
successful Dynamics 365 implementation, and harness data for insights
that lead to the right actions and the right business outcomes across
any business process.
One of the primary goals of this book is to democratize the practice of Success by Design by making it available to the entire community of Dynamics 365 implementers—integrators, independent software vendors (ISVs), and customers—as a means to better architect, build, test, and deploy Dynamics 365 solutions.

For our customers, Microsoft recognizes that Success by Design doesn't guarantee implementation outcomes, but we're confident that it will help you achieve your project's goals while enabling the desired digital transformation for your organization.

For our partners, Microsoft is confident that Success by Design, coupled with your implementation methodology and skilled resources, will increase your team's effectiveness in delivering successful Dynamics 365 projects to your customers.
For the first time in the product’s history, such questions have led to
the successful transition of 100 percent of Microsoft Dynamics 365
online customers to one version, to a dependable and safe deployment
process, to a steady and coherent feature release cadence (including
many overdue feature deprecations), and to a reliable and performant
platform that delivers customer value. But product is only one half of the story: customer success is the precursor to every Dynamics 365 product development decision.

Microsoft has put equal emphasis on the need to provide you the prescriptive guidance and recommended practices to ensure a smooth implementation project and a properly designed and built Dynamics 365 solution that successfully transforms business operations. This push has also resulted in the transformation of Microsoft's FastTrack for Dynamics 365 service, which makes certain that our community of Dynamics 365 implementers—customers and partners—has access to Success by Design.
This chapter focuses on the fundamentals and practice of Success by
Design and its desired result: Dynamics 365 projects whose technical
and project risks are proactively addressed, solutions that are roadmap
aligned, and project teams that are successful in maximizing their
organization’s investment in Dynamics 365 cloud services.
Make Success by
Design your own
Enabling Dynamics 365 project teams to practice Success by Design
means an ongoing commitment by Microsoft to provide the community of implementers—customers and partners—best-in-class business
applications, along with the latest in Success by Design project resources.
To this end, this chapter focuses on the what and the why of Success by
Design, as well as how project teams can use it to accelerate customer
success throughout the implementation of Dynamics 365.
Success by Design
objectives
Success by Design is prescriptive guidance (approaches and recommended practices) for successfully architecting, building, testing, and deploying Dynamics 365 solutions. Success by Design is informed by
the experience of Microsoft’s FastTrack program, which has helped our
customers and partners deliver thousands of Microsoft’s most complex
Dynamics 365 cloud deployments.

In this book, we use the term "Finance and Supply Chain Management" to refer to the collective category of Dynamics 365 apps that includes Finance, Supply Chain Management, Commerce, Human Resources, and, in some cases, Project Operations.

Success by Design reviews are exercises in reflection, discovery (observation), and alignment (matching to known patterns) that project teams
can use to assess whether their implementation project is following
recommended patterns and practices. Reviews also allow project teams
to identify (and address) issues and risks that may derail the project.
By the Prepare phase, the solution has been built and tested and the
project team is preparing for the final round of user acceptance testing
(UAT) and training. Additionally, all necessary customer approvals have
been granted, information security reviews completed, the cutover
plan defined (including go/no-go criteria), mock go-lives scheduled,
the support model ready, and the deployment runbook completed
with tasks, owners, durations, and dependencies defined. At this point,
the project team uses the Success by Design Go-live Readiness Review
to identify any remaining gaps or issues.
In the Operate phase, the customer solution is live. The goal of this
phase is stabilization and a shift in focus towards functionality and
enhancements that are earmarked for the next phase of the project.
The Solution Blueprint Review serves as the starting point of Success by Design. We suggest that the Solution Blueprint Review be a mandatory review for the project because findings that come from it lead to Implementation Reviews, which offer project teams the opportunity to drill down into topic-specific areas where deeper dives are deemed necessary for further understanding project and solution risk. Finally, the Go-live Readiness Review, which we also suggest as a mandatory review, is the last stop for assessing any remaining risks before go-live.

Later chapters of this book will cover each Success by Design review in greater detail. For the most comprehensive coverage of Success by Design, refer to the Success by Design training on Microsoft Learn.

The primary review outputs fall into two related categories: findings and recommendations.
(many with high transaction volumes). Additionally, the requirements
point to a business-mandated customization that the project team
agreed could not be achieved using out-of-the-box functionality.
Fig. 2-4: Review outputs. A review produces findings (risks, assertions, and issues) and recommendations, tracked as success measures.

Why are success measures important? Success measures are important because they provide access to micro and macro project health trends. Tracking success measures for a single project allows stakeholders to assess the overall health of the project at a glance. Similarly, the benefit of tracking success measures over 10, 20, or 100 projects is that Microsoft, the partner, or the customer project…
Conclusion
Success by Design equips project teams with a model for technical and
project governance that invites questions and reflection, which leads to
critical understanding of risks that might otherwise go unnoticed until
too late in the project.
Considering the pace of cloud solutions and the investment that organizations make in Dynamics 365 software and implementation costs, even the most experienced customers and partners with the best methodologies will benefit by incorporating Success by Design into their projects.

If you're using Success by Design within your Dynamics 365 implementation project, we want to hear from you! To share your Success by Design feedback and experiences, reach out to us at [email protected].

As covered in Chapter 1, Microsoft believes that every business is in the business of creating great customer experiences. To achieve that,
business applications must do more than just separately run your back
office, marketing, supply chain, or even field operations. They must give
you the agility to remove every barrier in your way. When this happens,
business applications become more than just operational solutions.
They become resilient solutions that enable the digital feedback loop
and adapt to customer demands, delivering stronger, more engaging
experiences around the products you make and services you provide.
Case study
An inside look at the
evolution of Success by Design
When our Solution Architects got their start with Dynamics 365, they
found themselves hooked on the challenge of effectively aligning an
organization’s business processes and requirements with the (software)
product in hopes of delivering solutions that resulted in actual value
for users.
The content of this case study is based on interviews conducted with Dynamics 365 FastTrack Solution Architects. The goal is to provide readers with an inside look at Success by Design. We hope you enjoy this insider view.

Because of this, FastTrack Solution Architects also know the challenge of working with people, understanding the business, shaping software to meet the needs of the business, and delivering a solution that is accepted by users. They know that projects don't always go well and often fail.
It’s for exactly this reason that the FastTrack for Dynamics 365 team
and Success by Design were created.
When the FastTrack for Dynamics 365 team was formed, these goals
were initially achieved without the use of a framework. In other words,
FastTrack Solution Architects used their experience and connection to
the product team, but this approach contained too many variables to
be consistently reliable.
With Success by Design, FastTrack joins the customer, our partners, and
the product team in a manner that ensures alignment between all three.
The more project teams invest in Success by Design, the more they will
get out of it.
▪ Cloud implementation
▪ Customize and extend cloud applications
▪ Operate in the cloud
▪ Evergreen cloud
▪ Upgrade from on-premises to the cloud

Whatever model an organization chooses, we find that the key to success isn't what implementation looks like, but rather how leaders, architects, and entire companies approach the digital transformation journey. Success in the cloud isn't just about the technology or the features available—it's about the organizational mindset. For a successful digital transformation, your organization must prepare for changes that span the entire enterprise to include organizational structure, processes, people, culture, and leadership. It's as much a leadership and social exercise as it is a technical exercise.
(See Smarter with Gartner, "Cloud Shift Impacts All IT Markets.")

We start by addressing how to develop a cloud mindset in an organization, which is foundational to your digital transformation. Then we delve into factors to consider and understand before you shift to a shared public cloud from an on-premises setup. These include how the shift will impact controls you have in place and how to think differently about scalability, performance, shared resources, and more. After that, we discuss the principles around customization and extending cloud apps. Finally, we explore the operating model, always up-to-date evergreen cloud models, and the options to migrate to the Dynamics 365 cloud from on-premises.
The changes we’re talking about aren’t triggered by the cloud or tech-
nology. These are thrust upon us by the changing business landscape.
Adopting a cloud mindset, then, is about transforming your processes,
people, culture, and leadership in such a way that your organization can
embrace change quickly and successfully. We believe the fundamental
organizational characteristics that will determine success in this environ-
ment come down to focusing on delivering business value through a
secure cloud technology platform. This includes the ability to harness the
data that is captured to generate insights and actions. This platform will
also rely on automation to quickly react to changing needs.
hinder growth or impede business. The role of technology is to deliver
business value by driving efficiency.
…ments and new capabilities, monitoring, and support. These functions are all still required, but with a key difference: IT is now much closer to the business application layer and has a greater opportunity to focus their energy on delivering positive business outcomes.
Businesses likely already have the data to do this, but may lack the technology to turn data into practical insight. For example, Dynamics 365 Commerce taps your systems to gather customer intelligence from a myriad of sources (such as social media activity, online browsing habits, and in-store purchases) and presents it so that you can shape your service accordingly. You can then manage product recommendations, unique promotions, and tailored communications, and distribute them across all channels via the platform.

Automate the routine tasks so you can focus on creative, more challenging work.
The “If it ain’t broke, don’t fix it” mentality common in large organizations, where business applications can remain unchanged for up to 10 years, puts the enterprise at risk. Unless your business is a monopoly that can sustain itself without investing in technology, the hard reality is that most businesses today won’t survive without strategic plans for digital transformation.
These scenarios all help illustrate how a transition to the cloud can play
an instrumental role in refocusing technology to deliver business value.
Driven by data
This disruptive innovation we’re seeing across several industries is
fueled by data. Almost everything around us generates data: appli-
ances, machines in factories, cars, apps, websites, social media (Figure
3-1). What differentiates one company from another is the ability to
harness this data and successfully interpret the information to generate
meaningful signals that can be used to improve products, processes,
and customer experiences.
Overall, the intelligence and insights you generate from your data will be proportional to the quality and structure of your data.
Think platform

You’re investing in the platform, not just the application.

An organization can take several approaches towards digital transformation. In many cases, you start with a single application being deployed to a SaaS cloud. A key responsibility of IT decision-makers and enterprise architects is to deliver a cloud platform for their organization’s digital transformation. Individual applications and their app feature sets are important, but you should also look at the larger picture of what the cloud platform offers and what its future roadmap looks like. This will help you understand if the platform can meet the short-term and long-term objectives of your digital transformation.
Thinking about the platform versus a single app offers clear benefits.
You avoid reinventing the wheel with each additional application,
and you instead deliver a foundation approved by governing bodies,
with clear patterns and practices. This approach limits risk and brings
a built-in structure that enables reuse. Platform thinking doesn’t nec-
essarily mean that other cloud platforms or legacy applications have
to be rebuilt and replaced. Instead, you can incorporate them as part
of an all-encompassing platform for your organization, with well-defined patterns and swim lanes that enable integration and flow of data
between applications. Bringing this “systems thinking” to deliver a
platform for business applications can drastically reduce the amount
of time lost getting security and design approvals for individual apps,
thereby improving your agility and time to value.
The biggest selling point of the cloud is to stay current and quickly deploy
software. You can’t rely on teams to manually test and deploy each and
every change to achieve that vision.
Traditionally, a time lag existed between writing code, testing it, and deploying it. A bug in production meant repeating the lengthy cycle. The fear of
breaking code meant that teams tended to delay updates for as long
as possible. With automation, you deploy fast and potentially fail fast.
This is where the culture of fail fast comes in. Automated processes
help companies empower their teams to take risks. You fail but quickly
release a fix. Over time, failures decrease and success rates improve,
instilling confidence in teams to deploy and innovate, and delivering
value to the business faster. At the core of the cloud mindset is un-
derstanding and embracing DevOps. DevOps is the union of people,
process, and technology to continually provide value to customers.
DevOps is a cultural change in which we break down silos between developers and administrators, and create shared responsibility for the quality of the product.
The idea is to enable development, IT, and quality and security teams
to collaborate to produce more reliable and quality products with the
ability to innovate and respond more quickly to the changing needs of
business. Implementing DevOps inherently depends on automation:
automating the build process to enable continuous deployment,
regression tests to improve speed and reliability, and administrative
actions like user and license management.
If a change includes a bug, the build fails, and the cycle repeats. The automation allows for changes to happen quickly without unnecessary overhead—a team can focus on the business logic and not the infrastructure. Therefore, with CD we always have the most up-to-date working software.

Fig. 3-2: The continuous DevOps cycle, from plan and code through monitor.
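The fail-fast gate described above can be sketched in a few lines. This is purely illustrative, not Microsoft tooling: a real pipeline would run in a CI/CD service such as Azure DevOps or GitHub Actions, and the test runner and deploy step below are hypothetical placeholders.

```python
# Minimal sketch of a fail-fast CI/CD gate (illustrative only).

def run_regression_tests(changes):
    """Stand-in test runner: any change flagged 'buggy' fails the suite."""
    return all(not c.get("buggy", False) for c in changes)

def deploy(changes, environment):
    """Hypothetical deployment step; returns a status string."""
    return f"deployed {len(changes)} change(s) to {environment}"

def ci_gate(changes, environment="production"):
    # Fail fast: a red test suite blocks the release and the cycle repeats.
    if not run_regression_tests(changes):
        return "build failed: fix and re-run the pipeline"
    # Green suite: the release proceeds automatically, with no manual steps.
    return deploy(changes, environment)

print(ci_gate([{"id": 1}, {"id": 2}]))       # release proceeds
print(ci_gate([{"id": 3, "buggy": True}]))   # release is blocked
```

The point of the sketch is the ordering: the automated test suite is the gate, so a failure costs one quick pipeline run instead of a bug discovered in production.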
Cloud implementation
considerations
Implementing solutions from cloud SaaS products can reduce man-
agement overhead and allow you to deliver business value more
quickly by focusing on the application layer. Fundamental application
considerations, however, still apply to SaaS cloud applications. Security, scalability, performance, data isolation, limits, and capacity are still critical, but may call for a different approach than they would for an application deployed on-premises. This section focuses on some of these considerations for SaaS cloud apps.
Fig. 3-3: Pipeline automation. An initial build pipeline instantiates a pristine development environment daily; the build pipeline automates the manual steps (no more need to upload to the solution checker and manually export the solution, unpack it, and push it to the repo); and an automated release pipeline removes the remaining manual steps, so weekly, daily, or hourly releases become the new standard.
The customer is still responsible for securing access to environments and application data.
The IT information security team should clearly understand the
boundaries of these shared responsibilities to ensure the following:
▪ The SaaS service provider meets the organizational security, privacy, and compliance requirements. This is usually done at the outset to approve the platform for use, supported by a regular review or audit process to ensure continued compliance.
▪ The security team is aware of the controls and configurations required to govern access to data and fulfill the customer security responsibility. These could include defining the data loss prevention policies, access control policies, and environment management policy. They usually have a default setup with a process to manage deviations for specific apps.
▪ The governance process makes sure the application-level security requirements are met. This is primarily managed by the application teams and driven by business requirements for each app deployment. You might have additional checks before deployment to ensure compliance.
Scalability
The scalability of the SaaS platform is a key consideration for business
applications. Being able to scale out and scale up to support seasonal
workloads or spikes in user activity will impact the overall user experi-
ence (both staff and customers) and effectiveness of business processes.
One option for businesses looking for more reliability and security is to use a private connection. Cloud providers offer dedicated channels, for example Azure ExpressRoute. These connections can offer more reliability, consistent latencies, and higher security than typical connections over the internet.

You can test latency using the Azure Latency Test and Azure Speed Test 2.0 for each datacenter.
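As a rough illustration of what such a latency check measures, the sketch below times repeated probes and reports the median round trip. It is an assumption-laden example: the probe is a placeholder you would replace with a real request to the regional endpoint you care about, and a no-op stand-in is used here so the sketch runs anywhere.

```python
# Illustrative latency probe; the probe callable is a placeholder, not an
# official Microsoft test endpoint.
import time
import statistics

def measure_latency(probe, samples=5):
    """Time repeated calls to `probe` and return the median latency in ms."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        probe()  # e.g. an HTTPS HEAD request to the datacenter region
        timings.append((time.perf_counter() - start) * 1000.0)
    # Median is less sensitive to one-off network spikes than the mean.
    return statistics.median(timings)

# Example with a no-op stand-in probe so the sketch runs offline:
print(f"median latency: {measure_latency(lambda: None):.2f} ms")
```

Taking several samples and reporting the median, rather than a single reading, is what makes per-datacenter comparisons meaningful.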
The other aspect of sharing resources and infrastructure in the cloud is the
possibility of monopolizing resources or becoming a noisy neighbor. While
rare, these cases are more often caused by poorly developed runaway
code than a legitimate business use case. Automated monitoring and
protection throttles are built into the Dynamics 365 platform to prevent
such situations, so it’s important to understand and comply with these
service boundaries and throttles when designing for cloud platforms.
Service protection
and a capacity-based model
Service protection and limits are used in cloud services to ensure
consistent availability and performance for users. The thresholds don’t
impact the normal user operations; they’re designed to protect from
random and unexpected surges in request volumes that threaten the
end user experience, and the availability and performance characteris-
tics of the platform or solution. It’s crucial to understand the protection
limits and design for them with the appropriate patterns, especially
around high-volume workloads like integrations and batch processing.
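One common pattern for designing around these limits is retry with backoff, honoring the retry hint a throttled request returns (Dataverse-style APIs signal throttling with HTTP 429 and a Retry-After header). The sketch below is a hedged illustration, not SDK code: `Throttled` and the fake endpoint are stand-ins for your actual HTTP client and service.

```python
# Illustrative retry-with-backoff for service protection limits.
import time

class Throttled(Exception):
    """Stand-in for an HTTP 429 response carrying a Retry-After hint."""
    def __init__(self, retry_after):
        self.retry_after = retry_after  # seconds suggested by the service

def call_with_retry(send, max_attempts=4, sleep=time.sleep):
    delay = 1.0
    for attempt in range(max_attempts):
        try:
            return send()
        except Throttled as t:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the throttle to the caller
            # Honor the service's hint when present, else back off exponentially.
            sleep(t.retry_after if t.retry_after else delay)
            delay *= 2

# Usage: a fake endpoint that throttles twice, then succeeds.
state = {"calls": 0}
def fake_send():
    state["calls"] += 1
    if state["calls"] < 3:
        raise Throttled(retry_after=0)
    return "ok"

print(call_with_retry(fake_send, sleep=lambda s: None))  # prints "ok"
```

The essential design choice is to treat throttling as a normal, expected signal to slow down rather than as a failure, which keeps high-volume integrations and batch jobs inside the service boundaries.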
For example, Dynamics 365 Commerce customers use the cloud and
edge model, in which the store operations run on-premises (edge)
and the main instance handles centralized back office functions like
finance, data management (such as products and customers), and
analytics (cloud).
…run in parallel, user productivity may be impacted. We want to enable you to scale out and run manufacturing and warehouse processes in isolation so high user productivity is always supported.
Future-proofing
One of the key principles we have established in this chapter and through-
out this book is getting comfortable with change. SaaS enables you to
adopt new features to maintain a competitive edge. These features and
capabilities are built on top of an existing baseline of features and tables.
Although repurposing a standard table or using a custom one may not be unsupported, such deviations can severely impact your ability to take advantage of future capabilities or affect the interoperability of your solution.
…because you benefit from all that expertise. Using standard processes also makes your business less dependent on expensive and hard-to-maintain customizations.
You will also want to know how the ISV handles deprecation notices for phased-out software, and what its update cycles look like.
What to monitor?
Different personas in your company will want to monitor different aspects of the system. End users may be more concerned with responsiveness, admins may be looking at changes, and the business may be looking at KPIs like the time taken to place an order.
Dynamics 365 has two release waves per year in which several incremental enhancements and new capabilities are made available to customers. Adopting these mandatory changes and new features—many of which are included in existing license plans—is a fundamental aspect of operating in the cloud.
Stay engaged
Implementing the system is one thing, but getting continuous return
on investment (ROI) from your cloud solution requires engagement.
Companies who stay passionate and curious about the solution and
adopt the latest features are the ones who enjoy most success. We
urge all customers to maintain engagement through conferences and
events throughout the year, and stay connected by using community
groups. Several official and unofficial communities have been formed by customers, partners, and Microsoft engineering. Take every opportunity to influence the product through active engagement with product leaders, advisory boards, table talks with product managers, preview programs, and collaborative development opportunities.

You can choose from application-specific forums like the Microsoft Dynamics Community Help Wiki, Yammer groups, blogs, events, and how-to videos where you can discuss ideas, learn features, learn roadmaps, and ask questions.
Evergreen cloud
One of the benefits of a modern SaaS solution is that every customer runs on the latest version of the service. It's the only model that scales. With this approach, every security patch, bug fix, performance enhancement, and functional improvement accrues to all implementations across the globe. Microsoft assumes responsibility for keeping the platform current. This means that you no longer have to pull large teams of people together every few years to perform a traditional upgrade over many weeks or months with limited added value for the business. The evergreen approach and the model of continuous updates gives your business access to the latest capabilities to stay ahead of the competition and meet changing customer expectations.
Chapter 20, “Service the solution,” delves further into the update wave approach.
Upgrade from
on-premises to the cloud
Organizations over the years might have invested in on-premises deployments of business applications. These applications could serve the users and business well in their current state, but keeping them current as the business and technology evolve can be increasingly challenging.
Conclusion
Embracing SaaS applications to run your business can significantly
accelerate your digital transformation, but it’s also important to rec-
ognize that organizational cloud maturity will play a significant role in
your strategy’s long-term success.
Adopt a cloud mindset
Have a shared understanding at every level, from the executive sponsor to the developers, of the business impact being delivered.

Ensure that when upgrading existing on-premises solutions to the cloud, the data model, design, and data quality make certain that the application becomes a stepping stone and doesn't stifle cloud adoption by carrying over poor design and data.
One of its key businesses is a chain of retail stores that serves busy
guests across Australia.
Given the high volume of traffic that the retail chain processes and the
diversity of its retail store locations, the company was seeking a hybrid
solution with reliable, flexible connectivity.
Scalability and speed are critical for their retail business, and they
needed an infrastructure design that optimizes for both.
“We serve millions of guests each month,” says the company’s store
systems specialist. “To achieve speed of service, we know that we need
to have something on-premises to retain our workloads in the store.
And, at the same time, we need to have that cloud connectivity to the
back office.”
One of the team’s other top considerations was how easy the infra-
structure would be to manage, so they factored in what the company
was already using across its retail business.
The hybrid deployment model using Azure Stack Edge and Dynamics
365 Modern Point of Sale provides store operations redundancy in case
of network outage.
As the company continues to roll out its new infrastructure, it’s piloting
a solution to recognize vehicles’ number plates based on real-time
CCTV feeds. One benefit of this is loss prevention, in situations where a
number plate is tied to a known offender in a published database.
They also plan to use the in-store cameras to do machine learning and
AI on Azure Stack Edge virtual machines to predict stock levels and
alert the staff, all locally, without the cloud.
Approach to digital
transformation
Throughout this book, we discuss several concepts related to Success
by Design, including how to initiate, implement, and deploy a project,
but the scope of Success by Design isn’t limited to a successful go live
or operational excellence. The long-term goal of Success by Design is to
create the right foundation for an organization to evolve their business
application landscape and expand their digital transformation footprint.
The business model represents the "why" and what it takes to deliver value to the customer; how we go about doing it is the business process definition.
Triggers
Disruptions and inflections in the business model create big opportunities for transformation. In a world that's changing at a pace greater than ever,
businesses have to reinvent themselves more often, improve customer
experiences, and attract and retain talent. The opportunities for impact
are plentiful (for a deeper discussion, see Chapter 1, “Introduction to Implementation Guide”).
Business process
The next stage of the process is discovery, which involves key business
stakeholders. With a holistic view of the evolving business model and
transformation goals in place, you can identify the related business
processes and applications that need to change. The discovery exer-
cise should be focused on creating clarity around how the business
processes and applications identified need to evolve to meet the
corresponding transformation goals.
Change streams
So far, this chapter has discussed the approach to digital transformation and how to develop a transformation plan for your application. That plan is influenced by several change streams:
Business
As we embark on the journey of transformation and start building the
business application based on the goals set by the business, it’s also
important to appreciate that during design and implementation, you
may need to accommodate for further adjustments and refinements.
Embrace the mindset of getting comfortable with frequent and inevitable change. This support for agility and change needs to be fundamentally built into the program. This is where some of the iterative methodologies could be beneficial, enabling us to respond to change without it turning into a disruption.

Those leading or adopting transformational change often find that
it’s not well defined in the early period, so adopting changes quickly
and early is key—but so is flexibility as clarity reveals the resulting
business model. When transformation occurs in an industry, what the
industry will look like following the transformation is often not known.
Companies must be able to adapt quickly in the middle of the project
to incorporate these inevitable changes.
User
A key stakeholder in your transformation plan is the business user, who
interacts with the application daily. If the process being implemented
in the application doesn’t meet the needs of the user, or the applica-
tion doesn’t consider working patterns and end user usage, it could
lead to poor adoption and lack of engagement. Incorporating user
feedback continuously throughout the process using well-defined and
frequent touchpoints is key to achieving your transformation goals.
Application
SaaS application providers that are competing to build and deliver business capabilities for customers are helping advance business applications in a way that was unimaginable just a few years ago. Applications
that were just forms over data (mostly passive data capture systems used
to track, view, and report on known data) have evolved into applications
that can automatically capture data, learn from the data, deliver insights,
and guide users to the next best action. This makes it extremely important for businesses to watch for new capabilities being made available and adopt them to accelerate their transformation and differentiate themselves in
the industry. The key value proposition of SaaS is also the ability to tap
into the enhancements and features that are based on broad market
research. Activating these features can accelerate your transformation
with minimal effort and without any development cost.
External
External economic, social, and political drivers can disrupt your
transformation plan. The COVID-19 pandemic is an example of how
supporting remote work and creating online collaboration channels
became a top priority for most customers in 2020. This required co-
ordinated changes from infrastructure, to network, to the device and
application layer across the IT level of organizations. Although it’s dif-
ficult to foresee and plan for external events, the iterative approach to
delivering continuous value in smaller batches allows you to pause and
pivot as needed. For example, consider an organization on a long-term
transformation program that releases a major capability to users once
a year versus an organization that has adopted the DevOps culture
of delivering value with regular bi-monthly or monthly updates. The
latter company can realize value from investments and is better posi-
tioned to pivot when change demands it.
For example, a sales system that only has the basic features of contact management and opportunity management may be considered essential, whereas a system that can use AI-driven insights with advanced forecasting might be a differentiator that gives you competitive advantage in your industry.

[Fig. 4-1: A quadrant chart categorizing sales-system capabilities as Essential, Enhanced, Differentiator, or Innovative, with examples such as contact management and opportunity management (essential) and predictive forecasting and sequences (differentiator).]

Creating a dashboard for stakeholders that organizes the requirements and features in this fashion can bring great clarity—not just during initial implementation, but for future increments. It's also important to prepare for time-based decay, in which a feature that is considered a differentiator today might be considered essential in a few years. Similarly, something that is innovative or even considered ahead of its time could become a differentiator in the near future. You should plan to revisit and refresh this categorization.

When phasing delivery, you can pick up elements from various quadrants and deliver a holistic solution that has the key process elements, integrations, and reporting that delivers value (Figure 4-3). Look for quick wins: the latest SaaS features that you can easily adopt with little effort to bring innovative and differentiator elements into the earlier phases.

[Fig. 4-3: The feature quadrants overlaid with delivery phases 1 through 4, each phase drawing elements from multiple quadrants.]

Digital transformation is as much about people as the process and technology. The phases and incremental design should also help product owners maintain the right level of engagement from users and stakeholders. This drives excitement and engagement.
In the next section, we look at how the right approach to MVP strategy
when getting started with your digital transformation journey can help
get early feedback and drive value sooner rather than later.
Minimum viable
product strategy
The concept of MVP, which enables you to quickly build, measure,
learn from feedback, and pivot if necessary, has been well established
in the startup world.
Applied to your business application, an MVP helps you create a solution that will help transform your process and deliver on the most important transformation goals with the least amount of effort. Most importantly, an MVP is not the end state, though programs may be stuck in an MVP state for years.

Going back to the example of a sales application, let's consider if users are currently using a homegrown sales system with a form to enter and view opportunity data. The system has a backend to store that data; simply re-creating this sales system would bring little direct value to the business. Alternatively, using embedded intelligence with opportunity scoring and relationship health can help users target the right opportunities and improve their close rate, which directly impacts business.

"…feedback loop with minimum amount of effort."
– Eric Ries, author of The Lean Startup

[Fig. 4-5: Characteristics of an effective MVP: it delivers on goals for transformation; is production ready with supported customization techniques (not a proof of concept); goes beyond the essentials of the existing system; delivers maximum value to the business in the shortest time with minimal compromises; creates the foundation and supports building blocks for success; tests hypotheses and delivers on expectations; follows a test, learn, activate cycle; and is built to pivot and change after feedback.]

Although the MVP approach naturally works with new greenfield implementations, you may have existing apps that are being replaced or migrated to another platform. Your scope should consider the existing functionality to ensure parity, but shouldn't try to mimic or replicate the experiences of the outgoing application. Focus your MVP strategy on delivering the most value to the business sooner without limiting the scope to essentials (Figure 4-5). An MVP with a very broad scope that takes years to deliver may cease to be an MVP.
Drive expansion

[Fig. 4-6: Areas of expansion for a business application: users, mobile, incremental changes, feature adoption, new workloads, satellite applications, integrations, application standardization, aggregation, and pivot.]

All the approaches and techniques we have discussed so far in this chapter can help you create an effective digital transformation roadmap for your business application. They're relevant to the Initiate phase, but you can apply these methods iteratively or revisit them for further expansion. This section focuses on expanding the usage and scope of business applications in different areas (Figure 4-6) to drive even greater business impact, while considering additional impact on the following:
Users
The most common expansion scenario for a business solution is add-
ing more users. This could be the result of onboarding new regions
or countries onto the same application. User-only expansion sounds
straightforward, but you could have data migrations on a live system,
security and compliance challenges based on the region, or other
impacts (Figure 4-7).
Mobile
Ensuring support on multiple devices, at times with offline capabilities, could be critical to adoption. In some cases, this needs to be a part of the initial rollout.

Fig. 4-7: Areas of impact when adding users
▪ Security and compliance: Potentially high impact based on regional regulations.
▪ ALM: If the solution remains the same, impact on ALM is minimal.
▪ Admin and governance: Based on the environment strategy and the creation of additional production environments, you could see medium impact.
▪ User readiness and change management: Full-fledged user change management and readiness effort is required.
▪ Environment strategy: Creation of a new production environment impacts the environment strategy.
▪ Limits and capacity: Although each user usually comes with their own capacity, integration API capacity and storage could be impacted.
▪ Data and integrations: Usually, data migration can be complex on a live system, and in case of additional environments, integrations might have to be replicated across environments.
Incremental changes
Incremental changes are primarily driven through the feedback and
change requests from users and the business. These changes help ensure
that the application meets the expectation of ongoing enhancement
and continues to build on top of the initial MVP to maintain relevance.
It’s still important to look at these improvement requests through the
business value and transformation lens (Figure 4-9).
Feature adoption
As we have seen when discussing change streams, SaaS products make significant investments in researching and implementing business capabilities that their customers can easily enable. This is a key value proposition of the SaaS cloud, and can play a key role in deriving value from your investments. With continuous updates in Dynamics 365, new capabilities become available to evaluate and adopt with every release wave (Figure 4-10).
Fig. 4-8: Areas of impact for mobile
▪ Security and compliance: Potentially high impact based on the regulations and need for mobile device management or mobile application management features.
▪ ALM: If the solution remains the same, impact on ALM is minimal.
▪ Admin and governance: Additional device policies could have an impact.
▪ User readiness and change management: User readiness effort is required to ensure adoption on mobile.
▪ Environment strategy: Should not have any significant impact.
▪ Limits and capacity: Should not have any significant impact on user capacity.
▪ Data and integrations: Custom embedded integrations might require work to make them responsive and account for any variations for mobile consumption.
Fig. 4-9: Areas of impact for incremental changes
▪ Security and compliance: Changes within the approved SaaS service boundaries and data loss prevention policies should have limited security and compliance implications.
▪ ALM: No major impacts if the solution doesn't have new PaaS components.
▪ Admin and governance: No expected impact as long as the data boundaries don't change.
▪ User readiness and change management: User readiness effort is required.
▪ Environment strategy: Usually remains unchanged.
▪ Limits and capacity: Additional integration and app workloads could have some impact.
▪ Data and integrations: Assuming no integration changes, impact is minimal.
[Fig. 4-10: Areas of impact for feature adoption, across the same dimensions: security and compliance, ALM, admin and governance, user readiness and change management, environment strategy, limits and capacity, and data and integrations.]
Satellite applications
When deployed, your business application covers core use cases and
scenarios, some of which may only be relevant to a specific team, region,
or role. In such scenarios, you can deliver such targeted capabilities via a
satellite application. Satellite apps are a good opportunity for user-developed apps and automations while using the data model of the core application. You could also use these applications for incubation before incorporating them into the core app. Regarding areas of impact (Figure 4-12), it's important to have strong governance around the data model and the creation of additional data stores, which can lead to data fragmentation.
Fig. 4-11: Areas of impact for new workloads
▪ Security and compliance: This could be medium impact based on the app; core Dynamics 365 apps might have been already approved.
▪ ALM: App-specific ALM requirements can impact your release and build pipelines.
▪ Admin and governance: Some apps could require additional admin tasks.
▪ User readiness and change management: User readiness effort is required, but existing users will benefit from a familiar UI.
▪ Environment strategy: New apps on existing environments won't have a major impact.
▪ Limits and capacity: Additional apps can impact storage and tenant API capacity.
▪ Data and integrations: Data and integration needs for the new app can have an impact.
Aggregation
Aggregation can help consolidate multiple business applications with
significant overlap of data and processes into a single application.
Aggregation is different from integration—it completely replaces the
application and brings the core feature set into an existing application
instead of exchanging or embedding information. For example, a
commercial banking sales system could become an aggregator for all
non-personal banking sales systems.
Fig. 4-12: Areas of impact for satellite applications
▪ Security and compliance: If the data flows in accordance with data loss prevention policies, impact should be low.
▪ ALM: Managing the lifecycle of satellite apps requires changes to existing ALM processes.
▪ Admin and governance: As the satellite apps grow organically, the appropriate governance policies need to be in place to manage them.
▪ User readiness and change management: User readiness effort is low for citizen-developed community apps, but might require readiness effort if they're adopted more broadly.
▪ Environment strategy: Depending on the ALM and citizen development strategy, the environment strategy is impacted.
▪ Limits and capacity: Potential impact to capacity; you may also need licenses to support the makers and consumers if they're not already covered by existing licenses.
▪ Data and integrations: You should avoid fragmenting the data with additional apps using their own data stores, but it may be required in some cases.
Application standardization
This model of growth is driven by creating a generic application that
isn’t targeted at a specific process but a host of similar processes. The
application could even provide a framework that enables the business
to onboard themselves. This can achieve hyper-scale, in which you can
broadly t-shirt size hundreds of processes or applications that serve
more than one business process.
Fig. 4-13: Areas of impact for integrations
▪ Security and compliance: Depending on the data flows, additional security approvals may be needed for the integration pattern.
▪ ALM: The ALM process could be impacted depending on complexity.
▪ Admin and governance: Additional monitoring and integration telemetry is needed to support reconciliation of failures.
▪ User readiness and change management: Depending on frontend versus backend integration, the impact and readiness effort could vary.
▪ Environment strategy: You may need to have stubs or downstream integration environments.
▪ Limits and capacity: Potential impact to API consumption.
▪ Data and integrations: Expect impact on data movement, and make efforts to ensure integrations are built to standard.
Fig. 4-14: Areas of impact for aggregation
▪ Security and compliance: Level of impact depends on data classification.
▪ ALM: Existing ALM processes are changed.
▪ Admin and governance: Admin processes are consolidated.
▪ User readiness and change management: User readiness is required.
▪ Environment strategy: This will have impact on the environment strategy, requiring a potential merge.
▪ Limits and capacity: Additional users and increased data volumes impact capacity.
▪ Data and integrations: High impact with respect to data alignment and integration consolidation.
Pivot
Pivots aren’t necessarily expansion, but can trigger major change to an
application, which could fundamentally change and further broaden
its scope (Figure 4-16). For example, a targeted Power App used by a
small team could go viral within an organization, and you must pivot
to make it an organization-wide application.
Conclusion
As we have seen throughout this chapter, achieving true digital transformation is not a sprint, but a marathon—it's a continuously evolving journey.
Fig. 4-15: Areas of impact for application standardization
▪ Security and compliance: Once approved for the appropriate data classification, more processes shouldn't have an impact.
▪ ALM: Existing ALM processes shouldn't change for a specific template.
▪ Admin and governance: There could be process-specific configurations required for each new workload on the template.
▪ User readiness and change management: User readiness is required for each new process.
▪ Environment strategy: Should have minimal impact.
▪ Limits and capacity: Additional users and increased data volumes impact capacity.
▪ Data and integrations: There could be an increase in the data generated by each process.
Fig. 4-16: Areas of impact for a pivot
▪ Security and compliance: The application will need to undergo further scrutiny.
▪ ALM: Existing ALM processes will potentially change.
▪ Admin and governance: Admin and governance scope will need to be reevaluated.
▪ User readiness and change management: User readiness might be required.
▪ Environment strategy: Potential changes to the overall environment strategy.
▪ Limits and capacity: Additional users and increased data volumes will impact capacity.
▪ Data and integrations: Expect high impact to data and integration based on the pivot.
Going live with your MVP is the first step to driving value. You will iter-
atively refresh your transformation roadmap, revisit and refresh your
program goals, adopt new product capabilities, and drive meaningful
expansion while staying true to the core program value and transfor-
mation goals. Following this approach can establish your organization
as a leader in your industry.
The holistic value of SaaS is much broader; it goes beyond just the
technology and features of your application. You’re investing not only
in the technology but also in the research and innovation that drives
the product changes. Simplifying experiences and improving interop-
erability further reduces costs and drives business innovation.
Change streams
Account for the various change streams that will impact the program, and ensure budget and timelines are adjusted to accommodate them.
In this chapter, we explore common industry methodologies and look at common deployment strategies and high-level views on change management strategy.
This helps deliver a solution aligned with business needs and allows for change management and higher user adoption.
Some examples of success metrics/KPIs include:
▪ Improve the opportunity conversion rate from x% to y% over z period.
▪ Reduce the sales cycle from x days to y days over z period.
▪ Increase net new customer adds from x to y over z period.
▪ Reduce average call handling time from x minutes to y minutes over z period.
▪ Improve service request turnaround time from x days to y days over z period.
The KPIs mentioned above are not exhaustive; customers have specific KPIs that cater to their business scenarios. It is important to acknowledge that each KPI the customer identifies and prioritizes has a direct impact on the functionality that will be deployed in the Dynamics 365 Business Applications solution. Consequently, having a definitive list of metrics to refer back to enables prioritization of the right use cases and allows customer stakeholders to gauge project success in a tangible way.
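Some teams keep such a metrics list in a simple machine-readable form so progress can be reviewed objectively at project checkpoints. The sketch below is a minimal illustration; the field names and sample figures are assumptions, not from this guide.

```python
from dataclasses import dataclass

@dataclass
class Kpi:
    """A success metric with a baseline, a target, and a review period."""
    name: str
    baseline: float
    target: float
    period: str

    def improvement_needed(self) -> float:
        """Change required to move from baseline to target."""
        return self.target - self.baseline

    def progress(self, current: float) -> float:
        """Fraction of the baseline-to-target gap achieved so far."""
        gap = self.target - self.baseline
        if gap == 0:
            return 1.0
        return (current - self.baseline) / gap

# Hypothetical figures for illustration only.
conversion = Kpi("Opportunity conversion rate (%)",
                 baseline=20.0, target=30.0, period="FY25")
print(conversion.improvement_needed())      # 10.0
print(round(conversion.progress(25.0), 2))  # 0.5
```

Keeping each KPI's baseline and target explicit like this makes the "definitive list of metrics" easy to revisit at every steering committee review.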
The key roles required for all projects can often be grouped into two
types of team structures.
▪ Steering committee The group of senior stakeholders with the authority to ensure that the project team stays in alignment with KPIs. As they monitor the progress, they can help make decisions that have an impact on overall strategy, including budget, costs, and expected business functionality. Steering committees usually consist of senior management members and group heads whose business groups are part of the identified rollouts.
▪ Core implementation team This is the team doing the actual
execution of the project. For any Dynamics 365 implementation
project, core implementation teams should include project
The project may require additional roles depending on the scope and
methodology. Some roles may need to join temporarily during the
project lifecycle to meet specific needs for that period.
A common decision that most teams face is finding the right balance between delivering a high-value solution faster by using off-the-shelf capabilities versus extending the product capabilities to implement the business requirements and needs. Extending the Dynamics
365 applications not only requires initial development costs but also maintainability and supportability costs in the long run. This is an area where implementation teams should carefully revisit the cost impact. In the cloud world, with the rapid availability of new features and capabilities, there is a need to rethink the investments required for extending the application. Refer to Chapter 15, “Extend your solution,” for more details on assessing this impact.

Forrester's Total Economic Impact (TEI) studies of deploying various Dynamics 365 apps are a good resource for assessing the costs and benefits associated with a Dynamics 365 implementation.
Choose a methodology
Before discussing how to choose a methodology for your Microsoft Dynamics 365 project, we need to understand why a methodology is important for Dynamics 365 projects.
Agile
An Agile implementation is an iterative approach that uses a number of iterations, or sprints.

Success by Design Framework is methodology agnostic and is aligned to ensure proactive guidance and predictable outcomes irrespective of the chosen methodology. For more information, refer to Chapter 2, “Success by Design overview.”

In Microsoft Dynamics 365 Business Applications projects, the majority of requirements are delivered by the packaged software. There are specific challenges and tasks that come with out-of-the-box software and need to be managed and mitigated as part of the project methodology, with the user stories contained in the form of a backlog. Using those stories, you carve out plans, and within them a number of iterations or sprints of development and testing are executed. Within each sprint, you have a number of user stories outlining the requirements.
The idea is to release software faster, take early and regular feedback,
adapt, and release again. This cycle continues until all the requirements
are met or the backlog is cleared.
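The backlog-and-sprint cycle described above can be sketched in a few lines. This is a generic illustration of Agile bookkeeping with invented story names and capacity, not a prescribed Dynamics 365 tool or process.

```python
# Minimal sketch of a backlog being drained across sprints.
backlog = [
    "As a seller, I can enter an opportunity",
    "As a seller, I can view my pipeline",
    "As a manager, I can approve quotes",
    "As an admin, I can configure security roles",
    "As a seller, I can log calls from mobile",
]

SPRINT_CAPACITY = 2  # user stories the team can deliver per sprint

sprint = 0
while backlog:
    sprint += 1
    # Each sprint commits the highest-priority stories up to capacity,
    # builds and tests them, then releases for early feedback.
    committed = backlog[:SPRINT_CAPACITY]
    backlog = backlog[SPRINT_CAPACITY:]
    print(f"Sprint {sprint}: {committed}")

print(f"Backlog cleared after {sprint} sprints")  # 3 sprints for 5 stories
```

The loop ends exactly when the backlog is cleared, mirroring the release-feedback-adapt cycle described above.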
Waterfall
The Waterfall approach to solution delivery is a sequential process
that depicts a linear flow of activities from one phase to another,
culminating with the solution being promoted to production and
then into operation.
[Figure: A project plan flow combining Waterfall phases with Agile execution, under overall program and project management: Analysis (conduct solutions overview, detailed business process analysis, and fit gap analysis; gather business, user training, and data migration requirements; define the requirements/configuration and custom coding backlogs; set up DEV and other non-production environments; establish the integration strategy), Agile execution (sprint planning meetings; daily sprint cycles of test script creation, sprint configuration and development, daily builds, and sprint testing; sprint technical previews and post mortems; solution testing; finalizing the production specification), Deployment (conduct end user training, conduct UAT, build the production environment, final data migration to production, go live), and Operation (transition to support).]
Hybrid
In the current landscape of rapidly changing business needs and
changing technology trends, projects commonly adopt a hybrid
approach. This is a combination of Waterfall and Agile practices that
implementation teams use to meet their project needs. It allows teams
to tailor an approach that enables a successful implementation of
Dynamics 365 applications. It is also our recommended approach for
Dynamics 365 engagements.
With this approach, we can manage initial activities, like initiation and
solution modeling, and final activities like system integration testing,
user acceptance testing, and release to production, in a noniterative
way. Then the build activities, such as requirement detailing, design,
development, testing, are completed with an iterative approach.
This helps provide early visibility into the solution and allows the team
to take corrective actions early on in the overall cycle. This approach is
considered to use the best of both Waterfall and Agile approaches and
is a win-win for any size of implementation.
[Fig. 5-4: A hybrid delivery flow covering requirements, fit gap analysis, solution design, technical design, integration and interface design, environment specification, technology requirements, data migration, solution testing, the training plan, production specifics and environment, and transition to support.]
[Fig. 5-5: Cadence in a hybrid approach: analysis happens once per project; process design once per release; development iterations once per iteration in a release; end-to-end and performance testing across multiple cycles per release; and UAT once per release.]
The idea is to get early feedback and validation from the business team
on the business requirements scope and understanding. This approach
allows the project to showcase working solutions faster than other
methodologies and achieves a higher rate of implementation success.
Define deployment
strategy
A deployment and rollout strategy describes how the company is
going to deploy the application to their end users. The strategy chosen
is based on business objectives, risk propensity, budget, resources,
and time availability, and its complexity varies depending on each
implementation’s unique scenarios. There are several key factors to
consider prior to choosing any of the approaches detailed below.
▪ What is the minimum viable product (MVP) needed to deliver value
to customers faster, with continued enhancements planned for later?
Refer to Chapter 4, “Drive app value,” to define an MVP strategy.
▪ Is this a multi-country rollout or single-country rollout?
▪ Is there a need to consider pilot rollouts prior to wider
team rollouts?
▪ What are the localization requirements, especially in scenarios of
multi-org or multi-region rollouts?
▪ Do we need to phase out a legacy application by a certain key
timeline or can it be run in parallel?
Big-bang
In the big-bang approach, as the name suggests, the final finished
software solution is deployed to production and all users stop using
the old system and start using the new system on the go-live date.
The term “big-bang” generally describes the approach of taking a
large scope of work live for the entire user base of your
organization simultaneously.
The risks of this approach are that the entire project could be rushed,
minor but important details could be ignored, and the transformed
business processes may not be in the wider interest of the organization.
In an aggressive big-bang rollout, the risks are magnified by the
combination of a large scope and a shortened rollout period.
Fig. 5-6

Pros
▪ Shorter, condensed deployment times, minimizing organization disruption
▪ Lower costs from a quick rollout without loss of overall work, but with an added need to account for resource costs to manage the complexity
▪ Same go-live date for all users
▪ Reduced investment in temporary interfaces

Cons
▪ May not accommodate rapidly changing market scenarios and product capabilities, leaving a lack of solution alignment at deployment and causing a negative impact for end users
▪ Daily task delays from users learning to adapt to the new application without any prior experience
▪ Transitioning back to legacy systems can be costly and difficult, if even possible
▪ Any failures during the rollout have a high impact on maximum users

Large project scopes are at much greater risk with the big-bang approach:
since the delivery of finished software to production takes a longer
timeline, there is more to organize during transition. If the final rollout
runs into challenges, the wider user community is impacted by the
resulting delays. If the Waterfall methodology is used for the
implementation, the end users don't have a feel for the real system when
it is finally rolled out. However, if the hybrid methodology is used, it is
possible to involve end users from the beginning.
Phased rollout
In a phased rollout approach, we are releasing the software in a
number of phases or releases. The phases can be planned in several
ways and can be based on modules, business priority, business units,
or geographies. One approach for a phased rollout can be to release
an initial set of capabilities in the first phase and build on that by
releasing the rest of the requirements in subsequent phases. At the
release of the first phase for a business unit or module, that business
unit's end users are expected to stop using the legacy system.
Fig. 5-7

Pros
▪ Higher business value features can be prioritized. The implementation team may also choose to prioritize lower complexity features to adapt to the new functionality.
▪ Unlike big-bang, phased rollout releases can bring new use cases gradually, allowing for more buy-in from end users as they get used to the new system.
▪ Project teams can optimize later phases by incorporating the learnings from earlier ones.
▪ The impacts of the initial phase are limited to specific business areas and users, and the rest of the business is not affected.
▪ When phases are based on functionality, rather than module, geography, or business unit, it typically results in a faster time to value.
▪ For large implementations, it reduces the level of investments and resources needed for ramp up for each deployment relative to a big-bang approach.

Cons
▪ Longer implementation and deployment timeline due to multiple go-live events.
▪ Complexity of data migration from the legacy application to the target application increases, as it needs to be planned in a staggered approach.
▪ The organization needs to plan for continuous disruption over longer periods of time, like parallel efforts of supporting deployments while working on future phases.
▪ Employee morale may suffer as they face change fatigue. Phased projects need lots of focus and coordination, as the team might start losing momentum after the first few phases. These projects don't have the benefit of focused intensity that the big-bang approach does. It is important that the change management team keeps a tab on employee morale and plans initiatives to keep their interest alive.
▪ Need to implement temporary scaffolding solutions to manage integrations between new and legacy systems until the new solution is fully rolled out.
For example, a team can start the implementation with one business
unit. Once implementation is complete, they move on to the next business
unit. As the team gains experience and learns from past mistakes, the
subsequent rollouts become smoother. This leads to less risk and more
employee adoption, as illustrated in Figure 5-7.
Parallel rollout
In a variant of phased rollout approach, parallel rollout, the legacy
system is not discarded but kept alive along with the new system. More
than a rollout approach, it is a validation technique allowing users an
opportunity to learn, test, and get comfortable with the new system.
Typically, both systems are actively used for a few months, and users are
required to key in information in both systems.
▪ There is much more effort required from the users as they double
key information.
▪ This rollout may be preferred by projects where the business risk is
very high and cannot be easily mitigated by testing. At the same
time, we need to be mindful that the team is not cutting corners
on testing and training efforts.
▪ This rollout is less and less popular due to the extra effort and
cost involved in maintaining two systems.
Fig. 5-8

Pros
▪ Less risk
▪ Users can gradually plan their migration to the new system

Cons
▪ High efforts and high costs
▪ The need for data duplication can get complicated. We recommend having a clear exit plan that is time or workload based, for example, all existing cases closed in the old system.
…employees with the preparation, support, and skills they need to
succeed in change. Change management takes planning and a structured
approach to achieve high adoption and usage.

The PROSCI framework followed at Microsoft describes the three
sequential steps of change planning.
▫ Status reporting, including steering committee reports
▪ Sponsor roadmap The sponsor roadmap outlines the actions
needed from the project’s primary sponsor and the coalition of
sponsors across the business. In order to help executives be active
and visible sponsors of the change it identifies specific areas that
require active and visible engagement from the various leadership
teams, what communications they should send, and which peers
across the coalition they need to align with to support the change.
▪ Training plan Training is a required part of most changes and is
critical to help people build the knowledge and ability they need
to work in a new way. The training plan identifies the scope, the
intended audience, and the timeframe for when training should
be planned. It is important that the training plan be sequenced
in a way that builds awareness and desire before employees
are sent to training. A common aspect that tends to be overlooked
is cloud-focused training for IT administrators. As organizations
implement Dynamics 365 cloud applications, it also may mean a
significant change for IT teams, so it is important to plan training
that keeps their persona in perspective. They may require training
not only on Dynamics 365 administrative activities, but also for a
general understanding of cloud concepts.
▪ Coaching plan The coaching plan outlines how you engage with
and equip managers and people leaders to lead the change with
their own teams. Managers can play a significant role in aiding
the change management efforts, but they need to be engaged as
employees themselves first and allowed to work through their own
change process. Once that’s done, you can give them the information
and tools to lead the same change process with their own teams.
▪ Resistance management plan Resistance to change is expected,
so activities to mitigate areas of concern should be defined
proactively, early in the project lifecycle. Engaging user champions
from the end user community early on to build a user-centric
solution contributes significantly toward addressing this risk.
Conclusion
When a team defines an implementation strategy, they set clear
expectations and guidelines for the team and wider business on how
the project goals are going to be attained. This helps define a clear
translation of business goals into a structured implementation plan.
6
Solution architecture design pillars
You must be sure about the
solution before you start building,
because failure is not an option.
Introduction
You can’t have a solution without first having a vision.
When you know the answer you’re looking for, only
then you can find a solution to get you there.
But it’s not always easy to know and articulate what you want, let
alone identify the elements that are essential for creating a blueprint
of your solution.
Solution architecture
design pillars
Most of us know what “architecture” means in the building industry—
it includes the job of the architect, the scope of the work, and what’s
ultimately delivered. The architect starts with the project requirements…
Let’s take the simple case of building a new house for a dog. In most
cases, this is a single-person job with a general vision of the final
product. Building a doghouse is cheap, and the risk of failure is low,
so a trial-and-error approach is one option.
But what if you were asked to build something as complex as the Sydney
Opera House in Australia? The approach would be completely different:
the job can’t be completed by a single person, the design needs to
be carefully planned, and architects, construction workers, plumbers,
electricians, and decorators have to coordinate their workflow. With so
many moving parts, there must be well-defined processes, an explicit
design, and powerful tools. When all the right pillars are in place, the
likelihood of failure is dramatically reduced.
Fig. 6-1 [Figure: solution architecture design pillars]
Vision
A vision is the desire to achieve something—to change the present and
improve the future.
As the vision comes together, the plan for achieving it can start
taking shape.
Business strategy
Every vision serves a purpose, as does every organization, and any
solution needs to be aligned with this purpose. A business strategy
supports your vision by answering fundamental questions, such as:
▪ Why are you making this change, and what are the anticipated
benefits? What is the business value sought by the solution? Where
do you imagine the organization will be in five, 10, or 20 years?
▪ What business capabilities can your organization offer with
the new solution? What business processes can you run? What
information and data would you like to record and report on, in
line with your organization’s services or products?
▪ Which clients, customers, or people inside the organization will be
served by the new solution, and who will be affected by it?
▪ Would you like to improve your current line of business or are you
open to a new industry?
▪ When do you plan to have the vision materialized? What is the
high-level timeline? And do you intend to deliver the solution at
once or grow in stages?
▪ Where are the regions—geographically and in terms of business
functions—to which the solution will apply? Will you apply it to
all or just some of them?
▪ Who is going to plan, design, and deliver the solution? Do you
have a preferred partner or do you need to select a vendor?
▪ How will you incorporate technology into your solution? (This
is the first step of solution design and the link to your solution
strategy, as well as the business case for IT transformation.)
Any transformation starts with defining your processes, which is also a
good time to revisit and improve them. Process definition promotes
productivity by eliminating inefficiencies and establishing one workflow
for all users.
[Figure: example business process flows. A linear product prototyping flow: conduct competitive analysis, put ideas into ideation pipeline, select ideas for conceptual design, test against strategic imperatives, conduct ongoing research. An inbound logistics flow: receiving, put away, count, wave processing, pick, pack. Process planning feeds the requirements traceability matrix (RTM) and the fit gap assessment.]
Fig. 6-4 [Figure: solution design: people, processes, and data]

▪ Solution design Once you have a better end-to-end business
understanding, it's time to translate it into your system processes.
With the Dynamics 365 solution design, a Microsoft partner works
with the process leads to develop the solution, and it can be helpful
to create process flows that show how the processes can be run in
the system. Data comes in as input through interactions between
people and applications, and data goes out as output in the form
of documents, analysis, and reports (Figure 6-4).
▪ Test management Once the solution is designed and developed,
the process architecture, along with the RTM, establishes a baseline
for testing. When you test every process, you ensure that every
requirement is addressed and the solution is appropriate. For more
information, refer to Chapter 14, “Testing strategy.”
▪ Training Process architecture also defines and drives your
training content. You can use your process flows and guides as
a first draft for your training materials. Learn more in Chapter
7, “Process-focused solution,” and Chapter 2, “Success by
Design overview.”
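The test management point above treats the process architecture and RTM as the baseline for test coverage. As a minimal sketch of that idea (the requirement IDs, process names, and test case IDs below are invented for illustration, not from the guide), a simple traceability structure can flag requirements that no test exercises:

```python
# Hypothetical requirements traceability matrix (RTM) sketch: each
# requirement is linked to a business process and its test cases.
rtm = {
    "REQ-001": {"process": "Order to cash", "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"process": "Procure to pay", "tests": ["TC-201"]},
    "REQ-003": {"process": "Order to cash", "tests": []},  # not yet covered
}

def untested_requirements(rtm):
    """Return the requirements that no test case traces back to."""
    return sorted(req for req, entry in rtm.items() if not entry["tests"])

print(untested_requirements(rtm))  # ['REQ-003']
```

Running the coverage check before each test cycle is one way to ensure every requirement is addressed, as the test management guidance describes.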
Data
The third pillar of solution design is data.
Solution strategy
Your solution strategy is a consolidated view and approach that defines
your overall solution. A solution blueprint is a living document with
several review points (Figure 6-5) during a project's lifespan to help you
identify and take necessary actions, mitigate risks, and resolve issues as
they arise. In the Success by Design framework, the blueprint is
considered essential for a project's success, and provides a view of the
overall solution architecture and dependent technologies.
Fig. 6-5 [Figure: blueprint review points on a 52-week timeline from kickoff to go live, spanning requirements, design/build iterations, stabilization, and go-live prep]
Capturing these details helps you understand the project, and validates
your solution design via the complete and well-organized processes, data,
and people enabled by Dynamics 365.
Technology
While technology does not drive all the processes, it provides the
backbone of products and services required to fulfill your business
strategy. Your processes dictate which technology to use and when, and
that technology will bring your digital transformation to life. For exam-
ple, the Dynamics 365 Sales app provides value-added details about
your organization’s lead-to-opportunity pipeline. Other examples of
technology that’s under the hood to support users include:
▪ Infrastructure as a service (IaaS)
▪ Software as a service (SaaS)
▪ Integration tools
▪ Business intelligence tools
▪ Azure AI
▪ Azure Machine Learning
▪ Data connectors
▪ Customer portals
▪ Mobility solutions
Methodologies
A methodology is a collection of methods used to achieve predictable
outcomes. Good methodology also demonstrates why things need
to be done in a particular order and fashion. The Success by Design
framework is a higher-level abstraction through which a range of
concepts, models, techniques, and methodologies can be clarified.
It can bend around any methodology, including The Open Group
Architecture Framework (TOGAF), the Microsoft Solutions Framework
(MSF), and the Zachman Framework.
Change management
Project management (the “how”) and change management (the
“who”) are both tools that support your project’s benefits realization.
Without a change management plan in place, your organization’s
objectives are at risk. To drive adoption with your end users and
accelerate value realization, apply a vigorous change management
approach, which is most effective when launched at the beginning of
a project and integrated into your project activities.
Every project component contains an element of uncertainty and
is based upon assumptions made before details are known or fully
understood. So, if we know this, why would we expect a project
manager to be solely responsible for project governance? The
project manager is of course accountable for the project outcome,
but 360-degree governance requires everyone, especially the
architects and leads, to play a role.

Read more in Chapter 5, “Implementation strategy,” and
Chapter 8, “Project governance.”
Fig. 6-6 Governance domains

▪ Project management: weekly status reports; project plan and schedule; resource management; financial management; risk management; change management.
▪ Stakeholder engagement: executive steering committee; communication and promotion; escalation of unresolved issues; solution decision making and collaboration; business alignment and tradeoffs.
▪ Solution management: oversight of functional and nonfunctional attributes; data, integration, infrastructure, and performance; customizations and process adaptation; security and compliance.
▪ Risk and issues management: objective risk assessment and mitigation planning; issue identification, ownership, and resolution; prioritization and severity assessment; risk response management.
▪ Change control: identification and prioritization of requirements and solution attributes; tracking and reporting via ADO; organization and process changes.
▪ Organizational change and communication: intentional and customized communication to stakeholders; periodic effectiveness check; solution adoption and operational transformation.

Supporting forums and artifacts across these domains include project status meetings and reports, ADO (project tools), the governance framework, project change management, project artifacts, architecture board meetings, the solution framework, the statement of work, the communication plan, stakeholder engagement, and organizational change management.
Conclusion
As we’ve explained in this chapter, solution architecture follows a
systematic approach that identifies your desired solution and the
building blocks needed to construct it. Solution architecture design
takes your business vision and breaks it into logical sections that
become a blueprint for building your solution. Here is a list of the key
steps for solution architecture design:
▪ Define your vision.
▪ Create a business strategy.
▪ Outline your business processes.
▪ Determine how people are connected in and around
your organization.
▪ Know your data.
▪ Develop a solution strategy using technology and tools.
▪ Apply project management, change management, and
governance and control methodologies.
When the project starts, putting the business process view of the
organization at the core pays dividends.
Opportunity for optimization

It is easier to find good opportunities to optimize your process using
business process mapping.

Mapping processes

Processes are the heart of every business. They describe how the business
operates. The fact that your products or services reach your customer…
It is essential that the right team is involved in mapping business
processes to the new technology. This team should be guided by the
business stakeholder who owns the process in the business opera-
tion. This individual is charged with making the right decisions about
structure and investment in the process to achieve their goals. The
stakeholder is supported by one or more subject matter experts who
are familiar with how the business process operates in the real world.
These experts can provide depth around the various scenarios under
which a process is executed. Their knowledge, coupled with a mindset
open to new ways of working, helps drive the right conversation on
process mapping.
These process diagrams help define the baseline for the organization’s
current business and are a good place to start when plotting optimization
The key to getting the processes mapped well and quickly is to ensure
the following.
▪ The right people are in the room to direct the mapping.
▪ Mapping software or tools help facilitate rapid definition of the
process and do not slow things down.
▪ The level to which the process is described is directly proportional
to its importance to the business. For example, once the end-to-
end scope of a widely used and very standard finance process is
defined to a certain level it should not be further broken down to
step-by-step processes.
▪ The process mapping effort is interactive, not a series of documents
that go through long handoffs and approvals.
▪ The tools for the mapping can be anything from sticky notes, to
Microsoft Visio, to specialized process drawing software. The pro-
cess maps are visual and not hidden in wordy Microsoft Excel lists.
This is a vital step that needs to be executed early. Plan so that processes
are defined as fast as possible using the simplest tools in the workshop,
like sticky notes, pens, and whiteboards. The drawings and comments
can then be transferred offline to more sophisticated applications.
…reality, the Internet of Things, big data, virtual conference rooms, and
many others create possibilities that did not exist a few years ago. What
works today may not remain competitive tomorrow. When reviewing your
business processes, it is important to keep in mind…
Defining the scope of the implementation

A business application implementation is essentially the delivery
of a new capability for end-to-end business transactions using the
application. Processes are the foundation for the definition of the
solution scope. Processes embody many of the properties that make
for a good scope definition.
▪ Business processes are well understood by the customer as they…
In most projects this analysis does not start from a blank piece of paper;
the implementation partner has conducted some part of this analysis as
part of providing estimates on cost and time and a solution overview.
It is recommended that the customer project team get deeply involved
in this process to confirm that the key business processes were correctly
interpreted and to start gaining knowledge of the standard processes
embedded within Dynamics 365.
…rather than going directly into fit gap analysis with detailed
requirements that may be influenced by the existing or legacy system.
▪ It helps reduce the risk of missed requirements as the evaluation
of the fit with the new system is based on the richer and broader
context of business processes. As these business processes are the
natural language of business users, their evaluation is more
comprehensive, meaningful, and effective compared to working with a
list of requirements.
▪ The process catalog can direct the fit-to-standard assessment
by working iteratively through the processes, starting with the
higher-level core processes, and then working with the more
detailed subprocesses and variants. This also helps the business users
more clearly see how well their underlying business requirements
are being met within the system.
▪ The project is more likely to adopt modern recommended
standard processes embedded in the system.
▪ It creates higher-quality solutions as the processes are tested
by Microsoft and are more likely to be market-tested by others,
whereas custom developments and variants, especially complex
ones based on the legacy system, will need to be specifically
validated by the customer.
▪ The standard solution allows for more agility in adopting related
technologies and, by keeping to standards where possible, makes
it easier to add the custom extensions that provide real value.
▪ It enables a faster delivery of the new solution; there is no need to
wait longer for a custom solution to be developed.
▪ Standard processes are more easily supported by internal and
external support teams, including Microsoft Dynamics Support.
Gap analysis
As discussed in the previous section, adopting a process-centric
solution within Dynamics 365 has clear benefits. However, there may
be specialized functions that are not part of the standard solution as
shown in Figure 7-3. Those gaps are identified with the fit gap analysis.
After the configurations are set, you can look for gaps in the processes
and decide whether to customize.
[Figure 7-3: a process flow from start to end: process step 1, process step 2, process step 3 with extension, process step 4]
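A fit gap assessment like the one described here can be tracked in a simple worksheet structure. The sketch below is a hypothetical illustration (the requirement names and resolution labels are invented for this example), tallying fits against gaps and grouping each gap by its planned resolution:

```python
# Hypothetical fit gap worksheet: each requirement is marked as a "fit"
# (met by standard configuration) or a gap with a chosen resolution
# (e.g., extend the solution or buy an ISV solution).
requirements = [
    {"id": "R1", "name": "Create sales quote",   "fit": True,  "resolution": None},
    {"id": "R2", "name": "Custom rebate engine", "fit": False, "resolution": "ISV"},
    {"id": "R3", "name": "Approval workflow",    "fit": True,  "resolution": None},
    {"id": "R4", "name": "Legacy label format",  "fit": False, "resolution": "extend"},
]

def fit_gap_summary(reqs):
    """Count fits and group gap requirement IDs by planned resolution."""
    fits = sum(1 for r in reqs if r["fit"])
    gaps = {}
    for r in reqs:
        if not r["fit"]:
            gaps.setdefault(r["resolution"], []).append(r["id"])
    return {"fit": fits, "gaps": gaps}

print(fit_gap_summary(requirements))
# {'fit': 2, 'gaps': {'ISV': ['R2'], 'extend': ['R4']}}
```

Keeping the worksheet aligned to the process catalog makes it easy to show, per process, how much is covered by the standard solution and where extensions or ISV solutions are planned.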
Third-party solutions
An alternative to extending is to buy an existing solution from a
third-party vendor, also known as an independent software vendor
(ISV). This option is more common when there is a significant gap
and developing a functionality in-house can be complex and costly.
Sometimes you don’t have enough resources, budget, or expertise to
develop such a significant solution.
[Figure: the same process flow with the gap filled by a third party: process steps 1 and 2, process step 3 with extension, process 4 handled in an external app or ISV solution outside Dynamics 365, process step 5, end]
Representing the fits and gaps in a business process map in terms of the
processes drives a better understanding within the implementation team
and the organization. It is also needed to compile the requirements for
the design of the future solution.
Start your implementation project with business processes

Business process mapping helps draw the as-is processes to understand
how the business is running right now, and the to-be processes to show
how they will work in the future. This also emphasizes the importance of
business process mapping early in the project.
After determining what you keep or build, create the list of requirements,
aligning them to the business processes for an accurate picture. You can
use tools equipped for the traceability of the requirements.
Design
When creating the solution blueprint, the vision of the overall solution
architecture gives solidity and certainty to the project scope. This is the
transition from scope as business processes to scope as designs within the
system. As it is expressed in business language, it helps to ensure that busi-
ness users can be usefully engaged in the review and approval of the scope
and proposed solution. There are, of course, other views of the solution
architecture in terms of data flow, systems landscape, and integrations.
[Figure: high-level end-to-end sales process: account, contact, product and pricelist, quote, order, invoice, connected via dual-write]
Having the high-level process, you can start breaking down the end-
to-end business processes into meaningful subprocesses, as shown
in Figure 7-6.
[Figure 7-6: the end-to-end process broken into subprocesses: for the order stage, start, create order, add products to order, price order]
Testing
A project that has taken a process-centric view reaps the benefits during
testing. Other than the necessarily narrow focus of unit testing and some
non-functional tests, almost all other test types should be rooted in
some form of process definition. When there is an existing set of process
definitions and the associated designs, and a system with the designs
implemented, there are many advantages to the testing methods.
▪ It helps ensure full testing coverage and enables earlier detection
of any missing or inadequately defined functions or processes.
▪ The testing can more easily follow the process-focused path already
created earlier in the project.
▪ Testing has a natural focus on evaluating business process outcomes
which tends to be a closer match to the intention of the design and
eventual production use.
▪ It enables incremental testing of processes and subprocesses
which in turn helps engineer quality into the solution.
▪ Testing end-to-end processes, which is how the system is used in
production, is enabled.
See Chapter 14, “Testing strategy,” for more details on the overall
approach to testing.
Training
Training to use business systems like Dynamics 365 applications is
fundamentally learning how to conduct day-to-day business processes
using the system. Most of the functional roles in the system interact
with multiple functions and processes. Rarely do roles exist in a vacuum
of their own process, but instead interact with other processes. Having
business process flows defined and available in the process catalog,
including flows across the seams of system roles, helps both in the
collation of process-based training materials and to guide the training.
During the design process, if the roles are mapped against the processes
as part of security design, they can be directly used for testing and for
generating role-based training.
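As a rough illustration of mapping roles to processes for testing and training scope (the role and process names below are invented for this sketch, not from the guide):

```python
# Hypothetical mapping of security roles to the business processes they
# take part in; useful for scoping both role-based testing and training.
role_processes = {
    "Sales manager":    ["Create order", "Add products to order", "Price order"],
    "Warehouse worker": ["Pick", "Pack"],
}

def training_scope(role, mapping):
    """Return the processes a role must be trained and tested on."""
    return mapping.get(role, [])

print(training_scope("Warehouse worker", role_processes))  # ['Pick', 'Pack']
```

Deriving training materials and test plans from one shared mapping like this keeps security design, testing, and training aligned to the same process catalog.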
Even where some of the system roles may be very specifically restricted to
a specialized function in the system, there is often a need to understand
the upstream and downstream processes to perform a function well, and
to understand any implications of any delays or changes in the flow.
The process-based training not only helps prior to go-live—it also can
be used with new hires and when users change roles. It allows those
new to the business to absorb the business process simultaneously
while understanding how that process is performed within the system.
Roles can easily be defined in a business process so that the correct
security roles are used for testing and training, as shown in Figure 7-7.
Support
The process catalog created in the project provides more than the
framework for implementation. It can also help with supporting the
solution. Functional issues raised to support usually need to be
reproduced by the support team to enable the team to investigate the root
causes. A robust process-based definition of the flow allows support
to recreate the steps defined by the agreed standard process flow. The
visual depiction and description of the process allows the support team
to speak a common language with the business user when discussing
the issues.
This ability to place yourself in the business process helps reduce the
number of back-and-forth cycles of communication and the overall
time taken to understand the issue in business terms. It increases the
confidence of the user that their reporting of the issue has been
understood. This reduces anxiety and improves user sentiment toward the system.
[Figure 7-7: subprocess with a mapped role: the sales manager performs start, create order, add products to order, price order]
As you have read through the chapter, business processes are involved
throughout the implementation journey, and they also remain useful
for future improvements. They have a lasting impact on your business.
Business processes focus

▪ Ensure the business process view of the organization is at the core of
the definition of the project.
▪ Clearly articulate the key business processes that are in scope and the
respective personas so they are understood by all involved parties in
the implementation.
▪ Ensure business model analysis, process engineering, and standardization
strategies are considered part of the project definition and deliver a
strong process baseline before implementation starts.
▪ Collect the business processes in a structured and hierarchical process
catalog during the requirements phase.
▪ Ensure the business process definition is complete and considers all
activities and subprocesses.
▪ Take advantage of the latest SaaS technology to drive efficiency and
effectiveness for the process optimization.
▪ Ensure future readiness when mapping your business process to the
solution by incorporating configurability by design.

Fit gap analysis

▪ Adopt a fit-to-standard approach and align to the philosophy of
adopting wherever possible and adapting only where justified.

Process-centric solution

▪ Use business processes for each phase of the project to deliver better
outcomes (all phase activities are better planned, performed, and
measured).
The reviews needed multiple parties to read, understand, and validate the
requirements and there was a concern among the business approvers that
if they missed any requirement, or even a nuance, their system would not
function well.
The design phase was similarly based on writing and reviewing complex
design documents and was running late. The focus tended to be on the
“gaps” identified, and as these were not always placed in context, the
discussion with business users was not always productive and was often at
cross purposes. As further delays accrued and the business users were
becoming even less confident about the proposed solution, the
stakeholders decided to review the project direction and the reasons
for the continuous delays and general dissatisfaction.
At each sprint end, the business attendees reviewed and approved the
incrementally complete and functioning processes in the Dynamics
365 system instead of reviewing and approving complex and technical
design documents. Individual gap designs that had previously been
circulating on paper for weeks and months were getting translated
into end-to-end working software. Business engagement significantly
increased as the project was talking their language and they were
working directly with the emerging Dynamics 365 system rather than
with abstract design documents and lists of requirements.
The project successfully went live, and the customer continued to adopt
a process-centric view throughout the remainder of their go-lives in
other countries. The implementation partner decided to adopt this
process-centric approach as their new standard implementation approach
for their other projects because they could clearly see the benefits.
We need to ensure that we're creating a project governance model
that is fit for the world today and tomorrow. This includes considering
new governance disciplines and reviewing the classic governance
model for effectiveness. We also need to consider that many partners,
system integrators, independent software vendors (ISVs), and
customers may have their own approach and governance standards
that they may have used previously. We explore each of these areas in
more detail in this chapter.
The project governance topics discussed in this chapter are relevant to any
implementation methodology; we focus on the underlying principles and
provide guidance in a way that allows customers, partners, and others to
evaluate their own approach and adjust as necessary.
[Fig. 8-1: Objectives of good project governance, including supporting
corrective actions and providing the right amount of flexibility and
agility to respond to unexpected events and to adjust to the specific
constraints of the customer's business or project]
Why is project
governance important?
All system implementation projects, including Dynamics 365 applications,
need good project governance to succeed. However, business application
projects have specific needs and challenges, and aren't the easiest of
projects to implement. Business applications directly impact and are
directly impacted by the processes and the people in the business. For a
business application implementation to be successful, it's not sufficient
to have good software and good technical skills. A Dynamics 365 project
needs to understand the business requirements deeply and at a domain
and industry level. It also needs significant and sustained participation
from business users representing many different roles, disciplines, and
skills. Many, if not most, business participants may not have previous
experience implementing business application or business transformation
projects, let alone Dynamics 365. This puts an additional burden on
ensuring that the methodology, approach, and governance models are
sufficiently robust to support and drive the Dynamics 365 business
application project. It also requires business users to gain sufficient
knowledge of the standard capabilities of the Dynamics 365 application
to better map their underlying business requirements to the
out-of-the-box solution.

It's perhaps worth reminding ourselves of the most common problems
related to implementing business applications (Figure 8-2):
▪ Unclear or ever-changing project scope
▪ Late discovery of project slippage
▪ Disputed areas of accountability or project responsibility
▪ Low or sporadic user and business engagement
▪ Delays to go live (often at a late stage)
▪ Technical issues hiding underlying governance issues
▪ Mismatched expectations between customer and partner
▪ Stakeholders blaming each other
The list is long, and you may have seen other issues, but most of these
issues result in project delays. However, the root cause of the issues
tends to lie in gaps in the definition of the governance model, or in the
effectiveness of operating the project governance processes. Even after
the project goes live and meets most of the business requirements, if the
project delivery isn’t smooth, it can create stakeholder dissatisfaction
and a lack of confidence in the project.
Next, we explore the various areas that you should think about as part
of establishing your project governance model.
Instead, based on our experience across thousands of projects, we look
at the areas where we often see the need for reinforcement and expansion
to drive a more reliable and successful project delivery experience:
▪ Project goals
▪ Project organization
▪ Project approach
▪ Classic structures
▪ Key project areas
▪ Project plan

Project goals

Well-defined project goals are essential for steering a project and for
defining some of the key conditions of satisfaction for the stakeholders.
Often, the project goals are described in the project charter or as part
of the project kickoff. In any case, it's worth shining a spotlight on them
as part of the Initiate phase of the project and regularly throughout
the project. We recommend taking deliberate actions to reexamine the
goals in the context of your latest understanding of the project.

When reviewing or crafting project goals, consider the following:
It’s essential to have all the stakeholders pull the project in the same
direction—conflicts at the level of misaligned goals are extremely
hard for the project team to solve. For example, if Finance leadership is
looking to improve compliance by adding more checks and approvals
in a purchase process, and Procurement leadership is looking for a
faster, less bureaucratic and more streamlined purchasing process,
It’s essential to have unless the goals are balanced, the project delivery will falter. The end
all the stakeholders result will probably disappoint both stakeholders. Another common
pull the project in example is when IT leadership has a goal of a single platform or
instance for multiple business units, but the business unit leadership
the same direction— has no goals to create common business processes. This mismatch
conflicts at the level can remain hidden and undermine the feasibility and efficiency of a
of misaligned goals single platform.
goals from the business, but also ownership of the successful delivery
of the project goals.
Are the project goals well understood by all the project members?
Some projects rely on a single kick-off meeting to communicate the
goals. However, many goals would benefit from more in-depth discussion
(especially with project members from outside the business) to better
explain the underlying business reasons. Consider how you can reinforce
this communication not just during the initial induction of a new project
member, but also throughout the project lifecycle.
Once a project starts the Implementation phase, the necessary
attention needed for the day-to-day management and delivery can
sometimes overshadow the importance of the strategic project goals.
This is one of the key findings from project post go-live reviews—the
original aims of the project faded into the background as the project
battled with the day-to-day challenges. You should regularly confirm
that the goals remain mapped to the project deliverables and take any
corrective actions.

How well is the project team aligned to the business?
Teams that have good alignment between the business streams and
project functional workstreams tend to have more effective and high-
er-velocity projects. A common model is for each key business stream to
be matched with a corresponding project workstream. An experienced
leader from the business stream is usually seconded to the project and
takes on the role of the lead subject matter expert (SME). For larger
projects, multiple SMEs may be appointed for a single workstream.
[Fig. 8-3]
Projects in which the senior stakeholders are more passive and just
occasionally asking “How is it going?” or “Let me know if you need
something” tend to have poor outcomes.
How well are the team roles aligned to the solution complexity
and system design requirements?
You should conduct an honest analysis of the experience and ability
of the resources in key roles when compared to the complexity of the
design and its constraints. Be wary of job titles that don’t match the
experience and authority that normally accompany such titles.
This is particularly important for the key roles of lead solution architect,
lead technical architect, and project manager.
This honest review is especially recommended for the roles that are
likely to be the most constrained, so that mitigation plans can be
initiated in a timely manner.
During the implementation, you should regularly assess how the day-
to-day working of the control, communication, and feedback functions
are being helped or hindered by the project team organization
(Figure 8-4). How well does the project organization structure facilitate
or constrict the undiluted and timely flow of direction, guidance, and
decisions from the business to the project workstreams? In the other
direction, does the structure enable or hinder the flow of accurate,
actionable, and timely feedback from the workstreams to the project
leadership and business stakeholders?
You should also confirm that the project organization allows all roles to
be successful by enabling them to properly assert their expertise and
strongly influence the design and implementation, thereby delivering
the full positive impact of their roles.
Project approach

When talking about project approach, one of the dangers is that it
can be assumed to be synonymous with the project implementation
methodology. This can then leave a vacuum in the processes that need
to be defined outside of the implementation methodology. This can
be especially true if the implementation partner is providing a limited,
technical implementation methodology. If you're the customer, you
should consider the wider set of processes required to define your
project scope, manage your resources, and manage the changes,
tasks, and processes. Prioritize areas that aren't directly covered by the
partner but are necessary for every business to perform in support of
the project.
▪ Is the testing strategy fit for purpose?
▪ Do controls such as regular and appropriate reviews by the project
team and the business exist?
▪ Is a good process for project status analysis and reporting in place?
Classic structures

Most, if not all, business application projects have the common, classic
governance structures in place. This section doesn't give a general
overview of historically well-known disciplines around these common
governance areas; instead we look at how to assess the true effectiveness
of these processes and controls in practice. The mere presence of
these classic governance structures such as steering groups, program
boards, or risk registers can sometimes lull projects into thinking that
they have adequate active governance. Let's dig deeper into these
areas to explore how we can get better insight into their function and
evaluate their effectiveness.
Risk register
Most projects have some form of a risk register. When used well, this
register is a good way to communicate risks and help teams focus their
attention on removing barriers to success. The following are examples
of ineffective use of risk registers:
▪ Risks that are of little practical value to the project are being used
to shirk responsibility or provide cover against blame.
▪ Risks remain on the register for a long time with just new comments
and updates being added weekly at each risk review meeting, but
with no resolution. This implies that either the risk isn’t getting the
attention it deserves or it’s difficult to resolve and consuming more
and more resources without results. In any case, you should either
treat risks stuck in this loop urgently with focus or accept them with
a mitigation so they don’t drain the project over time.
▪ Risk priorities don’t reflect the project priority at that stage of
the project. You should take care to ensure that the risk register
doesn’t create a parallel project effort with its own priority and
path. Risks and issue resolution should be incorporated as part of
the main project delivery.
▪ The risk register has a very large number of risks, many of which
are stagnant. Consider how many risks the project can realistically
work on at any given time and trim the register according to real
project priority.
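Several of these pitfalls can be made visible with simple automation. The following is an illustrative sketch only, not a Dynamics 365 feature: the `Risk` fields and the thresholds are assumptions. It flags open risks that have gone stale or accumulated endless weekly updates, so they can be escalated with focus or formally accepted with a mitigation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    title: str
    opened: date              # date the risk entered the register
    priority: int             # 1 = highest project priority
    resolved: bool = False
    updates: list = field(default_factory=list)  # (date, comment) entries

def flag_stagnant(risks, today, max_age_days=60, max_updates=8):
    """Return open risks that are old or endlessly commented on:
    candidates for urgent focus or formal acceptance with a mitigation."""
    flagged = []
    for r in risks:
        if r.resolved:
            continue
        age_days = (today - r.opened).days
        if age_days > max_age_days or len(r.updates) > max_updates:
            flagged.append((r.title, age_days, len(r.updates)))
    return flagged

register = [
    Risk("Integration contract undefined", date(2023, 1, 10), priority=1,
         updates=[(date(2023, 1, 17), "discussed again")] * 9),
    Risk("Trainer availability", date(2023, 3, 1), priority=3),
]
print(flag_stagnant(register, today=date(2023, 3, 20)))
# → [('Integration contract undefined', 69, 9)]
```

A report like this only helps if the thresholds reflect what the project can realistically work on at any given time.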
Stage gates

Stage gates or milestone-driven planning and reviews are a common
feature of the majority of business application projects, including more
agile projects. These milestones are regarded as important checkpoints
spread throughout the project timeline, which the project can only
pass through if they have met certain criteria. The stage gate should
confirm that the project has completed the precursor tasks so that it's
safe, efficient, and meaningful to start the tasks of the next stage.
There may be other perspectives, such as commercial ones to trigger
stage payments, or just periodic points for the project to review
progress or cost burndown, but we're focusing on a project lifecycle
point of view. Some common examples of stage gate criteria are
as follows:
▪ To-be business processes defined  An approval step from the
business to confirm the business process scope helps ensure that,
prior to spending a lot of effort on detailed requirements and design,
the process scope has been well established and agreed upon.
▪ Solution blueprint defined This helps ensure that the require-
ments have been analyzed and understood sufficiently well to be able
to define an overall, high-level design that is confirmed as feasible.
Additionally, the key design interrelationships and dependencies are
established before working on the individual detailed designs.
▪ Formal user acceptance testing start agreed Starting formal
UAT with business users tends to be the final, full formal test phase
before go live, with the expectation that go live is imminent and achiev-
able. Prior to starting this phase, it makes sense to validate the readiness
of all the elements to ensure that this test phase can complete within
the allocated time period and meet the necessary quality bar.
The design review board should also have representatives, not just
from the project, but also from the wider business and IT, to help
ensure the business case for the design is sound and the impact on
related systems (integrations) and overall enterprise architecture are
considered. Another characteristic of a successful design review board
is how quickly it resolves designs. For the best project velocity, you
should set an expectation that the reviews will be interactive and
expected to be resolved within a single
iteration. This requires the design review board to be pragmatic and well
prepared. It also requires the project to have the discipline to provide
sufficiently detailed proposed design information. This process needs to
be organized and communicated as part of the project governance in
the Initiate phase so you’re not trying to establish these practices during
the intensive pressures of a running project implementation.
Project plan It’s worth considering what governance is appropriate for these areas to
ensure the project (and hence the business) derives the best value. In this
section, we discuss the key areas of application lifecycle management
(ALM), test strategies, data migration, integration, cutover, and training
strategies (Figure 8-6).
Test strategy
A well-defined test strategy is a key enabler for project success. A high
level of governance is required to ensure that the right test strategy is
created and implemented, because multiple cross-workstream parties,
business departments, IT departments, and disciplines are involved.
When evaluating the suitability of the test strategy, in addition to the
technical angles, consider how well some governance areas are covered:
▪ Does the right level of governance exist to ensure that the testing
strategy is mapped to, and proportionate with, the project scope
and the business risks?
▪ Is the business sufficiently involved in the test coverage, test case
development, and approval of the testing outcomes?
▪ Is the project approach in line with the test strategy and test planning?
▪ Does the right governance exist to make sure the business pro-
cesses are tested end to end?
▪ Is the data to be used in the various testing phases and test types
sufficiently representative to fulfill the test objectives?
▪ Is the test coverage across the functional and non-functional areas
adequate to help assure the business of safe and efficient opera-
tion in production use?
Data migration
For data migration, examine the related governance coverage to
ensure that this process is well understood at a business level:
▪ Do you have the right level of business ownership and oversight
on the types and quality of data being selected for migration?
Integration

A common area missing governance is the management and ownership
of non-Dynamics 365 systems involved in the integrations, because
they tend to be managed by those outside of the project. Make sure
that the business impact on either side of the integration is understood
and managed. Confirm that the data exchange contracts are defined
with the business needs in mind. Examine if the security and technology
implications for the non-Dynamics 365 systems are properly
accounted for.
Cutover
The cutover from the previous system to the new Dynamics 365 system
is a time-critical and multi-faceted process. It requires coordination
from multiple teams for the related tasks to all come together for a go
live. You almost certainly need to include business teams that aren’t
directly part of the project. Therefore, cutover needs to be shepherd-
ed with a deep understanding of the impact on the wider business.
Preparation for the cutover needs to start early in the project, and the
governance layer ensures that the cutover is a business-driven process
and not a purely technical data migration process.

The legacy system shutdown window for the final cutover is typically
short, perhaps over a weekend. For some cutover migrations, that window
may be too short to complete all the cutover activities, including data
migration. In such cases, the project team may perform the data migration
as a staggered, incremental migration, starting with slow-moving primary
data and frozen transactional data. This leaves a smaller, more man-
ageable remainder to address during the shutdown. This needs careful
governance, and the strategy needs to be decided early because the data
migration engine needs to be able to deliver incremental data loads. You
should also carefully consider what business activities you may need to
throttle or perform differently to reduce the number and complexity of
changes between the first migration and the final cutover. The changes
need to be meticulously recorded and registered (automatically or
manually) so that they can be reflected in the final cutover.
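The staggered, incremental approach can be sketched in a few lines. This is a hypothetical illustration, not a Dynamics 365 migration API: the record shapes and timestamps are invented. The bulk of slow-moving data is loaded before the shutdown window, and only records modified since that load are re-migrated at the final cutover:

```python
from datetime import datetime

def delta_since(rows, since):
    """Select only rows changed after the initial bulk load, leaving a
    small, manageable remainder for the cutover shutdown window."""
    return [r for r in rows if r["modified"] > since]

# Hypothetical source records with a last-modified timestamp.
source_rows = [
    {"id": "CUST-001", "modified": datetime(2023, 5, 1, 9, 0), "name": "Contoso"},
    {"id": "CUST-002", "modified": datetime(2023, 6, 20, 14, 30), "name": "Fabrikam"},
]

bulk_load_time = datetime(2023, 6, 1)  # slow-moving data migrated early
to_remigrate = delta_since(source_rows, bulk_load_time)
print([r["id"] for r in to_remigrate])
# → ['CUST-002']
```

The practical prerequisite is exactly what the text describes: changes after the bulk load must be recorded reliably (here via a last-modified timestamp) so the final pass is complete.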
Training strategy
Most projects have plans to train at least the project team members
and business users. All too often though, training is seen as a lower pri-
ority. If any project delays put pressure on timelines or budget, training
can be one of the first areas to be compromised.
This trade-off between respecting the go live date and completing the
full training plan can be easier for the implementation team to rational-
ize because the team is aiming for a go live and the risk of poor training
can be seen (without sufficient evidence) as manageable. The worst
consequences of poor user training are felt in the operational phase.
The test for people readiness needs to be meaningful; it should be an
evaluation of the effectiveness of the training, and not just that it was
made available. The test should be equivalent to assessing whether
enough people in key roles can conduct their critical day-to-day busi-
ness process safely and efficiently using the Dynamics 365 application
and related systems.
Project plan

Project plan analysis is where the outcomes of a lot of governance
topics become visible. The effects of good governance are felt on a
project by noting that the project plan is resilient to the smaller changes
and unknowns that are a reality for all business application projects.
Poor governance, on the other hand, manifests itself as continuously
missed targets, unreliable delivery, and repeated re-baselining (pushing
back) of the project plan milestones. The key is for the project planning
process to have its own governance mechanisms to avoid poor practices,
detect them early, and provide the agility and insights to fix and
adjust quickly.
When determining how the project plan should be constructed,
consider what the plan needs to be able to demonstrate.
Projects should institute thresholds for how out of date a project plan
is allowed to become. A project plan that is inaccurate in terms of the
estimated effort (and remaining effort), duration, and dependencies
will promote the wrong decisions and allow risks to remain hidden, and
ultimately give an inaccurate picture of the project status.
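One way to operationalize such a threshold is a periodic staleness check. This is a hedged sketch only: the task fields and the 14-day limit are assumptions, and real plans would pull this data from the project management tool:

```python
from datetime import date

def stale_plan_lines(tasks, today, threshold_days=14):
    """Flag plan lines whose estimates and progress haven't been
    revisited within the agreed threshold; a stale plan hides risk
    and promotes the wrong decisions."""
    return [t["name"] for t in tasks
            if (today - t["last_reviewed"]).days > threshold_days]

plan = [
    {"name": "Data migration build", "last_reviewed": date(2023, 4, 1)},
    {"name": "UAT preparation",      "last_reviewed": date(2023, 4, 20)},
]
print(stale_plan_lines(plan, today=date(2023, 4, 24)))
# → ['Data migration build']
```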
Status reporting
A well-constructed project plan facilitates accurate project status
reporting. However, it needs to be deliberately designed into the
project plan with the right dependencies, estimated effort, and mile-
stones so it can be easily extracted from the plan. This means that the
project should be deliberate in the detail to which tasks are scheduled
so dependencies and milestones are created with this in mind. Because
status reporting often has milestone-level status as a key indicator, the
meaning of a milestone’s completion must be explicitly defined so the
implications are clear to the intended audience.
Some projects, especially ones using more agile practices, may use
alternative or additional analysis and presentation methods such as a
backlog burndown, remaining cost to complete, or earned value, but
these principles apply nevertheless and they all rely on having accurate
underlying data on actual versus expected progress to date and the
remaining effort expected.
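As a brief illustration of how these indicators rest on the same underlying data, here is a minimal earned-value calculation using the standard SPI and CPI formulas; the task figures are invented:

```python
def earned_value_indicators(tasks):
    """Compute schedule and cost performance indexes from planned
    value, actual cost, and percent complete per task (standard
    earned-value formulas)."""
    pv = sum(t["planned"] for t in tasks)                      # planned value
    ev = sum(t["planned"] * t["pct_complete"] for t in tasks)  # earned value
    ac = sum(t["actual"] for t in tasks)                       # actual cost
    return {"SPI": ev / pv,   # < 1.0 means behind schedule
            "CPI": ev / ac}   # < 1.0 means over budget

tasks = [
    {"planned": 100, "actual": 120, "pct_complete": 1.0},
    {"planned": 200, "actual": 80,  "pct_complete": 0.25},
]
print(earned_value_indicators(tasks))
# → {'SPI': 0.5, 'CPI': 0.75}
```

Whatever the presentation method, the indicators are only as trustworthy as the actual-versus-expected progress data feeding them.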
[Fig. 8-7: Business process heat map example. Legend: process not
complete and not on track; process complete, in repair, or testing]

A business process heat map can show the readiness of the key
processes for the next major milestone (such as sprint review, SIT,
UAT, or go live). The functional heat map is a business process
perspective of the project plan status.
The project status should not be the only source for this feedback loop—
consider all the other controls and procedures in the project that can
generate actionable information, if only there were a systematic mech-
anism defined to diligently listen and extract the actions. For example,
updates from daily stand-ups, design reviews, sprint playbacks, or even
informal risk discussions can help provide useful feedback.
Conclusion
The primary intention at the start of this chapter was to provide an
overview of the importance of good project governance and provide
guidance on how to assess the effectiveness of your existing or pro-
posed governance model.
You can use the key objectives of good project governance to judge
the overall suitability of a given governance model. Ideally, you should
test a proposed governance model against these objectives during the
Initiate phase of the project and throughout, but it’s never too late.
Checklist

Project goals

▪ Ensure goals are clear, realistic, mapped to actions and measures,
correctly translated into project deliverables, and regularly monitored
throughout the project lifecycle.
▪ Align goals and ensure they are sponsored by relevant stakeholders
across the organization, and properly communicated and understood
by the implementation team.

Project organization

▪ Identify relevant project roles and areas of ownership and assign
them to team members with the right expertise and experience.

Project approach

▪ Analyze, review, and confirm that your chosen implementation
methodology works well with the business and project constraints
and circumstances.
Introduction

Defining your environment strategy is one of the
most important steps in the implementation of your
business application.

Environment-related decisions affect every aspect of the application, from
application lifecycle management (ALM) to deployment and compliance.
Environment-related decisions are hard and expensive to change. As
we explain throughout this chapter, environment strategy can affect
how solutions are implemented. This exercise needs to be done at the
start of the project—even before beginning implementation—and is
not something that should be left to the end as a part of a go-live plan.

Questions to consider:
• What types of environments are required, and when?
• What are the different roles (such as developers, testers, admins,
business analysts, and trainers), and which environments do they
need access to?
• What's the access procedure for an environment?
• Which apps will be installed in the environment?
• Which services and third-party applications do I need to integrate with?

Tenant strategy

Understanding tenants is fundamental to defining an environment
strategy for your deployment. A tenant is a logical structure that
represents your organization. Every cloud service to which an
organization subscribes is associated with its tenant.
[Figure: A tenant containing Dynamics 365 and Power Platform test
and production environments (running apps such as Operations and
Sales) alongside Microsoft 365 services such as a SharePoint site]
Your environment strategy should provide a balanced level of isolation
to meet the security and compliance needs of your organization. It also
should take into consideration the administration, ALM, and collaboration
needs of the project. To define the right tenant and environment
strategy for your organization, it's necessary to understand the controls
that are available for each service to isolate code, configuration,
data, and users. It's also important to note that the isolation provided
by services doesn't necessarily reflect the underlying infrastructure or
design of a cloud service. A separate tenant doesn't mean it's a separate
server farm—and having a separate environment doesn't give you
a different front end.

[Fig. 9-3: Global single-tenant vs. global multitenant setup, showing
product tenants containing Account, Contact, and Addresses data]

Global single-tenant setup

Using a single Microsoft tenant to represent the organization in
the Microsoft cloud (Figure 9-3) is a common setup. It provides
unified administration for user access and licenses, enables seamless
integration between services, and lets the organization share
tenant resources.

All Dynamics 365 and Power Platform environments will be a part of
the same tenant. There could be different apps deployed in different
environments, or they could have different users for each app and
environment, but all would belong to the same Azure AD that is
associated at the tenant level. Using the Multi-Geo setup, you could
create environments in different countries or regions to meet your
compliance and application needs, but they would remain a part of the
same organization-wide tenant. Sovereign cloud deployment requires
a separate Azure AD and might have additional regulatory restrictions
that limit your ability to create environments in different countries
or regions.
Let’s examine some of the pros and cons for a global multitenant setup.
▪ Continuous integration and continuous delivery (CI/CD) pipelines
that span multiple tenants can be more complicated and might
require manual intervention, especially when managing connections
to the service.
▪ You may have to purchase a significant number of licenses for con-
ducting testing. With load testing, for example, you can’t reliably
simulate a load from 1,000 users using five test-user accounts.
▪ If you’re using capacity add-ons, you will have to make duplicate
purchases for each tenant to develop and test your solutions.
▪ Integrations with other services, such as Exchange, can’t be done
across tenants, which means potentially purchasing licenses for
other Microsoft services for each tenant.
Compliance
Security and compliance are critical considerations for an environment
strategy. Each organization needs to ensure that data is stored and pro-
cessed in accordance with local or regional laws, such as data-residency
requirements for a specific region. For example, the European Union (EU)
enforces the General Data Protection Regulation (GDPR) for EU residents,
as well as its citizens outside of the EU.
Application design
The environment strategy can affect the application design. Conversely,
the needs of an application can drive the environment requirements,
and it’s not uncommon for IT systems within an organization to reflect
the actual structure of the organization. Depending on your organiza-
tion’s strategy for data isolation, collaboration, and security between
different departments, you could choose to have a single shared envi-
ronment or create isolated environments. For example, a bank might
allow data sharing and collaboration between commercial and business
banking divisions while isolating the personal banking division, which
reflects the bank’s application design and environment strategy.
The data store for an application and the supporting business process
plays a key role in the environment decision. If multiple apps for different
departments can benefit from leveraging each other’s data, a single envi-
ronment with integrations can improve consistency and collaboration.
The user experience can be tailored via dedicated apps for different
personas and secure data access using the security model.
Performance
Microsoft cloud services provide a high degree of scalability and
performance. Based on considerations such as network latency,
firewalls, network traffic monitoring, organizational proxies, and
routing by internet service provider (ISP), globally distributed users
can experience higher latencies when accessing the cloud. This is why
we recommend creating a latency analysis matrix (Figure 9-4) for
solutions that have a globally distributed user base.
This exercise gives a fair idea of how the network latency will
affect the user experience based on the location of your environment.
This information can be used to make a balanced choice so most
users have an acceptable level of latency, and application design can
be optimized for users in high-latency locations.

Fig. 9-4 Latency analysis matrix with columns: User location, Device, Network, Environment location, Latency, Bandwidth.
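The figures captured in such a matrix can also be evaluated programmatically. The following Python sketch is illustrative only (the user locations, candidate regions, and millisecond values are invented sample measurements); it simply picks the candidate environment region with the lowest worst-case latency across user locations:

```python
# Illustrative latency analysis: the locations and millisecond values
# below are hypothetical sample measurements, not real service data.

# measured_ms[user_location][candidate_region] = observed round-trip latency
measured_ms = {
    "London":    {"West Europe": 25,  "East US": 95,  "Southeast Asia": 180},
    "New York":  {"West Europe": 90,  "East US": 20,  "Southeast Asia": 230},
    "Singapore": {"West Europe": 175, "East US": 220, "Southeast Asia": 15},
}

def best_region(matrix):
    """Pick the environment region whose worst-case user latency is lowest."""
    regions = next(iter(matrix.values())).keys()
    return min(regions, key=lambda region: max(row[region] for row in matrix.values()))

print(best_region(measured_ms))  # West Europe: worst case 175 ms beats 220 and 230
```

Minimizing the worst case is only one possible policy; weighting each location by its number of users would instead favor the majority while accepting worse latency for small offices.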
Scalability
Scalability of the SaaS platform is a critical consideration for business
applications. Traditionally, scaling up in on-premises deployments was
about adding more servers or more CPU, memory, or storage capacity
to existing servers. In a cloud world with elastic scale and microservice
architecture, the server could be replaced by an environment and the
compute and data transfer units by the API capacity. (This is just used
as analogy—it’s not a one-to-one mapping where one environment
corresponds to a server in the SaaS infrastructure.)
Vertical scalability
Organizations commonly operate single environments supporting
thousands of users, and each user has a defined API entitlement based
on the license type assigned. The environment’s storage grows as more
users and applications store their data. Dynamics 365 SaaS services
are built on a scalable cloud infrastructure that can store terabytes of
data to meet the requirements of large enterprises. When it comes
to workflow automation, each environment can have any number
of Microsoft Power Automate flows, each with thousands of steps.
Horizontal scalability
With horizontal scalability, organizations can have several separate en-
vironments, with any number of Power Automate flows on the tenant.
There are no native sync capabilities between environments, and you
still need to take license entitlements into consideration, especially
when it comes to tenant-wide storage and API entitlement.
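The tenant-wide entitlement consideration reduces to simple arithmetic. In this sketch the license names and per-user daily request numbers are placeholders (actual entitlements come from your license terms), but the shape of the check is the same:

```python
# Hypothetical capacity check; license names and per-user request
# entitlements are placeholders, not actual Dynamics 365 license terms.

licenses = {
    # license type: (user count, assumed daily API request entitlement per user)
    "full_user":   (800,  40_000),
    "team_member": (1500,  6_000),
}

def pooled_daily_entitlement(licenses):
    """Tenant-wide daily request pool across all licensed users."""
    return sum(count * per_user for count, per_user in licenses.values())

def is_within_capacity(projected_daily_requests, licenses, headroom=0.2):
    """Keep a safety margin (default 20%) below the pooled entitlement."""
    return projected_daily_requests <= pooled_daily_entitlement(licenses) * (1 - headroom)

print(pooled_daily_entitlement(licenses))        # 41,000,000 requests/day
print(is_within_capacity(30_000_000, licenses))  # True: fits with 20% headroom
```

Running this kind of projection per environment, and again at the tenant level, shows early whether horizontal scaling will push you past pooled storage or API entitlements.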
Maintainability

The effort to maintain the solution is directly proportional to the number of environments involved.

Maintainability measures the ease and speed of maintaining a solution,
including service updates, bug fixes, and rolling out change requests
and new functionality.

If you have several applications sharing common components in
the same environment, you should consider maintainability when
upgrading or deploying a new release. It’s also important to invest in
automation testing so you can run regression tests and quickly identify
any dependencies that could cause issues.
ALM
ALM includes the tools and processes that manage the solution’s
lifecycle and can affect the long-term success of a solution. When
following the ALM of a solution, consider the entire lifespan of the
solution, along with maintainability and future-proofing. Changes to
your environment strategy will directly affect the ALM (and vice versa).
Types of environments
Production environments are meant to support the business. By default,
production environments are more protected for operations that can
cause disruption, such as copy and restore operations. Sandbox
environments can be used to develop, test, and delete as required.
Purposes of environments
▪ Development One or more development environments are
usually required, depending on the customization requirements and
time available. Development environments should be set up with
proper DevOps to allow for smooth CI/CD. This topic is covered in
more detail in Chapter 11, “Application lifecycle management.”
▪ Quality assurance (QA) Allows for solution testing from both
a functionality and deployment perspective before the solutions
are given to the business teams in a user acceptance testing (UAT)
environment. Only managed solutions should be deployed here.
Depending on project requirements, there can be multiple testing
environments, including regression testing, performance testing,
and data-migration testing.
▪ UAT Generally the first environment that business users will have
access to. It will allow them to perform UAT before deployment.
▪ Training Utilized to deliver training. It allows business users to
practice in a simulated environment without the risk of interfering
with live data.
▪ Production The live system for business users.
Citizen development
One of the key value propositions of the Power Platform—the underly-
ing no-code/low-code platform that powers Dynamics 365 Customer
Engagement apps—is that it enables people who aren’t professional
developers to build apps and create solutions to solve their own problems.
Future-proofing

Future-proofing is the process of developing methods to minimize
any potential negative effects in the future. It can also be referred to as
the resilience of the solution to future events.

Your environment strategy needs to take into consideration any future
phases and rollouts of the solution, and allow you to deal with changing
requirements and build a solution that will not limit the business or take
away necessary control. As an example, consider country-specific needs,
as well as variations to and deviations from the standard process.
Environment app
strategy
If you’re deploying multiple business apps on the Dynamics 365 platform,
you will need to decide on an environment app strategy. Should all apps
be deployed in the same environment? Or should each app have an
environment of its own?
Multiple-app environment
In a multiple-app environment scenario (Figure 9-5), a single production
environment is used for multiple apps. For example, the production
environment might have the Dynamics 365 Marketing and Sales apps
to enable collaboration between the marketing and sales teams, and
facilitate a quick transfer of qualified leads to sales representatives.
Similarly, having the Sales and Customer Service apps in the same
environment gives the sales team insights into customer support
experiences, which could affect ongoing deals.
Let’s examine some of the pros and cons for the multiple-app
deployment model (Figure 9-5).

Fig. 9-5 Multiple-app environment.

Pros of a multiple-app deployment model:
▪ It enables stronger collaboration
Per-app environment

In a per-app deployment model (Figure 9-6), every application gets
its own production environment, with a separate set of environments
to support the underlying ALM and release process. There is complete
isolation of the data, schema, and security model. A per-app environment
might seem simpler from a deployment perspective, but it can
create data silos such that one business process cannot easily benefit
from sharing information with another, leading to additional effort
in building complex integrations. Also, the security model is defined
per environment, so you can’t use the platform security constructs
to define which data from one environment will be accessible to
a user in a different environment.

Fig. 9-6 Per-app environment: separate production environments for the Sales, Customer Service, and Field Service apps.

Let’s examine some of the pros and cons for the per-app
deployment model.
▪ Its ALM and release process for each app can be independent.
▪ Its capacity utilization for individual apps can be tracked easily for
cross-charging within the business.
▪ Its design issues in one app will not directly affect other apps.
▪ It is the preferred approach if there is a need to segregate the
Global deployment
scenarios
This section will focus on three common app-deployment models you
could use—along with the tradeoffs, pros, and cons when choosing
one approach over the other—to help you find the best option for
your deployment.

Your organization might have to adjust the security model for each
region to meet local regulations and accommodate cultural differences
around sharing and collaboration.

Let’s examine some of the pros and cons of a global single environment.
Each environment has its own database, and data isn’t shared across
environments. This builds a physical security aspect on top of logical
security.

In terms of ALM, the solution-release process can be unique for each
environment, which will support specific business units, departments,
and countries. However, it will increase the number of environments
needed to support development, testing, and training activities, as
each production environment needs its own environments to support
the solution-release process.

Because the data is not unified in a global multiple-environment setup,
the reporting and analytics strategy is more complex, and typically
requires consolidating data from the separate environments.

This model makes it easier for local operations to run and maintain
their solutions, but it also increases the costs, and may lead to siloed
work and deployment.

Let’s examine some of the pros and cons of a global multiple environment.
Hub-and-spoke model

Fig. 9-7 Hub-and-spoke model: a central hub environment containing a hub solution, with spoke environments (Spoke A, Spoke B, Spoke C) whose spoke solutions build on the hub solution.

The hub-and-spoke model could be a variation of the multiple-environment
model, where each of the regions has an independent
data model. Alternatively, this could also be achieved with multiple
apps in a single environment, where each region is able to independently
manage the variations.
Creation
An organization might create a new environment for several reasons.
The purpose of the environment—and considerations such as the
environment type, country or region, apps required, access security
groups, URL and agreed-upon naming conventions, and tiers and service
levels—should be well defined and understood before its creation.
Environment transition
An organization might have multiple types of environments, each
targeting specific use cases, including trial environments, default
environments, sandboxes, and production environments. Be aware
of possible transitions between types, as there could be limitations
and implications to the service. For instance, changing a production
environment to a sandbox might change the data-retention policy and
applicable service-level agreements (SLAs).
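Transition planning like this can be captured in a small lookup table before anyone pushes the button. The allowed transitions and warning text below are purely illustrative assumptions for this sketch, not a statement of platform behavior:

```python
# Illustrative model of environment-type transitions; the allowed set and
# warnings are assumptions for this sketch, not platform documentation.

ALLOWED_TRANSITIONS = {
    ("trial", "production"),
    ("sandbox", "production"),
    ("production", "sandbox"),
}

WARNINGS = {
    ("production", "sandbox"):
        "Data-retention policy and applicable SLAs may change.",
}

def plan_transition(current, target):
    """Validate a transition and surface its known implications."""
    if (current, target) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"Transition {current} -> {target} is not supported")
    return WARNINGS.get((current, target), "No special implications recorded.")

print(plan_transition("production", "sandbox"))
```

Keeping such a table under change control makes the advice to "be aware of possible transitions" actionable during governance reviews.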
Copy
Most often used in troubleshooting scenarios, environment copy lets
you create a copy of an existing environment with all its data, or only
its customizations and schemas.
Restore
Sometimes, you might need to restore an environment. Depending
on the type of environment and service, there could be several restore
points or a feature allowing you to restore an environment to a precise
time. Restore could lead to data loss, so it’s not really an option for
production environments unless performed in a limited window.
Geo-migration
Geo-migrations could be triggered by changes in regulations, or simply
because of a need to move an environment to a lower-latency region
for a better user experience. This would normally involve creating a copy
of an environment and then physically moving it to a different region. It
almost certainly will involve downtime for users, and requires approval
from the compliance and security teams. It also could lead to change in
service URLs and IP ranges, affecting integrations and network security.
Tenant-to-tenant move
Any change in tenant strategy might trigger a request to move an
environment from one customer tenant to another. This is not a com-
mon request and will require several steps before the actual move. Most
importantly, users must be correctly mapped between tenants and the
record ownerships must be restored. It might require a regional migration
first and could involve several hours of downtime.
Merged environments
Merging multiple environments into a single environment takes
substantial planning and could be extraordinarily complex. Merging
involves the data model and the processes in the app, followed by the
actual data with necessary transformations and deduplication, then the
security model and, finally, the integrations.
Administration mode
Administration mode can be used for maintenance when only selected
users can access the environment. Before you do this, assess the sys-
tems that will be affected by putting an environment in administration
or maintenance mode, such as a public-facing portal that connects to
the environment.
Deletion
Deleting an environment removes all the data and customizations, as well
as the option to restore and recover. The process of decommissioning or
deleting an environment needs gated approvals.
Good governance is critical during deployment and for long-term
system operation. But the governance and control processes for a
centrally deployed and managed IT solution might be vastly different
from processes used to deliver a secure ecosystem of business-user-developed
applications.

A CoE helps enforce security and policies, but also fosters creativity and
innovation across the organization. It empowers users to digitize their
business processes, while maintaining the necessary level of oversight.
With the changing technology landscape and evolution of security
mechanisms, organizations should review and update policies on a
continuing basis. The better organizations understand the service
architecture, the greater the opportunity to fine-tune the controls,
rather than creating overly restrictive policies.

To define a robust environment governance policy:
▪ Examine your organization’s usage, application types, user personas
and location, data confidentiality, and access control
▪ Evaluate your capacity needs in terms of storage, API call, and
other factors
▪ Understand the release and rollout strategy, and how it affects the
environment lifecycle
▪ Enforce audit requirements at the environment and application levels
▪ Ensure DLP policies apply tenant-wide
▪ Define admin roles and processes to obtain admin privileges
▪ Monitor deployment and usage
Product-specific
guidance: Operations
Now we’d like to offer some guidance specific to the Dynamics 365
Finance, Dynamics 365 Supply Chain Management, and Dynamics
365 Commerce apps. A team implementing a project with these apps
requires environments to develop, test, train, and configure before
production. These nonproduction environments come in a range of tiers.

For less complex projects, you can use the out-of-the-box tier 2 instance
for testing. You still need to consider development environments for
development activities, with one development environment per devel-
oper. These environments have Microsoft Visual Studio installed.
You should also think about the build environment. Source code
developed in the development environment is synched to Azure
DevOps. The build process will use the build environment, which is
a tier 1, to produce code packages that can be applied to sandbox
and production.
Environment | Tier | Type
Development | 1 | Development
Build | 1 | n/a
Training | 2 or 3 | Sandbox
Self-service deployments
For Finance, Supply Chain Management, and Commerce environments,
General recommendations
▪ Plan for environments early in the project and revisit the plan at
regular intervals.
▪ Define a consistent naming standard for your environments. For
example, a gold environment should have “gold” in the name.
▪ Have a regular schedule to deploy updates and import fresh
data (if needed).
▪ Keep all environments in the same region if your business is in
one region. For example, avoid having a test environment in one
geographical location and production in another.
▪ Deploy environments by using an unnamed account, such as
dynadmin@your_organization_name.com. Assign the environments
to an owner who will be responsible for their status and maintenance.
We strongly recommend using the same dedicated admin
account on all environments.
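A naming standard is easiest to follow when it can be checked automatically. The `<org>-<workload>-<purpose>` convention below is only an example of such a standard (the pattern and the purpose list, including "gold", are assumptions for this sketch):

```python
import re

# Example convention: <org>-<workload>-<purpose>, e.g. "contoso-sales-gold".
# The pattern and the purpose list are illustrative; define your own standard.
PURPOSES = ("dev", "test", "uat", "train", "gold", "prod")
NAME_RE = re.compile(rf"^[a-z0-9]+-[a-z0-9]+-({'|'.join(PURPOSES)})$")

def is_valid_name(name: str) -> bool:
    """Return True when an environment name follows the convention."""
    return NAME_RE.fullmatch(name) is not None

print(is_valid_name("contoso-sales-gold"))  # True
print(is_valid_name("Test Environment 2"))  # False: spaces and no purpose suffix
```

A check like this can run in the provisioning workflow, so non-conforming names are rejected before an environment is ever created.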
Test environments
▪ Consider the number of testing environments you will need
throughout the project. A lack of environments may prevent
concurrent testing activities and delay the project.
Training environments
▪ Make sure all users have access with appropriate roles and
permissions, which should be the same roles and permissions
they will have in production.
▪ Plan for downtime and have a communication plan to alert users.
(Zero downtime is the eventual goal.)
▪ Ensure all integrations are set up and working, so users can
experience the end-to-end cycle of a business process.
Data-migration environments
▪ Assess whether you need a dedicated environment for data
migration, which is a disruptive task that can’t generally
coexist with other types of test activities. Multiple data-migration
environments may be needed to avoid conflicts if multiple parties
are migrating data concurrently.
▪ Account for data-migration performance in environment planning.
Depending on the size of the data-migration task, it may be necessary
to use a tier 3 or higher environment to perform data-migration
testing. (You can also use an elevated cloud-hosted environment.)
Pre-production environments
▪ Assess whether there is a need for a separate environment to
test code or configuration changes before they’re applied to
production.
▪ If there will be continuing development of the solution after
you go live, you may need a separate pre-production environment
to support concurrent hotfix and hotfix test activities. (This
environment should have the same code base and data as
production, so a like-for-like comparison can be performed for
any new changes before they’re applied to production.)
Performance-testing environments
▪ Plan a specific environment for performance testing, or you won’t
be able to do performance testing activities in parallel with other
test activities.
▪ Avoid doing performance testing in a cloud-hosted environment,
the production environment, or tier 1, tier 2, or tier 3 environments.
▪ Tier 3 or smaller environments typically won’t provide the
resources required to perform a performance test.
Developer environments
▪ Ensure each developer has an independent development environment
Production environment
▪ Raise production environment requests through support, as you
don’t have direct access to this environment
Product-specific guidance:
Customer Engagement
Throughout this chapter, we have discussed concepts related to envi-
ronment strategy that are product-agnostic and apply to most cloud
deployments. Now we’re going to focus on product-specific resources
that apply to Power Platform and customer-engagement apps.
Environments and
Dataverse in Power Platform
For each environment created under an Azure AD tenant, its resources
can only be accessed by users within that tenant. An environment is
also bound to a geographic location, such as the United States. When
you create an app in an environment, that app is routed only to
datacenters in that geographic location. Any items that you create in
that environment—including connections, gateways, and flows using
Power Automate—are also bound to their environment’s location.
Every environment can have up to one Microsoft Dataverse database
(Figure 9-9), which provides storage for your apps.
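These rules (tenant-scoped access, a fixed geographic location, and at most one Dataverse database per environment) can be expressed as a small model. The class below is an illustrative sketch of the constraints as described here, not a real Power Platform API:

```python
# Illustrative model of the constraints described above; not a real API.
class Environment:
    def __init__(self, name, tenant, geo):
        self.name = name
        self.tenant = tenant          # resources only accessible within this tenant
        self.geo = geo                # apps and flows are bound to this location
        self.dataverse_db = None      # at most one Dataverse database

    def add_dataverse(self, db_name):
        if self.dataverse_db is not None:
            raise ValueError("An environment can have at most one Dataverse database")
        self.dataverse_db = db_name

    def can_access(self, user_tenant):
        return user_tenant == self.tenant

env = Environment("sales-prod", tenant="contoso", geo="United States")
env.add_dataverse("sales-db")
print(env.can_access("contoso"))   # True
print(env.can_access("fabrikam"))  # False: a different tenant cannot reach it
```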
Microsoft unified CRM and ERP within Dynamics 365 and the Power
Platform to make it easier to create apps and share data across all
Dynamics 365 applications. The combination also creates a set of pur-
pose-built apps with threaded intelligence to connect front-office and
back-office functions through shared data. Rich analytical capabilities
provide organizations with deep insights into each functional area of
their business.
Fig. 9-9 A business tenant containing multiple environments, such as several development environments.
Conclusion
As we have seen throughout this chapter, defining an environment
strategy for your organization is a critical step when planning for
deployment. Decisions made about environment strategies are hard
to change and could be very costly later in the program lifecycle.
The wrong environment strategy can create unnecessary data
fragmentation and increase the complexity of your solutions. Early
environment planning aligned to your organization’s digital roadmap
is fundamental to success.
Checklist

Organization environment and tenant strategy

Define environment and tenant strategies and obtain
agreement for them at the program level with all key
stakeholders, including business, IT, security,
and compliance.

Create a strategy that considers the future growth of the
solution.

Create an environment strategy that can support the
ALM processes and necessary automations.

Create an environment strategy that considers the
short- and long-term impact on licensing, compliance,
application design, performance, scalability,
maintainability, and ALM of the solution.

Create a strategy that considers the potential need for
citizen development scenarios or collaborative
development with both IT and business users.

Create an environment planning matrix with key
considerations of the pros and cons to help plan and
visualize the impact.

Assess the impact on the overall data estate. Avoid
excessive fragmentation and promote reuse of existing
integrations and data flows.

Global deployment

Account for global deployment scenarios; additional
coordination and agreement may be required to meet
regional requirements and compliance needs.

Assess the latency to choose the optimal location—
global deployments are prone to network latency
related performance issues.

Governance and control

Establish governance processes for provisioning, mon-
itoring, managing the lifecycle, and decommissioning
the environments early on.

Ensure the different security personas involved in the
environment management are understood and appro-
priately assigned.

Use the CoE Starter Kit to make necessary adjustments
to citizen development use cases as needed.
The management team was not clear that the environments, services,
and default capacities included in the SaaS subscription license—mainly
the production and sandbox environments and hosted build automation
services—were the minimum needed to run the solution on an ongoing
basis, but were insufficient on their own to complete all the implementa-
tion activities required for the moderately complex rollout.
10
Implementation Guide: Success by Design: Data management

Data management
Deploy faster and
more efficiently.
Introduction
Data surrounds us every day, like a blanket. Whether
you are posting on your social media, scheduling
a doctor’s appointment, or shopping online, the
information collected is one of the most valuable
assets of any business.
With the right data, organizations can make informed decisions,
improve customer engagement, and gather real-time information
about products in the field.
This chapter aims to break down the various functions within data
management that collectively ensure information is accessible, accurate,
and relevant for the application’s end users. We focus on the most
common discussion points surrounding the lifecycle of data within
a project.

In this chapter, we discuss the many ways data plays a part in defining
a solution. Data plays a vital role in the success of every deployment.
You learn about:
• Data governance
• Data architecture
• Data modeling
• Data migration
• Data integration
• Data storage
• Data quality

Regardless of your role, take the time to consider what is important
to each person interacting with the data. For example, users of a
system focus on their data quality, ability to search, data relevance,
and performance, while architects and administrators are focused on
security, licensing, storage costs, archival, and scalability.
Data governance

Let us start the discussion with data governance before we start
unpacking different principles of data management.

While data management is usually seen as dealing with operational
issues of data, data governance is about taking a high-level strategic
view of policies and procedures that define enterprise data availability,
usability, quality, and security.
With artificial intelligence (AI) and machine learning (ML) taking
center stage in most digital transformation projects, and the fact
that the success of these initiatives is highly dependent on data
quality, it is prudent that executives start considering data
governance seriously.
Sharing data within the company is crucial for two economic reasons—
growth and distribution. Data is a nonrival resource: it is not a material
resource that, once one person uses it, others cannot use. If data is shared
with everyone in your company, people can become self-sufficient
and build on top of it, increasing the overall value as it becomes
available to all.
Data stewardship
A data steward is a role within an organization responsible for the
management and oversight of an organization’s data assets, with the
goal of providing that data to end users in a usable, safe, and trusted
way. By using established data governance processes, a data steward
ensures the fitness of data elements, both the content and the metadata.
This is a specialist role that incorporates processes, policies,
guidelines, and responsibilities for administering an organization’s
entire data in compliance with policy and regulatory obligations.
Data quality
The quality of the data cannot be treated as an afterthought.
Data quality sits at the front of data governance policies:
data should be high quality and fit for its intended use.
Driving data’s accuracy and completeness across the
organization is typically managed by dedicated teams
who may use a variety of tools to scrub the content for its
accuracy. Although these tools increasingly aid the process,
this is still typically a human responsibility.
For example, one retailer wants to better target its customers through
email campaigns, which in the past failed to deliver expected results
due to incorrect contact information being captured. While defining
this use case, the company also set up data governance principles
that define the data quality required for email campaigns. They
were challenged to define “good” data to satisfy the use case. This
simple use case uncovered other issues. The company found
that online customers, who buy as guests, could enter any value in the
email field and there was no validation. This led to data stewards and
LOB process owners setting up new validation processes.

Without data governance in place, organizations struggle to control
the corporate assets needed to ensure data quality. During the
requirements gathering stage of your implementation, start paying
particular attention to the availability of data required for your solution.
Early discussion and identification goes a long way in defining your
use cases.
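As a minimal illustration of the kind of validation process such stewards might introduce (the format check below is deliberately simple, not a full RFC 5322 validator, and the guest records are invented):

```python
import re

# Simple format check for captured guest email addresses.
# Illustrative only: a production process would also verify deliverability.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def audit_emails(records):
    """Split records into usable and unusable for an email campaign."""
    valid, invalid = [], []
    for rec in records:
        (valid if EMAIL_RE.fullmatch(rec["email"] or "") else invalid).append(rec)
    return valid, invalid

guests = [
    {"name": "A. Guest", "email": "a.guest@example.com"},
    {"name": "B. Guest", "email": "none"},
    {"name": "C. Guest", "email": ""},
]
valid, invalid = audit_emails(guests)
print(len(valid), len(invalid))  # 1 2
```

Running an audit like this before a campaign quantifies the problem; adding the same check at the point of capture prevents it.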
For an example of a proper use case, see “A Show-Don’t-Tell Approach
to Data Governance.”
Data architecture

After data governance is understood within the organization, the
next step is to look at data architecture and the different types of
enterprise data.

The key point to data architecture is to create a holistic view of the data
repositories, their relationships with each other, and ownership. Failure to
implement data architecture best practices often leads to misalignment
issues, such as a lack of cohesion between business and technical teams.

When you prepare your data architecture, use the questions below to
guide you.
▪ How do you use your data?
▪ Where do you store your data?
▪ How do you manage your data and integrate it across your lines of
business?
▪ What are the privacy and security aspects of the data?
Before you can start working with data, it is important to know what
typical enterprise data is made up of, as illustrated in Figure 10-1.
Fig. 10-1 A typical enterprise data landscape: end users and customers interact with Dynamics 365, Unified Service Desk, and Power Apps; master data (company accounts and relationships, employees, products, and knowledge base articles) is consolidated through master data integration from corporate, line-of-business, telephony, Exchange server-sync, SharePoint document library, and corporate accounting applications; transactional data includes contacts, activities, cases, product alerts, and financials.
Transactional data
This type of data is generally high in volume due to the nature of its use.
Transactional data typically refers to events or activity related to the
master data tables. The information is either created automatically or
recorded by a user. The actual information could be a statement of fact
(as in banking), or it could be a user interpretation, like the sentiment of
a customer during a recent sales call. Here are a few other examples:
▪ Communication history
▪ Banking transactions
▪ IoT transactions
▪ ERP transactions (purchase orders, receipts, production orders, etc.)
Inferred data
Inferred data is information not collected by the business or users.
Typically, this information is automatically generated based on other
external factors, which adds a level of uncertainty. For example:
▪ Social media posts
▪ Credit score
▪ Segmentation

Data modeling

With a base understanding of the data governance and data architecture,
we can now focus our attention on how we plan to store the information
we collect. Data modeling should always be completed before any
configuration begins. Let us start with a basic understanding of data
modeling, recommended practices, and how it is related to a project.
Data architecture and data modeling need to work together when de-
signing a solution for a specific business problem. For example, if your
organization wants to implement an e-commerce solution, you cannot
do that unless somebody knows and has defined the existing data ar-
chitecture and data models, different integrations in play, existing data
imports and exports, how customer and sales data is currently flowing,
what kind of design patterns can be supported, and which platform is a
better fit into the existing architecture.
If your project requires creation of new tables to support storing data
for some specific business processes, it is recommended that you
create a data model to define and understand the schema and data
flow. But if yours is largely a vanilla implementation or has minimal
customizations, you may not need to do this. It is recommended to
discuss with your system integrator and solution architect to
see what works best for your case.

There are multiple types and standards for data modeling,
including Unified Modeling Language (UML), IDEF1X, logical
data model (Figure 10-2), and physical data model (Figure 10-3).
A common type of physical data model is an entity relationship
diagram (ERD).

Fig. 10-2 A logical data model relating a Customer to customer contacts (as technical contact and account manager) and to a monthly report with an attached email and market report PDF.
Fig. 10-3: A physical data model.

Data storage
First, you need to calculate the approximate volume and size of the
data to properly come up with an estimate. You can gather this detail
from your existing data sources. You should come up with a size either
in megabytes (MB) or gigabytes (GB) needed in production.
Based on the estimated size for production, you then need to allocate
a percentage for each environment. Figure 10-4 provides an example.
The best practice is to build a data storage forecast for a minimum of
three years, including an average increased annual volume.

The reality is no matter where your data is stored, there is always a cost.

Fig. 10-4
Environment   Use                                                % of Production
Production    Contains all data required for release             100
Training      Contains a sampling of data for training           15
QA            Contains a sampling of data for testing            15
SIT           Contains a sampling of data for testing            15
DEV           Contains limited data only for development needs   5

Environments should be managed with a well-planned environment
strategy. For example, a developer should not be able to simply go
ahead and create new environments on the fly. Doing so could create
opportunities to go over entitlements, which leads to enforcement,
which in turn could limit your ability to deliver on the application
lifecycle management (ALM) strategies.
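The forecast described above can be sketched as a simple calculation. The 25 percent annual growth rate and the per-environment percentages below are illustrative assumptions drawn from the example in Figure 10-4, not fixed guidance.

```python
# Sketch of a three-year storage forecast. The growth rate and the
# per-environment percentages (from the example in Figure 10-4) are
# illustrative assumptions; substitute your own estimates.

ENV_PERCENT = {"Production": 100, "Training": 15, "QA": 15, "SIT": 15, "DEV": 5}

def forecast_storage(production_gb, annual_growth=0.25, years=3):
    """Project production storage per year, then allocate a share of it
    to each environment according to ENV_PERCENT."""
    forecast = {}
    size = production_gb
    for year in range(1, years + 1):
        size *= 1 + annual_growth  # apply the average annual growth
        forecast[year] = {env: round(size * pct / 100, 1)
                          for env, pct in ENV_PERCENT.items()}
    return forecast

plan = forecast_storage(production_gb=200)
```

Under these assumptions, a 200 GB production estimate grows to 250 GB in year one, with 12.5 GB allocated to the DEV environment.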
Configuration data and data migration

Once you have a good understanding of your business requirements,
project scope, and have set up proper data governance, data architecture,
and data models, you need to start the preparations for importing data to
prepare for your solution to be ready for go live(s).
There are two types of data we are talking about: Configuration data
and migrated data.
Configuration data
Configuration data is data about different setups needed to prepare your
production environment for operational use, for example, currencies and
tax codes. Besides importing master data like customers and products, you
need to import configuration data. Managing configurations is an activity
that needs to happen throughout the lifecycle of the implementation. As
the team understands the business requirements and designs, they
identify the different setups and parameters needed to enable the
required business processes.
Since the Dynamics 365 offerings are highly configurable, you need
tight control over the different configurations that must be set.
This plan also represents the functional scope for the solution. As
described in Chapter 7, “Process-focused solution,” you start with your
process definition and scope for the implementation. This process
requires functionality to be enabled in the application, and this
functionality requires configuration and master data and eventually
some transactional data in connection to the first use of the solution.
For example, flagging the setups that only need to be imported once
as they are shared across business units or legal entities versus setups
that need to be imported for each phase as they are not shared.
A configuration plan can help you be organized and consider the
requirements for all the subsequent business units or legal entities.
Another example, in the case of Dynamics 365 Finance, Supply Chain
Management, and Commerce, can be whether you want to use data
sharing features to copy the configurations across companies instead
of manually importing those configurations into each company.
Data migration
The second type of data we are dealing with is migrated data. Data
migration, in simple terms, is the process of moving data from one data
model to a different data model for future use. In the case of Dynamics
365, data migration refers to moving data from a legacy system to a
Dynamics 365 application. Whether your legacy system was another
application or disparate Excel spreadsheets, you may have been
capturing data to run your
business and you need that data when you move to your new Dynamics
365 application.
Explore the parts of the data migration lifecycle in Figure 10-5. Take time
to discover the mandatory data required by your solution and analyze
the data types and systems that source the data to be imported. Typical
sources include:
▪ SQL database
▪ Third-party database
▪ External webservices
▪ Access
▪ Flat files/Excel

Fig. 10-5: The data migration lifecycle, including Plan and Implement stages.

When building a plan, keep in mind that data migration activities can be
a disruptive task and should not co-exist with other testing activities, so
it is advised to procure a dedicated high-tier data migration environment.
It is recommended to implement at least one round of system integration
testing with the migrated data.

Consider the amount of transformation logic needed during migration.
The more logic required to transform, the slower the migration will load.
Environments

Another often-missed factor is sizing of the import and staging databases
required for running migration tooling and cleansing of data. You need
to make sure environments used are sized appropriately to handle the
expected data volumes.

Data mapping

The process of data mapping can start once the solutions for data
modeling have been defined. You should be organized to keep the
process systematic and simple. Data is extracted from the source,
transformed, and loaded into the target, as illustrated in Figure 10-6.

Fig. 10-6: Extract, transform, and load from the source into the
target database.

You can use many types of tools and approaches when migrating data
to your Dynamics 365 solution. Some of the standard options include:
▪ Data import/export wizards
▪ Azure Data Factory
▪ SQL Server Integration Services (SSIS)
▪ Third-party integration products
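Whichever tool you choose, the underlying pattern is the same: extract rows from the source, transform them to the target schema, and load the result. The sketch below illustrates that pattern; the legacy column names, target field names, and mapping table are hypothetical examples, not actual Dynamics 365 schema.

```python
# Minimal extract-transform-load (ETL) sketch. Source columns, target
# fields, and the mapping are hypothetical; a real project would use a
# tool such as Azure Data Factory or SSIS for this work.

FIELD_MAP = {"CUST_NAME": "name", "CUST_PHONE": "telephone1"}  # example mapping

def transform(row):
    """Rename legacy columns to target fields and trim whitespace."""
    target = {new: row.get(old, "").strip() for old, new in FIELD_MAP.items()}
    if not target["name"]:  # basic quality gate: reject rows with no name
        raise ValueError("row rejected: missing mandatory name")
    return target

def run_etl(source_rows):
    """Transform every source row; collect successes and rejects."""
    loaded, rejected = [], []
    for row in source_rows:
        try:
            loaded.append(transform(row))  # the "load" step is in-memory here
        except ValueError:
            rejected.append(row)
    return loaded, rejected

good, bad = run_etl([
    {"CUST_NAME": " Contoso ", "CUST_PHONE": "555-0100"},
    {"CUST_NAME": "", "CUST_PHONE": "555-0199"},
])
```

Keeping the mapping in one table, as above, is what makes the process systematic: adding a field is a mapping change, not a code change.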
Data integration

Integration is the connecting of one or more parts or components
of a system to create a more unified experience or to ensure a more
consistent outcome of a process. Integration allows leveraging of
existing services, both internal and external, without having to rebuild
or migrate existing functionality.
Data integration is done to bring data into the system or out to other
systems. Typically, this happens through an event or in a batch on a
schedule. For example, when a record is created or updated it would be
event driven, and the nightly scheduled transmission of data would be
a batch. Refer to Chapter 16, “Integrate with other solutions,” for
more details.

Fig. 10-7
Role: Data migration analyst
Responsibility: Assist in designing, planning, and managing the data
migration process. Work with subject matter experts and project team
to identify, define, collate, document, and communicate the data
migration requirements.

Role: Data steward
Responsibility: Maintain data and manage it according to data properties
as required by administration. Coordinate with stakeholders and provide
all definitions for data.

Data quality
Conclusion
Data has taken center stage in all enterprise transformation projects.
Businesses want to be data driven. Having the right data flowing in
your business means you have set up a well-functioning business in
which you have the most up-to-date and accurate information about
your customers and products and can provide a superior experience to
your staff and customers.
The benefits of data do not just stop there. Having the right data
means you can start leveraging ML and AI today to predict what your
customers need tomorrow.
Customer Engagement
This section includes a number of recommendations and resources
for Customer Engagement to help manage modeling, storage,
migration, and archival.
Data modeling
Data modeling is a science, and there are data modeling professionals and
established standards for data modeling. To be effective with Dynamics
365 data modeling, you do not have to be a professional data modeler
or use any special tools. Popular tools like Microsoft Visio can be used to
quickly create a basic ERD that visualizes the relationships and flow of
data between tables. In this section, we discuss some general best
practices for data modeling for Dynamics 365 deployments.
▪ Do not include every table. Some core tables, such as activities,
notes, and users (record owners), are related to nearly every entity
in Dynamics 365. If you include every relationship with these tables
in your data model, the result is unreadable. The best practice is to
include only the primary tables used in your configuration in your
data model diagram, and include only custom relationships with the
user and activity tables to maximize readability.
▪ Data models should include tables outside of Dataverse. If you
are integrating with other systems via Dataverse data connectors
or virtual tables, or if data flows outside of the Dataverse via an
integration, this data should also be represented in your data
model diagram.
▪ Start simple with the standard tables, then add custom entity
relationships to your data model.
▪ User experience should influence your data model. Sometimes it is
easy to overnormalize your data, but the process could make the
application more cumbersome to use.
Start with what you need now but design the data model in a way that
supports what you are going to be doing in the future. For example,
if you know that down the road you need to store additional details
about sales territories, using a text field for territory now makes it more
difficult to implement than if you use the territory entity relationship.
Plan for what is coming.
Data storage
This section provides product-specific guidance you can use while
implementing or maintaining your solution.
Storage capacity
The storage capacity is a standard calculation within the Power
Platform that is easily managed by the system administrator. The Power
Platform admin center is the tool you should use to maintain visibility
of storage and consumption. Within the Power Platform admin center,
go to Resources > Capacity > Dataverse for more details about your
capacity entitlements, as shown in Figure 10-8.
Storage segmentation
To better understand how capacity is calculated within Customer
Engagement, the following provides the breakout based on storage
type and database tables.
Dataverse files: The following tables store data in file and database
storage:
▪ Attachment
▪ AnnotationBase
▪ Any custom or out-of-the-box entity that has fields of datatype file
or image (full size)
▪ Any entity that is used by one or more installed Insights applications
and ends in “-analytics”
Power Apps Excel add-in This add-in can be used to open entities
directly in Excel and create and update records. Records are updated or
created directly in the Dataverse. Not all entities support updates from
Excel, and lookup fields must be manually edited to correctly match.
is correct, you can export a template from a CDS entity, populate your
data, then import it. Alternatively, you can import a custom file and
provide mapping. Lookup fields must include the primary key or entity
alternate key value to match correctly.
Legacy Dynamics 365 data import utility You can import data to
Dynamics 365 entities from CSV, .xls, .xml, and .zip files. While the Dataverse
API Get Data option is recommended for most flat file imports, the
legacy data import option has several unique capabilities that might
be useful in some cases.
▪ Legacy data import can be used to create new entities and fields
and option set values in Dynamics 365. While this is convenient,
the best practice is to add these items from a solution rather than
from data import.
▪ Legacy data import can import multiple files at the same time
when multiple files are added to a zip file.
▪ Legacy data import can resolve lookup fields using values not
contained in the primary or alternate keys.
developer to define record matching logic and update, insert,
or upsert records based on any matching criteria. This is also
helpful for making sure that duplicate records are not being
created in the target.
▪ More flexibility in mapping lookup fields. Lookups to other
entities can be challenging for data imports, especially fields like
“customer” that are polymorphic, or fields like “owner” that have
special properties. If your legacy data has owning users that have
a different format, the ETL tool can transform the values into a
format that can be imported.
▪ Many-to-many (N:N) relationships. ETL based imports can easily
import data to N:N relationships and entities like marketing list
member, which is not available from some of the flat file
import options.
▪ Faster import of large data sets. Good ETL tools can use
multi-threading to import data quickly into the common data service.
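The multi-threaded, batched import that a good ETL tool performs can be sketched roughly as follows. `send_batch` is a placeholder for whatever call the tool makes against the target (including its upsert matching logic); the batch size and worker count are arbitrary example values.

```python
# Rough sketch of a multi-threaded, batched import. send_batch() stands
# in for the real API call an ETL tool would make; the batch size and
# worker count are illustrative values only.
from concurrent.futures import ThreadPoolExecutor

def send_batch(batch):
    # Placeholder load step: a real tool would upsert these records
    # against the target system here and return the processed count.
    return len(batch)

def import_records(records, batch_size=100, workers=4):
    """Split records into batches and load them on a thread pool."""
    batches = [records[i:i + batch_size]
               for i in range(0, len(records), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(send_batch, batches))

imported = import_records([{"id": n} for n in range(250)])  # three batches
```

Batching also addresses the network-latency point in the chapter checklist: fewer, larger requests amortize the round-trip cost.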
For example, you can set up data export schedules to replicate data
to Azure Data Lake, which is comparatively cheaper than Dataverse.
From a best-practices perspective, you do not want to keep old
historical data in Dynamics 365, data that the business does not need
for day-to-day operations. Once data is moved to the data lake, you
can set up Azure Data Factory to create dataflows, transform your
data, and run analysis, and you can use Power BI to create business
reports and produce analytics.
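The archival guidance above amounts to a retention cutoff: keep operational data in Dynamics 365 and move everything older to the lake. A minimal sketch, assuming an illustrative two-year retention window and an invented record shape:

```python
# Sketch: partition records into "keep in Dynamics 365" vs "archive to
# the data lake". The 730-day retention window and the record shape
# are illustrative assumptions only.
from datetime import date, timedelta

def partition_for_archive(records, today, retention_days=730):
    """Split records by a modified-date cutoff."""
    cutoff = today - timedelta(days=retention_days)
    keep = [r for r in records if r["modified"] >= cutoff]
    archive = [r for r in records if r["modified"] < cutoff]
    return keep, archive

keep, archive = partition_for_archive(
    [{"id": 1, "modified": date(2021, 1, 15)},
     {"id": 2, "modified": date(2018, 6, 1)}],
    today=date(2021, 6, 1),
)
```

In practice the cutoff would be agreed with the business per table, since legal retention requirements often differ by record type.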
Data maintenance

When it comes to data storage, an important factor to consider is
that not all data needs storing. You have lots of logs and staging data
generated that can be safely truncated after a certain time.

The database operations workspace supports backup, restore, refresh,
and point-in-time restore. You can use this workspace to run a number
of data-related scenarios like copying configurations between
environments or loading data as part of data migration.
When all testing cycles have been completed and signed off in SIT and
UAT, you can choose the “Sandbox to Production” database request type
to restore this database to production for go live. For more info go to:
▪ Database movement operations home page
▪ Submit service requests to the Dynamics 365 Service Engineering team

References
▪ Data Management/Data Warehousing information, news and tips -
SearchDataManagement (techtarget.com)
▪ Insights-Driven Businesses Set The Pace For Global Growth
(forrester.com)
▪ DMBoK - Data Management Body of Knowledge (dama.org)

Data sharing framework

Cross-company sharing is a mechanism for sharing reference and
group data among companies in a Dynamics 365 Finance, Supply
Chain Management, and Commerce deployment. This framework is
introduced to allow sharing setups and master data across multiple
legal entities. This facilitates master data management when you are
dealing with multiple legal entities and want to designate one legal
entity as master for some setups and parameters data. For example,
tax codes may not change from company to company, so you can set
up the data in one legal entity and use the cross-company data sharing
framework and its policies to replicate the data across the rest of the
legal entities.
Checklist

Data governance and architecture

▪ Establish data governance principles to ensure data quality
throughout the business processes lifecycle, focusing on the data’s
availability, usability, integrity, security, and compliance.
▪ Appoint a data steward to ensure data governance principles
are applied.
▪ Define proper use cases and make data available to support the
business processes.

Configuration data and data migration

▪ Create, maintain, update, and test a configuration plan throughout
the project lifetime. It accounts for all the required configuration
data you import to support go live.
▪ Ensure the data migration analyst, data migration architect, and
data steward create a plan for data migration that includes
identifying data sources, data mapping, environments, ETL, testing,
and cutover planning.
▪ Focus on maximizing the data throughput during migration by
following Customer Engagement apps best practices.
▪ Optimize for network latency by staging in the cloud and
batching requests.
The system was hard to maintain and fraught with issues as customer
data was divided in a number of applications and databases.
The first phase of the project targeted the UK business and involved
moving over 150 of their business processes to Dynamics 365.
The company now has a unified view of their customers, which helps
them provide better customer service and allows marketing efforts to
be more targeted.
Introduction
Generally, everything has a life and its own lifecycle.
Your solution goes through the same process, starting
with conception, moving through implementation,
then continuous operation, and finally to transition.
A solid application lifecycle management (ALM) strategy brings a
successful solution to customers with complete visibility, less manual
interaction with automation, improved delivery, and future planning.
In this chapter, we talk about ALM in the context of the overall solution
lifecycle. Your solution may use one or more of the Dynamics 365
Business Applications such as Finance, Supply Chain Management,
Sales, Field Service, or Commerce.
What is ALM?

ALM spans the entire lifecycle of a solution, such as governance
(decision-making), project management, requirement management,
architecture, development, test management, maintenance, support,
change management, and release management.

ALM is managing the end-to-end lifecycle of your solution, starting
from procuring the Dynamics 365 license to mapping your business
processes.
It’s very important that the implementation team has the appropriate
ALM tooling. ALM tools (such as Microsoft Azure DevOps) are
required to manage all aspects of the solution, including application
governance, requirement management, configuration, application
development, testing, deployment, and support. The ALM tool should
be well connected with all team members as well as all processes. For
example, when a developer checks in the code, they should mark the
changes with a particular work item, which connects the development
work item with the business requirement work item to the requirement
validation work item.

Fig. 11-1: The solution lifecycle: Planning, Requirement, Design,
Configuration, Development, Testing, Deployment, and Maintenance.

Application vs. development lifecycle management

Some people may define ALM as improving development processes
such as version control, build automation, and release deployments.
Why have an
ALM strategy?
From the time you conceptualize your Dynamics 365 solution, you start
the application lifecycle: from the project Initiate phase, to the Implement
phase, Prepare phase, and finally the Operate phase (Figure 11-2).
During the lifecycle of the Dynamics 365 solution, you identify partner
teams, require project management, gather and map business process-
es, develop new processes, perform testing, deploy code, and finally
maintain the solution in production.
Think about taking a trip. You set dates, book flights and hotels, and
plan places to visit. All that planning will likely result in a great vacation.
In the same way, a well-planned ALM will lead to a solution that grows
your business. With ALM recommended practices, you’re set for success.
Your ALM may not be perfect right from the start. But it’s a foundation—
you can refine your ALM practices over time.
Like a poorly planned trip, if you don’t have effective ALM, you can
expect a negative impact on your implementation, solution quality,
and business satisfaction.
implementation might not require development or integration, but it
will still have configurations and application version updates.
With effective ALM and well-defined practices, you can keep your
solution healthy and up to date with the latest application releases.
During implementation
While you’re implementing the solution, you go through multiple
phases: Initiate, Implement, and Prepare. ALM applies to all these
aspects in your implementation:
▪ Project management
▪ Business process management
▪ Application configuration
▪ Development
▪ Testing
▪ Bug tracking
▪ Ideas, issues, risks, and documents
▪ Release management
After implementation
When you’re live in production, you’re in the Operate phase. ALM
continues with the following aspects:
▪ Continuous updates
▪ Independent software vendor (ISV) updates
▪ Maintenance and support
▪ New features
▪ Next phase
For example, a functional team member can gather and define business
requirements in a given template and track them in Azure DevOps. They
can have it reviewed by assigning a work item to business users and then
store the business requirement document in a repository.
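As an illustration of tracking requirements in Azure DevOps programmatically, the Azure DevOps REST API accepts work item creation requests as a JSON Patch document. The sketch below only builds that payload; the work item type, field values, and the organization/project placeholders in the URL are example assumptions, and no request is actually sent.

```python
# Build the JSON Patch body that the Azure DevOps "create work item"
# REST endpoint expects. The org/project parts of the URL are
# placeholders and the field values are invented examples; nothing is
# sent over the network in this sketch.
import json

def new_work_item_payload(title, description, assigned_to=None):
    ops = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
    ]
    if assigned_to:
        ops.append({"op": "add", "path": "/fields/System.AssignedTo",
                    "value": assigned_to})
    return json.dumps(ops)

# POST this body with Content-Type: application/json-patch+json to e.g.
# https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/$Issue?api-version=7.1
body = new_work_item_payload("Capture business requirement",
                             "As a sales rep, I need ...")
```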
Development
Having an efficient development lifecycle is one of the mandatory
aspects of ALM. In general, the development lifecycle consists of the
following: Design, Develop, Build, Test, and Deploy. Continuous
integration and continuous deployment (CI/CD) is one of the best and
latest practices to enable delivering code changes more frequently
and reliably.
After the business requirements are defined and identified, the
development team should get involved to work on any gaps in the solution.
The development team analyzes the requirements, reviews the functional
design, prepares the technical design, and gets the technical design
reviewed. The development team should use version control, create
development tasks, link check-ins with work items, and prepare a
unit-testing document.
Testing
Testing is an integral part of ALM. Under ALM, test management
processes and tools should be defined with templates to help manage
test preparation, implementation, and reporting. It should also include
what tools to use for these steps. Chapter 14, “Testing strategy,”
provides more information about test management.
Chapter 20, “Service the solution,” and Chapter 21, “Transition to support,”
cover maintaining the solution and the support process in detail.
Azure DevOps
Microsoft recommends using Azure DevOps as the tool for managing
or maintaining your ALM practices and processes. For some areas of
Dynamics 365 (such as Finance and Supply Chain Management), Azure
DevOps is the only version control tool.
A standard lifecycle is available for each work item, and you can build
your own custom states and rules for each type of work item.
You can also use Azure DevOps for project management, test
management, bug tracking, release management, and many more
aspects of your implementation.

Take some time to learn more about DevOps tools on Azure.
Operations

In this section, we provide some Finance and Supply Chain
Management recommendations to achieve an efficient ALM in your
implementation.

The goal of Lifecycle Services (LCS) is to deliver the right information,
at the right time, to the right people, and to help ensure repeatable,
predictable success with each rollout of an implementation, update,
or upgrade.
You can use the Business process modeler (BPM) tool to define, view, and edit the Finance and
Supply Chain Management out-of-box business processes (in the form
of libraries), which you can use in future implementations. The tool
helps you review your processes, track the progress of your project,
and sync your business processes and requirements to Azure DevOps.
Every business is different in some ways; this tool helps you align your
business processes with your industry-specific business processes and
best practices. BPM libraries provide a foundation for your business
process, and you can add, remove, and edit your processes according
to your solution requirements.
Development
In Finance and Supply Chain Management, Microsoft Visual Studio
is used as the development environment. The development lifecycle
includes the following steps (Figure 11-4):
▪ Each developer uses their own development environment
▪ Developers write source code and check in their code to
Azure DevOps
▪ Developers also sync code from Azure DevOps to get the source
code from other developers
▪ The build takes the source code from Azure DevOps, uses the build
definition, and creates a deployable package
Fig. 11-4: The development lifecycle across environments and tools:
developer virtual machines holding source code, a build environment
that produces an application deployable package, a user acceptance
testing environment, and production receiving the release candidate.

Implementation Guide: Success by Design: Application lifecycle management
▪ The build pipeline also pushes the deployable package to the LCS
asset library
▪ Azure release pipelines work with Visual Studio to simplify deploying
packages to UAT
▪ When UAT is complete, the deployable package is marked as a
release candidate to deploy to production
▪ The Dynamics Service Engineering (DSE) team deploys it to
production using a service request from LCS
Version control
For more information, take a look at our document on how to develop
and customize your home page.

The primary purpose of version control is storing and maintaining
source code for customizations, as well as ISV solutions. You develop
against local, XML-based files (not the online database), which are
stored in version control tools such as Azure DevOps. The following are
recommendations for version control branching:
▪ Consider using a minimum branching option
▪ Consider the following recommended branching strategy:
▫ Development Developer check-in and testing with development
data (Trunk/Dev)
▫ Test Deploying to the tier 2+ and testing with current production
data (Trunk/Main)
▫ Release or Release XX Retesting in the tier 2+ and deploying
to production (Trunk/Release) or v-next (Trunk/Release XX)
▪ Use the shelve command or suspend changes to keep work safe
▪ Request a code review to ensure code quality
▪ Check in code when a feature is complete, and include changes
from one feature in each changeset
▪ Merge each feature in a separate changeset
▪ Don’t check in code directly to the test or release branches
▪ Don’t check in changes for more than one feature in a
single changeset
▪ Don’t mark deployable packages from the development and test
branches as release candidates
▪ Don’t merge untested features into the release branch

Figure 11-5 illustrates a recommended branching strategy.

Fig. 11-5: The recommended branches: Development branch, Code
upgrade branch, Test (main) branch, Release branch, and Release
XX branch.
The Azure DevOps build system provides the following triggers for builds:
▪ Scheduled builds, such as nightly at 6 PM
▪ Continuous integration, such as:
▫ Starting a build as soon as code is checked in
▫ Gated check-in
▪ Manual builds (on demand)
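If builds are defined as YAML pipelines, the scheduled and continuous integration triggers listed above can be expressed in the pipeline definition itself. The snippet below is a sketch only; the branch names and the nightly cron schedule are illustrative values, not recommendations.

```yaml
# Illustrative azure-pipelines.yml trigger section. Branch names and the
# cron expression are example values only.
trigger:                  # continuous integration: build on every check-in
  branches:
    include:
      - Trunk/Dev

schedules:                # scheduled build, for example nightly at 18:00 UTC
  - cron: "0 18 * * *"
    displayName: Nightly build
    branches:
      include:
        - Trunk/Main
    always: true          # run even when nothing has changed
```

Manual (on-demand) runs are queued from Azure DevOps itself, and gated check-in applies to TFVC builds configured on the build definition rather than in YAML.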
Automated testing
As part of development ALM, testing automation should be in place.
Customer engagement
When organizations implement their software as a service (SaaS) cloud
solution, some level of customization and extensibility typically exists,
with the goal to provide additional value and adjust the functionalities
to specific business or industry needs.
Solutions
Solutions are the mechanism for implementing Dev ALM in Power
Platform apps. They’re the vehicle that distributes components across
the different environments.
Tools
Several tools are available to help automate the process of managing
and shipping solutions, which can help increase efficiency, reduce
Workshop scope
FastTrack ALM workshops are designed for implementers who want to
make sure that their development approach meets the requirements
of the implementation and is aligned with typical best practices. The
workshop could cover several topics:
▪ Development work management Focuses on high-level,
day-to-day developer activities. It reviews that the team has devel-
opment guidelines, development best practices, and work items
such as requirement, tasks, and bugs management.
▪ Code management Reviews your version control, branching,
code merging, and code reviews strategy. Branching can be simple
or complex depending on the project and phases. It also looks at
how customization is checked in, such as gated check-in.
▪ Build management Looks at your build strategies, such as
manual build or automated build. It mainly reviews your build
plan for build definitions, different variables, and build triggers for
your implementation. It also reviews whether you’re using a build
environment or Azure-hosted builds.
▪ Release management Assesses your deployment plan for
customizations, ISVs, and continuous (one version) updates. It also
reviews how hotfixes and your code release, or one version updates,
are applied to production and nonproduction environments.

An ALM workshop provides guidance and best practices.

You should complete both workshops either in the Initiate phase or
when starting the Implement phase. If you plan ALM workshops too far
into implementation, any findings and recommendations could cause
significant rework.

Conclusion

ALM is the management of your solution lifecycle from conception to
operation. It includes governance (decision-making), project management,
managing requirements, solution architecture, development, CI/CD,
test management, maintenance and support, change management,
release management, and many more areas.

Your solution lifecycle may go through several evolutions. Each evolution
may use the same SDLC methodology. Basically, SDLC is part of ALM,
ALM is far bigger than Dev ALM, and Dev ALM is part of SDLC.

References
▪ DevOps and Azure Pipelines
▪ DevOps solutions on Azure
▪ Release pipelines
▪ Create your first pipeline
▪ Business process modeler (BPM) in Lifecycle Services (LCS)
▪ Dynamics 365 Finance and Supply Chain Management Tools
Customer Engagement:
▪ Application lifecycle management (ALM) with Microsoft Power Platform
▪ Microsoft Dynamics CRM SDK Templates
▪ Solution Lifecycle Management: Dynamics 365 for Customer
Engagement apps
Checklist
As the project team prepared for the go live, the seemingly independent
decisions made during the initial phases resulted in deployment issues
that eventually stalled the go live. Investigations by Microsoft Support
confirmed that there was no ALM strategy in place, and identified the
key issues:
▪ Unmanaged solutions caused solution-layering issues for testing and
production, affecting the sanctity and integrity of the environments
▪ A prototype solution employed for production purposes
introduced quality issues
▪ Failure to use ALM practices such as DevOps caused traceability
issues and prompted developers to build functionality that wasn’t
aligned with customer requirements
▪ Suboptimal code was implemented because tools such as solution
checker weren’t used to enforce code quality
▪ Testing without traceability was insufficient, and buggy code was
deployed to other environments
As the old saying goes, “By failing to prepare, you are preparing to fail.”
The MVP go-live date was delayed by 12 weeks and Microsoft worked
alongside the project team to determine the root cause of each issue.
The team eventually acknowledged that a series of seemingly unre-
lated decisions affected the MVP go live, and they sent every team
member working in a technical role to an ALM refresher training. The
project team also confirmed plans that should have been solidified at
the beginning of the project, including an environment strategy and a
solution management and ALM approach.
After the refresher training, the project team started with a “crawl to
walk” approach. During the “crawl” stage, they implemented mandatory
ALM practices with these governance elements:
Introduction
In this chapter, we look at the fundamental
security principles applicable to Microsoft
Dynamics 365 implementations.
Next, we discuss in more detail how some of these principles apply
differently to Dynamics 365 Customer Engagement, Dynamics 365
Finance, and Dynamics 365 Supply Chain Management applications.
We then address the importance of making security a priority from
day one, with specific examples from each product that build upon the
concepts we’ve discussed. Finally, we look at how to avoid common
mistakes by examining some key anti-patterns.
Security overview
Security is the protection of IT systems and networks from theft of or damage
to their hardware, software, or data, and from disruption of the service.
Fig. 12-2 The Microsoft Trusted Cloud principles: Security (implement strong
security measures to safeguard your data), Privacy (provide you with control
over your data to help keep it private), Compliance (help you meet your
specific compliance needs), and Transparency.

Microsoft takes its commitment seriously to safeguard customers'
data, to protect their right to make decisions about that data, and to
be transparent about what happens to that data. On our mission to
empower everyone to achieve more, we partner with organizations,
empowering them to achieve their vision on a trusted platform. The
Microsoft Trusted Cloud was built on the foundational principles of
security, privacy, compliance, and transparency, and these four key
principles guide the way we do business in the cloud (Figure 12-2). We
apply these principles to your data as follows:
▪ Security  Implement strong security measures to safeguard your data
▪ Privacy  Protect your right to control and make decisions about your data to help keep it private
▪ Compliance  Manage your data in compliance with the law and help you meet your compliance needs
▪ Transparency  Be transparent about our enterprise cloud services and explain what we do with your data in clear, plain language
Compliance
Every organization must comply with the legal and regulatory stan-
dards of the industry and region they operate in, and many are also
subject to additional contractual requirements and corporate policies.
Figure 12-3 lists some standard compliance goals and their implementation
in Dynamics 365.
Customer responsibility
As a customer, you're responsible for the environment after the service
has been provisioned. You must identify which controls apply to your
business and understand how to implement and configure them to
manage security and compliance within the applicable regulatory
requirements of your nation, region, and industry.

Refer to Microsoft compliance offerings for more information about
regulatory compliance standards and Microsoft products.
Privacy
You are the owner of your data; we don’t mine your data for advertising.
If you ever choose to end the service, you can take your data with you.
Figure 12-4 lists some standard privacy goals and their implementation
in Dynamics 365.

Fig. 12-4 Privacy goals: you own your data; you know where your data is
located; you control your customer data. Implementation details: all data
is classified; role-based security puts the customer in charge.

How we use your data
Your data is your business, and you can access, modify, or delete it at
any time. Microsoft will not use your data without your agreement, and
when we have your agreement, we use your data to provide only the
services you have chosen. We only process your data based on your
agreement and in accordance with the strict policies and procedures
that we have contractually agreed to. We don't share your data with
advertiser-supported services, nor do we mine it for any purposes like
marketing research or advertising. Learn more about how Microsoft
categorizes data in the delivery of online services.

We believe you should have control over your data. The Trust Center can
Figure 12-6 lists some standard compliance goals and their implementation
in Dynamics 365.
Fig. 12-5 Microsoft protecting you. The Intelligent Security Graph draws on
signals from industry partners, antivirus, network protections, and
certifications; the Cyber Defense Operations Center combines the malware
protection center, cyber hunting teams, the security response center, and
the digital crimes unit; and platform services include conditional access,
cloud app security, event management, rights management, Key Vault,
Security Center, the active protection service, Windows Update, Microsoft
365 advanced threat protection, SmartScreen, advanced threat analytics, and
PaaS/IaaS protections. The Security Development Lifecycle spans core
security training; establishing security and design requirements; creating
quality gates and bug bars; performing attack surface analysis and
reduction; using approved tools; deprecating unsafe functions; performing
dynamic analysis and fuzz testing; conducting the final security review; and
creating and implementing an incident response plan.
Azure has a defense system against DDoS attacks on its platform services.
It uses standard detection and mitigation techniques, and is designed
to withstand attacks generated from outside and inside the platform.
Data segregation
Dynamics 365 runs on Azure, so it’s inherently a multi-tenant service,
meaning that multiple customers’ deployments and virtual machines
are stored on the same physical hardware. Azure uses logical isolation
to segregate each customer’s data from others. This provides the scale
and economic benefits of multi-tenant services while rigorously pre-
venting customers from accessing one another’s data.
Encryption
Data is an organization's most valuable and irreplaceable asset,
and encryption serves as the last and strongest line of defense in a
multi-layered data security strategy. Microsoft business cloud services
keys for your Dynamics 365 deployments. Finance and Supply Chain
Management apps use server-side encryption using service-managed keys.

Fig. 12-8 Physical security layers: barriers, fencing, perimeter, building.

Fig. 12-9 Data protection capabilities. At the Windows OS level, the Data
Protection API (DPAPI) encrypts the service master key (SMK); the SMK
encrypts the database master key (DMK) for the master DB; at the content
DB level, a certificate encrypts the database encryption key (DEK). SQL TDE
performs real-time I/O encryption and decryption of the data and log files
to provide data encryption at rest; Microsoft manages the keys and handles
management of encryption. Availability: available now for all Dynamics 365
online environments.
Secure identity
Identity and access management is critical to every organization. Azure
Active Directory (Azure AD) is a complete identity and access management
solution with integrated security that connects 425 million people to their
apps, devices, and data each month. Dynamics 365 applications are
safeguarded using Azure AD as a seamless identity solution.

Fig. 12-10 Encryption of data in transit. End-to-end encryption of
communications between users protects users from interception of their
communications and helps ensure transaction integrity; encryption of data
in transit between a user and the service protects from bulk interception
of data; encryption of data in transit between datacenters protects from
interception or loss of data in transit.

Authentication: Users
Authentication is the process of proving an identity. The Microsoft
identity platform uses the OpenID Connect protocol for handling
authentication. By default, only authenticated users can access
Dynamics 365.

Azure AD is used as a centralized identity provider in the cloud.
To access the system, users must

Fig. 12-11 Access control: authentication for online services (PC and
device, ExpressRoute, multifactor authentication).

With conditional access, you can implement automated access control
decisions for accessing your cloud apps that are based on conditions
(Figure 12-12).

Authorization
Authorization is the control of access to the Dynamics 365
Customer responsibility
As a customer, you’re responsible for:
▪ Account and identity management
▪ Creating and configuring conditional access policies
▪ Creating and assigning security roles
▪ Enabling and configuring auditing and monitoring
▪ Authentication and security of components of the solutions other
than Dynamics 365
Fig. 12-12 Conditional access. Conditions: user/group, cloud application,
device state, location (IP range), client application, and sign-in risk.
Actions: allow access, enforce MFA per user/per application, or block
access, for cloud and on-premises applications.
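The flow in Figure 12-12 can be sketched as a simple policy function: conditions go in, a decision comes out. This is an illustrative model only, not Azure AD's actual evaluation engine; the condition names, thresholds, and the "corp" network label are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SignIn:
    """A sign-in attempt, reduced to the conditions Figure 12-12 lists."""
    user_group: str
    application: str
    ip_range: str        # e.g. "corp" or "public" (illustrative labels)
    sign_in_risk: str    # "low", "medium", or "high"
    device_compliant: bool

def evaluate(sign_in: SignIn) -> str:
    """Return the access decision: 'allow', 'mfa', or 'block'."""
    # Block outright on high sign-in risk.
    if sign_in.sign_in_risk == "high":
        return "block"
    # Require MFA from outside the corporate network or from a
    # non-compliant device.
    if sign_in.ip_range != "corp" or not sign_in.device_compliant:
        return "mfa"
    return "allow"

print(evaluate(SignIn("sales", "dynamics", "corp", "low", True)))    # allow
print(evaluate(SignIn("sales", "dynamics", "public", "low", True)))  # mfa
print(evaluate(SignIn("sales", "dynamics", "corp", "high", True)))   # block
```

Real conditional access policies are declarative rather than coded, but the shape is the same: each policy names the conditions it matches and the control (grant, require MFA, or block) it enforces.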
Transparency
Microsoft is transparent about where your data is located. You know
where your data is stored, who can access it, and under what conditions.
Dynamics 365 customers can specify the Azure datacenter region where
their customer data will be stored. Microsoft may replicate customer
data to other regions available within the same geography for data
durability, except in specific scenarios, such as the following:
▪ Azure AD, which may store AD data globally
▪ Azure multifactor authentication, which may store MFA data globally
▪ Customer data collected during the onboarding process by the
Microsoft 365 admin center
Microsoft imposes carefully defined requirements on government
and law enforcement requests for customer data. As described in
the Microsoft Privacy Principles, if Microsoft receives a demand for
a customer's data, we direct the requesting party to seek the data
directly from the customer. If compelled to disclose or give access to
any customer's data, Microsoft promptly notifies the customer and
provides a copy of the demand unless legally prohibited from doing so.

Fig. 12-14 Transparency goals: choose where your data is stored; be
transparent about how we respond to government requests for your data.

Figure 12-14 lists some standard transparency goals and their implementation in Dynamics 365.
Security features
We use three main categories of security features to provide appro-
priate end-user access (Figure 12-15): fundamental security controls,
additional security controls, and manual sharing. Most of the security
requirements should be addressed using fundamental security con-
trols; other options should be used to manage the exceptions and
edge scenarios.
Fig. 12-15 Security feature categories. Fundamental security controls
(business units hierarchy, security roles, users and teams, record
ownership, environment-level security group configuration) generally cover
most requirements. Additional controls (hierarchy security, field-level
security, access teams, table relationships behavior) handle exceptions to
the fundamental security controls more easily. Manually sharing records
handles exceptions to the model.

Record ownership
Dataverse supports two types of record ownership:
▪ Organization owned  When a record is assigned to Organization, everyone in the environment can access the record
▪ User or Team owned  If not assigned to the organization, a record is assigned to Business Unit, Child Business Unit, Team, or User
Some out-of-the-box tables are exceptions to these two types; for example,
the system user record is owned by a Business Unit.
The real power of business units comes from their hierarchical nature.
Users can be given access to records just in their business unit, or in their
business unit and the business units under their unit. For example, the
hierarchical nature of business units can allow you to limit access to
records at the site, district, region, and corporate levels. Business units are
useful to segment data into ownership containers for access control.

As a best practice, minimize the number of business units in your
implementation based on access control requirements instead of mapping
the organizational chart into business units.
Security roles
A privilege is permission to perform an action in Dynamics 365. A
security role is a set of privileges that defines a set of actions that can
be performed by a user. Some privileges apply in general (such as the
ability to use the Export to Microsoft Excel feature) and some to a
specific table (such as the ability to read all accounts).
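To make the interplay between security roles and the business unit hierarchy concrete, the following sketch models privilege depth: a privilege can apply to only the user's own records, to the user's business unit, or to the business unit and its children. This is an illustrative model, not Dataverse's actual access-check implementation; the business unit names and depth labels are invented.

```python
# Parent of each business unit in an invented three-level hierarchy.
BU_PARENT = {"corporate": None, "region-east": "corporate", "site-12": "region-east"}

def descendants(bu: str) -> set[str]:
    """A business unit plus every unit beneath it in the hierarchy."""
    units = {bu}
    changed = True
    while changed:
        changed = False
        for child, parent in BU_PARENT.items():
            if parent in units and child not in units:
                units.add(child)
                changed = True
    return units

def can_read(user_bu, depth, record_owner, record_bu, user_id=None):
    """depth: 'user', 'business-unit', or 'parent-child'."""
    if depth == "user":
        return record_owner == user_id          # only records the user owns
    if depth == "business-unit":
        return record_bu == user_bu             # records in the same unit
    if depth == "parent-child":
        return record_bu in descendants(user_bu)  # unit and all units below
    return False

# A regional user with parent-child depth sees site records; a site user
# with business-unit depth does not see regional records.
print(can_read("region-east", "parent-child", "u1", "site-12"))   # True
print(can_read("site-12", "business-unit", "u1", "region-east"))  # False
```

The example shows why the depth granted on a role matters as much as the privilege itself: the same read privilege yields very different reach depending on where the user sits in the hierarchy.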
Teams
Teams provide a straightforward way to share records, collaborate
with other people across business units, and assign security roles.
While a team belongs to one business unit, it can include users from
other business units. You can associate a user with more than one
team, which is a convenient way to grant rights to a user that crosses
business units.
Field-level security
You can use field-level security to restrict access to high-business-impact
fields to specific users or teams. For example, you can enable only
certain users to read or update the credit score of a business customer.
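The credit-score example can be sketched as follows: secured columns are stripped from a record unless the caller holds a profile granting read access. This is illustrative only; Dataverse enforces field-level security server-side through field security profiles, and the column and profile names here are invented.

```python
# Columns protected by field-level security, and invented profiles that
# grant per-column permissions.
SECURED_FIELDS = {"creditscore"}
PROFILES = {"credit-analysts": {"creditscore": {"read", "update"}}}

def read_record(record: dict, user_profiles: list[str]) -> dict:
    """Return the record with secured columns removed unless a profile
    held by the user grants read access to them."""
    visible = {}
    for column, value in record.items():
        if column in SECURED_FIELDS:
            if any("read" in PROFILES.get(p, {}).get(column, set())
                   for p in user_profiles):
                visible[column] = value
        else:
            visible[column] = value
    return visible

account = {"name": "Contoso", "creditscore": 720}
print(read_record(account, ["credit-analysts"]))  # both columns visible
print(read_record(account, []))                   # credit score stripped
```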
Sharing
Record sharing lets a user give other users or teams access to a table
record. The user must have the share privilege on the record to be able to
share it. Sharing should be seen as a way for users to manually manage
exceptions to the default security model.
Hierarchy security
You can use a hierarchy security model for accessing data from a user
or position hierarchy perspective. With this additional security, you
gain more granular access to records, for example by allowing man-
agers to access the records owned by their reports for approval or to
perform tasks on reports’ behalf.
Audit
Auditing helps you comply with internal policies, government regula-
tions, and consumer demands for better control over confidential data.
Organizations audit various aspects of their business systems to verify
Dataverse audit
Audit logs are provided to ensure the data integrity of the system and
to meet certain security and compliance requirements. The auditing
feature logs the changes that are made to records and user access so
the activity can be reviewed later.
Don't enable auditing for all tables and columns. Do your due diligence
to determine which tables and fields are required for auditing.
Excessive auditing can affect performance and consume large volumes
of log storage.
▪ Auditing The system logs when a user signs in or out of
the application
Role-based security
In Finance and Supply Chain Management apps, role-based security
is aligned with the structure of the business. Users are assigned to
security roles based on their responsibilities in the organization and
their participation in business processes. Because you can set up rules
for automatic role assignment, the administrator doesn’t have to be
involved every time a user’s responsibilities change. After business
managers set up security roles and rules, they can control day-to-day
user access based on business data. A user who is assigned to a security
role has access to the set of duties that are associated with that role,
which is comprised of various granular privileges. A user who isn't

Fig. 12-17 Authorization in Finance and Supply Chain Management apps:
users are granted authorization through security roles; roles are composed
of duties and privileges that secure user interface elements, tables and
fields, SSRS reports, and service operations; the data security framework
(data security policies and table permissions) enforces security at the
database level.
Segregation of duties
You can set up rules to separate tasks that must be performed by
different users. For example, you might not want the same person to
acknowledge the receipt of goods and to process payment to the vendor.
This concept is named segregation of duties. Segregation of duties
helps you reduce the risk of fraud, enforce internal control policies, and
detect errors or irregularities.

The standard security roles included in Finance and Supply Chain
Management apps incorporate segregation of duties. If a user needs full
access, they often need a combination of roles (such as an HR assistant
and HR manager for full access to HR features).

When an organization defines rules for segregation of duties, you should
evaluate existing role definitions and role assignments for compliance,
thereby preventing role definitions or role assignments that don't comply.
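The goods-receipt/vendor-payment example above can be sketched as a compliance check: rules name pairs of duties that one person must not hold, role assignments are flattened to duties, and the result is scanned for violations. This is an illustrative model only; the rule, role, and duty names are invented, and the actual framework in Finance and Supply Chain Management apps is configured declaratively.

```python
# Invented roles mapped to the duties they grant.
ROLE_DUTIES = {
    "warehouse-clerk": {"acknowledge-goods-receipt"},
    "ap-clerk": {"process-vendor-payment"},
}

# Pairs of duties that must not be held by the same user.
SOD_RULES = [("acknowledge-goods-receipt", "process-vendor-payment")]

def conflicts(user_roles: list[str]) -> list[tuple[str, str]]:
    """Flatten a user's roles to duties and return the violated rules."""
    duties = set()
    for role in user_roles:
        duties |= ROLE_DUTIES.get(role, set())
    return [rule for rule in SOD_RULES
            if rule[0] in duties and rule[1] in duties]

# Holding both roles violates the rule; either role alone is fine.
print(conflicts(["warehouse-clerk", "ap-clerk"]))  # one conflict
print(conflicts(["warehouse-clerk"]))              # []
```

Running such a check whenever roles are assigned (or rules change) is what lets conflicts be detected and documented overrides applied, rather than discovered in an audit.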
Security reports
Finance and Supply Chain Management applications provide a set of
rich security reports to help you understand the set of security roles
running in your environment and the set of users assigned to each role.
In addition to the security reports, developers can generate a work-
book containing all user security privileges for all roles.
System log
System administrators can use the User log page to keep an audit
log of users who have logged on to the system. Knowing who has
logged in can help protect your organization’s data. The user logging
capability allows administrators to identify roles that provide access to
sensitive data. The sensitive data identifier enhances the user logging
experience by letting your organization produce audit logs that show
who in your system has access to sensitive data. This capability is help-
ful for organizations that might have multiple roles that have varying
degrees of access to certain data. It can also be helpful for organiza-
tions that want a detailed level of auditing to track users who have had
access to data that has been identified as sensitive data.
Database logging
Database logging provides a way to track specific types of changes to
the tables and fields in Finance and Supply Chain Management apps,
including insert, update, delete, and rename key operations. When you
configure logging for a table or field, a record of every change to that
table or field is stored in the database log table (sysdatabaselog) in the
environment database.
Make security
a day one priority
Security should be at the forefront when you start a Dynamics 365
project, not an afterthought. Negligence of security requirements can
lead to significant legal, financial, and business risks. It can also impact
the overall scalability and performance of the solution. In this section,
we discuss some of the security impacts on Dynamics 365.
Impact on scalability
Security design can have a significant impact on scalability, and it
should account for scalability especially in multi-phased and global
deployments.

Customer Engagement examples
In multi-phased global deployments, short-sighted security design can
affect the rollout to other regions. Different regions can have their
own compliance, business processes, and security requirements, and the
same model may not work for all of them. Therefore, you should think
about the security model holistically at the beginning of the project.
One such example can be a global deployment where in the first
region (Phase 1) you create all users in the root business unit with
business unit-level access in security roles. Because Phase 1 users were
placed in the root business unit, when you roll out the solution to the
next region (Phase 2), Phase 2 users have access to the users in the first
region. This may cause some regulatory or compliance issues.
Impact on performance
Security design can impact system performance. Dynamics 365 has
multiple access control mechanisms, all with their own pros and cons.
The ideal security design is a combination of one or more of these
controls based on your requirements and the granularity, simplicity, and
scalability they provide. The wrong choice can lead to poor performance.
Sharing can also trigger cascading rules that can be expensive if not set
appropriately. Sharing should be for edge cases and exception scenarios.
You can get more information on this topic from the Scalable Security
Modelling with Microsoft Dynamics CRM whitepaper. It's an older
document, but still relevant for Dynamics 365 security.
The volume of data will impact performance in the long run. You
should have a well-defined archival strategy. The archived data can be
made available through some other channels.
Finance and Supply Chain Management examples
Database logging in Finance and Operations can be valuable from a
business perspective, but it can be expensive regarding resource use
and management. Here are some of the performance implications of
database logging:
▪ The database log table can grow quickly and increase the size of
the database. The amount of growth depends on the amount of
logged data that you retain.
▪ When logging is turned on for a transaction type, each instance of
that transaction type causes multiple records to be written for the
Microsoft SQL Server transaction log file. Specifically, one record
is written for the initial transaction, and one record logs the trans-
action in the database log table. Therefore, the transaction log file
grows more quickly and might require additional maintenance.
▪ Database logging can adversely affect long-running automated
processes, such as inventory close, calculations for bills of materi-
als, master planning, and long-running data imports.
▪ When logging is turned on for a table, all set-based database op-
erations are downgraded to row-based operations. For example, if
you're logging inserts for a table, each insert is performed as a
row-based insert.
Unless carefully designed and tested, policy queries can have a significant
performance impact, and poorly performing queries can significantly
degrade the system. Therefore, make sure to follow the simple but
important guidelines when developing an XDS policy. As a best practice,
Governance takes time to understand, roll out, and adopt. The earlier
you begin the process, the easier compliance becomes.
Impact on rollout
In many projects, security design is created late in the implementation
process. This can lead to many issues, ranging from system design to
inadequate testing. Keep the following recommended practices in mind:
▪ Test the system using the right security roles from the very beginning.
▪ Use the correct security profile during test cycles. Don't provide an
administrator role to everyone.
▪ When training people, make sure they have access to perform all the
required actions. They shouldn't have access to more information
than required to complete those actions.
▪ Validate security before and after data migration occurs.

Finance and Supply Chain Management examples
Design your security roles with both compliance and scalability in
mind. By engaging your security and compliance team early in your
implementation, you can use the segregation of duties framework to
identify conflicts and create rules that enforce separation between
duties that should be kept apart.
However, in smaller legal entities, users may need to accomplish the same
tasks with a smaller staff, which makes it challenging to construct declara-
tive rules that enforce segregation of duties. In this case, modular security
roles allow you to enforce the same segregation of duties rules while
assigning multiple roles to some users, applying documented overrides
to the detected conflicts. In this manner, your global rollout can reuse
modular security components, and you have a robust segregation of du-
ties framework in place for entities of many sizes, allowing them to grow
without the need to redesign the security implementation.
Impact on reporting
Security design can have a significant impact on reporting. For example,
your security design identifies sensitive data that requires restricted
access to be enforced and monitored. You need to create auditing re-
ports related to access and changes on sensitive data, and designated
personnel responsible for oversight need to review them periodically.
You can use Tabular Data Stream (TDS) endpoints with Customer
Engagement for direct query scenarios, which honor the Dynamics 365
security context.
Organizational changes
What happens if a user moves to another team or business unit? Do
you need to reassign the records owned by the user? What should the
process be?

Do you reassign all records or just active records? What is the effect of
changing the ownership on the child records, especially closed activities,
closed cases, and won or lost opportunities? Based on these questions,
you can come up with a process and cascading rules in Dynamics 365.
Reassigning a large volume of records can also take a long time and affect
the system's performance. Learn more about assigning or sharing rows.
Maintenance overhead
Plan out how to assign security roles to new users. Are you going to
automate a process? For example, you can use Azure AD group teams
in Dynamics 365 to assign roles to new team members. This can make
it very easy to assign license and security roles to new team members.
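The group-based approach can be sketched as follows: instead of assigning roles user by user, a directory group is mapped to a team that carries the roles, so joining the group is enough to grant them. The group, team, and role names here are invented for illustration; in Dynamics 365 this is configured with Azure AD group teams rather than code.

```python
# Invented mapping from directory groups to the security roles their
# group team carries, plus current group membership.
GROUP_TEAM_ROLES = {"aad-sales-emea": {"Salesperson", "Basic User"}}
GROUP_MEMBERS = {"aad-sales-emea": {"alice", "bob"}}

def effective_roles(user: str) -> set[str]:
    """Roles a user inherits purely through group team membership."""
    roles = set()
    for group, members in GROUP_MEMBERS.items():
        if user in members:
            roles |= GROUP_TEAM_ROLES.get(group, set())
    return roles

print(sorted(effective_roles("alice")))  # roles inherited from the group
print(sorted(effective_roles("eve")))    # [] - not in any mapped group
```

The maintenance win is that onboarding and offboarding reduce to directory group changes, with no per-user role administration in Dynamics 365.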
Security anti-patterns
An anti-pattern is a frequently implemented, ineffective response to a
problem. Several security anti-patterns should be avoided for scaling,
performance, and security reasons. We shared a few in the previous
section, such as using Organization owned entities for reference data
tables in Customer Engagement, or indiscriminately logging transac-
tional tables in Finance and Operations apps. In this section, we discuss
a few more anti-patterns to avoid.
Conclusion
In this chapter, we introduced how the Trusted Cloud is built on the
foundational principles of security, privacy, compliance, and trans-
parency, and outlined basic concepts of information security as they
apply to Dynamics 365. We then took a closer, product-specific look
at how these security concepts are applied to Customer Engagement,
Finance, and Supply Chain Management applications. With that as a
foundation, we then examined why it's crucial to the success of your
implementation to make security a priority from day one, citing some
specific examples from each product. Equipped with this information,
you can be confident in the security considerations and capabilities of
Dynamics 365 products and better ensure the security of your
implementation.

General
Microsoft Dynamics CRM Online security and compliance planning guide
Compliance
An Introduction to Cloud Computing for Legal and Compliance Professionals
Privacy
Privacy at Microsoft
Security
Dynamics 365 Security Assessment
Security architecture
Checklist
Identity and access
Establish an identity management strategy covering user access, service
accounts, and application users, along with federation requirements for
SSO and conditional access policies.
Establish administrative access policies targeting the different admin
roles on the platform, such as service admin and global admin.
Ensure the security model is optimized to perform and provides the
foundation for further expansion and scale by following the security
model best practices.
Have a process to map changes in the organization structure to the
security model in Dynamics 365. This needs to be done cautiously, in a
serial way, due to the downstream cascading effect.
Implementation Guide: Success by Design: Business intelligence, reporting, and analytics 319
Group, wrote in the Microsoft Dynamics 365 blog, “There is a
fundamental change occurring across industries and organizations:
Data now comes from everywhere and everything.” As an essential
part of a business solution, data represents a sizable portion of each
user’s engagement. The solution processes and analyzes the data to
produce information, which allows an organization to make informed
decisions, and determines actions that can come from it.
it. Data silos make the goal of having a 360-degree view of each user
even more challenging. Successful organizations are able to digitally
connect every facet of their business. Data from one system can
be used to optimize the outcomes or processes within another. By
establishing a digital feedback loop (Figure 13-1) that puts data, AI,
and intelligent systems and experiences at the core, organizations can
transform, become resilient, and unlock new values for users.
Evolution of business
intelligence, reporting,
and analytics
Practices for gathering, analyzing, and acting on data have evolved
significantly over time. Traditional methods of standardizing and
generating static reports no longer give businesses the agility to adapt
to change. New technology and secure, highly reliable cloud services—
which are poised to meet the needs of organizations that must quickly
manage increasing amounts of data—have given rise to a new era of
digital intelligence reporting.
Traditional reporting
The early generations of business intelligence solutions were typically
centralized in IT departments. Most organizations had multiple data
repositories in different formats and locations that were later combined
into a single repository using extract, transform, and load (ETL) tools,
or they would generate reports within siloed sources and merge them to
provide a holistic view of the business.

Fig. 13-1 The digital feedback loop: data, AI, and intelligent systems and
experiences at the core, connecting how you engage customers, empower
people, optimize operations, and transform products.

Once the data was unified, the next steps were deduplication and
standardization, so the data could be structured and prepared for
reporting. Business users who lacked the expertise to perform these tasks
would have to rely on the IT team or specialized vendors. Eventually, the
business would receive static intelligence reports, but the entire process
could take days or even weeks, depending on the complexity of the data
and maturity of the processes in place. Data would then undergo further
manipulation, when required, and be shared across different channels,
which could result in the creation of multiple versions that would be
difficult to track.
Self-service reporting
The evolution to a more agile approach favored self-service capabilities to
empower users. More user-friendly solutions reduced the IT dependency
and focused on providing business users with access to data and visualization
tools so they could do their own reporting and analytics. This accelerated the
speed of data analysis and helped companies make data-driven decisions
in competitive environments. However, in this model, reporting was
unmanaged—increasing the number of versions of reports, as well as
different representations of the data—which sometimes prevented
organizations from using a standardized method to analyze data and
inhibited effective decision-making.
Reporting from anywhere
The growth of IT infrastructures, networks, and business usage of
devices such as mobile phones and tablets launched a digital trans-
formation from legacy systems into more modern solutions for most
organizations. Business intelligence apps allowed reporting from
anywhere at any time, giving users access to data while they were out
of the office or on business trips. These apps improved how organiza-
tions could respond to customers, and gave a 360-degree view of each
customer interaction. They also provided more succinct visualizations
of data, with better features to interpret it.
While the data may not yet be unified with such an approach, customers
and organizations can use insights provided by products and services
from the earliest phases of usage or production, which increases their
return on investment (ROI).
Your analytics strategy can help transform data collected from different
processes and systems into knowledge to help you stay competitive in
the market.
A successful strategy caters to user requirements and how those require-
ments must be fulfilled. For example, some business users may require
printed reports, which can be sent to customers and vendors. Others
may need data to be available for summarized, detailed, or ad-hoc
reporting. Business management may require financial or operational
reports to understand the health of the organization and determine an
overall strategy. Serving such varied user needs often requires different
tools to manage data coming from multiple sources. For example,
financial and ad-hoc reporting requirements may rely on data extracted
and stored in a staging location utilizing Microsoft Azure data services
such as Azure SQL Data Warehouse, as well as manufacturing or sales
information in Dynamics 365 apps.
scalable solutions. To choose the best technology to meet requirements
today and in the future, the implementation team must understand their
organization’s current state and the future vision for enterprise business
intelligence and analytics.
Organizations must also identify any critical reports that require data
mash-up with other sources, such as front-office apps or transportation
systems, to develop an appropriate data integration and solution strat-
egy. Data-volume reporting requirements help determine how reports
will be designed and shared with users.
To comply with local laws, businesses typically must submit regulatory
and compliance documents in a pre-defined printed or electronic
format provided by government agencies. The data required for
these documents often resides in enterprise resource planning
(ERP) systems.
Financial reporting
Finance and business professionals use financial reporting to create,
maintain, deploy, and view financial statements. The Finance app’s
financial reporting capabilities move beyond traditional reporting
constraints to help you efficiently design distinct types of reports.
It includes complex currency-reporting requirements and financial
dimension support, and makes account segments or dimensions
immediately available—no additional tools or configuration steps
are required.
Dynamics 365 apps deliver rich, interactive reports that are seamlessly
integrated into application workspaces. By using infographics and visuals
supported by Microsoft Power BI, analytical workspaces let users explore
the data by selecting or touching elements on the page. They also can
identify cause and effect, perform simple what-if operations, and discover
hidden trends—all without leaving the workspace. Power BI workspaces
complement operational views with analytical insights based on near-real-
time information. Users also can customize embedded reports.
Reporting categories
Reporting needs for an organization can be classified into two
categories: operational reporting and business reporting.
Operational reporting
Examples of operational reporting include orders received in a day,
delayed orders, and inventory adjustments in a warehouse. This kind of
reporting supports the detailed, day-to-day activities of the organiza-
tion. It is typically limited to a short duration, uses real-time, granular
data, and supports quick decision-making to improve efficiency. It also
helps organizations identify their issues and achievements, as well as
future strategies and actions that may affect the business. Operational
reports can empower businesses to determine their next steps for
improving organizational performance. Organizations can fulfill oper-
ational reporting requirements using elements such as native controls,
SSRS reports, dashboards, and business documents.
Business reporting
Business reporting refers to reports detailing operating expenses and
financial key performance indicators (KPIs) to business stakeholders so
they can understand the organization’s overall health and make more
informed decisions. This kind of reporting delivers a view of current
performance to enable the stakeholders to identify growth oppor-
tunities and areas for improvement, and track business performance
against the planned goals for the organization.
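To make the distinction concrete, here is a minimal sketch (hypothetical data and field names) of the two report types over the same order records: an operational view of a single day versus an aggregated business view.

```python
from datetime import date

# Hypothetical order records: the granular data an operational report draws on.
orders = [
    {"id": 1, "day": date(2021, 3, 1), "amount": 1200.0, "delayed": False},
    {"id": 2, "day": date(2021, 3, 1), "amount": 450.0,  "delayed": True},
    {"id": 3, "day": date(2021, 3, 2), "amount": 980.0,  "delayed": False},
]

def operational_report(records, day):
    """Day-to-day view: orders received and delayed orders for one day."""
    todays = [r for r in records if r["day"] == day]
    return {
        "orders_received": len(todays),
        "delayed_orders": sum(1 for r in todays if r["delayed"]),
    }

def business_report(records):
    """Aggregated KPI view for stakeholders: totals across the period."""
    return {
        "total_revenue": sum(r["amount"] for r in records),
        "on_time_rate": sum(1 for r in records if not r["delayed"]) / len(records),
    }

print(operational_report(orders, date(2021, 3, 1)))
print(business_report(orders))
```

The operational view is short-duration and granular; the business view aggregates the same data into KPIs that track performance against goals.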
Build a data strategy
For business solutions to deliver the intended results, an organization’s
data strategy must articulate a clear vision that effectively aligns busi-
ness intelligence investments with business goals to maximize impact,
drive growth, and improve profitability.
To define your data strategy, start with the business questions you
need to answer to meet your organization’s goal of using the data to
make more effective decisions that improve the business outcome.
Some organizations emphasize collecting data more than analyzing it to drive insights. It's also important to identify gaps and blockers for business objectives and outcomes, and not focus solely on the data, structure, analytics, tools, or technologies.

With customers now having access to more information and channels, an organization's data strategy should reflect the customer journey. From a data management perspective, all channels and checkpoints across the customer journey should be represented.
Empower people
To do their jobs more efficiently, employees need tools and resources,
as well as timely access to information. Utilizing AI to further automate
business processes contributes to better and faster results, which then
empowers your people (Figure 13-2) to make the best decisions and
deliver value to customers.
Engage customers
Modern applications are already capable of delivering insights by
using AI and data analytics to optimize business processes and shed
light on customer activity. For example, Dynamics 365 apps can pro-
vide a customer service agent with insights into a customer’s interests
and purchasing behavior information in real time, allowing the agent
to make suggestions tailored to the customer. These types of insights
help businesses intelligently engage customers (Figure 13-2) to provide a superior customer service experience.
[Figure 13-2: The four pillars of digital transformation: empower people, engage customers, transform products, and optimize operations, illustrated with Gartner predictions for AI and analytics adoption. Source: "Predicts 2020: Data and Analytics Strategies – Invest, Influence and Impact." Gartner.com. Gartner, Inc., December 6, 2019.]
Transform products (and services)
Products and services have become sources of data that provide
better insights into lifecycle-related information that can be used to
benefit organizations and their customers. By analyzing those in-
sights, organizations can in turn transform their products and services
(Figure 13-2) to take advantage of opportunities and expand into
new channels and markets. For example, a smart air conditioner that
allows you to set an optimal temperature via an app can also apply
what it learns about your household’s daily routine to make sure
the house is cool when everyone gets home. The air conditioner’s
manufacturer then uses data collected from all users to enhance the
product’s smart features, leading to a continuous improvement cycle.
Optimize operations
Cloud-based AI usage is an increasingly common investment area for
most organizations—and not just for customer-facing technology.
For example, Dynamics 365 for Field Service can use AI to anticipate
hardware failures on a manufacturing floor and automatically dispatch
technicians before the malfunction. Organizations that optimize their
operations (Figure 13-2) with augmented analytics, AI, and embedded
intelligence will be more competitive in the marketplace.
Components of the
modern data estate
The Dynamics 365 platform can be an essential part of your modern data
estate architecture. Your business data is securely stored within Dynamics
365 as a function of your day-to-day operations. In addition, Dynamics 365
can export data to or ingest data from various sources to be used in re-
porting, workflow, applications, or in any other way that is required by your
business.
You can also generate insights and analytics from data created and managed
inside each Dynamics 365 application. Apps using embedded intelligence,
such as the audience insights capability inside the Dynamics 365 Customer
Insights app, enrich your data and allow more informed decision-making.
Common Data Model
Dynamics 365 data estate components can ingest, store, prepare,
model, and visualize data to produce insights that will support,
boost, and even transform operations (Figure 13-3). The Common
Data Model (CDM) is one of the key technologies enabling access to
many kinds of information from heterogeneous services and
data stores.
[Figure 13-3: Dynamics 365 data estate components that ingest, store, prepare, model, and visualize data, centered on the Common Data Model]
Dataverse builds on the CDM to store and secure app data. The CDM structure is defined
in an extensible schema, as shown in Figure 13-4. This allows
organizations to build or extend apps by using Power Apps and
Dataverse directly against their business data.
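As an illustration of schema extensibility (not the CDM's actual JSON format), the following sketch shows a standard entity definition being extended with a custom attribute; all names are invented.

```python
# A minimal sketch of schema extension in the spirit of the CDM: standard
# entity definitions that an organization can extend with custom attributes.
# Illustrative only; the real CDM schema is defined in JSON documents.

BASE_ENTITIES = {
    "Account": {"name": "string", "accountNumber": "string"},
    "Contact": {"name": "string", "email": "string"},
}

def extend_entity(base_name, custom_attributes):
    """Return a new entity definition: base attributes plus custom ones."""
    extended = dict(BASE_ENTITIES[base_name])  # copy, so the base stays intact
    extended.update(custom_attributes)
    return extended

# An organization adds a loyalty tier to the standard Account entity.
custom_account = extend_entity("Account", {"loyaltyTier": "string"})
print(sorted(custom_account))
```

The point of the pattern is that the standard definition is never mutated, so apps built against the base schema keep working while extended apps see the extra attributes.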
[Figure 13-4: CDM schema, showing core entities (such as Account, Activity, Appointment, Contact, Currency, Email, Goal, Letter, Note, Owner, Organization, Phone call, Social activity, and Task) and customer relationship management (CRM) entities grouped into Sales, Service, Marketing, and Solutions (such as Campaign, Case, Competitor, Discount, Entitlement, Event, Invoice, Lead, Marketing email, Marketing list, Marketing page, Opportunity, Order, Order product, Quote, Resource, Schedule group, and Service).]
Data unification components
Services and applications that ingest data from multiple sources
serve a vital role in a modern data estate. Aggregation from data
stores and services provides users with business-critical information
supplied in dashboards and reports. The resulting data and events
can also be used to trigger workflows and act as inputs to the apps
running on the Dataverse platform. Many data unification
components are built into Dynamics 365 applications—and
organizations can design their own integrations to fit their business needs.
[Figure 13-5: Data unification: data from channels and sources such as campaigns, email, mobile, social, LinkedIn, in-person interactions, and geographic, behavioral, and transactional data is conflated and enriched, feeding AI and machine learning, bots, and Power Apps.]
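The conflate-and-enrich step in the figure can be sketched as follows; this is a toy illustration with invented data, not a Dynamics 365 API.

```python
def conflate(sources, key="email"):
    """Merge records that share the same key into one unified profile.
    Later sources enrich (fill in) fields the earlier ones lacked."""
    unified = {}
    for source in sources:
        for record in source:
            profile = unified.setdefault(record[key], {})
            for field, value in record.items():
                profile.setdefault(field, value)  # first value seen wins
    return unified

# Hypothetical source data: email campaign activity and social activity.
campaign = [{"email": "ana@example.com", "name": "Ana", "clicks": 3}]
social   = [{"email": "ana@example.com", "handle": "@ana", "name": "Ana M."}]

profiles = conflate([campaign, social])
print(profiles["ana@example.com"])
```

Real unification services also resolve conflicts and fuzzy-match identities; this sketch only shows the basic idea of keying records from multiple channels into one profile.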
The addition of Azure Cognitive Services provides text, speech,
image, and video analysis, and enriches data via Microsoft Graph.
Dataverse applications
Building an app typically involves accessing data from more than one
source. Although it can sometimes be done at the application level,
there are cases where integrating this data into a common store creates
an easier app-building experience—and a single set of logic to maintain
and operate over the data. Dataverse allows data to be integrated from
multiple sources into a single store, which can then be used in Power
Apps, Power Automate, Power BI, and Power Virtual Agents, along with
data that’s already available from Dynamics 365 apps.
Data Export Service
The Data Export Service replicates data from the Dataverse database
to an external Azure SQL Database or an SQL Server on Azure virtual
machines. This service intelligently syncs all data initially, and thereaf-
ter syncs the data on a continuous basis as delta changes occur in the
system, enabling several analytics and reporting scenarios on top of
Azure data and analytics services.
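The initial-sync-then-delta-sync pattern described above can be sketched as follows; this toy model uses row versions as the change marker and is not the service's actual protocol.

```python
# Sketch of the initial-sync-then-delta-sync pattern the text describes.
# Row versions stand in for whatever change-tracking marker a real service uses.

source = {}   # record id -> (version, data)
replica = {}  # the external store being kept in sync
last_synced_version = 0

def write(record_id, data, version):
    source[record_id] = (version, data)

def sync():
    """Copy every source record changed since the last sync."""
    global last_synced_version
    for record_id, (version, data) in source.items():
        if version > last_synced_version:
            replica[record_id] = data
    last_synced_version = max(
        (v for v, _ in source.values()), default=last_synced_version
    )

write("A1", {"name": "Contoso"}, version=1)
write("A2", {"name": "Fabrikam"}, version=2)
sync()                                    # initial sync copies everything
write("A1", {"name": "Contoso Ltd"}, version=3)
sync()                                    # delta sync copies only the change
print(replica["A1"])
```

After the first sync the replica holds both records; the second sync touches only the record whose version moved past the high-water mark, which is what makes continuous replication cheap.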
Embedded intelligence
Dynamics 365 apps with embedded intelligence, such as Sales
Insights and Customer Service Insights, allow organizations to
use AI without depending on highly skilled resources. These apps
continuously analyze your data and generate insights to help you
understand business relationships, evaluate activities and
interactions with customers, and determine actions based on
those insights.
Power Platform
The Power Platform (Figure 13-6) enables organizations to analyze, act
on, and automate the data to digitally transform their businesses. The
Power Platform today comprises four products: Power BI, Power Apps,
Power Automate, and Power Virtual Agents.
[Figure 13-6: Microsoft Power Platform: the low-code platform that spans Office 365, Azure, Dynamics 365, and standalone applications, built on Dataverse]
Power BI
Power BI is both part of the Power Platform and stands on its own
by bridging the gap between data and decision-making. Power BI
lets business analysts, IT professionals, and data scientists collaborate
seamlessly, providing a single version of data truth that delivers in-
sights across an organization.
Power BI helps you analyze your entire data estate within the Dynamics
365 or Azure platforms, or external sources. Power BI can connect indi-
vidually with siloed sources to provide reporting and analytics, or it can
connect with data stores within or outside of Dynamics 365. As data can
come from multiple sources, organizations should analyze how Power BI
will connect with those sources as a part of their data estate pre-planning.
Organizations can also get accurate insights by adding low-code AI tools
to their process automation via Power Automate. Power Virtual Agents
help you create and manage powerful chatbots—without the need for
code or AI expertise—and monitor and improve chatbot performance
using AI and data-driven insights.
Microsoft Azure
With Dynamics 365 at the center of the data estate, Azure provides an
ideal platform (Figure 13-7) for hosting services for business workloads,
services, and applications that can easily interact with Dynamics 365.
Built-in services in Dynamics 365 let you export data as needed or
scheduled. Power BI can aggregate information from Dynamics 365
and Azure sources into an integrated view, and Power Apps can access
both Dynamics and Azure sources for low-code, custom applications
designed for business use.
[Figure 13-7: Azure as the platform for the data estate: logs, files (unstructured), and other data sources are stored, then prepared and trained with Azure Data Factory (code-free data transformation and integration from more than 90 built-in connectors) and Azure Databricks, and visualized with Power BI, a Leader in the Gartner Magic Quadrant for Analytics and Business Intelligence Platforms.]
Store
[Figure 13-8: Modern data estate reference architecture: a hot path for real-time analytics through Azure Event Hubs, and a cold path for history and trend analytics with scheduled or event-triggered data integration through Azure Data Factory into Azure Data Lake Storage Gen2 (fast data load with PolyBase/ParquetDirect), Azure Databricks for integrated big data scenarios with the traditional data warehouse, Azure Cosmos DB serving applications, Azure Cognitive Services and Azure Machine Learning for analytics, and Power BI Premium providing an enterprise-grade semantic model. The data ranges from unstructured images, video, audio, and free text (variety) to semi-structured, loosely typed data (volume).]
Azure Stack
Azure Stack is a portfolio of products that allows you to use
embedded intelligence to extend Azure services and capabilities to
your environment of choice—from the datacenter to edge locations
and remote offices. You can also use it to build and deploy hybrid
and edge computing applications, and run them consistently across
location boundaries.
Data store
Azure Blob Storage offers massively scalable object storage for any type of
unstructured data—such as images, videos, audio, and documents—while
Azure Data Lake Storage eliminates data silos with a single and secured
storage platform.
The Azure Machine Learning service gives developers and data scientists
a range of productive experiences to build, train, and deploy machine
learning models faster.
Synergy
Getting maximum value from your data requires a modern data estate
based on a data strategy that includes infrastructure, processes, and people.
Your data can flow inside a cloud solution or via synergy with other
components and platforms to provide an infrastructure and processes
for analyzing data and producing actionable outcomes.
People are a big part of the process, and the data journey will often
start and finish with them. The insights and actionable outcomes will
allow them to make better decisions—for themselves and the business.
Conclusion
In this chapter, we discussed how organizations are becoming more
competitive, expanding their global reach to attract customers, and
using business intelligence solutions to make the most of their data.
Checklist
Case study
Premier yacht brokerage
cruises into smarter
marketing and boosts
sales by 70 percent with
Dynamics 365
To differentiate itself from the competition, a large company that supplies
luxury vessels decided to invest in reporting and intelligence. In an
industry where relationship-building efforts must be incredibly precise
and personal, any insight into a customer’s mindset is invaluable.
Dynamics 365 apps made it possible to separate the features each
department would use while accessing the same data. The marketing
and sales teams started customizing their content and campaigns to
nurture leads toward a sale, and used Power BI reports to generate
insights that helped them identify the best prospects and create
winning proposals.
14
Testing strategy
Overview
During the implementation of a solution, one of the
fundamental objectives is to verify that the solution
meets the business needs and ensures that the
customer can operate their business successfully.
The solution needs to be tested by performing multiple business execu-
tion rehearsals before the operation begins and the system is taken live.
Focus on quality
and keeping scope
Testing is a continuous task in application lifecycle management. It is critical not only during the implementation of the solution, but also during the Operation phase, as the solution keeps evolving with fixes and extensions. In the beginning, testing is completed manually, but over time automation can make the process far more efficient. Our objective is to ensure the quality of the solution always meets customer expectations.
Defining the
testing strategy
After observing thousands of Dynamics 365 deployments, we can see that most customers do a reasonable level of testing before go live. But the difference between each of those implementations is how thorough the implementation team is at defining their strategy, and the quality and depth of the testing.

[Figure 14-1: Test strategy, measured by the deepness of the test]

Depth also covers aspects beyond the steps of the process, for example testing the quality of data, testing latency in the connectivity, and so on.
Writing a good test case requires a series of tools and tasks that help you validate the outcome. Dynamics 365 business applications provide some tools to build test cases faster; for example, the task recorder in the Operations apps lets you dynamically capture the steps needed to execute a task in the app.
Test cases should reflect the actual business execution in the system
and represent how the end user ultimately operates the system, as
illustrated in Figure 14-3.
Fig. 14-3: Components of the test case: What (linked business process requirements) + Get ready (prerequisites) + How (reproduction steps) + Quality check (expected outcome).
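The four components map naturally onto an executable check. The following is a hypothetical Python sketch (real Dynamics 365 test cases would typically use the platform's own tools, such as the task recorder); all names and the credit rule are invented for illustration.

```python
# Sketch (not a real Dynamics 365 API) of how the four test case components
# map onto an executable check: What (linked process), Get ready
# (prerequisites), How (reproduction steps), and Quality check (outcome).

class CreditLimitExceeded(Exception):
    pass

def post_sales_order(customer, order_total):
    """Hypothetical posting logic: orders above the credit limit must fail."""
    if order_total > customer["credit_limit"]:
        raise CreditLimitExceeded("order exceeds customer credit limit")
    return "posted"

def test_order_blocked_when_over_credit_limit():
    # What: linked business process -> prospect to cash, credit control.
    # Get ready: prerequisites -> a customer with a known credit limit.
    customer = {"id": "CU050", "credit_limit": 500.0}
    # How: reproduction steps -> create and post an order above the limit.
    try:
        post_sales_order(customer, order_total=750.0)
        outcome = "posted"
    except CreditLimitExceeded:
        outcome = "blocked"
    # Quality check: expected outcome -> the order cannot be posted.
    assert outcome == "blocked"

test_order_blocked_when_over_credit_limit()
print("test passed")
```

Writing the expected outcome as an assertion keeps the quality check unambiguous, which is exactly what makes a test case automatable later.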
Planning
We have defined what is needed to determine the scope of the testing
using the processes as a foundation. With this valuable information, we
can determine how to execute the testing. This is where the planning
becomes important.
Planning for testing is a fundamental part of the testing strategy. The next section describes the minimal components needed to define a recommended testing plan. This strategy can be used to implement any of our business applications.

Fig. 14-4: Test case example
Description: Dealership requests parts for recreational vehicles out of warranty
Test case ID: PC-20
Test steps: 1. Create sales order header. 2. Check credit line in order. 3. Create order line. 4. Validate and post sales order.
Test data: Customer: CU050 - Great Adventures Dealership; Part: P001 - All weather speakers; Qty: 4 pieces
Expected results: Sales order cannot be posted
Actual results: Sales order is posted
Pass/Fail: Fail
Tester notes: Credit check is not stopping the order from being posted. Customer set up with wrong credit limit. Data quality issue.
This helps to plan the quality control portions of the solution and must
happen at the start of the Initiate phase of the project.
The plan for testing must be documented and signed off on by the business team prior to its execution. This is important because it leads into other types of planning, such as environment planning, which determines where the testing is done.
[Figure 14-5: Components of the test plan]

The test plan brings transparency and helps keep your goals and objectives focused on securing a quality solution. It contains important information that guides the team on how to conduct the testing.
Do not wait too long to start testing, or until the end of the implementation when you think you have built the entire solution. Testing late adds serious risk to your implementation: issues and gaps surface when they are hard to fix in a short period of time. This becomes a constraint, especially when you have reduced time available for time-sensitive project tasks like preparing to go live. Always plan to avoid testing close to your go-live date, since it leaves no time to resolve issues. Poor planning or no testing can ruin a project.
Documentation
Documenting testing has two angles: the test cases themselves and tracking the outcome of the tests.
Planning to document the results of your testing cycles helps to highlight the wins, but also the bugs and patterns. This report determines the next course of action and can impact future milestones. It allows stakeholders to make decisions to correct the path if necessary. Use Azure DevOps to build dashboards that can help report the progress and quality of the testing.

Having a model to document the outcome of a test case and test cycle allows the implementation team to validate the performance of the test cycle. Preparing to document and report the outcome highlights other challenges as well, though not necessarily ones connected to the test cases themselves. These issues can be solution performance, connectivity issues, usability, continuity of the transaction, gaps in the product, and so on.

Other important benefits of documenting the outcome are that it tracks the progress of the testing and keeps the big picture in sight. It allows us to detect and build the backlog for new non-high-priority opportunities that are discovered during the testing. It can also trigger a change of the statement of work when the newly discovered requirements are critical. This is highlighted as a project risk.
The business team's involvement during testing is critical; they own the solution and must plan enough time to allow proper testing by the key business users. One common pattern of failure is poor involvement from this group of testers.
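A minimal model of tracking a test cycle's outcome might look like the following sketch; the data and statuses are hypothetical, and in practice a tool such as Azure DevOps provides this kind of tracking and dashboarding.

```python
from collections import Counter

# Hypothetical results of one test cycle; real teams would track these as
# work items in a tool such as Azure DevOps.
results = [
    {"case": "PC-20", "status": "fail", "note": "credit check not enforced"},
    {"case": "PC-21", "status": "pass"},
    {"case": "PC-22", "status": "pass"},
    {"case": "PC-23", "status": "blocked", "note": "environment down"},
]

def cycle_summary(results):
    """Summarize a cycle: pass rate over executed tests, plus open items."""
    counts = Counter(r["status"] for r in results)
    executed = counts["pass"] + counts["fail"]
    return {
        "pass_rate": counts["pass"] / executed if executed else None,
        "open_issues": [r["case"] for r in results if r["status"] != "pass"],
    }

print(cycle_summary(results))
```

A summary like this is what lets stakeholders decide whether the next milestone is at risk, rather than judging from individual bug reports.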
Test types
In the previous section, we covered the different roles and environment
types that may be needed for testing, but we mentioned the need to
consider the test type. Let’s now combine the concepts in this section so
you can see how to use different test types to implement Dynamics 365
under the Success by Design framework.
Figure 14-6 maps the test types across the different Success by Design implementation phases. Note that some test types are more relevant to specific phases than others.

[Figure 14-6: Testing throughout the solution lifecycle: unit test, functional test, end-to-end test, performance test, security test, regression test, interface test, and mock go-live, mapped against the phases leading up to go live]
Functional test

Functional tests can be either manual or automated. They are done by the functional consultants, customer SMEs, or testers. The purpose of functional tests is to verify the configurations of the solution or any custom code being released by the developers. The primary objective is to validate the design as per the requirements. This is done in a test or developer environment during the Implement phase of the Success by Design framework. At this point, testing automation can also be introduced.

See it in action

Following the previous example regarding the sales order validation, the functional testing of the interface with the transportation system focuses on the configuration and setup, like customer data, products, pricing, warehouse setup, and the dependencies with payment processing and other module-related configuration and parameters.
This test type is the first test to be done by consultants and customer SMEs. It is important that the first line of testing is done by the consultant, prior to customer testing, at least to verify the stability of the feature.

At this point, the consultants need to map the test to the requirements and processes, and the test case is detailed. The link with the process is agreed upon with the business so the test case is not focused only on the gap, but also on how it fits into the whole picture of the solution.
Process tests

The solution continues to be built, with unit testing being done by developers and functional testing by consultants and customer SMEs. The work from the development team is validated, and bugs and corrections are completed while the test is mapped to the process.

This is the point at which running connected test cases is ideal: we raise the bar in our testing approach by connecting a collection of test cases that all belong to our process.
See it in action

At this point we know how our sales order function works, but now we want to test how the sales order works in connection with the test cases preceding and following it in the process. Let us call this process prospect to cash. We want to test the collection of money after the shipment of goods happens. We involve other teams in the implementation team that handle that part of the process, and we validate that we can generate the invoice and collect money while closing the cycle. From the business point of view, the customer can confirm they can sell, ship, invoice, and collect the money, so the prospect to cash process works.

Tracking the outcome becomes more critical at this point, so bug fixes and other dependent improvements must be tracked. During this time, tracking tools like Azure DevOps become crucial as part of your application lifecycle management.

For this test type, role security becomes crucial. We want to make sure we have the right segregation of duties and that the different personas can operate as they should when they are running the solution in production.

This testing should look for unexpected outcomes of the process, commonly known as negative testing, and not just the happy path.
End-to-end tests
After validating all the individual processes, it is time to connect all of
them and increase their complexity with new process variations. This is
the first test cycle that looks like a complete operation.
The test is done by functional consultants who are preparing the cycle and
guiding the team, but the overall cycle is mainly executed by customer
SMEs and testers.
The main objective of this test type is to validate all the full business processes in scope, and it needs to be done in an integrated test environment, since it now connects to the other systems that interact with the solution. It is iterative and the prerequisite to being able to execute your user acceptance test (UAT).

See it in action

In previous test cycles, we were able to collect the cash from our sale. Now we want to connect other processes that depend on this one; for example, the accounting process to report taxes, updating and interacting with other systems like the transportation system, optimizing inventory, and introducing new products. This allows us to combine different Dynamics 365 apps to work together.

This test type starts in the Implement phase and goes through the Prepare phase as per the Success by Design framework.

It is important to execute as many end-to-end test cycles as possible; doing only one, at the end of the build of the solution, is not recommended, since it adds risk to confirming readiness by leaving less time to react to final fixes.

Plan for this test by making sure you include real customer data and migrated data. Run this test not just with real data but also with migrated data coming from the legacy solutions, as soon as possible.
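The idea of a connected end-to-end cycle can be sketched as a chain of steps where each consumes the previous step's output; all functions here are hypothetical stand-ins for real process steps.

```python
# A sketch of an end-to-end test: each step consumes the previous step's
# output, so a break anywhere in the chain fails the whole cycle.
# All functions are hypothetical stand-ins for real process steps.

def create_order(customer, qty):
    return {"customer": customer, "qty": qty, "status": "open"}

def ship(order):
    assert order["status"] == "open"
    return {**order, "status": "shipped"}

def invoice(order):
    assert order["status"] == "shipped"
    return {**order, "status": "invoiced", "amount": order["qty"] * 25.0}

def collect_payment(order):
    assert order["status"] == "invoiced"
    return {**order, "status": "paid"}

def test_prospect_to_cash():
    order = create_order("CU050", qty=4)
    order = collect_payment(invoice(ship(order)))
    assert order["status"] == "paid" and order["amount"] == 100.0

test_prospect_to_cash()
print("prospect to cash cycle passed")
```

Because each step asserts the state left by the previous one, the chain validates the hand-offs between processes, which is precisely what unit and functional tests cannot cover.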
Performance tests
Successful testing is not complete until we not only make sure we
can run the business on top of the solution, but also that we can do it
at scale. We are implementing Dynamics both for today and for the
future, and we need a solution that lasts and performs.
We put special emphasis on this test type since there are many misconceptions about whether it needs to be executed. Our experience has shown that performance is one of the most common reasons for escalation, since teams often skip this test when it is needed.
The objective of this test is to ensure the solution performs while focusing
on critical processes that require scaling with load and growth. Not all the
processes are involved in this test.
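The shape of such a test can be sketched with a simple concurrent load driver; here the operation is a local stand-in, whereas a real performance test would target the actual solution with representative volumes.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def post_invoice(i):
    """Stand-in for a critical process step; a real test calls the system."""
    start = time.perf_counter()
    sum(range(1000))  # simulated work
    return time.perf_counter() - start

def load_test(workers, requests):
    """Drive the operation concurrently and report a latency percentile."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(post_invoice, range(requests)))
    return {
        "requests": requests,
        "p95_seconds": latencies[int(0.95 * (len(latencies) - 1))],
    }

report = load_test(workers=8, requests=100)
print(report)
```

The useful outputs of a load test are percentiles under an agreed target load, which is why the performance test objectives must be agreed upon before the test is built.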
The basic version of performance testing starts during unit testing so developers can proactively improve performance. Regular dedicated test environments can also be used, depending on the load to be tested.

See it in action

In the previous example, we involved invoicing and the integration with the transportation system. Now we want to test whether we can process the operational peaks, considering the seasonality of the business. The invoice posting operation is crucial, since we need almost real-time interaction with the transportation system as per the business needs, so we need to test a day in the life with relative volumes across key business processes, not just prospect to cash.

Performance testing happens as soon as the critical processes are available to be tested throughout the implementation.

Performance testing needs to be visible at the start of the implementation and be part of the test plan. It requires agreement on the performance test objectives, and it can require proper environment planning to provide the necessary resources.

Remember, you are not just testing the standard product in this test type; you are testing the solution that you are implementing in combination with your business operation needs. Some very basic implementations can justify not executing a performance test, but before deciding, you can learn more about what you need for a performing solution in Chapter 17, "A performing solution, beyond infrastructure."

User acceptance testing (UAT) can be a great opportunity to start building automated testing by recording the tests so you can have repeatability in testing. This sets the path for regression testing, optimizing the investment to build that automation.
User acceptance test

See it in action

The business team now brings in a select group of people who run the operation. This group consists of order processors, the accounting team, the accounts receivable team, shippers, and others. The selected group runs a real-life operation simulation. All the processes are executed in parallel and in sync between teams. For the prospect to cash process, they involve a selection of different end users connected to this process to run the test. The team tests all the variations of the processes in scope.

This test type needs to be executed by users from the business; the implementation team is just a facilitator. The business team is the main owner of this test, and failing to fully participate or test thoroughly is a risk.

The business users must be trained prior to UAT, not just on the solution but also on how the processes will work once the solution is deployed to production; otherwise this group causes false error reports because of the lack of training.

At the end of a successful test, the end users connect the new solution to the reality of their daily tasks and confirm the readiness of the solution, but also their own readiness from being familiar with the new system after being trained. The new solution fulfills the business operation need.
Regression testing
A regression test happens when you have a change in the code, or any configuration change or new solution pattern that can impact different processes. This test type is done by testers, developers, and end users.
It is important to perform this test type before the change is introduced to the production environment.
Because this necessary quality gate is needed, there are tools available to help you automate it. You can find links to more information about these tools at the end of the chapter in the “References” section.
See it in action
Microsoft has released new functionality that enriches the order processing feature in Dynamics 365 Finance and Operations. When the new update becomes available, teams can execute regression testing on the key business processes connected to this new feature. Testing the order and warehouse processes following a new configuration change is done because the change can impact the picking process. Once the update is applied in the test environment, the team runs an automated test to confirm the solution produces the expected outcomes.
There are different techniques you can follow to run your regression test.
You can opt to test almost 100 percent of the processes, which provides comprehensive coverage but is expensive to run and maintain, especially if there is no automation.
You can prioritize the tests based on business impact. This technique ensures you have coverage of the processes that are mission critical for the operation, and it is a more affordable approach; the only risk is that you cannot guarantee a solution 100 percent free of regression.
You can also combine all the previous techniques, making sure you
test critical business processes, and you can target more testing on
the features being changed.
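The business-impact technique can be sketched as a simple scoring exercise: each test case carries an impact score, a flag for whether the change touches it, and an estimated duration, and the suite is trimmed to an execution budget. This is a hypothetical illustration of the prioritization idea, not a Dynamics 365 API.

```python
def prioritize(test_cases, budget):
    """Order regression test cases by business impact and by whether
    the current change touches them, then keep as many as fit the
    execution budget (in minutes)."""
    ranked = sorted(
        test_cases,
        key=lambda t: (t["impact"], t["touched_by_change"]),
        reverse=True,
    )
    selected, used = [], 0
    for case in ranked:
        if used + case["minutes"] <= budget:
            selected.append(case["name"])
            used += case["minutes"]
    return selected

# Hypothetical suite: impact is a 1-5 business-criticality score.
suite = [
    {"name": "order-to-invoice", "impact": 5, "touched_by_change": True, "minutes": 30},
    {"name": "picking", "impact": 4, "touched_by_change": True, "minutes": 20},
    {"name": "prospect-to-quote", "impact": 2, "touched_by_change": False, "minutes": 25},
]
print(prioritize(suite, budget=60))  # → ['order-to-invoice', 'picking']
```

The combined technique from the text maps directly onto this: mission-critical processes get a high impact score, and processes touched by the change are ranked ahead of untouched ones.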
Every time you test successfully, you put money in a trust fund. Every time you have a failed test, or fail to test properly, you deduct money from that trust, leaving a debt in the completeness of the solution. In the end, you need to reach the goal with the trust fund full.
Again, automation is key, and you should plan for testing automation. Your solution is alive and ever evolving, so change is a constant, coming not just from the application but also from the customer's business needs.
Finally, keep in mind that finding bugs during regression testing could be due to a change in the solution design, a standard product update, or simply the resolution of previous bugs redefining how the solution works, which can require recreating the test case.
Mock cutover
This is a special test since it is executed in a special environment, the
production environment. This type of test is especially important when
you test Dynamics 365 for Finance and Operations apps.
This test brings important value because it helps validate aspects like connectivity, the stability of access for users and data, integration endpoint configuration, device connectivity, security, and the network; many of the test cases in your processes may be environment dependent.
While some projects are too small to justify separate planning to execute
some of these types of testing, the concept and meaning of the tests
should be folded into other types of testing.
Always test under the umbrella of the processes; in the end, the processes are the language of the business, and testing is proof that the solution “can speak” that language.
Executing testing
In a previous section, we focused on the importance of defining a testing
strategy. Now we are going to explore the minimum components you
need to keep in mind to execute the testing.
Now we focus on the tactical components for prepping the execution,
what to keep in mind to communicate the scope, and how to track the
progress and outcome.
During the communication effort you share the test plan for the test cycle.
You need to communicate the following with the team during testing.
▪ Scope Before you start testing, you describe the scope and purpose of the testing cycle, and what processes, requirements, and test cases are included in that test cycle. Every new testing cycle requires detail on how the iteration has been designed in terms of incremental testing and what is expected to be tested. The scope of the test cycle is aligned to the solution version being used.
▪ Schedule The time expected to be used to run the test cycle.
▪ Roles Who is testing and how are the test cases distributed?
How do they report test case execution so dependent teams can
continue the testing? Who is the orchestrator of the test? Who
resolves questions to test cases per area?
▪ Resolution process One of the objectives of testing is to identify any defects in the solution. The communication plan needs to specify how those bugs are reported, and also how to document the feedback from the tester.
▪ Progress How is the progress recorded and communicated so
everybody can see the current state of the testing cycle?
▪ Resources The communication needs to specify where the testing happens and how to access the apps. It determines any additional equipment needed for testing, for example printers, barcode scanners, network requirements, etc.
▪ Test sequence Especially for the process test, end-to-end test, and user acceptance test types, you need to define and align how the different teams interact.
▪ Test objectives Here you explain the purpose of the testing cycle.
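One lightweight way to enforce this checklist is to validate each test-cycle communication against the required components before the cycle starts. The sketch below is purely illustrative; the section names simply mirror the list above.

```python
# Components every test-cycle communication should cover, per the checklist.
REQUIRED_SECTIONS = {
    "scope", "schedule", "roles", "resolution_process",
    "progress", "resources", "test_sequence", "objectives",
}

def missing_sections(test_plan):
    """Return, in alphabetical order, the communication components a
    test-cycle plan still lacks, so nothing on the checklist is skipped."""
    provided = {name for name, value in test_plan.items() if value}
    return sorted(REQUIRED_SECTIONS - provided)

# Hypothetical draft plan for one test cycle.
plan = {
    "scope": "Prospect to cash, solution v1.2",
    "schedule": "Week 34, five working days",
    "roles": "Order processors and AR team, orchestrated by the test lead",
    "objectives": "Validate incremental changes since cycle 2",
}
print(missing_sections(plan))
# → ['progress', 'resolution_process', 'resources', 'test_sequence']
```

A gate like this is trivial to run at the start of every cycle and keeps the communication effort consistent as the team and scope grow.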
A meaningful process test type makes sure the process being tested
has continuity, from the order to the invoice and shipment and later to
the collection. If you are testing a sequence of processes, you do not
go ahead of the game and straight to the collections part. If you do,
you lose the opportunity to properly test the larger process sequence.
Solution acceptance
and operation
Finally, the last test cycle takes the main objective of why you started
implementing the solution to confirm the readiness to run the business
using the new solution.
This confirms the readiness of the solution but also the readiness of the
production environment if you run a mock cutover test. It is important
that for any final test the business team signs off and confirms they can
operate the business with the new solution.
Accepting the solution does not mean it is 100 percent free of bugs. You need to assess the value of bringing a fix; at this point, if the issue is very low risk for the operation, it is often better to go live with known nonblocking bugs and fix them later than to introduce risk by making an unnecessarily rushed fix and not having time to re-test properly.
Sign-off by the business team on the overall pass of the final test cycle is a required step prior to executing cutover preparation for go live. This creates accountability for the acceptance of the solution by the business. Any non-blocker bugs need to be documented, and they should be low risk enough for the operation to justify going live without fixing them.
From here you move to the maintenance mode of the solution if you are implementing only one phase, or you continue adding new workloads or expanding the solution. Keep the discipline for testing and scale using the tools for automation. Testing is a continuous practice; the difference is the frequency, scope, and tools used to test.
Testing during implementation is the step that builds the trust for
the business to run their operation. Always test and do so as early as
possible. It is never too early to start testing.
References
Finance and Operations
▪ Test Strategy TechTalk
▪ Performance troubleshooting tools
Customer Engagement
▪ FastTrack automated testing in a day workshop offering
▪ Automated and manual testing with Azure Test Plans in Azure DevOps
Performance testing approach
The team included the most common test types except for performance testing, under the assumption that the first rollout would not require it since the extension of the system was low and the customer would be implementing just the manufacturing business unit. The next rollouts would include the servicing business, including the warranty management of recreational vehicles plus other operations. The servicing operations would continue to be executed in a third-party solution for now, and the dealership network would use this system to order parts and honor warranties.
Very soon, during the first testing cycle, the team started to test across
the first wave of the solution where there were no integrations, so
completing testing was satisfactory in terms of the cycle performance.
As the team got ready for their second wave of testing, they started
to introduce some of the integrations as part of the scope for the test
cycle, but those integrations were emulated with dummy data and the
team introduced some migrated data. Integrations worked for their
purpose and the volume of migrated data was small.
When the team combined testing with a larger volume of users and with integrations running at a volume of operations similar to what they expected in production, they hit the first major blocker. The solution was not keeping up with the needs of the test cycle, and they were not even in production. The team's first reaction was to blame underperforming infrastructure, which raised the concerns of the business stakeholders upon learning the test cycle outcome.
The team was ready to prepare for UAT and decided to continue, expecting that this would not be an issue in production due to its higher specs. They assumed the production environment would be able to solve this performance challenge, so they decided to continue, complete UAT, and move to production. The customer signed off, and preparation moved to the next stage to get ready for go live.
The big day came, production went live, and all the departments switched to the new system. The first day was normal and everything was working great. The team decided to turn on integrations for the servicing solution on the second day. When the second day came, they were ready to interconnect with the service department, and integrations started to flow into Dynamics 365. This is when they had their first business stopper: they had a sudden decrease in performance, users were not able to transact in Dynamics 365, service departments from the dealerships were not able to connect effectively, shipments started to slow down, and the shop floor was having a hard time trying to keep inventory moving to production.
For the next phases, this customer included performance testing. They dedicated an environment to stress test the solution against their own business needs, executed these tests earlier with additional scenarios, and ran a UAT that included parallel processing of people testing and integrations running. They had a second go live to include their accessories manufacturing business, and it was a breeze in comparison.
Introduction
Business solutions offer a rich set of capabilities to
help drive business value.
Still, in some situations, you need to extend the solution and adjust off-
the-shelf functionality to accommodate organization or industry specific
business processes. These adjustments can change how a feature works or
bring additional capabilities to meet specific requirements.
While business solutions natively offer rich capabilities, they also offer powerful options to customize and extend them. Extending the solution can open even more opportunities to drive business value. It is important to note, however, that extending should not compromise the fundamental advantages of an evergreen cloud solution, such as usability, accessibility, performance, security, and continuous updates. These are key to success and adoption.
Complex business requirements lead to highly advanced solutions with customizations and extensions to applications. Without the right extension strategy, these can lead to performance, stability, maintainability, supportability, and other issues.
What is extending?
When organizations implement a solution, there typically is some degree
of customization and/or extensibility, which we refer to as extending.
Extending can vary from minor changes to the user interface of a particular
feature to more complex scenarios like adding heavy calculations after
certain events. The depth of these extensions has important implications
on how much the off-the-shelf product needs to change to meet specific
business requirements.
Legacy solutions may have taken years to develop and evolve and may
not use the latest and greatest functionality available on the market. As an
example, Dynamics 365 natively uses artificial intelligence and machine
learning to provide insights that help users make the best and most
informed decisions.
Leveraging ISV solutions
Leveraging independent software vendor (ISV) solutions from the app
marketplace instead of extending the solution to achieve the same
results may save development cost and time, as well as testing and
maintenance resources. ISVs typically support and maintain the solution
at scale for multiple organizations. Their experience can be an advantage
for organizations that require additional functionalities that are already
provided by ISVs.
Extensibility scenarios
Extending off-the-shelf solutions occurs when functionality is changed to
fulfill an organization’s requirements.
App configurations
Configurations are the out-of-the-box controls that allow makers and
admins to tailor the app to the needs of a user. These setting changes are
low effort, requiring no support from professional developers, and they are
a powerful way to make the application your own.
App settings are the safest and the least impactful way of tailoring the
solution to your needs and should be the preferred approach before
exploring another extensibility technique.
Low-code/no-code customizations
A differentiator for Dynamics 365 and the latest generation SaaS products
is the powerful customization capabilities made available through “what
you see is what you get” (WYSIWYG) designers and descriptive expression
based languages. This paradigm helps significantly reduce the
implementation effort and enables businesses to get more involved
with design and configuration.
organization requirements that are specific to an industry or unique
business processes, including specific functionality focused on specific
roles or personas. This allows personalization that streamlines the user
experience so a user can focus on what is most important.
This no-cliffs extension provides the best of both worlds. The SaaS
application provides the off-the-shelf functionalities as well as the
options and methods to extend them. The PaaS extensions further
enrich the solution architecture by providing rich and powerful
mechanisms that scale and allow heavy processing of operations outside
of the business solution.
[Figure: Connected technology with data ingestion, alerts, and automation through Power Automate]
One example of this approach is when organizations leverage Azure Logic Apps. Logic Apps provide a serverless engine to build automated workflows to integrate apps and data between cloud services and on-premises systems. Logic Apps provide the ability to trigger workflows based on events or timers, and leverage connectors to integrate applications and facilitate business-to-business (B2B) communication. Logic Apps integrate seamlessly with Azure Functions, as illustrated in Figure 15-3.
[Figure 15-3: Dynamics 365 Field Service connected through Logic Apps and Azure Functions to systems such as SharePoint, SQL, IBM DB2, and BizTalk Server]
A multitude of examples demonstrate where PaaS can be leveraged to further extend the capabilities of the solution.
Considerations
Every piece of an extension should focus on bringing efficiency or value to the organization. Inherent costs exist when implementing changes and can vary across building, testing, maintaining, and supporting the extension. These should be taken into consideration when planning to extend a solution.
In this section, we delve into the key considerations and impacts that
extensions can have on a solution.
While the main purpose of extending may not be to improve the user
experience, it should not negatively impact the user experience and
how the solution behaves across different devices and platforms. In
addition, extending should not negatively impact the responsiveness
and performance of the solution.
The same happens with compliance, for example the General Data Protection Regulation (GDPR) policies. Compliance functionality implemented natively also needs to be inherited by any customizations or extensions. Failing to do so may have consequences for organizations that do not comply with regulations.
GDPR is just one set of regulations. Regulation of privacy and data use
exists in many different forms across several markets. While there is a
great deal of overlap in terminology, privacy and security are not identical.
Security is about preventing unauthorized access to any data, while privacy is ensuring, by design, the proper acquisition, use, storage, and deletion of data defined as private under local, regional, and global regulations.
Performance
Although cloud solutions provide a high degree of scalability and
performance, when extending a solution it is important not to
compromise performance.
When extending the user interface and/or the business logic, additional
efforts are added to create, retrieve, update, or even delete data. Those
additional efforts may have an impact on the user experience, depending
on the amount of extended functionality added.
Scalability
The scalability of a business solution is also a key consideration to determine
how you extend it. While the cloud platform includes scalable servers and
micro services, other aspects of the platform need to be considered to
determine the impact of your business solution architecture.
Service protection and limits ensure consistent solution availability and
performance, as well as a level of protection from random and unexpected
surges in request volumes that threaten the availability and performance
characteristics of the platform or solution.
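When an integration does hit a service protection limit, the usual remedy is to retry with exponential backoff rather than resend immediately, which would only prolong the throttling window. The sketch below is a generic illustration of that pattern; ServiceProtectionError is a hypothetical placeholder for whatever throttling error your client surfaces.

```python
import random
import time

class ServiceProtectionError(Exception):
    """Hypothetical error raised when the platform rejects a request
    because a service protection limit was exceeded."""

def call_with_backoff(request, max_retries=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff plus a small
    random jitter, re-raising once the retry budget is exhausted."""
    for attempt in range(max_retries):
        try:
            return request()
        except ServiceProtectionError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Example: a flaky endpoint that succeeds on the third attempt.
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ServiceProtectionError()
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # → ok
```

If the platform communicates a recommended wait time with its throttling response, honoring that value is preferable to a computed delay.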
All extended functionality is added on top of application lifecycle management (ALM) practices, which also increases complexity. As an example, if the extended features require different configurations, or are only applicable to specific business units, countries, or regions, there needs to be a separation in the ALM process, which may simply mean that they are shipped in different packages (or solutions).
Supportability
Extending a business solution can also complicate the support requirements
of the solution. Normally, the first line of support is at the organization itself
or a vendor. Support resources must have expertise on the business solution
and extensions built on top of off-the-shelf capabilities. This requires a
specialization of resources to support the extended business solution.
Product-specific guidance
In the following sections, we look at Dynamics 365 Customer Engagement and Finance and Supply Chain Management individually.
The extension model itself, the toolset, ALM in Azure DevOps, and the SDK are well described in Microsoft Docs, Microsoft Learn, and multiple community sources. There are two great entry points for learning about extensions and development in Finance and Supply Chain Management apps in the links below, plus a list of additional relevant links at the end of the chapter.
Visit the Extensibility home page for reference information, or follow the learning path at Introduction to developing with Finance and Operations apps - Learn | Microsoft Docs.
Introduction
In this section, we first look at what we can do to enhance functionality
and UI without extending the application. Then we give a very brief
overview of the extension model, and finally highlight some of the tools
and practices that are available for professional developers when they
extend the application.
Example 1
▪ Requirement: Create a custom workspace with tiles and lists from multiple forms and queries across the manufacturing module.
▪ Tools and components: Use the personalization feature and “add to workspace” functionality in the UI to put together the desired workspace and components. No change to code or application components is needed.
▪ Skills: Requires user-level experience with navigation of the relevant module and with personalization.
Example 2
▪ Requirement: Add a table to hold a list of food allergens. Add the allergen field to the sales line record for the user to indicate that a product contains the specific allergen.
▪ Tools and components: Use Visual Studio in a developer environment to add the extended data types, the tables and fields, and the extensions to the sales line table and the Sales Order form. This change requires new and changed application components and must follow software development lifecycle (SDLC), build, and deployment guidelines.
▪ Skills: Requires entry-level professional developer experience in Dynamics 365 Finance and Supply Chain Management and familiarity with Visual Studio, build procedures, and best practices.
Example 3
▪ Requirement: Add code to automatically change the status of the new production order to “started” when the user firms a planned production order.
▪ Tools and components: Use Visual Studio in a developer environment to extend the X++ business logic; add appropriate event handlers, classes, methods, and X++ code to catch the event that the planned order is firmed; and execute the correct code pattern to bring the new production order to started status. This change requires new and changed application components and X++ code, and must follow SDLC, build, and deployment guidelines. Scalability and performance when large numbers of planned orders are firmed should be considered using appropriate developer tools.
▪ Skills: Requires medium- to expert-level professional developer experience in Dynamics 365 Finance and Operations and familiarity with Visual Studio, build procedures, best practices, and, ideally, frameworks like the SysOperations framework, multithreading, and performance-checking tools and patterns.
As the table below (Figure 15-5) shows, many options are available as alternative approaches to extending the application. The list is not exhaustive. Additional low-code and no-code options are mentioned in Chapter 16, “Integrate with other solutions.” The decision of whether to extend comes down to user efficiency and value for your customers.
Restricted personalization
▪ What it is: The application remembers the last settings for sorting, column width, and criteria values in queries and dialogs.
▪ Description: While this is hardly considered a way of extending, it does give the user the opportunity to store selections in dialogs, the expansion of relevant FastTabs, and column widths so that more columns are visible on a given form.
▪ Notes: Personalizations can be shared by users or managed centrally by an admin from the personalization page.
Personalization of forms/UI
▪ What it is: The personalization bar in forms and workspaces.
▪ Description: Personalization allows users or admins to add or hide fields or sections, change labels, change the order of columns, and edit the tab order by skipping fields when pressing Tab.
▪ Notes: Personalizations can be shared by users or managed centrally by an admin from the personalization page.
Saved views
▪ What it is: Saved views is a combination of personalization of the UI for a form and filtering and sorting of the form data source.
▪ Description: Saved views is a powerful tool that gives the user the ability to quickly switch between tailored views of columns, filtering, and sorting on the same screen depending on the specific task at hand. For example, a buyer in a pharmaceutical company may need a simple view of the purchase order screen for nonregulated materials and another when purchasing regulated materials used for manufacturing.
▪ Notes: Saved views can be shared by users or managed centrally by an admin from the personalization page. May require the Saved views feature to be turned on.
Custom workspaces
▪ What it is: Users can use personalization and the “add to workspace” button to add tiles, views, or links to an existing or custom workspace.
▪ Description: This functionality allows users to tailor their experience to their needs. Workspaces provide glanceable information about the most important measures, actionable items, and relevant links to other pages.
▪ Notes: Custom and personalized workspaces can be shared by users or managed centrally by an admin from the personalization page.
Custom fields
▪ What it is: Users with certain security roles can add up to 20 custom fields to tables.
▪ Description: Finance and Supply Chain Management have a rich set of features that apply across a wide set of industries. Some organizations require additional fields on certain tables; for example, the item or customer master or the sales order header. This feature allows the user to create these fields. Note that these fields are specific to the environment that they are created in and cannot be referenced by developer tools.
▪ Notes: Custom fields and the personalized forms that show the fields can be shared by users or managed centrally by an admin from the personalization page. Deleting a custom field is irreversible and results in the loss of the data in the custom column.
Grid capabilities
▪ What it is: The grids on forms in the system have some extended features that may eliminate the need for an extension.
▪ Description: The grid offers the following capabilities: calculating totals for columns in the grid footer; pasting from Excel; and calculating math expressions (for example, if the user enters 3*8 in a numeric field and presses Tab, the system calculates and enters the result of 24).
▪ Notes: The admin can enable the New Grid Control feature from feature management. Note that there are certain limitations; see the reference at the bottom of the section.
Embedded canvas apps
▪ What it is: The user can add a canvas app to a form or workspace, embedded into the UI or as a menu item that can pull up the app from the menu.
▪ Description: The ability to embed a canvas Power App enables citizen developers to use low-code/no-code options for interacting with data in Dataverse directly from the UI in the Finance and Supply Chain Management apps. It is important to note that if the app must interact with Finance and Supply Chain Management data, that integration to Dataverse must be in place and the app must of course support the required actions.
▪ Notes: See more about UI integration for Finance and Supply Chain Management in Chapter 16, “Integrate with other solutions.”
Mobile workspaces
▪ What it is: Users can view, edit, and act on business data, even if they have intermittent network connectivity, in an app for iPhone and Android.
▪ Description: IT admins can build and publish mobile workspaces that have been tailored to their organization. The app uses existing code assets. IT admins can easily design mobile workspaces by using the point-and-click workspace designer that is included with the web client.
▪ Notes: Simple actions can be done from the mobile workspaces. Most advanced actions require an extension.
Excel integration
▪ What it is: Users can extract or edit data on most forms in the system by clicking the Office icon.
▪ Description: The Excel integration allows for a wide variety of scenarios for entering, presenting, and interacting with data in ways that are not possible from the UI. In addition to the export-to and open-in-Excel options, the user can create workbooks and templates for specific purposes. Excel has many features for presentation and offers data manipulation capabilities for larger datasets that users cannot work with in the system UI.
▪ Notes: With great power comes great responsibility. While it is easy to change a whole column of data and publish that data into the system, it is equally easy to make a mistake.
User interface
▪ Tool: Graphical editor in Visual Studio with preview.
▪ Do: Forms must adhere to patterns. The editor in Visual Studio has a rich modeler that helps the developer apply the right structure to the screens. This ensures performance when loading the form, adaptability across different resolutions and screen sizes, and consistency across the UI.
▪ Don't: Deviate from the predefined patterns and create monster all-in-one-screen style forms. They are often the result of the designer trying to replicate a legacy system experience.
Data model and queries
▪ Tool: Metadata editor in Visual Studio for tables, fields, extended data types, enums, queries, and views.
▪ Do: Follow best practices and frameworks. Apply indexes and define delete actions. Normalize. Use the effective date framework when applicable. Keep performance in mind. Use field lists when performance is a risk.
▪ Don't: Create redundancy or replicate poor modeling from legacy apps.
Business logic
▪ Tool: X++ editor in Visual Studio.
▪ Do: Adjust compiler settings to alert about best practices, with the goal of zero deviations. Use code patterns and frameworks. Practice good developer citizenship. Write clean, easily readable code. Run the CAR report and use the compatibility checker. Unit test the code.
▪ Don't: Ignore best practices, write long methods, or over-comment the code.
Reporting
▪ Tool: SSRS report editor in Visual Studio.
▪ Do: SSRS reports are good for certain precision designs and tabular lists. See Chapter 13, “Business intelligence, reporting, and analytics,” for more information.
▪ Don't: Reach for the SSRS report option if there is a better way.
Data entities
▪ Tool: Metadata editor in Visual Studio.
▪ Do: The out-of-the-box data entities are general-purpose entities built to support a wide variety of features surrounding a business entity. In scenarios where a high-volume, low-latency interface is required, it is recommended to build custom data entities with the targeted and specific features needed to support high-volume interfaces in the implementation.
▪ Don't: Create data source proliferation. See Chapter 16, “Integrate with other solutions,” for more information.
Development architecture
The architecture of the development environment, as shown in Figure
15-7, includes the software development kit (SDK), which consists of Visual
Studio development tools and other components. Source control through
Azure DevOps allows multi-developer scenarios, where each developer
uses a separate development environment. Deployable packages are
[Figure 15-7: Development environment architecture. Visual Studio (project system, X++ code editor, designers, application explorer, and best practice integration) works against the design-time meta-model and builds model binaries into the model store (file system), exposed through the metadata API. Packages (metadata, binaries, and data) are deployed to the runtime: the Application Object Server service hosted by Internet Information Services, alongside the business database, batch service, storage service, and cloud instance manager.]
Advanced practices
[Figure: Model-driven applications in Dynamics 365 connected to Finance and Supply Chain Management through Common Data Service, with tightly coupled, real-time, and bidirectional integration for documents, master, and reference data]
As we mentioned earlier, it is always sound advice to be a good citizen developer. In Dynamics 365 for Finance and Supply Chain Management specifically, you can ensure that you follow that principle by understanding, learning, and using the following tools and best practices, as shown in Figure 15-10.
For more on these topics, see the “Reference links” section later in this
chapter.
Additional components
It is important to note that the extended product architecture contains several additional components, along with multiple avenues and tiers for approaching a requirement for extension. The appropriate avenue depends on the nature of the requirement.
Best practice check
Best practice checks for X++ and application components are built into Visual Studio; they can be errors, warnings, or informational. Developers should strive for zero deviations: the best practices are made to ensure a performant, updatable, and user-friendly solution. You can stop developers from checking in code with best practice deviations.
Compatibility report
The compatibility checker tool can detect metadata breaking changes against a specified baseline release or update. It is available as one of the dev tools in Platform update 34 and forward. You can use it to ensure that your solutions are backward-compatible with earlier releases before you install or push updates. Not all breaking changes can be detected by the tool; see the Compatibility checker docs page for specifics.
Traces and trace parser
Users can take a trace of runtime execution directly from the UI and use the trace parser to read the trace. You can use the trace parser to consume traces and analyze performance in your deployment. The trace parser can find and diagnose various types of errors, and you can also use it to visualize the execution of X++ methods as well as the execution call tree. The trace parser tool can be found in the PerfSDK folder in your development environments.
Performance timer
Performance timer is a tool in the web client that can help you determine why your system’s performance might be slow. To open the Performance timer, open your web page with the added parameter debug=develop. You can see counters for client time, server time, and total time. Additionally, you can see a set of performance counters, a list of expensive server calls, how many SQL queries were triggered by an individual call, and which SQL query was the most expensive. Note that the tool itself has a performance impact.
LCS logs
Under Environment Monitoring in LCS there is a comprehensive collection of tools and information that you can use to analyze and diagnose your cloud environment. The logs provide, for example:
▪ Activity monitoring A visualization of the activity that has happened in the environment for a given timeline in terms of user load, interaction, and activity.
▪ SQL insights Logs that include advanced SQL troubleshooting.
The tools under Environment Monitoring are very effective at diagnosing potential or growing performance issues. Keeping an eye on these metrics can help pinpoint problems with extensions.
Customization Analysis Report (CAR report)
The CAR report is an advanced best practice check tool. It can be run by command line in a development environment; the output is an Excel workbook with recommendations, issues, and warnings. A clean CAR report is a requirement for the go-live readiness review prior to enabling production.
Understand and avoid breaking changes
A breaking change is a change that can break the code that consumers of your code and components make. Breaking changes are, for example, changes to the data model and extended data types, changes to access modifiers on classes and methods, and many others. This is especially important in heavily extended solutions, if you are creating a basis for a global rollout, if you are building an ISV solution, or if you have multiple developers sharing common custom APIs or constructs, but it should always be considered. Although the application is massive, we tend to only extend the same relatively small subset of elements, so it is not as unlikely as you may think that other developers use your components or code.
Log extensibility requests early
If you find a need for an extension point that is currently not available, log the request early via an extensibility request in LCS. Extensibility requests are logged to a backlog; Microsoft engineers prioritize all requests and then work on them in priority order, following the same cadence as the platform updates. Please note that Microsoft does not guarantee that all requests will be fulfilled. Requests that are intrusive by nature will not be supported, as they would prevent seamless upgrade.
Proper unit testing
Sometimes developers put a lot of effort into building an extension, but little effort into unit testing it before determining whether it is ready to deliver. Developers are the first line of defense against bugs, performance issues, and semantic issues that may exist in the specification, simply by going through the intended functionality from perspectives such as:
▪ Will this code scale with high volumes?
▪ Does it do what I expect?
▪ Can I do things I am not supposed to?
▪ Could the user accidentally do something unwanted?
▪ Does the requirement make sense, or does it force me to break best practices or patterns?
It is a lot easier and more cost-effective for the developer to find and fix a problem before it is checked in, built, and deployed.
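These perspectives translate naturally into automated unit tests. As a generic illustration only (plain Python with a hypothetical credit-limit extension; this is not X++ and not a Dynamics 365 API), a developer might capture them like this:

```python
import unittest

def apply_credit_limit(balance: float, order_total: float, credit_limit: float) -> bool:
    """Hypothetical extension logic: approve an order only if it keeps the
    customer within the credit limit."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return (balance + order_total) <= credit_limit

class CreditLimitTests(unittest.TestCase):
    def test_does_what_i_expect(self):
        self.assertTrue(apply_credit_limit(100, 50, 200))
        self.assertFalse(apply_credit_limit(100, 150, 200))

    def test_cannot_do_things_i_am_not_supposed_to(self):
        with self.assertRaises(ValueError):
            apply_credit_limit(100, -10, 200)

    def test_scales_with_high_volumes(self):
        # Cheap smoke test: the check must stay trivial per call.
        for _ in range(100_000):
            apply_credit_limit(100, 50, 200)

# Run the suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CreditLimitTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is not the specific assertions but the habit: each of the questions above becomes a cheap, repeatable check that runs before the code is ever checked in.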
Reference links
▪ Extensibility home page - Finance and Supply Chain Management
▪ Application stack and server architecture - Finance and Supply Chain Management
▪ Microsoft Power Platform integration with Finance and Supply Chain Management
▪ Take traces by using Trace parser - Finance and Supply Chain Management
Customer Engagement
The Power Platform, shown in Figure 15-11, provides the ability to use configurations and low-code/no-code customizations, and still allows developers to programmatically extend first-party customer engagement apps such as Dynamics 365 Sales. Custom business applications can also be created. You can even mix the two approaches to provide applications adjusted to specific organizational needs.
Fig. 15-11 Microsoft Power Platform: the low-code platform that spans Office 365, Azure, Dynamics 365, and standalone applications.
Power Apps makers can also create external facing Portals that allow users
outside their organization to create and view data in Microsoft Dataverse.
The Portals experience reuses components in other apps to determine the
experience. It is also possible to customize the Portals experience, similar
to how other apps are customizable.
Developers can also write code for form events or plug-ins that apply business logic to data transactions.
Data
The entity designer and option set designer determine what data the app
is based on and allow changes to the data model by adding additional
tables and fields as well as relationships and components that use
predetermined options for users to select.
Logic
The business process flow designer, workflow designer, process designer,
and business rule designer, determine the business processes, rules, and
automation of the app.
You can create a range of apps with Power Apps, using either canvas or model-driven apps, to solve business problems and infuse digital transformation into manual and outdated processes.
Solution analyzers
Solution checker The solution checker can perform a rich static
analysis of the solutions against a set of best practice rules to quickly
identify patterns that may cause problems. After the check is completed,
a detailed report is generated that lists the issues identified, the components and code affected, and links to documentation that describes how to resolve each issue.
Power Apps checker web API The Power Apps checker web API
provides a mechanism to run static analysis checks against customizations
and extensions to the Microsoft Dataverse platform. It is available for
makers and developers to perform rich static analysis checks on their
solutions against a set of best practice rules to quickly identify prob-
lematic patterns.
Conclusion
Cloud-based solutions offer ready-to-use applications that can be easily delivered, reducing the time required for an implementation. This means that organizations can start using the solution as-is. Still, in some scenarios, additional requirements are needed to add value to the business or to empower users and drive adoption of the new solution. Thus, modern SaaS solutions also provide rich capabilities to further extend them, ranging from simple configurations using a low-code/no-code approach to extensions built with custom code by professional developers.
References
Business Apps | Microsoft Power Apps
▪ Ensure the solution doesn’t mimic the ways of achieving the same results as the legacy solution or the system being replaced.
▪ Understand the platform capabilities and use its strengths to simplify and optimize the overall process to get the most out of the out-of-the-box experiences.
▪ Review whether any potential ISVs were considered before deciding to extend the solution. The AppSource marketplace contains ISV-managed solutions that may replace the need to create a fully custom solution.
▪ Ensure the extensions honor the security mechanisms, privacy, and compliance requirements.
▪ Ensure extensions are scalable, tested for high volume, and capable of handling peaks like holiday seasons.
▪ Align extensions with ALM automated processes to build and deploy them in an efficient and fast-paced approach.
▪ Ensure code and customizations follow only the documented supported techniques, and don’t use deprecated features and techniques.
The off-the-shelf functionalities gave users the ability to achieve a single view of the customer lifecycle across all products. Through
customizations and further extensibility of the solution, the company
added special wealth management services and fully customized the
user experience to better serve their customers.
In line with the design pillars, we look at how the choice of integration platform should fit into the architectural landscape. We also look at how to choose an integration design that offers users the capabilities they desire in the short and long term. And finally, before diving into the product specifics, we walk through some of the common challenges people face when integrating systems.
Evaluate the integrated processes against the overall goals of the project and the business. To accomplish this, begin your integration work by defining goals that map to the business perspective.
Multi-phased implementation
You might be implementing Business Applications in a planned
multi-phased approach, starting with a geographic location, a single
division, or a single Dynamics 365 business application—Finance, for
example. In this scenario, some level of integration with the parts of
the legacy solution that will be implemented in future phases could
be necessary.
Financial consolidation
Perhaps your organization is a subsidiary of a larger operation that
requires data to be consolidated and reported in a corporate parent
entity. This often requires extracts of data to be transformed and
loaded into the corporate consolidation system. In some cases, it’s
the other way around: your organization might expect integration
of consolidation and other data from your subsidiaries into your
new system.
Many more scenarios are not described here. Some might even be
combinations of several of the examples.
Conceptualizing
Creating a blueprint and thoroughly planning will also help you formulate the testing and performance testing of the solution later in the implementation. That’s why integration architecture is a key part of the Success by Design solution blueprint.
Success by Design highly recommends that you approach integration work the same way you would an extension project: by following a defined software development lifecycle (SDLC) that incorporates business stakeholders and collects their buy-in. The SDLC should include requirements, design, development, and user acceptance testing, as well as performance testing, deployment, and application lifecycle management (ALM). The work to define requirements should be business driven and process focused. For more information, refer to Chapter 7, “Process-focused solution,” and Chapter 11, “Application lifecycle management.”
To create a blueprint, you can leverage several types of diagrams, which we describe here.
▪ Application integration diagram The application integration diagram is often a high-level representation of solution architecture; an example is shown in Figure 16-1. Many styles exist, but in its basic form it should provide an overview of which systems in the solution need to be integrated and, ideally, what data is exchanged and the direction of the data flow. Once the overview is established, details can be added about the specific interface touchpoints, frequency, and volume information, and perhaps a link to ALM information, such as a functional design document (FDD) number or link.
Fig. 16-1 Application integration diagram showing the integrated systems, the messages and responses exchanged, the process start, and the user actions and trigger events.
These details also inform estimates about the expected effort and duration. Calculating the volume of transactions that flow through the interface is important because it helps you decide what patterns to use, and ultimately the size of the different platform components and services needed for the integration.
Fig. 16-4 Simple cloud integration (System 1 and System 2 both in the cloud) and simple hybrid integration (System 1 in the cloud, System 2 on-premises).
Whether in cloud or hybrid, be sure to not fall for the temptation of building
tightly coupled point-to-point, custom synchronous service integrations.
They will likely not scale well, and future expansion will likely make the
system more fragile. It’s good practice to lay a solid foundation early.
Organizations that choose not to do that often find themselves having to
make tough and expensive adjustments to make the architecture scalable
as the business grows.
▪ Primarily on-premises In the scenario depicted on the left side of
Figure 16-5, the organization has in place a rather complex on-premises
solution with multiple line-of-business systems. The organization is
using a middleware platform that’s based on-premises to connect
systems and might be integrating its first SaaS cloud application.
▪ Hybrid The hybrid example on the right side of Figure 16-5
represents large, diversified enterprises or organizations that are
halfway through their cloud journey. The organization is using
connected cloud and on-premises middleware platforms that can
integrate in hybrid scenarios, and it might add an additional cloud
application to the solution architecture.
Fig. 16-5 Primarily on-premises integration (on-premises middleware connecting multiple line-of-business systems, with a first cloud application) and hybrid integration (connected cloud and on-premises middleware platforms spanning systems and storage/DW).
Fig. 16-6 Hybrid integration without middleware, and primarily cloud integration.
Middleware
Integration middleware is software or services that enable communication and data management for distributed applications. Middleware often provides messaging services based on technologies such as SOAP, REST, and JSON. Some middleware offers queues, transaction management, monitoring, and error logging and handling. Different middleware platforms can support on-premises, cloud-based, or hybrid scenarios.
Middleware provides specialized capabilities to enable communication, transformation, connectivity, orchestration, and other messaging-related functionality. Dynamics 365 Finance and Dynamics 365 Supply Chain Management are business applications that provide integration capabilities to support interfaces, but they are not designed to replace middleware.
The following are characteristics you should consider when choosing a middleware platform.
Key characteristics
When deciding whether to integrate with an existing system, you should consider several important characteristics of the solution, the system, and its context.
Business Applications and Power Platform allow users to design, configure, and customize an application to meet a customer’s business needs. In doing so, it’s important to consider performance and scalability. Ensuring a performant system is a shared responsibility among customers, partners, ISV providers, and Microsoft.
▪ Scalability and performance The planned platform, middleware, and supporting architecture must be able to handle your organization’s expected persistent and peak transaction volumes in the present, the short term, and the long term.
▪ Security Authentication defines how each system confirms a user’s identity, and you should consider how that will work across systems and with middleware. Authorization specifies how each system grants or denies access to endpoints, business logic, and data. It’s important to ensure that an integration platform and middleware are compatible with system security and fit into the solution architecture landscape.
▪ Reliable messaging Middleware typically provides messaging services. It’s important that the messaging platform supports the architecture of the integrated systems and provides a reliable mechanism or technology to ensure that messaging across systems is accurately sent, acknowledged, received, and confirmed. This is especially important in situations in which a system or part of the supporting architecture is unavailable, and it is where error-handling concepts become essential.
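To make the acknowledgment idea concrete, the following toy sketch (plain Python, not any real middleware API) shows at-least-once delivery: a message is only forgotten once the consumer confirms it, and unconfirmed messages are redelivered:

```python
import queue

class ReliableChannel:
    """Toy illustration of at-least-once delivery: a message stays 'in flight'
    until the consumer acknowledges it; unacknowledged messages are redelivered."""

    def __init__(self):
        self._queue = queue.Queue()
        self._in_flight = {}   # delivery_tag -> message awaiting acknowledgment
        self._next_tag = 0

    def publish(self, message):
        self._queue.put(message)

    def deliver(self):
        """Hand one message to a consumer; it is not forgotten until ack()."""
        message = self._queue.get_nowait()
        self._next_tag += 1
        self._in_flight[self._next_tag] = message
        return self._next_tag, message

    def ack(self, tag):
        # Consumer confirmed receipt; now it is safe to forget the message.
        self._in_flight.pop(tag)

    def redeliver_unacked(self):
        """Requeue anything the consumer never confirmed (e.g. it crashed)."""
        for tag, message in list(self._in_flight.items()):
            self._queue.put(message)
            del self._in_flight[tag]
```

A consumer that crashes before calling ack() leaves its message in flight; redeliver_unacked() puts it back on the queue, so the message is processed at least once rather than lost. Real middleware adds persistence, dead-lettering, and duplicate detection on top of this basic contract.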
Dataverse
Let’s take a look now at how you can use Power Platform and Dataverse with Business Applications.
UI integration
In UI integration, the primary point of integration is centered around an action that’s performed on the UI. The integration might or might not trigger business logic or cause anything to be written to the system. UI integration creates a seamless user experience even though the data and process might exist in separate systems, as shown in the example in Figure 16-8.
Data integration
Data integration, shown in Figure 16-9, is integration between systems that takes place at the data layer, where data is exchanged or shared between systems. Data integration is different from process integration in that both systems work with a representation of the same data, whereas in process integration the process starts in one system and continues in the other system.
Fig. 16-9 Data integration: both systems look up and update the status of the same order and invoice data.
Keep in mind that often data doesn’t originate from a system within your own organization; it can come from an external source to upload. Conversely, data might be extracted from your system to be sent to an auditor, a regulatory body, or an industry data exchange.
When designing data integration, you should consider which system will be the system of record, or owner, of the information. There are scenarios in which this is clear cut, for example, uploading worker data from Dynamics 365 Human Resources into Finance and Supply Chain Management apps (the former is the system of record). But there are also scenarios in which each system owns a separate part of the overall entity, for example, the integration of Accounts to Customers between Sales and Finance apps.
Systems may have separate processes and even different business logic, but if the underlying data layer is the same, the need for transfers of data, synchronization, and programming to transform the data is completely eliminated. This kind of integration is now possible because the development of shared data stores such as Dataverse set standards for how a certain kind of data is defined in the data layer.
Process integration
A key characteristic of process integration design is that it’s event driven. The benefits of process integration are accuracy, efficient timing of information and activities in the organization, and reduction of manual errors.
Process integration refers to when a business process is designed to span multiple systems. There are many varieties of process integrations, such as a plan-to-produce workflow in which production forecasting or block scheduling occurs in a production scheduling system, and the rest of the manufacturing and supply chain management process (production, order management, and fulfillment) and the billing process occur in an ERP system. Figure 16-12 shows this type of integration.
With Dataverse, you can leverage Power Platform to integrate apps. You
should consider several components and features of Azure and Power
Platform when designing an overall solution as well as individual integration.
Power Automate
Power Automate provides low-code/no-code solutions to help you
automate workflows and build integration between various applications.
Power Automate automates repetitive and manual tasks and seamlessly
integrates business applications inside and outside Power Platform.
Choosing a pattern
Choosing the right pattern is a critical part of successfully implementing integration between systems. When choosing an integration pattern, you should consider factors such as what its main functionality is and how it’s built, including platform, language, user interface, and the connectivity type it handles. We recommend that you also consider what type of actions you need the integration to perform, such as the following:
▪ Data types and formats What types of data are you sending: transactional, text, HTML?
▪ Data availability When do you want the data to be ready, from source to target? Is it needed in real time, or do you just need to collect all the data at the end of the day and send it in a scheduled batch to its target?
▪ Service protection and throttling When you use certain integration patterns, service protection might be built in so that there’s a maximum number of records allowed, because performance decreases with quantity. Sometimes the provider also throttles how many requests are allowed within a given time frame.
Pattern directions
Let’s take a closer look at some common patterns for individual integrations and the pros and cons of each. This overview is generalized; for more information, refer to the “Product-specific guidance” section in this chapter.
Push
Mechanism: One system puts (pushes) data into another; information flows from the originator to the receiver.
Trigger: Originating system user or system event.
Pros: If technical expertise lies within the pushing system, the custom effort lies here. Good for reactive scenarios.
Cons: The pushing system might not have visibility into availability, load, and idle times in the receiving system.
Use when: For reactive scenarios; the receiving system provides a turnkey API and the organization’s developer skillset is with the originating system.
Pull
Mechanism: The receiving system requests data from the originator, a subtle but significant difference from the Push pattern.
Trigger: Receiving system request based on a schedule.
Pros: If technical expertise lies within the pulling system, the custom effort lies here. Good for proactive scenarios. Clear visibility into availability, load, and idle times in the receiving system.
Cons: The originating system might not have the APIs needed to pull from.
Use when: For proactive scenarios; we might not have the option to add triggers or events in the originating system; the originating system provides a turnkey API and the organization’s developer skillset is with the receiving system.
One-way sync
Mechanism: Data from one system is synchronized to another by one or more trigger events.
Trigger: Data state, system, or user event.
Pros: Establishes a clear system of record. Simple conflict resolution.
Cons: Sometimes data is edited in the receiving system even though it is not the system of record.
Use when: One system is the owner or system of record and other systems consume that data.
Bidirectional sync
Mechanism: Data from two or more systems is synchronized.
Trigger: Data state, system, or user event.
Pros: Data is kept in sync across applications. Acquired divisions on multiple platforms can continue to use their existing systems. Users can use their own system to make changes.
Cons: Complex conflict resolution. Redundant data is replicated for each system. Synchronized data might be a subset of the data in each system; the rest must be automatically given values or manually updated later.
Use when: For dual-write integration patterns; scenarios in which there isn’t a clear system of record; data from best-of-breed systems should be available in Dataverse for Power Apps and other tools and services.
Aggregation
Mechanism: Data from a specialized system is integrated into another system at an aggregated level for processing or reporting.
Trigger: Any.
Pros: Detailed data is kept in the system where it’s used. Aggregation can derive a small dataset from a large one, thus limiting traffic across platforms.
Use when: Aggregates are needed for calculating or processing, for example, on-hand inventory by warehouse or revenue by invoice header.
Embedding
Mechanism: Information from one system is seamlessly integrated into the UI of another system.
Trigger: User event.
Pros: Simple, because the data remains in the originating system.
Cons: Difficult to use the data for calculations or processing.
Use when: A mix of information from first-party applications (for example, Bing, Power BI, and Exchange), third-party components, canvas apps, or other information is embedded in the UI of an application.
Batching
Mechanism: The practice of gathering and transporting a set of messages or records in a batch to limit chatter and overhead.
Trigger: Any.
Pros: Great for use with messaging services and other asynchronous integration patterns. Fewer individual packages and less message traffic.
Use when: Whenever it isn’t necessary to transmit individual records.
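As a generic sketch of the batching pattern (plain Python, not tied to any Dynamics 365 or middleware API; the `send` callable is a stand-in for whatever actually transmits the batch), batching can be as simple as accumulating records and flushing them once a size threshold is reached:

```python
class Batcher:
    """Collect records and send them in batches to limit chatter and overhead."""

    def __init__(self, send, batch_size=100):
        self._send = send              # callable that transmits a list of records
        self._batch_size = batch_size
        self._pending = []

    def add(self, record):
        self._pending.append(record)
        if len(self._pending) >= self._batch_size:
            self.flush()

    def flush(self):
        """Transmit whatever is pending (call once more at the end of the run)."""
        if self._pending:
            self._send(self._pending)
            self._pending = []

# Usage: seven records with a batch size of three produce three transmissions.
sent = []
b = Batcher(sent.append, batch_size=3)
for i in range(7):
    b.add(i)
b.flush()
```

Three messages instead of seven illustrate the trade-off: fewer individual packages and less traffic, at the cost of added latency for the records waiting in a batch.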
Data logging
When designing the error handling and data logging for an integration, it’s important to incorporate the following:
▪ Error logging Select the platform (such as a log file or database) to
log the errors.
▪ Error monitoring and notifications
▫ Define the process and technical approach to monitor errors.
▫ Select an error notification approach, which is the process to
notify administrators and other stakeholders if there’s an error.
Business-critical errors might need a different notification
approach than non-critical errors.
▪ Business continuity It’s important to plan for business
continuity with minimal disruption to business in the event of errors.
For more information, see Chapter 20, “Service the solution.”
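A minimal sketch of these error-logging and notification ideas (plain Python with hypothetical helper names; a real implementation would write to your chosen log platform and alerting channel):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("integration")

def notify_admin(message):
    # Placeholder: in practice this could email or page an administrator.
    print(f"ALERT: {message}")

def handle_integration_error(error, business_critical=False):
    """Log every error; escalate only business-critical ones to stakeholders."""
    log.error("Integration error: %s", error)
    if business_critical:
        notify_admin(str(error))
        return "escalated"
    return "logged"
```

The `business_critical` flag captures the point above: business-critical errors warrant a different notification approach than non-critical ones, even though both end up in the log.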
The more integration touchpoints there are, the greater the potential for
errors. Therefore, error handling also needs to be planned in accordance
with the integration scenario. For example, in the scenario of a synchronous
integration pattern, an error might require a complete rollback of the
entire process, whereas in an asynchronous data integration scenario, it
might be acceptable to fix a data issue just by notifying the administrator.
(See Figure 16-14.)
Let’s now discuss error management for two key patterns of synchronous
and asynchronous integration.
Fig. 16-14 A synchronous integration between System 1 (create quote, enter item, look up price) and an Enterprise Resource Planning system (get price, return price).
As a reminder, in a synchronous integration pattern, each step is dependent on the completion of a preceding step. In the event of an error, consider a retrial or rollback; which option to use depends on whether the error is transient or persistent. In the scenario of a transient error, the normal behavior of the application will resume after a few retries. Note that retrial limits should be predefined to avoid a situation in which all resources are blocked. Once the retrial limit has been crossed, the entire process will need to be rolled back, and appropriate error messages should be logged.
Transient errors such as network timeouts get fixed after a few retries.
However, persistent errors require intervention to be fixed.
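The retry-then-fail logic can be sketched as follows (plain Python with hypothetical names; real retry limits, backoff values, and the rollback itself depend on your platform):

```python
import time

class PersistentError(Exception):
    """An error that will not go away by retrying (e.g. bad credentials)."""

def call_with_retries(operation, max_retries=3, delay=0.01):
    """Retry transient failures up to a predefined limit, then re-raise so the
    caller can roll back the whole process and log the error."""
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except PersistentError:
            raise                        # no point retrying; intervention needed
        except Exception:
            if attempt == max_retries:   # retrial limit crossed: caller rolls back
                raise
            time.sleep(delay * 2 ** (attempt - 1))  # simple exponential backoff
```

A network timeout that clears on the third attempt completes normally, while a persistent error (or exhausting the limit) surfaces immediately to the caller, which then rolls back and logs the failure.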
The following are some of the most common error scenarios in any
integration between Business Applications and other applications:
▪ System becomes unavailable
▪ Authorization and authentication errors
▪ Errors caused by platform limits
▪ Errors caused by service protection limits applied to ensure
service levels
▫ API limits (limits allowed based on the user license type)
▫ Throttling
▫ Process errors
▪ Runtime exceptions
Figure 16-15 shows a persistent blocking error. The platform supports logging and notifications so that an administrator can be notified and can resolve the issue.
Consider using tools such as Azure Monitor to collect, analyze, and act on telemetry, helping maximize your applications’ availability.
Business
Each integration scenario has a direct impact on the application you’re
integrating with and on its users. Any downstream applications might also
be indirectly impacted. For integration to be implemented in a successful
manner, you should address the following during the Initiate stage:
▪ Identification of application owners and stakeholders
Application owners need to identify downstream applications that
might be impacted in an integration. However, a common scenario
is to bring in these owners after the planning is complete. This often
results in misaligned timelines and scope and in turn creates project
delays and poor quality. Integration design needs to take every
impacted application into consideration.
▪ Alignment between business owners Business stakeholders
have different needs for their organization’s various applications.
Unless there is a collective understanding of integration scenar-
ios and approaches, requirements might be mismatched among
the various owners. This in turn often results in delayed timelines,
cost overruns, and a lack of accountability. System integrators
should consider the following:
▫ Identify the key owners and bring them together to walk
through the scenarios.
▫ Differentiate between process, data, and UI integration to
simplify and streamline the integration scope.
▫ Outline the impact on business groups affected by the integration.
▫ Highlight issues and risks in the absence of following a
consistent approach.
A transparent conversation enables business stakeholders to
understand the underlying risks and benefits and thus build a
common perspective.
▪ Ambiguous and unrealistic expectations Integration requirements can sometimes be ambiguous or incorrectly defined, which sets unrealistic expectations for what the integration will deliver.
Technology
Most enterprises have legacy applications with traditional, on-premises
architecture. The move to cloud applications requires consideration
of the patterns they support and the best practices when planning for
integration. A low-code/no-code pattern should now be at the forefront
of any integration architecture. Engaging in conversations about not
just the current setup but also about future planning for performance,
extensibility, and maintenance plays a key role in choosing the right
technology. When choosing the appropriate technology, consider
the following.
▪ Does one size truly fit all? Many enterprises have an enterprise
architecture approach that might or might not be aligned with
the modern approaches for cloud applications. Prior to investing
in a specific approach, evaluate whether the existing architecture
aligns with cloud patterns. Sometimes, a generic approach is
taken—this can result in inefficiencies in integration, unscalable
architecture, and poor user experience and adoption. Therefore,
it’s crucial to consider design paradigms such as the following:
▫ Definition of the integration approach based on
multiple parameters
▫ Benefit of a proof of concept to determine the pros and cons
of one approach over another
▫ Synchronous versus asynchronous integration
▫ Process, UI, and data integration
▫ Single record or batch
▫ Frequency and direction of the integration
▫ Message reliability and speed
▫ Data volume
▫ Time expectations (some scenarios require a batch integration
to be completed during a specific time window)
▫ Error management and retrials
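The design parameters above can be captured in a simple decision sketch. This is a minimal illustration, not product guidance; the class, function, and threshold values are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class IntegrationRequirement:
    """Illustrative subset of the parameters listed above."""
    user_waits_for_result: bool  # synchronous UX expectation
    records_per_run: int         # single record vs. batch

def suggest_pattern(req: IntegrationRequirement) -> str:
    # A user waiting on the result implies a synchronous, single-record call.
    if req.user_waits_for_result:
        return "synchronous"
    # Large periodic volumes favor asynchronous batch integration.
    if req.records_per_run > 1000:
        return "asynchronous-batch"
    # Everything else: asynchronous messaging, one record at a time.
    return "asynchronous-single"

print(suggest_pattern(IntegrationRequirement(True, 1)))       # synchronous
print(suggest_pattern(IntegrationRequirement(False, 50000)))  # asynchronous-batch
```

In a real project, the decision matrix would include all the parameters above (direction, frequency, reliability, time windows, and error handling), but even a rough aid like this makes the trade-offs explicit during design reviews.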
▪ Will sensitive data be exposed? System integrators must
understand IT and customer concerns around security, especially in
the context of integrating on-premises applications with Business
Applications. Categorizing security concerns as follows aids in
identifying who and what is required to help address them:
▫ Access control
▫ Data protection
▫ Compliance and regulatory requirements
▫ Transparency
For more information, refer to Chapter 12, “Security.”
▪ Storage costs and platform limits To ensure service quality
and availability, Business Applications and Power Platform
enforce entitlement limits. These limits help protect service
quality and performance from interference by noisy behavior
that can create disruptions. System integrators must incorporate
these entitlement limits into the design of each integration.
▪ In a fast-changing technology world, architects sometimes
choose an approach due more to its familiarity than its
applicability. Customers and system integrators must evaluate
whether to request additional resources specialized in the specific
technology who will be a better fit for their current and
future needs.
Project governance
The initial stage of a project should include a defined project
governance model. Integration between on-premises and Business
Applications can range from simple to complex, and the lack of
well-defined project governance areas results in gaps and issues in the
smooth implementation of a project. Following are common project
governance concerns specifically for the integration components:
▪ Has the impact of the integrations been identified for the end user,
process, and reporting? This might require planning for change
management activities, including communication and training.
▪ Making a solution performant should be at the forefront of
any design decisions made by the implementation team. This
applies equally to the application layer and the integration layer.
Is performance testing planned and does it cover integration
components? Performance testing is another activity that tends to
be considered optional. However, architects and project managers
must consider embedding this in their Business Applications
implementations. This will help identify any performance
bottlenecks prior to deployment for end users.
▪ Are development and test environments available for all applications
for thorough system integration testing? Is a plan for stub-based
testing during the unit testing phase required?
Asking these questions during the initial stages of the project enables
both the implementation partner and customer to proactively identify
and plan for any dependencies and risks.
Choosing a pattern
Data entities
In Finance and Supply Chain Management, a data entity encapsulates a
business concept, for example, a customer or sales order line, in a format
that makes development and integration easier. It’s a denormalized view
in which each row contains all the data from a main table and its related
tables instead of the complex view of the normalized data model behind it.
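The idea of a denormalized entity view can be sketched in a few lines. The table and column names below are invented for illustration and don't correspond to actual Finance and Supply Chain Management schemas.

```python
# Normalized source tables (invented sample data).
customers = {"C001": {"name": "Contoso"}}
addresses = {"C001": {"city": "Seattle", "country": "USA"}}
groups    = {"C001": {"customer_group": "Wholesale"}}

def to_entity_row(customer_id: str) -> dict:
    """Flatten a customer and its related tables into one row,
    similar in spirit to a data entity's denormalized view."""
    row = {"customer_id": customer_id}
    row.update(customers[customer_id])
    row.update(addresses[customer_id])
    row.update(groups[customer_id])
    return row

print(to_entity_row("C001"))
```

A consumer of the entity reads or writes one flat row instead of navigating the normalized tables behind it, which is what makes data entities convenient for integration.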
▪ Custom Services (SOAP, REST, and JSON)
▫ Description: A developer can create external web services by
extending the application with X++. Endpoints are deployed for
SOAP, REST, and JSON.
▫ Triggered by: User actions and system events.
▫ Pros: Easy for developers to add and expose service endpoints to
use with integration platforms.
▫ Cons: Requires ALM and SDLC for coding and deployment of
extensions into Finance and Supply Chain Management.
▫ Use when: Invoking an action or update, for example, invoicing a
sales order or returning a specific value. We recommend using REST
Custom Services in general because REST is optimized for the web.
▪ Consuming web services
▫ Description: A developer can consume external web services by
adding a reference in X++.
▫ Triggered by: Scheduled and user initiated. Can wait for off hours
or idle time.
▫ Pros: Easy for developers to add and expose service endpoints to
use with integration platforms.
▫ Cons: None.
▫ Use when: Consuming services from other SaaS platforms
or products.
▪ Data Management Framework REST Package API (asynchronous,
batched, cloud, or on-premises)
▫ Description: The REST API helps integrate by using data packages.
▫ Triggered by: Originating system users or system events.
▫ Pros: On-premises and large-volume support. The only interface
that supports change tracking.
▫ Cons: Supports only data packages.
▫ Use when: Large-volume integrations. Scheduling and
transformations happen outside Finance and Supply
Chain Management.
▪ Electronic Reporting
▫ Description: A tool that configures formats for incoming and
outgoing electronic documents in accordance with the legal
requirements of countries or regions.
▫ Triggered by: Data state, system, or user events.
▫ Pros: Data extracts and imports are configurable in Finance and
Supply Chain Management. It supports several local government
formats out of the box. It can be scheduled for recurrence.
▫ Use when: Electronic reporting to regulatory authorities and
similar entities.
▪ Excel and Office integration
▫ Description: Microsoft Office integration capabilities enable
user productivity.
▫ Triggered by: Data state, system, or user events.
▫ Pros: Out-of-the-box integration (export and edit) on almost every
screen in the product.
▫ Cons: Performance decreases with the size of the dataset.
▫ Use when: Extracts for ad hoc reporting or calculations. Fast editing
of column values and entry of data from manual sources.
▪ Business events
▫ Description: Business events provide a mechanism that lets external
systems, Power Automate, and Azure messaging services receive
notifications from Finance and Supply Chain Management.
▫ Triggered by: User or system events.
▫ Pros: Provides events that can be captured by Power Automate,
Logic Apps, and Azure Event Grid.
▫ Cons: Extensions are needed to add custom events.
▫ Use when: Integrating with Azure Event Grid, Power Automate, or
Logic Apps, or to notify of events inherently driven by a single data
entity, for example, an update of a document or a pass or fail of a
quality order.
▪ IFrame (UI)
▫ Description: The Website Host control enables developers to
embed third-party apps directly into Finance and Supply Chain
Management inside an IFrame.
▫ Triggered by: Users.
▫ Pros: Seamlessly integrates UI from other systems or apps without
integrating the backend. Information goes directly into the Finance
and Supply Chain Management UI without the need for updates
or compatibility.
▫ Use when: Information from loosely coupled systems can be
displayed within the Finance and Supply Chain Management UI.
The experience is enhanced if the external …
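As a sketch of what calling a REST custom service involves, the snippet below only composes the request; it sends nothing. The service group, service, and operation names are hypothetical, although the `/api/services` path shape follows the documented pattern for Finance and Supply Chain Management custom services.

```python
import json

def build_custom_service_request(base_url: str, service_group: str,
                                 service: str, operation: str,
                                 payload: dict):
    """Compose the URL and JSON body for a REST custom service POST.
    Group/service/operation names passed in are caller-defined."""
    url = f"{base_url}/api/services/{service_group}/{service}/{operation}"
    return url, json.dumps(payload)

url, body = build_custom_service_request(
    "https://contoso.operations.dynamics.com",
    "SalesServiceGroup", "SalesOrderService", "InvoiceOrder",  # hypothetical names
    {"salesOrderNumber": "SO-00042"},
)
print(url)
```

An integration platform would POST `body` to `url` with an OAuth bearer token; the point of the sketch is simply that each operation maps to one addressable endpoint.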
Customer Engagement
In this section, we discuss frameworks and platforms to use when
integrating with Customer Engagement.
IFrames
IFrame is a popular approach commonly used for hosting external URL-
based applications. Consider using the Restrict cross-frame scripting
options to ensure security.
Canvas apps
Canvas apps is a cloud service that enables citizen developers to
easily build business apps without the need to write any code. These
apps can use connectors from other cloud services and can be
embedded into Dynamics 365 apps.
Virtual tables
Virtual tables pull data on demand from external data sources. This
approach is implemented as tables within the Dataverse layer but doesn’t
replicate the data because the data is pulled real time on demand. For
more information, read about the limitations of virtual tables.
Webhooks
Commonly used for near real-time integration scenarios, webhooks
can be invoked to call an external application upon the trigger of a
server event. When choosing between the webhooks model and the
Azure Service Bus integration, consider the following:
▪ Azure Service Bus works for high-scale processing and provides
a full queueing mechanism if Dataverse pushes many events.
▪ Webhooks enable synchronous and asynchronous steps,
whereas Azure Service Bus enables only asynchronous steps.
▪ Both webhooks and Azure Service Bus can be invoked from
Power Automate or plug-ins.
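A webhook receiver is simply an HTTP endpoint that the platform calls on a server event. The sketch below shows only the handler logic; the `x-webhook-key` header name is an illustrative assumption rather than the documented Dataverse contract, and the payload field mirrors the `PrimaryEntityName` property that the execution context passes to registered endpoints.

```python
def handle_webhook(headers: dict, body: dict, shared_key: str) -> int:
    """Validate and accept a webhook call; returns an HTTP status code."""
    if headers.get("x-webhook-key") != shared_key:
        return 401  # reject calls that don't present the shared secret
    entity = body.get("PrimaryEntityName")
    if not entity:
        return 400  # malformed payload
    # Acknowledge quickly and do the real work asynchronously,
    # so the calling platform isn't blocked waiting on us.
    print(f"queued event for {entity}")
    return 200

status = handle_webhook({"x-webhook-key": "s3cret"},
                        {"PrimaryEntityName": "account"}, "s3cret")
```

Keeping the handler fast and pushing the actual processing onto a queue is what lets a webhook endpoint behave well even without Azure Service Bus's full queueing mechanism.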
Azure Functions
Azure Functions uses serverless architecture and can be used to extend
business logic, including calling external applications. It runs in Azure
and operates at scale. Azure Functions can be called through Power
Automate and Azure Logic Apps. For more information, read about
“Azure Functions” and “Using Azure Functions in Power Apps.”
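One practical pattern is to keep the business logic an Azure Function hosts separate from the function binding itself, so the logic can be unit tested without the hosting runtime. The sketch below shows only that logic; the order shape and tax rate are invented for illustration.

```python
def enrich_order(order: dict, tax_rate: float = 0.1) -> dict:
    """Logic an HTTP-triggered function might host: total an order
    posted by, say, a Power Automate flow (illustrative shape)."""
    subtotal = sum(line["qty"] * line["price"] for line in order["lines"])
    return {**order,
            "subtotal": subtotal,
            "total": round(subtotal * (1 + tax_rate), 2)}

result = enrich_order({"id": "SO-1", "lines": [{"qty": 2, "price": 10.0}]})
print(result["total"])  # 22.0
```

The actual function entry point (using the azure-functions package) would just deserialize the request, call `enrich_order`, and serialize the response.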
Virtual tables
The virtual table option enables you to connect Dataverse to Finance and
Supply Chain Management Apps entities as virtual tables that offer the
same full CRUD (create, retrieve [or read], update, delete) capabilities as the
entity endpoint in the app. A benefit is that we can access the data in Finance
and Supply Chain Management Apps in a secure and consistent way that
looks and behaves the same as any other table or construct in Dataverse. We
can also use Power Automate to connect to almost anything.
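Because a virtual table behaves like any other Dataverse table, the standard Web API verbs apply to it. The sketch below only maps CRUD operations to requests; the entity set name is illustrative.

```python
# Base URL for a Dataverse-style Web API (org name is invented).
BASE = "https://contoso.crm.dynamics.com/api/data/v9.2"

def crud_request(op: str, entity_set: str, record_id: str = ""):
    """Map a CRUD operation to the HTTP verb and URL used against
    a Dataverse table, virtual or otherwise."""
    verbs = {"create": "POST", "read": "GET",
             "update": "PATCH", "delete": "DELETE"}
    url = f"{BASE}/{entity_set}"
    if record_id:
        url += f"({record_id})"  # address a single record by its GUID
    return verbs[op], url

# Entity set name below is an illustrative example, not a documented name.
print(crud_request("update", "mserp_customers",
                   "00000000-0000-0000-0000-000000000001"))
```

The value of the virtual table option is precisely that nothing here is special: the same verbs, security model, and tooling work whether the data lives in Dataverse or is fetched on demand from Finance and Supply Chain Management.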
Dual-write
The second option is dual-write integration, shown in Figure 16-16.
Dual-write also provides synchronous, real-time integration between
Finance and Supply Chain Management Apps and applications in
Dataverse. Dual-write even offers offline capabilities.

A common scenario for the dual-write option is one in which both
Customer Engagement and Finance and Supply Chain Management
Apps are working on data that is fundamentally the same, for example,
customer and account, product and item, and projects.

Note: Because Dataverse is cloud based, this direct integration option
isn’t available for on-premises implementations of Finance and Supply
Chain Management Apps. You can use the Data Management REST API
instead. For more information, read the “Finance and Operations
virtual tables FAQ.”
Conclusion
Implementing Business Applications is often a building block into an
organization’s larger solution landscape. In that case, the organization can
benefit from automation to gain streamlined and effective cross-system
processes and avoid manual errors.
Integration is also key in closing the digital feedback loop and making
all the organization’s data available, not just for reporting and visibility
References
REST vs CRUD: Explaining REST & CRUD Operations
Messaging Integration Patterns
Open Data Protocol (OData)
Consume external web services
Custom service development
Data management package REST API
Recurring integrations
Electronic messaging
Office integration overview
Business events overview
Embed canvas apps from Power Apps
Analytical Workspaces (using Power BI Embedded)
What’s new and changed in Platform update 31 for Finance and Operations apps (January 2020)
Checklist

Define business goals
▪ Document and define goals and expected benefits of integrations
being implemented in a business-centric way.
▪ Align the planned integration’s purpose with short- and long-term
organization goals.
▪ Ensure the overview of the integration architecture, systems, and
integration points is clear and understandable.
▪ Ensure that stakeholders have a shared understanding of the
purpose and scope of the integrations that are being implemented.

Choose a platform
▪ Ensure the organization understands the concept of cloud versus
on-premises platforms and the boundary between them.
▪ Plan to use either an integration middleware or messaging service.
▪ Ensure the integration architecture, platform, or middleware
supports the expectations for monitoring, audit, notifications,
and alerts.
▪ Ensure the integration architecture supports the expected level of
security, availability, and disaster recovery.
▪ Ensure all components of the integration architecture support ALM
and version control.

Choose a pattern
▪ Align the designs of each integration with the overall
integration architecture.
▪ Clearly state the options and benefits of each of the following: UI,
data, process integration, and Dataverse.
▪ Design integrations to favor robust, asynchronous
messaging-based patterns.
▪ Align patterns used for each integration with expectations for
volumes, frequency, and service protection limitations.
▪ Set realistic estimates of the operating costs for services, platforms,
and storage involved and be aware of how scaling affects them in
the future.

Project governance
▪ Plan each integration for user and performance testing under
realistic loads, as well as the end-to-end process leading up to the
integration, across the system boundary, and after the point
of integration.
▪ Plan for testing the end-to-end process patterns used for each
integration in line with recommendations for volumes, frequency,
and service protection limitations.
▪ Have change management activities related to integrations that
reflect and support overall business goals.
▪ Complete the impact analysis on upstream and
downstream processes.
The organization was working with an IT vendor that had several years
of experience building complex integrations using technologies such
as IBM MQ and Microsoft BizTalk.
As the team started building and testing the initial components, they
identified some challenges due to the architecture:
▪ They experienced slow performance with batch-integration scenarios
because they called the services as they would have in a point-to-
point integration.
▪ They couldn’t use standard functionalities that would have been
available with out-of-the-box approaches such as SharePoint
Online integration with Power Platform.
▪ For an aggregated view, they decided to replicate all data into
the Dynamics 365 Customer Service app, which led to additional
storage costs.
▪ They encountered throttling and API limits issues, which prevented
successful completion of the integration.
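Throttling issues like the one this team hit are typically handled with retries that respect the service's Retry-After hint rather than hammering the endpoint. The sketch below simplifies the response to a plain dict; real code would read the HTTP 429 status and Retry-After header from the response object.

```python
import time

def call_with_retry(send, max_attempts: int = 5) -> dict:
    """Retry a throttled call. 'send' returns a dict like
    {'status': 429, 'retry_after': seconds} (simplified shape)."""
    delay = 1.0
    for _ in range(max_attempts):
        resp = send()
        if resp["status"] != 429:
            return resp
        # Prefer the service's hint; otherwise back off exponentially.
        time.sleep(resp.get("retry_after", delay))
        delay *= 2
    raise RuntimeError("still throttled after retries")

# Simulated service: throttled twice, then succeeds.
calls = iter([{"status": 429, "retry_after": 0},
              {"status": 429, "retry_after": 0},
              {"status": 200}])
print(call_with_retry(lambda: next(calls)))  # {'status': 200}
```

Batching requests and spreading load over time reduce how often the limits are hit in the first place; the retry logic is the safety net, not the design.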
Introduction
When a new Dynamics 365 implementation is
delivered, users typically expect an improvement in
system performance.
Good performance is often assumed as a given, a default experience.
The reality is that although Dynamics 365 products are scalable and
powerful, various factors are involved in achieving a high-performance
solution. These include defining and agreeing to performance metrics,
testing throughout the phases of a project, and taking design and build
considerations into account, particularly for customizations
and integrations.

In this chapter, we explore various aspects of performance and how
they should be addressed throughout the stages of implementation. We
discuss why performance is directly related to the success of a project
and why it must be prioritized early. We also discuss how to align
expectations with business users to enable meaningful performance
discussions so that challenges are identified and translated into design
requirements. Finally, we cover performance testing strategies to
protect organizations from common performance risks and
anti-patterns as well as how to approach resolving any issues
that do occur.

The following are performance factors to consider when
designing solutions:
• Solution performance is critical for user adoption, customer
experience, and project success and necessary to enable businesses
to achieve their goals.
• Clear goals and realistic expectations are vital to developing a
solution that performs well.
• Scalable solutions begin with the correct use of the right software.
• Customizations increase the risk of performance issues, but these
risks can be mitigated with the right mindset and planning.
• Performance testing needs to be realistic to be meaningful.
• Performance issues are complex to resolve and therefore are better
avoided than fixed.
• Ensuring performance is an iterative process that involves
incremental improvements throughout the solution lifecycle.
People expect superior response times and a smooth experience from the
organizations they choose to provide products and services. If customers
don’t get this, they tend to look elsewhere. Let’s go back to our street
design. Because of poor street design, people might decide not to live in
or visit that city. Residents might even elect different leadership because
of their poor experience. The takeaway is that system performance affects
people in numerous ways and can have serious consequences.
System success
As businesses evolve and optimize, the ambition remains to achieve
more with less. The need to sell more products or service more cus-
tomers is always accompanied by the need to reduce the expenditure
of money, effort, people, and time, at all levels of the organization.
This constant need to achieve more with less creates pressure on em-
ployees to work in the most efficient way possible. Time spent waiting
for a system to perform an action is wasted time, and employees who
rely on these systems to do their jobs quickly realize this.
User adoption
User adoption is a critical factor in the success of any software project.
Any business case (and projected return on investment) depends on
the system being used as intended. Poor performance directly drives
user dissatisfaction and can make user adoption incredibly challenging.
Users are keen to adopt systems that increase their productivity, which
essentially means minimizing wasted time. Business users achieve their
goals when solution performance is optimal and as expected. A poorly
performing system wastes users’ time and therefore reduces productivity.
System reputation
Even before go live, performance can help or hinder user adoption.
During the development phase, the application typically is presented
to a set of key users in different areas of the business to collect feed-
back. These users then talk to colleagues about the implementation.
In this way, the reputation of the application spreads throughout the
business long before most users touch the system. Keep in mind that
performance impressions tend to spread quickly. For example, if a
demonstration goes well, a wave of excitement might flow throughout
the company. This positivity can help increase user adoption because
of the anticipated improvement in productivity.
different network infrastructures. Project teams therefore need to build
solutions that accommodate users who have a variety of hardware,
increased network latency, and a range of network quality.
We look at more examples later in the chapter, but for now it’s impor-
tant to be clear that most performance issues are best solved by cor-
rect implementation decisions and not by adding hardware. Moreover,
it’s crucial to acknowledge that performance is not guaranteed simply
because the software is running in the cloud. It’s still the responsibility
of the project team to deliver a well-performing solution.
Prioritize performance
Given that performance is so important to users, customers, and ulti-
mately the success of the overall system, let’s look at how performance
relates to project delivery.
Data strategy
▪ Is the volume of data stored likely to cause performance issues
for users?
Integration strategy
▪ Are real-time integrations feasible given the performance expecta-
tions of the users?
▪ Can overnight batch integrations complete within a given timeframe?
Data modeling
▪ Do we need to denormalize for performance reasons?
Security modeling
▪ Will our security model work at scale?
▪ Are there bottlenecks?
These aren’t the types of questions the delivery team should be asking
when a performance issue surfaces, especially when approaching the
go-live deadline. A successful project will have answers to these ques-
tions early on. At the least, the delivery team should identify risks and
seek possible resolutions in the early stages of delivery. This might lead
to proof-of-concept work to test performance.
Let’s put this into the context of our street design scenario and consid-
er the questions that need to be asked and answered to maximize the
design. For instance, how many residents currently live in the city? How
much traffic can we expect to be on the roads? What’s the projected
population growth and how long will the design support it? Will traffic
lights be installed? If so, how many and how will that affect traffic?
What’s the speed limit and are there risks associated with that limit?
Each of these questions helps determine the best street design before
we even start the project.
Resources
Considering performance from the early stages also ensures that the
correct expectations are set in terms of time, money, effort, and peo-
ple. For example, for performance testing, a dedicated performance
test environment is needed as well as the people to do the testing.
Business stakeholders might need additional time with the delivery
team to understand, agree with, and document performance require-
ments. It might even be necessary to allocate more development time
and resources for code optimization.
Fig. 17-1: Identify and fix performance issues in the phases leading up
to go live; do not wait to fix issues in the phases after go live.
User confidence
Performance is always linked to perception. It’s important to be aware
of user feedback during the implementation of the project because
it can affect users’ engagement during testing and ultimately help or
hinder user adoption at launch. Projects that prioritize performance
early on tend to present better-performing solutions during imple-
mentation. This early planning helps reassure users that they’ll receive
a solution that enables them to become more productive—and this
leads to better engagement and adoption overall.
Establish requirements
Acceptable performance is the goal of every project, but the definition
of “acceptable” is often vague, if defined at all. To successfully deliver
acceptable performance, we need to be clear on what that means and
then be able to track progress against our performance goals.
This approach is the same for other system requirements. It’s vital that
performance be considered like other requirements gathered in the
initial stages of implementation.
More specifically for performance, it’s also important to know when
to stop optimizing. For example, developers might apply perfor-
mance tweaks to a piece of code, optimizing with no goal in mind.
Any optimization of tested code comes with some risk of regression
issues; therefore, this practice should be performed as infrequently as
possible. However, without a clear understanding of when enough is
enough, it’s difficult to gauge what level of optimization is sufficient
and when further optimization is unnecessary.
This doesn’t mean users are solely responsible for deciding perfor-
mance requirements. It’s important to be clear that performance
comes with an associated cost, and business stakeholders need to
assess requests coming from users within the context of the wider
project. Aggressive performance requirements might be achievable,
but they require additional development, testing effort, and people.
With this in mind, it’s important to understand the underlying need
for each performance requirement and for business stakeholders to
be prepared to consider a compromise where it makes sense to do
so. Performance for the sake of performance is expensive and unnec-
essary. Communicate this to users and take a pragmatic approach to
focus on what specific performance requirements are important.
Spend time with users to understand the activities for which per-
formance plays a critical role. Agree on these areas with project
stakeholders and then focus performance-related work on these
activities to maximize the value of the efforts. Consider performance
testing for each system area, including the following:
▪ Functional processes
▪ Background operations (for example, batch and workflows)
▪ Integrations
▪ Data migration
▪ Reporting and analytics
Typical statements that reveal where performance matters include
the following:
▪ “I need to load this record quickly for a customer on the phone;
otherwise, I might lose the sale.”
▪ “I need to be able to load the products into the delivery truck to
meet my shipment schedule.”
▪ “I have to be able to do this within a few minutes; we have a
service-level agreement (SLA) to meet.”
▪ “This overnight process needs to happen within the time window;
otherwise, the users won’t be able to work.”
Anticipate growth
When discussing performance requirements, consider the roadmap of
the business as well as design requirements for an increase in demand
on the system. Data and user volumes play an important part in how a
system performs, so it’s important to anticipate any growth expected
in the near future and design for that rather than focus on the current
requirements. Along with future growth, also plan for seasonality in the
system load, for example, during the end of the year.
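A back-of-the-envelope sizing sketch makes this concrete. The growth rate, horizon, and seasonal factor below are example figures, not guidance.

```python
def projected_peak_users(current: int, annual_growth: float,
                         years: int, seasonal_factor: float) -> int:
    """Size for anticipated growth compounded over the design horizon,
    then apply a seasonal peak multiplier."""
    return round(current * (1 + annual_growth) ** years * seasonal_factor)

# 400 users today, 20% annual growth, 2-year horizon, 1.5x year-end peak.
print(projected_peak_users(400, 0.20, 2, 1.5))  # 864
```

Designing for 864 concurrent users rather than today's 400 changes the conversation about data volumes, integration throughput, and test loads well before any code is written.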
Document requirements
It’s crucial that performance requirements be documented (the same
as for other system requirements). Documenting requirements pro-
vides visibility to all parties about the expectations of the software and
provides clear goals for the implementation team. Additionally, any
performance risks identified during discussions with users should be
documented in the project risk register to ensure they’re tracked and
mitigated as much as possible.
Assess feasibility
The implementation team should review performance requirements
Looking at this from the perspective of our street design discussion, there
are different ways to tackle a situation like traffic. We could design our
roads to be 10 lanes; that would handle a lot of traffic. But this creates
other complications. Do we have land that will support that infrastructure?
What are the relative costs associated with that design? How easy will it be
for residents to cross the street? How long will a light need to be red for
them to cross it? Will the street need additional lighting?
Use the right tool for the job
Dynamics 365 products are flexible and can be changed to suit the
needs of many businesses. However, exercise restraint when considering
incorporating this flexibility into your design. Just because we can
adapt the products to achieve something functionally doesn’t
guarantee they will do it well.
For example, the xRM concept, which became popular around the
release of Dynamics CRM 2011, spawned many systems to manage any
type of relationship in the CRM product. The ease of development for
basic data access, including a built-in user interface and security model,
combined with its rich extensibility features, made it a popular choice
to begin system development. Although this proved successful in
many situations, many projects ran into trouble because they used the
product in a way unsuited to its strengths. Dynamics 365 products are
designed for specific use within a business context. They’re designed
and optimized for users to access master and transactional business
data, not for keeping high-volume historical transactions.
Many areas of this book discuss what to consider when making system
design decisions. Often the consequence of making the wrong decisions
during these stages is performance issues. It’s important that
use of Dynamics 365 products is in line with their intended purpose.
Environment planning
Chapter 9, “Environment strategy,” discusses environment planning in
detail. From a performance perspective, consider the following as the
team moves towards implementation:
▪ Performance testing typically occupies an environment for a
significant amount of time, so a separate environment is
usually advisable.
▪ Latency adds overhead to every operation. Minimize overhead as
much as possible by ensuring that applications are located as close
to each other as possible.
▪ Choose a performance test environment that’s representative of
the production environment whenever possible. For example,
for Finance and Operations, the implementation team should
User personas
The organization should have a sufficient number of user licenses to be
able to realistically model the anticipated behavior during performance
testing. Teams also need an understanding of the user personas for
testing expected usage. Keep the following in mind for user personas:
▪ User locations
▪ User security configurations
▪ Data typically associated with each type of user
▪ Expected number of concurrent users divided by persona
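The persona attributes above can be recorded in a simple structure for test planning. The names, locations, and counts below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A load-test persona (illustrative attributes only)."""
    name: str
    location: str
    concurrent_users: int

personas = [
    Persona("Call center agent", "EU West", 120),
    Persona("Field technician", "US East", 45),
    Persona("Back-office clerk", "US East", 30),
]

total_concurrent = sum(p.concurrent_users for p in personas)
print(total_concurrent)  # 195
```

A table like this also drives license planning: the organization needs enough user licenses to simulate 195 concurrent sessions realistically, split across the security configurations each persona uses.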
Customization and
performance

The extensibility of Dynamics 365 applications provides the powerful
ability to tailor software to meet individual business needs but also
introduces risk—performance issues commonly involve product cus-
tomizations and integrations, particularly those involving code. This
section discusses these issues and provides guidance on how to avoid
common problems.

Chapter 15, “Extend your solution,” discusses how to approach custom-
izing Dynamics 365.

Proof-of-concept development: Teams sometimes run an initial
proof-of-concept project before implementing the main project to
prove out ideas and gain understanding of product suitability. Custom
functionality built during proof-of-concept projects is often carried
over into the main project. Although this approach can significantly
accelerate development during the main project, it’s worth bearing in
mind that there’s often little in the way of developmental governance
for proof-of-concept projects, which are usually undertaken without
any performance requirements or considerations in place. These
projects can therefore become a source of performance problems
unless the team takes the time to review the outcomes along with any
new requirements prior to further implementation.
Retrofitted performance
Sometimes developers focus on getting code working correctly before
working quickly. Although this can be a reasonable approach, the
pressure of deadlines often means that the optimization planned for a
later date doesn’t get done, leading to technical debt and a need for
rework. A better approach is to be clear on any performance constraints
from the beginning and then implement the solution accordingly.
Requirement evolution
A change in functional requirements can be another reason for cus-
tomization-related performance problems. A developer decides how
to implement code based on the given requirement at a point in time.
A change in requirements might invalidate some or all of these deci-
sions and cause the implementation to become unsuitable.
Common mistakes
Code can become suboptimal from a performance standpoint for a
number of reasons, and specific guidance is beyond the scope of this
chapter. However, the following factors are often involved in perfor-
mance challenges, so we recommend that you understand and avoid
them during implementation.
Chatty code
One of the most common causes of code-related performance issues is
excessive round trips. Whether between the client and server or between
the application and database, an excessive number of requests for an
operation can really slow it down. Every request carries latency and pro-
cessing time overhead, and it’s important to keep these to a minimum.
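A rough model shows why round trips dominate. The 50 ms per-request overhead is an assumed figure for illustration; real latency varies by network and workload.

```python
LATENCY_MS = 50  # assumed fixed overhead per round trip

def total_time_ms(requests: int, work_ms_per_item: int, items: int) -> int:
    """Total time = per-request overhead + actual processing work."""
    return requests * LATENCY_MS + items * work_ms_per_item

chatty  = total_time_ms(requests=500, work_ms_per_item=2, items=500)  # one call per item
batched = total_time_ms(requests=1,   work_ms_per_item=2, items=500)  # one batched call
print(chatty, batched)  # 26000 1050
```

The work is identical in both cases; only the number of round trips differs, yet the chatty version is roughly 25 times slower. This is why batching requests is one of the highest-value optimizations in integration-heavy code.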
It’s common for developers to write logic that iterates across collec-
tions of data that are dynamic in size, due to the data-driven nature
of code within Dynamics 365 implementations. It’s also common for
developers to work with low volumes of data within development en-
vironments. However, that means that these types of issues often aren’t
identified until the latter project stages, which include meaningful data
volumes. These issues can be avoided by prioritizing minimal data
retrieval during code design, retrieving each piece of data only once,
and identifying any deviations from these steps as part of a code
review process.

Figure 17-3 shows how nested loops multiply the number of requests:

foreach (var a in collection1)
{
    foreach (var b in collection2)
    {
        ExecuteRequest();
    }
}

Size of collection1   Size of collection2   Total requests   Total execution time
2                     2                     4                0.2 seconds
5                     5                     25               1.25 seconds
10                    10                    100              5 seconds
50                    50                    2,500            2 minutes, 5 seconds

Retrieving too much data
Another common performance-related issue is retrieving more data
than necessary, often in columns. For example, the practice of selecting
all the columns in Dynamics 365 apps to avoid specifying individual
columns can cause performance issues. The number of joins executed
on the database server to provide lookup names and option set values
can be a significant overhead, particularly when querying large volumes.
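One mitigation is to request only the columns the operation actually needs, for example with an OData `$select` clause. The sketch below only builds the query string; the entity set and column names are examples.

```python
def build_query(entity_set: str, columns=None) -> str:
    """OData query against a Dataverse-style endpoint. Listing columns
    explicitly avoids the server resolving every lookup and option set."""
    url = f"/api/data/v9.2/{entity_set}"
    if columns:
        url += "?$select=" + ",".join(columns)
    return url

print(build_query("accounts", ["name", "accountnumber"]))
# /api/data/v9.2/accounts?$select=name,accountnumber
```

Making explicit column lists a code-review checkpoint catches the "select everything" habit before it reaches an environment with production-scale data.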
Unintended execution
Performance issues sometimes happen because customizations are
executed accidentally or multiple times in error, for example, duplicate
plug-in steps or duplicate method calls. Project teams should ensure
that developers are aware of exactly how their customizations are trig-
gered and mindful of circumstances that might inadvertently trigger
their code. Background processes or batch jobs recurrence should be
set according to business needs.
Performance testing approach
The project team and users need to be confident that requirements
identified earlier in the project are achieved when the system is
implemented. A performance testing strategy ensures that system
performance is measurable and provides a clear indication of whether
performance is acceptable.
Realistic
Being realistic means understanding the quantity and personas of
users using the system at a given time and defining day-in-the-life
activity profiles for the personas to understand the actions they’ll
perform. If the performance test incorrectly assumes that all the users
will run the most complex processes in the system concurrently, the
projected demand placed on the system will be far higher than reality.
Strive to model the users in an accurate way to get the most meaning-
ful results from the test.
Keep in mind that user interaction with the application is fairly slow.
For a given process, users don’t typically click buttons as quickly as they
can; they take time to think between actions, and this can be incorpo-
rated into a performance test as a think-time variable. This can vary
from user to user, but an average figure is sufficient to model behavior.
The key point here is to develop a performance test that represents a
number of users working concurrently and place a realistic amount of
demand on the system.
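A hypothetical workload model makes this concrete. The function below is an illustrative sketch (the user counts and timings are invented, not prescribed values), showing how think time dominates the request rate a population of concurrent users actually generates:

```python
# Estimate the request rate generated by concurrent users, given an
# average "think time" between actions. Illustrative model only.

def requests_per_second(concurrent_users: int,
                        think_time_s: float,
                        response_time_s: float) -> float:
    """Each user completes one action every (think + response) seconds."""
    cycle = think_time_s + response_time_s
    return concurrent_users / cycle

# 200 users who pause ~9 s between actions on a system with 1 s responses
# place about 20 requests/second on the system...
realistic = requests_per_second(200, think_time_s=9.0, response_time_s=1.0)
# ...while assuming zero think time would overstate demand tenfold.
worst_case = requests_per_second(200, think_time_s=0.0, response_time_s=1.0)
print(realistic, worst_case)
```

An average think-time figure per persona is usually enough; the goal is a demand curve that resembles real usage, not a worst case no real user population would produce.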
Isolated
A dedicated performance testing environment is generally recom-
mended for two main reasons.
Performance test results are meaningful only if testers are aware of the
activities occurring in the system during the test. A performance test is
worthless if an unaccounted-for process places additional demand on
the system during test execution.
Business data
Key to good performance testing is using business data such as setups,
configurations, masters, and transactions. It’s recommended to use
the configurations and data to be migrated that will ultimately go live
in production. Additionally, all the data preparation activities must be
ready in advance—for example, data from 100,000 customers or sales
orders should be available via an import file.
Functionally correct
A performance test result is meaningful only if the system functions
correctly. It’s tempting to focus on the performance metrics of success-
ful tests. However, if errors occur, they should be corrected and the
test should be executed again before any analysis is performed on the
results. Deviations in behavior between test runs can significantly skew
a performance test result and make any comparison meaningless.
Document results
The output of performance testing activities should be documented
clearly so that interpretation is straightforward. The results should
be mappable to performance testing criteria and enable the team to
quickly assess whether the requirements were achieved or whether
there are gaps between requirements and results. For example, page
load times can be captured for certain activities and compared to
acceptable page load time requirements agreed to by the business.
It should also be possible to identify when the performance test was
executed and against which version of code if applicable.
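One lightweight way to make results mappable to the criteria is to record them as structured data alongside the run's date and code version. The activities, thresholds, and figures below are hypothetical examples, not values from the book:

```python
# Map measured results to agreed performance criteria so gaps are obvious.
# All names and numbers here are hypothetical examples.
from datetime import date

requirements = {"Open sales order": 2.0, "Post invoice": 3.0}   # seconds, agreed by business
measured     = {"Open sales order": 1.6, "Post invoice": 3.4}   # seconds, from the test run

run_info = {"executed": date(2021, 5, 3).isoformat(), "code_version": "1.4.2"}

report = {
    activity: {
        "target_s": target,
        "actual_s": measured[activity],
        "met": measured[activity] <= target,
    }
    for activity, target in requirements.items()
}
gaps = [activity for activity, result in report.items() if not result["met"]]
print(run_info, gaps)
```

A report in this shape lets the team see at a glance which requirements were achieved, and lets later runs be compared against the same version-stamped baseline.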
Deal in facts
When faced with a performance problem, implementation teams
often begin to frantically point fingers at certain parts of the system
and develop ideas to find the magical quick fix to solve the problem.
Unfortunately, this approach often causes more problems than it
solves because teams make significant changes based on instinct, often
degrading performance further or causing regression bugs as a result.
Expectations
Keep in mind that performance issues are rarely due to a single issue.
Suboptimal performance is most commonly the result of a number of
factors working together, so a single fix is typically unrealistic. Generally,
performance is an iterative process of applying incremental improvements.
It can be difficult to determine whether the root cause is
a performance issue in the Dynamics 365 application or its underlying
infrastructure. Microsoft support teams are available to help with
performance issues, but keep in mind that when a performance issue is
isolated to a single environment, investigations typically start by ex-
amining anything unique in that environment, such as customizations,
rather than the application code or infrastructure.
Knowledge is power
Performance issues can be complex and difficult to resolve, so it’s vital
that the implementation team has sufficient knowledge to be able to
ask the right questions and analyze issues meaningfully. The imple-
mentation team is often able to assist with performance issues, but
issues can surface after go live, once any warranty period has expired.
It’s therefore crucial to transfer knowledge to allow business as usual
(BAU) teams to resolve performance issues.
Low-hanging fruit
It's advisable to identify smaller opportunities for performance gains,
rather than consider reworking large and complex areas of the system.
For example, for a poorly performing piece of code, there are usually
several options for optimizations with varying risk of causing issues and
varying performance gains. In some situations, it might be advisable to
make a number of low-risk tweaks; in other situations, it might be better
to undertake a larger, higher-risk optimization.
Workshop strategy
FastTrack runs a solution performance workshop focused on solution
design that covers the impact of additional configuration and cus-
tomization on the overall performance and end-user experience. The
workshop emphasizes the importance of performance prioritization,
goals, and testing during the stages of a project.
Workshop scope
The workshop includes the following topics that address how to
incorporate performance activities into the overall delivery plan and
allocate sufficient performance expert resources to the project:
▪ Data volumes Projected workload and integration volumes to
ensure expectations are within limits and aligned with intended
product usage
▪ Geolocation strategy Physical locations of users and servers to
identify any network-related challenges
▪ Key business scenarios Main areas of the business for which
performance is particularly important
▪ Extension performance Planned customizations to understand
how implementation is aligned with recommended practices
▪ User experience performance Modifications to the user expe-
rience in conjunction with best practices
Timing
We recommend that you conduct the performance testing strategy
workshop before solution design or as soon after as the team is able
to provide detailed information about performance requirements and
the performance testing strategy. Scheduling a workshop later in the
implementation is risky because any findings and recommendations
from the workshop could cause significant rework.
Product-specific guidance
Operations
Following are recommendations for achieving optimal performance in
your Finance and Operations solutions:
▪ Use Tier-2 or higher environments based on business objectives.
Don’t use a Tier-1 environment.
▪ Keep the solution up to date with hotfixes, platform updates, and
quality updates.
▪ Identify and maintain a log of performance-related risks.
▪ Use DMF to import and export large volumes. Don’t use OData for
large volumes because it isn’t natively built to handle large payloads.
▪ Use set-based data entities and parallelism to import and export
large volumes.
▪ Build your own data entities to avoid potential standard entity
performance issues. Standard entities contain fields and tables that
you might not need for your implementation.
▪ Configure a batch framework including batch groups, priority,
and threads.
▪ Define batch groups and assign a batch server to each batch
group to balance batch load across AOS servers.
▪ Design heavy batch processes to run in parallel processing.
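The set-based, parallel approach recommended above can be illustrated generically. The sketch below is not DMF or X++; the chunk size, worker count, and placeholder import function are invented for illustration:

```python
# Illustrative sketch (not DMF itself): importing a large volume in
# set-based chunks processed in parallel, instead of one record at a time.
from concurrent.futures import ThreadPoolExecutor

def chunks(records, size):
    """Split the full record list into set-based batches."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def import_chunk(chunk):
    # Placeholder for a set-based import call that handles a whole
    # chunk in one operation rather than record by record.
    return len(chunk)

records = list(range(100_000))
with ThreadPoolExecutor(max_workers=8) as pool:   # parallelism, cf. batch threads
    imported = sum(pool.map(import_chunk, chunks(records, 5_000)))
print(imported)
```

The same shape applies to batch design: fewer, larger set-based operations spread across parallel workers, rather than one serial pass paying per-record overhead.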
Performance tools
▪ Trace Parser
▫ Diagnoses performance issues and various errors
▫ Visualizes execution of X++ methods as well as the execution
call tree
▪ Lifecycle Services Environment Monitoring
▫ Monitors server health metrics
▫ Monitors performance by using the SQL insights dashboard
▪ Query Store
▫ Reviews expensive SQL queries during defined intervals
▫ Analyzes the index used in queries
▪ PerfSDK and Visual Studio load testing
▫ Simulates single-user and multi-user loads
▫ Performs comprehensive performance benchmarking
▪ Performance timer
▫ Helps determine why a system is slow
▫ https://yoursite.cloudax.dynamics.com/?cmp=USMF&debug=develop
▪ Optimization advisor
▫ Suggests best practices for module configuration
▫ Identifies obsolete or incorrect business data
Customer Engagement
The following are guidelines taken from the FastTrack Performance
Optimization Tech Talk for optimizing solution performance:
Conclusion
This chapter discussed why performance is expected and critical for
user adoption, the customer experience, and project success. We noted
that although Dynamics 365 projects are built to perform well at scale,
their flexibility means it’s crucial that implementation teams consider
performance as an iterative process throughout the solution lifecycle.
References
Product-specific guidance (Operations)
Monitoring and diagnostics tools in Lifecycle Services (LCS)
Performance troubleshooting using tools in Lifecycle Services (LCS)
Work with performance and monitoring tools in Finance and Operations apps
Query cookbook
Take traces by using Trace parser
Diagnose issues and analyze performance by using Trace parser
Performance timer
Query Store Usage Scenarios
Monitoring performance by using the Query Store
Optimization advisor overview
Performance focus
Establish that the responsibility to deliver a performant
solution on the SaaS platform is shared by the cloud
service provider and the implementor who is customizing
and extending the out-of-the-box application.
While still in the design stage, the project team decided to use the
best practices from the Success by Design framework and their own
experiences to highlight the risks of leaving performance testing out
of scope. The project team pointed out potential negative outcomes if
performance wasn’t considered:
18
Prepare for go live
The final countdown for
the start of a journey.
Introduction
Go live is the process through which a new solution
becomes operational. It’s a critical milestone during
the deployment of a new business solution.
This is the stage in which all the parts of the project come together and
are tested and validated to ensure everything works as expected, not
just on their own, but as a whole.
When going live with a new solution, there’s often a transition period,
also known as cutover. The cutover process involves several steps that
need to be planned, executed, and monitored to ensure the completion
of all the essential activities critical to the transition to the new solution.
Go-live readiness
All the tasks and efforts undertaken during an implementation project
are preparation for the biggest milestone of the project: go live. In
the Success by Design framework phases, completion of these tasks is
when the project reaches the Prepare phase.
At this point, if you followed all the previous guidance and recommend-
ed practices, the solution should have sufficient maturity for going live.
You should perform an evaluation on the preparedness of the people,
processes, and systems for the solution to ensure no critical details have
been overlooked for go live. While you’ll never be in a position of zero
risk for go live, the go-live readiness review is a qualitative method to
determine how fully prepared the new solution is to run your business.
When the go-live strategy is aligned with best practices, requirements
for going live have been successfully completed (including testing, code,
and configurations), and there’s a concise and agreed-to plan for the
next actions required for the transition to the new system (cutover), then
the project is ready to go live. Figure 18-1 shows the Success by Design
implementation phases and when go live occurs.
The review uncovers potential risks and issues that could imperil go live
and provides a set of recommendations and actions to mitigate them.
Start early to prepare for the go-live review. Schedule time to complete
the go-live checklist and conduct the review, accounting for time to
mitigate possible risks and issues, especially go-live blockers.
Identify all the possible risks and issues and get the recommendations
on each key area. All issues and risks must have a mitigation plan.
Identify workarounds for any go-live blockers. Follow up on mitiga-
tions of issues identified for critical activities.
The go-live checklist
The go-live checklist is a list of requirements for go live. Use it to assess
and validate the preparedness of your solution. The checklist includes
mandatory activities as well as recommended practices:
▪ Solution Scope to be released
▪ Acceptance of the solution
▪ SIT completion
▪ UAT completion
▪ Performance testing completion
▪ Data migration readiness and validation
▪ Confirm external dependencies
▪ Change management for initial operations readiness
▪ Operational support readiness

In the next section we discuss in depth the main topics in the
checklist. Refer to the “Product-specific guidance” section for specific
product requirements.
Solution Scope to be released

Expected outcome
Solution Scope is aligned with the solution that’s going to be live. It has
been communicated, shared with stakeholders, and signed off on by
the business, agreeing that the expected scope is covered.
Mitigation plan
Compare the Solution Scope with the solution delivered for go
live, share the Solution Scope and comparison results with the key
stakeholders, and determine whether the solution is complete or if
functionalities are missing. If any are missing, prioritize them and
assign level of risks, owners, and expected completion date.
Acceptance of the solution
As described in Chapter 14, “Testing strategy,” testing accurately gauges
the readiness of the solution. Testing measures a solution’s quality and
effectiveness because testing simulates how the solution will operate
in real life. As a result, testing builds a solid foundation on which to
determine whether the solution is acceptable, enabling the business to
make a go/no-go decision. The business is required to sign off on the
solution when there’s objective and rigorous testing to prove that the
solution fulfills the business vision.

Expected outcome
Unit testing through end-to-end testing must be completed successfully
and the results signed off on by the business. By doing so, the
business states that the solution as built meets their end-to-end
process needs and that those processes are ready to be executed on
the new solution.
Mitigation plan
Following the best practices in Chapter 14, “Testing strategy,” helps you
ensure the integrity and effectiveness of the solution, minimize any
go-live blockers, and avoid rushing to fix unexpected and critical bugs
close to go live.
Several types of testing are covered in the chapter about testing. Some
elements from these testing types are included in the go-live checklist
and need to be validated. The main testing types are SIT, UAT, and
performance testing.

SIT completion
What you check
Validate that SIT is successfully completed. It’s important to test on
expected peak volumes and have the business sign off on it.

Why you check it
Going live without fully testing the integrations might result in unsteady
and ineffective system connectivity, performance issues, flawed
and weak interfaces, and data flow issues such as data inconsistency
and data not being available in real time.

Mitigation plan
Complete SIT with peak volumes that are close to actual peak volumes
and get sign-off from the business that the integration testing strategy
works for the go-live. For guidance on which integration pattern to
choose, see Chapter 16, “Integrate with other solutions.”
UAT completion
What you check
Verify that UAT is successfully completed (or almost completed if there
aren’t any outstanding significant cases) and signed off on by the business.

Why you check it
UAT helps validate the inclusion of all the expected functionalities. The
solution might still have some minor bugs and need some fixes, but
you can ensure that no critical part of the solution is missing.

Often during UAT, edge cases that were overlooked might be discovered,
resulting in continuous fixes and improvements to the solution
during this period.

UAT enables users to use the system and understand how to perform
activities. It also provides an opportunity for users to bring up scenarios
that might have been missed.

UAT should include different types of personas, a variety of geographic
locations, and different environments (such as virtualized, physical, and
remote) and devices (such as laptops, mobile devices, and tablets).

If the decision is made to go live with an incomplete UAT, the project team
might discover during go live that the solution doesn’t work properly, and
recovery would be difficult, costly, and take a lot of time and effort.

Mitigation plan
Typically, UAT is considered complete when all the cases are covered,
there are no blocking issues, and any high-impact bugs have been
identified. Establish a mitigation plan to address open items identified
during UAT, with an owner and an estimated time of completion.
Without a mitigation plan, there’s a substantial risk of delaying the
go-live date.
Performance testing completion
What you check
Validate that performance testing is successfully completed and signed
off on by the business.
Why you check it
Overlooking performance testing might result in performance issues
post–go live. Implementations often fail because performance testing
is conducted late in the project or not until the production environment
is ready, when it’s used as a testing environment. A production
environment shouldn’t be used as a testing environment. It’s more
difficult to fix performance issues in a live environment than in a test
environment and might also result in production downtime.

Any open issues critical for production need to be addressed before
going live.

Performance testing covers the following:
▪ Load times with peak volumes of transactions
▪ Data search performance
▪ End-to-end processes and specific business processes
▪ Performance from different devices, browsers, and virtualized
environments
▪ Integrations
▪ An environment for performance testing that meets the standards
to ensure it’s reliable for testing
Expected outcome
The solution delivered is performant and meets the business perfor-
mance criteria: it supports the load of expected transactions and user
concurrency and usage, and the speed, system response time, stability,
and scalability are acceptable.
UAT might exercise the end-to-end performance of each business
process, but it doesn’t represent the full load and concurrency of actual
usage post–go live. Therefore, it’s important to have a separate
performance testing strategy that can simulate stress on the solution
and concurrency usage.
Mitigation plan
Execute performance testing in parallel with UAT. Establish a mitigation
plan to address issues and assign owners and expected completion
dates. It’s important to identify the root cause of issues so as not to
replicate them in the production environment.
Data migration readiness and validation
Why you check it
Data migration is a key activity during the cutover. You need accurate
data on day one of your live operations.

Data quality is an important aspect of data migration; you don’t want
to import bad data from a system into a new system. It’s important to
have a plan to conduct data validation.

Expected outcome
All scripts and processes planned for the cutover migration are tested
and signed off on by the business.

Mitigation plan
Execute dry runs of how to migrate the data and prepare the
migration plan. Follow more recommended practices to prepare your
data migration by reviewing Chapter 10, “Data management.”
Confirm external dependencies
What you check
Verify that external dependencies such as ISVs and third-party systems
and services are aligned with the timelines and scope for go live.

Why you check it
External dependencies are outside the direct control of the project team.
This means it’s even more important for these dependencies to be
accounted for when building the project plan and managing the schedule.

Expected outcome
If the scope of the solution includes external dependencies, the
implementation team should coordinate with them to ensure that
expectations are aligned, requirements are met, and their solutions are
aligned with the roadmap timelines.

Mitigation plan
It’s good practice to have regular meetings to review dependency
status because problems can cause delays and other project issues.
Change management for initial operations readiness
Our goal here is not to explore the plethora of change and adoption
strategies but to highlight the key principles and activities that are
critical before the rollout of a new business solution.
Training: For go live to be successful, it’s vital that users know how
to use the system on day one of operations. Therefore, successful user
training is key to go live.
Ensuring that users are trained helps them achieve the desired results from
the new solution and leads to higher productivity and adoption. Failure to
establish a training strategy can negatively impact usage and adoption.
Your program must have an effective way to engage the end users of
the solution to help drive adoption and also eliminate the risk of
building a solution that doesn’t necessarily meet user requirements.
To help overcome adoption challenges, it’s crucial to get leadership
support to encourage innovative technology use. End users are less
resistant to adopting novel solutions when business sponsors serve as
role models.
Mitigation plan
Reassess the change management plan throughout the implementation
project.
Operational support readiness
What you check
Verify that there’s a monitoring and maintenance plan for the solution
in the production environment as well as for transitioning the plan to
support teams.

Why you check it
Verification ensures the health and availability of the solution once it’s
live. Before go live, it’s important to plan the transition to the solution’s
support teams.

Expected outcome
Support teams can be from the partner or customer organization.
Notify stakeholders when all parties agree that the system is ready to
go into production and send them a list of end users who will use the
new system.
For additional details about monitoring and Hypercare, visit Chapter
21, “Transition to support.”
There’s also a risk that the new solution could go live in a system ver-
sion that’s out of service. In such a scenario, if an issue is uncovered in
production, you’ll need to update the solution to the latest version to
be able to apply the hotfix. In addition, automatic updates that install
unexpectedly might affect the deployed solution and cause outages,
unavailability, and blocking issues.
Mitigation plan
As discussed earlier in this section, it’s important before go live to plan
the transition to the teams who will support the solution. A support
plan enables support teams to be more proactive and preventive
rather than reactive.
Production environment readiness
It’s critical to prepare the production environment before go live. There
are different approaches to achieve this, depending on the applications
in the scope of the deployment.
For some applications, the production environment can be created
from an initial phase without any dependencies. For other applications,
a formal process needs to be followed to create or deploy the production
environment. Additionally, several milestones must be completed
prior to deploying the production environment to help with the sizing
of the environment and its impact on factors such as performance and
scalability.
The “Product-specific guidance” section later in the chapter details
the steps for each application. For a discussion of production environ-
ments, see Chapter 9, “Environment strategy.”
Fig. 18-3 The cutover strategy: rehearse, communicate, go/no-go.
The cutover strategy ensures alignment of the cutover plan with orga-
nizational objectives and includes the following aspects:
▪ Timing, market conditions, and other environmental aspects
necessary to go live with the new business solution
▪ Organizational readiness requirements in terms of available com-
petencies and resources
▪ Setting up the communication plan
▪ Defining roles and responsibilities
To set the go-live date, consider the time required to complete testing
and resolve any issues that might arise, in addition to time for the
preparation of the production environment and the cutover activities.
The stakeholders should verify that all necessary resources are avail-
able not only for the requested duration of the cutover plan activities
but also to support post–go live.
Communications plan
The communications plan is an essential part of the cutover plan. This
plan identifies all the stakeholders involved in the cutover. The plan
should include at least one communication channel and appropriate
distribution lists, depending on the type of communication required.
Effective communication helps avoid uncertainty and provides visibility
to stakeholders about project status and the results of each activity,
which is important for a successful cutover.
It’s important to identify and document the different communications
required for go live, who is responsible for triggering the communica-
tions, and the recipients list. Having a communication plan as part of
the cutover plan enables better visibility into who the stakeholders are,
at what point in time they should receive communications, and who
the points of contact are.
Cutover plan
The cutover process is a critical and complex step that must be planned
and practiced in advance. The center of the cutover process is the cutover
plan, which lists in detail every step that needs to be executed and moni-
tored to prepare the production environment for business processes to be
executed once the cutover is complete. Cutover activities include system
configuration, data migration, data validation, and decommissioning of
legacy systems when applicable. These steps must be precisely orches-
trated to minimize disruption and ensure a successful deployment.
When creating the cutover plan, it’s important to consider and document
all dependencies, the timing of each task down to the minute, who’s
responsible and accountable for each task, instructions, verification steps,
and any additional documentation associated with each task. To arrive at
the correct timings for the cutover plan, activities must be practiced and
tested multiple times. It’s recommended to perform a “mock cutover,” or
dress rehearsal, simulating the activities of the real cutover in sandbox
environments. Depending on the complexity of the solution, it might be
necessary to conduct multiple rounds of mock cutover. The goal is that
no matter how complex the cutover procedures are, the project team has
high confidence that final execution will work flawlessly.
The implementation team should rehearse all the cutover plan steps
to ensure that the cutover strategy and communication plan are ready.
The cutover plan should also specify whether there are workarounds or
additional plans that might prevent any issues from delaying go live,
for example, if a blocking issue was identified during UAT but there’s
a workaround that can be implemented until a permanent fix can be
applied. The workarounds should take into consideration the impact to
end users. After all, users judge a solution by how well they can use it,
regardless of whether there’s a workaround. This is another reason why
all key stakeholders need to participate not only to define the success
criteria but also to make the final go/no-go decision.
Cutover execution
The cutover execution is the series of activities carried out in adherence
to the cutover plan. Cutover execution includes the following steps:
▪ Communicate activities, milestones, and results to stakeholders in
accordance with the communication plan.
▪ Ensure that all activities are executed and completed, and that
any unforeseen issues are addressed (or can be addressed after go
live), communicated, acknowledged by stakeholders, and documented.
▪ Execute the activities.
▪ Monitor the execution of the cutover activities.
If you execute a mock cutover, it’s crucial to validate the duration time
of the different tasks and their sequences so that you can achieve a re-
alistic plan for the actual cutover. This validation also helps you identify
the need for mitigation plans and whether you need to run additional
mock cutovers.
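One simple way to perform that validation is to compare the durations measured during the mock cutover against the plan and the agreed downtime window. The tasks, timings, and window below are hypothetical examples:

```python
# Compare mock-cutover timings against the cutover plan to confirm the
# sequence fits the agreed go-live window. All figures are hypothetical.
planned = {          # minutes per task, from the cutover plan
    "Back up legacy data": 30,
    "Migrate master data": 120,
    "Load opening balances": 60,
    "Validate migrated data": 45,
}
measured = {         # minutes observed during the mock cutover
    "Back up legacy data": 35,
    "Migrate master data": 150,
    "Load opening balances": 55,
    "Validate migrated data": 45,
}
window_minutes = 6 * 60   # agreed downtime window for the real cutover

overruns = {task: measured[task] - planned[task]
            for task in planned if measured[task] > planned[task]}
fits = sum(measured.values()) <= window_minutes
print(overruns, fits)
```

Tasks that overran in the rehearsal are the ones needing a mitigation plan or another mock run; if the total no longer fits the window, the plan (or the window) must change before the real cutover.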
Product-specific guidance
Operations
Operations must follow the go-live readiness review process to go live.
This review process acts as the quality gate for releasing the produc-
tion environment, which will be deployed only if the go-live readiness
review has been completed.
Fig. 18-5 shows the sequence of milestones leading to go live:
▪ Configurations complete  Configured manually or with data
entities/packages.
▪ Move configurations to production complete  Back up and restore
the database in sandbox and production.
▪ Master data in production complete  Additional data, like master
data, is added on top of the restored configurations manually, with
data packages, or through integrations.
▪ Opening balances in production complete  Load opening balances
with data entities to reflect a real operation scenario or as a final load.
▪ Mock go live complete  Simulate real-life operations for a small
period to confirm solution stability in production.
▪ Cutover complete  After running the mock go live, restore the
production database to any previous reusable milestone.
▪ Live  From this point, no more database movements are allowed,
just data increments using data entities or manual entry.
The first steps take place in a configurations environment; the later
steps run in the production environment.
Several tools are available to aid with building testing automation. One such tool is EasyRepro, an open-source library intended to facilitate automated UI testing.
Monitor API limits because after go live, there might be integrations or
bulk operations that cause peaks in the usage, resulting in API limit errors.
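Throttling can also be handled defensively in integration code. The following Python sketch is illustrative only (the `send` callable and its `(status, headers, body)` tuple are assumptions, not a product API); it shows the common pattern of honoring an HTTP 429 response's Retry-After header, with exponential backoff as a fallback:

```python
import time

def call_with_retry(send, max_retries=5, sleep=time.sleep):
    """Invoke `send()` -- any callable returning (status, headers, body) --
    and back off whenever the service signals throttling with HTTP 429.
    Honors the Retry-After header when present; otherwise falls back to
    exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries):
        status, headers, body = send()
        if status != 429:
            return status, headers, body
        delay = float(headers.get("Retry-After", 2 ** attempt))
        sleep(delay)
    raise RuntimeError(f"still throttled after {max_retries} attempts")
```

Wrapping bulk operations and integration calls in a helper like this keeps post-go-live usage peaks from surfacing as hard failures.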
Data migration
You can use several tools—out of the box, ISVs, and custom built—to
migrate data. It’s important to follow best practices and include them
as part of your migration strategy.
Various factors can impact data migration activities, for example, the
service protection API limits that ensure consistency, availability, and
performance for everyone. Keep these in mind when estimating the
throughput and performing testing and the final data migration activities.
Request limits and allocations can also have an impact and should be
taken into consideration.
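As a back-of-the-envelope aid for that throughput estimate, the sketch below (illustrative only; the batch size and request budget are assumptions you should measure in your own environment) converts a record count and a throttling budget into an approximate duration:

```python
def migration_hours(record_count, records_per_request, requests_per_minute):
    """Rough duration estimate for a data migration run.

    records_per_request: how many records fit in one batched request.
    requests_per_minute: the sustained request rate your service
    protection limits allow (validate this by testing).
    """
    requests_needed = -(-record_count // records_per_request)  # ceiling division
    return requests_needed / requests_per_minute / 60
```

For example, one million records at 100 records per request and 300 requests per minute works out to roughly half an hour; rerun the estimate with measured numbers after each test migration.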
Several tools help you prepare for go live, including the following:
▪ Power Apps checker helps validate model-driven apps as well as
canvas apps and should be used throughout the implementation.
▪ Microsoft Dataverse analytics enables access to metrics that help
you monitor the solution, especially during the operations phase.
Continuous updates
In the words of Mo Osborne, Corporate Vice President and Chief
Operating Officer of the Business Applications Group at Microsoft, “To
enable businesses everywhere to accelerate their digital transformation,
we are continuously enhancing Dynamics 365 with new capabilities.”
Service updates are delivered in two major releases per year, offering
new capabilities and functionalities. These updates are backward com-
patible so that apps and customizations continue to work post-update.
Conclusion
As mentioned at the beginning of this chapter, go live is a critical
milestone during a deployment—every plan and test is validated at
this stage.
This chapter isn’t intended to explain how go-live readiness and the
cutover plan should be built; other resources cover those topics. Instead, it summarizes the key activities for go-live readiness and their purpose, along with the important takeaways, best practices, tips and tricks, and common pitfalls during this phase that can affect how the new solution is received and adopted by end users.
Planning and preparing are key to avoiding delays that can impact
the deployment execution. All resources must be available in time for
activities to be executed and completed. Environments must be creat-
ed, configured, and operational on time.
Data migration can be complex. Realistic goals and good throughput are important to ensure that there are no delays that might impact the go-live date and that all tasks are executed according to the data migration plan. Ensure that the recommended practices to improve throughput are in place. Validate that throttling limits aren’t being reached.

There should be coordination between the implementation team and external dependencies to make sure that expectations are aligned and
requirements met. Any external solutions must work as expected with
the current version of the solution and platform and be aligned to
the roadmap.
Support during cutover, go live, and post–go live is essential for all
activities to run in the expected time. It’s crucial to have resources
available to mitigate any issues so that tasks can complete on time
or with as little delay as possible. A poor support strategy will undermine even a successful rollout over time. It’s
important to create a list of responsibilities for first, second, and third
lines of support as well as an escalation path and to transition knowl-
edge from the implementation team to the support team. Ensure that
there’s coverage outside traditional working hours and on weekends
for critical situations.
References
GitHub - Microsoft/EasyRepro: Automated UI testing API for Dynamics 365
Dynamics 365 release schedule and early access
Use solution checker to validate your model-driven apps in Power Apps
Prepare for go live
Create guided help for your Unified Interface app
One Version service updates FAQ
Dynamics 365 Adoption guide
Monitoring and diagnostics tools in Lifecycle Services (LCS)
Accelerate Dynamics 365 Adoption checklist
Evaluate go-live readiness for Dynamics 365 Customer Engagement apps
At the end of October, the team was excited and a little nervous
about the approaching go live. The impact of changing from a system
in place for many years would be huge. The team was used to their
manual and inefficient processes but were ready to move to a system
that would automate and improve processes and increase revenue
growth and profitability along with scalability. Change management
was critical for success.
Concerns came up over risks and issues identified during the as-
sessment. For instance, they realized that the mail service and other
services were in another tenant, so they needed to perform a tenant
move to enable single sign-on for their users. In addition, Microsoft
was planning an automated version release by their go-live date, so
they needed to update their environments to the latest version. This
would necessitate conducting another smoke test and verifying that
the ISV solutions worked correctly with the new version. They missed dates because
they were focused on UAT and making fixes and addressing new re-
quirements identified during the testing phase.
Would they need to change the go-live date that was so close to the
holiday season?
What could they have done better? They couldn’t have changed the
date because they needed to be ready for the holidays, but they could
have planned the go-live date earlier so that they had more time for
the ramp-up and to address any delays or issues. They could also have
had an earlier assessment review, with UAT almost complete. The timelines were tight, and they had the choice to go live with what they had, which was risky because that might mean stopping operations during the holiday season.
The team learned that it’s important to start the readiness review on
time and to schedule sufficient time and resources. Additionally, it’s
crucial to have a solid support plan for production. The importance
of the Prepare phase also shouldn’t be underestimated—plan with
enough time to mitigate any issues and risks.
Introduction
At its core, the goal of a training strategy—and
training—is to ensure that all necessary users of
your system are educated on the new application so
that their knowledge of how to complete their work
results in successful user adoption following go live.
In this chapter, we define a training strategy and determine at a high level what components you need to embark on a methodical approach to a successful training execution for your Dynamics 365 implementation. Each of these areas is covered in detail in this chapter.

Training is not the only factor in meaningful user education and adoption; empowering users to perform their necessary tasks in the system correctly and efficiently should be the ultimate “why” in the development of a training strategy. As part of user adoption, organizations should strive to train in a way that gives users confidence in the application and inspires a sense of delight when using the system.
For a successful training strategy, consider these main areas:
• Training objectives
• Training plan
• Scope
• Audience
• Training schedule
• Training material
• Delivery approach
• Assumptions, dependencies, and risks
• Training as an ongoing process
• Training best practices

In this chapter, we cover several high-level objectives, as well as examples of more organization-specific objectives, that should be included in your organization’s training strategy. A proper training strategy should center around the creation and execution of a comprehensive training plan. Furthermore, the training plan should align to the broader training strategy of your Microsoft Dynamics 365 implementation. We discuss how to create an appropriate scope for your organization’s training as part of this plan, as well as how to confirm the different groups of users that need to be trained.
By following the guidelines laid out in previous chapters of this book, your
organization should be well on its way to rolling out a successful Dynamics
365 application that fulfills the needs of your business and users. Successfully
training your users on using modern technology, as well as understanding
business processes that they will need to incorporate to do their jobs safely
and efficiently, is integral to any organization’s training strategy.
Training objectives
One of the first things that your organization should consider when
beginning to develop a strategy surrounding training users, and a
plan behind it, is the objectives. Defining the objectives of a successful
training strategy is key—and it can help shape the crafting of a training
plan as well as lead to a more successful rollout of training itself.
At a high level, every Dynamics 365 application training strategy
should include some version of the following objectives. Many change
management objectives can also be addressed by ensuring that your
training strategy explicitly addresses these training-focused objectives:
What not to do
As a counterpoint to the recommended strategies, let’s spend a little
time discussing the training objectives that an unprepared organization
might create.
In this example, let’s say the organization creates the following objective:
“Objective: Dynamics 365 Field Service users should be trained on the
Ensuring that your users are comfortable with the most challenging
obstacles to user adoption is important to achieving training success.
While your training objectives do not need to specifically reference tasks or challenging business processes directly (as these would be too specific and thus not useful), they should reflect knowledge of areas of your application that may require additional attention from a learning perspective. The mobile objective in the previous paragraph is a good example of this strategy because we recognize that Dynamics 365 mobile apps can represent a significant change in business processes for users and, therefore, could require additional attention from a training and user adoption perspective. Thus, a key objective should be that all users receive proper training on this specific job function.

It is significantly easier to judge the success of training in your organization if you can track against objectives that have been well-written.
Creating a training plan
Proper training is critical to user adoption. Organizations must develop
a training plan at the start of the project and engage resources from
the beginning.
Fig. 19-1 Training plan elements:
▪ Training objectives
▪ Scope
▪ Audience
▪ Training schedule and resource availability (project planning)
▪ Delivery approach
▪ Validation of training success/objectives
▪ Assumptions/dependencies
▪ Risks
▪ Training environment management
▪ Training materials
▪ Training as an ongoing process
▪ Training resources
Scope
A crucial element of ensuring that your organization’s project has a
proper training strategy is defining an accurate scope for the training.
Here are different areas that cover the breadth of scope that you will
need to build into your training plan:
There could be other areas of your application that should fall into
the scope of your training. If there is a task or process that your users
need to accomplish while working directly or tangentially with your
Dynamics 365 application, it should probably fall into the scope of your
training, as you should assume there are users who need to be trained
on that topic. An example of this would be training project team mem-
bers during your implementation who will need to work inside the
application from a development or configuration perspective.
The next step in setting the scope for your training is to categorize
and prioritize the list of training areas that you have created. As an
example, some business processes in your new application might
be different from the “as is” processes in your organization’s current
application. This is important for a few reasons:
▪ System training and business process training are two separate
efforts, even though training might address both types.
▪ Training surrounding changed business processes should be given
special attention in training sessions and in training material.
Also, you should prioritize, or at least highlight, any training related
to critical nonfunctional requirements that have impacts on legal
regulations or company policies. For example, if your organization is
legally required to fill out fields on your sales order form in a particular
manner, this section of training should be emphasized so that users are
aware of its importance.
Audience
Now that the objectives for your organization’s training strategy have
been defined, along with the scope of content that training should
contain, we can define the next piece of the training puzzle: who to
train, and what type or types of training they should receive.
Fig. 19-2 Example: Field technicians are moving from a pen-and-paper-based work order management system to a mobile application → Using the new mobile application to complete a work order → Because of the impact this change has, multiple live training sessions will be held for technicians to ensure proper education.
The following section identifies the core groups of people who should
be trained on your Dynamics 365 application. Note that each group of
users can be broken into subgroups. Breaking down these users into
personas is important to successfully executing training; if personas are
incomplete or incorrect, your organization might not complete prop-
er training on the correct business processes or, worse, it might not
complete training at all for specific user groups.
As an example, consider a salesperson and a sales team leader (see Figure 19-3). Even if your organization has defined that their required access in your application is similar enough to warrant identical security roles, their training needs can still differ. Sales team leaders might work on similar tasks as salespeople, and they might require some or all of the same training, but they will also need additional types of training, such as reviewing and approving records, and creating and viewing charts and dashboards.
Trainers
Trainers are a special category of users that require training. They
should be knowledgeable in all areas of the system and are respon-
sible for developing training material and training other users. “Train
the trainer” is a common approach to onboarding this type of user
to your application. Depending on the size of your organization, you
might have several trainers. Since the goal of your trainers is to create
material and conduct training for your other users, it is important that
at the end of “train the trainer” sessions, your trainers are aware of and
able to perform their job function—training other users—effectively.
The goal of training these users is different from that of your end users;
trainers should be part of the training strategy process and will be
advocates of the new application.
Super users
Super users are an elite group of (usually) end users who act as cham-
pions of the application. This group is key to driving excitement and
adoption of the new system. These users are often involved in early
feedback cycles with the project team. As subject matter experts (SMEs),
Training scope
Once you have defined the distinct groups (and subgroups) that
should receive training, you should work on matching the different
training areas defined in the Scope section of this chapter with the
groups of users defined earlier. Each group of users will require a spe-
cific set of training (based on training areas). We recommend creating
a matrix in your training plan—with training role against training
subject area—that you can refer to when creating training materials,
planning training delivery, etc.
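Such a matrix often lives in a spreadsheet, but even a tiny script makes it easy to query in both directions. The roles and topic names below are hypothetical placeholders, not taken from this guide:

```python
# Hypothetical role-to-topic training matrix; substitute the personas and
# scope areas defined in your own training plan.
training_matrix = {
    "Salesperson":       {"Lead management", "Opportunity management"},
    "Sales team leader": {"Lead management", "Opportunity management",
                          "Approving records", "Charts and dashboards"},
    "Administrator":     {"Dynamics 365 administration (advanced)"},
}

def topics_for(role):
    """Training areas a given role must attend."""
    return sorted(training_matrix.get(role, set()))

def roles_needing(topic):
    """Every role whose training scope includes a given topic."""
    return sorted(role for role, topics in training_matrix.items()
                  if topic in topics)
```

When planning a session, a lookup like roles_needing("Lead management") immediately tells you which groups to invite, while topics_for keeps each persona’s curriculum comprehensive but not superfluous.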
When determining which personas need which types of training, refer again to your user stories as a baseline. You should also consider which groups need training in non-functional areas and administrative areas. Note that certain groups of users might require different levels of training on identical subjects. For example, trainers might require only basic training on Dynamics 365 administration (to assist other users during training and to implement training tools), while administrators and support desk personnel might require advanced training on the same topic. You should structure your assignment of training topics to users in a way that is comprehensive but not superfluous.
Training schedule
Successful integration of a training strategy into the main project
effort requires coordination with project management resources. Your
organization’s training strategy does not exist independent of other
ongoing efforts in the project, and it is important that key milestones,
resource requirements, dates, and tasks are acknowledged and ac-
counted for as part of your training plan and are incorporated as tasks
or dependencies in your overall project plan.
▪ Trainees
▫ Who is necessary to take part in training activities, and when
will these activities happen?
▫ Are there any resource dependencies (e.g., Resource A is re-
quired before Resource B)?
▫ How will training, and attendees, be staggered so that there is
no gap in coverage for day-to-day business functions?
▪ Training plan
▫ When will the training plan be completed?
▫ Who will review it, and what will be the plan for updating
it if necessary?
▫ If there are subsequent trainings, how will feedback be evaluated, and what is the cycle for processing this feedback?

As you can see, many questions related to a training strategy have an impact on, or are reliant upon, activities that occur separate from traditional training activities. In Figure 19-4, we examine one question from this list (Question: Are any project-related activities dependent on training?) to underscore the importance of asking and getting answers for these questions as part of your project planning.
Videos Trainers can create videos that explain key training areas in de-
tail and walk through the application, providing guidance on how users
can navigate the system and complete their job tasks. Videos can be
more helpful than documents for many users, since they “show” rather
than “tell” certain features of the application. Videos can also be recorded in small chunks that represent discrete processes in an application, as opposed to creating one longer video that walks a user through an end-to-end flow. This benefits trainers from a maintenance perspective; instead of having to rerecord an entire video when changes are made to your application, a trainer can rerecord only the short videos that cover the affected processes.
Guided help Dynamics 365 includes the ability to create custom help
panes and guided tasks, out of the box, to assist your users in walking
through the application and completing basic tasks in the system.
Guided help is easy to set up and implement and does not require an
additional solution on top of your application. Additionally, Dynamics
365 applications can install the Guides application, which allows for
visual or holographic displays to show step-by-step instructions on
tasks that your users need to perform.
Instead of maintaining multiple reference guides or a single complicated guide (in the case of written training material), a trainer can create one set of scripts and apply them to different groups of users.
Delivery approach
As a subset of your training plan, your organization should create a training delivery document that contains specifics about the actual delivery of training.

We’ll now discuss the topics that your training delivery document should cover.
Fig. 19-5 Training delivery approaches:

Live training (in person or virtual)
▪ When to use: The content is critical for business function and you want strict control over delivery, or the content contains non-functional requirements or business processes that require specific devices.
▪ Advantages: Better interaction between trainer and participants, as well as between participants; immediate feedback; collaboration in business processes is significantly easier (i.e., front office and back office need to work together to complete an order).
▪ Disadvantages: Scheduling challenges. In-person variants of this type of training require a physical room with computers, since Dynamics 365 is a web-based application, and are limited to the number of people per room, per session.

Interactive web-based training
▪ When to use: Content is moderate in difficulty, or business processes are best learned through repetition and ongoing training, since users’ access to the application is necessary.
▪ Advantages: Web-based content can be built once and consumed by an unlimited number of users; people can train on their own, with no scheduling requirements and no physical location needed.
▪ Disadvantages: Trainings are less effective than in-person, since there is less interaction and the end user faces more distractions; trainings are not usually real-time, meaning any questions or challenges that users face during training might not be answered quickly; web-based training, when built with digital adoption tools, can be costly and also require technical maintenance.

Self-paced training
▪ When to use: Content can be easily explained in a user manual or video series, or content is best consumed as written reference material that can be used as needed.
▪ Advantages: Written and video content can be updated quickly as future releases and updates occur on your Dynamics 365 application; training content is easiest to create.
▪ Disadvantages: This is the least interactive form of training, meaning users who rely on interaction either with a trainer or with a web-based application might struggle to learn when having to read/watch content.
A sample training schedule for April:
▪ Trainers work to develop/translate material into English, French, and Spanish; training material (including translations) is validated.
▪ End-user training #1, conducted in the US, France, and Spain, followed by environment cleanup.
▪ End-user training #2, conducted in the US, France, and Spain, followed by environment cleanup.
▪ End-user training for field technicians, conducted in France.
You can measure the effectiveness of your training in several ways. The first is by assessing your users’ learning at the end of, or shortly following, training—in particular, prioritizing
the processes that they will use most frequently in their jobs. It’s
much more important that sales users understand how to create and
update an account record in Dynamics 365 without assistance if it’s
a process they will need to do every day. Such prioritization should
be emphasized during the creation of any assessment method you
choose. Earlier in the chapter, we discussed how creating solid training
objectives would help you assess the effectiveness of your training. We
can now take those training objectives and use them to form the basis
for how we will assess our users, so long as the training objectives we
created contain the most critical topics that we are training users on.
It is useful to know if certain business processes take your users 10 minutes or 2 minutes to complete. This could be used to measure your organization’s readiness for the new application. If certain business metrics must be met regardless of the application used, make sure these metrics are met within the Dynamics 365 application. As an example, if your call center staff is expected to process at least six calls an hour, it is important to ensure that your staff has been trained to complete a single end-to-end call process in your application within 10 minutes.
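That translation from a business throughput target into a per-process time budget is simple arithmetic, sketched here purely for illustration:

```python
def max_minutes_per_process(target_per_hour):
    """Longest a single end-to-end process may take (in minutes)
    while still meeting a per-hour throughput target."""
    if target_per_hour <= 0:
        raise ValueError("throughput target must be positive")
    return 60 / target_per_hour
```

Six calls an hour leaves a 10-minute budget per call, which is the number a timed training assessment should be measured against.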
Similarly, after go live, you can use help desk statistics to determine the longer-term effectiveness of training. If users are submitting help desk tickets, or requesting assistance, on specific areas of your application, it’s a good idea to review training or training documents related to those features to see if they should be updated. Again, if necessary, additional user training might be needed.

Question: What if, during a training session, a critical bug is discovered that blocks training from going any further?
Recommendation:
• Ensure that any issues found during training are properly documented (with screenshots, if necessary) and sent for review or triage.
• If restoring from a Dynamics 365 backup, ensure that a backup for the correct date is available. (Work with your partner or IT staff, as well as Microsoft if necessary, to ensure this is not a blocker.)
• If restoring from source control as part of your ALM process, ensure all code and data are ready for an environment refresh.
• Refresh your environment using your chosen method.
• Validate user access and data via a smoke test, prior to the start of the next training.

Your training plan and project plan should also include dependencies, which can be represented in a process workflow. They effectively show that certain tasks or activities must occur before others as part of your training strategy. Figure 19-8 depicts a sample process workflow.

Training as an ongoing process

As mentioned in the introduction of this chapter, your organization’s training journey begins long before the execution of user training, and its ending extends long after those users complete their first tasks in your production application. Training is an ongoing and constantly evolving process that is reflective of changes in your organization and application, as well as changes to the Microsoft Dynamics 365 ecosystem. While developing a training strategy, it’s important to consider the implications of both types of change.
Internal change
Fig. 19-8 Sample process workflow (create training documentation on lead/opportunity management): Create draft of document → Review 1 of document by SME → Update document based on feedback from SME → Review 2 of document by BPO → Finalize document based on feedback from BPO.

Internal change, for the purposes of this section, refers to change that, at a transactional level, is unique to the organization experiencing it. Each of these types of change could require training material to be updated and training subsequently conducted for the same or different groups of users.
to do this, these users need to be trained. Note that it’s unlikely that an organization would conduct multiple lengthy in-person training sessions for a single user’s onboarding; it would instead rely more on self-paced or web-based training as a means of education. It’s important that these new users are provided support during and after onboarding: not only are they initially behind more experienced users who have been using the application for a longer period of time, but they might receive fewer training resources than users who were present during the application’s initial rollout.
Your organization’s training material for users who joined after go live
will be similar to that used by your end users before go live, but again,
be sure that these users are supported and given the assistance they
require to train on the application.
Application change
Many Dynamics 365 projects have adopted a phased rollout approach,
in which features of each application are released to production over
time, as opposed to a “big bang” approach, in which a single deployment to production prior to go live delivers all included functionality. While
the rest of this chapter covers the development and distribution of
training regarding the first release of application functionality, future
phases of the project must include updates to training material and
new training sessions.
Across many such projects, common themes have emerged. We share the most
important recommendations here as best practices—things to keep in
mind during your organization’s training process. These best practices
are by no means comprehensive, nor are they set in stone, but they are
meant to guide a project team to a more successful training outcome.
Earlier is better
Starting early (with a training plan) is important. Dynamics 365 projects
are often run on tight timelines with little to no leeway. It’s crucial that
a training plan, complete with staffing and resource availability, be
completed at the start of a project, to ensure no unexpected resource
requirements or shortages appear midway through implementation.
Give yourself sufficient time to prepare a training plan, develop con-
tent, and deliver training. Content development should be completed
well before the start of training delivery. Additionally, project roles and responsibilities in cloud solution deployments are rarely separate; often a single IT resource will wear multiple hats, and one of these hats might be to create a training plan, review training material, or maybe even conduct training. Making sure these individuals are aware at the outset of their contributions to the training plan will help them balance their workloads and complete certain activities.

Fig. 19-9 Training needs for different types of software updates:
▪ Performance improvements “We have made improvements on the schedule board that should result in a 40% increase in performance.” Even though this update is relevant and beneficial to your users, it would not require an update to training materials or necessitate additional training, since there is no underlying change in how your users would conduct their work.
▪ Interface updates “We have made updates to the layout of the schedule board that should make navigation and creation of bookings easier for your dispatchers.”
Keep an eye on release notes to leverage new features. Work with your implementation partner and your internal teams on your heatmap for Dynamics 365 capabilities. Have a regular review of future projects, incorporating new features delivered. Join the D365UG User Group for Dynamics 365 to learn about member-driven education, networking, and events. Create a Yammer group or Microsoft Teams channel to continue conversations on best practices and learnings.

More is (usually) better

In Dynamics 365 applications, a
It’s also important that your training material does not contain too
much duplicate content. While it’s certainly critical that both the web-
based and live content cover the same features (especially if users are
not consuming multiple types of content), too much overlap can result
in training fatigue, and overall apathy regarding training. An organi-
zation would certainly want to avoid a situation in which users skipped
valuable content because they considered it repetitive.
but none of them uses it correctly because of a failed training process. Having comprehensive training material is but one step in this process. As stated earlier in this chapter, conducting effective training that leads to excitement and high user adoption with accomplishment of critical KPIs, in addition to the more obvious goals of high training participation and education, is the key that can separate adequate training programs from excellent ones.
Accessibility concerns
Your organization must consider accessibility while creating training
material. All Office 365 products have an accessibility checker that can
help you spot potential issues in your content.
Product-specific guidance
Up to this point in the chapter, our training guidance has applied
to Dynamics 365 Operations as well as Dynamics 365 Customer
Engagement application projects. While both applications live in the
Microsoft Dynamics 365 ecosystem and customers frequently adopt
both systems (often simultaneously), there are differences between the
two, which can mean differences in how each project should train its
users. In this section, we highlight some of these differences and how
to apply them to your training material.
Dynamics 365 Operations
A number of resources provided in Dynamics 365 Finance, Supply Chain
Management, and Commerce can assist with product help and training.
Help on docs.microsoft.com
The Dynamics 365 documentation on Microsoft’s docs.microsoft.com
site is the primary source for product documentation for the previously
listed apps. This site offers the following features:
▪ Access to the most up-to-date content The site gives
Microsoft a faster and more flexible way to create, deliver, and
update product documentation. Therefore, you have easy access
to the latest technical information.
▪ Content written by experts Content on the site is open to con-
tributions by community members inside and outside of Microsoft.
▪ Content is personalized based on language selected If, for
example, a user is set up for the German language, then any Help
content they access will be provided in German.
▪ Help is now on GitHub Customers can copy and create personalized
Help content and link it to their application. Customers can create
a personalized and contextualized Help resource for users of their
custom business processes.

You can find content on docs.microsoft.com by using any search
engine. For the best results, use a site search, such as
site:docs.microsoft.com Dynamics 365 "search term".
In-product Help
In the Dynamics 365 client, new users can enter the Help system to
read articles that are pulled from the Docs site's Dynamics 365 area
and task guides from the business process modeler (BPM) in Lifecycle
Services (LCS). The help is contextualized to the form that the user
is in. For example, if a user is in a sales orders screen and wants to
know how to create sales orders, the Help system will show the Docs
articles and task guides related to sales orders (see Figure 19-10).

Consider the number of concurrent users using the training environment
as you choose the right tier for the environment. Do not automatically
select the default tier 2 environment if the volume of expected users
exceeds the recommended volume for the tier. For more information
about different environment tiers and the volume of users each
supports, read this documentation on selecting the right environment
for your organization.

Task guides
Other useful help and training features are called task recorder and
task guide. Task recorder allows you to record a user's activity on
the UI. You can capture all the actions as well as any UI fields and
controls that were used. This recording can then be used in a task
guide.
Numerous ISVs also offer similar gamification solutions that can be
attached to Dynamics 365 Customer Engagement. Pick the one that
addresses the needs and goals of your organization best.

Custom help pages
Dynamics 365 Customer Engagement lets you create guided help pages
that provide your users with in-product assistance. This assistance
can be customized to your application and user base and can include
text, links, images, and video links. For more information about
custom help pages, read "Custom help panes and learning path."

References
Training plan template
Training plan charter template
TechTalk Series: Training Plans and Content for your Finance &
Operations Project (Microsoft Dynamics blog)
Dynamics 365 training offerings (Microsoft Docs)
In-product Help – Finance & Operations (Microsoft Docs)
Checklist
Have a process to update the training plan incrementally as necessary
to reflect scope, changes, risks, and dependencies to ensure adoption
and engagement.

Consider what to include regarding system process and business
process, so that the training provides the best possible foundation to
end users.

Define a process for feedback and continuous improvements to training
materials.

Identify a process to provide for continuous training in alignment
with updates and changes to the solution as well as changes in roles
and responsibilities.
In the training plan, the company included a mix of high-level and de-
tailed training objectives. The high-level objectives included these goals:
▪ To not allow access to the live system without a solid
training program
▪ To prepare a core team of trainers to help support the initiative
▪ To continue receiving feedback and improving the
training approach
▪ To develop the training materials early and schedule classes early
▪ To prepare all application users to efficiently use the applica-
tion (Dynamics 365 Finance and Dynamics 365 Supply Chain
Management or Dynamics 365 Customer Engagement), as well as ad-
dress any key business process changes required in their job function
The team understood that for training to be successful and for mean-
ingful user adoption to be achieved, they needed to begin planning
early and set up a strong team to support it.
Formal feedback was recorded after the trainings and Microsoft Teams
channels were created for employees to continue providing feedback
to the team. Users were encouraged to share knowledge, ask questions,
and suggest improvements to the training materials. The team was
also able to collect feedback and create metrics using help desk tickets,
which helped them identify areas of the application that users found
particularly challenging.
The objective “To not allow access to the live system without a solid
training program” was a difficult objective to meet in the initial days,
but the company learned over time that making sure every user
received adequate training drove a significant reduction in business
process issues for the company.
In the first few months, an evaluation of key KPIs showed that the
organization was on track to meet all the detailed objectives set by
the team.
20
Service the solution
Continuing the business applications journey.
Introduction
Your solution has been deployed to production.
Users have been trained and onboarded. A support
process is in place.
A cloud service works much like an apartment building: the landlord
must provide proper plumbing and electricity in each apartment. They
may add or improve amenities such as an exercise gym or a garden in
the yard. But it is each renter's responsibility to keep their
apartment in good condition. The renter should clean the apartment and
throw out their garbage. They will need to replace the lightbulbs in
the fixtures if they burn out.

Dynamics 365 and the Power Platform have gone through rigorous
testing, so why do customers need to check that it's running smoothly?
It's because of each organization's unique usage patterns and the
extensibility of the platform.
Performance monitoring
Microsoft recommends that Dynamics 365 project teams consider adding
performance testing to the project’s test cycle. Performance testing is an
effective way to gauge the impact that your customizations may have on
the baseline Dynamics 365 solution.
▪ The network traffic can vary throughout the day depending on an
organization's usage patterns
▪ For remote workers, reliance on individual internet connections
could cause different outcomes for each user

Ultimately, the responsiveness experienced by end users is caused by
a mix of multiple factors that aren't limited to the performance of
the Dynamics 365 application itself.

Chapter 17, "A performing solution, beyond infrastructure," covers the
importance of having a performance strategy that includes elements
such as defining performance requirements, establishing baseline
metrics, and ongoing performance testing.
Users and integrations aren't the only cause of storage growth. Logs
from system jobs, indexes created to help performance, and additional
application data added from new modules also contribute to storage
growth. Another scenario that impacts storage allocation is in the
copy and restore operations of an environment. Depending on the type
of copy, the size of the database can be very different. As an
administrator, you need to be mindful of who can create new instances
and what their true needs are to minimize impact on the storage
allocation as these copies are being restored.

Refer to Chapter 10, "Data management," for details on storage
entitlements, segmentation, and impact to allocations when backing up
and restoring instances. For information on storage capacity for
Finance and Supply Chain Management, see the Dynamics 365 Licensing
Guide. Note that Dataverse storage capacity entitlements and usage
changed in 2019.
Administrators should monitor the volume of storage that is currently
used as well as its growth rate. This information will help you budget
for any additional storage needs or look into data archiving and
deletion to free up space. Scheduling and cleaning up data from
time to time will help as well. This is covered in the “Environment
maintenance” section of this chapter.
Administrators can also pull historical telemetry reports to see if any areas
of the application are at risk of hitting these limits. Then you can work with
the implementation team to make appropriate design changes. Or better
yet, tools like Azure Application Insights allow you to set thresholds on
these API calls so that when they’re exceeded, the administrator is notified
and can mitigate the risk of throttling or being shut down.
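To make the idea concrete, here is a minimal sketch (not a Microsoft tool or API) of the kind of threshold check an administrator might run over exported per-user API-call telemetry. The record shape, the limit, and the 80 percent alert level are assumptions for illustration only:

```python
# Hypothetical sketch: flag users approaching an API request limit
# using exported telemetry records. The record shape and the limit
# values are assumptions, not actual Dynamics 365 telemetry schemas.
from collections import Counter

API_CALL_LIMIT = 6000    # assumed per-user limit for the window
ALERT_THRESHOLD = 0.8    # notify the administrator at 80% of the limit

def users_to_alert(telemetry_records):
    """telemetry_records: iterable of dicts like {"user": ..., "calls": ...}."""
    totals = Counter()
    for record in telemetry_records:
        totals[record["user"]] += record["calls"]
    # Return users whose call volume crossed the alert threshold.
    return sorted(
        user for user, calls in totals.items()
        if calls >= API_CALL_LIMIT * ALERT_THRESHOLD
    )

sample = [
    {"user": "alice", "calls": 3000},
    {"user": "alice", "calls": 2100},
    {"user": "bob", "calls": 400},
]
print(users_to_alert(sample))  # alice totals 5100, above the 4800 threshold
```

In practice the notification step would hook into whatever alerting channel the organization already uses, such as an Application Insights alert rule or an email action.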
You can also use this information when estimating license needs. For
example, some licenses may be assigned to users who rarely access
the system; you can reassign these to someone new. Insights provided
by the telemetry are readily available in Dynamics 365. Use this data
to help improve business processes and allocate resources in the right
areas to maximize your investment.
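As an illustration of the license-review idea, the following sketch flags licensed users with no recent sign-in as reassignment candidates. The field names and the 90-day inactivity window are assumptions, not Dynamics 365 APIs:

```python
# Hypothetical sketch: identify licensed users who haven't signed in
# recently, as candidates for license reassignment. Field names and
# the 90-day inactivity window are illustrative assumptions.
from datetime import date, timedelta

def reassignment_candidates(users, today, inactive_days=90):
    """users: iterable of dicts with 'name', 'licensed', and
    'last_sign_in' (a date, or None if the user never signed in)."""
    cutoff = today - timedelta(days=inactive_days)
    return [
        u["name"] for u in users
        if u["licensed"]
        and (u["last_sign_in"] is None or u["last_sign_in"] < cutoff)
    ]

users = [
    {"name": "dispatcher1", "licensed": True, "last_sign_in": date(2021, 6, 1)},
    {"name": "planner1", "licensed": True, "last_sign_in": date(2021, 1, 10)},
    {"name": "temp1", "licensed": True, "last_sign_in": None},
    {"name": "guest1", "licensed": False, "last_sign_in": None},
]
print(reassignment_candidates(users, today=date(2021, 6, 15)))
```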
You can use tools such as Azure Application Insights for Dynamics 365
and other applications and services that are part of the overall IT
solution for your organization. Application Insights lets you collect
and query telemetry both in and outside of Dynamics 365. For example,
if a process is initiated by a user in Dynamics 365 that calls on an
integrated, but external system, Application Insights can still detect
performance and exceptions at different stages of the execution
pipeline. You can see these issues whether they occur at the user
interface level, in Dynamics 365, or the external system. This
empowers the administrators who monitor alerts to react quickly the
moment the exceptions surface. They also have immediate access to
information on the source of the issue.

Monitoring and Diagnostic tools in LCS allow administrators to monitor
logs for issues detected in the system for Finance and Supply Chain
Management. Trace logging in Dataverse provides plugin error
information for Customer Engagement and the Power Platform. You can
also use Microsoft 365 service health to identify service issues, and
administrators can be notified via email or through the mobile app.
Service updates
To enable businesses everywhere to accelerate their digital
transformation, Microsoft is continuously enhancing Dynamics 365 with
new capabilities. We add product enhancements and performance
improvements at a rapid pace, so it's important to make sure Dynamics
365 provides an optimized user and administrator experience. Our
objective is to help you keep your computing environment current in a
consistent, predictable, and seamless manner.
These notifications provide the information you need to start your plan-
ning—dates of release availability, release notes, and the process to opt in
for early access.
Each release wave includes features and functionalities that you can
enable for different types of users:
▪ Users, automatically These features include changes to the user
experience for users and are enabled automatically.
▪ Administrators, makers, or analysts, automatically These
features are meant to be used by administrators, makers, or
business analysts. They’re enabled automatically.
▪ Users by administrators, makers, or analysts These features
must be enabled or configured by the administrators, makers, or
business analysts to be available for their users.
If you choose to opt in to early access updates, you get features that
are typically mandatory changes and are enabled automatically for
users. Each feature in the release notes indicates which category it
falls under.
Refer to the Message center for detailed information on notifications,
email preferences, and recipients for service updates.

Deprecation of features is an important part of a solution's growth.
As business and technology needs evolve, the solution needs to change
to keep up with new requirements for security, compliance, and overall
performance. Organizations must plan and prepare well before the
removal of features to avoid negative impact from the deprecation.
Another important area to cover for early access is to work with the
business sponsors to help them understand the impact to the end users.
As part of the release, some updates may impact user experience, such
as user interface (UI) navigation changes. Even small differences can
have a meaningful impact. Imagine users in a large call center scenario,
in which every additional second on a call with a customer can impact
their service goals. In such a case, business managers want to make sure
that the user group receives proper communication and takes time to
provide training if necessary.
Deployment rings are a technique used to decrease any risk associated with
rollouts for Azure and other Microsoft cloud services managed at global scale.
As Dynamics 365 updates are created each month, they progress
through a series of rings, with each ring providing broader exposure
and usage and validation through system telemetry (Figure 20-2).
The GA update benefits from extensive Microsoft testing as well as
validation through each of the earlier rings.

Refer to the Dynamics 365 for Finance and Operations Cloud Application
Lifecycle for more information.
Update cadence
Customers are required to take a minimum of two service updates per
year, with a maximum of eight service updates per year (Figure 20-3).
Fig. 20-2: Safe deployment practice for Finance and Operations.
Updates progress from the Finance and Supply Chain Management team
(extensive validation, compatibility checker, over 100 customer RVPs)
to a targeted release (preview early access program, preview build, no
production use), and then to first release (select customers, auto
update, production ready).
Fig. 20-3: Eight updates delivered per year across the calendar.

Pausing a service update can apply to the designated user acceptance
testing (UAT) sandbox, the production environment, or both. If the
pause window ends and the customer hasn't self-updated to a supported
service update, Microsoft automatically applies the latest update
based on the configuration selection available in LCS.

System updates follow these guidelines:
▪ Updates are backward-compatible
▪ Updates are cumulative
▪ Customers can configure the update window
▪ Quality updates containing hotfixes are only released for the
current version (N) or previous version (N-1)
▪ System updates contain new features that you can selectively
choose to enable
Release readiness
We strongly recommend that you plan ahead and work the updates into
your internal project release cadence. To do so, take the following
steps:
▪ Step 1: Plan Have a good understanding of the release schedule
and a plan to work this into your application lifecycle management
(ALM) strategy. Because you can pause updates up to three months,
you can set aside plenty of time for testing, impact analysis, and
developing a deployment plan. Use the impact analysis report from
LCS to identify areas of change that may affect your solution and
help determine the level of effort needed to remediate any impact.
▪ Step 2: Test We recommend using a non-production
environment such as your UAT instance to opt in early and apply the
release. You can configure service updates through LCS and specify
how and when you receive service updates from Microsoft to your
environments. As part of the configuration, define the update
environment (production) and an alternate sandbox (UAT). Use the
Regression Suite Automation Tool (RSAT) to perform regression
testing to identify any issues. Work any fixes into your ALM cycle
and deployment plans.
▪ Step 3: Deploy After you define the environments and schedule for
the update, the release is applied to your environments according to
that configuration.
Fig. 20-4: Safe deployment practice for Customer Engagement. Updates
flow from the Customer Engagement team (extensive integration testing
and validation, solution checker) through six stations of increasing
breadth: Station 1 is the first release (production quality, early
view of weekly release, select customers), followed by Station 2 (JPN,
SAM, CAN, IND), Station 3 (APJ, GBR, OCE), Station 4 (EUR), and the
remaining stations for standard release and servicing.
Update cadence
Customers receive two major updates per year, in the April and October
GA releases (Figure 20-5). You can get early access and opt in months
before the GA dates. These updates apply to both Power Platform and
Dynamics 365 apps. We encourage you to opt in early to test and apply
the release. The releases are production-ready and fully supported
even when applying prior to the GA date. Activation for major updates
is automatic through safe deployment processes for the region where
the Dynamics 365 instance resides, on the deployment dates specified
for the region.

Dynamics 365 apps have a different cadence from the major releases.
For example, Dynamics 365 Marketing and Dynamics 365 Portals have
monthly updates. Apps from ISVs from AppSource, Microsoft's app
marketplace, may also have a different cadence. You should consult
with the ISVs for any third-party apps you're using.
Release readiness
We strongly recommend that organizations work updates into their
internal project release cadence. To do so, take the following steps:
▪ Step 1: Opt in for early access Before you apply the changes to
existing production or non-production environments (which may disrupt
users and developers), we recommend you create a new instance. You
can't revert back to the previous version, so all testing should be
done on a new instance. Take a copy of the test environment that has
your latest solution and data and create a new instance from it.
Enable early access to apply the new release capabilities. After you
opt in, some features are turned on by default; others may require an
administrator to explicitly configure them. The details are documented
in the release notes.

Refer to the Dataverse storage capacity guidance to understand the
impact on your storage allocation for your tenant when creating new
instances from a backup.
Fig. 20-5: Release cadence for Customer Engagement apps. Each release
wave provides access to the latest features (often weekly) with admin
opt-in, followed by prep time for the latest release (admin opt-in),
and then the latest release automatically applied; features F1-F6
arrive in one wave and F7-F12 in the next. Continuous updates can
include feature code that has no UI impact (often weekly), and admins
can opt in to experience all UI features coming in the next scheduled
update (preview).
Environment maintenance
Protecting your solution and providing continuous availability of
service is your primary goal as the system administrator. In a cloud
environment, these maintenance jobs are automated, but it's critical
for an organization to have a strategy so that these routine jobs are
appropriately configured and scheduled. In some cases, you may need to
perform these tasks manually but still in alignment with your overall
planning and strategy.
Your organization’s data is likely one of the most important assets you’re
responsible for safeguarding as an administrator. The ability to build
apps and automation to use that data is a large part of your company’s
success. You can use Power Apps and Power Automate for rapid build
and rollout of these high-value apps so that users can measure and act
on the data in real time.
Data management
Data is central to all applications. It drives business decisions through
analytics and artificial intelligence. It also reveals crucial information
about the overall health of the system and what administrators need
to do for maintenance. The theme of this chapter is to be proactive
in planning and strategizing around the upkeep of your system. Data
maintenance is no different. In this section, we discuss the service
aspects of data management. To explore the broader topic, refer to
Chapter 10, “Data management.”
The rate of growth can fluctuate depending on the number of users or
even during certain times of the year if your business practice has
special circumstances that may impact record creation. Monitoring
storage growth and using historical trends will help estimate data
growth. This information can help you determine how often the data
archiving and removal process should take place.

You can find details on storage allocation and purchasing additional
storage for Finance and Supply Chain Management in the Dynamics 365
Licensing Guide. Read about the storage capacity model for Dataverse
and how to check storage growth.
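The growth estimate described above can be as simple as a linear projection over recent history. The following sketch assumes straight-line growth and uses made-up figures; it is an illustration of the estimating idea, not a sizing tool:

```python
# Hypothetical sketch: project storage growth from monthly usage
# history to estimate when capacity runs out. The figures and the
# linear-growth assumption are illustrative only.
def months_until_full(monthly_usage_gb, capacity_gb):
    """monthly_usage_gb: consecutive end-of-month storage totals in GB."""
    if len(monthly_usage_gb) < 2:
        raise ValueError("need at least two data points to estimate growth")
    # Average month-over-month growth across the observed history.
    deltas = [b - a for a, b in zip(monthly_usage_gb, monthly_usage_gb[1:])]
    avg_growth = sum(deltas) / len(deltas)
    remaining = capacity_gb - monthly_usage_gb[-1]
    if avg_growth <= 0:
        return None  # flat or shrinking usage; no projected exhaustion
    return remaining / avg_growth

history = [210, 218, 225, 234, 241]  # GB at each month's end
print(round(months_until_full(history, capacity_gb=300), 1))
```

A short projected runway would suggest scheduling archiving or purchasing additional storage well before the allocation is reached.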
You may have logs, notifications, and other system records that you can
delete with no impact to the business. You may also have other transactional
data that can be deleted. Because Dynamics 365 applications have heavy
parent-child relationships between records, pay careful attention to how
records are deleted and any impact to related records. Look for triggers
that run extension code or workflows when a record is modified or deleted.
A delete operation could potentially write a new entry in the audit log to
record the transaction. You must account for all these things when planning
for bulk deletion.
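To illustrate the ordering concern, the sketch below (not a Dynamics 365 API; the entity names are invented) removes child record types before their parents so a bulk delete never orphans dependent records:

```python
# Illustrative sketch (not a Dynamics 365 API): order bulk deletions so
# child record types are removed before their parents, reflecting the
# heavy parent-child relationships described in the text.
def deletion_order(parents):
    """parents: dict mapping each entity to its parent entity (or None).
    Returns entities ordered children-first."""
    depth = {}
    def depth_of(entity):
        if entity not in depth:
            parent = parents.get(entity)
            depth[entity] = 0 if parent is None else depth_of(parent) + 1
        return depth[entity]
    # Deeper (more nested) entities are deleted first.
    return sorted(parents, key=depth_of, reverse=True)

relationships = {
    "invoice_line": "invoice",  # lines belong to an invoice
    "invoice": "order",         # invoices belong to an order
    "order": None,              # top-level record
    "order_note": "order",
}
print(deletion_order(relationships))  # invoice_line first, order last
```

The same children-first ordering also helps limit cascading triggers and audit-log churn during the delete run.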
The only way to truly understand these benefits, and how to best apply
them in your solution for the most return on your IT investment, is to
evaluate them against your organization's needs.

Conclusion
In summary, take steps to be aware of your solution performance, be
proactive in taking action, and be prepared with a solid strategy for
maintaining your environments. Keep up on new trends and tools that
can help improve your solution and your organization.
It all starts with visibility into the health of the system through proper
monitoring. Having the right telemetry and managing notifications
from the system as well as Microsoft will help you to prioritize and act
to address maintenance needs.
Case study
Fruit company learns the
importance of servicing
the solution
An agricultural business that grows fruit implemented Dynamics 365
Finance and Dynamics 365 Supply Chain Management as soon as the
cloud version of Dynamics 365 became available. The company has
been a global leader distributing fruit across different regions, and
warehouse operations and transportation are part of the company’s
core business.
Dynamics 365 has been evolving since its initial release, when the
application and the platform were released as separate
components. By the time continuous updates and a single, unified
version became the norm for Dynamics 365, the fruit producer’s
operations in the cloud were mature in taking updates under this
modality. The company was ready to adopt the modernized update
pattern and take advantage of the continuous innovations from
Microsoft. They also wanted to fulfill one of their expected returns
on investment.

But the company noticed that while the ISV kept the solution up to date
with the Dynamics 365 releases, the ISV always provided an update of their
solution using Microsoft’s last update ring—or missed it entirely. Because
of this, the company had to apply Microsoft’s updates late in the release
cycle or, in some cases, not at all.
Then Microsoft notified the fruit company about new functionality for
advanced warehouse operations in an upcoming release of the Supply
Chain Management app. Because of the expansion of the company’s
operations and the complexity of their warehouse management, these
new features were crucial to increasing productivity and saving time
and resources while managing the expansion.
To adopt this new functionality, the company had to test the features
in advance to align their processes. They realized that being late on
updating standard features wasn't workable, and they wanted to stay
current with the standard release cadence.

The challenge came when the ISV wasn't aligned to that timeline.
The ISV’s usual practice of adopting Dynamics 365 releases when they
became generally available—in the last ring—meant the fruit
company frequently was one version behind on Finance and Supply
Chain Management releases.
After conversations with the fruit company, the ISV agreed to align
with Microsoft’s update rings and give the company—and their other
customers—an opportunity to test their entire solution when the
standard release was available.
Introduction
When introducing a new system into an organization,
we need to think through the various areas that will
influence how the project transitions from project
mode to support mode.
In this section, we discuss how to construct a strategy to help you
prepare, define, and operate a support model.
Organizations that spend the necessary time and energy to construct
a strategy that explicitly addresses how to create a fit-for-purpose
support organization for their Dynamics 365 application tend to have
better user satisfaction, better adoption of the system, and therefore
higher-quality outcomes for the business.

Fig. 21-1: The support scope covers enterprise architecture, business
and IT policies, Dynamics 365-specific considerations, business
processes, business continuity, and system update cycles, which feed
into the support models and support operations.

If this is the first Dynamics 365 business application for your
company, you may not have experience in setting up the support
organization, support infrastructure, and support procedures for this
application.
Support models
The scope definition influences the decisions that need to be made to
identify the best support model and the constitution of the resulting
support organization. This helps define the “who” of the support model.
Support operations
Finally, we discuss the distinct support requirements that emerge from
the transition and hypercare project phases.
Enterprise architecture
In most organizations, the Dynamics 365 system is embedded within
the wider enterprise system landscape. The Dynamics 365 application
has connections to multiple systems and to underlying and coexisting
technologies. When considering the strategy for defining the support
model, the focus is often solely on the Dynamics 365 application archi-
tecture. It’s worth accounting for the changes, constraints, procedures,
and influences from the surrounding architecture on the Dynamics 365
application.

During the project, the new Dynamics 365 system will probably be
implemented within a special sandbox (test environment) and not
necessarily be subject to all the influences and rules that the pro-
duction system is subject to. This also applies to the third-party test
systems that are part of the middleware or integrations. For example,
production environments have more limited access to the Dynamics
SQL database, and the process by which custom code is promoted to
production or how the Microsoft system updates are applied isn’t the
same. You should specifically examine the implications of the produc-
tion system environment on support operations and not rely solely on
the experiences of the test environment.
The impact of, and on, the surrounding architecture can be difficult
for the customer project team working on the Dynamics 365 business
application to fully identify. In almost all cases, you need the enterprise
architects from IT and the business to be involved in identifying the
changes. Some changes may also impact the roles of individuals formally
part of a support organization and those in peripheral organizations.
In many cases, the policies apply not only to the creation of the
support model and its scope of responsibility, but also to how it
operates as an organization and how it addresses the lifecycle of a
support request.
You could set up some of these policies within the Dynamics 365 ap-
plication, such as new vendor approval policies, customer credit limits,
purchase order approval thresholds, and travel and expense policies.
The support team may need to help provide information on compli-
ance or enforce compliance as part of their duties.
The Dynamics 365 support team needs to work with these other
enterprise IT and business teams to define the rules and procedures
for managing some of the security topics that may impact Dynamics
365 applications:
▪ Azure Active Directory groups and identities
▪ Single sign-on (SSO)
▪ Multifactor authentication
▪ Mobile device authentication and management
▪ Authentication and management for custom applications working
on Microsoft Dataverse (such as Power Platform apps), which
requires an understanding of Dataverse security concepts
▪ Application access for third parties (such as vendors and customers)
▪ Application access for third-party partner support organizations
(such as technology partners and Microsoft)
▪ Service account and administrator account use and management
▪ Password renewals and security certificate rotations
▪ Secure and encrypted communications within the enterprise and
outside the enterprise (such as those involved with integrations
with internal systems, or with external web services or banking
systems)

Identify any enterprise-level policies that intersect with
application-level requirements.

The Microsoft Trust Center can help your organization consider overall
security and managing compliance in the cloud. Chapter 12, "Security,"
provides a more detailed discussion of security for Dynamics 365
business applications.
Data classification and retention
Consider how the organization’s data classification and retention pol-
icies reflect on and need to be expanded to include the new Dynamics
365 application:
▪ How is the support team expected to enable and enforce
these policies?
▪ What is the impact on the backup, restore, and archive process?
▪ What is the impact on creating and managing database copies?
▪ Do any data classification properties flow between systems, or do
they need to be recreated or audited by the support team?
All of these different areas of business and IT policies shape the nature,
size, and scope of the support organization. Early examination of these
factors will help the team be effective from the start.
Dynamics 365-specific considerations

Typically, you need to apply these tasks and considerations for the
following environments:
▪ Dynamics 365 application support environments, which are recent
copies of the production system
▪ Test environment for testing the next versions of the
application software
▪ Development environments
▪ Any related data migration, training, or integration environments
▪ Test environments for integrated systems
Integrations
In addition to defining the environments required for supported
operations, it's worth delving a little deeper into the support
requirements for integrations. The support effort required to manage
integrations depends on their complexity, criticality, and robustness.

With a cloud service, the effort of infrastructure provision and
maintenance for production systems is reduced, which leaves more time
to focus on managing and improving business process performance.

Define the system maintenance requirements and what is within the
scope of responsibilities for the support teams. Typically, these are
in the following areas:
▪ Servicing the non-production environments, which can include:
▫ Requesting and configuring new Dynamics 365 environments
▫ Requesting and configuring database copies and restores
between environments
▫ Managing any customer-managed, cloud-hosted
environments
▫ Performing specifically requested backups
▪ Managing system operations, which can include:
▫ Assigning users to security roles
▫ Reviewing and running periodic system cleanup jobs
▫ Managing system administration messages and notifications
▫ Batch calendar management
▫ System update calendar
Performance management
Performance management for a Dynamics 365 business application is
a mix of tasks and responsibilities for Microsoft and for the customer. In
this section, we consider the implications for the support model.
The support team needs to have some way to proactively monitor and
respond to any questions from the users on performance. Therefore,
the support team needs to be able to do the following:
▪ Understand the impact of the reported performance issue
on the business
▪ Assign the relevant priority
▪ Diagnose the issue across the potential root causes from
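The triage steps above can be sketched as a simple impact/urgency matrix. The mapping below is hypothetical and for illustration only; actual priority definitions come from your own support agreements:

```python
# Hypothetical impact/urgency matrix for triaging reported performance issues.
# P1 is the highest priority; the combinations shown are assumptions.
PRIORITY = {
    ("high", "high"): "P1",
    ("high", "low"): "P2",
    ("low", "high"): "P2",
    ("low", "low"): "P3",
}

def triage(business_impact: str, urgency: str) -> str:
    """Map reported business impact and urgency to a ticket priority."""
    return PRIORITY[(business_impact, urgency)]

print(triage("high", "high"))  # P1: e.g., order posting is slow for all users
print(triage("low", "low"))    # P3: e.g., one report is slow for one user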
Business processes

As a business application, supporting the users in the day-to-day
processes is a key function of a Dynamics 365 support organization.
The support organization is expected to provide coverage across all
the key business processes, or at a minimum, know where to direct
questions and issues that were agreed as being outside their
support scope.

Based on the definition of the key business processes in scope,
consider the following for each process:
▪ What is the level of expertise needed in the business process?
▪ What are the likely support tasks and queries related to this process?
▪ What is the type of usage, frequency, and volume?
▪ How critical is this business process to the business outcomes?

These questions and more will help shape the operating model for the
support team.
Business continuity

Many organizations need to have a business continuity strategy and
exercise regular checks of business continuity in case of a system-down
disaster. This may be required by external regulations or by internal
policies. In any case, the support organization is probably expected to
play a major role.

Depending on the size and complexity of the system landscape and
the types of disaster scenarios being exercised, this may need a
significant amount of preparation, coordination, and timely
communication between multiple parties.
mitigate to reduce the impact on the business. You may also need to
apply specific setups (such as IP allowlists) to the secondary site.

System upgrade cycles

Creating a calendar for the Microsoft updates helps the support team
plan for the resources and effort associated with the following:
▪ Microsoft Dynamics 365 updates
▪ ISV updates
▪ Updates for custom code
▪ Associated testing, including any regression testing and
release management
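Such a calendar can be as simple as working back from each update's target production date to the date regression testing must begin. A minimal sketch (the 14-day test window and the dates are assumptions for illustration):

```python
from datetime import date, timedelta

TEST_WINDOW = timedelta(days=14)  # assumed length of the regression-test window

def plan_update(name: str, production_date: date) -> dict:
    """Work back from the target production date to a testing start date."""
    return {
        "update": name,
        "start_testing": production_date - TEST_WINDOW,
        "deploy_to_production": production_date,
    }

# Illustrative entries; real dates come from the published release schedules.
calendar = [
    plan_update("Dynamics 365 service update", date(2023, 10, 27)),
    plan_update("ISV connector update", date(2023, 11, 10)),
]
for entry in calendar:
    print(entry["update"], entry["start_testing"], entry["deploy_to_production"])
```

Laying the Microsoft, ISV, and custom-code dates into one timeline like this makes clashes between testing windows visible well before they become resourcing problems.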
Implementation Guide: Success by Design: Transition to support 625
We saw in the previous section that the solution is unlikely to remain
static for long, and the solution functionality can change because of new
and backlog requirements for many different reasons. For details on
Microsoft system updates, refer to Chapter 20, “Service the solution.”
Support models

Defining the support model is ideally started in the Implement phase
so that it can be ready and exercised prior to user acceptance testing
(UAT), can help with the transition during the Prepare phase, and be
fully effective and working by the Operate phase.
Support model options

Thus far, we have discussed topics that help define the scope of
responsibilities and actions for a support team. Now we look more at
the "who" and the "how," starting with an examination of the spectrum
of support models.
First level
Many customers have a concept of identifying and nominating super
users within a business unit. Typically, the super users gained their
deeper knowledge of the system and processes in their previous role as
subject matter experts (SMEs) during the project cycle.
▪ Triage and communicate issues in a way that makes it easier for
the helpdesk or resolving authority to understand the issue, repli-
cate it, and fix it rapidly
▪ Collate best practices, FAQs, and issues, and provide the core team
better data on areas of priority for them to tackle
▪ Provide early warning on issues that have the potential to escalate
and help with better adoption by the business
Fig.: Escalation levels in each support model. Level 1: super users per
business unit or function. Level 3: Dynamics 365 CoE, Dynamics 365
project core team, or partner. Level 5: Microsoft support or ISV support.
For a fully outsourced model, the internal helpdesk registers the issue
and assigns it to the resolving authority (internal or external). This fully
outsourced model is tougher to deliver for highly customized business
systems compared to less customized ones. Determining the correct
resolving authority can be tricky in business systems—even assuming
the super-user roles have eliminated the cause as a business process
or training issue, you may have many different system layers, such as
an internal network, infrastructure software managed internally or by
Microsoft, standard Dynamics 365, custom code, ISV code, or integration
software at either end.
In the fully internal model, the internal helpdesk also routes the issue
to a small internal Dynamics 365 team that triages the issue and helps
determine the best resolving authority. This team is also responsible for
ensuring the issue is driven to resolution, regardless of the number of
parties (internal or external) that may need to be involved. The differ-
ence is that the next resolving authority may be the internal Dynamics
365 Center of Excellence (CoE).
Third level
In a fully outsourced model, the partner is responsible for triage and
resolution. In the mixed model, the most common scenario is for the
internal Dynamics 365 support team to determine if they can resolve
the issue; if not, they work with the Dynamics 365 CoE and the partner
to ensure it’s well understood and correctly prioritized.
In the fully internal and mixed model, the Dynamics 365 CoE includes
functional and technical experts (including developers) who can
resolve issues with custom code. If the issue is seen to be with stan-
dard Dynamics 365 or an ISV, the CoE logs the issue and works with
Microsoft or the ISV to get it resolved.
Fourth level
In the fully outsourced model, the partner manages the process and
the customer is involved as necessary. Most customers tend to have
some parts of the solution written or managed by their internal IT team
(such as integrations), so you still need a definition of how the partner
should work with the customer.
In the fully internal model, the Dynamics 365 CoE takes on the diagno-
sis and resolution if it’s within their control. They only involve external
parties if the issue lies with standard Dynamics 365, platform hosting,
or an ISV.
In the mixed model, the internal Dynamics 365 CoE or core team typically
fixes the simpler issues, but involves a partner for issues that require
deeper Dynamics 365 knowledge or for complex issues.
When the issue is resolved, the method to get the fix back into produc-
tion also requires a clear definition of the expected standards, testing
regimen, and deployment process. Even in a mostly outsourced model,
the partner drives this to a pre-production environment and the inter-
nal team deploys to production. Most customers don’t give partners
admin access to production environments.
Fifth level
Registering the issue with Microsoft support tends to be the common
escalation point, after the determination is made that the most likely
root cause is the standard Dynamics 365 software or service.
Support organization

As noted in the previous section, multiple operating models exist, and
within each model a spectrum of sub-options. Therefore, the specific
organization structure is influenced by the variant of the model that
is chosen. In this section, we explore some common patterns for roles
and structures.
▪ Dynamics 365 application system update management (code
branching, any software development, unit testing new changes)
Responsibilities may also overlap with those for business process support,
with the following distinctions for the expected areas of understanding:
▪ Gaining and maintaining a sufficient understanding of the relative
criticality of the various functions within a business process
▪ Gaining and maintaining a deep understanding of the underlying
technical processes involved in the system functions (especially
integrations, custom code, and critical processes)
support team may also be expected to keep track of new features and
developments (positive and otherwise) that may impact the system.
Often, the planning for the break/fix part of a support team’s orga-
nization is well thought through, but the planning for other areas of
responsibility may be neglected. For example, many support team
members may also need to be involved in helping with the lifecycle of
the next system update. This may include the following duties:
▪ Helping with assessing the impact of proposed changes on the
current system
▪ Providing insights into areas that need reinforcement based on
statistical analysis of the areas with the most issues
▪ Testing the changes
▪ Updating documentation to reflect process and system changes
▪ Training in any new processes
▪ Sending communications of the system update
▪ Managing the next release through the development, test, and
production environments
▪ Refreshing the support environments with the latest
production environment
This section concentrated on the tasks and activities that your support
organization may need to cover. For details on operational consider-
ations, refer to Chapter 20, “Service the solution.”
The following is a typical set of roles (depending on the size and
complexity of the implementation, sometimes multiple roles may be
filled by a single individual):

Fig. 21-4: Executive sponsors provide executive guidance, set priorities,
authorize budgets, clear roadblocks, and act as a secondary
escalation point.
by analyzing the scope of the requirements for the support organiza-
tion (as discussed earlier in this chapter) as well as the size, complexity,
functional spread, and geographical (time zone, language, local data
regulations) distribution of the implementation.
Consider the full set of tasks and activities in scope for the support
organization (as discussed earlier) and map these to the various roles
and resolving authorities over the full lifecycle of a support job or request
so that no gaps appear in the flow. You can use this to make sure that the
specific responsibilities of all the roles and resolving authorities can be
mapped to agreements and service-level expectations.
For internal parties, this may mean defining budget and resource splits
and less formal service delivery agreements. For third parties, that may
mean formal contracts with formal SLAs. Mismatched expectations
between the customer and a third-party support provider in the mid-
dle of dealing with high-priority issues are not uncommon. A contract
created based on a deeper understanding of the expected tasks and
service expectations is much more likely to avoid misaligned expecta-
tions and provide a faster path through the process.
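Formal SLAs only help if compliance is measurable from day one. As a minimal sketch (the response-time targets shown are assumptions for illustration, not values from the guide), a check could look like this:

```python
from datetime import datetime, timedelta

# Hypothetical first-response targets per priority; real values are contractual.
SLA_TARGETS = {
    "P1": timedelta(hours=1),
    "P2": timedelta(hours=4),
    "P3": timedelta(days=1),
}

def met_sla(priority: str, opened: datetime, first_response: datetime) -> bool:
    """True when the first response arrived within the target for that priority."""
    return first_response - opened <= SLA_TARGETS[priority]

opened = datetime(2023, 6, 1, 9, 0)
print(met_sla("P1", opened, datetime(2023, 6, 1, 9, 45)))  # True: 45 min within 1 h
print(met_sla("P1", opened, datetime(2023, 6, 1, 11, 0)))  # False: 2 h exceeds 1 h
```

Agreeing these thresholds, and how they are measured, before a high-priority incident is exactly the kind of shared understanding the contract discussion above is meant to produce.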
Support operations

The preparation and design work allows you to provide the level of
support expected by the business. There are many aspects to consider
when planning for the practicalities of operating the support services.
Preparation for support is necessary, but not sufficient. The support
team also needs to learn and practice to ensure they are ready for the
extra pressures of the period immediately following a go live. The next
section explores this in more detail.
Transition

Support organizations for business applications are rarely fully formed
on the first day of go live. You normally have a transition period from
project mode to support mode. However, the transition period often
starts later than it ideally should, and the quality of the transition is
often not sufficient to provide the level of service that is expected. This
results in poor user experience, frustration from the support team, and
If, however, the existing SMEs supporting the legacy system are
expected to support the new system, consider formalizing their in-
volvement in the project during the Implement phase, from the early
build steps and especially in shadowing the business functional leaders
at critical points in design and software playbacks.
If the SMEs aren't being exposed regularly and extensively to the
system during its build, a gap will likely appear in their understanding
and ability to support, which means increased reliance on the partner
functional consultants for support (and testing).

Furthermore, consider involving the support business process roles
in the formal testing phases as authors of some of the test cases or
in conducting the testing. Other opportunities to get the team to
directly experience using the new system can come from creating task
recordings and task guides in Dynamics 365 Finance and Supply Chain
Management applications, or other forms of training documents.
Most organizations run lean support teams, which doesn’t offer many
opportunities for the team to participate in the transition activities we
described without explicitly planning these activities so that the sup-
port team gets time away from their day-to-day operational duties to
participate fully during the project implementation.
Requirements management

When the application is in production, it doesn't mean that the system
solution can now be considered static for the next few years. The
solution will need to deal with new requirements for several reasons:
▪ Some projects implement a minimum viable product (MVP) on
their first go live with the goal to incrementally add the lower-priority
requirements over time
▪ Some projects have planned multiple rollouts that may impact the
already live areas
▪ Some changes to Dynamics 365 are driven by changes in connected
systems in the enterprise
▪ Businesses need to react to the changing world around them
▪ In a modern cloud SaaS world, incremental functional and technical
updates from Microsoft and ISVs help keep the customer's solution
secure and updated

In any case, the support teams are often at the front line of the
feedback and requests for new requirements. At the very least, they
need a way to capture and channel the new and emerging
requirements to the right parties.
Change management
We saw in the previous section that the solution is unlikely to remain
static for long, and the solution functionality can change because of
new and backlog requirements.
Some of these changes can be made within the agreed change control
process enforced and facilitated by the system itself. In other cases,
proposed changes need to be reviewed, prioritized, and approved by
business or IT stakeholders and any other relevant parties.
Hypercare
Most projects define a short period of time after go live in which
Conclusion
During operational use of the Dynamics 365 system, the support
operations are expected to function efficiently and evolve alongside
the expanding use of the system. For that to happen smoothly, the
preparation and definition of the support operating model are
essential precursors.
Use the system update cycles section, including managing new re-
quirements and changes, to define the means by which the Dynamics
solution can stay updated and evolve with the additional and im-
proved features. This should deliver a detailed definition of the tasks,
the roles responsible, and the standards expected to continuously
improve the business value from the Dynamics 365 system. This is im-
portant to ensure that the support organization can keep the current
operational system functioning optimally while also working in parallel
on the next update.
support model that can evolve as needed.

Finally, transition guidelines can help you make sure that the transition
from project to support happens incrementally and through practical
experience of the system during implementation. We also encourage
you to validate the readiness of your support processes and
organization prior to go live.
Operating considerations
The nature and details of the support tasks are influenced by the spe-
cifics of the business application because they are so embedded in the
core business. The details of the support operations are further affect-
ed by the details of the circumstances of the business being supported.
We discuss some of the circumstances next.
All of these topics have a bearing on the operational patterns that the
team needs to support.
Avoid delays by defining access policies for partner resources.

For example, if the policy only allows for partners to view anonymized
customer data, having an automated way to copy the production
system that includes anonymization will help reduce delays when
troubleshooting or testing.
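The anonymization step in such a copy pipeline can be very simple. The sketch below masks sensitive fields with a stable, irreversible token so that relationships between records survive while identities do not; the field names and record shape are invented for illustration:

```python
import hashlib

SENSITIVE_FIELDS = {"name", "email", "phone"}  # assumed data classification

def anonymize(record: dict) -> dict:
    """Replace sensitive values with a stable hash-derived token.

    The same input always yields the same token, so joins across copied
    tables still work, but the original values cannot be recovered."""
    out = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            token = hashlib.sha256(str(value).encode()).hexdigest()[:10]
            out[field] = f"anon-{token}"
        else:
            out[field] = value
    return out

customer = {"id": 42, "name": "Contoso Contact",
            "email": "a@contoso.com", "credit_limit": 5000}
masked = anonymize(customer)
print(masked["id"], masked["credit_limit"])  # non-sensitive fields unchanged
print(masked["name"].startswith("anon-"))    # sensitive field masked
```

In practice the classification of which fields are sensitive would come from the data classification policies discussed earlier in this chapter, not from a hard-coded set.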
Checklist
During the Prepare phase, when some members of the support team
were asked to assist with the testing, the team didn’t feel ready to par-
ticipate until they had been through the training, which was scheduled
after the testing. The project SMEs didn’t feel they could conduct the
internal training because they also had very little hands-on experience
in the working system.
When UAT was complete, the project team had other unplanned
activities that were considered to be higher priority, and so the training
and handover to the support team was reduced to two days.
The go live went ahead on schedule, but the support team struggled
to adequately support the business. The initial assumptions that the
support team would just pick up the new system support were found
to be mistaken, because the underlying processes had also undergone
significant changes. The support team hadn’t been informed about
all the business process decisions and weren’t familiar with the new
control and approval requirements configured in the Dynamics 365
application. The shortened training and handover were conducted
without reference to the significantly changed business processes or to
the new enterprise architecture, so didn’t provide the necessary context
to support a production system.
The support operating model had been revised, in theory, from the
distributed, system-level team structure to be more business process
In the previous support operating model, all queries and issues came
directly (and with little structure) to an individual support team member,
and the full lifecycle of the issue was performed with little external visi-
bility. In the enterprise-level support, even with the first limited rollout,
the support team was receiving large numbers of administrative queries,
which reduced time to address the more business-critical tickets.
The project team had planned to start working full-time on the next
phase of the project after only two weeks of hypercare. The budget
hours for the implementation partner hypercare support were planned
to last four to six weeks, but were exhausted within two weeks or so
because the support team needed far more of their time. Many SLAs
were missed in the early weeks, and the business was forced to formally
extend the hypercare period for both the project team and the imple-
mentation partner because trying to get the timely attention of the
implementation teams was difficult given that they had other planned
priorities for the next project phase.
After the P1 and P2 support tickets were resolved and the volume of
new tickets was down to a manageable level, the post-go live ret-
rospective review made the following key recommendations to be
enacted as soon as possible, to improve the support outcomes during
the next rollout phase:
▪ The scope of the activities, tasks, and actions expected from the
support team should be explicitly defined, in the context of the
new system, processes, people, and architecture.
This story exemplifies the adage “Poor service is always more expensive
than good service.”
Faisal Mohamood
General Manager, FastTrack for Dynamics 365
Acknowledgments
This book celebrates Microsoft’s desire to share the collective thinking
and experience of our FastTrack for Dynamics 365 team, which is
currently made up of more than 140 solution architects who serve
Microsoft Dynamics 365 customers and partners around the world.
For being the kind of leader who purposely carves out time in your
busy schedule to think about, write about, and share your vision for
Dynamics 365 and the Microsoft Power Platform, thank you, James
Phillips. We hope that the way you’ve inspired us comes through on
every page of this book.
For your tireless efforts in making Dynamics 365 a better product and
for making us better solution architects, thank you, Muhammad Alam.
We’re also grateful for the time you gave us to round out the content
in Chapter 1.
the help of others. Singh, Dave Burman, Jeremy Freid, Jesper Livbjerg,
Matthew Bogan, Pedro Sacramento, Rich Black, Richa Jain, Satish
Panwar, Saurabh Kuchhal, Seth Kircher, Tak Sato, Veselina Eneva, and
Vidyasagar Chitchula. At the start of this journey, you were given the
task to write the truth into these chapters, to trust what you know, and
to make sure your message is valuable to the reader. That's exactly
what you all did, in addition to your day jobs. Now please go share this
book you wrote with those closest to you.

The wise and the confident acknowledge this help with gratitude.
-Alfred North Whitehead
Also, a special thanks to Alok Singh and Pedro Sacramento for their
insistence that the checklists at the end of most chapters be included
in the first version of this book. You went above and beyond in your
efforts, and we’re certain that readers will find the checklists helpful
and provocative.
To the book’s many reviewers, including Ajay Kumar Singh, Dan Ogren,
Faisal Mohamood, Gokul Ramesh, Gregg Barker, Jason Kim, Jayme
Pechan, Matt Sheard, Paul Langowski, Praveen Kumar Srinivasan
Rajendiran, Swamy Narayana, Timo Gossen, and Umran Hasan—you
met us at the edge of exhaustion and pressed us to go on and to make
a better book. Thank you.
To Rishi Manocha for your critical eye and tireless assistance with all
things related to the marketing of this book. Thank you.
Appendix
Implementation Guide: Success by Design: Appendix 657
Phase: Initiate

Regardless of which implementation methodology you use, here are
the typical implementation deliverables, activities, and recommended
roles that need to be performed by the project team during the
Initiate phase of the project.
Test strategy / Test plan
Plan the testing strategy. Define test activities such as unit testing,
functional, user experience (UX), integration, system integration
testing (SIT), performance, and time and motion study.
Roles: Test lead, Test consultant, Functional consultant, Solution
architect, Customer IT team, Customer business consultants
Data strategy / Data migration plan and execution strategy
Define data that needs to be migrated to Dynamics 365 apps, along
with data volumes, approach, and tools to be used. Define a high-level
plan of data migration activities, such as data extraction,
transformation, and load; migration runs; initial and incremental
migrations; data validations and data verification; and production
migration.
Roles: Customer IT architect, Solution architect, Functional consultant,
Data migration lead/architect
Security strategy / Security strategy
Outline the strategy, including:
▪ Scope
▪ Key requirements
▪ Approach
▪ Technology and tools
Roles: Customer information security group (ISG) team, Solution
architect, Identity SME
Security strategy / Federation, SSO
Define the needs for single sign-on and federation with other identity
providers.
Roles: Customer infrastructure SME, Customer IT architect

Security strategy / Security roles
Define all the roles required and assign these roles to business process
activities so as to map them to business roles/personas.
Security strategy (continued) / Azure Active Directory (AAD) access
management
Define the requirements for Active Directory or Azure Active
Directory, identity integration needs, and cloud identities.
Roles: Customer information security group (ISG) team, Solution
architect, Identity SME

Security strategy (continued) / Information security
Elicit security, privacy, and compliance requirements; regulatory
needs; and audit events.
Roles: Customer infrastructure SME, Customer IT architect
ALM strategy (continued) / Release process
Define the processes for release management of the solution.
Roles: DevOps consultant, Customer IT architect, Solution architect

ALM strategy (continued) / ISV ALM
Define the process for managing the DevOps process for other
components and solutions (such as Azure/non-Azure components)
that make up the solution.
Roles: Technical lead, Customer IT team
Program strategy / Project scope and requirements list signoff
Following all related activities, the project scope and RTM are to be
signed off on.
Roles: Customer PMO, Customer business stakeholders

Program strategy / Cutover strategy and plan
Complete this deliverable by the start of the build.
Roles: Project manager, Solution architect

Program strategy / Solution design signoff
Once the design is documented, it needs to be signed off on. All
changes that are required follow review, approval, and version control.
Roles: Functional consultant/architect
Test strategy / Develop unit test cases
Build unit test cases for the defined functionality to ensure TDD
(test-driven development) practices are adhered to. These unit test
cases should be executed as part of CI/CD pipelines. Evaluate
migration in Dynamics vs. data warehouse storage.
Roles: Test lead, Test consultant(s), DevOps consultant, Technical
consultant(s), Performance test consultant
Security strategy (continued) / Security roles (F&O)
When a custom role needs to be created, a good practice is to not
modify the standard roles because doing so will impact the
continuous updates from Microsoft. Apply strict change management
control.
Roles: Technical lead, Solution architect, Functional consultant,
Customer IT architect, Customer information security group (ISG)
Testing strategy / UAT test cases
Refine UAT test cases used for validating the solution.
Roles: Customer business users (UAT test users)

Testing strategy / UAT test report
Prepare and present the UAT test execution report that baselines the
UAT signoffs.
Roles: Test lead, Functional consultant, Test consultant(s)
Testing strategy (continued) / Performance benchmark testing results
These results help establish the confidence to go live for the
anticipated concurrent loads and transactions.
Roles: Performance test lead, Solution architect, Customer IT architect
Data strategy / Data migration execution results/signoff
The team completes the last cutover migration of the data required
following the shutdown of the incumbent systems before opening
Dynamics 365 for go-live.
Roles: Tech lead, Technical consultant(s), Solution architect, Customer
IT architect
ALM strategy / Escrow
Escrow all the project artifacts collected during the Initiate,
Implement, and Prepare phases.
Roles: Tech lead, Technical consultant(s), Solution architect, Customer
IT architect

ALM strategy / Knowledge transition documents
Register the documents and maintain all pre- and post-deployment
activities that the support team should consider.
User adoption/usage / User adoption document (user interviews,
feedback loops)
Document the solution usage patterns and user interviews, along with
strategies to improve the adoption of the system.
Roles: Customer business users, Customer IT architect, PMO